Bare-Metal K8s: How to preserve source IP of client and direct traffic to nginx replica on current server

I'd like to ask for your help:

The cluster entry point for http/https is NGINX (quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0) running as a DaemonSet.
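For context, it is deployed roughly like the following trimmed sketch (the namespace, labels and hostPort choices are illustrative, not my exact manifest):

```yaml
# Trimmed sketch of the ingress controller DaemonSet; namespace, labels and
# hostPort usage are illustrative placeholders, not the exact manifest in use.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
        args:
        - /nginx-ingress-controller
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
```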

I want to achieve two things:

  1. Preserve the source IP of the client
  2. Direct traffic to the nginx replica on the current server (so if a request goes to server A, which is listed as the external IP address, the nginx on node A should handle it)

Question:

I'm considering MetalLB, but the layer2 mode would create a bottleneck (there is high traffic on the cluster). I don't know whether BGP mode would solve this problem.

You can preserve the source IP of the client by setting externalTrafficPolicy to Local, which proxies requests only to local endpoints. This is explained in Source IP for Services with Type=NodePort.
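As a minimal sketch (the service name, selector and ports are placeholders for your ingress controller's service), the policy is set on the Service itself:

```yaml
# Illustrative Service for the ingress controller; name, selector and ports
# are placeholders. externalTrafficPolicy: Local keeps the client source IP
# and routes only to pods on the node that received the traffic.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer          # NodePort services also honour externalTrafficPolicy
  externalTrafficPolicy: Local
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```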

Also take a look at Using Source IP.

In the case of MetalLB:

MetalLB respects the service’s externalTrafficPolicy option, and implements two different announcement modes depending on what policy you select. If you’re familiar with Google Cloud’s Kubernetes load balancers, you can probably skip this section: MetalLB’s behaviors and tradeoffs are identical.

“Local” traffic policy

With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy, there is no “horizontal” traffic flow between nodes.

This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.

The downside of this policy is that it treats each cluster node as one “unit” of load-balancing, regardless of how many of the service’s pods are running on that node. This may result in traffic imbalances to your pods.

For example, if your service has 2 pods running on node A and one pod running on node B, the Local traffic policy will send 50% of the service’s traffic to each node. Node A will split the traffic it receives evenly between its two pods, so the final per-pod load distribution is 25% for each of node A’s pods, and 50% for node B’s pod. In contrast, if you used the Cluster traffic policy, each pod would receive 33% of the overall traffic.

In general, when using the Local traffic policy, it’s recommended to finely control the mapping of your pods to nodes, for example using node anti-affinity, so that an even traffic split across nodes translates to an even traffic split across pods.
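To get the even per-pod split described above, one option is to spread the replicas across nodes with pod anti-affinity. This is only a sketch (the app: ingress-nginx label and topology key are assumptions), and it mainly applies if you run the controller as a Deployment; a DaemonSet already schedules at most one controller pod per node:

```yaml
# Sketch of spreading pods across nodes with pod anti-affinity; labels and
# the topology key are illustrative. Not needed for a DaemonSet, which
# already runs at most one pod per node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: ingress-nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
```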

You also need to take into account the limitations of MetalLB's BGP routing protocol support.
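For reference, a minimal sketch of a BGP-mode MetalLB configuration (this uses the legacy ConfigMap format; the peer address, ASNs and address pool are placeholders you would replace with your router's values):

```yaml
# Sketch of a BGP-mode MetalLB configuration (legacy ConfigMap format).
# Peer address, ASNs and the address pool are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1      # upstream BGP router
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.0.2.0/24              # pool to allocate LoadBalancer IPs from
```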

Also take a look at this blog post: Using MetalLb with Kind.