Load Balancer External IP is the same as Internal IP of node in K3s cluster
I have set up a service in my k3s cluster using the following:
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 9012
      targetPort: 9011
      protocol: TCP
kubectl get svc -n mynamespace
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                  PORT(S)          AGE
minio           ClusterIP      None            <none>                                       9011/TCP         42m
minio-service   LoadBalancer   10.32.178.112   192.168.40.74,192.168.40.88,192.168.40.170   9012:32296/TCP   42m
kubectl describe svc myservice -n mynamespace
Name: myservice
Namespace: mynamespace
Labels: app=myapp
Annotations: <none>
Selector: app=myapp
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.32.178.112
IPs: 10.32.178.112
LoadBalancer Ingress: 192.168.40.74, 192.168.40.88, 192.168.40.170
Port: <unset> 9012/TCP
TargetPort: 9011/TCP
NodePort: <unset> 32296/TCP
Endpoints: 10.42.10.43:9011,10.42.10.44:9011
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
From the above I assumed that I could access the MinIO console at http://192.168.40.74:9012, but that is not possible.
Error message:
curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out
Also, if I run
kubectl get node -o wide -n mynamespace
NAME           STATUS   ROLES                  AGE     VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
antonis-dell   Ready    control-plane,master   6d      v1.21.2+k3s1   192.168.40.74    <none>        Ubuntu 18.04.1 LTS               4.15.0-147-generic   containerd://1.4.4-k3s2
knodeb         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.88    <none>        Raspbian GNU/Linux 10 (buster)   5.4.51-v7l+          containerd://1.4.4-k3s2
knodea         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.170   <none>        Raspbian GNU/Linux 10 (buster)   5.10.17-v7l+         containerd://1.4.4-k3s2
As shown above, the internal IPs of the nodes are the same as the external IP of the load balancer. Am I doing something wrong?
Initial K3s cluster configuration
To reproduce the environment, I created a two-node k3s cluster with the following steps:
Install the k3s control plane on the desired host:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -
Verify that it works:
k3s kubectl get nodes -o wide
To add a worker node, run this command on the worker node:
curl -sfL https://get.k3s.io | K3S_URL=https://control-plane:6443 K3S_TOKEN=mynodetoken sh -
where K3S_URL is the control-plane URL (an IP address or a DNS name), and K3S_TOKEN can be obtained with:
sudo cat /var/lib/rancher/k3s/server/node-token
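For example, combining the two, the join command on this test cluster's worker would look something like the following (the token value is a placeholder, substitute the output of the command above):
curl -sfL https://get.k3s.io | K3S_URL=https://10.186.0.17:6443 K3S_TOKEN=<token-from-node-token-file> sh -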
You should now have a running cluster:
$ k3s kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
k3s-cluster    Ready    control-plane,master   27m   v1.21.2+k3s1   10.186.0.17   <none>        Ubuntu 18.04.5 LTS   5.4.0-1046-gcp   containerd://1.4.4-k3s2
k3s-worker-1   Ready    <none>                 18m   v1.21.2+k3s1   10.186.0.18   <none>        Ubuntu 18.04.5 LTS   5.4.0-1046-gcp   containerd://1.4.4-k3s2
Reproduction and testing
I created a simple deployment based on the nginx image:
$ k3s kubectl create deploy nginx --image=nginx
and exposed it:
$ k3s kubectl expose deploy nginx --type=LoadBalancer --port=8080 --target-port=80
This means the nginx container in the pod is listening on port 80 and the service is reachable inside the cluster on port 8080:
$ k3s kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP               PORT(S)          AGE   SELECTOR
kubernetes   ClusterIP      10.43.0.1     <none>                    443/TCP          29m   <none>
nginx        LoadBalancer   10.43.169.6   10.186.0.17,10.186.0.18   8080:31762/TCP   25m   app=nginx
The service is reachable via a node IP or localhost on port 8080, or via the NodePort.
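A quick way to verify both paths, using the addresses and ports from the output above (the NodePort 31762 was assigned automatically and will differ in other clusters):
$ curl http://10.186.0.17:8080    # via the Klipper load balancer (node IP + service port)
$ curl http://10.186.0.17:31762   # via the NodePort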
Considering the error you got, curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out, the service is configured but is not responding correctly (it is not a 404 from an ingress and not a connection refused).
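To narrow down where the timeout occurs, it can help to confirm that the service has endpoints and then bypass the load-balancer layer with a port-forward (a sketch using the service name and namespace from your question):
kubectl -n mynamespace get endpoints myservice
kubectl -n mynamespace port-forward svc/myservice 9012:9012
curl http://localhost:9012
If the port-forwarded curl works while the node-IP curl still times out, the problem is most likely in front of the pods (the svclb layer or a host firewall) rather than in the workload itself.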
Answer to the second question: the load balancer
Starting from the rancher k3s official documentation about LoadBalancer, the Klipper Load Balancer is what is used. From their github repo:
This is the runtime image for the integrated service load balancer in
klipper. This works by using a host port for each service load
balancer and setting up iptables to forward the request to the cluster
IP.
From how the service loadbalancer works:
K3s creates a controller that creates a Pod for the service load
balancer, which is a Kubernetes object of kind Service.
For each service load balancer, a DaemonSet is created. The DaemonSet
creates a pod with the svc prefix on each node.
The Service LB controller listens for other Kubernetes Services. After
it finds a Service, it creates a proxy Pod for the service using a
DaemonSet on all of the nodes. This Pod becomes a proxy to the other
Service, so that for example, requests coming to port 8000 on a node
could be routed to your workload on port 8888.
If the Service LB runs on a node that has an external IP, it uses the
external IP.
In other words, yes, the load balancer is expected to have the same IP address as the node's internal-IP. Every service of type LoadBalancer in a k3s cluster gets its own DaemonSet on every node in order to serve traffic directly to the original service.
For example, I created a second deployment nginx-two and exposed it on port 8090. You can see that there are two pods from the two different deployments and four pods acting as load balancers (note the names starting with svclb-):
$ k3s kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-7m4v4       1/1     Running   0          47m   10.42.0.9    k3s-cluster    <none>           <none>
svclb-nginx-jc4rz            1/1     Running   0          45m   10.42.0.10   k3s-cluster    <none>           <none>
svclb-nginx-qqmvk            1/1     Running   0          39m   10.42.1.3    k3s-worker-1   <none>           <none>
nginx-two-6fb6885597-8bv2w   1/1     Running   0          38s   10.42.1.4    k3s-worker-1   <none>           <none>
svclb-nginx-two-rm594        1/1     Running   0          2s    10.42.0.11   k3s-cluster    <none>           <none>
svclb-nginx-two-hbdc7        1/1     Running   0          2s    10.42.1.5    k3s-worker-1   <none>           <none>
Both services have the same EXTERNAL-IPs:
$ k3s kubectl get svc
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP               PORT(S)          AGE
nginx       LoadBalancer   10.43.169.6    10.186.0.17,10.186.0.18   8080:31762/TCP   50m
nginx-two   LoadBalancer   10.43.118.82   10.186.0.17,10.186.0.18   8090:31780/TCP   4m44s
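To inspect the objects behind those svclb pods, you can also list the DaemonSets themselves; on this k3s version they are created in the same namespace as the service (the default namespace here), and the svclb-nginx name below is inferred from the pod names above:
$ k3s kubectl get daemonsets
$ k3s kubectl describe daemonset svclb-nginx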