Kubernetes: Unable to ping pod IP from another node

Pod IPs can only be pinged from the node the pod is running on.

When I try to ping a pod IP from another node/worker, the ping gets no response.

master2@master2:~$ kubectl get pods --namespace=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6ff8cbb789-lxwqq   1/1     Running   0          6d21h   192.168.180.2     master2   <none>           <none>
calico-node-4mnfk                          1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
calico-node-c4rjb                          1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
calico-node-dgqwx                          1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
calico-node-fhtvz                          1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
calico-node-mhd7w                          1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
coredns-8b5d5b85f-fjq72                    1/1     Running   0          45m     192.168.135.11    node3     <none>           <none>
coredns-8b5d5b85f-hgg94                    1/1     Running   0          45m     192.168.166.136   node1     <none>           <none>
etcd-master1                               1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
etcd-master2                               1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-apiserver-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-apiserver-master2                     1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-controller-manager-master1            1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-controller-manager-master2            1/1     Running   2          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-66nxz                           1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-fnrrz                           1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-proxy-lq5xp                           1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
kube-proxy-vxhwm                           1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
kube-proxy-zgwzq                           1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
kube-scheduler-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-scheduler-master2                     1/1     Running   1          6d21h   10.10.41.159      master2   <none>           <none>
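As a quick check (not part of the original output; this assumes calicoctl is installed on the node), the Calico BGP mesh status shows which peer addresses the nodes are using to talk to each other:

node3@node3:~$ sudo calicoctl node status

The BGP status table it prints lists the peer addresses and their state; if the peers are not the 10.10.41.x addresses shown above, the nodes are peering over a different interface than expected.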

When I try to ping the pod with IP 192.168.104.8 (running on node2) from node3, the ping fails with 100% packet loss.

master1@master1:~/cluster$ sudo kubectl get pods  -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
contentms-cb475f569-t54c2    1/1     Running   0          6d21h   192.168.104.1    node2   <none>           <none>
nav-6f67d5bd79-9khmm         1/1     Running   0          6d8h    192.168.104.8    node2   <none>           <none>
react                        1/1     Running   0          7m24s   192.168.135.12   node3   <none>           <none>
statistics-5668cd7dd-thqdf   1/1     Running   0          6d15h   192.168.104.4    node2   <none>           <none>
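For reference, the behaviour described above looks roughly like this when reproduced from the shells of the two nodes (pod IP taken from the output above):

node2@node2:~$ ping -c 4 192.168.104.8   # pod runs on node2: replies arrive
node3@node3:~$ ping -c 4 192.168.104.8   # from node3: 100% packet loss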

It turned out to be a routing problem.

Each node had two IPs, one on eth0 and one on eth1.

In the routing table, the routes to the other nodes' pod networks were using the eth1 IPs instead of the eth0 IPs.
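This shows up in the routes Calico programs for the other nodes' pod CIDRs (a sketch of the check; the interface names and addresses here are the ones from this setup):

node3@node3:~$ ip route | grep 192.168.104
# expect something like: 192.168.104.0/26 via <node2 address> dev tunl0 proto bird
# here the "via" address was node2's eth1 IP rather than its eth0 IP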

I disabled the eth1 IPs and everything started working.
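As an alternative to disabling eth1 entirely, Calico can be told explicitly which interface to take its node address from via the IP_AUTODETECTION_METHOD environment variable on the calico-node DaemonSet. This is a sketch; interface=eth0 assumes eth0 is the interface carrying the 10.10.41.x addresses:

master1@master1:~$ kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0

After this change the calico-node pods restart and should advertise routes over the eth0 addresses, so the second interface can stay enabled.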