GCP VM in same region not able to Ping Internal HTTPS Load Balancer IP created with GKE internal LB ingress
I deployed a GKE cluster at version 1.20.10-gke.1600. I created an internal ingress using GCE and assigned an internal IP to it. However, I am not able to ping this internal ingress IP from a VM in the same region and network; pinging the external ingress works fine.
I read the documentation below, which says you cannot ping an internal TCP/UDP load balancer because it is not deployed as a network device. However, I did not see anything about the internal HTTPS load balancer.
ping 10.128.0.174
Pinging 10.128.0.174 with 32 bytes of data:
Request timed out.
Ping statistics for 10.128.0.174:
Packets: Sent = 1, Received = 0, Lost = 1 (100% loss)
The question is: why am I not able to ping my internal LB ingress IP? I am trying to ping from a VM in the same region and network. curl to the internal ingress IP works, but ping does not.
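For reference, the check that does succeed from the same VM looks roughly like this (the -k flag and port are assumptions based on my HTTPS ingress; adjust as needed):

curl -vk https://10.128.0.174/
# returns an HTTP response from the backends, while the ping above times out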
The cluster IP is just (as noted) a virtual device that will not respond to ping. This is expected behavior.
Regarding pinging the LB's internal address, the documentation you linked states explicitly:
This test demonstrates an expected behavior: You cannot ping the IP address of the load balancer. This is because internal TCP/UDP load balancers are implemented in virtual network programming — they are not separate devices.
And then explains why:
Internal TCP/UDP Load Balancing is implemented using virtual network programming and VM configuration in the guest OS. On Linux VMs, the Linux Guest Environment performs the local configuration by installing a route in the guest OS routing table. Because of this local route, traffic to the IP address of the load balancer stays on the load balanced VM itself. (This local route is different from the routes in the VPC network.)
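As a concrete illustration of that local route, on a Linux VM behind an internal TCP/UDP load balancer you can see it in the guest's local routing table (the IP below reuses your ILB address purely as an example):

# On a load-balanced Linux backend VM, the guest environment installs the
# forwarding-rule IP as a *local* route, so traffic to it never leaves the VM.
ip route show table local | grep 10.128.0.174
# typical output: "local 10.128.0.174 dev eth0 ..." (exact form varies by image)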
So, for example, if you are trying to set up some kind of custom health check, be aware that "pinging" the LB's internal address from inside the cluster is also unreliable:
Don't rely on making requests to an internal TCP/UDP load balancer from a VM being load balanced (in the backend service for that load balancer). A request is always sent to the VM that makes the request, and health check information is ignored. Further, the backend can respond to traffic sent using protocols and destination ports other than those configured on the load balancer's internal forwarding rule.
And furthermore:
This default behavior doesn't apply when the backend VM that sends the request has an --next-hop-ilb route with a next hop destination that is its own load balanced IP address. When the VM targets the IP address specified in the route, the request can be answered by another load balanced VM.
You can, for example, create a destination route of 192.168.1.0/24 with a --next-hop-ilb of 10.20.1.1.
A VM that is behind the load balancer can then target 192.168.1.1. Because the address isn't in the local routing table, it is sent out the VM for Google Cloud routes to be applicable. Assuming no other routes are applicable with higher priority, the --next-hop-ilb route is chosen.
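For what it's worth, the route from that example could be created with gcloud roughly like this (the route name and network are placeholders; 10.20.1.1 is the ILB forwarding-rule IP from the quote):

# Hypothetical names; --next-hop-ilb also accepts a forwarding-rule name,
# in which case --next-hop-ilb-region must be supplied as well.
gcloud compute routes create ilb-hairpin-route \
    --network=default \
    --destination-range=192.168.1.0/24 \
    --next-hop-ilb=10.20.1.1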
Finally, take a look at the table of supported protocols: ICMP is supported only by the external TCP/UDP load balancer.
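If you want to confirm what your own LB accepts, one way is to inspect the forwarding rule that GKE created for the internal ingress (the rule name and region below are placeholders):

# List the forwarding rules of internal (Envoy-based) HTTP(S) load balancers:
gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL_MANAGED"
# Describe one of them; an internal HTTPS LB reports protocol TCP, so ICMP
# (ping) to its address is simply not a supported data path.
gcloud compute forwarding-rules describe my-ilb-forwarding-rule \
    --region=us-central1 \
    --format="value(loadBalancingScheme,IPProtocol)"

Reachability should therefore be tested at the TCP/HTTP level (curl, as you already did), not with ICMP.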