Kube-proxy with IPVS mode doesn't keep a connection
I have a k8s cluster with kube-proxy in ipvs mode and a database cluster outside of k8s.
To access the database cluster, I created Service and Endpoints resources:
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: database
subsets:
  - addresses:
      - ip: 192.168.255.9
      - ip: 192.168.189.76
    ports:
      - port: 3306
        protocol: TCP
Then I run a pod with a MySQL client and try to connect to this service:
mysql -u root -p password -h database
In the network dump I see a successful TCP handshake and a successful MySQL connection. On the node where the pod is running (hereinafter the worker node), I see the following established connection:
sudo netstat-nat -n | grep 3306
tcp 10.0.198.178:52642 192.168.189.76:3306 ESTABLISHED
Then I send some test queries from the pod in the open MySQL session. They are all sent to the same database node. This is the expected behavior.
Then I monitor the established connections on the worker node. After about 5 minutes, the established connection to the database node is lost.
But in the network dump I see that TCP finalization packets are not sent from the worker node to the database node. As a result, I get leaked connections on the database node.
How does ipvs decide to drop an established connection? If ipvs drops the connection, why doesn't it finalize the TCP connection properly? Is this a bug, or do I misunderstand the ipvs mode in kube-proxy?
Kube-proxy and Kubernetes don't help to balance persistent connections.
The whole concept of long-lived connections in Kubernetes is described in detail in this article:
Kubernetes doesn't load balance long-lived connections, and some Pods
might receive more requests than others. If you're using HTTP/2, gRPC,
RSockets, AMQP or any other long-lived connection such as a database
connection, you might want to consider client-side load balancing.
I recommend reading it in full, but overall it can be summarized as:
Kubernetes Services are designed to cover most common uses for web applications.
However, as soon as you start working with application protocols that use persistent TCP connections, such as databases, gRPC, or
WebSockets, they fall apart.
Kubernetes doesn't offer any built-in mechanism to load balance long-lived TCP connections.
Instead, you should code your application so that it can retrieve and load balance upstreams client-side.
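In practice, "client-side load balancing" for a setup like this means the client keeps its own list of backend addresses and handles connect/reconnect decisions itself, instead of relying on the database Service's ClusterIP. Below is a minimal Python sketch of that idea; it is only an illustration (not code from the article), the hard-coded backend list simply mirrors the Endpoints object from the question, and the function name, attempt count, and timeout are made up for the example. A real client would more likely refresh the list from the Kubernetes API or from the DNS records of a headless Service:

import itertools
import socket

# Backend list mirrors the Endpoints object above; in a real client this
# would typically be refreshed from the Kubernetes API or from the DNS
# records of a headless Service instead of being hard-coded.
BACKENDS = [("192.168.255.9", 3306), ("192.168.189.76", 3306)]

def connect_with_failover(backends, attempts=6, timeout=3.0):
    """Try backends in round-robin order and return the first live TCP socket."""
    rotation = itertools.cycle(backends)
    for _ in range(attempts):
        host, port = next(rotation)
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            # Enable TCP keepalives so a silently dropped long-lived connection
            # is eventually noticed by the client.
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
            return sock, (host, port)
        except OSError:
            continue  # backend unreachable, try the next one
    raise ConnectionError("no database backend reachable")

if __name__ == "__main__":
    sock, backend = connect_with_failover(BACKENDS)
    print("connected to", backend)
    # A real application would hand the chosen address to its MySQL driver
    # and run this selection logic again whenever the connection drops.
    sock.close()

The point of moving this logic client-side is that dropped or rebalanced connections are handled by the application's own retry path rather than by kube-proxy, which only picks a backend when the TCP connection is first established.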