Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.

After that, I installed MetalLB as the LoadBalancer using the following commands:

$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220

kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
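(Side note: the ConfigMap-based configuration above only applies to MetalLB up to v0.12.x; from v0.13 on, MetalLB is configured through CRDs instead. A roughly equivalent sketch for the newer versions, reusing the same pool name and address range, would be:)

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 10.100.170.200-10.100.170.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement           # layer2 announcement for the pool above
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```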

I created my YAML files as follows:

myapp-tst-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN

myapp-tst-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None

myapp-tst-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules: 
    - http:
        paths:
          - path: /
            backend:
              serviceName: myapp-tst-service
              servicePort: myapp-tst-port
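(The manifest above uses the extensions/v1beta1 API, which was removed in Kubernetes v1.22. On newer clusters an equivalent manifest, keeping the same names and affinity annotations, would look roughly like this sketch:)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx       # replaces the kubernetes.io/ingress.class annotation
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-tst-service
                port:
                  name: myapp-tst-port
```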

I ran kubectl apply -f for all three files, and these are my results:

kubectl get all -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP          NODE               NOMINATED NODE   READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk   1/1     Running   0          4m53s   10.36.0.1   bcc-tst-docker02   <none>           <none>
pod/myapp-tst-deployment-54474cd74-pwlr8   1/1     Running   0          4m53s   10.44.0.2   bca-tst-docker01   <none>           <none>

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE     SELECTOR
service/myapp-tst-service   LoadBalancer   10.110.184.237   10.100.170.15   80:30080/TCP   4m48s   app=myapp-tst,tier=backend
service/kubernetes        ClusterIP      10.96.0.1        <none>          443/TCP        6d22h   <none>

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                  SELECTOR
deployment.apps/myapp-tst-deployment   2/2     2            2           4m53s   myapp-tst      mferraramiki/myapp-test   app=myapp-tst

NAME                                           DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                  SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74   2         2         2       4m53s   myapp-tst      myapp/myapp-test   app=myapp-tst,pod-template-hash=54474cd74

But when I try to connect using the LB external IP (10.100.170.15), the request is routed (in the same browser) to one pod, and if I refresh or open a new tab (on the same URL) the request is routed to the other pod.

I need that, when a user types the URL in the browser, he stays connected to one specific pod for the whole session instead of being switched to other pods.

Is it possible to solve this? In my VM setup I solved it with sticky sessions; how can I enable them in the LB or in some Kubernetes component?

In the myapp-tst-service.yaml file, sessionAffinity is set to None.

You should try setting it to ClientIP.

From the page https://kubernetes.io/docs/concepts/services-networking/service/:

"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (The default value is 10800, which works out to be 3 hours.)"
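Applied to the Service above, the suggested change would look like this (a sketch; timeoutSeconds is optional and is shown here with its default value):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: ClientIP       # pin each client IP to one pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800       # max sticky time; default is 10800 (3 hours)
```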