Why does this setup with contour on kubernetes (GKE) result in 2 functioning external IPs?

I've been experimenting with contour as an alternative ingress controller on a test GKE kubernetes cluster.

Following the contour deployment docs with a few modifications, I got a working setup that I can use to test HTTP responses.

First, I created a "helloworld" pod that serves an HTTP response, exposed via a NodePort service and an ingress:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: "helloworld-http"
          image: "nginxdemos/hello:plain-text"
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - helloworld
              topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: helloworld
  sessionAffinity: None
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80

Then I created a deployment for contour, copied directly from their docs:

apiVersion: v1
kind: Namespace
metadata:
  name: heptio-contour
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contour
  namespace: heptio-contour
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: contour
  name: contour
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: contour
  replicas: 2
  template:
    metadata:
      labels:
        app: contour
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9001"
        prometheus.io/path: "/stats"
        prometheus.io/format: "prometheus"
    spec:
      containers:
      - image: docker.io/envoyproxy/envoy-alpine:v1.6.0
        name: envoy
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        command: ["envoy"]
        args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: contour
        command: ["contour"]
        args: ["serve", "--incluster"]
      initContainers:
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: envoy-initconfig
        command: ["contour"]
        args: ["bootstrap", "/config/contour.yaml"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        emptyDir: {}
      dnsPolicy: ClusterFirst
      serviceAccountName: contour
      terminationGracePeriodSeconds: 30
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: contour
              topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: contour
  type: LoadBalancer
---

The default and heptio-contour namespaces now look like this:

$ kubectl get pods,svc,ingress -n default
NAME                              READY     STATUS    RESTARTS   AGE
pod/helloworld-7ddc8c6655-6vgdw   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-92j7x   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-mlvmc   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-w5g7f   1/1       Running   0          6h

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/helloworld-svc   NodePort    10.59.240.105   <none>        80:31481/TCP   34m
service/kubernetes       ClusterIP   10.59.240.1     <none>        443/TCP        7h

NAME                                    HOSTS     ADDRESS         PORTS     AGE
ingress.extensions/helloworld-ingress   *         y.y.y.y   80        34m

$ kubectl get pods,svc,ingress -n heptio-contour
NAME                          READY     STATUS    RESTARTS   AGE
pod/contour-9d758b697-kwk85   2/2       Running   0          34m
pod/contour-9d758b697-mbh47   2/2       Running   0          34m

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
service/contour   LoadBalancer   10.59.250.54   x.x.x.x   80:30882/TCP,443:32746/TCP   34m

There are 2 publicly routable IP addresses:

curl against both public IPs returns a valid HTTP response from the helloworld pods:
# the TCP load balancer
$ curl -v x.x.x.x
* Rebuilt URL to: x.x.x.x/  
*   Trying x.x.x.x...
* TCP_NODELAY set
* Connected to x.x.x.x (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: x.x.x.x
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: envoy
< date: Mon, 07 May 2018 14:14:39 GMT
< content-type: text/plain
< content-length: 155
< expires: Mon, 07 May 2018 14:14:38 GMT
< cache-control: no-cache
< x-envoy-upstream-service-time: 1
<
Server address: 10.56.4.6:80
Server name: helloworld-7ddc8c6655-w5g7f
Date: 07/May/2018:14:14:39 +0000
URI: /
Request ID: ec3aa70e4155c396e7051dc972081c6a

# the HTTP load balancer
$ curl -v http://y.y.y.y
* Rebuilt URL to: y.y.y.y/
*   Trying y.y.y.y...
* TCP_NODELAY set
* Connected to y.y.y.y (y.y.y.y) port 80 (#0)
> GET / HTTP/1.1
> Host: y.y.y.y
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: nginx/1.13.8
< Date: Mon, 07 May 2018 14:14:24 GMT
< Content-Type: text/plain
< Content-Length: 155
< Expires: Mon, 07 May 2018 14:14:23 GMT
< Cache-Control: no-cache
< Via: 1.1 google
< 
Server address: 10.56.2.8:80
Server name: helloworld-7ddc8c6655-mlvmc
Date: 07/May/2018:14:14:24 +0000
URI: /
Request ID: 41b1151f083eaf30368cf340cfbb92fc

Is having two public IPs by design? Which one should I point clients at? Can I freely choose between the TCP and HTTP load balancer based on my preference?

You probably have the GLBC ingress controller configured (https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller). Because your Ingress carries no kubernetes.io/ingress.class annotation, GLBC claims it and provisions the second (HTTP) load balancer at y.y.y.y, in addition to the TCP load balancer created for contour's LoadBalancer service.

Can you try the following ingress definition?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "contour"
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80

If you want to be sure your traffic goes through contour, you should use the x.x.x.x IP (the TCP load balancer in front of envoy).
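If you are ever unsure which path a given IP takes, the response headers in your two curl transcripts already distinguish the load balancers. A small sketch of the check (the header strings below are copied from the transcripts in the question; the variable names are just illustrative):

```shell
#!/bin/sh
# Header lines copied from the two curl transcripts in the question.
tcp_lb_headers='server: envoy
x-envoy-upstream-service-time: 1'
http_lb_headers='Server: nginx/1.13.8
Via: 1.1 google'

# The TCP load balancer (x.x.x.x) is fronted by envoy, so contour is in the path.
echo "$tcp_lb_headers" | grep -qi '^server: envoy' && echo "x.x.x.x goes through contour"

# The GCE HTTP load balancer (y.y.y.y) proxies straight to the NodePort service:
# nginx's own Server header survives, and Google inserts a Via header.
echo "$http_lb_headers" | grep -qi '^via: 1.1 google' && echo "y.y.y.y is the GCE HTTP load balancer"
```

In practice you would run `curl -sI` against each IP and grep the live headers the same way.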