Kubernetes DNS fails in Kubernetes 1.2

I am trying to set up DNS support in Kubernetes 1.2 on CentOS 7. According to the documentation, there are two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables:

ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1

I added these settings to /etc/kubernetes/config and restarted, with no effect, so either I do not have a supported kubernetes cluster setup (what counts as one?), or something else is needed to set up its environment.
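For reference, this is roughly what I did; the master-side service names are the ones shipped by the CentOS 7 kubernetes packages, so adjust them if your installation differs:

```shell
# Append the DNS settings to the shared config read by the Kubernetes services
cat >> /etc/kubernetes/config <<'EOF'
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
EOF

# Restart the master-side services so they re-read the config
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
```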

The second approach requires more manual setup. It adds two flags to the kubelet, which I set by updating /etc/kubernetes/kubelet to include:

KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"

and restarting the kubelet with systemctl restart kubelet. It is then necessary to start a replication controller and a service. The documentation page cited above provides a couple of template files for this, which required some editing, both for local changes (my Kubernetes API server listens on the host's actual IP address rather than 127.0.0.1, so I had to add a --kube-master-url setting) and to remove some Salt dependencies. When I do this, the replication controller starts its four containers successfully, but the kube2sky container is terminated about a minute after completing initialization:

[david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
I0325 20:58:18.516905       1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
I0325 20:58:19.518337       1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
I0325 20:58:19.518364       1 kube2sky.go:530] Using kubernetes API v1
I0325 20:58:19.518468       1 kube2sky.go:598] Waiting for service: default/kubernetes
I0325 20:58:19.533597       1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
F0325 20:59:25.698507       1 kube2sky.go:625] Received signal terminated

I am fairly sure the termination is triggered by the healthz container after it reports:

2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
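The same lookup the probe runs can also be reproduced by hand from the node, pointed at the skydns pod; the pod IP below is a placeholder for whatever `kubectl describe pod` reports, and `nslookup` assumes bind-utils is installed on the host:

```shell
# 172.17.0.2 is hypothetical -- substitute the kube-dns pod's actual IP
nslookup kubernetes.default.svc.cluster.local 172.17.0.2
```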

Aside from that, all the other logs look normal. There is one anomaly, however: it was necessary to specify --validate=false when creating the replication controller, because otherwise the command fails with:

error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false

Could this be related? These arguments come directly from the Kubernetes documentation. If not, what is needed to get this running?
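For completeness, the create commands that actually got the controller started (validation disabled, and pointed at my API server) were along these lines:

```shell
kubectl --server="http://centos:8080" create -f skydns-rc.yaml --validate=false
kubectl --server="http://centos:8080" create -f skydns-svc.yaml
```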

Here is the skydns-rc.yaml I used:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain="cluster.local"
        - --kube-master-url=http://192.168.87.159:8080
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain="cluster.local"
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.

and skydns-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: "10.0.0.10"
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Update: I just commented out the lines in skydns-rc.yaml containing the successThreshold and failureThreshold values, then re-ran the kubectl commands.

kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
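Rather than commenting the lines out by hand, the offending fields can also be stripped mechanically. A sketch, demonstrated on a stub fragment so it is self-contained; in practice the same filter would be applied to skydns-rc.yaml itself:

```shell
# Stand-in for the livenessProbe section of skydns-rc.yaml
cat > /tmp/probe-stub.yaml <<'EOF'
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
EOF

# Drop the two fields the v1.2 client's validator does not recognize
grep -vE '^[[:space:]]*(successThreshold|failureThreshold):' \
    /tmp/probe-stub.yaml > /tmp/probe-stub-clean.yaml
cat /tmp/probe-stub-clean.yaml
```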