Kubernetes DNS not working on local Ubuntu 18.04 environment

I'm trying to deploy a Kubernetes setup on my local machine (Ubuntu 18.04), but there are problems with the DNS service (I can't reach the headless services by their DNS names).

I'm using minikube to run the cluster, and the version is -

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

The headless service -

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP             67m
zookeeper-headless   ClusterIP   None         <none>        2888/TCP,3888/TCP   3m58s
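
For context, the headless service was created roughly like this (a simplified sketch; the labels and port names are approximations of my actual manifest, not a verbatim copy):

kubectl apply -f - <<'EOF'
# Simplified sketch of the headless service (labels/selector are approximations)
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-headless
spec:
  clusterIP: None          # "headless": no virtual IP, DNS should return the pod IPs directly
  selector:
    app: zookeeper         # assumed label on the StatefulSet pods
  ports:
  - name: server
    port: 2888
  - name: leader-election
    port: 3888
EOF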

pods -

NAME                      READY   STATUS    RESTARTS   AGE
zookeeper-statefulset-0   1/1     Running   1          57m
zookeeper-statefulset-1   1/1     Running   1          56m
zookeeper-statefulset-2   1/1     Running   1          54m
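
(To rule out a label/selector mismatch rather than a DNS problem, the headless service itself should list the three pod IPs as endpoints; something like the following should confirm that, with the label being an assumption about my StatefulSet:)

kubectl get endpoints zookeeper-headless
kubectl get pods -l app=zookeeper -o wide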

The DNS service endpoints, which do not exist - (kubectl get ep kube-dns --namespace=kube-system)

NAME       ENDPOINTS   AGE
kube-dns               68m
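
The empty ENDPOINTS column is expected while the CoreDNS pods are not Ready, since a pod that fails its readiness probe is withheld from the service's endpoints. The pods behind kube-dns can be inspected with:

kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
kubectl describe pod -n kube-system -l k8s-app=kube-dns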

The DNS pods (not ready) -

NAME                       READY   STATUS    RESTARTS   AGE
coredns-6955765f44-gv42p   0/1     Running   1          58m
coredns-6955765f44-rfkm2   0/1     Running   1          58m

The logs of the DNS pods are -

[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0221 12:50:23.090594 1 trace.go:82] Trace[146678255]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-21 12:49:53.090061147 +0000 UTC m=+0.011664556) (total time: 30.000405618s):
Trace[146678255]: [30.000405618s] [30.000405618s] END
E0221 12:50:23.090626 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0221 12:50:23.090644 1 trace.go:82] Trace[653875127]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-21 12:49:53.090057185 +0000 UTC m=+0.011660587) (total time: 30.00054106s):
Trace[653875127]: [30.00054106s] [30.00054106s] END
E0221 12:50:23.090668 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0221 12:50:23.090654 1 trace.go:82] Trace[1501712764]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-21 12:49:53.090052023 +0000 UTC m=+0.011655434) (total time: 30.000437703s):
Trace[1501712764]: [30.000437703s] [30.000437703s] END
E0221 12:50:23.090671 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
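
The `dial tcp 10.96.0.1:443: i/o timeout` lines mean CoreDNS cannot reach the Kubernetes API server through the `kubernetes` service VIP, so its `ready` plugin never reports healthy. One way to confirm the same blockage from another pod (the image and flags here are just one way to do it, not what I originally ran):

# Any HTTP response (even 401/403) means the service VIP is reachable;
# an i/o timeout reproduces the CoreDNS failure.
kubectl run -i --rm api-test --image=curlimages/curl --restart=Never --command -- \
  curl -k -m 5 https://10.96.0.1:443/version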

I've tried running -

kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
/ # nslookup headless.default.svc.cluster.local

and got -

Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'headless.default.svc.cluster.local'
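
(For completeness: the service above is actually named zookeeper-headless, so even with working DNS the fully qualified name to query should presumably be the following rather than headless.default.svc.cluster.local:)

/ # nslookup zookeeper-headless.default.svc.cluster.local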

I don't even know where to start debugging this. Can anyone help?

Update

I think I figured out what triggers the problem, but I don't understand why it happens. The issue seems to appear once the firewall is active: for some reason the core-dns pods cannot become ready and stay stuck. After I turned the firewall off by running -

sudo ufw disable

the core-dns pods' status changed to Running and the service now has endpoint addresses.

 kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup  zookeeper-headless.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      zookeeper-headless.default
Address 1: 172.17.0.4 zookeeper-statefulset-1.zookeeper-headless.default.svc.cluster.local
Address 2: 172.17.0.5 zookeeper-statefulset-0.zookeeper-headless.default.svc.cluster.local
Address 3: 172.17.0.6 zookeeper-statefulset-2.zookeeper-headless.default.svc.cluster.local

The DNS service endpoints now -

NAME       ENDPOINTS                                               AGE
kube-dns   172.17.0.2:53,172.17.0.3:53,172.17.0.2:53 + 3 more...   34m

The DNS pods now -

NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-2d8md         1/1     Running   4          34m
kube-system   coredns-6955765f44-n2gcp         1/1     Running   4          34m
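
I'd rather not leave the firewall disabled permanently. My assumption is that the proper fix is to allow the container-bridge and forwarded traffic through ufw instead of disabling it, something along these lines (the interface names docker0 / cni0 are guesses and depend on the minikube driver, so they would need to be checked with `ip link` first):

# Interface names below are assumptions; verify with `ip link` first.
sudo ufw allow in on docker0
sudo ufw allow out on docker0
sudo ufw allow in on cni0
sudo ufw allow out on cni0
# Allow forwarded (pod-to-pod / pod-to-service) traffic.
sudo ufw default allow routed
sudo ufw reload

But I still don't understand why ufw blocks CoreDNS from reaching 10.96.0.1:443 in the first place.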
