Not able to connect to kafka brokers
I have deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my local k8s cluster.
I am trying to expose it using a TCP controller with nginx.
My TCP nginx configmap looks like
data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092
and I have made the corresponding entries in my nginx ingress controller
- name: <zookeper-tcp-port>-tcp
  port: <zookeper-tcp-port>
  protocol: TCP
  targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
  port: <kafka-tcp-port>
  protocol: TCP
  targetPort: <kafka-tcp-port>-tcp
Now I am trying to connect to my kafka instance.
When I try to connect to the IP and port using Kafka Tool, I get the error message
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I believe is the correct broker address (I have tried them all...), I get a timeout. There are no logs from the nginx controller except
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001
From the pod kafka-zookeeper-0
I get lots of
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I am not sure what these have to do with it?
Any ideas as to what I am doing wrong?
Thanks in advance.
TL;DR:
- Change the value nodeport.enabled to true in cp-kafka/values.yaml before deploying.
- Change the service names and ports in your TCP NGINX ConfigMap and Ingress object.
- Set the bootstrap-server on your kafka tools to <Cluster_External_IP>:31090.
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints.
These Endpoints are then used to generate instance-specific DNS records in the form of:
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
It creates a DNS name for each pod, e.g.:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
- This is what makes these services connect to each other inside the cluster.
I had to go through a lot of trial and error before I realized how it is supposed to work. Based on your TCP Nginx ConfigMap, I believe you are hitting the same issue.
- The Nginx ConfigMap expects entries in the form <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
- I realized that you don't need to expose Zookeeper, since it is an internal service and is handled by the kafka brokers.
- I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
- In order to get external access you have to set the parameter nodeport.enabled to true, as described here: External Access Parameters.
- It adds one service for each kafka-N pod during the chart deployment.
- Then you change your configmap to map to one of them:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
Note that the created service has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0.
This is how the service identifies which pod it is meant to reach (see the sketch below).
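To make that concrete, here is roughly what such a chart-generated per-broker service looks like. This is only a sketch, not the chart's exact template: the selector and the port numbers are inferred from the note above, from the kubectl get svc output further below (19092:31090/TCP), and by analogy with the zookeeper service model at the end of this answer.
apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - name: external-broker
    port: 19092          # in-cluster service port (nodeport.servicePort)
    nodePort: 31090      # port opened on every node (firstListenerPort + broker ordinal)
    targetPort: 31090    # external listener on the broker pod
    protocol: TCP
  selector:
    app: cp-kafka                                           # assumed label from the chart
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0     # pins the service to broker 0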
- Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- Point your kafka tools at <Cluster_External_IP>:31090.
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
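Alternatively, the same values can probably be passed on the command line instead of editing the file; a hedged sketch, assuming the usual parent-chart/subchart value paths of cp-helm-charts (cp-kafka.*):
$ helm install demo cp-helm-charts \
    --set cp-kafka.nodeport.enabled=true \
    --set cp-kafka.nodeport.servicePort=19092 \
    --set cp-kafka.nodeport.firstListenerPort=31090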
- Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
- Create the TCP ConfigMap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
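To double-check the mapping, you can read the ConfigMap back and confirm the controller actually watches it. Note that nginx-ingress only honours this ConfigMap if it was started with the --tcp-services-configmap flag pointing at it (many installations set this by default); without that flag the data entries are silently ignored:
$ kubectl -n kube-system get configmap tcp-services -o yaml
$ kubectl -n kube-system get deploy nginx-ingress-controller -o yaml | grep tcp-services-configmap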
- Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
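If you prefer a non-interactive change over kubectl edit, the same port entry can be appended with a JSON patch; a sketch, assuming the controller is the first (index 0) container in the pod spec:
$ kubectl -n kube-system patch deploy nginx-ingress-controller --type=json \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/ports/-","value":{"containerPort":31090,"hostPort":31090,"protocol":"TCP"}}]'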
- My ingress is on IP 35.226.189.123; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have minikube, so I can test with a kafka-client pod:
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to reach kafka from outside the cluster.
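For completeness, a message can be produced over the same external endpoint; a minimal sketch, assuming the topic demo-topic already exists and that the kafka-client image ships the standard console tools:
root@kafka-client:/# date | kafka-console-producer --broker-list 35.226.189.123:31090 --topic demo-topic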
- If you also need external access to Zookeeper, I'll leave a service model for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
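Apply it like any other manifest (file name as given above):
$ kubectl apply -f zookeeper-external-0.yaml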
- It will create a service for it:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
- Patch your configmap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
- Add the ingress rule:
ports:
- containerPort: 31181
  hostPort: 31181
  protocol: TCP
- Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
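From here the usual ZooKeeper CLI commands work; for example, listing the registered broker ids and inspecting one of them (a hedged example: the znode paths are Kafka's standard layout, not something specific to this chart):
ls /brokers/ids
get /brokers/ids/0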
Let me know in the comments if you have any questions!