VerneMQ on Kubernetes cluster
I am trying to install VerneMQ on a Kubernetes cluster on Oracle OCI using the Helm chart. The Kubernetes infrastructure appears to be up and running, and I can deploy my custom microservices without any problems.
I am following the instructions at https://github.com/vernemq/docker-vernemq
The steps are:
helm install --name="broker" ./
run from the helm/vernemq directory.
The output is:
NAME:   broker
LAST DEPLOYED: Fri Mar  1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/RoleBinding
NAME            AGE
broker-vernemq  1s

==> v1/Service
NAME                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
broker-vernemq-headless  ClusterIP  None          <none>       4369/TCP  1s
broker-vernemq           ClusterIP  10.96.120.32  <none>       1883/TCP  1s

==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
broker-vernemq  3        1        1s

==> v1/Pod(related)
NAME              READY  STATUS             RESTARTS  AGE
broker-vernemq-0  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME            SECRETS  AGE
broker-vernemq  1        1s

==> v1/Role
NAME            AGE
broker-vernemq  1s
NOTES:
1. Check your VerneMQ cluster status:
kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show
2. Get VerneMQ MQTT port
echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
kubectl port-forward svc/broker-vernemq 1883:1883
But when I run this check
kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show
I get
Node 'VerneMQ@broker-vernemq-0..default.svc.cluster.local' not responding to pings.
command terminated with exit code 1
I think something is wrong with the subdomain (there is nothing between the two dots).
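That reading is consistent with how the docker-vernemq entrypoint assembles the node name from the pod's DNS identity. A rough model of the assembly (a sketch for illustration only; `erlang_node_name` is a hypothetical helper, not the actual entrypoint code, which is a shell script):

```python
def erlang_node_name(pod_name: str, headless_service: str, namespace: str) -> str:
    """Model of the fully-qualified Erlang node name that docker-vernemq
    builds from the pod's in-cluster DNS identity."""
    return f"VerneMQ@{pod_name}.{headless_service}.{namespace}.svc.cluster.local"

# If the entrypoint fails to look up the headless service, the middle
# component is empty, producing exactly the double dot from the error:
print(erlang_node_name("broker-vernemq-0", "", "default"))
# VerneMQ@broker-vernemq-0..default.svc.cluster.local
```

So the empty segment between the dots suggests the pod could not determine the headless service part of its own DNS name.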
Running this command
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
the last log line is
I0301 10:07:38.366826 1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
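That log line points at a known chicken-and-egg with headless services: DNS records for the pods are only created once their endpoints are ready, yet each VerneMQ pod needs those records at startup to find its peers. A common workaround (a sketch, not part of the chart above; the service name, labels and port are assumed from the Helm output) is to publish not-ready addresses on the headless service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: broker-vernemq-headless
spec:
  clusterIP: None
  # Publish per-pod DNS records even before the pods pass their readiness
  # checks, so peers can resolve each other during cluster formation.
  publishNotReadyAddresses: true
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
```

With this set, the per-pod DNS records exist during cluster formation even while the pods are still unready.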
I also tried this custom YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
        - name: vernemq
          image: erlio/docker-vernemq:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 1883
              name: mqtt
            - containerPort: 8883
              name: mqtts
            - containerPort: 4369
              name: epmd
          env:
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "off"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
              value: "/etc/vernemq-passwd/vmq.passwd"
          volumeMounts:
            - name: vernemq-passwd
              mountPath: /etc/vernemq-passwd
              readOnly: true
      volumes:
        - name: vernemq-passwd
          secret:
            secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - port: 8883
      name: mqtts
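Note that the StatefulSet above mounts a Secret named vernemq-passwd (containing a vmq.passwd file) that is not shown anywhere; it has to exist before the pods can start. A minimal sketch of such a Secret (the file content here is a placeholder; the real file would normally be generated with the vmq-passwd tool):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vernemq-passwd
  namespace: default
type: Opaque
stringData:
  # Placeholder -- generate the real content with vmq-passwd and paste it
  # here, or create the Secret directly from the file with kubectl instead.
  vmq.passwd: ""
```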
Any suggestions?
Thanks a lot,
Jack
This seems to be a bug in the Docker image. The advice on GitHub is to build the image yourself, or to use a later VerneMQ image (after 1.6.x) where it is already fixed.
The suggestion is mentioned here: https://github.com/vernemq/docker-vernemq/pull/92
Pull request with a possible fix: https://github.com/vernemq/docker-vernemq/pull/97
EDIT:
I only got it working without Helm, using kubectl create -f ./cluster.yaml with the following cluster.yaml:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: default
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: erlio/docker-vernemq:latest
          ports:
            - containerPort: 1883
              name: mqtt
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100
            - containerPort: 9101
            - containerPort: 9102
            - containerPort: 9103
            - containerPort: 9104
            - containerPort: 9105
            - containerPort: 9106
            - containerPort: 9107
            - containerPort: 9108
            - containerPort: 9109
          env:
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
            # only allow anonymous access for development / testing purposes!
            # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
            #   value: "on"
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
    - port: 44053
      name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqttlb
  labels:
    app: mqttlb
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqttlb
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: NodePort
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqtt
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
It takes a few seconds for the pods to become ready.
Try setting the environment variables "DOCKER_VERNEMQ_KUBERNETES_APP_LABEL" and "DOCKER_VERNEMQ_KUBERNETES_NAMESPACE". That worked for me.
The default selector name is vernemq; you can override it with the environment variable DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR, passing the value as app=name:
DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR="app={Name}"
For example:
DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR="app=demo"
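Wired into the StatefulSet's env section, that override would look like this (a sketch; app=demo is just the example value above):

```yaml
env:
  # Override the default label selector used for Kubernetes peer discovery.
  - name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
    value: "app=demo"
```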