Running local development kafka in kubernetes with Kind and persisting volumes
I'm running a Linux development machine and setting up a local Kafka on Kubernetes with Kind for development (moving from docker-compose for learning and practice purposes). Everything works fine, but I'm now trying to map the Kafka and Zookeeper volumes to the host machine, and I only manage to map the Kafka volume successfully.
For Zookeeper I configure the data and log paths and map them to a volume, but the internal directories are not exposed on the host (which does happen with the Kafka mapping): it only shows the data and log folders, but no actual content exists on the host, so restarting Zookeeper resets its state.
I wonder if there is a limitation, or a different approach needed, when using Kind and mapping multiple directories from different pods. What am I missing? Why is only the Kafka volume successfully persisted on the host?
The complete setup, with a README on how to run it, is on GitHub under the pv-pvc-setup folder.
The relevant Zookeeper configuration. Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      service: zookeeper
  strategy: {}
  template:
    metadata:
      labels:
        network/kafka-network: "true"
        service: zookeeper
    spec:
      containers:
        - env:
            - name: TZ
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_DATA_DIR
              value: "/var/lib/zookeeper/data"
            - name: ZOOKEEPER_LOG_DIR
              value: "/var/lib/zookeeper/log"
            - name: ZOOKEEPER_SERVER_ID
              value: "1"
          image: confluentinc/cp-zookeeper:7.0.1
          name: zookeeper
          ports:
            - containerPort: 2181
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/zookeeper
              name: zookeeper-data
      hostname: zookeeper
      restartPolicy: Always
      volumes:
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-pvc
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  resources:
    requests:
      storage: 5Gi
Persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper
Kind config:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data
        containerPath: /var/lib/zookeeper
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
As mentioned, the setup works; I'm now just trying to make sure the relevant Kafka and Zookeeper volumes are mapped to persistent external storage (in this case, the local disk).
I finally sorted it out. There were two main problems in my initial setup, both now fixed.
The folders used to persist data on the local host need to be created beforehand, so that they have the same uid:gid as the user who created the initial Kind cluster. If they don't exist, the folders won't persist data correctly.
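Assuming the repo layout above, pre-creating those host folders before `kind create cluster` can be sketched like this (paths taken from the kind-config; adjust them if your layout differs):

```shell
# Pre-create the host directories that Kind's extraMounts will bind,
# so they already exist with the current user's uid:gid before the
# cluster starts (otherwise the container runtime creates them as root).
mkdir -p ./tmp/kafka-data
mkdir -p ./tmp/zookeeper-data/data
mkdir -p ./tmp/zookeeper-data/log

# Inspect ownership of the created directories.
ls -ldn ./tmp/kafka-data ./tmp/zookeeper-data/data ./tmp/zookeeper-data/log
```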
I created a specific persistent volume and persistent volume claim for each folder persisted from Zookeeper (data and log), and configured them in the kind-config. This is the final config:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data/data
        containerPath: /var/lib/zookeeper/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data/log
        containerPath: /var/lib/zookeeper/log
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
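The matching per-folder PV/PVC pair for the data directory could look like this (a sketch following the naming pattern of the original manifests; the exact names in the repo may differ):

```yaml
# One PV/PVC pair per Zookeeper directory, mirroring the single-volume
# originals. hostPath points inside the Kind node, where extraMounts
# binds the corresponding host folder.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-data-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-data-local-storage
  resources:
    requests:
      storage: 5Gi
```

An analogous pair covers the log directory (hostPath /var/lib/zookeeper/log), and the Deployment then mounts the two claims at /var/lib/zookeeper/data and /var/lib/zookeeper/log instead of a single mount at /var/lib/zookeeper.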
If you want to run it just for fun, the complete setup using persistent volumes and persistent volume claims, with further instructions, is available in this repository: https://github.com/mmaia/kafka-local-kubernetes