Strimzi Kafka on Kubernetes local bare metal

I have a Kubernetes cluster running on several local (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my setup.

I tried to follow the tutorial from the quickstart page: https://strimzi.io/docs/quickstart/master/
My ZooKeeper pods stay Pending at step 2.4. Creating a cluster:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims

I usually use hostPath for my volumes, so I don't understand what is going wrong...

Edit: I tried creating a StorageClass with Arghya Sadhu's commands, but the problem is still there.
The description of my PVC:

kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding

And my pod:

kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2

    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

You need a PersistentVolume that satisfies the constraints of the PersistentVolumeClaim.

Use local storage, with a local storage class:

$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
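
Because kubernetes.io/no-provisioner does not provision anything dynamically, the StorageClass alone only controls how the claim binds; on bare metal you still have to create matching PersistentVolumes yourself. A minimal sketch of a local PV that the pending ZooKeeper claim could bind to might look like the following (the PV name, the path /mnt/data/zookeeper, the 10Gi capacity and the node name node1 are placeholders to adapt to your machines):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-local-pv
spec:
  capacity:
    storage: 10Gi                      # must be at least the size requested by the PVC
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage      # must match the StorageClass above
  local:
    path: /mnt/data/zookeeper          # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1                # replace with the hostname of one of your nodes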

You need a default storage class configured in the cluster so that the PersistentVolumeClaim can take storage from there.

$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
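
To verify that the patch took effect, list the storage classes; the default one is shown with (default) next to its name:

$ kubectl get storageclass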

Yes, in my opinion Kubernetes is missing something here at the infrastructure level. You have to provide PersistentVolumes for static assignment to the PVCs, or, as Arghya already mentioned, you can provide StorageClasses for dynamic provisioning.

In my case the problem was that I had created the Kafka cluster in another namespace, my-cluster-kafka, while the Strimzi operator was running in the namespace kafka.

So I simply created it in the same namespace. For testing purposes I used ephemeral storage.

Here is the kafka.yaml:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
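
Assuming the manifest is saved as kafka.yaml and the operator watches the kafka namespace, it can be applied and checked roughly as in the quickstart:

$ kubectl apply -f kafka.yaml -n kafka
$ kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka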