How to deploy a MongoDB replica set on a microk8s cluster

I am trying to deploy a MongoDB ReplicaSet on a microk8s cluster, installed on a virtual machine running Ubuntu 20.04. After deployment, the mongo pods do not run but keep crashing. I have enabled the microk8s storage, dns and rbac add-ons, but the problem persists. Can anyone help me find the reason behind this? Below is my manifest file:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
      environment: test
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: secrets-volume
          secret:
            secretName: shared-bootstrap-data
            defaultMode: 256
      containers:
        - name: mongod-container
          #image: pkdone/mongo-ent:3.4
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: secrets-volume
              readOnly: true
              mountPath: /etc/secrets-volume
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim     
    spec:
      storageClassName: microk8s-hostpath
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
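
The manifest mounts a keyfile from the shared-bootstrap-data secret, which can be created along these lines (a typical sketch only; the exact keyfile generation command is an assumption and not part of the manifest):

# generate a MongoDB keyfile and store it in the secret the manifest expects
openssl rand -base64 756 > internal-auth-mongodb-keyfile
kubectl create secret generic shared-bootstrap-data --from-file=internal-auth-mongodb-keyfile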

In addition, here is the pv, pvc and sc output:

yyy@xxx:$ kubectl get pvc
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mongodb-persistent-storage-claim-mongo-0   Bound    pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413   5Gi        RWO            microk8s-hostpath   13m
yyy@xxx:$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                              STORAGECLASS        REASON   AGE
pvc-5b75ddf6-abbd-4ff3-a135-0312df1e6703   20Gi       RWX            Delete           Bound    container-registry/registry-claim                  microk8s-hostpath            38m
pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413   5Gi        RWO            Delete           Bound    default/mongodb-persistent-storage-claim-mongo-0   microk8s-hostpath            13m
yyy@xxx:$ kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  108m

yyy@xxx:$ kubectl get pods -n kube-system 
NAME                                         READY   STATUS    RESTARTS   AGE
metrics-server-8bbfb4bdb-xvwcw               1/1     Running   1          148m
dashboard-metrics-scraper-78d7698477-4qdhj   1/1     Running   0          146m
kubernetes-dashboard-85fd7f45cb-6t7xr        1/1     Running   0          146m
hostpath-provisioner-5c65fbdb4f-ff7cl        1/1     Running   0          113m
coredns-7f9c69c78c-dr5kt                     1/1     Running   0          65m
calico-kube-controllers-f7868dd95-wtf8j      1/1     Running   0          150m
calico-node-knzc2                            1/1     Running   0          150m

I installed the cluster using this command:

sudo snap install microk8s --classic --channel=1.21
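
The storage, dns and rbac add-ons mentioned above were enabled with the usual microk8s commands, roughly like this (a sketch assuming the standard add-on names for this channel):

microk8s enable dns
microk8s enable storage
microk8s enable rbac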

Output of the mongo deployment:

yyy@xxx:$ kubectl get all
NAME          READY   STATUS             RESTARTS   AGE
pod/mongo-0   0/1     CrashLoopBackOff   5          4m18s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/kubernetes        ClusterIP   10.152.183.1   <none>        443/TCP     109m
service/mongodb-service   ClusterIP   None           <none>        27017/TCP   4m19s

NAME                     READY   AGE
statefulset.apps/mongo   0/3     4m19s

Pod logs:

yyy@xxx:$ kubectl logs pod/mongo-0
{"t":{"$date":"2021-09-07T16:21:13.191Z"},"s":"F",  "c":"CONTROL",  "id":20574,   "ctx":"-","msg":"Error during global initialization","attr":{"error":{"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"}}}
yyy@xxx:$ kubectl describe pod/mongo-0
Name:         mongo-0
Namespace:    default
Priority:     0
Node:         citest1/192.168.9.105
Start Time:   Tue, 07 Sep 2021 16:17:38 +0000
Labels:       controller-revision-hash=mongo-66bd776569
              environment=test
              replicaset=MainRepSet
              role=mongo
              statefulset.kubernetes.io/pod-name=mongo-0
Annotations:  cni.projectcalico.org/podIP: 10.1.150.136/32
              cni.projectcalico.org/podIPs: 10.1.150.136/32
Status:       Running
IP:           10.1.150.136
IPs:
  IP:           10.1.150.136
Controlled By:  StatefulSet/mongo
Containers:
  mongod-container:
    Container ID:  containerd://458e21fac3e87dcf304a9701da0eb827b2646efe94cabce7f283cd49f740c15d
    Image:         mongo
    Image ID:      docker.io/library/mongo@sha256:58ea1bc09f269a9b85b7e1fae83b7505952aaa521afaaca4131f558955743842
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      numactl
      --interleave=all
      mongod
      --wiredTigerCacheSizeGB
      0.1
      --bind_ip
      0.0.0.0
      --replSet
      MainRepSet
      --auth
      --clusterAuthMode
      keyFile
      --keyFile
      /etc/secrets-volume/internal-auth-mongodb-keyfile
      --setParameter
      authenticationMechanisms=SCRAM-SHA-1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 07 Sep 2021 16:24:03 +0000
      Finished:     Tue, 07 Sep 2021 16:24:03 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:        200m
      memory:     200Mi
    Environment:  <none>
    Mounts:
      /data/db from mongodb-persistent-storage-claim (rw)
      /etc/secrets-volume from secrets-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7nf8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mongodb-persistent-storage-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-persistent-storage-claim-mongo-0
    ReadOnly:   false
  secrets-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  shared-bootstrap-data
    Optional:    false
  kube-api-access-b7nf8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  7m53s                   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  7m52s                   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         7m50s                   default-scheduler  Successfully assigned default/mongo-0 to citest1
  Normal   Pulled            7m25s                   kubelet            Successfully pulled image "mongo" in 25.215669443s
  Normal   Pulled            7m21s                   kubelet            Successfully pulled image "mongo" in 1.192994197s
  Normal   Pulled            7m6s                    kubelet            Successfully pulled image "mongo" in 1.203239709s
  Normal   Pulled            6m38s                   kubelet            Successfully pulled image "mongo" in 1.213451175s
  Normal   Created           6m38s (x4 over 7m23s)   kubelet            Created container mongod-container
  Normal   Started           6m37s (x4 over 7m23s)   kubelet            Started container mongod-container
  Normal   Pulling           5m47s (x5 over 7m50s)   kubelet            Pulling image "mongo"
  Warning  BackOff           2m49s (x23 over 7m20s)  kubelet            Back-off restarting failed container

The logs you provided show that you have set the wiredTigerCacheSizeGB parameter incorrectly. In your case it is 0.1, and according to the message

"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"

it should be at least 0.25.

The containers section from your manifest:

containers:
        - name: mongod-container
          #image: pkdone/mongo-ent:3.4
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"

You should change this part:

-  "--wiredTigerCacheSizeGB"  
-  "0.1"

"0.1" 大于或等于 "0.25"


In addition, I also see another error:

1 pod has unbound immediate PersistentVolumeClaims

It should be related to what I wrote above. However, you may also find other ways to resolve it, here and here.
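
For the unbound PersistentVolumeClaims warning specifically, one option that is sometimes suggested (an assumption on my side, not necessarily what the linked answers describe) is a StorageClass with volumeBindingMode: WaitForFirstConsumer, so the claim is only bound once the pod has been scheduled; whether the microk8s hostpath provisioner honours this mode may depend on the version:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: microk8s-hostpath-wffc   # hypothetical name, to be referenced from volumeClaimTemplates
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer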