Restarting a Kubernetes PetSet wipes the persistent volume

I am running a PetSet of 3 ZooKeeper pods whose volumes are backed by GlusterFS persistent volumes. When the PetSet starts for the first time, everything works fine.

One of my requirements is that if the PetSet is killed, the pods should keep using the same persistent volumes after I restart it.

The problem I am facing now is that after the PetSet restarts, the existing data in the persistent volumes gets wiped. How can I avoid this other than manually copying the files out of the volume beforehand? I tried both the Retain and Delete reclaim policies, and the volumes were cleaned either way. Thanks.

The configuration files are below.

PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-0
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-0
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-2
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-2
    namespace: default
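
After creating these, a quick sanity check I use to confirm each PV is Bound to the expected claim and really carries the Retain policy (the exact field names in the describe output may vary slightly between kubectl versions):

kubectl get pv glusterfsvol-zookeeper-0 glusterfsvol-zookeeper-1 glusterfsvol-zookeeper-2
kubectl describe pv glusterfsvol-zookeeper-0 | grep -E 'Status|Claim|Reclaim Policy'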

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
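
As far as I understand the PetSet convention, each pet claims a PVC named <volumeClaimTemplate-name>-<petset-name>-<ordinal>, so the PVC names above are chosen to match the glusterfsvol-zookeeper-0/1/2 claims the pods will look for. The binding can be verified with:

kubectl get pvc glusterfsvol-zookeeper-0 glusterfsvol-zookeeper-1 glusterfsvol-zookeeper-2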

PetSet

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: zookeeper
spec:
  serviceName: "zookeeper"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: zookeeper
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: kuanghaochina/zookeeper-3.5.2-alpine-jdk:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 2888
            name: peer
          - containerPort: 3888
            name: leader-election
          - containerPort: 2181
            name: client
        env:
        - name: ZOOKEEPER_LOG_LEVEL
          value: INFO
        volumeMounts:
        - name: glusterfsvol
          mountPath: /opt/zookeeper/data
          subPath: data
        - name: glusterfsvol
          mountPath: /opt/zookeeper/dataLog
          subPath: dataLog
  volumeClaimTemplates:
  - metadata:
      name: glusterfsvol
    spec:
      accessModes: 
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
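
For the restart test itself I do roughly the following (zookeeper-petset.yaml is just whatever file holds the manifest above); as far as I know the PVCs created from volumeClaimTemplates are not deleted together with the PetSet, so the re-created pets should bind to the same GlusterFS volumes:

kubectl delete petset zookeeper
kubectl get pvc                              # the glusterfsvol-zookeeper-* claims should still be Bound
kubectl create -f zookeeper-petset.yaml
kubectl exec zookeeper-0 -- ls /opt/zookeeper/data   # the old data should still be here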

The cause turned out to be that I was using zkServer-initialize.sh to force ZooKeeper to use a specific id, but that script cleans the dataDir.
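
What I ended up doing instead is writing the myid file myself from the pet's ordinal and not calling zkServer-initialize.sh at all, so the dataDir is never touched. A minimal sketch of the container entrypoint (the ZooKeeper install path and data dir are the ones from my image, adjust as needed):

#!/bin/sh
# the pet's hostname ends in its ordinal: zookeeper-0, zookeeper-1, zookeeper-2
ORD=${HOSTNAME##*-}
ZK_ID=$((ORD + 1))

DATA_DIR=/opt/zookeeper/data
mkdir -p "$DATA_DIR"
# only write myid; do NOT run zkServer-initialize.sh, which cleans the dataDir
echo "$ZK_ID" > "$DATA_DIR/myid"

exec /opt/zookeeper/bin/zkServer.sh start-foreground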