glusterfs, heketi and kubernetes auto provisioning problem

I have a single Gluster node. I have tested heketi, and it creates volumes correctly through its CLI.

Here is my StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: myglusterfs
  annotations:  
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  resturl: "http://x.x.x:8080"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  volumetype: "replicate:0"
  volumenameprefix: "k8s-dev"
  clusterid: "4d9a77f712zb12x57dd42477b993e9af"

When I create a sample PVC, it stays stuck in Pending:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# kubectl get pvc
NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Pending                                      myglusterfs    5m11s
# kubectl describe pvc mypvc 
Name:          mypvc
Namespace:     default
StorageClass:  myglusterfs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason              Age                  From                         Message
  ----     ------              ----                 ----                         -------
  Warning  ProvisioningFailed  14s (x10 over 6m9s)  persistentvolume-controller  Failed to provision volume with StorageClass "myglusterfs": failed to create volume: failed to create volume: see kube-controller-manager.log for details

When I look at the kube-controller-manager pod logs, all I see is this:

1 event.go:291] "Event occurred" object="default/mypvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to provision volume with StorageClass \"myglusterfs\": failed to create volume: failed to create volume: see kube-controller-manager.log for details"

The question is: how do I find out why the PVC stays in Pending forever? Where are the detailed logs?

When the Gluster cluster has only one node, the volume type must be none:

volumetype: "none"
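Applied to the StorageClass from the question, the `parameters` block would look like this (a sketch reusing the question's own placeholder `resturl` and `clusterid`; substitute your real values):

```yaml
# StorageClass parameters for a single-node Gluster cluster
parameters:
  resturl: "http://x.x.x:8080"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  volumetype: "none"          # replication is impossible with only one node
  volumenameprefix: "k8s-dev"
  clusterid: "4d9a77f712zb12x57dd42477b993e9af"
```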

Check the resturl IP address by running:

curl http://x.x.x.x:8080/hello

The output of this command should be:

Hello from Heketi

Find the IP address of the heketi pod:

kubectl get pods -o wide | grep "heketi-"

The IP address x.x.x.x in resturl and the IP address shown in this output must match.
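The pod IP can be pulled out of the `kubectl get pods -o wide` output with a small `awk` filter. A sketch, using a fabricated sample line (the pod name and IP below are made up; in the wide output the IP is column 6):

```shell
# a fabricated sample line from `kubectl get pods -o wide`
sample='heketi-58c5b7d9f-abcde   1/1   Running   0   3d   10.233.64.12   node1   <none>   <none>'

# column 6 of the wide output is the pod IP
pod_ip=$(echo "$sample" | awk '/heketi-/{print $6}')
echo "$pod_ip"   # 10.233.64.12
```

Compare the printed IP with the host in the StorageClass `resturl`; if they differ, the provisioner cannot reach heketi.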

Check the output of this command:

kubectl get secrets | grep "heketi-secret"

The output should not be empty.

Authorization must be enabled with restauthenabled: "true".

Putting it all together, you end up with three YAML configs:

heketi-secret.yml

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: cGFzc3dvcmQ=
type: kubernetes.io/glusterfs

data.key is the base64-encoded key: echo -n "password" | base64
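For example, encoding the key used in the Secret above ("password" here is just the example key matching cGFzc3dvcmQ=; use your real heketi admin key):

```shell
# encode the heketi admin key for the Secret's data.key field
echo -n "password" | base64          # cGFzc3dvcmQ=
# sanity check: decode it back
echo -n "cGFzc3dvcmQ=" | base64 -d   # password
```

Note the `-n` on `echo`: without it a trailing newline is encoded too, and heketi authentication fails with the resulting key.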

storage-class.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  resturl: "http://x.x.x.x:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:3"
  volumenameprefix: "k8s-dev"
  gidMin: "40000"
  gidMax: "50000"
  clusterid: "2309963f1aee540437c2aabaeb7a6253"

test-pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
  annotations:
   volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply these configs:

kubectl create -f heketi-secret.yml
kubectl create -f storage-class.yml
kubectl create -f test-pvc.yml

Finally, check the output of kubectl get pvc: the PersistentVolumeClaim should no longer be stuck in Pending.