
Persistent Storage in EKS failing to provision volume

I followed the steps in the AWS Knowledge Center article to set up persistent storage: Use persistent storage in Amazon EKS

Unfortunately, the PersistentVolume (PV) does not get created:

kubectl get pv
No resources found

When I check the PVC logs, I get the following provisioning failure messages:

storageclass.storage.k8s.io "ebs-sc" not found

failed to provision volume with StorageClass "ebs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
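(For reference, these messages appear in the events of the pending claim; a minimal way to inspect them, assuming the claim is named ebs-claim as an illustration:)

kubectl describe pvc ebs-claim
kubectl get events --sort-by=.metadata.creationTimestamp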

I am using Kubernetes v1.21.2-eks-0389ca3.


Update:

The storageclass.yaml used in the example has the provisioner set to ebs.csi.aws.com:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
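As a quick sanity check (a sketch, assuming the manifest is saved as storageclass.yaml), applying and listing the class rules out the "not found" error:

kubectl apply -f storageclass.yaml
kubectl get storageclass ebs-sc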

When I updated it as per @gohm'c's answer, it created a PV:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
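To confirm the result, the class, claim and volume can be listed together (resource names depend on your manifests):

# With WaitForFirstConsumer, the PV only appears once a pod uses the claim
kubectl get storageclass,pvc,pv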

Your question has been asked several times before, and still no one has answered it.

For example here: SweetOps #kubernetes for March, 2020

Or here (requires logging in to the AWS console): AWS Developer Forums: PVC are in Pending state that are ...

The source code is here:

    opComplete := util.OperationCompleteHook(plugin.GetPluginName(), "volume_provision")
    volume, err = provisioner.Provision(selectedNode, allowedTopologies)
    opComplete(volumetypes.CompleteFuncParam{Err: &err})
    if err != nil {
        // Other places of failure have nothing to do with VolumeScheduling,
        // so just let controller retry in the next sync. We'll only call func
        // rescheduleProvisioning here when the underlying provisioning actually failed.
        ctrl.rescheduleProvisioning(claim)

        strerr := fmt.Sprintf("Failed to provision volume with StorageClass %q: %v", storageClass.Name, err)
        klog.V(2).Infof("failed to provision volume for claim %q with StorageClass %q: %v", claimToClaimKey(claim), storageClass.Name, err)
        ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.ProvisioningFailed, strerr)
        return pluginName, err
    }

But there is a solution in another repository, /kubernetes-sigs/aws-ebs-csi-driver:

The issue was resolved after fixing a misconfigured CNI setup, which prevented inter-node communication and meant that provisioning of storage never got triggered.

We have not tried upgrading our current working cluster (v1.15.x) to any newer versions, but we can confirm that mounting volumes and provisioning storage works on v1.17.x when starting from scratch (i.e. building a new test cluster in our case).

We are using the specs provided above by @gini-schorsch, but since opening this issue we have also moved to the external AWS cloud-controller-manager (aka aws-cloud-controller-manager).

We have been using the provided IAM profiles for both components (CSI and CCM), cut them down to the use cases we require for our operations, and have not seen any problems with that so far.

So, check your connectivity. Maybe @muni-kumar-gundu is right; in that case, you may also want to check your nodes' availability zones.
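A couple of checks along those lines (a sketch; the grep pattern and zone label assume a standard EBS CSI driver install on a recent EKS version):

# Confirm the EBS CSI controller and node pods are running
kubectl get pods -n kube-system | grep ebs-csi

# Show which availability zone each node sits in
kubectl get nodes -L topology.kubernetes.io/zone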

storageclass.storage.k8s.io "ebs-sc" not found

failed to provision volume with StorageClass "ebs-sc"

After installing the EBS CSI driver, you need to create the storage class "ebs-sc", for example:

cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF

See here for more options.
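Since the class uses WaitForFirstConsumer, a volume is only provisioned once a pod actually consumes the claim; a minimal sketch to trigger that (the names ebs-claim and app, the image, and the 4Gi size are illustrative assumptions, not taken from the guide):

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
EOF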