Default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind

I'm trying to create some persistent storage for my MicroK8s Kubernetes project, but so far without success.

Here's what I've done so far:

First, I created a PV with the following YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv-0001
  labels:
    name: dev-pv-0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/dev
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - asfweb

After applying it, Kubernetes shows:
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS
dev-pv-0001   10Gi       RWO            Retain           Available           local-storage


kubectl describe pv dev-pv-0001

Name:              dev-pv-0001
Labels:            name=dev-pv-0001
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Available
Claim:
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [asfweb]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /data/dev
Events:    <none>

And this is my deployment YAML:

apiVersion: "v1"
kind: PersistentVolumeClaim
metadata:
  name: "dev-pvc-0001"
spec:
  storageClassName: "local-storage"
  accessModes:
    - "ReadWriteMany"
  resources:
    requests:
      storage: "10Gi"
  selector:
    matchLabels:
      name: "dev-pv-0001"
---
# Source: server/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-server
  labels:
    helm.sh/chart: server-0.1.0
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 4000
  selector:
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
---
# Source: server/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-server
  labels:
    helm.sh/chart: server-0.1.0
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: server
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: server
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      imagePullSecrets:
        - name: gitlab-auth
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: server
          securityContext:
            {}
          image: "registry.gitlab.com/asfweb/asfk8s/server:latest"
          imagePullPolicy: Always
          resources:
            {}
          volumeMounts:
            - mountPath: /data/db
              name: server-pvc-0001
      volumes:
        - name: server-pvc-0001
          persistentVolumeClaim:
            claimName: dev-pvc-0001
---
# Source: server/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: RELEASE-NAME-server
  labels:
    helm.sh/chart: server-0.1.0
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
    - hosts:
        - "dev.domain.com"
      secretName: dev.domain.com
  rules:
    - host: "dev.domain.com"
      http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: RELEASE-NAME-server
                port:
                  number: 4000

The PersistentVolumeClaim part is what matters here; everything else is just context. In case it helps, here's some more information:

kubectl get pvc -A

NAMESPACE                     NAME                           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
controller-micro              storage-controller-0           Bound     pvc-f0f97686-c59f-4209-b349-cacf3cd0f126   20Gi       RWO            microk8s-hostpath   69d
gitlab-managed-apps           prometheus-prometheus-server   Bound     pvc-abc7ea42-8c74-4698-9b40-db2005edcb42   8Gi        RWO            microk8s-hostpath   69d
asfk8s-25398156-development   dev-pvc-0001                   Pending                                                                        local-storage       28m

kubectl describe pvc dev-pvc-0001 -n asfk8s-25398156-development

Name:          dev-pvc-0001
Namespace:     asfk8s-25398156-development
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: asfk8s
               meta.helm.sh/release-namespace: asfk8s-25398156-development
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       asfk8s-server-6c6bc89c7b-hn44d
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  WaitForFirstConsumer  31m (x2 over 31m)    persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  WaitForPodScheduled   30m                  persistentvolume-controller  waiting for pod asfk8s-server-6c6bc89c7b-hn44d to be scheduled
  Normal  WaitForPodScheduled   12s (x121 over 30m)  persistentvolume-controller  waiting for pod asfk8s-server-6c6bc89c7b-hn44d to be scheduled

kubectl describe pod asfk8s-server-6c6bc89c7b-hn44d -n asfk8s-25398156-development

Name:           asfk8s-server-6c6bc89c7b-hn44d
Namespace:      asfk8s-25398156-development
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=asfk8s
                app.kubernetes.io/name=server
                pod-template-hash=6c6bc89c7b
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/asfk8s-server-6c6bc89c7b
Containers:
  server:
    Image:        registry.gitlab.com/asfweb/asfk8s/server:3751bf19e3f495ac804ae91f5ad417829202261d
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /data/db from server-pvc-0001 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lh7dl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  server-pvc-0001:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-pvc-0001
    ReadOnly:   false
  default-token-lh7dl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lh7dl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  33m   default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  32m   default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

Can someone help me solve this? Thanks in advance.

The problem is that you used node affinity when creating the PV.

Node affinity tells Kubernetes, in effect, "my disk is attached to this specific node." Because of that affinity, your disk (the PV) can only be used on that one particular node.

When your Deployment creates the Pod, the Pod is not necessarily scheduled onto that particular node, and a Pod running anywhere else cannot bind that PV/PVC.

To fix this:

Make sure the Pod and the PVC end up on the same node: add a node affinity rule to the Deployment as well, so the Pod is scheduled on the node the PV is pinned to (see the sketch below).
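For example, here is a minimal sketch of the fields you would merge into the Deployment's spec.template.spec (only the added fields are shown; it assumes the node name asfweb from your PV spec):

spec:
  template:
    spec:
      # Pin the Pod to the node the PV's nodeAffinity points at (asfweb),
      # so the scheduler can bind the local volume.
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - asfweb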

Otherwise:

Remove the node affinity rule from the PV, then create a new PV and PVC and use those instead.
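One caveat with that second option: a PV with a local: source must carry node affinity (the API server rejects a local PV without it), so dropping the rule also means switching the volume source, for example to hostPath. A rough sketch (the name dev-pv-0002 is just a placeholder):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv-0002            # placeholder name for the replacement PV
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:                    # hostPath, unlike local, needs no node affinity
    path: /data/dev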

This is where you set the node affinity rule:

nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - asfweb

And I can see there is no such rule in your Deployment, so your Pod can get scheduled anywhere in the cluster.
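If you stick with the node affinity on the PV, a plain nodeSelector in the pod template is an even shorter way to pin the Pod (a sketch, with the same assumption about the node name asfweb):

spec:
  template:
    spec:
      # Shorthand for "only schedule this Pod on the node named asfweb".
      nodeSelector:
        kubernetes.io/hostname: asfweb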

Here you can see a simple example of creating a PV and a PVC and using them for a MySQL DB: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/