Failed mounting to Persistent-Memory-Backed local persistent volume in Kubernetes 1.20
I am trying to let a k8s pod use PMEM without privileged mode. The approach I am trying is to create a local PV on top of an fsdax directory and let my pod use it through a PVC in k8s. However, I always get a MountVolume.NewMounter initialization failed ... : path does not exist error.
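For reference, a directory like /mnt/pmem0/vol1 on an fsdax mount is typically prepared on the host roughly as sketched below; the device and namespace names are assumptions, since the actual setup commands are not shown in this question:
$ ndctl create-namespace --mode=fsdax        # assumes this produces /dev/pmem0
$ mkfs.ext4 -b 4096 /dev/pmem0               # 4 KiB block size so DAX alignment works
$ mount -o dax /dev/pmem0 /mnt/pmem0         # mount with direct access (DAX) enabled
$ mkdir -p /mnt/pmem0/vol1                   # directory used as the local PV path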
Here are my yaml files and the PMEM status:
StorageClass yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PV yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pmem-pv-volume
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/pmem0/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - pmem
PVC yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-pv-claim
spec:
  storageClassName: local-storage
  volumeName: pmem-pv-volume
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: daemon
  labels:
    env: test
spec:
  hostNetwork: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - pmem
  containers:
  - name: daemon-container
    command: ["/usr/bin/bash", "-c", "sleep 3600"]
    image: mm:v2
    imagePullPolicy: Never
    volumeMounts:
    - mountPath: /mnt/pmem
      name: pmem-pv-storage
    - mountPath: /tmp
      name: tmp
    - mountPath: /var/log/memverge
      name: log
    - mountPath: /var/memverge/data
      name: data
  volumes:
  - name: pmem-pv-storage
    persistentVolumeClaim:
      claimName: pmem-pv-claim
  - name: tmp
    hostPath:
      path: /tmp
  - name: log
    hostPath:
      path: /var/log/memverge
  - name: data
    hostPath:
      path: /var/memverge/data
Some status and k8s output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 745.2G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 740G 0 part
├─cl-root 253:0 0 188G 0 lvm /
├─cl-swap 253:1 0 32G 0 lvm [SWAP]
└─cl-home 253:2 0 520G 0 lvm /home
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 7T 0 disk
└─nvme0n1p1 259:1 0 7T 0 part /mnt/nvme
pmem0 259:2 0 100.4G 0 disk /mnt/pmem0
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pmem-pv-volume 50Gi RWO Delete Bound default/pmem-pv-claim local-storage 20h
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pmem-pv-claim Bound pmem-pv-volume 50Gi RWO local-storage 20h
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default daemon 0/1 ContainerCreating 0 20h
kube-system coredns-74ff55c5b-5crgg 1/1 Running 0 20h
kube-system etcd-minikube 1/1 Running 0 20h
kube-system kube-apiserver-minikube 1/1 Running 0 20h
kube-system kube-controller-manager-minikube 1/1 Running 0 20h
kube-system kube-proxy-2m7p6 1/1 Running 0 20h
kube-system kube-scheduler-minikube 1/1 Running 0 20h
kube-system storage-provisioner 1/1 Running 0 20h
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
108s Warning FailedMount pod/daemon MountVolume.NewMounter initialization failed for volume "pmem-pv-volume" : path "/mnt/pmem0/vol1" does not exist
47m Warning FailedMount pod/daemon Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[tmp log data default-token-4t8sv pmem-pv-storage]: timed out waiting for the condition
37m Warning FailedMount pod/daemon Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[default-token-4t8sv pmem-pv-storage tmp log data]: timed out waiting for the condition
13m Warning FailedMount pod/daemon Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[pmem-pv-storage tmp log data default-token-4t8sv]: timed out waiting for the condition
$ ls -l /mnt/pmem0
total 20
drwx------ 2 root root 16384 Jan 20 15:35 lost+found
drwxrwxrwx 2 root root 4096 Jan 21 17:56 vol1
It is complaining that path "/mnt/pmem0/vol1" does not exist, but it does exist:
$ ls -l /mnt/pmem0
total 20
drwx------ 2 root root 16384 Jan 20 15:35 lost+found
drwxrwxrwx 2 root root 4096 Jan 21 17:56 vol1
Besides using a local PV, I have also tried:
PMEM-CSI. But the PMEM-CSI approach is blocked for me by a containerd/kernel issue: https://github.com/containerd/containerd/issues/3221
A plain PV. When I tried to create a PV backed by PMEM, the pod could not claim the PMEM storage correctly; it was always mounted as an overlay fs on top of the host /.
Could someone help? Thanks a lot!
As discussed in the comments:
Using minikube, rancher, or any other containerized version of the kubelet will lead to MountVolume.NewMounter initialization failed for volume, stating that the path does not exist.
If the kubelet is running in a container, it cannot access the host filesystem at the same path. You must adjust hostDir to the correct path in the kubelet container.
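One quick way to confirm this is to compare what the host sees with what the kubelet's node sees. A minimal check, assuming a minikube setup like the one in this question:
$ ls -ld /mnt/pmem0/vol1                     # present on the host
$ minikube ssh -- ls -ld /mnt/pmem0/vol1     # typically fails: the minikube node has its own filesystem
If the second command cannot find the path, the kubelet is resolving /mnt/pmem0/vol1 inside its own container/VM rather than on your host.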
You can also add a bind for the local volume, as suggested on github. If you are going to use it, adjust the copy-pasted example to your needs:
"HostConfig": {
"Binds": [
"/mnt/local:/mnt/local"
],
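For a minikube cluster, one way to get an equivalent bind is to expose the host directory when the cluster is created; a sketch using the paths from this question (depending on the minikube driver this may go through 9p, which would not preserve PMEM/DAX semantics, so it only addresses the path visibility):
$ minikube start --mount --mount-string="/mnt/pmem0:/mnt/pmem0"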
A regular (non-containerized) installation such as kubeadm does not behave this way, and you will not get this kind of error.