How to share a cephfs volume between pods in different k8s namespaces
I am trying to share a cephfs volume across namespaces within a k8s cluster. I am using ceph-csi with cephfs.
Following https://github.com/ceph/ceph-csi/blob/devel/docs/static-pvc.md#cephfs-static-pvc, I created a static PV+PVC in each of the two namespaces. This works as long as the two pods are not started on the same node.
If both pods land on the same node, the second pod gets stuck with the error:
MountVolume.SetUp failed for volume "team-test-vol-pv" : rpc error: code = Internal desc = failed to bind-mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount to /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/mount: an error (exit status 32) occurred while running mount args: [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/moun
Any ideas on how to solve this, or how to use a single RWX volume across different namespaces?
PV+PVC for team-x:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-x
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-x-test-vol-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-x-test-vol-pv
spec:
  claimRef:
    namespace: team-x
    name: test-vol
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, need not to be same
    # as PV name or volume name. keeping same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
PV+PVC for team-y:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-y
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-y-test-vol-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-y-test-vol-pv
spec:
  claimRef:
    namespace: team-y
    name: test-vol
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team-y/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, need not to be same
    # as PV name or volume name. keeping same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
You may need to provide the ReadWriteMany access mode.
Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Making volumeHandle: xyz unique for each PV solved it. Tested by deploying 3x DaemonSets in 3 different namespaces.
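For illustration, a minimal sketch of that change: everything else in the manifests above stays as-is, only spec.csi.volumeHandle differs between the two PVs. The handle values team-x-share and team-y-share are placeholder examples, not values from the original setup:

# team-x PV, spec.csi excerpt -- nodeStageSecretRef and volumeAttributes unchanged
csi:
  driver: cephfs.csi.ceph.com
  volumeHandle: team-x-share   # placeholder; must be unique per PV
# team-y PV, spec.csi excerpt
csi:
  driver: cephfs.csi.ceph.com
  volumeHandle: team-y-share   # placeholder; must be unique per PV

With identical handles, the CSI node plugin treats both PVs as the same staged volume on a node, which is consistent with the bind-mount failure seen when two pods share a node.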