Why does ReadWriteOnce work on different nodes?
Our platform running on K8s has different components. We need to share storage between two of those components (comp-A and comp-B), but by mistake we defined the PV and PVC for it as ReadWriteOnce. Even though the two components run on different nodes, everything works fine: we are able to read and write the storage from both components.
According to the K8s documentation, a ReadWriteOnce volume can only be mounted on one node, and we should have used ReadWriteMany:
- ReadWriteOnce -- the volume can be mounted as read-write by a single node
- ReadOnlyMany -- the volume can be mounted read-only by many nodes
- ReadWriteMany -- the volume can be mounted as read-write by many nodes
So I am wondering why everything works fine when it shouldn't?
More info:
We use NFS for the storage and we are not using dynamic provisioning. Below is how we define our PV and PVC (we use Helm):
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gstreamer-{{ .Release.Namespace }}
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    mountOptions:
      - hard
      - nfsvers=4.1
    nfs:
      server: {{ .Values.global.nfsserver }}
      path: /var/nfs/general/gstreamer-{{ .Release.Namespace }}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gstreamer-claim
    namespace: {{ .Release.Namespace }}
  spec:
    volumeName: gstreamer-{{ .Release.Namespace }}
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
Update
Output of some kubectl commands:
$ kubectl get -n 149 pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gstreamer-claim Bound gstreamer-149 10Gi RWO 177d
$ kubectl get -n 149 pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
gstreamer-149 10Gi RWO Recycle Bound 149/gstreamer-claim 177d
I think it somehow just handles it, because the only thing the pods need to do is connect to that IP.
This is a misleading concept about accessMode, especially with NFS.
The Kubernetes Persistent Volume docs mention that NFS supports all access modes: RWO, ROX and RWX.
However, accessMode is more like a matching criterion, the same as storage size. It is described better in the OpenShift Access Mode documentation:
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.
Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim’s access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO.
Direct matches are always attempted first. The volume’s modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another.
All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches.
And in the next paragraph:
A volume’s AccessModes are descriptors of the volume’s capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.
For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume’s ROX capability. Errors in the provider show up at runtime as mount errors.
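To illustrate the quoted point: to use an NFS-backed PV's ROX capability, the claim has to be mounted read-only in the pod. A minimal sketch (the pod name and image are hypothetical; the claim name is taken from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reader-pod                  # hypothetical name
spec:
  containers:
    - name: reader
      image: busybox                # hypothetical image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: gstreamer-claim
        readOnly: true              # mount the claim read-only to use ROX
```

If the pod writes to /data despite readOnly, the error surfaces only at runtime, exactly as the quote describes.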
Another example is that you can request several AccessModes in a claim, since they are not constraints but matching criteria:
$ cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
> name: exmaple-pvc
> spec:
> accessModes:
> - ReadOnlyMany
> - ReadWriteMany
> - ReadWriteOnce
> resources:
> requests:
> storage: 1Gi
> EOF
Or, following the GKE example:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: exmaple-pvc-rwo-rom
spec:
accessModes:
- ReadOnlyMany
- ReadWriteOnce
resources:
requests:
storage: 1Gi
EOF
persistentvolumeclaim/exmaple-pvc-rwo-rom created
PVC output:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
exmaple-pvc Pending standard 2m18s
exmaple-pvc-rwo-rom Bound pvc-d704d346-42b3-4090-af96-aebeee3053f5 1Gi RWO,ROX standard 6s
persistentvolumeclaim/exmaple-pvc created
exmaple-pvc is stuck in Pending state because the default GKE GCEPersistentDisk does not support ReadWriteMany:
Warning ProvisioningFailed 10s (x5 over 69s) persistentvolume-controller Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadOnlyMany ReadWriteMany ReadWriteOnce]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
The second PVC, exmaple-pvc-rwo-rom, was created however, and you can see it has 2 access modes: RWO, ROX.
In short, accessMode is more like a requirement for a PVC/PV to bind. If an NFS PV that provides all access modes binds with RWO, the requirement is satisfied, but the volume will still behave as RWX, since NFS provides that capability.
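Applied to the original question: the NFS volume happens to work across nodes anyway, but the manifests should still declare the mode that is actually needed, so that claim-to-volume matching reflects the intent. A sketch of the fix, changing only the accessModes values in the PV and PVC from the question:

```yaml
# In the PersistentVolume spec:
accessModes:
  - ReadWriteMany   # declare what the NFS export actually supports

# In the PersistentVolumeClaim spec:
accessModes:
  - ReadWriteMany   # request read-write access from many nodes
```

Everything else in the manifests can stay as it is; only the declared mode changes.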
Hope that clarifies it a bit.
Additionally, you can check other Stack Overflow threads regarding accessMode.