Is it possible to mount a shared Azure disk in Azure Kubernetes to multiple PODs/Nodes?
I want to mount an Azure shared disk to multiple deployments/nodes, based on this:
https://docs.microsoft.com/en-us/azure/virtual-machines/disks-shared
So I created a shared disk in the Azure portal, but when I try to mount it to a deployment in Kubernetes I get an error:
"Multi-Attach error for volume "azuredisk" Volume is already used by pod(s)..."
Is it possible to use a shared disk in Kubernetes? If so, how?
Thanks for any hints.
Yes, you can; the capability is generally available (GA).
An Azure shared disk can be mounted as ReadWriteMany, which means you can mount it to multiple nodes and pods. It requires the Azure Disk CSI driver, and the caveat is that currently only raw block volumes are supported, so the application is responsible for managing writes, reads, locking, caching, mounting, and fencing on the shared disk, which is exposed as a raw block device. This means you mount the raw block device (the disk) into the pod container as a volumeDevice rather than a volumeMount.
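For illustration, this is the difference in the pod spec (the volume name and paths are placeholders; the full deployment further below uses the volumeDevices form):
# filesystem mount - not what shared disks use (per the note above):
#   volumeMounts:
#   - name: azuredisk
#     mountPath: /mnt/azuredisk
# raw block device - required for shared Azure disks:
volumeDevices:
- name: azuredisk
  devicePath: /dev/sdx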
The documentation examples mainly show how to create a StorageClass to dynamically provision a shared Azure disk, but I have also created one statically and mounted it to multiple pods on different nodes.
Dynamically provision a shared Azure disk
- Create the StorageClass and PVC:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS  # Currently shared disk only available with premium SSD
  maxShares: "2"
  cachingMode: None  # ReadOnly cache is not available for premium SSD with maxShares>1
reclaimPolicy: Delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Gi  # minimum size of shared disk is 256GB (P15)
  volumeMode: Block
  storageClassName: managed-csi
- Create a deployment with 2 replicas and specify volumeDevices and devicePath in the spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: deployment-azuredisk
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: deployment-azuredisk
    spec:
      containers:
        - name: deployment-azuredisk
          image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
          volumeDevices:
            - name: azuredisk
              devicePath: /dev/sdx
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk
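A minimal way to verify the manifests above worked (resource names match the examples; the file name is illustrative) is to check that the PVC is bound, see which nodes the replicas landed on, and confirm each pod sees the raw device:
kubectl apply -f shared-disk-dynamic.yaml   # StorageClass, PVC and Deployment from above
kubectl get pvc pvc-azuredisk               # expect STATUS: Bound
kubectl get pods -l app=nginx -o wide       # shows which node each replica runs on
kubectl exec <pod-name> -- ls -l /dev/sdx   # the shared disk appears as a block device node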
Use a statically provisioned Azure shared disk
Use an Azure shared disk that has already been provisioned through ARM, the Azure portal, or the Azure CLI.
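For example, one way to create such a disk with the Azure CLI looks like this (resource group and disk name are placeholders; maxShares must cover the number of nodes you plan to attach):
az disk create \
  --resource-group <group> \
  --name <disk-name> \
  --size-gb 256 \
  --sku Premium_LRS \
  --max-shares 2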
- Define a PersistentVolume (PV) that references the diskURI and diskName:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azuredisk-shared-block
spec:
  capacity:
    storage: "256Gi"  # 256 is the minimum size allowed for shared disk
  volumeMode: Block  # PV and PVC volumeMode must be 'Block'
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    kind: Managed
    diskURI: /subscriptions/<subscription>/resourcegroups/<group>/providers/Microsoft.Compute/disks/<disk-name>
    diskName: <disk-name>
    cachingMode: None  # Caching mode must be 'None'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-managed
spec:
  resources:
    requests:
      storage: 256Gi
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  volumeName: azuredisk-shared-block  # The name of the PV (above)
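Applying the PV and PVC above and confirming they bind can be done with something like (file name is illustrative):
kubectl apply -f shared-disk-static.yaml
kubectl get pv azuredisk-shared-block   # expect STATUS: Bound once claimed
kubectl get pvc pvc-azuredisk-managed   # expect STATUS: Bound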
Mounting this PVC is the same for both dynamically and statically provisioned shared disks. Refer to the deployment above.