How to use an existing Compute Engine disk as a persistent volume in GKE?
I may have to rebuild the GKE cluster, but the Compute Engine disks will not be deleted and need to be reused as persistent volumes for the pods. I haven't found documentation explaining how to link an existing GCP Compute Engine disk as a persistent volume for pods.
Is it possible to use an existing GCP Compute Engine disk with GKE storage classes and persistent volumes?
Yes, it is possible to reuse a Persistent Disk as a Persistent Volume in another cluster, but there is one limitation:
The persistent disk must be in the same zone as the cluster nodes.
If the PD is in a different zone, the cluster will not find the disk.
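You can verify that the zones match before going further; a minimal sketch, assuming the disk created below and a cluster hypothetically named cluster-3:
$ gcloud compute disks describe pd-name --zone europe-west3-b --format="value(zone)"
$ gcloud container clusters describe cluster-3 --zone europe-west3-b --format="value(locations)"
The first command prints the disk's zone, the second prints the zones where the cluster's nodes run.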
In the documentation Using preexisting persistent disks as PersistentVolumes you can find information and examples of how to reuse persistent disks. If you haven't created a Persistent Disk yet, you can create one based on the Creating and attaching a disk documentation. For this test, I used the following disk:
gcloud compute disks create pd-name \
--size 10G \
--type pd-standard \
--zone europe-west3-b
If you create a PD smaller than 200G, you will get the warning below; whether that matters depends on your needs. In the europe-west3-b zone, the pd-standard type can be provisioned between 10GB and 65536GB.
You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Keep in mind that you may get different Persistent Disk types in different zones. For details, check the Disk Types documentation or run $ gcloud compute disk-types list.
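For example, to narrow the list to the zone used in this test (the --filter flag is the standard filter mechanism for gcloud list commands):
$ gcloud compute disk-types list --filter="zone:europe-west3-b"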
Once you have the Persistent Disk, you can create a PersistentVolume and a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  storageClassName: "test"
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: pv-claim
  gcePersistentDisk:
    pdName: pd-name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: "test"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G
---
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
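Save the three manifests to a file and apply them (the file name pv.yaml is just an example):
$ kubectl apply -f pv.yaml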
Test:
$ kubectl get pv,pvc,pod
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv   10G        RWO            Retain           Bound    default/pv-claim   test                    22s

NAME                             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pv       10G        RWO            test           22s

NAME              READY   STATUS    RESTARTS   AGE
pod/task-pv-pod   1/1     Running   0          21s
Write some data to the disk:
$ kubectl exec -ti task-pv-pod -- bin/bash
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# echo "This is test message from Nginx pod" >> message.txt
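You can confirm the write non-interactively as well; this sketch reuses the pod and mount path from above and should print the message that was just written:
$ kubectl exec task-pv-pod -- cat /usr/share/nginx/html/message.txt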
Now I delete all of the previous resources: pv, pvc and pod, as shown below.
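A sketch of the cleanup, using the names from the manifests above:
$ kubectl delete pod task-pv-pod
$ kubectl delete pvc pv-claim
$ kubectl delete pv pv
Because the PV uses the Retain reclaim policy, deleting these objects does not remove the data on the underlying disk.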
$ kubectl get pv,pvc,pod
No resources found
Now, if I recreate the pv and pvc, and make a small change to the pod, for example switching to busybox:
containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    volumeMounts:
      - mountPath: "/usr/data"
        name: task-pv-storage
It will be bound again:
$ kubectl get pv,pvc,po
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv   10G        RWO            Retain           Bound    default/pv-claim                           43m

NAME                             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pv       10G        RWO                           43m

NAME          READY   STATUS    RESTARTS   AGE
pod/busybox   1/1     Running   0          3m43s
And in the busybox pod I will be able to find message.txt:
$ kubectl exec -ti busybox -- bin/sh
/ # cd /usr/data
/usr/data # ls
lost+found message.txt
/usr/data # cat message.txt
This is test message from Nginx pod
As additional information, you will not be able to use the disk in two clusters at the same time; if you try, you will get the error below:
AttachVolume.Attach failed for volume "pv" : googleapi: Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE - The disk resource 'projects/<myproject>/zones/europe-west3-b/disks/pd-name' is already being used by 'projects/<myproject>/zones/europe-west3-b/instances/gke-cluster-3-default-pool-bb545f05-t5hc'
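To see which instance currently holds the disk before attaching it elsewhere, you can inspect the disk's users field (a sketch with the disk from this test):
$ gcloud compute disks describe pd-name --zone europe-west3-b --format="value(users)"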