Shared directory for a Kubernetes Deployment between its replicas
I have a simple Deployment with 2 replicas.
I would like each replica to see the same storage folder (a shared upload folder for the application).
I've been playing with claims and volumes, but haven't made any headway, so I'm asking for quick help / an example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'test-tomcat'
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 3
  template:
    metadata:
      name: 'test-tomcat'
      labels:
        app: test-tomcat
    spec:
      volumes:
      - name: 'data'
        persistentVolumeClaim:
          claimName: claim
      containers:
      - image: 'tomcat:9-alpine'
        volumeMounts:
        - name: 'data'
          mountPath: '/app/data'
        imagePullPolicy: Always
        name: 'tomcat'
        command: ['bin/catalina.sh', 'jpda', 'run']
kind: PersistentVolume
apiVersion: v1
metadata:
  name: volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
First, you need to decide what type of persistent volume to use. Here are a few examples for an on-premises cluster:
HostPath - a local path on the node. If the first pod lands on Node1 and the second on Node2, the two pods will see different storage. To work around this, you can use one of the options below. HostPath example:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: example-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
NFS - a PersistentVolume of this type uses a Network File System. NFS is a distributed file system protocol that lets you mount a remote directory on your server. An NFS server has to be set up before you can use it in Kubernetes; here is an example guide, How To Set Up an NFS Mount on Ubuntu. Example in Kubernetes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
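A claim only binds to this volume if its `storageClassName` and access mode match the volume's; a minimal claim for the NFS volume above might look like this (the claim name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-nfs-claim   # illustrative name
spec:
  storageClassName: slow    # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce         # must be one of the modes the PV offers
  resources:
    requests:
      storage: 3Gi          # must not exceed the PV's capacity
```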
GlusterFS - GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. As with NFS, GlusterFS has to be installed before you can use it in Kubernetes; here is a link with instructions, and one more example. Example in Kubernetes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  annotations:
    pv.beta.kubernetes.io/gid: "590"
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.122.221
  ports:
  - port: 1
- addresses:
  - ip: 192.168.122.222
  ports:
  - port: 1
- addresses:
  - ip: 192.168.122.223
  ports:
  - port: 1
After a PersistentVolume is created, you need to create a PersistentVolumeClaim. A PersistentVolumeClaim is the resource pods use to request a volume from storage. After a PersistentVolumeClaim is created, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. Example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
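Since the goal in the question is a single upload folder shared by every replica, note that the claim (and the PersistentVolume backing it) needs the `ReadWriteMany` access mode whenever replicas can be scheduled to different nodes; `ReadWriteOnce` lets only a single node mount the volume read-write. A sketch of such a claim, assuming an RWX-capable volume such as the NFS or GlusterFS examples (the claim name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-uploads-claim   # illustrative name
spec:
  accessModes:
    - ReadWriteMany   # all replicas may mount the volume read-write
  resources:
    requests:
      storage: 3Gi
```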
As a last step, you need to configure the pod to use the PersistentVolumeClaim. Here is an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'test-tomcat'
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 3
  template:
    metadata:
      name: 'test-tomcat'
      labels:
        app: test-tomcat
    spec:
      volumes:
      - name: 'data'
        persistentVolumeClaim:
          claimName: example-pv-claim # name of the claim should be the same as defined before
      containers:
      - image: 'tomcat:9-alpine'
        volumeMounts:
        - name: 'data'
          mountPath: '/app/data'
        imagePullPolicy: Always
        name: 'tomcat'
        command: ['bin/catalina.sh', 'jpda', 'run']
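Once the volume, the claim, and the Deployment are defined, they can be applied and checked with `kubectl`; a rough sequence (the file names are placeholders for wherever you saved the manifests above):

```shell
# Apply the PersistentVolume, the claim, and the Deployment
kubectl apply -f pv.yaml -f pvc.yaml -f deployment.yaml

# The claim should show STATUS "Bound" once it has matched the volume
kubectl get pv,pvc

# Each replica should see the same files under the mount path
kubectl exec deploy/test-tomcat -- ls /app/data
```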