Kubernetes Storage on bare-metal/private cloud
I just started setting up Kubernetes on 2 nodes (master-minion) on 2 private cloud servers. I have installed it, done the basic configuration, and got it running some simple pods/services from the master to the minion.
My question:
How do I use persistent storage for pods when not running on Google Cloud?
For my first test I got a Ghost blog pod running, but when I tear down the pod the changes are lost. I tried adding volumes to the pod, but can't find any documentation on how this is done when not on GC.
My attempt:
apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
  manifest:
    version: v1beta1
    id: ghost
    containers:
      - name: ghost
        image: ghost
        volumeMounts:
          - name: ghost-persistent-storage
            mountPath: /var/lib/ghost
        ports:
          - hostPort: 8080
            containerPort: 2368
    volumes:
      - name: ghost-persistent-storage
        source:
          emptyDir: {}
I found this: Persistent Installation of MySQL and WordPress on Kubernetes.
Can't figure out how to add the storage (NFS?) to my test installation though.
In the new API (v1beta3), we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!
NFS example:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
GlusterFS example:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs
Hope this helps!
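To make the suggestion concrete, here is a minimal sketch of a pod using the nfs volume type; the server address 10.0.0.5 and the export path /exports/ghost are placeholders for your own NFS server:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ghost-nfs
spec:
  containers:
    - name: ghost
      image: ghost
      volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost   # Ghost keeps its content/db here
  volumes:
    - name: ghost-persistent-storage
      nfs:
        server: 10.0.0.5        # placeholder: your NFS server's address
        path: /exports/ghost    # placeholder: a directory exported by that server
        readOnly: false
```

With this in place the pod's data survives teardown, since it lives on the NFS export rather than in the container filesystem.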
You can try the https://github.com/suquant/glusterd solution.
GlusterFS server in a Kubernetes cluster
The idea is simple: a cluster manager listens to the Kubernetes API and adds each pod's "metadata.name" and IP address to /etc/hosts.
1. Create the pods
gluster1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster1
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st01
  containers:
    - name: glusterfs-server
      image: suquant/glusterd:3.6.kube
      imagePullPolicy: Always
      command:
        - /kubernetes-glusterd
      args:
        - --namespace
        - mynamespace
        - --labels
        - component=glusterfs-storage
      ports:
        - containerPort: 24007
        - containerPort: 24008
        - containerPort: 49152
        - containerPort: 38465
        - containerPort: 38466
        - containerPort: 38467
        - containerPort: 2049
        - containerPort: 111
        - containerPort: 111
          protocol: UDP
      volumeMounts:
        - name: brick
          mountPath: /mnt/brick
        - name: fuse
          mountPath: /dev/fuse
        - name: data
          mountPath: /var/lib/glusterd
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN
            - MKNOD
  volumes:
    - name: brick
      hostPath:
        path: /opt/var/lib/brick1
    - name: fuse
      hostPath:
        path: /dev/fuse
    - name: data
      emptyDir: {}
gluster2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster2
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st02
  containers:
    - name: glusterfs-server
      image: suquant/glusterd:3.6.kube
      imagePullPolicy: Always
      command:
        - /kubernetes-glusterd
      args:
        - --namespace
        - mynamespace
        - --labels
        - component=glusterfs-storage
      ports:
        - containerPort: 24007
        - containerPort: 24008
        - containerPort: 49152
        - containerPort: 38465
        - containerPort: 38466
        - containerPort: 38467
        - containerPort: 2049
        - containerPort: 111
        - containerPort: 111
          protocol: UDP
      volumeMounts:
        - name: brick
          mountPath: /mnt/brick
        - name: fuse
          mountPath: /dev/fuse
        - name: data
          mountPath: /var/lib/glusterd
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN
            - MKNOD
  volumes:
    - name: brick
      hostPath:
        path: /opt/var/lib/brick1
    - name: fuse
      hostPath:
        path: /dev/fuse
    - name: data
      emptyDir: {}
2. Run the pods
kubectl create -f gluster1.yaml
kubectl create -f gluster2.yaml
3. Manage the GlusterFS servers
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer probe gluster2"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer status"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume create media replica 2 transport tcp,rdma gluster1:/mnt/brick gluster2:/mnt/brick force"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume start media"
4. Usage
gluster-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: glusterfs-storage
  namespace: mynamespace
spec:
  ports:
    - name: glusterfs-api
      port: 24007
      targetPort: 24007
    - name: glusterfs-infiniband
      port: 24008
      targetPort: 24008
    - name: glusterfs-brick0
      port: 49152
      targetPort: 49152
    - name: glusterfs-nfs-0
      port: 38465
      targetPort: 38465
    - name: glusterfs-nfs-1
      port: 38466
      targetPort: 38466
    - name: glusterfs-nfs-2
      port: 38467
      targetPort: 38467
    - name: nfs-rpc
      port: 111
      targetPort: 111
    - name: nfs-rpc-udp
      port: 111
      targetPort: 111
      protocol: UDP
    - name: nfs-portmap
      port: 2049
      targetPort: 2049
  selector:
    component: glusterfs-storage
Run the service
kubectl create -f gluster-svc.yaml
After that, you can mount it over NFS in your cluster via the hostname "glusterfs-storage.mynamespace".
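For example, a consuming pod could mount the "media" volume created above through the service hostname using the nfs volume type (a sketch; the consumer image and mount path are assumptions, and Gluster's built-in NFS server exports each volume under /<volume-name>):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: media-consumer
  namespace: mynamespace
spec:
  containers:
    - name: app
      image: ghost                             # assumed example workload
      volumeMounts:
        - name: media
          mountPath: /var/lib/ghost            # assumed mount point inside the container
  volumes:
    - name: media
      nfs:
        server: glusterfs-storage.mynamespace  # the Service hostname from gluster-svc.yaml
        path: /media                           # the gluster volume exported over NFS
```

Because the pod talks to the Service name rather than a pod IP, the mount keeps working even if the GlusterFS pods are rescheduled.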