How to run Dgraph on a bare-metal Kubernetes cluster

I am trying to set up Dgraph as an HA cluster, but it won't deploy if no volumes are available.

Applying the provided config directly on a bare-metal cluster does not work.

$ kubectl get pod --namespace dgraph
dgraph-alpha-0                      0/1     Pending     0          112s
dgraph-ratel-7459974489-ggnql       1/1     Running     0          112s
dgraph-zero-0                       0/1     Pending     0          112s


$ kubectl describe pod/dgraph-alpha-0 --namespace dgraph
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims

Has anyone else run into this? I've been stuck on it for days and can't find a way around it. How can I get Dgraph to use the cluster's local storage?

Thanks

I found a working solution myself.

I had to manually create the PVs and PVCs so that Dgraph could use them during deployment.

This is the config I used to create the required StorageClass, PVs, and PVCs:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

When Dgraph is deployed, it latches onto the PVCs (note that the PVC-to-PV pairing is arbitrary, since any PV matching the storage class and requested size can satisfy a claim):

$ kubectl get pvc -n dgraph -o wide
NAME                            STATUS   VOLUME                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
datadir-dgraph-dgraph-alpha-0   Bound    datadir-dgraph-dgraph-zero-2    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-1   Bound    datadir-dgraph-dgraph-alpha-0   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-2   Bound    datadir-dgraph-dgraph-zero-0    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-0    Bound    datadir-dgraph-dgraph-alpha-1   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-1    Bound    datadir-dgraph-dgraph-alpha-2   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-2    Bound    datadir-dgraph-dgraph-zero-1    8Gi        RWO            local          6h40m   Filesystem
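
One caveat with the hostPath approach above, as a side note: hostPath PVs are not pinned to any node, so on a multi-node cluster a pod may be scheduled onto a node where the backing directory is empty. If that matters in your setup, Kubernetes `local` volumes with a required nodeAffinity can be used instead. A sketch for a single volume, assuming a hypothetical node named `node-1`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/dgraph/alpha-0
  nodeAffinity:           # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1  # hypothetical node name, replace with yours
```

With this, the scheduler only places pods using the volume onto the node that actually holds the data.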

Dgraph's config assumes a Kubernetes cluster with a working volume plugin (provisioner). On managed Kubernetes offerings (AWS, GKE, DO, etc.) this step is already taken care of by the provider.

I think the goal should be parity with the cloud providers, i.e. provisioning should be dynamic (as opposed to OP's own answer, which works but is statically provisioned - k8s docs).

When running bare-metal, you have to configure a volume plugin yourself in order to dynamically provision volumes (k8s docs) and thus use StatefulSets, PersistentVolumeClaims, etc. Thankfully there are many provisioners available (k8s docs). For out-of-the-box dynamic provisioning support, any item on that list with 'Internal Provisioner' checked will do.

So while there are many solutions to this problem, I ended up going with NFS. To get dynamic provisioning, I had to use an external provisioner. Thankfully, this is as simple as installing a Helm chart.
  1. Install NFS (original guide) on the master node.

SSH into it via a terminal and run:

sudo apt update
sudo apt install nfs-kernel-server nfs-common
  2. Create the directory Kubernetes will use and change its ownership:
sudo mkdir /var/nfs/kubernetes -p
sudo chown nobody:nogroup /var/nfs/kubernetes
  3. Configure NFS.

Open the file /etc/exports:

sudo nano /etc/exports

Add the following line at the bottom:

/var/nfs/kubernetes  client_ip(rw,sync,no_subtree_check)

Replace client_ip with your master node's IP. In my case, this was the DHCP lease my router's DHCP server assigned to the machine running the master node (192.168.1.7).
  4. Restart NFS to apply the changes.
sudo systemctl restart nfs-kernel-server
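
As an optional sanity check (not in the original steps), you can confirm the share is actually exported before moving on:

```shell
# list active exports together with their options
sudo exportfs -v
# query the NFS server the way a client would; use your server's IP
showmount -e 192.168.1.7
```

The Kubernetes directory from step 2 should appear in both outputs.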
  5. With NFS set up on the master and assuming Helm is present, installing the provisioner is as simple as running:
helm install nfs-provisioner --set nfs.server=XXX.XXX.XXX.XXX --set nfs.path=/var/nfs/kubernetes --set storageClass.defaultClass=true stable/nfs-client-provisioner

Replace the nfs.server flag with the appropriate IP/hostname of the master node/NFS server.

Note that the storageClass.defaultClass flag must be true so that Kubernetes uses the plugin (provisioner) to create volumes by default.

The nfs.path flag is the same path as the one created in step 2.

If Helm complains that it cannot find the chart, run helm repo add stable https://kubernetes-charts.storage.googleapis.com/
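
Once the chart is installed, it is worth verifying that the provisioner pod is running and that its StorageClass is marked as the default (the exact pod and class names come from the chart's defaults, so yours may differ):

```shell
# the nfs-client-provisioner pod should be in Running state
kubectl get pods
# exactly one class should be annotated "(default)"
kubectl get storageclass
```

If no class shows "(default)", re-check the storageClass.defaultClass=true flag from step 5.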

  6. After successfully completing the previous steps, simply proceed to install the Dgraph config as described in their docs and enjoy an out-of-the-box working Dgraph deployment on a bare-metal cluster with dynamic provisioning.

Single server:

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single/dgraph-single.yaml

HA cluster:

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
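
As a quick smoke test of dynamic provisioning itself, independent of Dgraph, a throwaway PVC with no explicit storageClassName should get a volume provisioned and bound automatically via the default class. A minimal sketch (test-claim is a hypothetical name; delete the claim after testing):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

After applying it, `kubectl get pvc test-claim` should show STATUS Bound within a few seconds; `kubectl delete pvc test-claim` cleans up.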