unable to deploy EFK stack on kubernetes (using kubespray)

I am trying to deploy an EFK stack on a production Kubernetes cluster (installed with kubespray). We have 3 nodes, 1 master + 2 workers. I need to run elasticsearch as a StatefulSet and use a local folder on the master node to store the logs (local storage for persistence). My configuration is:

kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging

---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  namespace: kube-logging
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
  namespace: kube-logging
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/elastic
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
            limits:
              cpu: 1000m
              memory: 2Gi
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 5Gi
---

So this is my configuration, but when I apply it, one of the two Elasticsearch pods stays in Pending state. When I run kubectl describe on this pod, this is the error I get: "1 node(s) didn't find available persistent volumes to bind"

Is my configuration correct? Do I have to use a PV + StorageClass + volumeClaimTemplates? Thank you in advance.

These are my outputs:

[root@node1 nex]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS    REASON   AGE
my-pv   5Gi        RWO            Retain           Bound    kube-logging/data-es-cluster-0   local-storage            24m
[root@node1 nex]# kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
data-es-cluster-0   Bound     my-pv    5Gi        RWO            local-storage   24m
data-es-cluster-1   Pending                                      local-storage   23m
[root@node1 nex]# kubectl describe pvc data-es-cluster-0
Name:          data-es-cluster-0
Namespace:     kube-logging
StorageClass:  local-storage
Status:        Bound
Volume:        my-pv
Labels:        app=elasticsearch
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    es-cluster-0
Events:
  Type    Reason                Age   From                         Message
  ----    ------                ----  ----                         -------
  Normal  WaitForFirstConsumer  24m   persistentvolume-controller  waiting for first consumer to be created before binding
[root@node1 nex]# kubectl describe pvc data-es-cluster-1
Name:          data-es-cluster-1
Namespace:     kube-logging
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app=elasticsearch
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    es-cluster-1
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  WaitForFirstConsumer  4m12s (x82 over 24m)  persistentvolume-controller  waiting for first consumer to be created before binding
[root@node1 nex]#


In addition to what @Arghya Sadhu already suggested in his answer, I would like to highlight one more thing about your current setup.

If you are fine with your Elasticsearch Pods being scheduled only on one particular node (in your case the master node), you can still use the local volume type. Don't confuse it, however, with hostPath. I noticed in your PV definition that you used the hostPath key, so chances are you're not completely aware of the differences between these two concepts. Although they are quite similar, the local type has greater capabilities and some undeniable advantages over hostPath.

As you can read in the documentation:

A local volume represents a mounted local storage device such as a disk, partition or directory.

So it means that apart from a specific directory you can also mount a local disk or partition (/dev/sdb, /dev/sdb5, etc.). It could be, for example, an LVM partition with a strictly defined capacity. Keep in mind that when mounting a local directory you cannot enforce the capacity that can actually be used, so even if you define 5Gi, logs can keep being written to your local directory after that value is exceeded. That is not the case with a logical volume, as you can define its capacity and make sure it won't use more disk space than you give it.
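A minimal sketch of what preparing such an LVM-backed volume on the node could look like (the volume group name vg0 is an assumption, and the mount point has to match the path you later put in the PV):

lvcreate -L 5G -n es-data vg0           # carve a 5G logical volume out of an existing volume group
mkfs.ext4 /dev/vg0/es-data              # put a filesystem on it
mkdir -p /var/tmp/test                  # mount point the PV will point to
mount /dev/vg0/es-data /var/tmp/test    # from now on the capacity is enforced by the LV size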

The second difference is:

Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.

In this case it is you who defines the node affinity on the PersistentVolume, so any Pod (it can be a Pod managed by your StatefulSet) that subsequently uses the local-storage storage class and the corresponding PersistentVolume will be automatically scheduled on the right node.

As you can read further, nodeAffinity is actually a required field of such a PV:

PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.

As I understand it, your kubernetes cluster is set up locally/on-premise. In that case NFS could be a right choice.

If you were using some cloud environment, then you could use the persistent storage offered by your particular cloud provider, e.g. GCEPersistentDisk or AWSElasticBlockStore. You can find the full list of persistent volume types currently supported by kubernetes here.
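For example, a dynamically provisioned StorageClass backed by GCE persistent disks could look roughly like the sketch below (the name fast-ssd is only an example; on AWS the kubernetes.io/aws-ebs provisioner would be used instead):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                      # example name
provisioner: kubernetes.io/gce-pd     # in-tree GCE persistent disk provisioner
parameters:
  type: pd-ssd                        # SSD-backed disk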

So again, if you care about node-level redundancy in your StatefulSet and you want your 2 Elasticsearch Pods to always be scheduled on different nodes, as @Arghya Sadhu already suggested, use NFS or some other non-local storage.
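In case you go with NFS, the PV simply points at the NFS export instead of a local path. A minimal sketch, assuming an NFS server reachable at 10.0.0.10 that exports /exports/es-data (both values are placeholders for your environment):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany                  # NFS can be mounted by Pods on different nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10              # NFS server address (placeholder)
    path: /exports/es-data         # exported directory (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
  namespace: kube-logging
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""             # empty string: bind statically to the PV above, no dynamic provisioning
  resources:
    requests:
      storage: 10Gi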

However, if you don't care about node-level redundancy and you are totally fine with the fact that both your Elasticsearch Pods run on the same node (the master node in your case), please follow me :)

As @Arghya Sadhu rightly pointed out:

Even if a PV which is already bound to a PVC have spare capacity it can not be again bound to another PVC because it's one to one mapping between PV and PVC.

Although there is always a one-to-one mapping between a PV and a PVC, it doesn't mean you cannot use a single PVC in multiple Pods.

Note that in your StatefulSet example you used volumeClaimTemplates, which basically means that each time a new Pod managed by your StatefulSet is created, a new corresponding PersistentVolumeClaim is also created based on this template. So if you have e.g. a 10Gi PersistentVolume defined, no matter whether you request all 10Gi in your claim or only half of it, only the first PVC will be successfully bound to your PV.

But instead of using volumeClaimTemplates and creating a separate PVC for every stateful Pod, you can make them all use a single, manually defined PVC. Please take a look at the example below:

The first thing we need is a storage class. It looks very similar to the one from your example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

What differs this setup from yours is the PV definition. Instead of hostPath, we use a local volume here:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /var/tmp/test ### path on your master node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - your-master-node-name

Note that apart from defining the local path, we also defined a nodeAffinity rule which makes sure that all Pods that get this particular PV will be automatically scheduled on our master node.

Then we apply the PVC manually:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage

This PVC can now be used by all (2 in your example) Pods managed by the StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: mypd
          mountPath: /usr/share/nginx/html
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim

Note that in the above example we no longer use volumeClaimTemplates, but a single PersistentVolumeClaim that can be used by all of our Pods. The Pods are still unique as they are managed by a StatefulSet, but instead of using unique PVCs they all share a common one. Thanks to this approach both Pods can write their logs to a single volume at the same time.

In my example I used the nginx server to make it as easy as possible to replicate for everyone who wants to quickly try it out, but I believe you can easily adjust it to your own needs.
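For instance, adapted to your es-cluster StatefulSet it could look roughly like the sketch below (untested; the init containers from your original manifest are omitted for brevity and would stay as they are, the host lists are trimmed to your 2 replicas, and node.max_local_storage_nodes is raised because in Elasticsearch 7.x two nodes otherwise refuse to share one data path, a setup that is allowed but generally discouraged):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1"
        - name: node.max_local_storage_nodes   # both ES nodes share one data path (see the note above)
          value: "2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      volumes:
      - name: data                             # single shared claim instead of volumeClaimTemplates
        persistentVolumeClaim:
          claimName: myclaim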