Binding statefulset to local persistent volumes - volume node affinity conflict error
I have a 3-node Kubernetes cluster; the hostnames are host_1, host_2 and host_3.
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
host_1    Ready     master    134d      v1.10.1
host_2    Ready     <none>    134d      v1.10.1
host_3    Ready     <none>    134d      v1.10.1
I have defined 3 local persistent volumes of size 100M, each mapped to a local directory on one of the nodes. I applied the following descriptor 3 times, where <hostname> is one of host_1, host_2, host_3:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume-<hostname>
spec:
  capacity:
    storage: 100M
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /opt/jnetx/volumes/test-volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <hostname>
After applying the three yamls, I have the following:
$ kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
test-volume-host_1   100M       RWO            Delete           Available           local-storage            58m
test-volume-host_2   100M       RWO            Delete           Available           local-storage            58m
test-volume-host_3   100M       RWO            Delete           Available           local-storage            58m
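The local-storage StorageClass these PVs reference is created separately and not shown above; for statically created local volumes it is a plain no-provisioner class, roughly:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
# local volumes are not dynamically provisioned; the PVs above are created by hand
provisioner: kubernetes.io/no-provisioner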
Now, I have a very simple container that writes to a file. The file should live on the local persistent volume. I deploy it as a statefulset with 1 replica and map the volume via the statefulset's volumeClaimTemplates:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: filewriter
spec:
  serviceName: filewriter
  ...
  replicas: 1
  template:
    spec:
      containers:
      - name: filewriter
        ...
        volumeMounts:
        - mountPath: /test/data
          name: fw-pv-claim
  volumeClaimTemplates:
  - metadata:
      name: fw-pv-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage
      resources:
        requests:
          storage: 100M
The volume claim seems to have been created correctly and bound to the PV on the first host:
$ kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                              STORAGECLASS    REASON   AGE
test-volume-host_1   100M       RWO            Delete           Bound       default/fw-pv-claim-filewriter-0   local-storage            1m
test-volume-host_2   100M       RWO            Delete           Available                                      local-storage            1h
test-volume-host_3   100M       RWO            Delete           Available                                      local-storage            1h
However, the pod hangs in the Pending state:
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
filewriter-0   0/1       Pending   0          4s
If we describe the pod, we can see the following error:
$ kubectl describe pod filewriter-0
Name:           filewriter-0
...
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  2s (x8 over 1m)  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
Can you help me figure out what is wrong? Why can't it just create the pod?
It seems that the one node where the PV is available has a taint that your StatefulSet does not tolerate.
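You can check the taints on host_1 (the node the claim got bound to) and, if running the pod on that node is actually intended, let the pod template tolerate them. A sketch, assuming it is the default master taint node-role.kubernetes.io/master:NoSchedule; adjust the key and effect to whatever the node actually reports:

$ kubectl describe node host_1 | grep Taints

# added under the StatefulSet pod template (spec.template.spec)
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule

Otherwise the claim needs to end up bound to a PV that sits on one of the untainted nodes.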
I had a case very similar to the one above and observed the same symptom (volume node affinity conflict). In my case the problem was that I had 2 volumes attached to 2 different nodes, but was trying to use both of them from 1 pod.
I detected this by using kubectl describe pvc name-of-pvc and noting the selected-node annotation. Once I set the pod to use volumes that were both located on a single node, I no longer had the issue.
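For example, with the claim name from the question above; the annotation key is volume.kubernetes.io/selected-node, which the scheduler writes onto the claim once it has picked a node for it:

$ kubectl describe pvc fw-pv-claim-filewriter-0
# or pull just the annotation:
$ kubectl get pvc fw-pv-claim-filewriter-0 \
    -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'

If two claims used by one pod end up pointing at different nodes (as mine did), the pod cannot be scheduled anywhere.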