PVC created with no access mode and storage class
I am deploying KeyDB to my cluster with the following YAML:
---
# Source: keydb/templates/cm-utils.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: keydb-utils
  labels:
    helm.sh/chart: keydb-0.8.0
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
    app.kubernetes.io/version: "5.3.3"
    app.kubernetes.io/managed-by: Helm
data:
  server.sh: |
    #!/bin/bash
    set -euxo pipefail
    host="$(hostname)"
    port="6379"
    replicas=()
    for node in {0..2}; do
      if [ "$host" != "keydb-${node}" ]; then
        replicas+=("--replicaof keydb-${node}.keydb ${port}")
      fi
    done
    keydb-server /etc/keydb/redis.conf \
      --active-replica yes \
      --multi-master yes \
      --appendonly no \
      --bind 0.0.0.0 \
      --port "$port" \
      --protected-mode no \
      --server-threads 2 \
      "${replicas[@]}"
---
# Source: keydb/templates/svc.yaml
# Headless service for proper name resolution
apiVersion: v1
kind: Service
metadata:
  name: keydb
  labels:
    helm.sh/chart: keydb-0.8.0
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
    app.kubernetes.io/version: "5.3.3"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: server
      port: 6379
      protocol: TCP
      targetPort: keydb
  selector:
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
---
# Source: keydb/templates/sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: keydb
  labels:
    helm.sh/chart: keydb-0.8.0
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
    app.kubernetes.io/version: "5.3.3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 3
  serviceName: keydb
  selector:
    matchLabels:
      app.kubernetes.io/name: keydb
      app.kubernetes.io/instance: keydb
  template:
    metadata:
      annotations:
        checksum/cm-utils: e0806d2d0698a10e54131bde1119e44c51842191a777c154c308eab52ebb2ec7
      labels:
        helm.sh/chart: keydb-0.8.0
        app.kubernetes.io/name: keydb
        app.kubernetes.io/instance: keydb
        app.kubernetes.io/version: "5.3.3"
        app.kubernetes.io/managed-by: Helm
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - keydb
              topologyKey: kubernetes.io/hostname
      containers:
        - name: keydb
          image: eqalpha/keydb:x86_64_v5.3.3
          imagePullPolicy: IfNotPresent
          command:
            - /utils/server.sh
          ports:
            - name: keydb
              containerPort: 6379
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: keydb
          readinessProbe:
            tcpSocket:
              port: keydb
          resources:
            limits:
              cpu: 200m
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 1Gi
          volumeMounts:
            - name: keydb-data
              mountPath: /data
            - name: utils
              mountPath: /utils
              readOnly: true
      volumes:
        - name: utils
          configMap:
            name: keydb-utils
            defaultMode: 0700
            items:
              - key: server.sh
                path: server.sh
  volumeClaimTemplates:
    - metadata:
        name: keydb-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
        storageClassName: "gp2"
Applying it with `kubectl apply -f deploy.yaml` succeeds without errors:
$ kubectl apply -f deploy.yaml
configmap/keydb-utils created
service/keydb created
statefulset.apps/keydb created
But the pods are not getting scheduled, failing with the following error:
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: "2020-04-24T15:44:39Z"
      message: pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
      reason: Unschedulable
      status: "False"
      type: PodScheduled
  phase: Pending
  qosClass: Burstable
When I check the PVC, it was created without an access mode or storage class:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
keydb-data-keydb-0 Pending 28m
Please help.
Edit: adding the storage class output:
$ kubectl get sc
NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 32d
local-storage kubernetes.io/no-provisioner 10h
No PV was created for this claim.
Scheduling errors often occur when affinity conditions cannot be satisfied. Should you be using pod affinity instead of anti-affinity, or even node affinity? Try node affinity or simpler affinity rules first, to rule affinity out as the cause.
For examples of affinity, see here.
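One concrete thing worth checking in the manifest above: the required anti-affinity rule selects on the label `app: keydb`, but the pod template only carries `app.kubernetes.io/*` labels, so the rule never matches the pods it is presumably meant to spread. A sketch of the rule aligned with the labels the template actually sets (this is an illustrative correction, not a confirmed fix for the Pending PVC) might look like:

```yaml
# Hypothetical adjustment: select on a label the pod template actually sets,
# so the anti-affinity rule spreads the keydb replicas across nodes as intended.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
                - keydb
        topologyKey: kubernetes.io/hostname
```

Note that with required anti-affinity, scheduling fails outright if fewer nodes than replicas are available, so a `preferredDuringSchedulingIgnoredDuringExecution` variant may be the simpler rule to start with when ruling affinity out.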