Why is the Kubernetes HPA not scaling down pods?
I'm using Prometheus and the Prometheus adapter to drive the HPA with a custom metric (memory_usage_bytes). I don't understand why an m is appended to the target value, or why the HPA does not scale the pods down even when memory is not being used.
Am I missing something?
HPA manifest:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: pros
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pros
  maxReplicas: 3
  metrics:
  - type: Pods
    pods:
      metricName: memory_usage_bytes
      targetAverageValue: 33000000
kubectl get hpa output:
NAME   REFERENCE         TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
pros   Deployment/pros   26781013333m/33M   1         3         3          19m
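(A note on the m in the TARGETS column: Kubernetes prints fractional resource quantities in milli-units, so 26781013333m means 26781013333 / 1000 bytes, i.e. roughly 26.78M of average usage against the 33M target. A minimal sketch of the conversion:)

```python
# Kubernetes renders fractional quantities with an "m" (milli) suffix,
# so 26781013333m in the TARGETS column is 26781013333 / 1000 bytes on average.
raw = "26781013333m"
average_bytes = int(raw.rstrip("m")) / 1000
print(average_bytes)        # ~26781013.3 bytes per pod
print(average_bytes / 1e6)  # ~26.78 "M", compared against the 33M target
```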
custom.metrics.k8s.io response:
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/memory_usage_bytes"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "pros-6c9b9c5c59-57vmx",
        "apiVersion": "/v1"
      },
      "metricName": "memory_usage_bytes",
      "timestamp": "2019-07-13T12:03:10Z",
      "value": "34947072",
      "selector": null
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "pros-6c9b9c5c59-957zv",
        "apiVersion": "/v1"
      },
      "metricName": "memory_usage_bytes",
      "timestamp": "2019-07-13T12:03:10Z",
      "value": "19591168",
      "selector": null
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "pros-6c9b9c5c59-nczqq",
        "apiVersion": "/v1"
      },
      "metricName": "memory_usage_bytes",
      "timestamp": "2019-07-13T12:03:10Z",
      "value": "19615744",
      "selector": null
    }
  ]
}
There are at least two good reasons why it might not be working:
- As you can see in the documentation:

  The current stable version, which only includes support for CPU autoscaling, can be found in the autoscaling/v1 API version. The beta version, which includes support for scaling on memory and custom metrics, can be found in autoscaling/v2beta2.

  But you are using apiVersion: autoscaling/v2beta1 in your HorizontalPodAutoscaler definition.
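Assuming the same memory-based custom metric, a sketch of that HPA ported to autoscaling/v2beta2 might look like this; note that the Pods metric is spelled differently there, with metric.name and target.averageValue instead of metricName and targetAverageValue:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: pros
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pros
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Pods
    pods:
      metric:
        name: memory_usage_bytes
      target:
        type: AverageValue
        averageValue: 33000000
```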
- If you sum up the memory used by all 3 currently running pods in the custom.metrics.k8s.io response, the workload still does not fit into only 2 pods with the memory target set to 33000000. Note that the first pod already exceeds the 33M limit on its own, and the combined memory consumption of the other 2 pods (19591168 + 19615744 = 39206912) is still too high to fit within a single pod's 33000000 limit.
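The replica math can be checked directly. The HPA's core formula is desiredReplicas = ceil(currentReplicas * currentMetricValue / targetValue); plugging in the three per-pod values from the metrics API response above shows why the autoscaler holds at 3 replicas (a sketch using those numbers):

```python
import math

# Per-pod memory_usage_bytes values from the custom metrics API response
usage = [34947072, 19591168, 19615744]
target = 33000000          # targetAverageValue from the HPA spec
current_replicas = len(usage)

average = sum(usage) / current_replicas
# HPA scaling formula: desiredReplicas = ceil(currentReplicas * currentValue / targetValue)
desired = math.ceil(current_replicas * average / target)

print(average)   # ~24717994.67 bytes per pod, below the 33M target...
print(desired)   # ...but ceil(74153984 / 33000000) = 3, so no scale-down
```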