Can anyone explain this Kubernetes HPA behavior?
So this is happening on EKS, K8s v1.15. You can see the API versions in the describe output below. The millicpu hovers between 80 and 120... which doesn't match the HPA's replica count at all...
Here is the YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: sequencer
  namespace: djin-content
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sequencer
  metrics:
  - type: Pods
    pods:
      metricName: cpu_usage
      targetAverageValue: 500
And here is the kubectl describe output:
[root@ip-10-150-53-173 ~]# kubectl describe hpa -n djin-content
Name: sequencer
Namespace: djin-content
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"sequencer","namespace":"djin-con...
CreationTimestamp: Wed, 05 Aug 2020 20:40:37 +0000
Reference: Deployment/sequencer
Metrics: ( current / target )
"cpu_usage" on pods: 122m / 500
Min replicas: 1
Max replicas: 10
Deployment pods: 7 current / 7 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 4
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric cpu_usage
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 34m horizontal-pod-autoscaler New size: 10; reason: pods metric cpu_usage above target
Normal SuccessfulRescale 15m (x2 over 34m) horizontal-pod-autoscaler New size: 6; reason: pods metric cpu_usage above target
Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 5; reason: All metrics below target
Normal SuccessfulRescale 9m51s (x2 over 23m) horizontal-pod-autoscaler New size: 3; reason: All metrics below target
Normal SuccessfulRescale 5m (x2 over 16m) horizontal-pod-autoscaler New size: 4; reason: pods metric cpu_usage above target
Normal SuccessfulRescale 4m45s (x2 over 15m) horizontal-pod-autoscaler New size: 5; reason: pods metric cpu_usage above target
Normal SuccessfulRescale 4m30s horizontal-pod-autoscaler New size: 7; reason: pods metric cpu_usage above target
The custom metrics API is up and populated correctly/frequently. The deployment targeting works perfectly... I have walked through the entire k8s codebase for this API and the replica calculation, and this makes no sense...
The metrics don't seem to match: you have 122m (millicores) versus a raw target of 500.

"cpu_usage" on pods: 122m / 500

You didn't specify what computes the custom metric; it may be adding an extra zero to 122m, effectively making the comparison 1220 / 500.
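The unit mismatch matters because Kubernetes quantities with an "m" suffix are thousandths, so "500" and "500m" differ by a factor of 1000. A minimal sketch of that conversion (handling only the plain and milli forms relevant here, not the full Kubernetes quantity grammar):

```python
def parse_quantity(s: str) -> float:
    # Minimal sketch: only the plain and milli ("m") suffix forms.
    # The real Kubernetes Quantity type supports many more suffixes (Ki, Mi, k, M, ...).
    if s.endswith("m"):
        return float(s[:-1]) / 1000.0  # millis: thousandths of a unit
    return float(s)

print(parse_quantity("122m"))  # 0.122 cores
print(parse_quantity("500"))   # 500.0 -- 1000x larger than "500m" (0.5)
```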
(I'm assuming cpu_usage is a custom metric, since the regular metrics-server metric is just cpu.) But you could try:

targetAverageValue: 500m
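To see why the units drive the replica count, here is a minimal sketch of the replica formula from the HPA documentation (desired = ceil(currentReplicas * currentMetric / targetMetric)), assuming the default 0.1 tolerance and the min/max bounds from the manifest above:

```python
import math

def desired_replicas(current, metric, target, min_r=1, max_r=10, tolerance=0.1):
    # Core HPA formula: desired = ceil(currentReplicas * currentMetric / targetMetric).
    # tolerance=0.1 mirrors the controller-manager default
    # (--horizontal-pod-autoscaler-tolerance).
    ratio = metric / target
    if abs(ratio - 1.0) <= tolerance:
        return current  # within tolerance: no rescale
    return max(min_r, min(max_r, math.ceil(current * ratio)))

# If the adapter really reports 0.122 cores against a raw target of 500,
# the HPA should collapse to minReplicas:
print(desired_replicas(7, 0.122, 500))  # -> 1

# But if an extra zero makes it 1220 vs 500, it scales up hard:
print(desired_replicas(7, 1220, 500))   # -> 10 (ceil(17.08) capped at maxReplicas)
```

The observed flapping between scale-up and scale-down events is consistent with the reported metric sitting near the effective target boundary rather than far below it.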
A more common way to do HPA on CPU usage is to use the CPU utilization percentage from metrics-server:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
...

(Note: targetCPUUtilizationPercentage only exists in autoscaling/v1; in autoscaling/v2beta2 the equivalent is a Resource metric with target type Utilization, as above.)
Scaling activity is managed by kube-controller-manager in the K8s control plane; if you have EKS control-plane logging enabled, you can also look there for more details.
✌️
Found the answer to this a while ago and forgot to update. There was a long debate on this topic in a well-known k8s project issue. It is essentially a design bug in k8s HPA targets (feature?): https://github.com/kubernetes/kubernetes/issues/78761#issuecomment-670815813