Kubernetes HPA doesn't scale down after decreasing the load
The Kubernetes HPA works correctly when the pod load increases, but after the load decreases, the deployment's scale does not change. This is my HPA file:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: baseinformationmanagement
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baseinformationmanagement
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
My Kubernetes version:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
This is my HPA description:
> kubectl describe hpa baseinformationmanagement
Name: baseinformationmanagement
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"baseinformationmanagement","name...
CreationTimestamp: Sun, 27 Sep 2020 06:09:07 +0000
Reference: Deployment/baseinformationmanagement
Metrics: ( current / target )
resource memory on pods (as a percentage of request): 49% (1337899008) / 70%
resource cpu on pods (as a percentage of request): 2% (13m) / 50%
Min replicas: 1
Max replicas: 3
Deployment pods: 2 current / 2 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events: <none>
Your HPA specifies both memory and CPU targets. The Horizontal Pod Autoscaler documentation notes:
If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen.
The actual replica target is a function of the current replica count and of the current and target utilization (same link):
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
For memory in particular: currentReplicas is 2, currentMetricValue is 49, and desiredMetricValue is 80, so the desired replica count is
desiredReplicas = ceil[ 2 * ( 49 / 80 )]
desiredReplicas = ceil[ 2 * 0.6125 ]
desiredReplicas = ceil[ 1.225 ]
desiredReplicas = 2
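For comparison, applying the same formula to the CPU metric (2% current utilization against the 80% target in your manifest) would give roughly:
desiredReplicas = ceil[ 2 * ( 2 / 80 )]
desiredReplicas = ceil[ 0.05 ]
desiredReplicas = 1
Since the controller takes the largest of the per-metric recommendations, max(1, 2) = 2, it is the memory metric that keeps the deployment at 2 replicas.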
This will leave you with (at least) 2 replicas even if your service is completely idle, unless the service happens to release memory back to the OS; that generally depends on the language runtime and is somewhat out of your control.
Removing the memory target and autoscaling only on CPU will probably better match your expectations.
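For example, a CPU-only version of your HPA could simply be your existing manifest with the memory block removed (a sketch; keep or adjust the 80% target to whatever threshold you actually want):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: baseinformationmanagement
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baseinformationmanagement
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80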