How to use CPU effectively with a large number of inactive pods in Kubernetes?
I have many services. A few of them are busy for about ten hours a day, while most of the others are idle or use only a small amount of CPU.

Previously I put all the services in a virtual machine with two CPUs and scaled by CPU usage; at the busiest time there were two virtual machines, but most of the time there was only one.
| services | instances | busy time in a day | cpu when busy (core/service) | cpu when idle (core/service) |
|---|---|---|---|---|
| busy services | 2 | 8~12 hours | 0.5~1 | 0.1~0.5 |
| busy services | 2 | 8~12 hours | 0.3~0.8 | 0.1~0.3 |
| inactive services | 30 | 0~1 hours | 0.1~0.3 | < 0.1 |
Now I want to move them to Kubernetes, with two CPUs per node, using node autoscaling and the HPA. For node autoscaling to work, I have to set CPU requests for all services, and that is exactly what I am struggling with.

Here is my setup.
| services | instances | busy time | requests cpu (cpu/service) | total requests cpu |
|---|---|---|---|---|
| busy services | 2 | 8~12 hours | 300m | 600m |
| busy services | 2 | 8~12 hours | 300m | 600m |
| inactive services | 30 | 0~1 hours | 100m | 3000m |
Note: the inactive services' CPU request is set to 100m because they do not work well with less than 100m when they are busy.
With this setup, the number of nodes is always greater than three, which is too expensive (the requests alone add up to 4200m, which already needs more than two 2-CPU nodes before counting system pods). I think the problem is that although these services need 100m of CPU to work properly, they are mostly idle.
I really hope that all the services can autoscale; I think this is the benefit of Kubernetes, which can help me place pods more flexibly. Is my idea wrong? Should I not set a CPU request for the inactive services?
Even if I ignore the inactive services, I find that Kubernetes often runs more than two nodes. If I had more active services, the requested CPU would exceed 2000m even in off-peak hours. Is there any solution?
> Previously I put all the services in a virtual machine with two CPUs and scaled by CPU usage; at the busiest time there were two virtual machines, but most of the time there was only one.
First of all, if you have any availability requirements, I would recommend always having at least two nodes. If you have only one node and it crashes (e.g. a hardware failure or a kernel panic), it takes a few minutes to detect the failure and a few more minutes for a new node to start.
> The inactive services' CPU request is set to 100m because they will not work well with less than 100m when they are busy. I think the problem is that although these services need 100m of CPU to work properly, they are mostly idle.
A CPU request is a guaranteed, reserved amount of resources. Here you are reserving far too much for services that are almost always idle. Set the CPU request lower, perhaps as low as 20m or even 5m. But since these services need more resources when they are busy, set a higher limit so that the container can "burst", and also use the Horizontal Pod Autoscaler for them. When using the Horizontal Pod Autoscaler, more replicas are created and the traffic is load balanced across all replicas. Also see Managing Resources for Containers.
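As a rough illustration of the idea (not your actual services), the container spec for one of the mostly-idle services could look like the sketch below; the name, image, and exact values are placeholders you would tune per workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inactive-service            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inactive-service
  template:
    metadata:
      labels:
        app: inactive-service
    spec:
      containers:
        - name: app
          image: registry.example.com/inactive-service:latest  # placeholder image
          resources:
            requests:
              cpu: 20m              # small guaranteed reservation while idle
              memory: 64Mi          # assumed value
            limits:
              cpu: 500m             # lets the container burst when it gets busy
              memory: 128Mi         # assumed value
```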
The same applies to your "busy services": reserve less CPU and use Horizontal Pod Autoscaling more aggressively, so that traffic is spread over more nodes under high load, but the cluster can scale down and save money when traffic is low.
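For example, a Horizontal Pod Autoscaler for one of the busy services could look like the following sketch (autoscaling/v2 API; the name and thresholds are assumptions). Note that the CPU utilization target is a percentage of the *requested* CPU, which is another reason the request should reflect a realistic baseline rather than the peak:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: busy-service              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: busy-service            # the Deployment to scale
  minReplicas: 2                  # keep two replicas for availability
  maxReplicas: 6                  # assumed upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # percent of the requested CPU
```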
> I really hope that all the services can autoscale; I think this is the benefit of Kubernetes, which can help me place pods more flexibly. Is my idea wrong?
Yes, I agree with you.
> Shouldn't I set a CPU request for an inactive service?
It is best to always set some value for both request and limit, at least for production environments. Without resource requests, scheduling and autoscaling will not work properly.
> If I have more active services, the requested CPU will exceed 2000m even in off-peak hours. Is there any solution?
In general, try to use lower resource requests and use Horizontal Pod Autoscaling more aggressively. This is true for both your "busy services" and your "inactive services".
> I find that Kubernetes more often runs more than two nodes.
Yes, there are two aspects to this.
First, cost. If you only use two worker nodes, your environment is probably quite small, and the Kubernetes control plane likely consists of more nodes and makes up the majority of the cost. For very small environments, Kubernetes can be expensive, and a serverless alternative like Google Cloud Run may be more attractive.
Second, availability. It is good to have at least two nodes in case of a sudden crash, e.g. a hardware failure or a kernel panic, so that your services are still available while the node autoscaler scales up a new node. The same goes for the number of replicas of your Deployments: if availability is important, use at least two replicas. When you e.g. drain a node for maintenance or a node upgrade, the pods are evicted, but they are not created on a different node first. The control plane detects that the Deployment (technically the ReplicaSet) has fewer than the desired number of replicas and creates a new Pod. But when a new Pod is created on a new node, the container image is pulled first before the Pod is running. To avoid downtime during these events, use at least two replicas for your Deployment and Pod Topology Spread Constraints to ensure those two replicas run on different nodes.
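A minimal sketch of such a constraint, added under the Pod template of the Deployment (the label is a placeholder matching the example above):

```yaml
# Goes under spec.template.spec of the Deployment
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname    # spread replicas across nodes
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: busy-service                  # hypothetical pod label
```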
Note: you may run into the same problem, and that should be mitigated by an upcoming Kubernetes feature: KEP - Trimaran: Real Load Aware Scheduling.