How do I find how much actual CPU a kubernetes pod/container is using?
I am trying to optimize the CPU resources allocated to a pod based on that pod's previous runs.
The only problem is that I can only find how much CPU is allocated to a given pod, not how much CPU the pod is actually using.
That information is not stored anywhere in Kubernetes. You can typically get the 'current' CPU utilization from a metrics endpoint.
You will have to use another system/database to store that information over time. The most commonly used is Prometheus, an open-source time-series database. You can also visualize its content using another popular tool: Grafana. There are other open-source alternatives too, for example InfluxDB.
In addition, there are plenty of commercial solutions that support Kubernetes metrics.
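As a quick illustration of the 'current' utilization mentioned above: if the metrics-server add-on is installed, a single pod's usage can be read straight from the metrics API (a minimal sketch; <namespace> and <pod-name> are placeholders):
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/<pod-name>"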
Starting at the Docker level, you can query containers with docker stats. To display an instantaneous snapshot of statistics for all containers, including CPU, memory, and network usage:
docker stats --no-stream
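If you only care about container names, CPU, and memory, docker stats also accepts a Go-template format string, for example:
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"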
To collect metrics over time, cAdvisor, Prometheus, and Grafana together make up a common open-source stack for collecting, storing, and viewing metrics.
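Once the cAdvisor metrics are scraped into Prometheus, per-pod CPU usage over time can be queried with PromQL. A typical query, assuming the standard container_cpu_usage_seconds_total metric and label names (these can differ between setups):
sum(rate(container_cpu_usage_seconds_total{namespace="<namespace>", pod="<pod-name>"}[5m])) by (container)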
I may be reading too much into the wording of the question: (quote) "how much CPU a pod is actually using"... even though the question also mentions (quote) "to optimize...based on previous runs". So:
For usage history, see Rico's answer.
For current usage, see kubectl top. Use watch to view the usage statistics every 2 seconds without having to run the command over and over. For example:
watch kubectl top pod <pod-name> --namespace=<namespace-name>
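To break the numbers down per container inside the pod, kubectl top also accepts a --containers flag:
kubectl top pod <pod-name> --namespace=<namespace-name> --containers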
Consider github.com/dpetzold/kube-resource-explorer:
# /opt/go/bin/kube-resource-explorer -namespace kube-system -reverse -sort MemReq
Namespace Name CpuReq CpuReq% CpuLimit CpuLimit% MemReq MemReq% MemLimit MemLimit%
--------- ---- ------ ------- -------- --------- ------ ------- -------- ---------
kube-system calico-node-sqh7m/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system metrics-server-58699455bc-kz4r9/metrics-server 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-hftdz/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-x72g6/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-fhtqm/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system tiller-deploy-5b7c66d59c-b72hk/tiller 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-node-xvfjf/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-node-ptq8l/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system addon-http-application-routing-external-dns-855cdc4946-jh68m/addon-http-application-routing-external-dns 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system addon-http-application-routing-nginx-ingress-controller-6bfljzb/addon-http-application-routing-nginx-ingress-controller 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-node-wsxp7/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-typha-86bcb74584-vwq5d/calico-typha 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-typha-horizontal-autoscaler-79d4669c84-7kd6s/autoscaler 10m 0% 10m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-xq5cq/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-svc-redirect-nqpf6/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system kube-svc-redirect-k4zrl/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system kube-svc-redirect-kx8l5/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system kube-svc-redirect-pwd5r/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system coredns-autoscaler-657d77ffbf-ld6jp/autoscaler 20m 0% 0m 0% 10Mi 0% 0Mi 0%
kube-system addon-http-application-routing-default-http-backend-74698cnzjt8/addon-http-application-routing-default-http-backend 10m 0% 10m 0% 20Mi 0% 20Mi 0%
kube-system kube-svc-redirect-nqpf6/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kube-svc-redirect-k4zrl/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kube-svc-redirect-pwd5r/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kube-svc-redirect-kx8l5/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kubernetes-dashboard-6f697bd9f5-sjtnf/main 100m 1% 100m 1% 50Mi 0% 500Mi 1%
kube-system tunnelfront-6bb9dcf868-hh6kp/tunnel-front 10m 0% 0m 0% 64Mi 0% 0Mi 0%
kube-system coredns-7fbf4847b6-gtnpb/coredns 100m 1% 0m 0% 70Mi 0% 170Mi 0%
kube-system coredns-7fbf4847b6-qcsgb/coredns 100m 1% 0m 0% 70Mi 0% 170Mi 0%
kube-system omsagent-rs-7b98f76d84-kj9v6/omsagent 50m 0% 150m 1% 175Mi 0% 500Mi 1%
kube-system omsagent-7m8vs/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system omsagent-8xcng/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system omsagent-q6dj4/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system omsagent-whnbp/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system cluster-autoscaler-7c694f79fd-rzftb/cluster-autoscaler 100m 1% 200m 2% 300Mi 1% 500Mi 1%
--------- ---- ------ ------- -------- --------- ------ ------- -------- ---------
Total 2240m/31644m 7% 1070m/31644m 3% 1795Mi/111005Mi 1% 4260Mi/111005Mi 3%
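The report can be sorted by other columns as well; for instance, to sort by CPU request instead (assuming -sort accepts the column names shown in the header above):
# /opt/go/bin/kube-resource-explorer -namespace kube-system -reverse -sort CpuReq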