Memory and CPU management on Docker containers

I am running a Selenium Grid on Docker containers. I have one container running the Selenium hub and five other containers running Chrome nodes (each with up to 5 sessions). The problem is that the test team requests a random number of Chrome sessions. Usually, when there are around 5 Chrome session requests, memory usage climbs to 80% and CPU to 95%. One more request and all the containers go down, leaving Selenium unusable for everyone.

My question is: how do I prevent this from happening? Since I have no control over how many sessions the test team will request, I want to limit the percentage of RAM and CPU available to the Docker containers. Do I have to do this on each container, or just once for the Docker application as a whole?

AFAIK, you will have to limit resources per container in docker run. From the Docker Run Reference:

Runtime constraints on CPU and memory

The operator can also adjust the performance parameters of the container:

-m="": Memory limit (format: <number><optional unit>, where unit = b, k, m or g) -c=0 : CPU shares (relative weight)

The operator can constrain the memory available to a container easily with docker run -m. If the host supports swap memory, then the -m memory setting can be larger than physical RAM.
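For example, a minimal sketch of capping one chrome-node container at 1 GB of RAM (the image name and the limit are placeholders; adjust them to your setup):

    # cap this container at 1 GB of RAM (value is illustrative)
    docker run -d -m 1g selenium/node-chrome

With each of the five node containers capped this way, the nodes together can never claim more than 5 GB, no matter how many sessions the test team requests.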

Similarly the operator can increase the priority of this container with the -c option. By default, all containers run at the same priority and get the same proportion of CPU cycles, but you can tell the kernel to give more shares of CPU time to one or more containers when you start them via Docker.

The flag -c or --cpu-shares with value 0 indicates that the running container has access to all 1024 (default) CPU shares. However, this value can be modified to run a container with a different priority or different proportion of CPU cycles.

E.g., if we start three containers {C0, C1, C2} with default values (-c or --cpu-shares = 0) and one container {C3} with -c or --cpu-shares=512, then C0, C1, and C2 would have access to 100% of the CPU shares (1024) and C3 would only have access to 50% of the CPU shares (512). In the context of a time-sliced OS with the time quantum set to 100 milliseconds, containers C0, C1, and C2 will run for the full time quantum, and container C3 will run for half the time quantum, i.e. 50 milliseconds.
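Applied to this grid, a sketch of that weighting might leave the hub at the default weight and give each node half of it, so the hub stays responsive under load (container names and values are illustrative, and grid wiring such as links/ports is omitted):

    # hub keeps the default 1024 CPU shares
    docker run -d --name selenium-hub selenium/hub
    # each node gets half the relative weight of the hub
    docker run -d -c 512 --name chrome-node-1 selenium/node-chrome

Keep in mind that CPU shares are relative weights, not hard caps: a node can still use idle CPU; the weight only matters when containers compete for cycles.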

You can also specify which cores the container runs on with the --cpuset option. For example: --cpuset=0-3, --cpuset=0, --cpuset=3,4.
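For instance, a sketch that pins one node to two cores while also capping its memory (the core numbers and image name are illustrative):

    # pin this node to cores 0 and 1 and cap it at 1 GB
    docker run -d --cpuset=0-1 -m 1g selenium/node-chrome

Note that on newer Docker versions this flag is spelled --cpuset-cpus; --cpuset is the older name.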