Gunicorn: stuck at booting new workers
I have a fairly simple app (using FastAPI) that loads a numpy array and defines some API endpoints.
import numpy as np
import pandas as pd
import logging
from fastapi import FastAPI
app = FastAPI()
logging.basicConfig(level=logging.DEBUG)
logging.info('Loading texts')
texts = pd.read_csv('cleaned.csv')
logging.info('Loading embeddings')
embeddings = np.load('laser-2020-04-30.npy') # 3.7G
logging.info('Loading completed!')
# some API endpoints below...
I can run this app without any problems using plain python3.7. It also runs fine with vanilla gunicorn. The problem arises when I run everything inside a docker container (with gunicorn). It seems to get stuck while loading the large numpy array and keeps booting new workers.
[2020-05-11 08:33:20 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-05-11 08:33:20 +0000] [1] [DEBUG] Arbiter booted
[2020-05-11 08:33:20 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2020-05-11 08:33:20 +0000] [1] [INFO] Using worker: sync
[2020-05-11 08:33:20 +0000] [7] [INFO] Booting worker with pid: 7
[2020-05-11 08:33:20 +0000] [1] [DEBUG] 1 workers
INFO:root:Loading texts
INFO:root:Loading embeddings
[2020-05-11 08:33:35 +0000] [18] [INFO] Booting worker with pid: 18
INFO:root:Loading texts
INFO:root:Loading embeddings
[2020-05-11 08:33:51 +0000] [29] [INFO] Booting worker with pid: 29
INFO:root:Loading texts
INFO:root:Loading embeddings
[2020-05-11 08:34:05 +0000] [40] [INFO] Booting worker with pid: 40
INFO:root:Loading texts
INFO:root:Loading embeddings
[2020-05-11 08:34:19 +0000] [51] [INFO] Booting worker with pid: 51
INFO:root:Loading texts
INFO:root:Loading embeddings
[2020-05-11 08:34:36 +0000] [62] [INFO] Booting worker with pid: 62
I set the number of workers to 1 and increased the timeout to 900 seconds. Still, it boots a new worker every 10-15 seconds.
The command that runs the app in my Dockerfile looks like this (in exec form, each argument has to be its own list element, so "-b 0.0.0.0:8080" and "--timeout 900" need to be split):
CMD ["gunicorn", "-b", "0.0.0.0:8080", "main:app", "--timeout", "900", "--log-level", "debug", "--workers", "1", "--graceful-timeout", "900"]
To solve this, I simply increased the amount of RAM the docker container is allowed to use. On my 2019 MacBook, Docker's default memory limit is 2G. Since the numpy array is 3.7G, each worker was killed while loading it for exceeding the limit, and gunicorn immediately booted a replacement, which is exactly the loop in the log above. Raising the limit when starting the container fixes it:
docker run -m=8g -t my_docker
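If raising the container's memory limit is not an option, a memory-mapped load may keep the worker within a smaller limit: `np.load` accepts `mmap_mode='r'`, which maps the file read-only and pages it in lazily on access instead of reading all 3.7G into RAM up front. A minimal sketch with a small stand-in file (the file name here is illustrative, not the one from the question):

```python
import numpy as np

# Stand-in for the real 3.7G 'laser-2020-04-30.npy' file.
np.save('embeddings.npy', np.arange(12, dtype=np.float32).reshape(3, 4))

# mmap_mode='r' returns a read-only np.memmap backed by the file;
# pages are faulted in only when the corresponding rows are accessed.
embeddings = np.load('embeddings.npy', mmap_mode='r')

print(type(embeddings).__name__)   # memmap
print(float(embeddings[2, 3]))     # 11.0
```

Relatedly, if you later run more than one worker, gunicorn's `--preload` flag loads the application once in the master before forking, so workers share the array's pages copy-on-write instead of each loading its own copy.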