Running Flask, Celery and Gunicorn from a single Docker container
I am trying to run a Flask app that uses Celery and Gunicorn in the same Docker container, using supervisord. Gunicorn is launched from docker-compose.yml as:
services:
  web:
    container_name: "flask"
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    environment:
      - DEPLOYMENT_TYPE=production
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - MONGODB_DATABASE=testdb
      - MONGODB_USERNAME=testuser
      - MONGODB_PASSWORD=testuser
      - MONGODB_HOSTNAME=mongo
    command: gunicorn app.main:app --workers 1 --name main --reload -b 0.0.0.0:8000 --preload
    depends_on:
      - redis
      - mongo
    links:
      - mongo
Celery is configured in supervisord.conf:
[supervisord]
nodaemon=true
[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=/opt/venv/bin/celery -A app.routes.celery_tasks.celery worker --loglevel=info -B -s app/celerybeat-schedule
supervisord is started from the Dockerfile:
FROM ubuntu:20.04
LABEL maintainer="nebu"
ENV GROUP_ID=1000 \
    USER_ID=1000
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y apt-transport-https ca-certificates supervisor procps cron vim python3.8-venv python3-gdbm
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN ["python", "-m", "pip", "install", "--upgrade", "pip", "wheel"]
RUN apt-get install -y python3-wheel
COPY ./requirements.txt /app/requirements.txt
RUN ["python", "-m", "pip", "install", "--no-cache-dir", "--upgrade", "-r", "/app/requirements.txt"]
COPY ./app /app
RUN ["mkdir", "-p","/var/log/supervisor"]
COPY ./app/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
But when I bring all the containers up, the Celery worker and beat do not start. However, if I open a shell in the Flask app container and run /usr/bin/supervisord manually, the Celery worker starts without errors. How can I run Gunicorn and the Celery worker/beat together from the same Docker container?
Update
When I use the supervisord.conf below, nginx does not start and shows the following error:
[supervisord]
nodaemon=true
[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=/opt/venv/bin/celery -A app.routes.celery_tasks.celery worker --loglevel=info -B -s app/celerybeat-schedule
[program:myproject_gunicorn]
user=root
command=gunicorn app.main:app --workers 1 --name main --reload -b 0.0.0.0:8000 --preload
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Error:
nginx exited with code 1
nginx | 2022/04/09 09:41:56 [emerg] 1#1: host not found in upstream "web" in /etc/nginx/conf.d/app.conf:8
nginx | nginx: [emerg] host not found in upstream "web" in /etc/nginx/conf.d/app.conf:8
nginx exited with code 1
nginx | 2022/04/09 09:41:59 [emerg] 1#1: host not found in upstream "web" in /etc/nginx/conf.d/app.conf:8
nginx | nginx: [emerg] host not found in upstream "web" in /etc/nginx/conf.d/app.conf:8
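For context, this nginx error is usually a symptom rather than the root cause: Docker's embedded DNS only resolves containers that are running, so if the web container exits during startup, nginx's lookup of the upstream name "web" fails. Assuming the nginx service is defined in the same docker-compose.yml (its definition is not shown above), a depends_on entry at least orders startup so nginx is created after web, as in this sketch:

```yaml
  nginx:
    image: nginx
    depends_on:
      - web
    ports:
      - "80:80"
```

Note that depends_on only controls start order; it does not help if the web container crashes after starting, so the underlying failure in the web container still has to be fixed.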
The CMD ["/usr/bin/supervisord"] in the Dockerfile and the command: gunicorn app.main:app --workers 1 --name main --reload -b 0.0.0.0:8000 --preload in docker-compose.yml conflict. The command in docker-compose.yml overrides the CMD in the Dockerfile, which is why only Gunicorn came up.
My suggestion: add the Gunicorn program to supervisord.conf and remove the command from docker-compose.yml. That way supervisord will bring up both Gunicorn and Celery.
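Concretely, the web service would then look something like this sketch, with the command: line dropped so that the image's CMD ["/usr/bin/supervisord"] runs as the container's main process:

```yaml
services:
  web:
    container_name: "flask"
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    environment:
      - DEPLOYMENT_TYPE=production
      - FLASK_APP=app/main.py
    # command: removed -- the Dockerfile's CMD now runs supervisord,
    # which in turn starts both gunicorn and the celery worker/beat
    depends_on:
      - redis
      - mongo
```

One detail worth double-checking: the [program:celeryworker] entry uses the full path /opt/venv/bin/celery, while [program:myproject_gunicorn] uses a bare gunicorn. If supervisord reports that gunicorn cannot be found, using the full /opt/venv/bin/gunicorn path there as well may help.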