Celery Daemon does not work on Centos 7

I am trying to run the celery daemon on Centos 7, which uses systemd / systemctl. It does not work.

Any suggestions on how to solve this?

Here is my daemon default configuration:

CELERYD_NODES="localhost.localdomain"
CELERY_BIN="/tmp/myapp/venv/bin/celery"
CELERY_APP="pipeline"
CELERYD_OPTS="--broker=amqp://192.168.168.111/"
CELERYD_LOG_LEVEL="INFO"
CELERYD_CHDIR="/tmp/myapp"
CELERYD_USER="root"

Note: I start the daemon with

sudo /etc/init.d/celeryd start

My celery daemon script comes from: https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd

I also tried the one from: https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd but that one shows me an error when trying to start the daemon:

systemd[1]: Starting LSB: celery task worker daemon...
celeryd[19924]: basename: missing operand
celeryd[19924]: Try 'basename --help' for more information.
celeryd[19924]: Starting : /etc/rc.d/init.d/celeryd: line 193: multi: command not found
celeryd[19924]: [FAILED]
systemd[1]: celeryd.service: control process exited, code=exited status=1
systemd[1]: Failed to start LSB: celery task worker daemon.
systemd[1]: Unit celeryd.service entered failed state.

celeryd is deprecated. If you are able to run the worker in non-daemon mode, say

celery worker -l info -A my_app -n my_worker

you can simply daemonize it with celery multi:

celery multi start my_worker -A my_app -l info
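As a sketch of the full daemonized lifecycle (the app name my_app and worker name my_worker are placeholders, and the pid/log paths are illustrative):

```shell
# Start a single named worker in the background, writing pid and log files
# (%n expands to the node name)
celery multi start my_worker -A my_app -l info \
    --pidfile=/var/run/celery/%n.pid \
    --logfile=/var/log/celery/%n.log

# Restart it in place, using the same pidfile to find it
celery multi restart my_worker --pidfile=/var/run/celery/%n.pid

# Stop it, waiting for currently executing tasks to finish
celery multi stopwait my_worker --pidfile=/var/run/celery/%n.pid
```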

That being said, if you still want to use celeryd, try these steps.

As @ChillarAnand answered earlier, don't use celeryd.

But running celery with celery multi under systemd is actually not as simple as he wrote.

Here are my working (and, I think, non-obvious) examples.

They were tested on Centos 7.1.1503 with celery 3.1.23 (Cipater) running in a virtualenv, using the tasks.py example app from the Celery tutorial.

Running a single worker

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant

# directory with tasks.py
WorkingDirectory=/home/vagrant/celery_example

# !!! setting PIDFile below is REQUIRED in this case!
# (You will still get a warning "PID file /var/run/celery/single.pid not readable (yet?) after start."
# from systemd, but the service will in fact be starting, stopping and restarting properly.
# I haven't found a way to get rid of this warning.)
PIDFile=/var/run/celery/single.pid

# !!! using --pidfile option here and below is REQUIRED in this case!
# !!! also: don't use "%n" in pidfile or logfile paths - you will get these files named after the systemd service instead of after the worker (?)
ExecStart=/home/vagrant/celery_example/venv/bin/celery multi start single-worker -A tasks --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log "-c 4 -Q celery -l INFO"

ExecStop=/home/vagrant/celery_example/venv/bin/celery multi stopwait single-worker --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log

ExecReload=/home/vagrant/celery_example/venv/bin/celery multi restart single-worker --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log

# Creates /var/run/celery, if it doesn't exist
RuntimeDirectory=celery

[Install]
WantedBy=multi-user.target
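To use this unit I install it roughly as follows (a sketch; the unit file name celery-single.service is my own choice, and the paths match the unit above):

```shell
# Install the unit and make systemd re-read its configuration
sudo cp celery-single.service /etc/systemd/system/
sudo systemctl daemon-reload

# The log directory must exist and be writable by the service user;
# /var/run/celery itself is created by RuntimeDirectory=celery
sudo mkdir -p /var/log/celery
sudo chown vagrant:vagrant /var/log/celery

# Start now and enable on boot
sudo systemctl start celery-single
sudo systemctl enable celery-single
```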

Running multiple workers

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant

# directory with tasks.py
WorkingDirectory=/home/vagrant/celery_example

# !!! in this case DON'T set PIDFile or use --pidfile or --logfile below or it won't work!
ExecStart=/home/vagrant/celery_example/venv/bin/celery multi start 3 -A tasks "-c 4 -Q celery -l INFO"

ExecStop=/home/vagrant/celery_example/venv/bin/celery multi stopwait 3

ExecReload=/home/vagrant/celery_example/venv/bin/celery multi restart 3

# Creates /var/run/celery, if it doesn't exist
RuntimeDirectory=celery

[Install]
WantedBy=multi-user.target
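After starting this unit you can check that all three workers actually forked (a sketch; I'm assuming the unit is installed as celeryd-multi.service, and celery multi start 3 names the nodes celery1 through celery3 by default):

```shell
# systemd's view of the forked worker processes
systemctl status celeryd-multi

# Each worker shows up as its own process tree
ps aux | grep "[c]elery worker"

# Ask the workers themselves, through the broker
/home/vagrant/celery_example/venv/bin/celery -A tasks inspect ping
```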

(Note that I am running the workers with -c / --concurrency > 1, but it also works with it set to 1 or left at the default. This should also work if you are not using virtualenv, but I strongly recommend using one.)

I don't really understand why systemd can't guess the PID of the forked process in the first case, and why putting the pidfile in a specific location breaks the second case, so I filed a ticket here: https://github.com/celery/celery/issues/3459. If I get an answer, or come up with an explanation myself, I will post it here.