Issue while starting Python, Spark and Gunicorn

I am trying to restart Python, gunicorn and spark right after Capistrano finishes deploying, but I get the error below. However, when I run the same commands on the server over ssh, they work fine.

The task in deploy.rb:

    desc 'Restart django'
    task :restart_django do
      on roles(:django), in: :sequence, wait: 5 do
        within "#{fetch(:deploy_to)}/current/" do
          execute "cd #{fetch(:deploy_to)}/current/ && source bin/activate"
          execute "sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python && pkill -f gunicorn && pkill -f spark"
          # execute "cd /home/ubuntu/code/spark-2.1.0-bin-hadoop2.7/sbin/ && ./start-master.sh && ./start-slave.sh spark://127.0.0.1:7077;"
          # execute "sleep 20"
          # execute "cd /home/ubuntu/code/ && nohup gunicorn example.wsgi:application --name example --workers 4 &"
        end
      end
    end

Deployment output:

cap dev deploy:restart_django
Using airbrussh format.
Verbose output is being written to log/capistrano.log.
00:00 deploy:restart_django
      01 cd /home/ubuntu/code/ &&  source bin/activate
    ✔ 01 ubuntu@xx-xx-xx-xx-xx.us-west-1.compute.amazonaws.com 2.109s
      02 sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as ubuntu@ec2-54-244-99-254.us-west-2.compute.amazonaws.com: sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark exit status: 1
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stdout: Stopping supervisor: supervisord.
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stderr: Nothing written

SSHKit::Command::Failed: sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark exit status: 1
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stdout: Stopping supervisor: supervisord.
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stderr: Nothing written

Tasks: TOP => deploy:restart_django
(See full trace by running task with --trace)
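Note that the exit status 1 does not necessarily mean anything crashed: `pkill` (and `fuser -k`) return a non-zero status when no process matches, so the first `pkill` in the `&&` chain that finds nothing aborts the whole line. A minimal demonstration, using a hypothetical process name that matches nothing:

```shell
# pkill exits with status 1 when no process matches the pattern,
# so every command chained after it with && is skipped.
status=0
pkill -x "nonexistent-proc-name" || status=$?   # hypothetical name; matches nothing
echo "pkill exit status: $status"               # prints: pkill exit status: 1
```

This is why the same chain can succeed in an interactive ssh session (where the processes happen to be running) yet fail during a deploy.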

Capistrano invokes a non-login, non-interactive shell by default. Using a login shell is not really a good option, but I was able to work around the problem by having Capistrano invoke a login shell with the command below:

execute "bash --login -c 'pkill -f spark'", raise_on_non_zero_exit: false
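Putting the two ideas together, the restart task could be sketched roughly as below. This is a sketch under the assumptions of the question (same paths, roles, and port); SSHKit's `raise_on_non_zero_exit: false` option keeps the deploy going when `pkill` or `fuser` find nothing to kill:

```ruby
desc 'Restart django'
task :restart_django do
  on roles(:django), in: :sequence, wait: 5 do
    within "#{fetch(:deploy_to)}/current/" do
      execute "sudo /etc/init.d/supervisor stop"
      # fuser -k and pkill exit non-zero when nothing matches,
      # so don't let that abort the deploy
      execute "bash --login -c 'sudo fuser -k 8000/tcp'", raise_on_non_zero_exit: false
      execute "bash --login -c 'pkill -f gunicorn'", raise_on_non_zero_exit: false
      execute "bash --login -c 'pkill -f spark'", raise_on_non_zero_exit: false
    end
  end
end
```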