Nginx/Uwsgi/Flask POST times out if body is too large

I'm using a Docker image based on https://github.com/tiangolo/uwsgi-nginx-flask-docker/tree/master/python3.6. I'm running a Python app that accepts a POST, does some processing on the JSON body, and returns a simple JSON response. A post like this:

curl -H "Content-Type: application/json" -X POST http://10.4.5.168:5002/test -d '{"test": "test"}'

works fine. However, if I post a larger JSON file, I get a 504: Gateway Timeout.

curl -H "Content-Type: application/json" -X POST http://10.4.5.168:5002/test -d @some_6mb_file.json

I have a feeling something is going wrong in the communication between Nginx and uWSGI, but I'm not sure how to fix it.

Edit: I jumped into the Docker container and restarted Nginx manually to get better logging. I'm getting the following error:

2018/12/21 20:47:45 [error] 611#611: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.4.3.168, server: , request: "POST /model/refuel_verification_model/predict HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock", host: "10.4.3.168:5002"

From inside the container, I started a second instance of my Flask app, running without Nginx and uWSGI, and it worked fine. Returning a response takes about 5 seconds (due to the processing time of the data).
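Given the "upstream timed out" error above, one thing worth ruling out (an assumption, not confirmed by the log alone) is nginx's timeouts toward the uwsgi backend. `uwsgi_read_timeout` and `uwsgi_send_timeout` both default to 60s in `ngx_http_uwsgi_module`; raising them in the `location @app` block would exclude a plain slow-response timeout:

```nginx
location @app {
    include uwsgi_params;
    uwsgi_pass unix:///tmp/uwsgi.sock;
    # assumption: generous timeouts just to rule out slow processing;
    # both directives default to 60s
    uwsgi_read_timeout 300s;
    uwsgi_send_timeout 300s;
}
```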

Configuration:

/etc/nginx/nginx.conf:

user  nginx;
worker_processes 1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
daemon off;

/etc/nginx/conf.d/nginx.conf:

server {
    listen 80;
    location / {
        try_files $uri @app;
    }
    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }
    location /static {
        alias /app/static;
    }
}

/etc/nginx/conf.d/upload.conf:

client_max_body_size 128m;
client_body_buffer_size 128m;

The problem was with TensorFlow. I load a TensorFlow model during application initialization and then try to use it later. Because of the threading done by the web server and the "non-thread-safe" nature of TensorFlow, the processing hangs, causing the timeout.
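Since the hang comes from multiple server threads hitting a non-thread-safe model, one common workaround is to serialize access to it. Below is a minimal stdlib sketch of that pattern; `ThreadSafeModel` and the lambda "model" are placeholders for illustration, not the actual TensorFlow code:

```python
import threading

class ThreadSafeModel:
    """Wraps a non-thread-safe model so only one request uses it at a time."""

    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def predict(self, payload):
        # Only one worker thread runs inference at a time.
        with self._lock:
            return self._model(payload)

# Placeholder "model": doubles its input.
safe = ThreadSafeModel(lambda x: x * 2)
print(safe.predict(21))  # → 42
```

Serializing inference trades throughput for safety; another option is loading one model instance per worker process instead of per thread.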

I ran into this behavior when proxying to an aiohttp (Python) application.

In my case, I needed to disable caching in the proxied location block.

Removed from the block:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;

So the working configuration looks like this:

server {
    listen 80;
    location / {
        try_files $uri @app;
    }
    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://myapp;
    }
    location /static {
        alias /app/static;
    }
}