nginx uwsgi won't accept new connection while request is being processed
I have a Python Flask application served by nginx and uwsgi. I'm running the application in development on my local machine, and everything works fine when I open it in a browser. I make a POST request and it returns fine. So far so good...

Now, this POST request is a long-running, computationally heavy request that takes about 60 seconds to complete. So I wanted to test whether I could open multiple connections. I make the POST request and then open the application in another browser tab, but it won't load until the POST request has returned.

I'm new to nginx and uwsgi, and it's been a rough road getting this far, but I thought the whole idea was that they handle connections and load more efficiently out of the box, so I assume I'm making a rookie mistake here.

How can I get this application to handle multiple connections and requests?
Here is my nginx.conf:
daemon off;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;
        #root /app;
        charset UTF-8;
        access_log /var/log/nginx/t206cv.access.log;

        location / {
            proxy_pass http://app;
            proxy_connect_timeout 600;
            proxy_send_timeout 600;
            proxy_read_timeout 600;
            send_timeout 600;
        }
    }

    upstream app {
        server app:5000;
    }
}
Here is my uwsgi.ini:
[uwsgi]
chdir = /app
module = t206cv:app
http-socket = 0.0.0.0:5000
master = True
Here is what happens when I start the application:
app_1 | [uWSGI] getting INI configuration from /etc/uwsgi.ini
app_1 | *** Starting uWSGI 2.0.13.1 (64bit) on [Thu Aug 25 00:49:37 2016] ***
app_1 | compiled with version: 4.9.2 on 24 August 2016 02:00:22
app_1 | os: Linux-4.1.19-boot2docker #1 SMP Mon Mar 7 17:44:33 UTC 2016
app_1 | nodename: b6faafc928a1
app_1 | machine: x86_64
app_1 | clock source: unix
app_1 | pcre jit disabled
app_1 | detected number of CPU cores: 1
app_1 | current working directory: /
app_1 | detected binary path: /usr/local/bin/uwsgi
app_1 | uWSGI running as root, you can use --uid/--gid/--chroot options
app_1 | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
app_1 | chdir() to /app
app_1 | your processes number limit is 1048576
app_1 | your memory page size is 4096 bytes
app_1 | detected max file descriptor number: 1048576
app_1 | lock engine: pthread robust mutexes
app_1 | thunder lock: disabled (you can enable it with --thunder-lock)
app_1 | uwsgi socket 0 bound to TCP address 0.0.0.0:5000 fd 3
app_1 | Python version: 2.7.12 (default, Aug 22 2016, 20:25:04) [GCC 4.9.2]
app_1 | *** Python threads support is disabled. You can enable it with --enable-threads ***
app_1 | Python main interpreter initialized at 0xe34450
app_1 | your server socket listen backlog is limited to 100 connections
app_1 | your mercy for graceful operations on workers is 60 seconds
app_1 | mapped 145536 bytes (142 KB) for 1 cores
app_1 | *** Operational MODE: single process ***
app_1 | WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0xe34450 pid: 6 (default app)
app_1 | *** uWSGI is running in multiple interpreter mode ***
app_1 | spawned uWSGI master process (pid: 6)
app_1 | spawned uWSGI worker 1 (pid: 11, cores: 1)
app_1 | [pid: 11|app: 0|req: 1/1] 172.17.0.3 () {34 vars in 605 bytes} [Thu Aug 25 00:50:06 2016] GET / => generated 2383 bytes in 16 msecs (HTTP/1.0 200) 2 headers in 81 bytes (1 switches on core 0)
app_1 | [pid: 11|app: 0|req: 2/2] 172.17.0.3 () {38 vars in 697 bytes} [Thu Aug 25 00:50:06 2016] GET /static/styles.css => generated 0 bytes in 6 msecs (HTTP/1.0 304) 4 headers in 181 bytes (0 switches on core 0)
app_1 | [pid: 11|app: 0|req: 3/3] 172.17.0.3 () {38 vars in 683 bytes} [Thu Aug 25 00:50:06 2016] GET /static/scripts.js => generated 0 bytes in 1 msecs (HTTP/1.0 304) 4 headers in 182 bytes (0 switches on core 0)
app_1 | [pid: 11|app: 0|req: 4/4] 172.17.0.3 () {34 vars in 605 bytes} [Thu Aug 25 00:51:03 2016] GET / => generated 2383 bytes in 2 msecs (HTTP/1.0 200) 2 headers in 81 bytes (1 switches on core 0)
app_1 | [pid: 11|app: 0|req: 5/5] 172.17.0.3 () {38 vars in 697 bytes} [Thu Aug 25 00:51:03 2016] GET /static/styles.css => generated 0 bytes in 2 msecs (HTTP/1.0 304) 4 headers in 181 bytes (0 switches on core 0)
app_1 | [pid: 11|app: 0|req: 6/6] 172.17.0.3 () {38 vars in 683 bytes} [Thu Aug 25 00:51:03 2016] GET /static/scripts.js => generated 0 bytes in 3 msecs (HTTP/1.0 304) 4 headers in 182 bytes (0 switches on core 0)
app_1 | [pid: 11|app: 0|req: 7/7] 172.17.0.3 () {40 vars in 683 bytes} [Thu Aug 25 00:51:06 2016] POST /search => generated 89 bytes in 81585 msecs (HTTP/1.0 200) 2 headers in 71 bytes (2 switches on core 0)
app_1 | [pid: 11|app: 0|req: 8/8] 172.17.0.3 () {32 vars in 574 bytes} [Thu Aug 25 00:52:28 2016] GET / => generated 2383 bytes in 1 msecs (HTTP/1.0 200) 2 headers in 81 bytes (1 switches on core 0)
app_1 | [pid: 11|app: 0|req: 9/9] 172.17.0.3 () {32 vars in 574 bytes} [Thu Aug 25 00:52:28 2016] GET / => generated 2383 bytes in 2 msecs (HTTP/1.0 200) 2 headers in 81 bytes (1 switches on core 0)
You are only running one worker:
spawned uWSGI worker 1 (pid: 11, cores: 1)
A worker with a single thread/process can handle one request at a time. Configure more workers in your uWSGI config:
workers = 8
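Your startup log also warns that "Python threads support is disabled". Instead of (or in addition to) more processes, each worker can run several threads. As a sketch, your uwsgi.ini combined with these settings might look like this (the process/thread counts are just illustrative starting points):

```ini
[uwsgi]
chdir = /app
module = t206cv:app
http-socket = 0.0.0.0:5000
master = True
; run several worker processes, each with a few threads
processes = 4        ; "processes" is an alias for "workers"
threads = 2
enable-threads = true
```

With 4 processes of 2 threads each, up to 8 requests can be served concurrently. Note that a CPU-bound 60-second request still occupies a thread for its full duration, so for computationally heavy work extra processes tend to help more than extra threads.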
Using cheaper mode is an easy way to scale the number of worker processes with demand:
cheaper = 2
cheaper-initial = 2
workers = 16
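Cheaper mode requires a master process, which your config already has via master = True. Annotated, the settings above mean roughly:

```ini
[uwsgi]
master = True        ; required for cheaper mode
workers = 16         ; upper bound on spawned workers
cheaper = 2          ; minimum number of workers kept alive
cheaper-initial = 2  ; workers spawned at startup
```

uWSGI then starts with 2 workers and spawns more as load increases, up to 16, shrinking back down when traffic subsides.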