Nginx upstream timeout error while running Ruby on Rails query
In my Ruby on Rails application, I have a table with about 10,000 entries that can be searched using different parameters. On the development box this works fine, but on the production box I get an error:
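A minimal sketch of the kind of parameterized search this describes, using plain Ruby objects rather than ActiveRecord (the model name, fields, and filter parameters are all assumptions, not from the original app):

```ruby
# Hypothetical stand-in for the searchable table: a struct with a few fields.
Item = Struct.new(:name, :category, :price)

# Apply only the filters that were supplied, as a search endpoint typically would.
def search(items, name: nil, category: nil, max_price: nil)
  results = items
  results = results.select { |i| i.name.include?(name) }  if name
  results = results.select { |i| i.category == category } if category
  results = results.select { |i| i.price <= max_price }   if max_price
  results
end

items = [
  Item.new("widget", "tools", 10),
  Item.new("gadget", "tools", 25),
  Item.new("gizmo",  "toys",  5),
]

puts search(items, category: "tools", max_price: 15).map(&:name)
```

With ~10,000 rows a query like this should return well under a second; if the real query takes longer than the proxy timeout, nginx gives up on the upstream and logs the 110 error below.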
2018/10/04 15:46:39 [error] 3418#3418: *6 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.5, server: my.site.com, request: "POST /quotes/quoteTable_js HTTP/1.1", upstream: "http://unix:///path/to/app/shared/tmp/sockets/puma.awi_staging.sock/items/itemTable_js", host: "192.168.1.25", referrer: "http://192.168.1.25/items"
I did not set up the server, so I am a bit out of my depth here. I have looked at the following questions:
- Nginx reverse proxy causing 504 Gateway Timeout
- NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream
However, none of them worked, or I did not implement them correctly.
My nginx.conf file:
user nginx;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Edit
I get the "We're sorry, but something went wrong" Rails message after ~7 seconds. I have tried increasing keepalive_timeout, but nothing changed.
The keepalive_timeout parameter controls how long an idle client connection stays open to save the cost of a possible reconnection; it is not where your problem lies.

For upstream timeouts there are proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout (the nginx default is 60s, but your config file seems to set it lower). You can try increasing the latter two, but usually nobody wants the server to take that long to respond: long requests tie up workers, and clients may start timing out on all requests, not just the 'heavy' ones.
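To raise those timeouts for this app, the proxy directives go in the server/location block that proxies to Puma, typically under /etc/nginx/sites-enabled/. A sketch, assuming a standard Puma-over-unix-socket setup (the socket path is taken from the error log; the values are examples, not recommendations):

```nginx
location / {
    proxy_pass http://unix:///path/to/app/shared/tmp/sockets/puma.awi_staging.sock;

    # All three default to 60s. proxy_read_timeout is the one that governs
    # "upstream timed out ... while reading response header from upstream".
    proxy_connect_timeout 60s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;
}
```

That said, given that the same query is fast on the development box, the better fix is usually to find out why the production query is slow (missing index, N+1 queries) rather than to let requests run for minutes.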