NGINX: exceeding the 65535 connections limit

Unlike HTTP, a websocket keeps its connection open long-term after being upgraded from HTTP.

Even if the OS is tuned to use all of its ports, there are still only 65536 ports in total. Is it possible for NGINX to exceed this limit?

A possible solution is SO_REUSEPORT, but it lacks documentation; at least, I have found none apart from the passage below:

NGINX release 1.9.1 introduces a new feature that enables use of the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel version 3.9 and later). This socket option allows multiple sockets to listen on the same IP address and port combination. The kernel then load balances incoming connections across the sockets.
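To make the option concrete, here is a minimal C sketch of what the release notes describe, assuming Linux 3.9+ and a free port 8080 on the loopback (both the port and the address are illustrative): two sockets bind and listen on the same address:port pair, because each sets SO_REUSEPORT before bind(2).

```c
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a listening socket with SO_REUSEPORT set before bind(2). */
int reuseport_listener(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); exit(1); }

    int one = 1;
    /* Must be set on every socket sharing the port, before bind(2). */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
        perror("setsockopt(SO_REUSEPORT)");
        exit(1);
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }
    if (listen(fd, SOMAXCONN) < 0) { perror("listen"); exit(1); }
    return fd;
}

int main(void)
{
    /* Without SO_REUSEPORT the second bind(2) would fail with EADDRINUSE;
       with it, the kernel load balances connections between the two. */
    int a = reuseport_listener(8080);
    int b = reuseport_listener(8080);
    printf("two listeners on 127.0.0.1:8080, fds %d and %d\n", a, b);
    pause();
    return 0;
}
```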

So, NGINX calls accept to take inbound connections:

The accept() system call is used with connection-based socket types (SOCK_STREAM, SOCK_SEQPACKET). It extracts the first connection request on the queue of pending connections for the listening socket, sockfd, creates a new connected socket, and returns a new file descriptor referring to that socket. The newly created socket is not in the listening state. The original socket sockfd is unaffected by this call.
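In nginx the accept loop lives inside its event modules, but the shape is the classic one from the man page. A stripped-down sketch, with a hypothetical port 8080 and error handling omitted for brevity:

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);          /* hypothetical port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, SOMAXCONN);

    for (;;) {
        /* Extracts the first pending connection from lfd's queue and
           returns a NEW descriptor for it; lfd stays in the listening
           state and is unaffected, exactly as the man page says. */
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        write(cfd, "OK\n", 3);
        close(cfd);
    }
}
```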

Will the new socket occupy a port? If so, how can the 65535 connections limit be exceeded?

The comment you received is correct:

TCP connections are defined by the 4-tuple (src_addr, src_port, dst_addr, dst_port). You can have a server connected to more than 65536 clients all on the same port if the clients are using different IP addresses and/or source ports. Example: server IP is 0.0.0.1 listening on port 80. All the 4-tuples could then be (*, *, 0.0.0.1, 80). So long as no 4-tuples are the same, the server can have as many connections on port 80 as its memory will allow. – Cornstalks Dec 4 '15 at 2:36
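You can observe that 4-tuple directly: calling getsockname(2) and getpeername(2) on an accepted descriptor shows that the local (dst_addr, dst_port) half is the same for every connection, while only the remote (src_addr, src_port) half varies. A sketch of such a helper (the function name is mine, not from the discussion):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Print the (src_addr, src_port, dst_addr, dst_port) tuple of a
   connected socket, e.g. one returned by the accept(2) loop above.
   The dst half is identical for every connection to the same listener. */
void print_4tuple(int cfd)
{
    struct sockaddr_in peer, local;
    socklen_t len;
    char pbuf[INET_ADDRSTRLEN], lbuf[INET_ADDRSTRLEN];

    len = sizeof(peer);
    getpeername(cfd, (struct sockaddr *)&peer, &len);   /* src side */
    len = sizeof(local);
    getsockname(cfd, (struct sockaddr *)&local, &len);  /* dst side */

    printf("(%s, %u, %s, %u)\n",
           inet_ntop(AF_INET, &peer.sin_addr, pbuf, sizeof(pbuf)),
           ntohs(peer.sin_port),
           inet_ntop(AF_INET, &local.sin_addr, lbuf, sizeof(lbuf)),
           ntohs(local.sin_port));
}
```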

However, when evaluating whether you would exceed the limit, you must also consider that nginx is not just a server (where ngx_connection.c#ngx_open_listening_sockets() makes the socket(2), bind(2) and listen(2) system calls to take over ports like 80, subsequently calling accept(2) in an infinite loop), but it is also potentially a client of an upstream server (calling socket(2) and connect(2) as needed to connect to upstreams on ports like 8080).
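On that client side, each upstream connection costs an ephemeral source port, which the kernel picks during connect(2); this, not the server's single listening port, is the resource that can run out. A sketch, assuming an upstream listening on 127.0.0.1:8080:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in upstream;
    memset(&upstream, 0, sizeof(upstream));
    upstream.sin_family = AF_INET;
    upstream.sin_port = htons(8080);        /* assumed upstream port */
    inet_pton(AF_INET, "127.0.0.1", &upstream.sin_addr);
    connect(fd, (struct sockaddr *)&upstream, sizeof(upstream));

    /* The kernel assigned an ephemeral source port during connect(2);
       it is these ports that an nginx-as-client can exhaust. */
    struct sockaddr_in self;
    socklen_t len = sizeof(self);
    getsockname(fd, (struct sockaddr *)&self, &len);
    printf("ephemeral source port: %u\n", ntohs(self.sin_port));
    return 0;
}
```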

Note that whereas running out of TCP ports is not really possible for its server context (since the server uses a single port across all of its connections, e.g., port 80), running out of TCP ports on the client side is a real possibility, depending on the configuration. You also have to consider that after the client performs a close(2) on the connection, the state goes to TIME_WAIT for a period of some 60 seconds or so (to ensure that if any late packets do make it through, the system will know what to do with them).
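On Linux, the size of that client-side pool is bounded by net.ipv4.ip_local_port_range; comparing the range size against how quickly closed connections pile up in TIME_WAIT gives a rough feel for when exhaustion would bite. A small sketch that reads the range (the /proc path is Linux-specific):

```c
#include <stdio.h>

/* On Linux, the ephemeral range used for outgoing connections is
   bounded by net.ipv4.ip_local_port_range (e.g. "32768 60999"). */
int main(void)
{
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    if (!f) { perror("fopen"); return 1; }
    int lo, hi;
    if (fscanf(f, "%d %d", &lo, &hi) == 2)
        printf("ephemeral ports: %d..%d (%d usable)\n",
               lo, hi, hi - lo + 1);
    fclose(f);
    return 0;
}
```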

However, with that said, note that the SO_REUSEPORT option to getsockopt(2), at least in the sharding context presented in the referenced release notes and the reuseport announcement of nginx 1.9.1, is entirely unrelated to the 65535 conundrum; it is merely a building block for scalable multiprocessor support between the kernel and the applications that run under the kernel:

I ran a wrk benchmark with 4 NGINX workers on a 36-core AWS instance. To eliminate network effects, I ran both client and NGINX on localhost, and also had NGINX return the string OK instead of a file. I compared three NGINX configurations: the default (equivalent to accept_mutex on), with accept_mutex off, and with reuseport. As shown in the figure, reuseport increases requests per second by 2 to 3 times, and reduces both latency and the standard deviation for latency.
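The "sharding" being benchmarked simply means each worker owns its own SO_REUSEPORT listening socket on the same port, so the kernel, rather than an accept_mutex, spreads connections across workers. A sketch of that process shape, reusing the reuseport_listener() helper sketched earlier (the worker count of 4 mirrors the benchmark but is otherwise arbitrary):

```c
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

int reuseport_listener(uint16_t port);   /* as sketched earlier */

int main(void)
{
    /* Each worker gets ITS OWN listening socket on the same port; the
       kernel load balances new connections across them, so no
       accept_mutex-style coordination between workers is needed. */
    for (int i = 0; i < 4; i++) {        /* 4 workers, as in the benchmark */
        if (fork() == 0) {
            int lfd = reuseport_listener(8080);
            for (;;) {
                int cfd = accept(lfd, NULL, NULL);
                if (cfd >= 0) {
                    write(cfd, "OK\n", 3);   /* same "OK" reply as the test */
                    close(cfd);
                }
            }
        }
    }
    for (;;)
        pause();                         /* parent just keeps the workers */
}
```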

As for your underlying question, the solution to the uint16_t issue of outgoing TCP ports would probably be to not use backends over TCP when this is a concern, and/or to use extra local addresses through the proxy_bind et al. directives (and/or to limit the number of TCP connections that can be established with the backends).
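At the syscall level, what proxy_bind buys you is an explicit bind(2) of the upstream socket to a chosen local address before connect(2); every additional source IP multiplies the available (src_addr, src_port) combinations. A hedged sketch of that pattern (the helper name is mine, and real deployments would use their own addresses in place of the documentation-range placeholders):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* What proxy_bind does under the hood: pin the upstream connection's
   source address, so each configured local IP adds its own ~64k pool
   of (src_addr, src_port) combinations toward distinct 4-tuples. */
int connect_from(const char *src_ip, const char *dst_ip, unsigned short dst_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in src;
    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_port = 0;                       /* ephemeral port, fixed address */
    inet_pton(AF_INET, src_ip, &src.sin_addr);
    bind(fd, (struct sockaddr *)&src, sizeof(src));

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(dst_port);
    inet_pton(AF_INET, dst_ip, &dst.sin_addr);
    connect(fd, (struct sockaddr *)&dst, sizeof(dst));
    return fd;
}
```

Calling, say, connect_from("192.0.2.10", ...) and connect_from("192.0.2.11", ...) draws from two independent ephemeral-port pools, roughly doubling the number of upstream connections available before the uint16_t limit is felt.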