Vert.x 2 HTTP server on Netty: response bytes are written but never flushed, channel never closed
I am serving HTTP requests with the HTTP server from Vert.x 2.1.6 (based on Netty 4.0.21), reading static files from the file system and writing them to the client.
Sometimes, during several simultaneous requests (read: rapid clicking in the browser), a random request never gets any response.
For example, clicking the logout link triggers requests for a background image, a logo, a couple of CSS files, and so on.
All files are returned successfully except one, e.g. the logo file, for which the HTTP request simply hangs.
It does not happen every time, but it seems to be triggered by starting new requests without waiting for the previous ones to complete.
To figure out what is going on, I added LoggingHandlers to the Netty pipeline created by the Vert.x DefaultHttpServer.
The output below shows a successful and an unsuccessful request for the file logo.png (size 12754 bytes).
The output leaves me with two questions:
- In the successful case, after the 12754 bytes of the actual file are written, another 12774 + 10 bytes are written. What are those bytes?
- In the unsuccessful case, there is a FLUSH before the WRITE operation and no activity afterwards. Even closing the response explicitly is ignored. What is going on?
Edit regarding question 1: I realized that the channel is used for multiple HTTP requests, so the trailing bytes are simply another file being written.
Request:
GET /authenticate/res/images/logo.png HTTP/1.1
Host: 192.168.0.12:18443
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: image/webp,image/apng,image/*,*/*;q=0.8
Referer: https://192.168.0.12:18443/authenticate/res/css/login.css
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Output when successful:
$ grep 0xafd76300 /var/log/server.log
2017-09-06 15:17:12,609 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] REGISTERED
2017-09-06 15:17:12,609 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] REGISTERED
2017-09-06 15:17:12,609 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] ACTIVE
2017-09-06 15:17:12,609 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] ACTIVE
2017-09-06 15:17:12,625 [SslHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
2017-09-06 15:17:12,625 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] USER_EVENT: io.netty.handler.ssl.SslHandshakeCompletionEvent@38f7a12a
2017-09-06 15:17:12,625 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] USER_EVENT: io.netty.handler.ssl.SslHandshakeCompletionEvent@38f7a12a
2017-09-06 15:17:12,629 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] RECEIVED: DefaultHttpRequest(decodeResult: success)
2017-09-06 15:17:12,633 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] RECEIVED(0B)
2017-09-06 15:17:12,634 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] WRITE(12754B)
2017-09-06 15:17:12,634 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] FLUSH
2017-09-06 15:17:12,635 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] WRITE(0B)
2017-09-06 15:17:12,635 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] WRITE(12774B)
2017-09-06 15:17:12,635 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] FLUSH
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] WRITE(0B)
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] FLUSH
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] WRITE(10B)
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] WRITE(0B)
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] FLUSH
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] CLOSE()
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 => /172.16.238.11:8443] CLOSE()
2017-09-06 15:17:12,636 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 :> /172.16.238.11:8443] INACTIVE
2017-09-06 15:17:12,637 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 :> /172.16.238.11:8443] UNREGISTERED
2017-09-06 15:17:12,637 [LoggingHandler] DEBUG: [id: 0xafd76300, /192.168.10.113:60180 :> /172.16.238.11:8443] UNREGISTERED
Output when unsuccessful:
$ grep 0x09db306c /var/log/server.log
2017-09-06 15:17:25,569 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] REGISTERED
2017-09-06 15:17:25,569 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] REGISTERED
2017-09-06 15:17:25,569 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] ACTIVE
2017-09-06 15:17:25,569 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] ACTIVE
2017-09-06 15:17:25,571 [SslHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
2017-09-06 15:17:25,571 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] USER_EVENT: io.netty.handler.ssl.SslHandshakeCompletionEvent@38f7a12a
2017-09-06 15:17:25,572 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] USER_EVENT: io.netty.handler.ssl.SslHandshakeCompletionEvent@38f7a12a
2017-09-06 15:17:25,608 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] RECEIVED: DefaultHttpRequest(decodeResult: success)
2017-09-06 15:17:25,608 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] RECEIVED(0B)
2017-09-06 15:17:25,609 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] FLUSH
2017-09-06 15:17:25,610 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] WRITE(12754B)
2017-09-06 15:17:25,610 [LoggingHandler] DEBUG: [id: 0x09db306c, /192.168.10.113:60300 => /172.16.238.11:8443] WRITE(0B)
The request handler registered on the HttpServer looks like this:
Vertx vertx = ...
HttpServerRequest req = ...
HttpServerResponse response = req.response();
vertx.fileSystem().readFile(file, (AsyncResult<Buffer> e) -> {
    try {
        if (e.succeeded()) {
            Buffer buf = e.result();
            response.putHeader("Content-Type", MimeMapper.getTypeForFile(file) + ";charset=UTF-8");
            response.putHeader("Content-Length", Integer.toString(buf.length()));
            response.write(buf);
            response.end();
            response.close();
            LOG.trace("Write success (%d) %s...", buf.length(), file);
        } else {
            LOG.error("Write failure", e.cause());
        }
    } catch (Exception x) {
        LOG.error("Write failure", x);
    }
});
Edit: calling response.close() has the effect of deregistering and deactivating the channel immediately. Without that call, the deregistration is triggered automatically about 5 minutes after the last write. In the unsuccessful case the deregistration is never triggered and the channel stays active indefinitely.
I modified the Netty pipeline in the Vert.x 2 DefaultHttpServer (version 2.1.6, SSL and compression enabled) by adding the two LoggingHandlers:
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(availableWorkers);
bootstrap.channel(NioServerSocketChannel.class);
tcpHelper.applyConnectionOptions(bootstrap);
tcpHelper.checkSSL(vertx);
bootstrap.childHandler(new ChannelInitializer<Channel>() {
    @Override
    protected void initChannel(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        if (tcpHelper.isSSL()) {
            pipeline.addLast("ssl", tcpHelper.createSslHandler(vertx, false));
        }
        pipeline.addLast("flashpolicy", new FlashPolicyHandler());
        pipeline.addLast("httpDecoder", new HttpRequestDecoder(4096, 8192, 8192, false));
        pipeline.addLast("httpEncoder", new VertxHttpResponseEncoder());
        pipeline.addLast("reqlog", new LoggingHandler());
        if (compressionSupported) {
            pipeline.addLast("deflater", new HttpChunkContentCompressor());
        }
        if (tcpHelper.isSSL() || compressionSupported) {
            // only add ChunkedWriteHandler when SSL is enabled otherwise it is not needed as FileRegion is used.
            pipeline.addLast("chunkedWriter", new ChunkedWriteHandler()); // For large file / sendfile support
        }
        pipeline.addLast("handler", new ServerHandler());
        pipeline.addLast("rsplog", new LoggingHandler());
    }
});
It seems I have found the source of the bug. To me it looks like a classic concurrency issue. I fixed it by changing a single line of code in vertx-core version 2.1.6, turning a write operation into a writeAndFlush. The fix probably bypasses the original author's intent and will cause some unnecessary flush operations on the Netty channel.
Here is what I think is happening. To follow my reasoning in the code, look at org.vertx.java.core.net.impl.ConnectionBase in 2.1.6. Two flags are involved: read and needsFlush. I assume here that two threads access the channel concurrently. A simplified sketch of the suspected interleaving follows the list below.
- The Vert.x HttpServer receives an HTTP request, opens a Netty channel and successfully reads the entire request. While reading, the handler is put into READ mode, and afterwards it returns to WRITE mode, flushing the write buffer: read = false, needsFlush = false.
- Before the Vert.x HttpServer starts writing the response to the first request, it receives another HTTP request and starts reading again. This puts the handler into read mode: read = true.
- The HTTP handler now wants to write the response to the first request, but because read = true, the write starts by setting needsFlush = true and prepares to queue the response.
- However, before any bytes are queued, the reading thread takes over again, finishes reading the second request and thereby sets read = false, flushes the (empty) write buffer (because needsFlush = true) and sets needsFlush = false.
- Now the writing thread resumes. Since it has already checked the read flag, it just writes the bytes to the channel without flushing. The channel will now never be flushed, because no further endReadAndFlush will happen.
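To make the interleaving easier to follow, here is a minimal, simplified model of the two flags as I understand them. This is illustrative only, not the actual ConnectionBase source; queue(), flush() and writeAndFlush() merely stand in for the corresponding Netty channel operations.

// Simplified model of the suspected race; not the real
// org.vertx.java.core.net.impl.ConnectionBase code.
class ConnectionModel {
    volatile boolean read;        // true while the event loop is reading a request
    volatile boolean needsFlush;  // set when a write happens during a read

    // called by the event-loop thread around each read
    void startRead() {
        read = true;
    }

    void endReadAndFlush() {
        read = false;
        if (needsFlush) {
            needsFlush = false;
            flush();              // flushes whatever has been queued so far
        }
    }

    // called by the thread writing the HTTP response
    void write(Object msg) {
        if (read) {
            needsFlush = true;    // rely on a later endReadAndFlush() to flush...
            queue(msg);           // ...but if endReadAndFlush() runs between the check
        } else {                  //    and the queueing, msg is never flushed
            writeAndFlush(msg);
        }
    }

    void queue(Object msg) { /* channel.write(msg), no flush */ }
    void flush() { /* channel.flush() */ }
    void writeAndFlush(Object msg) { /* channel.writeAndFlush(msg) */ }
}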
To fix the problem, I updated org.vertx.java.core.http.impl.VertxHttpHandler.write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) to do ctx.writeAndFlush(msg, promise) instead of just ctx.write(msg, promise).
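For reference, the change amounts to roughly the following. This is only a sketch: whatever else the real VertxHttpHandler.write does is left out, and per the description above only the final call was changed from ctx.write to ctx.writeAndFlush.

@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    // before the patch: ctx.write(msg, promise);
    // after the patch: flush on every write so a queued response can never be
    // left sitting unflushed in Netty's outbound buffer
    ctx.writeAndFlush(msg, promise);
}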
It may be that I am overlooking something that would solve the problem without patching vertx2. Maybe vertx2 can be configured to create a new Netty channel for every HTTP request instead of reusing the same channel for multiple requests? If anyone can suggest such an alternative solution, I will gladly accept it.
Edit: maybe I should add that I upgraded Netty to 4.0.51 when recompiling Vertx2. The problem is the same with both versions, possibly worse after the upgrade.