Jetty async not working as anticipated, performing worse than sync
Thanks in advance for any pointers or help.
Basically I expected the async version to outperform the sync version, but the sync version performs the same or better.
Am I doing something wrong?
I tried it without Javalin, in case something in the framework was causing the problem, and it gave similar results.
I also tried it with just Netty (the code is too long to post) and saw similar results.
I wrote the following code (javalin-3.12.0 and jetty-9.4.31.v20200723):
```java
import io.javalin.Javalin;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

import java.io.IOException;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AsyncTest {
    static ScheduledThreadPoolExecutor scheduledThreadPoolExecutor = new ScheduledThreadPoolExecutor(5000);

    public static void main(String[] args) {
        var jav = Javalin.create();
        jav.config.server(() -> new Server(new QueuedThreadPool(5000, 500, 120_000)));
        Javalin app = jav.start(8080);

        app.get("/async-delay", ctx -> {
            var async = ctx.req.startAsync();
            scheduledThreadPoolExecutor.schedule(() -> {
                try {
                    ctx.res.getOutputStream().println("ok");
                } catch (IOException e) {
                    e.printStackTrace();
                }
                async.complete();
            }, 100, TimeUnit.MILLISECONDS);
        });

        app.get("/delay", ctx -> {
            Thread.sleep(100);
            ctx.result("ok");
        });

        app.get("/no-delay", ctx -> {
            ctx.result("ok");
        });
    }
}
```
And got the following results:
➜ ~ wrk2 -t16 -c300 -d5s -R3000 http://localhost:8080/delay
Running 5s test @ http://localhost:8080/delay
16 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 331.36ms 138.72ms 626.18ms 57.34%
Req/Sec nan nan 0.00 0.00%
10854 requests in 5.00s, 1.24MB read
Socket errors: connect 53, read 0, write 0, timeout 106
Requests/sec: 2170.40
Transfer/sec: 254.34KB
➜ ~ wrk2 -t16 -c300 -d5s -R3000 http://localhost:8080/async-delay
Running 5s test @ http://localhost:8080/async-delay
16 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 285.84ms 120.75ms 522.50ms 56.27%
Req/Sec nan nan 0.00 0.00%
11060 requests in 6.10s, 1.29MB read
Socket errors: connect 53, read 0, write 0, timeout 124
Requests/sec: 1814.16
Transfer/sec: 216.14KB
➜ ~ wrk2 -t16 -c16 -d5s -R70000 http://localhost:8080/no-delay
Running 5s test @ http://localhost:8080/no-delay
16 threads and 16 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.51ms 3.12ms 21.95ms 88.36%
Req/Sec nan nan 0.00 0.00%
349824 requests in 5.00s, 40.03MB read
Requests/sec: 69995.44
Transfer/sec: 8.01MB
Since Jetty 9+ has been 100% async from its very inception, it makes sense that there is no difference.
(In fact, Jetty 9+ does extra work to pretend to be synchronous when you use blocking APIs like InputStream.read() or OutputStream.write().)
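The thread economics at stake can be illustrated with plain JDK executors, no Jetty involved (the pool sizes below are illustrative, not taken from the question): with only 16 worker threads, 300 blocking 100 ms "requests" serialize into batches, while timer-driven completion lets them all finish in roughly one delay period. With a 5000-thread pool, as in the benchmark above, the sync path never runs out of threads, which is one reason the two endpoints perform alike.

```java
import java.util.concurrent.*;

public class ThreadEconomics {
    public static void main(String[] args) throws Exception {
        int tasks = 300;

        // Sync style: each "request" parks a worker thread for the full 100 ms,
        // so 300 tasks on 16 threads run as ~19 sequential batches.
        ExecutorService workers = Executors.newFixedThreadPool(16);
        CountDownLatch syncDone = new CountDownLatch(tasks);
        long t0 = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            workers.submit(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
                syncDone.countDown();
            });
        }
        syncDone.await();
        long syncMs = (System.nanoTime() - t0) / 1_000_000;

        // Async style: the delay is just a timer entry; no thread is parked
        // per request, so all 300 complete after roughly one 100 ms period.
        ScheduledExecutorService timer = Executors.newScheduledThreadPool(1);
        CountDownLatch asyncDone = new CountDownLatch(tasks);
        t0 = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            timer.schedule(asyncDone::countDown, 100, TimeUnit.MILLISECONDS);
        }
        asyncDone.await();
        long asyncMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println(syncMs >= 1000);  // ~19 batches x 100 ms each
        System.out.println(asyncMs < 1000);  // all timers fire together
        workers.shutdown();
        timer.shutdown();
    }
}
```

The difference only shows up when worker threads are scarce relative to in-flight requests; a 5000-thread pool hides it entirely.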
Also, your load testing workload is not realistic.
- You need many client machines to test with. No single software client can, on its own, stress a Jetty server: you will hit system resource limits well before you hit any kind of Jetty serving limit.
- Use a ratio of at least 4 client machines to 1 server machine (we test with an 8-to-1 ratio) to generate enough load to stress Jetty.
- You need many concurrent connections to the server (think 40,000+).
- Or you want HTTP/2 in the picture (which also stresses server resources in its own unique ways).
- You need large amounts of data returned (enough to require multiple network buffers to return).
- You also want some client connections that are slow readers (on a synchronous server these can affect the other, non-slow connections, simply because they tie up too many resources).
Yes, Joakim called it: wrk was the bottleneck here. If I run several instances in parallel as suggested above, I get 4x the rps.
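Back-of-envelope arithmetic (Little's Law) shows why a single wrk2 instance saturates here: at the target rate of 3000 req/s with a 100 ms handler delay, the steady-state number of in-flight requests is 3000 × 0.1 = 300, i.e. every one of the -c300 connections is busy at all times, so the client caps throughput regardless of what the server does. A quick sketch:

```java
public class LittlesLaw {
    public static void main(String[] args) {
        double delaySeconds = 0.100; // per-request handler delay
        int targetRps = 3000;        // wrk2 -R3000

        // Little's Law: in-flight requests = arrival rate * latency.
        double inFlight = targetRps * delaySeconds;
        System.out.println((int) inFlight);

        // With -c300, every connection is busy the whole time, so the
        // client itself caps throughput at about c / delay req/s.
        int connections = 300;
        System.out.println((int) (connections / delaySeconds));
    }
}
```

So to push the server harder you need more connections (or more client machines), which is exactly what running four wrk2 instances in parallel provides.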
➜ ~ ➜ ~ wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay & ; wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay & ; wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay & ; wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay & ;
[1] 2779
[2] 2780
[3] 2781
[4] 2782
Running 5s test @ http://localhost:8080/async-delay
16 threads and 500 connections
Running 5s test @ http://localhost:8080/async-delay
16 threads and 500 connections
Running 5s test @ http://localhost:8080/async-delay
16 threads and 500 connections
Running 5s test @ http://localhost:8080/async-delay
16 threads and 500 connections
➜ ~ Thread Stats Avg Stdev Max +/- Stdev
Latency 104.09ms 13.99ms 337.66ms 97.43%
Req/Sec nan nan 0.00 0.00%
7066 requests in 5.06s, 841.85KB read
Socket errors: connect 261, read 14, write 1, timeout 522
Requests/sec: 1395.35
Transfer/sec: 166.24KB
Thread Stats Avg Stdev Max +/- Stdev
Latency 103.84ms 12.70ms 239.23ms 97.48%
Req/Sec nan nan 0.00 0.00%
7066 requests in 5.06s, 841.85KB read
Socket errors: connect 261, read 9, write 2, timeout 522
Requests/sec: 1395.56
Transfer/sec: 166.27KB
[1] 2779 done wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay
[2] 2780 done wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay
➜ ~ ➜ ~ Thread Stats Avg Stdev Max +/- Stdev
Latency 103.62ms 12.48ms 243.58ms 97.67%
Req/Sec nan nan 0.00 0.00%
7064 requests in 6.16s, 841.61KB read
Socket errors: connect 261, read 13, write 2, timeout 584
Requests/sec: 1147.51
Transfer/sec: 136.71KB
Thread Stats Avg Stdev Max +/- Stdev
Latency 103.50ms 12.94ms 339.46ms 97.83%
Req/Sec nan nan 0.00 0.00%
7055 requests in 6.16s, 840.54KB read
Socket errors: connect 261, read 6, write 2, timeout 646
Requests/sec: 1145.42
Transfer/sec: 136.47KB
[3] - 2781 done wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay
[4] + 2782 done wrk2 -t16 -c500 -d5s -R3000 http://localhost:8080/async-delay
➜ ~ ➜ ~