Correlation between throughput and latency when benchmarking with YCSB

I am using YCSB to benchmark a number of different NoSQL databases. However, when it comes to the number of client threads, I am having a hard time interpreting the throughput vs. latency results.

For example, when benchmarking Cassandra running workload A (50/50 reads and updates) with 16 client threads, I execute the following command:

bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx -p recordcount=525600 -p operationcount=525600 -threads 16 -P workloads/workloada -s > workloada_525600_16_threads_run_res.txt

which gives the following output:

[OVERALL], RunTime(ms), 62751
[OVERALL], Throughput(ops/sec), 8375.962136061577
[TOTAL_GCS_PS_Scavenge], Count, 64
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 289
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.46055042947522745
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 64
[TOTAL_GC_TIME], Time(ms), 289
[TOTAL_GC_TIME_%], Time(%), 0.46055042947522745
[READ], Operations, 262650
[READ], AverageLatency(us), 1844.6075042832667
[READ], MinLatency(us), 290
[READ], MaxLatency(us), 116159
[READ], 95thPercentileLatency(us), 3081
[READ], 99thPercentileLatency(us), 7551
[READ], Return=OK, 262650
[CLEANUP], Operations, 16
[CLEANUP], AverageLatency(us), 139458.5
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2232319
[CLEANUP], 95thPercentileLatency(us), 19
[CLEANUP], 99thPercentileLatency(us), 2232319
[UPDATE], Operations, 262950
[UPDATE], AverageLatency(us), 1764.8220193953223
[UPDATE], MinLatency(us), 208
[UPDATE], MaxLatency(us), 95807
[UPDATE], 95thPercentileLatency(us), 2901
[UPDATE], 99thPercentileLatency(us), 7031
[UPDATE], Return=OK, 262950

Running the same with 32 threads I get:

[OVERALL], RunTime(ms), 51785
[OVERALL], Throughput(ops/sec), 10149.65723665154
[TOTAL_GCS_PS_Scavenge], Count, 124
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 310
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.5986289466061601
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 124
[TOTAL_GC_TIME], Time(ms), 310
[TOTAL_GC_TIME_%], Time(%), 0.5986289466061601
[READ], Operations, 262848
[READ], AverageLatency(us), 2947.844628834916
[READ], MinLatency(us), 363
[READ], MaxLatency(us), 194559
[READ], 95thPercentileLatency(us), 5079
[READ], 99thPercentileLatency(us), 11055
[READ], Return=OK, 262848
[CLEANUP], Operations, 32
[CLEANUP], AverageLatency(us), 69601.5625
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2228223
[CLEANUP], 95thPercentileLatency(us), 3
[CLEANUP], 99thPercentileLatency(us), 2228223
[UPDATE], Operations, 262752
[UPDATE], AverageLatency(us), 2881.930485781269
[UPDATE], MinLatency(us), 316
[UPDATE], MaxLatency(us), 203391
[UPDATE], 95thPercentileLatency(us), 4987
[UPDATE], 99thPercentileLatency(us), 10711
[UPDATE], Return=OK, 262752

The overall runtime is shorter and therefore the throughput is higher, but the latency is also higher.
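As a rough sanity check (my assumption: each YCSB client thread issues operations back to back, so throughput ≈ threads / mean latency), the reported numbers are at least consistent with each other:

# mean latency taken as the average of the READ and UPDATE AverageLatency(us) values, converted to seconds
echo "16 / 0.0018047" | bc -l   # ≈ 8866 ops/sec vs. reported 8376 ops/sec
echo "32 / 0.0029149" | bc -l   # ≈ 10978 ops/sec vs. reported 10150 ops/sec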

I am not quite sure how to interpret these results. How do you find the "appropriate" number of client threads to run?

To get a meaningful benchmark, you should first define the SLA requirements your system needs to meet. Say your workload pattern is 50/50 write/read and your SLA requirement is 10K ops/sec throughput with a 99th-percentile latency below 10 ms. Use the YCSB -target flag to generate the required throughput, and run with various thread counts to see which one meets your SLA needs.
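A sketch of such a sweep (the host address is the placeholder from above; the thread counts and output file names are my own choices; -target 10000 caps the offered load at the 10K ops/sec SLA while -threads varies the concurrency):

for t in 8 16 32 64; do
  bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx -p recordcount=525600 -p operationcount=525600 -target 10000 -threads $t -P workloads/workloada -s > workloada_target10k_${t}_threads_run_res.txt
done

Then compare the [READ] and [UPDATE] 99thPercentileLatency(us) lines of each output file against the 10 ms target and pick the smallest thread count that sustains the throughput within that bound.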

It makes sense that throughput increases (more ops/sec) when using more threads, but this comes at the cost of latency. You should look at the relevant database metrics to try to find your bottleneck (see the sketch after this list); it can be:

  • the client (you may need a stronger client machine, or better parallelism using fewer threads per client but more client machines)

  • the network

  • the database server (disk/RAM; use a stronger instance).
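A minimal sketch of how one might sample each layer during a run (my assumptions: the client and server are Linux hosts with sysstat installed, and nodetool is executed on the Cassandra node itself):

# on the YCSB client: check whether the client itself is CPU-bound
top -b -n 1 | head -n 20
# on either side: per-interface network throughput
sar -n DEV 1 5
# on the Cassandra node: thread-pool backlog (pending/blocked tasks) and disk pressure
nodetool tpstats
iostat -x 1 5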

You can read more about database benchmarking considerations here.