High Read and Write Latency in Cassandra 2.2.6

Hello. Similar questions have been asked before, but I think our problem is a bit different:

We are running a single-node Cassandra 2.2.6 installation (and preparing to upgrade to the latest version). We are now seeing terrible query times and occasional write timeouts.

    Read Count: 21554802
    Read Latency: 10.702975718589295 ms.
    Write Count: 19437551
    Write Latency: 27.806026818707767 ms.
    Pending Flushes: 0
            Table: -----
            SSTable count: 5
            Space used (live): 661310370
            Space used (total): 661310370
            Space used by snapshots (total): 704698632
            Off heap memory used (total): 845494
            SSTable Compression Ratio: 0.13491738106721324
            Number of keys (estimate): 179623
            Memtable cell count: 594836
            Memtable data size: 8816212
            Memtable off heap memory used: 0
            Memtable switch count: 3343
            Local read count: 21554802
            Local read latency: 11,744 ms
            Local write count: 19437551
            Local write latency: 30,506 ms
            Pending flushes: 0
            Bloom filter false positives: 387
            Bloom filter false ratio: 0,00024
            Bloom filter space used: 258368
            Bloom filter off heap memory used: 258328
            Index summary off heap memory used: 34830
            Compression metadata off heap memory used: 552336
            Compacted partition minimum bytes: 180
            Compacted partition maximum bytes: 12108970
            Compacted partition mean bytes: 23949
            Average live cells per slice (last five minutes): 906.885821915692
            Maximum live cells per slice (last five minutes): 182785
            Average tombstones per slice (last five minutes): 1.4321025078309697
            Maximum tombstones per slice (last five minutes): 50

For comparison, here is a different table containing roughly 10 million records, structured very similarly to the one above:

    Read Count: 815780599
    Read Latency: 0.1672932019580917 ms.
    Write Count: 3083462
    Write Latency: 1.5470194706469547 ms.
    Pending Flushes: 0
            Table: ------
            SSTable count: 9
            Space used (live): 5067447115
            Space used (total): 5067447115
            Space used by snapshots (total): 31810631860
            Off heap memory used (total): 19603932
            SSTable Compression Ratio: 0.2952622065160448
            Number of keys (estimate): 12020796
            Memtable cell count: 300611
            Memtable data size: 18020553
            Memtable off heap memory used: 0
            Memtable switch count: 97
            Local read count: 815780599
            Local read latency: 0,184 ms
            Local write count: 3083462
            Local write latency: 1,692 ms
            Pending flushes: 0
            Bloom filter false positives: 7
            Bloom filter false ratio: 0,00000
            Bloom filter space used: 15103552
            Bloom filter off heap memory used: 15103480
            Index summary off heap memory used: 2631412
            Compression metadata off heap memory used: 1869040
            Compacted partition minimum bytes: 925
            Compacted partition maximum bytes: 1916
            Compacted partition mean bytes: 1438
            Average live cells per slice (last five minutes): 1.0
            Maximum live cells per slice (last five minutes): 1
            Average tombstones per slice (last five minutes): 1.0193396020053622
            Maximum tombstones per slice (last five minutes): 3

The difference is that the first table contains a lot of maps and UDTs. A simple test in DevCenter with select * from ... limit 999; (leaving out any Lucene indexes and the like) takes 183 ms on the latter table and 1.8 s on the first.

Can someone suggest a way to find the root cause?

Maximum live cells per slice (last five minutes): 182785

That is huge, and it probably comes from your maps and UDTs. Your data model is most likely the root cause. Walking 180k live cells to satisfy a single query is going to be slow.
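
One way to confirm this is to trace a slow query from cqlsh; the trace reports how many live and tombstone cells each read had to walk. A minimal sketch, with placeholder keyspace/table names:

    -- In cqlsh: enable tracing, run the suspect query, then read the trace.
    -- The trace typically contains events such as
    -- "Read N live and M tombstone cells" for each read.
    TRACING ON;
    SELECT * FROM my_keyspace.my_table LIMIT 999;   -- hypothetical names
    TRACING OFF;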

select * from ... limit 999;

Range queries are inherently slow. Try to design your tables so that you can answer your queries from a single partition, and you will get much better results.
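
As a sketch of what that can look like (the schema is hypothetical, since the real table definition isn't shown), put whatever you filter on into the partition key so a query touches exactly one partition:

    -- Hypothetical example: key the table by what the application queries on.
    CREATE TABLE my_keyspace.events_by_day (
        day        date,        -- partition key: one bounded partition per day
        event_time timestamp,   -- clustering column: rows ordered within the partition
        payload    text,
        PRIMARY KEY ((day), event_time)
    );

    -- Reads a single partition instead of performing a range scan:
    SELECT * FROM my_keyspace.events_by_day WHERE day = '2016-07-01' LIMIT 999;

Keeping partitions bounded this way also keeps "live cells per slice" small, which addresses the first point above.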

one node installation

Every time a GC pause hits, you will get a bad query. This can be mitigated by adding more nodes, so the pauses hurt less (and even better, by using client-side speculative retries in the driver).
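
The client-side feature mentioned here is the speculative execution support in the DataStax drivers. Once the cluster does have replicas, Cassandra also offers a related server-side per-table setting; a sketch of the latter, with placeholder names:

    -- Server-side analogue, only useful with replication factor > 1:
    -- the coordinator queries another replica when the first one responds
    -- more slowly than the table's 99th-percentile latency.
    ALTER TABLE my_keyspace.my_table WITH speculative_retry = '99.0PERCENTILE';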