Batch Increment in HBase

When I try to run a batch of increments against an HBase table (with no duplicate row keys):

    // Build one Increment per row key from the per-UID counts
    final List<Increment> increments = countPerUid.entrySet().stream()
            .map(entry -> {
                Increment increment = new Increment(toBytes(entry.getKey()));
                increment.addColumn(toBytes(conf.parentColumnFamily()), toBytes(conf.parentRankQualifier()), entry.getValue());
                return increment;
            }).collect(Collectors.toList());

    public BatchOperationResult batchIncrement(HTable table, List<Increment> rows) {
        Object[] results = new Object[rows.size()];
        try {
            // Submit all increments in a single batch; per-row outcomes land in the results array
            table.batch(rows, results);
        } catch (IOException | InterruptedException e) {
            // Guava helper that rethrows the checked exception as an unchecked one
            throw Throwables.propagate(e);
        }
        return new BatchOperationResult(results);
    }
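For context, this is roughly how the two snippets are wired together. The table name below is a placeholder, and countPerUid is assumed to be a Map<String, Long> of per-UID deltas; neither is stated explicitly above:

    // Sketch only: "uid_counts" is a hypothetical table name
    HTable table = new HTable(HBaseConfiguration.create(), "uid_counts");
    BatchOperationResult result = batchIncrement(table, increments);
    table.close();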

I get this exception:

    2015-05-13 09:53:43,674 [Thread-9] ERROR hbase_query_layer.service.HbaseLayerServiceHandlerImpl - java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
    Failed 14896 actions: org.apache.hadoop.hbase.exceptions.OperationConflictException: The operation with nonce {-3517837563370374612, -1595005354043534544} on row [298270339298463040] may have already completed

Does anyone know why? :/

I'm running HBase 0.98.0.

Unrelated, but since you are apparently trying to speed up increments by batching them, I would suggest trying efficient readless increments here.

An example for HBase 0.98 can be found here: https://github.com/caskdata/cdap-hbase-increments

As a temporary workaround, the batch of increments needs to be split into smaller chunks; a rough sketch follows.
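A minimal sketch of that workaround, assuming the same imports as the snippets above plus Guava's Lists.partition; the chunk size of 1000 is an arbitrary starting point, not a recommended value:

    // Sketch only: CHUNK_SIZE is a guess; tune it down until the conflicts disappear
    private static final int CHUNK_SIZE = 1000;

    public void batchIncrementInChunks(HTable table, List<Increment> increments)
            throws IOException, InterruptedException {
        // Guava's Lists.partition splits the list into consecutive sublists of
        // at most CHUNK_SIZE elements; each chunk is submitted as its own batch
        for (List<Increment> chunk : Lists.partition(increments, CHUNK_SIZE)) {
            Object[] results = new Object[chunk.size()];
            table.batch(chunk, results);
        }
    }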