Cassandra: timeout during SIMPLE write query at consistency LOCAL_QUORUM

I am trying to ingest data (one partition = a 1 MB BLOB) from Spark into Cassandra with the following configuration parameters:

spark.sql.catalog.cassandra.spark.cassandra.output.batch.size.rows 1
spark.sql.catalog.cassandra.spark.cassandra.output.concurrent.writes 100
spark.sql.catalog.cassandra.spark.cassandra.output.batch.grouping.key none
spark.sql.catalog.cassandra.spark.cassandra.output.throughputMBPerSec 1
spark.sql.catalog.cassandra.spark.cassandra.output.consistency.level LOCAL_QUORUM
spark.sql.catalog.cassandra.spark.cassandra.output.metrics false
spark.sql.catalog.cassandra.spark.cassandra.connection.timeoutMS 90000
spark.sql.catalog.cassandra.spark.cassandra.query.retry.count 10
spark.sql.catalog.cassandra com.datastax.spark.connector.datasource.CassandraCatalog
spark.sql.extensions com.datastax.spark.connector.CassandraSparkExtensions

I started with a Spark job using 16 cores in total, then reduced it down to a single-core job.

In every case, after some time the job fails with the following response and the driver ends up in a failed state:

21/09/19 19:03:50 ERROR QueryExecutor: Failed to execute: com.datastax.spark.connector.writer.RichBoundStatementWrapper@532adef2
com.datastax.oss.driver.api.core.servererrors.WriteTimeoutException: Cassandra timeout during SIMPLE write query at consistency LOCAL_QUORUM (2 replica were required but only 0 acknowledged the write)

It may be related to some overloaded nodes, but how can I debug this? Which configuration should I tune?
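For reference, the timeout that produces this WriteTimeoutException is enforced server-side, not by the Spark connector: the coordinator gives up waiting for replica acknowledgements after `write_request_timeout_in_ms` in cassandra.yaml. A minimal fragment, assuming the default value shipped with recent Cassandra versions:

```yaml
# cassandra.yaml (server side) -- how long the coordinator waits for
# replica acks on a write before raising WriteTimeoutException.
# 2000 ms is the shipped default; raising it only masks overload.
write_request_timeout_in_ms: 2000
```

Raising this value can paper over occasional slowness, but if replicas consistently fail to acknowledge within the window, the root cause is usually load or data shape rather than the timeout itself.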

Thanks

Problem solved!

The problem was my data, not Cassandra.

In fact, a handful of partitions (2,000 out of 60,000,000) were around 50 MB in size instead of the 1 MB I expected.
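Such outliers can be spotted directly in the source data before writing. A sketch, assuming the same `blob` column and parquet source as the job below (the quantile error bound and the 1 MB threshold are illustrative assumptions):

```scala
import org.apache.spark.sql.functions.{col, length}

// Profile BLOB sizes in the source data to find oversized partitions.
val sized = spark.read.parquet("...")
  .withColumn("bytes_count", length(col("blob")))

// Approximate median / 99th percentile / maximum row size in bytes
val Array(p50, p99, max) =
  sized.stat.approxQuantile("bytes_count", Array(0.5, 0.99, 1.0), 0.001)

// How many rows exceed the expected ~1 MB
val oversized = sized.filter(col("bytes_count") > 1024 * 1024).count()
```

A gap between the 99th percentile and the maximum is a quick signal that a small number of partitions are far larger than the rest, which is exactly what was happening here.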

I simply filter out the large partitions during ingestion in Spark:

import org.apache.spark.sql.functions.{col, length}
...
spark.read.parquet("...")
  // EXCLUDE LARGE PARTITIONS
  .withColumn("bytes_count", length(col("blob")))
  .filter("bytes_count < " + argSkipPartitionLargerThan)
  // PROJECT
  .select("data_key", "blob")
  // COMMIT
  .writeTo(DS + "." + argTargetKS + "." + argTargetTable)
  .append()

Now the data (500 GB) ingests with Spark in just 10 minutes.