How to export data within a specified time range to CSV with Cassandra?
I can export all the data to CSV with this command:
COPY data TO '/usr/local/cassandra/my_data.csv';
But now I want to export only the data from a specific time range, as in this query:
select * from data where upload_time >= '2020-10-16 00:00:00.000+0000' and upload_time < '2020-10-17 00:00:00.000+0000' allow filtering;
What command should I use for that?
You can use the DSBulk utility with a custom query, but you need to be careful to phrase the condition in an optimized way, so that the full scan is performed over token ranges (see this blog post for details).
Something like this (replace pk with the name of the actual partition key column, and keep the query string on one line - I split it here only for readability):
dsbulk unload -url data.csv \
  -query "SELECT * FROM ks.table WHERE token(pk) > :start AND token(pk) <= :end
          AND upload_time >= '2020-01-01 00:00:00.000+0000'
          AND upload_time < '2021-01-01 00:00:00.000+0000' ALLOW FILTERING"
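Note that DSBulk fills in the :start and :end placeholders itself: it generates one such query per token range and executes them in parallel, so the upload_time condition is applied inside each range scan rather than as a single unrestricted ALLOW FILTERING query against the whole cluster.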
Another approach is to use Spark together with the Spark Cassandra Connector (even in local master mode) - it will do the same thing under the hood. Something like this (example for spark-shell in Scala; it could be done similarly via pyspark):
import org.apache.spark.sql.cassandra._

// Read the Cassandra table as a DataFrame
val data = spark.read.cassandraFormat("table", "keyspace").load()

// Keep only the rows in the desired time window
// (exclusive upper bound, matching the original CQL query)
val filtered = data.filter("upload_time >= cast('2020-01-01 00:00:00.000+0000' as timestamp) AND upload_time < cast('2021-01-01 00:00:00.000+0000' as timestamp)")

// Write the result out as CSV
filtered.write.format("csv").save("data.csv")
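Note that, unlike COPY TO, Spark writes a directory of part files rather than a single CSV file. If you need one file with a header row, and the filtered result is small enough to sit in a single partition, a minimal sketch (the output path name is just an example):

// Collapse to a single partition so Spark emits one part file,
// and include a header row; only do this for reasonably small results
filtered
  .coalesce(1)
  .write
  .option("header", "true")
  .csv("data_single_csv")

The path is still created as a directory; the actual CSV is the part-*.csv file inside it. Also remember to make the connector available when starting the shell, e.g. spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0 --conf spark.cassandra.connection.host=127.0.0.1.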