Cassandra cluster with poor insert performance and insert stability
I have to store around 250 numerical values per second, per client, which is around 900,000 numbers per hour. It probably won't be a full-day recording (more likely 5-10 hours a day), and I will partition the data by client ID and the date of the reading. The maximum row length comes to around 22-23M, which is still manageable. Nevertheless, my schema looks like this:
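As a quick sanity check on those volumes, the per-hour and per-day row counts work out as follows (plain arithmetic, no driver needed; the 5-10 recording hours per day are taken from the description above):

```java
public class VolumeEstimate {
    public static void main(String[] args) {
        int valuesPerSecond = 250;
        int valuesPerHour = valuesPerSecond * 3600;       // rows per client per hour
        long valuesPerDayMin = (long) valuesPerHour * 5;  // 5 recording hours per day
        long valuesPerDayMax = (long) valuesPerHour * 10; // 10 recording hours per day
        System.out.println(valuesPerHour);   // 900000
        System.out.println(valuesPerDayMin); // 4500000
        System.out.println(valuesPerDayMax); // 9000000
    }
}
```

So one partition per client per day would hold 4.5-9 million cells, which is why the (clientid, date) composite partition key is used to bound row width.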
CREATE TABLE measurement (
    clientid text,
    date text,
    event_time timestamp,
    value int,
    PRIMARY KEY ((clientid, date), event_time)
);
The keyspace has a replication factor of 2, just for testing; the snitch is GossipingPropertyFileSnitch and the replication strategy is NetworkTopologyStrategy. I know that a replication factor of 3 is more in line with production standards.
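For reference, a keyspace with that strategy and replication factor would be created along these lines (a sketch; the keyspace name and the datacenter name 'DC1' are assumptions, and the datacenter name must match what GossipingPropertyFileSnitch reports for the nodes):

CREATE KEYSPACE measurements
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};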
Next, I created a small cluster on the company servers: three bare-metal virtualized machines with 2 CPUs x 2 cores, 16 GB of RAM, and plenty of disk space. They are on a gigabit LAN with me. The cluster is operational, according to nodetool.
This is the code I used to test the setup:
Cluster cluster = Cluster.builder()
        .addContactPoint("192.168.1.100")
        .addContactPoint("192.168.1.102")
        .build();
Session session = cluster.connect();
DateTime time = DateTime.now();
BlockingQueue<BatchStatement> queryQueue = new ArrayBlockingQueue<>(50, true);

try {
    ExecutorService pool = Executors.newFixedThreadPool(15); // changed the pool size as well to throttle inserts
    String insertQuery = "insert into keyspace.measurement (clientid, date, event_time, value) values (?, ?, ?, ?)";
    PreparedStatement preparedStatement = session.prepare(insertQuery);
    BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED); // tried with UNLOGGED as well

    // generating the entries
    for (int i = 0; i < 900000; i++) { // 900000 entries is an hour's worth of measurements
        time = time.plus(4); // 4 ms between entries
        BoundStatement bound = preparedStatement.bind("1", "2014-01-01", time.toDate(), 1); // value not important
        batch.add(bound);
        // a batch statement may hold at most 65535 statements
        if (batch.size() >= 65534) {
            queryQueue.put(batch);
            batch = new BatchStatement();
        }
    }
    queryQueue.put(batch); // the last batch, possibly shorter than 65535

    // storing the data
    System.out.println("Starting storing");
    while (!queryQueue.isEmpty()) {
        pool.execute(() -> {
            try {
                long threadId = Thread.currentThread().getId();
                System.out.println("Started: " + threadId);
                BatchStatement statement = queryQueue.take();
                long start2 = System.currentTimeMillis();
                session.execute(statement);
                System.out.println("Finished " + threadId + ": " + (System.currentTimeMillis() - start2));
            } catch (Exception ex) {
                System.out.println(ex.toString());
            }
        });
    }
    pool.shutdown();
    pool.awaitTermination(120, TimeUnit.SECONDS);
} catch (Exception ex) {
    System.out.println(ex.toString());
} finally {
    session.close();
    cluster.close();
}
I arrived at this code by reading posts here and on other blogs and websites. As I understand it, it is important for the client to use multiple threads, which is why I did it this way. I also tried using async operations.
The bottom line is that, no matter which approach I use, one batch executes in 5-6 seconds, although it can take up to 10. It takes roughly the same whether I insert just one batch (so, only ~65k columns) or use a dumb single-threaded application. Honestly, I expected more. Especially since I get more or less similar performance on my laptop with a local instance.
The second, perhaps more important, issue is the exceptions I am facing in an unpredictable manner. These two:
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra
timeout during write query at consistency ONE (1 replica were required
but only 0 acknowledged the write)
and
com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (tried: /192.168.1.102:9042
(com.datastax.driver.core.TransportException: [/192.168.1.102:9042]
Connection has been closed), /192.168.1.100:9042
(com.datastax.driver.core.TransportException: [/192.168.1.100:9042]
Connection has been closed), /192.168.1.101:9042
(com.datastax.driver.core.TransportException: [/192.168.1.101:9042]
Connection has been closed))
At the end of the day, am I doing something wrong? Should I reorganize the way I load the data, or change the schema? I tried reducing the row length (so I have 12-hour rows), but that didn't make a big difference.
==============================
Update:
I was rude and forgot to paste the example of the code I used after the question was answered. It works reasonably well; however, I am continuing my research with KairosDB and binary transfer with Astyanax. It looks like I can get better performance with them than over CQL, although KairosDB can have some issues when it is overloaded (but I am working on it) and Astyanax is a bit verbose for my taste. Nevertheless, here is the code; I may be mistaken somewhere.
The number of semaphore slots has no effect on performance above 5000; it is almost constant.
String insertQuery = "insert into keyspace.measurement (userid, time_by_hour, time, value) values (?, ?, ?, ?)";
PreparedStatement preparedStatement = session.prepare(insertQuery);
Semaphore semaphore = new Semaphore(15000);

System.out.println("Starting " + Thread.currentThread().getId());
DateTime time = DateTime.parse("2015-01-05T12:00:00");

// generating the entries
long start = System.currentTimeMillis();
for (int i = 0; i < 900000; i++) {
    BoundStatement statement = preparedStatement.bind("User1", "2015-01-05:" + time.hourOfDay().get(), time.toDate(), 500); // value not important
    semaphore.acquire(); // blocks once 15000 inserts are in flight
    ResultSetFuture resultSetFuture = session.executeAsync(statement);
    Futures.addCallback(resultSetFuture, new FutureCallback<ResultSet>() {
        @Override
        public void onSuccess(@Nullable com.datastax.driver.core.ResultSet resultSet) {
            semaphore.release();
        }

        @Override
        public void onFailure(Throwable throwable) {
            System.out.println("Error: " + throwable.toString());
            semaphore.release();
        }
    });
    time = time.plus(4); // 4 ms between entries
}
semaphore.acquire(15000); // wait for all in-flight inserts to complete
System.out.println("Finished in " + (System.currentTimeMillis() - start) + " ms");
What are your results with unlogged batches? Are you sure you want to use batch statements at all?
https://medium.com/@foundev/cassandra-batch-loading-without-the-batch-keyword-40f00e35e23e
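The advice in that link boils down to preferring plain async inserts, or many small same-partition unlogged batches, over huge batches. Since all 900k rows here share one partition key, splitting them into small chunks is straightforward; a minimal, driver-free sketch of just the chunking (the chunk size of 100 and the class name are hypothetical; the actual session.execute calls are omitted):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Splits a list of bound statements into fixed-size chunks; each chunk
    // would become one small UNLOGGED batch instead of one ~65k-statement batch.
    static <T> List<List<T>> chunk(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> statements = new ArrayList<>();
        for (int i = 0; i < 900_000; i++) statements.add(i); // stand-ins for BoundStatements
        List<List<Integer>> batches = chunk(statements, 100);
        System.out.println(batches.size());        // 9000
        System.out.println(batches.get(0).size()); // 100
    }
}
```

Each of the 9000 chunks can then be submitted through the same semaphore-throttled executeAsync pattern shown in the update above.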