Error while trying to connect to a Cassandra database using Spark Streaming

I'm working on a project that uses Spark Streaming, Apache Kafka, and Cassandra, with the streaming-kafka integration. On the Kafka side I have a producer that sends data with this configuration:

props.put("metadata.broker.list", KafkaProperties.ZOOKEEPER);
props.put("bootstrap.servers", KafkaProperties.SERVER);
props.put("client.id", "DemoProducer");

where ZOOKEEPER = localhost:2181 and SERVER = localhost:9092.
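
For completeness, a minimal sketch of such a producer might look like the following; the topic name and the serializer settings are placeholders I've assumed (they are not part of the snippet above), and it uses the new producer API, which only needs bootstrap.servers:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DemoProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // KafkaProperties.SERVER
        props.put("client.id", "DemoProducer");
        // The producer API also needs serializers; string serializers are assumed here.
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // "test-topic" is a placeholder topic name.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key-1", "some payload"));
        }
    }
}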

After sending the data, I can receive and consume it with Spark. My Spark configuration is:

SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "localhost");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
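
The receiving side is created with the streaming-kafka integration, roughly like the following sketch (the topic and consumer-group names are placeholders, not my real ones):

import java.util.Collections;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.kafka.KafkaUtils;

// Receiver-based Kafka stream via ZooKeeper, from spark-streaming-kafka_2.10.
JavaPairReceiverInputDStream<String, String> messages =
        KafkaUtils.createStream(jssc, "localhost:2181", "demo-group",
                Collections.singletonMap("test-topic", 1));
messages.print();

jssc.start() and jssc.awaitTermination() are called once the rest of the pipeline, including the Cassandra part below, is wired up.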

After that, I try to store this data into a Cassandra database. But when I try to open a session with this:

CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
Session session = connector.openSession();

I get the following error:

Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:182)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:70)
at org.kakfa.spark.ConsumerData.main(ConsumerData.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

As for Cassandra, I'm using the default configuration:

start_native_transport: true
native_transport_port: 9042
- seeds: "127.0.0.1"
cluster_name: 'Test Cluster'
rpc_address: localhost
rpc_port: 9160
start_rpc: true

I can connect to Cassandra from the command line with cqlsh localhost, and I get the following message:

Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
cqlsh>

I also ran nodetool status, which shows this:

http://pastebin.com/ZQ5YyDyB

To run Cassandra, I invoke bin/cassandra -f.

The code I'm trying to run is this:

try (Session session = connector.openSession()) {
    System.out.println("dentro del try");
    session.execute("DROP KEYSPACE IF EXISTS test");
    System.out.println("dentro del try - 1");
    session.execute("CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
    System.out.println("dentro del try - 2");
    session.execute("CREATE TABLE test.users (id TEXT PRIMARY KEY, name TEXT)");
    System.out.println("dentro del try - 3");
}
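
Once the keyspace and table exist, the plan is to save the stream's records through the connector's Java API, along the lines of this sketch (the User bean and usersRdd are illustrative placeholders, not code from my project):

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

// Hypothetical bean matching the test.users table created above.
public class User implements java.io.Serializable {
    private String id;
    private String name;
    public User() { }
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// usersRdd stands in for a JavaRDD<User> derived from the Kafka stream.
javaFunctions(usersRdd)
        .writerBuilder("test", "users", mapToRow(User.class))
        .saveToCassandra();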

My pom.xml file looks like this:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.6.0-M1</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.10</artifactId>
        <version>1.6.0-M2</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.10</artifactId>
        <version>1.1.0-alpha2</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.1.0-alpha2</version>
    </dependency>

    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20160212</version>
    </dependency>
</dependencies>

I don't know why I can't connect to Cassandra from Spark. Is it a configuration problem, or am I doing something wrong?

Thanks!

com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces

This error indicates an old driver talking to a newer Cassandra version: Cassandra 3.x moved schema metadata from system.schema_keyspaces into the system_schema keyspace, so a driver that still queries the old table fails with exactly this message. Looking at the POM file, the spark-cassandra-connector dependency is declared twice, once with version 1.6.0-M2 (good) and once with 1.1.0-alpha2 (old).

Remove the references to the old 1.1.0-alpha2 dependencies from your configuration:

<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.1.0-alpha2</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.1.0-alpha2</version>
</dependency>
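
That leaves only the 1.6.x connector entries already present in your POM:

<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.6.0-M1</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.6.0-M2</version>
</dependency>

If you want to confirm which versions Maven actually resolves after the change, mvn dependency:tree will show the effective dependency graph.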