Spark Cassandra SQL can't perform DataFrame methods on query results

So I have a Spark-Cassandra cluster and I am trying to execute SQL queries against it. I build a jar with sbt assembly and then submit it with spark-submit. This works fine when I am not using spark-sql. When I use spark-sql I get an error; below is the output:

2
CassandraRow{key: key1, value: 1}
3.0
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.trees.LeafNode$class.children(Lorg/apache/spark/sql/catalyst/trees/LeafNode;)Lscala/collection/Seq;
    at org.apache.spark.sql.cassandra.CassandraTableScan.children(CassandraTableScan.scala:19)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$$anonfun$apply.apply(TreeNode.scala:280)
    at scala.collection.TraversableLike$$anonfun$map.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map.apply(TraversableLike.scala:244)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun.apply(TreeNode.scala:279)
    at scala.collection.Iterator$$anon.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenUp(TreeNode.scala:292)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:247)
    at org.apache.spark.sql.execution.AddExchange.apply(Exchange.scala:128)
    at org.apache.spark.sql.execution.AddExchange.apply(Exchange.scala:124)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$$anonfun$apply.apply(RuleExecutor.scala:61)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$$anonfun$apply.apply(RuleExecutor.scala:59)
    at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
    at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
    at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:34)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply.apply(RuleExecutor.scala:59)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply.apply(RuleExecutor.scala:51)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:1085)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:1085)
    at org.apache.spark.sql.DataFrame.rdd(DataFrame.scala:889)
    at org.apache.spark.sql.DataFrame.foreach(DataFrame.scala:797)
    at CassSparkTest$.main(CassSparkTest.scala:22)
    at CassSparkTest.main(CassSparkTest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Here is the Scala code for the job; it is very simple:

import org.apache.spark.{SparkContext, SparkConf}
import com.datastax.spark.connector._
import org.apache.spark.sql.cassandra.CassandraSQLContext
import org.apache.spark.sql._

object CassSparkTest {
  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext("spark://192.168.10.11:7077", "test", conf)

    // Plain RDD API through the connector -- all three calls succeed.
    val rdd = sc.cassandraTable("test", "kv")
    println(rdd.count)
    println(rdd.first)
    println(rdd.map(_.getInt("value")).sum)

    // Spark SQL through the connector's CassandraSQLContext -- the foreach
    // below is where the stack trace above originates (CassSparkTest.scala:22).
    val sqlC = new CassandraSQLContext(sc)

    val sqlText = "SELECT * FROM test.kv"
    val df = sqlC.sql(sqlText)
    df.show()
    df.foreach(println)
  }
}

As you can see, Spark successfully creates an RDD with sc.cassandraTable("test", "kv") and is able to get the count, the first row, and the sum.
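Since the plain RDD path works while the SQL path dies with a NoSuchMethodError deep inside Catalyst, which usually points at a Spark binary-version mismatch, a quick sanity check is to print the version the driver is actually running. This snippet is an addition for illustration, not part of the original job; sc.version is part of the public SparkContext API:

// Print the Spark version the driver is actually running; it should match
// the version the connector fat jar in lib/ was built against.
println(s"Runtime Spark version: ${sc.version}")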

Here is what I get when I run in cqlsh the same query that I am trying to run through spark-sql:

cqlsh> select * from test.kv;

 key  | value
------+-------
 key1 |     1
 key2 |     2

(2 rows)

Here is the build.sbt file. A fat jar containing spark-cassandra-connector is kept in the lib folder, so sbt automatically adds it to the classpath as an unmanaged dependency (I don't think the build file is the problem, given that I was able to create an RDD from a C* table and call methods on it):

lazy val root = (project in file(".")).
        settings(
                name := "CassSparkTest",
                version := "1.0"
        )
libraryDependencies ++= Seq(
        "com.datastax.cassandra" % "cassandra-driver-core" % "2.1.5" % "provided",
        "org.apache.cassandra" % "cassandra-thrift" % "2.1.5" % "provided",
        "org.apache.cassandra" % "cassandra-clientutil" % "2.1.5" % "provided",
        //"com.datastax.spark" %% "spark-cassandra-connector" % "1.3.0-M1"  % "provided",
        "org.apache.spark" %% "spark-core" % "1.3.0" % "provided",
        "org.apache.spark" %% "spark-streaming" % "1.3.0" % "provided",
        "org.apache.spark" %% "spark-sql" % "1.3.0" % "provided"
)
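For completeness, the sbt assembly step implies the sbt-assembly plugin is on the build. A minimal project/plugins.sbt might look like the sketch below; the plugin version is an assumption, so pick whichever release matches your sbt version:

// project/plugins.sbt -- provides the `assembly` task used to build the job jar
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")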

Try it with Spark 1.3.1.

Check the correct versions in the Spark connector's Versions.scala. A NoSuchMethodError like this one almost always means a binary mismatch: the connector fat jar was compiled against a different Spark version than the one the cluster is actually running, so the Spark artifacts in the build and the connector build need to agree.
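A minimal sketch of an aligned dependency block, assuming the unmanaged fat jar is removed from lib and the connector is pulled in as a managed dependency instead. The 1.3.0-M1 line is the one already commented out in the build above; whether it is binary compatible with Spark 1.3.1 should be confirmed against Versions.scala:

// build.sbt -- every Spark artifact pinned to the version the cluster runs
libraryDependencies ++= Seq(
  // managed connector dependency instead of the unmanaged fat jar in lib/
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.3.0-M1",
  "org.apache.spark" %% "spark-core"      % "1.3.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.3.1" % "provided",
  "org.apache.spark" %% "spark-sql"       % "1.3.1" % "provided"
)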