Errors in command line compilation of Spark WordCount

I am trying to compile and run a WordCount program in Scala from the command line, without any Maven or sbt support. The command I am using to compile the Scala program is:

scalac -classpath /spark-2.3.0-bin-hadoop2.7/jars/ Wordcount.scala

import org.apache.spark._
import org.apache.spark.SparkConf

/** Create a RDD of lines from a text file, and keep count of
 *  how often each word appears.
 */
object wordcount {

  def main(args: Array[String]) {
      // Set up a SparkContext named WordCount that runs locally using
      // all available cores.
      val conf = new SparkConf().setAppName("WordCount")
      conf.setMaster("local[*]")
      val sc = new SparkContext(conf)

      // Read the input file (path given as the first argument), split
      // each line into words, and count how often each word appears.
      val counts = sc.textFile(args(0))
        .flatMap(_.split("\\s+"))
        .map(word => (word, 1))
        .reduceByKey(_ + _)
      counts.collect().foreach(println)

      sc.stop()
  }
}

My research: I checked the Spark source code and found that the imported classes do exist in the jars that ship with the distribution. For example, SparkConf lives in the package org.apache.spark that the program refers to:

https://github.com/apache/spark/blob/v2.3.1/core/src/main/scala/org/apache/spark/SparkConf.scala
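
One way to confirm this locally (a sketch, assuming the standard spark-core jar name in this distribution; Spark 2.3.0 ships Scala 2.11 builds) is to list the class file inside the jar:

unzip -l /spark-2.3.0-bin-hadoop2.7/jars/spark-core_2.11-2.3.0.jar | grep SparkConf

If the class shows up there, the jar itself is not missing, so the problem must be how the jars get onto the compiler's classpath.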

The errors I am facing:

Wordcount.scala:3: error: object apache is not a member of package org
import org.apache.spark._
           ^
Wordcount.scala:4: error: object apache is not a member of package org
import org.apache.spark.SparkConf
           ^
Wordcount.scala:14: error: not found: type SparkConf
      val conf = new SparkConf().setAppName("WordCount")
                     ^
Wordcount.scala:16: error: not found: type SparkContext
      val sc = new SparkContext(conf)
                   ^
four errors found

Try this:

scalac -classpath "/spark-2.3.0-bin-hadoop2.7/jars/*" Wordcount.scala

The scalac command in your question is the problem. If you want to pick up all the jars in a directory and put them on the classpath, you need the * wildcard character, and the path must be wrapped in double quotes. The quotes keep the shell from expanding * itself, so the compiler receives the literal wildcard and expands it to every jar in the directory; without the wildcard, the classpath entry is just the directory, which only makes loose .class files visible, not the jars inside it.
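
Once it compiles, the same wildcard classpath can be used to run the program. A minimal end-to-end sketch (assuming the layout above, a Linux/macOS shell, and a hypothetical input file input.txt; the Spark jars directory already bundles a matching scala-library, so plain java is enough, and the trailing . adds the freshly compiled classes to the classpath):

scalac -classpath "/spark-2.3.0-bin-hadoop2.7/jars/*" Wordcount.scala
java -cp "/spark-2.3.0-bin-hadoop2.7/jars/*:." wordcount input.txt

On Windows, replace the : separator with ;.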

For details, refer to: Including all the jars in a directory within the Java classpath