Issue building Apache Spark with Avro
I'm trying to build Spark from the master branch using ./build/sbt clean package.
I want to test something specific to the spark-avro submodule. However, when I run ./bin/spark-shell
and try:
scala> import org.apache.spark.sql.avro._
I get object avro is not a member of package org.apache.spark.sql
Am I missing a build argument for testing spark-avro? I couldn't find much in the documentation.
test@tests/spark$ ./bin/spark-shell
NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
20/02/21 14:28:52 WARN Utils: Your hostname, pascals resolves to a loopback address: 127.0.1.1; using 192.168.0.11 instead (on interface enp0s31f6)
20/02/21 14:28:52 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/02/21 14:28:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://192.168.0.11:4040
Spark context available as 'sc' (master = local[*], app id = local-1582291738090).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.0.0-SNAPSHOT
      /_/
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_242)
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.apache.spark.sql.avro.SchemaConverters
<console>:23: error: object avro is not a member of package org.apache.spark.sql
import org.apache.spark.sql.avro.SchemaConverters
Any help is appreciated!
./bin/spark-shell --packages org.apache.spark:spark-avro_2.12:3.0.0-SNAPSHOT
is the way to go, as documented here.
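For example (a minimal sketch, assuming the shell was started with the --packages option above; the record schema is purely illustrative and not from the original post), the import then resolves and SchemaConverters can map an Avro schema to the corresponding Spark SQL type:

scala> import org.apache.spark.sql.avro.SchemaConverters
scala> import org.apache.spark.sql.avro.functions.{from_avro, to_avro}

scala> // Parse an illustrative Avro record schema and convert it to a Spark SQL type
scala> val avroSchema = new org.apache.avro.Schema.Parser().parse(
     |   """{"type":"record","name":"User","fields":[{"name":"name","type":"string"}]}""")

scala> SchemaConverters.toSqlType(avroSchema).dataType  // a StructType with a single string field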