Why does submitting a Spark application to Mesos fail with 'Failed to load native Mesos library'?
I get the following exception when trying to submit a Spark application to a Mesos cluster:
/home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh: line 40: export: `/usr/local/lib/libmesos.so': not a valid identifier
/home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh: line 41: export: `hdfs://spark-2.2.0-bin-hadoop2.7.tgz': not a valid identifier
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/30 14:17:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/30 14:17:31 WARN Utils: Your hostname, knoldus resolves to a loopback address: 127.0.1.1; using 192.168.0.111 instead (on interface wlp6s0)
17/09/30 14:17:31 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Failed to load native Mesos library from
java.lang.UnsatisfiedLinkError: Expecting an absolute path of the library:
at java.lang.Runtime.load0(Runtime.java:806)
at java.lang.System.load(System.java:1086)
at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:159)
at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:188)
at org.apache.mesos.MesosSchedulerDriver.<clinit>(MesosSchedulerDriver.java:61)
at org.apache.spark.scheduler.cluster.mesos.MesosSchedulerUtils$class.createSchedulerDriver(MesosSchedulerUtils.scala:104)
at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.createSchedulerDriver(MesosCoarseGrainedSchedulerBackend.scala:49)
at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.start(MesosCoarseGrainedSchedulerBackend.scala:170)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
... 47 elided
I built Spark with:
./build/mvn -Pmesos -DskipTests clean package
I have set the following properties in spark-env.sh:
export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI= hdfs://spark-2.2.0-bin-hadoop2.7.tgz
And in spark-defaults.conf:
spark.executor.uri hdfs://spark-2.2.0-bin-hadoop2.7.tgz
I have solved the issue. The problem was the spaces after the equals signs in these exports:
export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI= hdfs://spark-2.2.0-bin-hadoop2.7.tgz
For example:
export foo = bar
The shell interprets this as a request to export three names: foo, =, and bar. Since = is not a valid variable name, the command fails. The variable name, the equals sign, and the value must not be separated by spaces for the shell to treat them as a single assignment-and-export. In the lines above, MESOS_NATIVE_JAVA_LIBRARY= assigns an empty value, and /usr/local/lib/libmesos.so is then treated as a separate name to export, which produces the `not a valid identifier' errors shown in the log.
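This is easy to reproduce in a shell session (a minimal demonstration, assuming bash; the exact error wording may differ in other shells):

$ export foo = bar
bash: export: `=': not a valid identifier

Note that foo and bar are themselves valid identifiers, so only the stray = is rejected, but the intended assignment never happens.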
Remove the spaces:
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://spark-2.2.0-bin-hadoop2.7.tgz
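After correcting the file, a quick sanity check (a sketch; the path to spark-env.sh is taken from the error messages above) is to source it and echo the variables:

$ source /home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh
$ echo "$MESOS_NATIVE_JAVA_LIBRARY"
/usr/local/lib/libmesos.so
$ echo "$SPARK_EXECUTOR_URI"
hdfs://spark-2.2.0-bin-hadoop2.7.tgz

With the variables set correctly, MesosNativeLibrary receives an absolute path and the UnsatisfiedLinkError should disappear.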