Can't start local instance of Spark-Jobserver
So I'm trying to bring up a local instance of spark-jobserver to test jobs against, but I can't even get it to run.
The first thing I do when I ssh into the vagrant instance is start Spark. I know Spark is working because I can submit jobs to it with the spark-submit utility it ships with. Then I go to my local clone of spark-jobserver and run
vagrant@cassandra-spark:~/spark-jobserver$ sudo sbt
[info] Loading project definition from /home/vagrant/spark-jobserver/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/home/vagrant/spark-jobserver/)
> reStart /home/vagrant/spark-jobserver/config/local.conf
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 35 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 6 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-extras/target
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 8 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 7 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 9 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 2 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] Application job-server not yet started
[info] Starting application job-server in the background ...
job-server Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[warn] No main class detected
[info] Application job-server-extras not yet started
[info] Starting application job-server-extras in the background ...
job-server-extras Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server-extras[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[success] Total time: 6 s, completed Jun 12, 2015 2:28:32 PM
> job-server-extras[ERROR] log4j:WARN No appenders could be found for logger (spark.jobserver.JobServer$).
job-server-extras[ERROR] log4j:WARN Please initialize the log4j system properly.
job-server-extras[ERROR] log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>
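(An aside on the `MaxPermSize` warning in the log above: it is harmless. Java 8 removed the permanent generation, so the JVM just ignores the flag; Metaspace is the replacement knob on Java 8. A sketch of silencing it, assuming sbt picks up JVM flags from `SBT_OPTS` on this box:)

```shell
# Java 8 dropped PermGen, so -XX:MaxPermSize is ignored; Metaspace is the
# Java 8 replacement. Exporting SBT_OPTS is one common way to pass JVM
# flags to sbt; adjust to however sbt is launched on your machine.
export SBT_OPTS="-Xmx2g -XX:MaxMetaspaceSize=256m"
echo "$SBT_OPTS"
```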
In another terminal, I ssh into the vagrant instance and run
vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars
The requested resource could not be found.
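(When an upload fails like this, it can help to first confirm the server is up at all and see what its top-level routes return. A sketch using endpoint names from the spark-jobserver README, assuming the server listens on port 8090 as configured:)

```shell
# Probe the jobserver's top-level REST routes with plain GETs.
# The commands are echoed here; uncomment the curl line on the vagrant box.
BASE="http://localhost:8090"
ENDPOINTS="jars contexts jobs"
for ep in $ENDPOINTS; do
  echo "GET ${BASE}/${ep}"
  # curl -s "${BASE}/${ep}"
done
```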
Here's what's in my config/local.conf:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = "spark://192.168.10.11:7077"
  # master = "mesos://vm28-hulk-pub:5050"
  # master = "yarn-client"

  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 1

  # predefined Spark contexts
  # contexts {
  #   my-low-latency-context {
  #     num-cpu-cores = 1        # Number of cores to allocate. Required.
  #     memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, 1G, etc.
  #   }
  #   # define additional contexts here
  # }

  # universal context configuration. These settings can be overridden, see README.md
  context-settings {
    num-cpu-cores = 1        # Number of cores to allocate. Required.
    memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, 1G, etc.
    spark.cassandra.connection.host = "127.0.0.1"

    # in case spark distribution should be accessed from HDFS (as opposed to being installed on every mesos slave)
    # spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"

    # uris of jars to be loaded into the classpath for this context. Uris is a string list, or a string separated by commas ','
    dependent-jar-uris = ["file:///home/vagrant/lib/spark-cassandra-connector-assembly-1.3.0-M2-SNAPSHOT.jar"]

    # If you wish to pass any settings directly to the sparkConf as-is, add them here in passthrough,
    # such as hadoop connection settings that don't use the "spark." prefix
    passthrough {
      #es.nodes = "192.1.1.1"
    }
  }

  # This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
  home = "/home/vagrant/spark"
}

# Note that you can use this file to define settings not only for job server,
# but for your Spark jobs as well. Spark job configuration merges with this configuration file as defaults.
Figured out the problem: the server was starting fine (though it wasn't logging properly).
The problem was that I was missing a "/" at the end of the path I passed to curl.
So to fix it, change the curl statement to:
vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars/
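(For completeness, the spark-jobserver README's own examples name the app explicitly in the upload path, and then start a job against that name. A sketch of the full flow; the app name `CassSparkTest` and the job class are hypothetical placeholders, and the commands are echoed rather than executed so the sketch works without a live server:)

```shell
# Full submit flow against a running jobserver: upload the assembly jar
# under an app name, then POST a job referencing that name and a job class.
BASE="http://localhost:8090"
APP="CassSparkTest"                   # hypothetical app name
JAR="CassSparkTest-assembly-1.0.jar"
CLASS="sqljob.SQLJob"                 # hypothetical job class

UPLOAD="curl --data-binary @${JAR} ${BASE}/jars/${APP}"
RUN="curl -d '' '${BASE}/jobs?appName=${APP}&classPath=${CLASS}'"
echo "$UPLOAD"
echo "$RUN"
```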