zeppelin notebook "error: not found: value %"
According to the docs, I should use %dep to load the spark-csv jar, but I get error: not found: value %.
Does anyone know what I'm missing?
%spark
val a = 1
%dep
z.reset()
z.addRepo("Spark Packages Repo").url("http://dl.bintray.com/spark-packages/maven")
z.load("com.databricks:spark-csv_2.10:1.2.0")
a: Int = 1
<console>:28: error: not found: value %
%dep
^
In the zeppelin log I see:
INFO [2016-04-21 11:44:19,300] ({pool-2-thread-11} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1461228259278 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter1173192611
INFO [2016-04-21 11:44:19,678] ({pool-2-thread-4} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1461228259678 started by scheduler org.apache.zeppelin.spark.SparkInterpreter1173192611
INFO [2016-04-21 11:44:19,704] ({pool-2-thread-4} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1461228259678 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter1173192611
INFO [2016-04-21 11:44:36,968] ({pool-2-thread-12} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1461228276968 started by scheduler 1367682354
INFO [2016-04-21 11:44:36,969] ({pool-2-thread-12} RReplInterpreter.scala[liftedTree1]:41) - intrpreting %dep
z.reset()
z.addRepo("Spark Packages Repo").url("http://dl.bintray.com/spark-packages/maven")
z.load("com.databricks:spark-csv_2.10:1.2.0")
ERROR [2016-04-21 11:44:36,975] ({pool-2-thread-12} RClient.scala[eval]:79) - R Error .zreplout <- rzeppelin:::.z.valuate(.zreplin) <text>:1:1: unexpected input
1: %dep
^
INFO [2016-04-21 11:44:36,978] ({pool-2-thread-12} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1461228276968 finished by scheduler 1367682354
INFO [2016-04-21 11:45:22,157] ({pool-2-thread-8} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1461228322157 started by scheduler org.apache.zeppelin.spark.SparkInterpreter1173192611
Each cell can hold only one interpreter. So, in order to use both %dep and %spark, you should separate them into two cells, beginning with the %dep one, after restarting the spark interpreter so that it is taken into account. For example:
In the first cell:
%dep
z.reset()  // clear any previously loaded repositories and artifacts
z.addRepo("Spark Packages Repo").url("http://dl.bintray.com/spark-packages/maven")  // register the Spark Packages repo
z.load("com.databricks:spark-csv_2.10:1.2.0")  // fetch spark-csv and its transitive dependencies
Now that your dependencies are loaded, you can access the spark interpreter in a different cell:
%spark
val a = 1
PS: By default, cells run with the spark interpreter, so you don't need to use %spark explicitly.
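For instance, once spark-csv is loaded via %dep, a plain cell with no directive at all can use it as a data source. This is just a sketch against the spark-csv 1.x API; the file path below is a hypothetical placeholder:
// No %spark needed: this cell runs on the default spark interpreter,
// where Zeppelin predefines sqlContext.
val df = sqlContext.read
  .format("com.databricks.spark.csv")  // data source registered by spark-csv
  .option("header", "true")            // treat the first line as column names
  .load("/tmp/people.csv")             // hypothetical path, replace with your file
df.printSchema()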