Adding spark-csv package in PyCharm IDE
I have successfully loaded the spark-csv library in Python standalone mode via
$ --packages com.databricks:spark-csv_2.10:1.4.0
When the above command is run, it creates two folders (jars and cache) at this location:
C:\Users\Mahima\.ivy2
Inside there are two folders. One of them contains these jar files: org.apache.commons_commons-csv-1.1.jar, com.univocity_univocity-parsers-1.5.1.jar, com.databricks_spark-csv_2.10-1.4.0.jar
I want to load this library in PyCharm (Windows 10), which is already set up to run Spark programs, so I added the .ivy2 folder to the Project Interpreter Path.
The error I get is essentially:
An error occurred while calling o22.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
The full error log is as follows:
16/06/27 12:54:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
File "C:/Users/Mahima/PycharmProjects/wordCount/wordCount.py", line 10, in <module>
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('flight.csv')
File "C:\spark-1.6.1-bin-hadoop2.4\python\pyspark\sql\readwriter.py", line 137, in load
return self._df(self._jreader.load(path))
File "C:\spark-1.6.1-bin-hadoop2.4\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
File "C:\spark-1.6.1-bin-hadoop2.4\python\pyspark\sql\utils.py", line 45, in deco
return f(*a, **kw)
File "C:\spark-1.6.1-bin-hadoop2.4\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o22.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.csv.DefaultSource
at java.net.URLClassLoader.run(Unknown Source)
at java.net.URLClassLoader.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$$anonfun$apply.apply(ResolvedDataSource.scala:62)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$$anonfun$apply.apply(ResolvedDataSource.scala:62)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun.apply(ResolvedDataSource.scala:62)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun.apply(ResolvedDataSource.scala:62)
at scala.util.Try.orElse(Try.scala:82)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:62)
... 14 more
Process finished with exit code 1
I have already added the jars to the Project Interpreter Path. Where am I going wrong? Please suggest a solution.
Thanks in advance.
Calling sqlContext.read.format('com.databricks.spark.csv') in the console does not guarantee that the package is actually installed until you run a real command against it. In fact, the call
sqlContext.read.format('com.dummy.csv')
does not return any error either.
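To illustrate (a minimal sketch, assuming an existing sqlContext): the data source is only looked up when load() runs, so the failure surfaces there, not at format().
reader = sqlContext.read.format('com.dummy.csv')   # no error yet: only the source name is recorded
df = reader.load('flight.csv')                      # ClassNotFoundException is raised here if the source jar is missing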
You can add the package to your Spark context:
sc.addPyFile("com.databricks_spark-csv_2.10-1.4.0.jar")
You can also open a CSV file in one line, with no package needed:
sc.textFile("file.csv").map(lambda line: line.split(",")).toDF()
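A slightly fuller sketch of that approach (the file name and column handling are illustrative; the naive split does not handle quoted fields):
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="csvWithoutPackage")
sqlContext = SQLContext(sc)                          # creating it also enables rdd.toDF()

lines = sc.textFile("file.csv")
header = lines.first()
rows = (lines.filter(lambda line: line != header)    # drop the header row
             .map(lambda line: line.split(",")))     # naive split, no quoted-field handling
df = rows.toDF(header.split(","))                    # reuse the header fields as column names
df.show()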
The solution is to add an environment variable named "PYSPARK_SUBMIT_ARGS" and set its value to "--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell". It will work fine.
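If you prefer to keep everything in the script instead of the PyCharm run configuration, a sketch of the same idea is to set the variable before pyspark launches its JVM gateway (the app name here is illustrative):
import os
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell"

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="csvWithPackage")          # the JVM starts here and picks up PYSPARK_SUBMIT_ARGS
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('flight.csv')
df.show()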