PySpark error: "Input path does not exist"
I am new to Spark and I am writing my code in Python.
Following my "Learning Spark" guide exactly, I read that "You don't need to have Hadoop installed to run Spark."
Yet when I simply try to count the lines of a file with PySpark, I get the error below. What am I missing?
>>> lines = sc.textFile("README.md")
15/02/01 13:27:12 INFO MemoryStore: ensureFreeSpace(32728) called with curMem=0, maxMem=278019440
15/02/01 13:27:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 265.1 MB)
>>> lines.count()
15/02/01 13:27:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/01 13:27:18 WARN LoadSnappy: Snappy native library not loaded
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 847, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 838, in sum
    return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 759, in reduce
    vals = self.mapPartitions(func).collect()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 723, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
        at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:305)
        at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)
>>> lines.first()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 1167, in first
    return self.take(1)[0]
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 1126, in take
    totalParts = self._jrdd.partitions().size()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o20.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:50)
        at org.apache.spark.api.java.JavaRDD.partitions(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)
>>>
I haven't tried running Spark on Windows, but to me the problem is right here:

py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md

You have to reference the file you want to load correctly. If you run pyspark from the Spark folder (i.e. C:\spark), then lines = sc.textFile("README.md") is correct. However, if you run pyspark from bin (i.e. C:\spark\bin), you have to reference it as lines = sc.textFile("../README.md"), or use the file's absolute path.
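As a minimal sketch of the absolute-path approach (the README.md location below is an assumption based on the install path in the question, so adjust it to where your file actually lives):

from pyspark import SparkContext

# In the pyspark shell `sc` already exists; getOrCreate (Spark 1.4+) reuses it
# or creates a new context when run as a standalone script.
sc = SparkContext.getOrCreate()

# Assumed location of README.md, one level above bin\ in the question's layout.
readme = r"C:\Spark\spark-1.1.0-bin-hadoop1\README.md"

# An explicit file:/// URI with forward slashes points at the local filesystem
# regardless of which directory the shell was started from.
lines = sc.textFile("file:///" + readme.replace("\\", "/"))
print(lines.count())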
A bit late to the party. I ran into a similar problem (an EC2 Spark cluster). In my case HDFS didn't have the file I was looking for, so I had to add the file manually with the following command:
~/ephemeral-hdfs/bin/hadoop fs -put /dir/filename.txt filename.txt
Hope this helps.
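For completeness, a hedged sketch of reading the file back once it is in HDFS. The path below is an assumption: `fs -put filename.txt filename.txt` usually lands in the invoking user's HDFS home directory, so verify it first with `hadoop fs -ls`.

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Assumed HDFS location after the put; with no authority in the URI,
# Spark uses the cluster's default filesystem.
lines = sc.textFile("hdfs:///user/root/filename.txt")
print(lines.count())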
Here is the solution for this error, which I hit on a Spark cluster hosted in Azure, on Windows:

Load the raw HVAC.csv file and parse it using the function:

data = sc.textFile("wasb:///HdiSamples/SensorSampleData/hvac/HVAC.csv")

We use (wasb:///) to let Hadoop access the Azure Blob storage file; the three slashes are a relative reference to the running node's container folder.

For example, if the path of your file in File Explorer on the Spark cluster dashboard is:

sflcc1\sflccspark1\HdiSamples\SensorSampleData\hvac

then the path is described as follows: sflcc1 is the name of the storage account, and sflccspark1 is the cluster node name.

So we refer to the current cluster node with the relative three slashes.

Hope this helps.
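If you prefer not to depend on the cluster's default container, a fully qualified wasb URI can be used instead. The container and storage-account names below are placeholders lifted from the example above, not verified values:

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Fully qualified form: wasb://<container>@<storage-account>.blob.core.windows.net/<path>
# "sflccspark1" and "sflcc1" are illustrative placeholders.
uri = "wasb://sflccspark1@sflcc1.blob.core.windows.net/HdiSamples/SensorSampleData/hvac/HVAC.csv"

data = sc.textFile(uri)
# Naive parse: split each CSV line into fields (no quoting or escaping handled).
rows = data.map(lambda line: line.split(","))
print(rows.first())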
I ran into the same problem and solved it as follows:
scala> val textFile = spark.read.textFile("file:///usr/local/spark-3.1.2/README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]
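A rough PySpark equivalent of that Scala snippet, assuming the same install path (adjust the file:/// path to wherever your README.md actually lives):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-readme").getOrCreate()

# The explicit file:/// scheme forces the local filesystem rather than HDFS.
text_df = spark.read.text("file:///usr/local/spark-3.1.2/README.md")
print(text_df.count())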