Py4JJavaError: An error occurred while calling o26.parquet. (Reading Parquet file)
I'm trying to read a Parquet file in PySpark but I'm getting a Py4JJavaError. I even tried reading the same file from spark-shell and that works. I can't figure out what I'm doing wrong here, given that the read succeeds in Scala but fails through the PySpark Python API:
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("test-read").getOrCreate()
sdf = spark.read.parquet("game_logs.parquet")
Stack trace:
Py4JJavaError Traceback (most recent call last)
<timed exec> in <module>()
~/pyenv/pyenv/lib/python3.6/site-packages/pyspark/sql/readwriter.py in parquet(self, *paths)
301 [('name', 'string'), ('year', 'int'), ('month', 'int'), ('day', 'int')]
302 """
--> 303 return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
304
305 @ignore_unicode_prefix
~/pyenv/pyenv/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
~/pyenv/pyenv/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~/pyenv/pyenv/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o26.parquet.
: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$$anonfun$visitMethodInsn.apply(ClosureCleaner.scala:449)
at org.apache.spark.util.FieldAccessFinder$$anon$$anonfun$visitMethodInsn.apply(ClosureCleaner.scala:432)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$$anonfun$foreach.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$$anonfun$foreach.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon.visitMethodInsn(ClosureCleaner.scala:432)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean.apply(ClosureCleaner.scala:262)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean.apply(ClosureCleaner.scala:261)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2299)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.rdd.RDD$$anonfun$collect.apply(RDD.scala:939)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.mergeSchemasInParallel(ParquetFileFormat.scala:611)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.inferSchema(ParquetFileFormat.scala:241)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun.apply(DataSource.scala:202)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun.apply(DataSource.scala:202)
at scala.Option.orElse(Option.scala:289)
at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:201)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:392)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:844)
Environment info:
Spark version 2.3.1
Using Scala version 2.11.8,
Java HotSpot(TM) 64-Bit Server VM, 1.8.0_172
Python 3.6.5
PySpark 2.3.1
I figured out what exactly was going wrong: spark-shell was using Java 1.8, while PySpark was picking up Java 10.1. There are known issues between Java 9/10 and Spark, so I changed the default Java version to 1.8.
Spark runs on Java 8/11.
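To confirm which JVM PySpark actually launched, you can ask the running session through the Py4J gateway. A minimal diagnostic sketch, using the same local session as in the question:

import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("test-read").getOrCreate()

# Version of the JVM backing this PySpark session (queried over the Py4J gateway)
print(spark.sparkContext._jvm.java.lang.System.getProperty("java.version"))

# What the driver environment points at, if anything
print(os.environ.get("JAVA_HOME"))

If the first line prints 10.x while spark-shell reports 1.8.x, you are hitting exactly this mismatch.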
To switch between Java versions, you can add this to your .bashrc/.zshrc file:
alias j='f(){ export JAVA_HOME=$(/usr/libexec/java_home -v "$1"); }; f'
Then in your terminal:
source .zshrc
j 1.8
java -version
This changes the version system-wide. If you only want a different version for a single application, you can prefix its command with the JAVA_HOME environment variable:
JAVA_HOME=$(/usr/libexec/java_home -v 1.8) jupyter notebook
or set it from inside the notebook itself with the %env magic (where {path} is your Java 1.8 home):
%env JAVA_HOME {path}
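If you would rather keep it entirely in Python, a sketch of the same idea is to export JAVA_HOME from the driver process before the first SparkSession is created, since that is when the JVM gets launched. Note that /usr/libexec/java_home is macOS-specific; on Linux, point the variable at your JDK 8 install directly:

import os
import subprocess
from pyspark.sql import SparkSession

# Resolve the Java 1.8 home (macOS helper; adjust the path for your platform)
os.environ["JAVA_HOME"] = subprocess.check_output(
    ["/usr/libexec/java_home", "-v", "1.8"]
).decode().strip()

# The JVM is launched here, so JAVA_HOME must already be set by now
spark = SparkSession.builder.master("local").appName("test-read").getOrCreate()
sdf = spark.read.parquet("game_logs.parquet")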
Java version:
openjdk version "1.8.0_275"
OpenJDK Runtime Environment (build 1.8.0_275-b01)
OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
Python version:
Python 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
PySpark version:
3.0.1
(Note: this version is the key.)
This combination works well.
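To double-check that your own setup matches a working combination like this one, a quick sketch of the relevant version checks from Python (only standard sys/pyspark attributes are used):

import os
import sys
import pyspark

print("Python   :", sys.version)
print("PySpark  :", pyspark.__version__)
print("JAVA_HOME:", os.environ.get("JAVA_HOME"))

Running java -version in the same shell should report 1.8.x to match the Java version shown above.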