AWS glue pyspark: java.lang.NoClassDefFoundError: org/jets3t/service/ServiceException
I am trying to read a CSV file from S3 in my AWS Glue PySpark script.
Here is the code snippet:
import sys
import os
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
argList = ['config']
args = getResolvedOptions(sys.argv,argList)
print(f"The config path is: {args['config']}")
sc = SparkContext.getOrCreate()
sch = sc._jsc.hadoopConfiguration()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
# Override the s3:// scheme to use Hadoop's NativeS3FileSystem
# (this implementation depends on the jets3t library)
sch.set("fs.s3.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")
sch.set("fs.s3.canned.acl","BucketOwnerFullControl")
source_path_url = "s3://bucket/folder"
df = spark.read.option("header", "true").option("inferSchema", "true").csv(source_path_url)
On execution, I get the following error:
: java.lang.NoClassDefFoundError: org/jets3t/service/ServiceException
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.createDefaultStore(NativeS3FileSystem.java:343)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:333)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2859)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2896)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2878)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:392)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:615)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.jets3t.service.ServiceException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 24 more
Do I need to supply the jets3t jar to Glue myself? If so, why is that necessary, given that these jars are provided automatically by Glue at runtime for Scala Spark jobs?
I found the solution. As I suspected in the original post, you need to download the jets3t jar externally and store it in an S3 location.
After that, you can set the S3 path to the stored jar in the Job parameters section of the Glue job, like this:
Key: "--extra-jars"
Value: "s3_path_to_jets3t_jar"
Alternatively, you can set the path to the jar in the Dependent jars path section of the Glue job.
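If you prefer to apply the same setting programmatically rather than through the console, a minimal sketch with boto3 is shown below. The bucket path and job name are placeholder assumptions; only the `--extra-jars` key is the actual Glue job parameter. The AWS call itself is left commented out since it requires credentials and an existing job.

```python
import json

# Hypothetical values -- replace with your own jar location and job name.
JAR_S3_PATH = "s3://my-bucket/jars/jets3t-0.9.4.jar"
JOB_NAME = "my-glue-job"

# The special job parameter Glue reads to add jars to the driver/executor classpath.
default_arguments = {"--extra-jars": JAR_S3_PATH}

# With boto3, the parameter would be applied roughly like this
# (commented out: needs AWS credentials and an existing Glue job):
#
# import boto3
# glue = boto3.client("glue")
# job = glue.get_job(JobName=JOB_NAME)["Job"]
# glue.update_job(
#     JobName=JOB_NAME,
#     JobUpdate={
#         "Role": job["Role"],
#         "Command": job["Command"],
#         "DefaultArguments": {**job.get("DefaultArguments", {}),
#                              **default_arguments},
#     },
# )

print(json.dumps(default_arguments))
```

The same key/value pair can equally be supplied per-run via `start_job_run(Arguments=...)` instead of baking it into the job definition.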