Oozie S3 as job folder
Oozie fails with the error below when workflow.xml is served from S3, but the same setup works when workflow.xml is served from HDFS.
It also works on earlier versions of Oozie. Has anything changed in Oozie 4.3?
Environment:
- HDP 3.1.0
- Oozie 4.3.1
oozie.service.HadoopAccessorService.supported.filesystems=*
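(For context, this property lives in oozie-site.xml. A minimal sketch of how it would look there; the explicit whitelist below is an alternative to "*", which enables all schemes:)
<!-- oozie-site.xml: filesystem schemes Oozie may read workflow applications from.
     "*" allows every scheme; listing only the ones in use is narrower. -->
<property>
    <name>oozie.service.HadoopAccessorService.supported.filesystems</name>
    <value>hdfs,s3a</value>
</property>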
Job.properties
nameNode=hdfs://ambari-master-1a.xdata.com:8020
jobTracker=ambari-master-2a.xdata.com:8050
queue=default
#OOZIE job details
basepath=s3a://mybucket/test/oozie
oozie.use.system.libpath=true
oozie.wf.application.path=${basepath}/jobs/test-hive
# (the job works with this change in Job.properties)
basepath=hdfs://ambari-master-1a.xdata.com:8020/test/oozie
workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.5" name="test-hive">
    <start to="hive-query"/>
    <action name="hive-query" retry-max="2" retry-interval="10">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>test_hive.sql</script>
        </hive>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>job failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
Error:
org.apache.oozie.action.ActionExecutorException: UnsupportedOperationException: Accessing local file system is not allowed
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
at org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1100)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1214)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1502)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:241)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
at org.apache.oozie.command.XCommand.call(XCommand.java:287)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException: Accessing local file system is not allowed
at org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
at org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$000(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:435)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:975)
at org.apache.oozie.action.hadoop.LauncherMapperHelper.setupLauncherInfo(LauncherMapperHelper.java:156)
at org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1040)
This is caused by the way Oozie protects itself against CVE-2017-15712. If you remove Oozie's dummy implementation of RawLocalFileSystem, this will work for you. If you don't want to recompile, you can find the class file in the distribution and simply delete it. Be aware that your Oozie server is then vulnerable to CVE-2017-15712.
See this issue: OOZIE-3529. Deleting RawLocalFileSystem.class from Oozie's WEB-INF/classes (under org/apache/hadoop/fs/, per the package in the stack trace) and restarting Oozie is a temporary workaround; you should upgrade Oozie to 5.2.0.
By the way, could you share your S3-related configuration? How do you configure the S3 endpoint and access/secret keys so that Oozie can reach S3? I'm running into that problem myself.
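(For anyone with the same question: S3A access is typically configured through the standard Hadoop S3A properties in core-site.xml, which Oozie picks up from its Hadoop configuration directory. A minimal sketch; the endpoint and keys below are placeholders, not values from the original poster:)
<!-- core-site.xml: minimal S3A access configuration (all values are placeholders). -->
<property>
    <name>fs.s3a.endpoint</name>
    <value>s3.us-east-1.amazonaws.com</value>
</property>
<property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
</property>
<property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
</property>
For anything beyond a test cluster, keeping the keys in a Hadoop credential provider (hadoop.security.credential.provider.path) is preferable to plaintext XML.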