Oozie: S3 as the workflow directory


Oozie fails with the error below when workflow.xml is served from S3, but the same workflow works when workflow.xml is served from HDFS. It also worked on earlier Oozie versions. Has something changed in Oozie 4.3?

Environment:

  • HDP 3.1.0
  • Oozie 4.3.1
  • oozie.service.HadoopAccessorService.supported.filesystems=*
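
For reference, the property above lives in oozie-site.xml (on HDP it is usually managed through Ambari); a minimal sketch of the entry:

<property>
    <name>oozie.service.HadoopAccessorService.supported.filesystems</name>
    <!-- "*" whitelists every filesystem scheme (hdfs, s3a, ...);
         restart the Oozie server after changing this -->
    <value>*</value>
</property>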

job.properties

nameNode=hdfs://ambari-master-1a.xdata.com:8020
jobTracker=ambari-master-2a.xdata.com:8050
queue=default
#OOZIE job details
basepath=s3a://mybucket/test/oozie
oozie.use.system.libpath=true
oozie.wf.application.path=${basepath}/jobs/test-hive

# (the workflow succeeds with this change in job.properties)

basepath=hdfs://ambari-master-1a.xdata.com:8020/test/oozie
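The job was submitted with the standard Oozie CLI; a minimal sketch, assuming the Oozie server runs on its default port 11000 on the master host named in the properties above:

# point the client at the Oozie server, then submit and start the workflow
export OOZIE_URL=http://ambari-master-1a.xdata.com:11000/oozie
oozie job -config job.properties -run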

workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.5" name="test-hive">
    <start to="hive-query"/>
    <action name="hive-query" retry-max="2" retry-interval="10">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>test_hive.sql</script>
        </hive>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>job failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
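
The application path must contain both workflow.xml and the test_hive.sql it references. A minimal sketch of staging them to the S3 location from job.properties, assuming the cluster's s3a credentials are already configured:

# stage the workflow and its Hive script under oozie.wf.application.path
hdfs dfs -mkdir -p s3a://mybucket/test/oozie/jobs/test-hive
hdfs dfs -put -f workflow.xml  s3a://mybucket/test/oozie/jobs/test-hive/
hdfs dfs -put -f test_hive.sql s3a://mybucket/test/oozie/jobs/test-hive/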

Error:

org.apache.oozie.action.ActionExecutorException: UnsupportedOperationException: Accessing local file system is not allowed
    at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1100)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1214)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1502)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:241)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
    at org.apache.oozie.command.XCommand.call(XCommand.java:287)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException: Accessing local file system is not allowed
    at org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
    at org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:435)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
    at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
    at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:975)
    at org.apache.oozie.action.hadoop.LauncherMapperHelper.setupLauncherInfo(LauncherMapperHelper.java:156)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1040)
Tags: hadoop · amazon-s3 · hive · bigdata · oozie
1 Answer

5 votes

This is caused by the way Oozie protects against CVE-2017-15712. If you remove Oozie's dummy implementation of RawLocalFileSystem, the workflow will run. If you don't want to recompile, you can locate the class file in the distribution and delete it. Be aware that this leaves your Oozie server vulnerable to CVE-2017-15712.
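
A sketch of how one might locate and strip that stub class without recompiling; the /usr/hdp path is an HDP 3.1 assumption, and the exact jar carrying the class varies by distribution, hence the search:

# find which Oozie jar ships the stub org.apache.hadoop.fs.RawLocalFileSystem
for j in /usr/hdp/current/oozie-server/lib/*.jar; do
    unzip -l "$j" 2>/dev/null \
        | grep -q 'org/apache/hadoop/fs/RawLocalFileSystem.class' && echo "$j"
done

# delete the class from the jar it was found in, then restart the Oozie server
zip -d <found-jar> 'org/apache/hadoop/fs/RawLocalFileSystem.class'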
