IntelliJ: MapReduce error: Exception in thread "main" 0: No such file or directory

Problem description

I have been working on a MapReduce program, and it runs fine in the Hadoop HDFS environment on my virtual machine. However, when I try the same program on Windows using IntelliJ, I get the error below.

WordCount.class // using it as a sample program to test whether the setup works

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

IntelliJ error log

2019-12-12 21:42:04,139 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1181)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2019-12-12 21:42:04,144 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(79)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2019-12-12 21:42:08,029 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-12-12 21:42:08,089 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(251)) - Cleaning up the staging area file:/tmp/hadoop/mapred/staging/Abhishek1224360463/.staging/job_local1224360463_0001
Exception in thread "main" 0: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:236)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:767)
    at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:506)
    at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:487)
    at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:503)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:619)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:97)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:192)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1359)
    at WordCount.main(WordCount.java:59)

I provide the input by passing directory names as arguments to the main class, i.e. by editing the run configuration and passing the name of a directory containing the text files (program arguments: input output). The input directory sits under the project root folder.
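
As an aside, the WARN line in the log ("Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner...") refers to the Tool/ToolRunner pattern. Below is a minimal sketch of the same driver wrapped in ToolRunner; WordCountDriver is a hypothetical name, it reuses the TokenizerMapper and IntSumReducer from the WordCount class above, and it only silences that warning rather than claiming to fix the chmod failure.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver class; reuses the mapper and reducer defined in WordCount.
public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: WordCountDriver <input dir> <output dir>");
            return 2;
        }
        // getConf() returns the Configuration that ToolRunner populated
        // from any generic options (-D, -conf, -fs, ...) on the command line.
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic Hadoop options before calling run().
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

With this driver the same run configuration arguments ("input output") still map to args[0] and args[1] inside run().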

java hadoop intellij-idea mapreduce hdfs
1 Answer

Running IntelliJ in administrator mode did the trick. That's strange, though; I'd appreciate it if someone could explain why.
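
A possible explanation, offered as an assumption rather than something confirmed in this thread: with the local job runner the staging area defaults to file:/tmp/hadoop/mapred/staging (visible in the log), which on Windows resolves to a folder at the root of the current drive, and creating or chmod-ing it there can require elevated rights; the chmod also goes through Hadoop's NativeIO, which on Windows additionally depends on a matching winutils.exe/hadoop.dll under HADOOP_HOME. Assuming the mapreduce.jobtracker.staging.root.dir and hadoop.tmp.dir keys are honored by this Hadoop version, a sketch of an alternative to running the IDE as administrator is to point them at a user-writable folder at the top of main():

        // Inside WordCount.main(), replacing the first two lines.
        // The paths are hypothetical user-writable folders; adjust for your machine.
        Configuration conf = new Configuration();
        conf.set("mapreduce.jobtracker.staging.root.dir", "C:/hadoop-local/staging");
        conf.set("hadoop.tmp.dir", "C:/hadoop-local/tmp");
        Job job = Job.getInstance(conf, "word count");
        // ...the rest of the job setup stays the same as above.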
