hadoop complains about attempting to overwrite nonempty destination directory
I was following Rasesh Mori's instructions to install Hadoop on a multinode cluster, and have gotten to the point where jps shows the various nodes are up and running. I can copy files into hdfs; I did so with
$HADOOP_HOME/bin/hdfs dfs -put ~/in /in
and then tried to run the wordcount example program with
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /in /out
but I get the error
15/06/16 00:59:53 INFO mapreduce.Job: Task Id : attempt_1434414924941_0004_m_000000_0, Status : FAILED
Rename cannot overwrite non empty destination directory /home/hduser/hadoop-2.6.0/nm-local-dir/usercache/hduser/appcache/application_1434414924941_0004/filecache/10
java.io.IOException: Rename cannot overwrite non empty destination directory /home/hduser/hadoop-2.6.0/nm-local-dir/usercache/hduser/appcache/application_1434414924941_0004/filecache/10
at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:716)
at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:228)
at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:659)
at org.apache.hadoop.fs.FileContext.rename(FileContext.java:909)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:364)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
How can I fix this?
This is a bug in Hadoop 2.6.0. It has been marked as fixed, but it still occurs occasionally (see: https://issues.apache.org/jira/browse/YARN-2624).
Clearing out the appcache directory and restarting the YARN daemons will most likely fix this.
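The steps above can be sketched as a short shell sequence. This assumes the default layout shown in the log (the NodeManager local dir at `$HADOOP_HOME/nm-local-dir`); if you have set `yarn.nodemanager.local-dirs` in yarn-site.xml, point `NM_LOCAL_DIR` at that path instead.

```shell
# Hypothetical path: matches the nm-local-dir shown in the stack trace above;
# override NM_LOCAL_DIR if yarn.nodemanager.local-dirs is configured differently.
NM_LOCAL_DIR="${NM_LOCAL_DIR:-$HADOOP_HOME/nm-local-dir}"

# Stop the YARN daemons before touching the cache
if [ -x "$HADOOP_HOME/sbin/stop-yarn.sh" ]; then
    "$HADOOP_HOME/sbin/stop-yarn.sh"
fi

# Clear every user's application cache under the NodeManager local dir
rm -rf "$NM_LOCAL_DIR"/usercache/*/appcache/*

# Restart the YARN daemons
if [ -x "$HADOOP_HOME/sbin/start-yarn.sh" ]; then
    "$HADOOP_HOME/sbin/start-yarn.sh"
fi
```

Deleting only the contents of `appcache` (not `usercache` itself) keeps the per-user directory structure intact while removing the stale filecache entries that trigger the rename failure.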
I had the same error with the /hadoop/yarn/local/usercache/hue/filecache/ directory.
I ran sudo rm -rf /hadoop/yarn/local/usercache/hue/filecache/* and that solved it.