Unable to submit concurrent Hadoop jobs
I am running Hadoop 2.7 on my local machine, along with HBase 1.4 and Phoenix 4.15. I have written an application that submits map-reduce jobs which delete data in HBase through Phoenix. Each job is run by a separate thread of a ThreadPoolExecutor and looks like this:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MRDeleteTask extends Task {

    private static final Logger LOGGER = LoggerFactory.getLogger(MRDeleteTask.class);
    private final String query;

    public MRDeleteTask(int id, String q) {
        this.setId(id);
        this.query = q;
    }

    @Override
    public void run() {
        LOGGER.info("Running Task: " + getId());
        try {
            Configuration configuration = HBaseConfiguration.create();
            Job job = Job.getInstance(configuration, "phoenix-mr-job-" + getId());
            LOGGER.info("mapper input: " + this.query);

            // Feed the delete query to the Phoenix input format.
            PhoenixMapReduceUtil.setInput(job, DeleteMR.PhoenixDBWritable.class, "Table", this.query);

            // Map-only job: the mapper performs the deletes, nothing is written out.
            job.setMapperClass(DeleteMR.DeleteMapper.class);
            job.setJarByClass(DeleteMR.class);
            job.setNumReduceTasks(0);
            job.setOutputFormatClass(NullOutputFormat.class);
            job.setOutputKeyClass(ImmutableBytesWritable.class);
            job.setOutputValueClass(Writable.class);

            // Ship the HBase/Phoenix dependency jars with the job.
            TableMapReduceUtil.addDependencyJars(job);

            boolean result = job.waitForCompletion(true);
        } catch (Exception e) {
            LOGGER.info(e.getMessage());
        }
    }
}
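The Task base class is not shown in the question. For the snippet above to compile it only needs to be a Runnable that carries a numeric id; a minimal hypothetical sketch:

public abstract class Task implements Runnable {
    // Hypothetical base class: the question only shows that it provides
    // setId()/getId() and can be run on an executor thread.
    private int id;

    protected void setId(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}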
Everything works fine if the ThreadPoolExecutor has only one thread. If several such Hadoop jobs are submitted concurrently, nothing happens. According to the logs, the errors look like this:
4439 [pool-1-thread-2] INFO MRDeleteTask - java.util.concurrent.ExecutionException: java.io.IOException: Unable to rename file: [/tmp/hadoop-user/mapred/local/1595274269610_tmp/tmp_phoenix-4.15.0-HBase-1.4-client.jar] to [/tmp/hadoop-user/mapred/local/1595274269610_tmp/phoenix-4.15.0-HBase-1.4-client.jar]
4439 [pool-1-thread-1] INFO MRDeleteTask - java.util.concurrent.ExecutionException: ExitCodeException exitCode=1: chmod: /private/tmp/hadoop-user/mapred/local/1595274269610_tmp/phoenix-4.15.0-HBase-1.4-client.jar: No such file or directory
The tasks are submitted using ThreadPoolExecutor.submit(), and their status is checked through the returned future with future.isDone().
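For concreteness, a minimal driver along those lines might look as follows; the pool size and the query strings are placeholders, since the question does not show the submitting code:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;

public class DeleteJobDriver {
    public static void main(String[] args) throws InterruptedException {
        // Two worker threads, so two MR jobs get submitted concurrently.
        ThreadPoolExecutor executor =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(2);

        List<Future<?>> futures = new ArrayList<>();
        futures.add(executor.submit(new MRDeleteTask(1, "query-1")));  // placeholder query
        futures.add(executor.submit(new MRDeleteTask(2, "query-2")));  // placeholder query

        // Poll the returned futures until every job has finished.
        while (!futures.stream().allMatch(Future::isDone)) {
            Thread.sleep(1000);
        }
        executor.shutdown();
    }
}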
It turned out the jobs were not being submitted to YARN at all; they were running locally from IntelliJ. (The staging path in the errors, /tmp/hadoop-user/mapred/local/..., is the local job runner's shared directory, which is presumably why concurrent submissions raced on the same client jar.) Adding the following to the job configuration solved the problem:
conf.set("mapreduce.framework.name", "yarn");
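In terms of the code above, the property has to be set on the Configuration before the Job is created; a sketch of the adjusted setup (submitting to YARN additionally assumes a running ResourceManager, reachable through the yarn-site.xml on the application's classpath):

Configuration configuration = HBaseConfiguration.create();
// Force submission to YARN instead of the in-process local job runner.
configuration.set("mapreduce.framework.name", "yarn");
Job job = Job.getInstance(configuration, "phoenix-mr-job-" + getId());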