Hadoop container fails even at 100% completion

Question · votes: 0 · answers: 1

I have set up a small cluster with Hadoop 2.7, HBase 0.98 and Nutch 2.3.1. I have written a custom job that first groups together the documents belonging to the same domain. Afterwards, for each domain, its URLs are fetched from a cache (i.e., a list), the corresponding key is used to fetch the object via datastore.get(url_key), its score is then updated, and it is written out via context.write.

The job should finish once all documents have been processed, but I have found that every attempt fails due to a timeout, even though its progress is shown as 100% complete. Here is the log:

attempt_1549963404554_0110_r_000001_1   100.00  FAILED  reduce > reduce node2:8042  logs    Thu Feb 21 20:50:43 +0500 2019  Fri Feb 22 02:11:44 +0500 2019  5hrs, 21mins, 0sec  AttemptID:attempt_1549963404554_0110_r_000001_1 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
attempt_1549963404554_0110_r_000001_3   100.00  FAILED  reduce > reduce node1:8042  logs    Fri Feb 22 04:39:08 +0500 2019  Fri Feb 22 07:25:44 +0500 2019  2hrs, 46mins, 35sec AttemptID:attempt_1549963404554_0110_r_000001_3 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
attempt_1549963404554_0110_r_000002_0   100.00  FAILED  reduce > reduce node3:8042  logs    Thu Feb 21 12:38:45 +0500 2019  Thu Feb 21 22:50:13 +0500 2019  10hrs, 11mins, 28sec    AttemptID:attempt_1549963404554_0110_r_000002_0 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143

Why is that? When an attempt reaches 100.00%, shouldn't it be marked as successful? Unfortunately, in my case there is no error information other than the timeout. How can I debug this problem? My reducer is posted in another question: Apache Nutch 2.3.1 map-reduce timeout occurred while updating the score.
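The attempts above are killed with "Timed out after 1800 secs", meaning the ApplicationMaster saw no progress report for 1800000 ms. One common workaround is to raise the standard Hadoop 2.x property `mapreduce.task.timeout` (this is a sketch that buys time; it does not fix whatever is stalling the reducer, and the 7200000 ms value is only an illustrative choice):

```xml
<!-- mapred-site.xml: only kill a task after 2 hours without a
     progress report (this cluster is evidently set to 1800000 ms). -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>7200000</value>
</property>
```

The same value can be set per job with `-D mapreduce.task.timeout=7200000`. Alternatively, a long-running reduce loop (such as iterating over a large per-domain URL list) can call `context.progress()` or increment a counter on each iteration, so the ApplicationMaster sees liveness and never triggers the timeout at all.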

hadoop mapreduce timeout hadoop2 nutch
1 Answer

0 votes

I have observed that the execution times of the three attempts quoted above differ greatly. Please look at a run where you execute the job just once.
