Hadoop node takes a very long time to decommission

Sri*_*nth 5 hadoop

EDIT: I finally figured out what the problem was. Some files had a very high replication factor set, and I had shrunk my cluster down to 2 nodes. Once I reduced the replication factor on those files, the decommission finished quickly and successfully.
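The fix above can be sketched as a dry run: the snippet below parses `hadoop fsck / -files`-style output and prints the `hadoop fs -setrep` command it would run for any file whose target replication exceeds the new cluster size of 2. The file paths and the canned sample output are made-up illustrations, not taken from this cluster; a real run would pipe live `fsck` output in instead.

```shell
#!/bin/sh
# Canned sample in the same format `hadoop fsck / -files` prints for
# under-replicated files (paths and counts are hypothetical).
cat > fsck-files.out <<'EOF'
/user/sri/data/part-00000 52428800 bytes, 1 block(s):  OK
/user/sri/logs/app.log 1048576 bytes, 1 block(s):  Under replicated blk_1. Target Replicas is 10 but found 2 replica(s).
EOF

# Dry run: emit a setrep command for every file whose target replication
# exceeds the new cluster size of 2. Remove the echo-style indirection and
# pipe live fsck output in to apply it for real.
awk '/Target Replicas is/ {
  for (i = 1; i <= NF; i++)
    if ($i == "is" && $(i+1) + 0 > 2) { print "hadoop fs -setrep -w 2 " $1; break }
}' fsck-files.out
```

`-setrep -w` waits until the new replication factor is actually reached, which is useful here since decommissioning cannot finish while blocks still demand more replicas than the remaining nodes can hold.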

I added the node I want to decommission to the dfs.hosts.exclude and mapred.hosts.exclude files, then ran the following command:

bin/hadoop dfsadmin -refreshNodes
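For context, `-refreshNodes` only has an effect if the NameNode already knows where the exclude file lives. A minimal hdfs-site.xml fragment is shown below; the file path is an example, not taken from the question:

```xml
<!-- hdfs-site.xml: point the NameNode at the exclude file (example path) -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```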

In the NameNode web UI I can see this node under Decommissioning Nodes, but it is taking far too long, and I don't have much data on that node.

Does decommissioning a node always take a long time, or is there somewhere I should be looking? I'm not sure exactly what is going on.

I don't see any corrupt blocks on this node:

$ ./hadoop/bin/hadoop fsck -blocks /
 Total size:    157254687 B
 Total dirs:    201
 Total files:   189 (Files currently being written: 6)
 Total blocks (validated):      140 (avg. block size 1123247 B) (Total open file blocks (not validated): 1)
 Minimally replicated blocks:   140 (100.0 %)
 Over-replicated blocks:        6 (4.285714 %)
 Under-replicated blocks:       12 (8.571428 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     1.9714285
 Corrupt blocks:                0
 Missing replicas:              88 (31.884058 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Mon Jul 22 14:42:45 IST 2013 in 33 milliseconds


The filesystem under path '/' is HEALTHY

$ ./hadoop/bin/hadoop dfsadmin -report
Configured Capacity: 25357025280 (23.62 GB)
Present Capacity: 19756299789 (18.4 GB)
DFS Remaining: 19366707200 (18.04 GB)
DFS Used: 389592589 (371.54 MB)
DFS Used%: 1.97%
Under replicated blocks: 14
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 10.40.11.107:50010
Decommission Status : Decommission in progress
Configured Capacity: 8452341760 (7.87 GB)
DFS Used: 54947840 (52.4 MB)
Non DFS Used: 1786830848 (1.66 GB)
DFS Remaining: 6610563072(6.16 GB)
DFS Used%: 0.65%
DFS Remaining%: 78.21%
Last contact: Mon Jul 22 14:29:37 IST 2013


Name: 10.40.11.106:50010
Decommission Status : Normal
Configured Capacity: 8452341760 (7.87 GB)
DFS Used: 167412428 (159.66 MB)
Non DFS Used: 1953377588 (1.82 GB)
DFS Remaining: 6331551744(5.9 GB)
DFS Used%: 1.98%
DFS Remaining%: 74.91%
Last contact: Mon Jul 22 14:29:37 IST 2013


Name: 10.40.11.108:50010
Decommission Status : Normal
Configured Capacity: 8452341760 (7.87 GB)
DFS Used: 167232321 (159.49 MB)
Non DFS Used: 1860517055 (1.73 GB)
DFS Remaining: 6424592384(5.98 GB)
DFS Used%: 1.98%
DFS Remaining%: 76.01%
Last contact: Mon Jul 22 14:29:38 IST 2013

Cha*_*guy 6

Decommissioning is not an instant process, even if you don't have much data.

First, when you decommission a node, the data from quite a few blocks (how many depends on your block size) has to be re-replicated elsewhere. This could easily overwhelm your cluster and cause operational problems, so I believe it is somewhat throttled.

Also, depending on which Hadoop version you use, the thread that monitors decommissions only wakes up so often. It used to be around 5 minutes in earlier versions of Hadoop, but I believe it is now every minute or less.
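The wake-up interval mentioned above is configurable in newer releases; in Hadoop 2.x the property is `dfs.namenode.decommission.interval` and defaults to 30 seconds. A sketch for hdfs-site.xml (the value shown is the documented default, not tuning advice):

```xml
<!-- hdfs-site.xml: how often (in seconds) the NameNode re-checks
     whether decommissioning nodes have finished draining -->
<property>
  <name>dfs.namenode.decommission.interval</name>
  <value>30</value>
</property>
```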

"Decommission in progress" means blocks are being replicated, so I suppose it really depends on how much data you have, and you will just have to wait, since this task is not given the cluster's full attention.

  • Thanks for your answer. I finally found the problem: some files had a very high replication factor set, and I had shrunk my cluster down to 2 nodes. Once I reduced the replication factor on those files, the decommission finished quickly and successfully. (3 upvotes)