Hadoop nodes dying (crashing) after a while

vef*_*hym 6 ubuntu networking hadoop cluster-computing

I have a Hadoop cluster of 16 nodes (Ubuntu 12.04 servers): 1 master and 15 slaves. They are connected through a private network, and the master also has a public IP (it belongs to both networks). When I run small jobs, i.e. with a small input and short processing time, everything works fine. But when I run bigger jobs, e.g. with 7-8 GB of input data, my slave nodes start dying one after another.

In the web UI (http://master:50070/dfsnodelist.jsp?whatNodes=LIVE) I can see the "Last Contact" value start to grow, and in my cluster provider's web UI I can see that the node has crashed. Here is a screenshot of that node (I cannot scroll up):

[screenshot: console output of the crashed node]

Another machine, which was running the Hadoop DFS but had no job running, shows this error:

BUG: soft lockup - CPU#7 stuck for 27s! [java:4072]

BUG: soft lockup - CPU#5 stuck for 41s! [java:3309]
ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
         res 40/00:02:00:08:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
ata2.00: status: { DRDY }

Here is another screenshot (which I cannot make any sense of):

[screenshot: console output of another crashed node]

This is the log of the crashed datanode (IP 192.168.0.9):

2014-02-01 15:17:34,874 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving blk_-2375077065158517857_1818 src: /192.168.0.7:53632 dest: /192.168.0.9:50010
2014-02-01 15:20:14,187 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for blk_-2375077065158517857_1818 java.io.EOFException: while trying to read 65557 bytes
2014-02-01 15:20:17,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_-2375077065158517857_1818 0 : Thread is interrupted.
2014-02-01 15:20:17,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for blk_-2375077065158517857_1818 terminating
2014-02-01 15:20:17,557 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-2375077065158517857_1818 received exception java.io.EOFException: while trying to read 65557 bytes
2014-02-01 15:20:17,560 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.9:50010, storageID=DS-271028747-192.168.0.9-50010-1391093674214, infoPort=50075, ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:296)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:340)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:404)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:582)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:404)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
    at java.lang.Thread.run(Thread.java:744)
2014-02-01 15:21:48,350 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.19:60853, bytes: 132096, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000018_0_1657459557_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_-6962923875569811947_1279, duration: 276262265702
2014-02-01 15:21:56,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.19:60849, bytes: 792576, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000013_0_1311506552_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_4630218397829850426_1316, duration: 289841363522
2014-02-01 15:23:46,614 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call getProtocolVersion(org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol, 3) from 192.168.0.19:48460: output error
2014-02-01 15:23:46,617 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020 caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:1756)
    at org.apache.hadoop.ipc.Server.access$2000(Server.java:97)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:780)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:844)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1472)
2014-02-01 15:24:26,800 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.9:36391, bytes: 10821100, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000084_0_-2100756773_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_496206494030330170_1187, duration: 439385255122
2014-02-01 15:27:11,871 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.20:32913, bytes: 462336, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000004_0_-1095467656_1, offset: 19968, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_-7029660283973842017_1326, duration: 205748392367
2014-02-01 15:27:57,144 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.9:36393, bytes: 10865080, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000033_0_-1409402881_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_-8749840347184507986_1447, duration: 649481124760
2014-02-01 15:28:47,945 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded blk_887028200097641216_1396
2014-02-01 15:30:17,505 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.8:58304, bytes: 10743459, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000202_0_1200991434_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_887028200097641216_1396, duration: 69130787562
2014-02-01 15:32:05,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.9:50010, storageID=DS-271028747-192.168.0.9-50010-1391093674214, infoPort=50075, ipcPort=50020) Starting thread to transfer blk_-7029660283973842017_1326 to 192.168.0.8:50010
2014-02-01 15:32:55,805 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.9:50010, storageID=DS-271028747-192.168.0.9-50010-1391093674214, infoPort=50075, ipcPort=50020) Starting thread to transfer blk_-34479901

This is how my mapred-site.xml file is set up:

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
</property>

<property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
</property>

<property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
</property>

Each node has 8 CPUs and 8 GB of RAM. I know I have set mapred.child.java.opts too high, but the same job used to run with these settings and this data. I have set the reduce slowstart to 1.0, so the reducers only start after all the mappers have finished.
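For reference, the worst-case heap that the child JVMs can claim with these settings, assuming all 4 map slots and all 4 reduce slots on a node are busy at the same time:

# (4 map slots + 4 reduce slots) x 2048 MB of heap each, on a node with 8 GB of RAM
echo $(( (4 + 4) * 2048 ))   # prints 16384 (MB)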

Pinging some of the nodes shows a small amount of packet loss, and ssh connections freeze for a while, but I don't know whether that is related. I added the following line to the /etc/security/limits.conf file on every node:

hadoop hard nofile 16384

But that did not help either.
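To double-check that the raised limit is actually applied to the user that runs the Hadoop daemons (hadoop in my case), a fresh login session should report the new value:

# Should print 16384 if the line in /etc/security/limits.conf is being picked up
su - hadoop -c 'ulimit -n'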

Solution: It seems that, after all, this really was a memory issue. I had too many tasks running at once and the machines crashed. After they crashed and I restarted them, the Hadoop jobs would not run even when I configured a correct number of mappers. The solution was to remove the bad datanodes (by decommissioning them) and then include them again. That is what I did, and everything worked perfectly, without losing any data:

How do I correctly remove nodes in Hadoop?
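In rough outline, the decommission/re-include cycle on Hadoop 1.x looks like the following; the excludes file path and the hostname slave09 are just placeholders for illustration:

# 1. Point dfs.hosts.exclude in hdfs-site.xml at an excludes file, e.g.
#    /usr/local/hadoop/conf/excludes (the namenode must know about this setting).
# 2. Add the bad datanode's hostname to that file:
echo "slave09" >> /usr/local/hadoop/conf/excludes
# 3. Make the namenode re-read the host lists; the node first shows up as
#    "Decommission In Progress" and later as "Decommissioned" in the web UI:
hadoop dfsadmin -refreshNodes
# 4. Once its blocks are replicated elsewhere, remove the hostname from the
#    excludes file and refresh again to re-include the node:
sed -i '/slave09/d' /usr/local/hadoop/conf/excludes
hadoop dfsadmin -refreshNodes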

And of course, set a correct maximum number of map and reduce tasks for each node.

Vik*_*dia 5

Judging by your map settings, you are running out of memory: you give each task 2 GB of heap and allow 4 map tasks (plus 4 reduce tasks) per node, which can claim far more memory than the 8 GB of RAM each node has.

Try running the same job with an Xmx of 1 GB; it will surely work.

If you want to use your cluster efficiently, set the Xmx according to the block size of your files.

If your blocks are 128 MB, then 512 MB is enough.
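For example, with 128 MB blocks and the 4 map / 4 reduce slots per node from the question, the mapred-site.xml above could be toned down along these lines (a sketch of this suggestion, not a drop-in configuration):

<!-- 8 task slots x 512 MB of heap = 4 GB, comfortably below the 8 GB of RAM per node -->
<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
</property>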