java.io.IOException: Incompatible clusterIDs

luc*_*ber 7 hadoop hdfs

I am installing Hadoop 2.7.2 (1 master NN, 1 secondary NN, 3 datanodes) and cannot start the datanodes! After digging through the logs (see below), the fatal error is caused by a ClusterID mismatch... easy enough! Just change the ID. Wrong... when I check the VERSION files on my NameNode and DataNodes, they are identical.

So the question is simple: in the log file below, where does the NameNode's ClusterID come from?

Log file:


WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hduser/mydata/hdfs/datanode: namenode clusterID = **CID-8e09ff25-80fb-4834-878b-f23b3deb62d0**; datanode clusterID = **CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1**
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to master/172.XX.XX.XX:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to master/172.XX.XX.XX:9000
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

Copies of the VERSION files:


Master

storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56

DataNode

storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56

luc*_*ber 5

To wrap up (and close) this question, I would like to share how I fixed it.

On the MASTER and the secondary NameNode, the NameNode VERSION file is located under ~/.../namenode/current/VERSION.

But for the DATANODES, the path is different. It looks like ~/.../datanode/current/VERSION.

The ClusterID in those two VERSION files must be identical.
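A quick way to compare them is a tiny shell helper (my own sketch, not part of Hadoop) that extracts the clusterID line from a VERSION file:

```shell
# Print the clusterID stored in a Hadoop VERSION file.
# (Helper of my own, not a Hadoop command.)
cluster_id() {
    grep '^clusterID=' "$1" | cut -d= -f2
}
```

For example, assuming the datanode directory from the log above and a parallel namenode directory (adjust both to your dfs.namenode.name.dir / dfs.datanode.data.dir settings), run `cluster_id /home/hduser/mydata/hdfs/namenode/current/VERSION` and `cluster_id /home/hduser/mydata/hdfs/datanode/current/VERSION` and check that the two outputs match.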

Hope this helps!


小智 5

I ran into the same problem while installing 2.7.2. The data nodes would not come up. The error shown in the datanode log file was:

java.io.IOException: Incompatible clusterIDs in /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode: namenode clusterID = CID-XXX; datanode clusterID = CID-YYY

What I did was:

HADOOP_DIR/bin/hadoop namenode -format -clusterID CID-YYY

(no quotes are needed around the cluster ID)
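Note that reformatting the NameNode wipes HDFS metadata. An alternative fix I have seen work on a fresh, empty cluster is the reverse: discard the datanode's stale storage metadata so it adopts the NameNode's clusterID when it re-registers. A minimal sketch (my own helper, not a Hadoop command):

```shell
# Delete a datanode's storage metadata so it re-registers with the
# NameNode's clusterID on next start.
# WARNING: this also deletes the blocks stored locally on that node,
# so it is only safe on a fresh or already re-formatted cluster.
reset_datanode_storage() {
    rm -rf "$1/current"
}
```

For example, `reset_datanode_storage /home/hduser/mydata/hdfs/datanode` (the path from the question's log; use your own dfs.datanode.data.dir value), then restart the datanode with `sbin/hadoop-daemon.sh start datanode`.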