No data nodes are started

Aar*_*n S 31 hadoop hdfs

I am trying to set up Hadoop version 0.20.203.0 in a pseudo-distributed configuration, using the following guide:

http://www.javacodegeeks.com/2012/01/hadoop-modes-explained-standalone.html

After running the start-all.sh script, I run "jps".

I get this output:

4825 NameNode
5391 TaskTracker
5242 JobTracker
5477 Jps
5140 SecondaryNameNode

When I try to add information to HDFS using:

bin/hadoop fs -put conf input

I get an error:

hadoop@m1a2:~/software/hadoop$ bin/hadoop fs -put conf input
12/04/10 18:15:31 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)

        at org.apache.hadoop.ipc.Client.call(Client.java:1030)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
        at $Proxy1.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)

12/04/10 18:15:31 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/04/10 18:15:31 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/input/core-site.xml" - Aborting...
put: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1
12/04/10 18:15:31 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/input/core-site.xml : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)

        at org.apache.hadoop.ipc.Client.call(Client.java:1030)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
        at $Proxy1.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)

I am not totally sure, but I believe this may be related to the fact that the datanode is not running.

Does anybody know what I have done wrong, or how to fix this problem?
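(A quick way to confirm that suspicion is to ask the namenode how many datanodes it can see; dfsadmin is a standard command on 0.20.x:)

bin/hadoop dfsadmin -report    # "Datanodes available: 0" would confirm no datanode is running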

EDIT: Here is the datanode.log file:

2012-04-11 12:27:28,977 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = m1a2/139.147.5.55
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
2012-04-11 12:27:29,166 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-04-11 12:27:29,181 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-04-11 12:27:29,183 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-04-11 12:27:29,183 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-04-11 12:27:29,342 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-04-11 12:27:29,347 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2012-04-11 12:27:29,615 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-hadoop/dfs/data: namenode namespaceID = 301052954; datanode namespaceID = 229562149
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:354)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:268)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1480)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1419)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1437)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1563)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1573)

2012-04-11 12:27:29,617 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at m1a2/139.147.5.55
************************************************************/

Chr*_*ain 47

The error you are getting in the DN log is described here: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#java-io-ioexception-incompatible-namespaceids

From that page:

At the moment, there seem to be two workarounds, as described below.

Workaround 1: Start from scratch

I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:

  1. Stop the cluster
  2. Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /app/hadoop/tmp/dfs/data
  3. Reformat the NameNode (NOTE: all HDFS data is lost during this process!)
  4. Restart the cluster
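Spelled out as commands, that looks roughly like this (a sketch assuming the tutorial's paths; again, this destroys all HDFS data):

bin/stop-all.sh
rm -rf /app/hadoop/tmp/dfs/data    # dfs.data.dir from the tutorial; all HDFS data is lost
bin/hadoop namenode -format        # answer Y when asked to re-format the filesystem
bin/start-all.sh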

When deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be OK during the initial setup/testing), you might give the second approach a try.

Workaround 2: Update the namespaceID of the problematic DataNode

Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic DataNode:

  1. Stop the DataNode
  2. Edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current NameNode
  3. Restart the DataNode

If you followed the instructions in my tutorials, the full paths of the relevant files are:

NameNode: /app/hadoop/tmp/dfs/name/current/VERSION

DataNode: /app/hadoop/tmp/dfs/data/current/VERSION

(Background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir in this tutorial to /app/hadoop/tmp.)

If you are wondering what the contents of VERSION look like, here is one of mine:

# contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
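If you would rather script that edit, here is a minimal sketch assuming the tutorial's paths above (swap in your own dfs.name.dir / dfs.data.dir if they differ):

bin/stop-all.sh    # simplest way to stop the DataNode in a pseudo-distributed setup
# read the NameNode's namespaceID and write it into the DataNode's VERSION file
NS_ID=$(grep '^namespaceID=' /app/hadoop/tmp/dfs/name/current/VERSION | cut -d= -f2)
sed -i "s/^namespaceID=.*/namespaceID=$NS_ID/" /app/hadoop/tmp/dfs/data/current/VERSION
bin/start-all.sh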


apu*_*dan 12

OK, I post this once more:

In case someone needs this, for the newer versions of Hadoop (basically I am running 2.4.0):

  • In this case, stop the cluster: sbin/stop-all.sh

  • Then go to /etc/hadoop for the config files.

In the file hdfs-site.xml, look for the directory paths corresponding to dfs.namenode.name.dir and dfs.datanode.data.dir.

  • Delete both of those directories recursively (rm -r).

  • Now format the namenode: bin/hadoop namenode -format

  • And finally: sbin/start-all.sh (the whole sequence is sketched after this list)
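Put together, the steps look roughly like this (a sketch, assuming each property names a single local directory; hdfs getconf is available on 2.x, and the configured values may carry a file:// prefix):

sbin/stop-all.sh
NAME_DIR=$(bin/hdfs getconf -confKey dfs.namenode.name.dir)
DATA_DIR=$(bin/hdfs getconf -confKey dfs.datanode.data.dir)
# strip an optional file:// scheme before deleting; this wipes all HDFS metadata and blocks!
rm -r "${NAME_DIR#file://}" "${DATA_DIR#file://}"
bin/hadoop namenode -format
sbin/start-all.sh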

Hope this helps.


小智 7

I faced the same problem on a pseudo node using hadoop 1.1.2. So I ran bin/stop-all.sh to stop the cluster, then saw the configuration of my hadoop tmp directory in hdfs-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/root/data/hdfstmp</value>
</property>

So I went into /root/data/hdfstmp and deleted all the files with this command (you may lose your data):

rm -rf *

Then I formatted the namenode again:

bin/hadoop namenode -format

Then I started the cluster with:

bin/start-all.sh

The main reason is that bin/hadoop namenode -format did not remove the old data, so we have to delete it manually.


Som*_*dam 5

Please follow the steps below:

1. bin/stop-all.sh
2. remove the dfs/ and mapred/ folders under hadoop.tmp.dir (configured in core-site.xml)
3. bin/hadoop namenode -format
4. bin/start-all.sh
5. jps
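For step 2, assuming the hadoop.tmp.dir value from the earlier answer (yours will differ, so check your own config first), that would be something like:

rm -rf /root/data/hdfstmp/dfs /root/data/hdfstmp/mapred    # example path; verify hadoop.tmp.dir before deleting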