Cannot reach the nodes on hadoop [Connection refused]

Bap*_*per 12 hadoop

If I enter http://localhost:50070 or http://localhost:9000 to see the nodes, my browser shows nothing; I think it cannot connect to the server. I tested my hadoop with this command:

hadoop jar hadoop-*test*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000

but that didn't work either; it keeps trying to connect to the server, and this is the output:

12/06/06 17:25:24 INFO mapred.FileInputFormat: nrFiles = 10
12/06/06 17:25:24 INFO mapred.FileInputFormat: fileSize (MB) = 1000
12/06/06 17:25:24 INFO mapred.FileInputFormat: bufferSize = 1000000
12/06/06 17:25:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
12/06/06 17:25:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
12/06/06 17:25:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
12/06/06 17:25:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
12/06/06 17:25:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
12/06/06 17:25:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
12/06/06 17:25:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
12/06/06 17:25:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
12/06/06 17:25:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
12/06/06 17:25:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused

I changed some files as follows. In conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

In conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

In conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Hey guys, please note that if I run this command:

cat /etc/hosts

I get this:

127.0.0.1   localhost
127.0.1.1   ubuntu.ubuntu-domain    ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

and if I run this:

ps axww | grep hadoop

I see this result:

2170 pts/0    S+     0:00 grep --color=auto hadoop

so no Hadoop process is actually running! Do you have any idea how I can solve my problem?

pyf*_*unc 13

There are a few things you need to take care of before starting the hadoop services.

Check what this returns:

hostname --fqdn 

In your case this should be localhost. Also, comment out the IPv6 lines in /etc/hosts.
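For reference, the /etc/hosts from the question with the IPv6 section commented out would look like this (a sketch; keep your own 127.0.0.1 entries intact):

```
127.0.0.1   localhost
127.0.1.1   ubuntu.ubuntu-domain    ubuntu

# IPv6 lines commented out:
# ::1     ip6-localhost ip6-loopback
# fe00::0 ip6-localnet
# ff00::0 ip6-mcastprefix
# ff02::1 ip6-allnodes
# ff02::2 ip6-allrouters
```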

Did you format the namenode before starting HDFS?

hadoop namenode -format
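A typical sequence (a sketch assuming a classic Hadoop 0.20/1.x tarball layout where the control scripts live under bin/; adjust paths for your install) is to format, start the daemons, then confirm they are up:

```shell
# WARNING: formatting erases any existing HDFS data.
bin/hadoop namenode -format

# Start the HDFS and MapReduce daemons.
bin/start-all.sh

# Verify the Java processes: you should see NameNode, DataNode,
# SecondaryNameNode, JobTracker and TaskTracker listed.
jps
```

If `jps` shows only `Jps` itself (as with the empty `ps axww | grep hadoop` above), the daemons never started and the relevant log under the logs/ directory should say why.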

How did you install Hadoop? The location of the log files will depend on that. If you used Cloudera's distribution, they are usually under /var/log/hadoop/.

If you are a complete newbie, I suggest installing Hadoop with Cloudera SCM, which is quite easy. I have posted my approach to installing Hadoop with Cloudera's distribution.

Make sure the DFS location has write permission. It usually sits at /usr/local/hadoop_store/hdfs.

This is a common cause.
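The write-permission check above can be sketched as follows (the path and the 755 mode are illustrative; the real path is whatever dfs.name.dir / dfs.data.dir point to in hdfs-site.xml, and in a real install you would chown it to your hadoop user rather than use a temp directory):

```shell
# Sketch: make sure the DFS storage directory exists and is writable.
# A temp path is used here so the commands run without root; substitute
# your actual location, e.g. /usr/local/hadoop_store/hdfs.
HDFS_DIR="${TMPDIR:-/tmp}/hadoop_store/hdfs"
mkdir -p "$HDFS_DIR"
chmod -R 755 "$HDFS_DIR"

# Confirm the directory is writable by the current (hadoop) user.
[ -w "$HDFS_DIR" ] && echo "writable"
```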