NameNode and DataNode are not listed by jps

Cod*_*ang 2 ubuntu hadoop hdfs

Environment: Ubuntu 14.04, Hadoop 2.6

When I run start-all.sh and then type jps, the DataNode is not listed in the terminal:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

Following this answer: Datanode process not running in Hadoop

I tried the top-rated solution; the 2.x-series equivalents are sketched after the list:

  • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
  • rm -Rf /app/tmp/hadoop-your-username/*
  • bin/hadoop namenode -format (or hdfs in the 2.x series)
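
For reference, a sketch of the 2.x-series equivalents of the steps above (the temp path comes from the linked answer and assumes hadoop.tmp.dir is set to /app/tmp; adjust it to your own configuration):

stop-dfs.sh && stop-yarn.sh       # stop the HDFS and YARN daemons
rm -rf /app/tmp/hadoop-$USER/*    # clear the temp dir for your username
hdfs namenode -format             # re-format the NameNode (wipes HDFS metadata)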

However, now I get this:

>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even the NameNode is missing. Please help.

DataNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032

NameNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
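
To rule out a malformed configuration file, the XML can be checked quickly (a sketch; assumes xmllint from libxml2-utils is installed and that Hadoop lives in /usr/local/hadoop as in the logs below):

xmllint --noout /usr/local/hadoop/etc/hadoop/mapred-site.xml    # prints nothing when the XML is well-formed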

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager

UPDATE

hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password: 
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode

Raj*_*h N 6

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

This error is probably due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.
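
A quick check (a sketch, assuming the default path from the logs above): the directory must exist and be owned by the user that runs the DataNode:

ls -ld /usr/local/hadoop_store/hdfs/datanode    # owner should match the daemon user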

FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

This error is probably because the /usr/local/hadoop_store/hdfs/namenode folder has the wrong permissions, or it does not exist. To resolve the issue, follow the options below:
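
Before changing anything, it can help to confirm which directories the daemons are actually configured to use (a sketch; hdfs getconf reads these values from hdfs-site.xml):

hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir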

Option I:

If you don't have the /usr/local/hadoop_store/hdfs folder, create it and set its permissions as follows:

sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Change hadoopuser and hadoopgroup to your Hadoop username and group name, respectively. Now try starting the Hadoop processes. If the problem persists, try Option II.

Option II:

Delete the contents of the /usr/local/hadoop_store/hdfs folder:

sudo rm -r /usr/local/hadoop_store/hdfs/*

Change the folder permissions:

sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Now start the Hadoop processes; it should work. Since Option II empties the storage directory, re-format the NameNode before starting, as sketched below.
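
A sketch of that restart sequence (note that hdfs namenode -format erases all HDFS metadata, which is acceptable here only because the storage directory was just emptied anyway):

hdfs namenode -format            # rebuild the empty NameNode storage directory
start-dfs.sh && start-yarn.sh    # bring the daemons back up
jps                              # NameNode and DataNode should now be listed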

Note: If the error persists, post the new logs.

UPDATE:

If you have not yet created a hadoop user and group, do the following:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
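
To confirm that the user and group were created (an optional sanity check):

id hadoop    # should list the hadoop group in the output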

Now change the ownership of /usr/local/hadoop and /usr/local/hadoop_store:

sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store
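
To verify the ownership change (a quick check; both paths should now show hadoop hadoop):

ls -ld /usr/local/hadoop /usr/local/hadoop_store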

Switch to the hadoop user:

su - hadoop

Enter your hadoop user's password. Your terminal prompt should now look like:

hadoop@ubuntu:$

Now type:

$HADOOP_HOME/sbin/start-all.sh

or

sh /usr/local/hadoop/sbin/start-all.sh


小智 5

I faced a similar problem: jps was not showing the DataNode.

Deleting the contents of the hdfs folder and changing the folder permissions worked for me.

sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
hadoop namenode -format
start-all.sh
jps
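
If the fix works, jps should list all five daemons, along the lines of the following (the PIDs here are made up):

12001 NameNode
12189 DataNode
12412 SecondaryNameNode
12634 ResourceManager
12801 NodeManager
12950 Jps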