Tag: hadoop3

Shouldn't "start-all.sh" and "start-dfs.sh" on the master node start the slave node services?

I have updated the /conf/slaves file on the Hadoop master node with the hostnames of my slave nodes, but I can't start the slaves from the master. I have to start each slave individually, and only then is my 5-node cluster up and running. How can I start the whole cluster with a single command on the master node?

Also, a SecondaryNameNode is running on all of the slaves. Is that a problem? If so, how do I remove it from the slaves? I think there should be only one SecondaryNameNode in a cluster with one NameNode, right?

Thanks!
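One detail worth checking: in Hadoop 3 the worker list file was renamed from `slaves` to `workers`, which is a common reason the start scripts stop reaching the other nodes. A minimal sketch of the file, with placeholder hostnames:

```shell
# $HADOOP_HOME/etc/hadoop/workers   (Hadoop 3.x; this file was "slaves" in Hadoop 1/2)
# One worker hostname per line; start-dfs.sh / start-all.sh ssh to each of these,
# so passwordless ssh from the master to every listed host is required.
slave1
slave2
slave3
slave4
```

As for the SecondaryNameNode: start-dfs.sh launches it on the host(s) reported by `hdfs getconf -secondarynamenodes`, which comes from `dfs.namenode.secondary.http-address` in hdfs-site.xml, so pointing that property at a single host should stop it from appearing on every worker.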

hadoop hdfs namenode hadoop3

4 votes · 1 answer · 921 views

Hadoop/HDFS 3.1.1 (on Java 11) web UI crashes when loading the file explorer?

After start-dfs.sh, I can navigate to http://localhost:9870, and the NameNode appears to be running fine.

Then I click "Utilities -> Browse the file system" and get the following in my web browser:

Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error

Digging into the log file ($HADOOP_HOME/logs/hadoop-xxx-namenode-xxx.log), I found:

2018-11-30 16:47:25,097 WARN org.eclipse.jetty.servlet.ServletHandler: Error for /webhdfs/v1/
java.lang.NoClassDefFoundError: javax/activation/DataSource
    at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:457)
    at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)

So a class is missing. Why does this happen, and how can I fix it?
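For context: Java 11 removed the java.activation module (JEP 320), so the JAXB code that WebHDFS relies on can no longer find javax.activation.DataSource. One common workaround, sketched below under the assumption that a standalone javax.activation-api 1.2.0 jar is acceptable, is to drop that jar into Hadoop's common lib directory; running Hadoop on Java 8 is the other option.

```shell
# Sketch: supply the javax.activation API that Java 11 no longer bundles.
# The version (1.2.0) is an assumption; any recent javax.activation-api should work.
cd "$HADOOP_HOME/share/hadoop/common/lib"
curl -LO https://repo1.maven.org/maven2/javax/activation/javax.activation-api/1.2.0/javax.activation-api-1.2.0.jar
# Restart HDFS so the NameNode web UI picks up the jar.
stop-dfs.sh && start-dfs.sh
```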

noclassdeffounderror hdfs java-9 hadoop3 java-11

3 votes · 1 answer · 1141 views

Hadoop errors when starting the ResourceManager and NodeManager

I'm trying to set up Hadoop3-alpha3 as a single-node (pseudo-distributed) cluster, following the Apache guide. I tried to run the sample MapReduce job, but the connection is refused every time. After running sbin/start-all.sh I keep seeing these exceptions in the ResourceManager log (and similar ones in the NodeManager log):

xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
xxxx-xx-xx xx:xx:xx,xxx DEBUG org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Exception is:
java.beans.IntrospectionException: bad write method arg count: public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)
    at java.desktop/java.beans.PropertyDescriptor.findPropertyType(PropertyDescriptor.java:696)
    at java.desktop/java.beans.PropertyDescriptor.setWriteMethod(PropertyDescriptor.java:356)
    at java.desktop/java.beans.PropertyDescriptor.<init>(PropertyDescriptor.java:142)
    at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.createFluentPropertyDescritor(FluentPropertyBeanIntrospector.java:178)
    at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.introspect(FluentPropertyBeanIntrospector.java:141)
    at org.apache.commons.beanutils.PropertyUtilsBean.fetchIntrospectionData(PropertyUtilsBean.java:2245)
    at org.apache.commons.beanutils.PropertyUtilsBean.getIntrospectionData(PropertyUtilsBean.java:2226)
    at org.apache.commons.beanutils.PropertyUtilsBean.getPropertyDescriptor(PropertyUtilsBean.java:954)
    at org.apache.commons.beanutils.PropertyUtilsBean.isWriteable(PropertyUtilsBean.java:1478)
    at org.apache.commons.configuration2.beanutils.BeanHelper.isPropertyWriteable(BeanHelper.java:521)
    at org.apache.commons.configuration2.beanutils.BeanHelper.initProperty(BeanHelper.java:357)
    at org.apache.commons.configuration2.beanutils.BeanHelper.initBeanProperties(BeanHelper.java:273)
    at org.apache.commons.configuration2.beanutils.BeanHelper.initBean(BeanHelper.java:192)
    at org.apache.commons.configuration2.beanutils.BeanHelper$BeanCreationContextImpl.initBean(BeanHelper.java:669)
    at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.initBeanInstance(DefaultBeanFactory.java:162)
    at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.createBean(DefaultBeanFactory.java:116)
    at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:459)
    at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:479)
    at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:492)
    at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResultInstance(BasicConfigurationBuilder.java:447)
    at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResult(BasicConfigurationBuilder.java:417)
    at …
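Worth noting: the FluentPropertyBeanIntrospector entries above are logged at INFO/DEBUG and are generally harmless; a "connection refused" usually means one of the daemons never actually came up. A quick diagnostic sketch (the log filename pattern is the usual default, not guaranteed for every install):

```shell
# List the running Hadoop JVMs. A healthy pseudo-distributed node typically shows
# NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager.
jps
# If ResourceManager is missing, look for the first real failure in its log
# rather than the benign beanutils lines:
grep -m1 -E "ERROR|FATAL" "$HADOOP_HOME"/logs/*resourcemanager*.log
```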

java hadoop resourcemanager hadoop3

2 votes · 1 answer · 3884 views

Hadoop 3: how to configure/enable erasure coding?

I'm trying to set up a Hadoop 3 cluster.

Two questions about the erasure coding feature:

  1. How do I make sure erasure coding is enabled?
  2. Do I still need to set the replication factor to 3?

Please point me to the configuration properties related to erasure coding/replication, so I can get the same data safety as in Hadoop 2 (replication factor 3) but with the disk-space benefit of Hadoop 3 erasure coding (only 50% overhead instead of 200%).
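Hadoop 3 manages erasure coding per directory through the `hdfs ec` subcommand rather than through a replication property; files written under an EC directory ignore dfs.replication. A sketch using the built-in Reed-Solomon 6+3 policy (the `/data` path is a placeholder):

```shell
# Show which erasure-coding policies exist and which are currently enabled.
hdfs ec -listPolicies
# Enable the built-in Reed-Solomon 6+3 policy (~50% storage overhead).
hdfs ec -enablePolicy -policy RS-6-3-1024k
# Apply it to a directory; new files written there will be erasure-coded.
hdfs ec -setPolicy -path /data -policy RS-6-3-1024k
# Verify which policy a path uses.
hdfs ec -getPolicy -path /data
```

Directories without an EC policy keep using plain replication, so dfs.replication=3 still applies to them; the two mechanisms coexist per path.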

hadoop bigdata hdfs erasure-code hadoop3

1 vote · 1 answer · 2250 views

Hadoop web UI at localhost:50070 won't open

Ubuntu 16.04.1 LTS
Hadoop 3.3.1

I followed a web tutorial to try to set up Hadoop in pseudo-distributed mode, using the steps below.
Step 1: Set up Hadoop
1. Add the following to /etc/profile.

export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME 
export HADOOP_COMMON_HOME=$HADOOP_HOME 
export HADOOP_HDFS_HOME=$HADOOP_HOME 
export YARN_HOME=$HADOOP_HOME 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin 
export HADOOP_INSTALL=$HADOOP_HOME 

2. In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, set

export JAVA_HOME=/opt/jdk1.8.0_261

core-site.xml:

<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>
</configuration>

hdfs-site.xml:

<configuration>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/hadoop/hadoop/pseudo/hdfs/namenode</value>
   </property>
   <property>
      <name>dfs.data.dir</name> 
      <value>file:///home/hadoop/hadoop/pseudo/hdfs/datanode</value> 
   </property>
</configuration>

yarn-site.xml:

<configuration>
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value> 
   </property>
</configuration>

mapred-site.xml:

<configuration>
   <property> 
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>

Step 2: Verify Hadoop
1. $ hdfs namenode -format
2.

sudo apt-get install ssh
ssh-keygen -t rsa
ssh-copy-id …
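One likely culprit: Hadoop 3 moved the NameNode web UI from port 50070 to 9870 (the Hadoop 3.1.1 question above also uses 9870), so on Hadoop 3.3.1 the old URL will never respond even when everything is healthy. A quick check, sketched:

```shell
# Hadoop 3.x serves the NameNode UI on 9870; 50070 was the Hadoop 1/2 port.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9870
# A running NameNode typically returns 200 here; a refused connection means
# the daemon is not up (check jps and the NameNode log).
```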

hadoop3

1 vote · 1 answer · 2526 views