Post by Mou*_*tta

How to append to a file on HDFS in an extremely small cluster (3 nodes or less)

I am trying to append to a file on HDFS in a single-node cluster. I also tried a 2-node cluster, but I get the same exceptions.

In hdfs-site.xml I have set dfs.replication to 1. If I set dfs.client.block.write.replace-datanode-on-failure.policy to DEFAULT, I get the following exception:

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.10.37.16:50010], original=[10.10.37.16:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

If I follow the advice in the comments in hdfs-default.xml for configuring extremely small clusters (3 nodes or less) and set dfs.client.block.write.replace-datanode-on-failure.policy to NEVER, I get the following exception:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot append to file/user/hadoop/test. Name node is in safe mode.
The reported blocks 1277 has reached the threshold 1.0000 …
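For context, the comment in hdfs-default.xml suggests that on a 1-3 node cluster there is no spare datanode to swap into the write pipeline, so datanode replacement should be disabled entirely. A minimal hdfs-site.xml sketch of the settings described above (property names are from hdfs-default.xml; the enable flag is an assumption about the companion setting usually paired with the policy):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- Assumed companion flag: disable pipeline datanode replacement,
       since a 1-3 node cluster has no replacement node available. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>
</configuration>
```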

java hadoop hdfs
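These are client-side settings, so they can also be applied on the Configuration object used by the appending client rather than only in hdfs-site.xml. A hedged Java sketch of the append call (the path /user/hadoop/test is taken from the exception above; the NameNode URI is a placeholder assumption and requires a running cluster):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side pipeline-recovery settings for very small clusters
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // Placeholder NameNode address; adjust to your cluster
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf);
        try (FSDataOutputStream out = fs.append(new Path("/user/hadoop/test"))) {
            out.writeBytes("appended line\n");
        }
    }
}
```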

12 votes · 1 solution · 3899 views
