JJ1*_*180 24 configuration hadoop hdfs
I am getting the following error when trying to write to HDFS as part of my multi-threaded application:
could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
I have tried the top-rated answer here about reformatting, but it did not work for me: HDFS error: could only be replicated to 0 nodes, instead of 1
Here is what is happening: each thread creates its own PartitionTextFileWriter. Threads 1 and 2 will not be writing to the same file, although they do share a parent directory at the root of my directory tree.
There are no disk space problems on my servers.
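For reference, each writer ultimately does the kind of HDFS write sketched below (a minimal sketch only; the internals of PartitionTextFileWriter are not shown, and the NameNode address is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode RPC address (illustrative; port 9000 matches the IPC log below)
        conf.set("fs.defaultFS", "hdfs://namenode-host:9000");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/metrics/abc/myfile"), true)) {
            // The IOException below is thrown from this write path when the
            // NameNode cannot allocate any DataNode for the new block.
            out.writeUTF("some metric line\n");
        }
    }
}
```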
I also see this in my name-node logs, but am not sure what it means:
2016-03-15 11:23:12,149 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-03-15 11:23:12,150 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-03-15 11:23:12,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-03-15 11:23:12,151 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.104.247.78:52004 Call#61 Retry#0
java.io.IOException: File /metrics/abc/myfile could only be replicated to 0 nodes instead of [2016-03-15 13:34:16,663] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
What could be causing this error?

Thanks
Era*_*nli 18
This error comes from HDFS's block replication system: the NameNode could not place even a single replica of a block of the file being written. Common reasons are DataNodes that are down or unreachable from the NameNode, or DataNodes that have no usable space for the requested storage type (note the unavailable=[DISK] in your name-node log).
Please also see:

Reference: https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo

Reference: https://support.pivotal.io/hc/en-us/articles/201846688-HDFS-reports-Configured-Capacity-0-0-B-for-datanode

Also check: Writing to HDFS from Java, getting "could only be replicated to 0 nodes instead of minReplication"
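If you prefer to check this from code rather than the NameNode web UI, a rough sketch (assuming a standard DistributedFileSystem and that the NameNode address below is adjusted to your cluster) is:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class HdfsHealthCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:9000"); // adjust to your cluster

        try (FileSystem fs = FileSystem.get(conf)) {
            // Overall capacity / used / remaining as reported by the NameNode
            FsStatus status = fs.getStatus();
            System.out.printf("capacity=%d used=%d remaining=%d%n",
                    status.getCapacity(), status.getUsed(), status.getRemaining());

            if (fs instanceof DistributedFileSystem) {
                // Zero entries here, or zero remaining space on every node,
                // explains the "could only be replicated to 0 nodes" failure.
                for (DatanodeInfo dn : ((DistributedFileSystem) fs).getDataNodeStats()) {
                    System.out.printf("%s remaining=%d%n", dn.getHostName(), dn.getRemaining());
                }
            }
        }
    }
}
```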
小智 5
Another reason could be that your DataNode machine has not exposed its port (50010 by default). In my case, I was trying to write a file from Machine1 to HDFS running in a Docker container C1 hosted on Machine2. For the host machine to forward requests to the service running in the container, port forwarding has to be set up. The problem was resolved after I forwarded port 50010 from the host machine to the guest machine.
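A quick way to confirm this from the client machine is a plain socket connect to the DataNode transfer port (a hypothetical diagnostic sketch; the hostname below is a placeholder):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class DataNodePortCheck {
    public static void main(String[] args) {
        String dataNodeHost = "machine2.example.com"; // placeholder for the Docker host
        int port = 50010;                             // default dfs.datanode.address port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(dataNodeHost, port), 5000);
            System.out.println("DataNode port is reachable");
        } catch (Exception e) {
            // If this fails while the NameNode RPC port works, block writes will
            // still fail even though the cluster looks healthy to the NameNode.
            System.out.println("Cannot reach DataNode port: " + e);
        }
    }
}
```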