How to change where Hadoop HDFS stores files locally

Hah*_*pro 1 hadoop hdfs

I found the question "Where does HDFS store files locally by default?".

My HDFS stores its data under the /tmp/ folder, which gets wiped by the system.

I want to change the location where HDFS stores files on the local filesystem.

I looked in hdfs-default.xml but could not find dfs.data.dir.

Running bin/hadoop version prints:

Hadoop 2.8.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb
Compiled by jdu on 2017-10-19T20:39Z
Compiled with protoc 2.5.0
From source with checksum dce55e5afe30c210816b39b631a53b1d
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.2.jar

Edit
I would like the details:
Which file should I edit, and how, to change where HDFS stores files locally?
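For reference, assuming the hdfs command from this installation is on the PATH, hdfs getconf shows which values are actually in effect (the defaults point under hadoop.tmp.dir, which itself defaults to /tmp/hadoop-${user.name}, which is why the data ends up in /tmp):

# Print the effective storage directories; defaults apply if nothing is overridden
hdfs getconf -confKey hadoop.tmp.dir          # default: /tmp/hadoop-${user.name}
hdfs getconf -confKey dfs.namenode.name.dir   # default: file://${hadoop.tmp.dir}/dfs/name
hdfs getconf -confKey dfs.datanode.data.dir   # default: file://${hadoop.tmp.dir}/dfs/data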

Hah*_*pro 5

Thanks to @ultimoTG for the hint.

So my solution was to locate the file named hdfs-default.xml in my Hadoop directory (this file is for reference only; changing the configuration there has no effect):

$HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
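If you are not sure where that reference copy lives in your installation, a simple find (assuming $HADOOP_HOME is set) should locate it:

# Locate the bundled, documentation-only copy of hdfs-default.xml
find "$HADOOP_HOME" -name hdfs-default.xml 2>/dev/null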

Then I copied the properties I wanted to change from hdfs-default.xml into $HADOOP_HOME/etc/hadoop/hdfs-site.xml and modified their values there.

Here is my $HADOOP_HOME/etc/hadoop/hdfs-site.xml, which changes the local directories where HDFS stores files to a folder under Downloads:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>


<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/my_name/Downloads/hadoop_data/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table(fsimage).  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/my_name/Downloads/hadoop_data/dfs/data</value>
  <description>Determines where on the local filesystem an DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices. The directories should be tagged
  with corresponding storage types ([SSD]/[DISK]/[ARCHIVE]/[RAM_DISK]) for HDFS
  storage policies. The default storage type will be DISK if the directory does
  not have a storage type tagged explicitly. Directories that do not exist will
  be created if local filesystem permission allows.
  </description>
</property>

</configuration>
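After editing hdfs-site.xml, the daemons have to be restarted for the new directories to take effect, and on a fresh single-node setup the new NameNode directory also has to be formatted first. A minimal sketch, assuming the standard Hadoop 2.8 sbin scripts and that there is no existing HDFS data you need to keep:

# WARNING: formatting erases HDFS metadata; only do this on a fresh/empty cluster
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh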