HDFS: granting permissions on files and all of their parent directories

Mar*_*ace 3 java hadoop scala hdfs hadoop2

I have the following data (2 files) in HDFS:

/a
  /b
    /c
      /f1.txt
      /f2.txt

I want to change the permissions of f1.txt and f2.txt to 644, e.g. hadoop fs -chmod 644 /a/b/c/*.txt

However, to actually grant access to these files, I would also need to change the permissions of the directories that contain them, /b and /c, to 755 (+x). Note: I don't own /a, and it is already world-readable.

Is there a hadoop fs command that lets me do this? What about Java/Scala code?

roh*_*roh 5

You can use ACLs for this.

Grant a user read, write, and execute access:

hdfs dfs -setfacl -m -R user:UserName:rwx /a/b/c/f1.txt
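
The same thing can be done from Java (the question also asked about Java/Scala) through Hadoop's FileSystem API, which exposes modifyAclEntries. A minimal sketch, assuming the hdfs://somehost:8020 NameNode address and the UserName principal used above; the class name is just for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

import java.util.Collections;

public class SetAclExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://somehost:8020"); // placeholder NameNode address
        FileSystem fs = FileSystem.get(conf);

        // Equivalent of: hdfs dfs -setfacl -m user:UserName:rwx /a/b/c/f1.txt
        AclEntry entry = new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setName("UserName")          // placeholder user
                .setPermission(FsAction.ALL)  // rwx
                .build();
        fs.modifyAclEntries(new Path("/a/b/c/f1.txt"), Collections.singletonList(entry));

        fs.close();
    }
}

modifyAclEntries behaves like -setfacl -m: it adds or updates the given entries and leaves the rest of the ACL alone. Also note that ACLs must be enabled on the NameNode (dfs.namenode.acls.enabled) for either the shell command or the API call to succeed; on older clusters this defaults to off.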

If you want to view the ACLs on a file, use getfacl:

hdfs dfs -getfacl -R hdfs://somehost:8020/a/b/c/f1.txt
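
The Java counterpart of getfacl is FileSystem.getAclStatus. A small sketch along the same lines, again with the placeholder NameNode address and an illustrative class name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclStatus;

public class GetAclExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://somehost:8020"); // placeholder NameNode address
        FileSystem fs = FileSystem.get(conf);

        // Equivalent of: hdfs dfs -getfacl /a/b/c/f1.txt
        AclStatus status = fs.getAclStatus(new Path("/a/b/c/f1.txt"));
        System.out.println("owner: " + status.getOwner() + ", group: " + status.getGroup());
        for (AclEntry entry : status.getEntries()) {
            System.out.println(entry);  // extended ACL entries, e.g. user:UserName:rwx
        }

        fs.close();
    }
}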

From the Hadoop documentation on setfacl:

setfacl

Usage: hdfs dfs -setfacl [-R] [-b|-k -m|-x <acl_spec> <path>]|[--set <acl_spec> <path>]

Sets Access Control Lists (ACLs) of files and directories.

Options:

-b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
-k: Remove the default ACL.
-R: Apply operations to all files and directories recursively.
-m: Modify ACL. New entries are added to the ACL, and existing entries are retained.
-x: Remove specified ACL entries. Other ACL entries are retained.
--set: Fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
acl_spec: Comma separated list of ACL entries.
path: File or directory to modify.

Examples:

hdfs dfs -setfacl -m user:hadoop:rw- /file
hdfs dfs -setfacl -x user:hadoop /file
hdfs dfs -setfacl -b /file
hdfs dfs -setfacl -k /dir
hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
hdfs dfs -setfacl -R -m user:hadoop:r-x /dir
hdfs dfs -setfacl -m default:user:hadoop:r-x /dir
Exit Code:

Returns 0 on success and non-zero on error.
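
Coming back to the original question (chmod 644 on the files, plus the need to traverse /b and /c), one possible combination is to call setPermission on the files and add an execute-only ACL entry on the parent directories for the user who needs access, instead of making the directories 755. A sketch with the same placeholder host and user name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

import java.util.Collections;

public class GrantAccessExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://somehost:8020"); // placeholder NameNode address
        FileSystem fs = FileSystem.get(conf);

        // Plain chmod, like: hadoop fs -chmod 644 /a/b/c/*.txt
        FsPermission perm644 = new FsPermission((short) 0644);
        fs.setPermission(new Path("/a/b/c/f1.txt"), perm644);
        fs.setPermission(new Path("/a/b/c/f2.txt"), perm644);

        // Grant execute (directory traversal) only, via an ACL entry on the parents,
        // so the user can reach the files without /b and /c being changed to 755.
        AclEntry traverse = new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setName("UserName")              // placeholder user
                .setPermission(FsAction.EXECUTE)
                .build();
        fs.modifyAclEntries(new Path("/a/b"), Collections.singletonList(traverse));
        fs.modifyAclEntries(new Path("/a/b/c"), Collections.singletonList(traverse));

        fs.close();
    }
}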