After HDFS is enabled, you need the appropriate permissions to perform operations on it, such as reading data or creating folders.

Add a configuration

The configurations related to HDFS permission are as follows:

  • dfs.permissions.enabled

    Enables the permission check. Note that even if the value is false, chmod/chgrp/chown/setfacl still performs a permission check.


  • dfs.datanode.data.dir.perm

    The permission of the local folder used by the datanode, which is 755 by default.

  • fs.permissions.umask-mode
    • Permission mask (the default permission settings applied when creating a new file/folder)
    • File creation: 0666 & ^umask
    • Folder creation: 0777 & ^umask
    • Default umask value is 022. This means that the permission for file creation is 644 (666&^022 = 644), and permission of folder creation is 755 (777&^022 = 755).
    • The default setting of the Kerberos security cluster in E-MapReduce is 027. The permission for file creation is 640, and for folder creation it is 750.
  • dfs.namenode.acls.enabled
    • Enables ACL control. In addition to permission control for the owner/group, you can also grant permissions to other specific users.
    • Commands for setting ACL:
      hadoop fs -getfacl [-R] <path>
      hadoop fs -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]
      For example:
      su test
      # The user test creates a folder
      hadoop fs -mkdir /tmp/test
      # View the permissions of the created folder
      hadoop fs -ls /tmp
      drwxr-x---   - test   hadoop          0 2017-11-26 21:18 /tmp/test
      # Set an ACL granting rwx permissions to user foo
      hadoop fs -setfacl -m user:foo:rwx /tmp/test
      # View the permissions again (the + suffix means an ACL is set)
      hadoop fs -ls /tmp/
      drwxrwx---+  - test   hadoop          0 2017-11-26 21:18 /tmp/test
      # View the ACL
      hadoop fs -getfacl /tmp/test
      # file: /tmp/test
      # owner: test
      # group: hadoop
      user::rwx
      user:foo:rwx
      group::r-x
      mask::rwx
      other::---
  • dfs.permissions.superusergroup

    Super user group. Users in this group have super user permissions.
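The umask arithmetic described under fs.permissions.umask-mode can be sanity-checked with ordinary shell integer math; this is a minimal sketch using the default values quoted above (022 for non-Kerberos clusters, 027 for Kerberos clusters):

```shell
# Default umask 022: files 0666 & ~022, folders 0777 & ~022
printf 'file: %o\n' $(( 0666 & ~0022 ))   # file: 644
printf 'dir:  %o\n' $(( 0777 & ~0022 ))   # dir:  755

# Kerberos-cluster default umask 027
printf 'file: %o\n' $(( 0666 & ~0027 ))   # file: 640
printf 'dir:  %o\n' $(( 0777 & ~0027 ))   # dir:  750
```

In shell arithmetic, a leading 0 makes the constant octal, so the expressions mirror the 0666 & ^umask and 0777 & ^umask formulas directly.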

Restart the HDFS service

For Kerberos security clusters, HDFS permissions have been set by default (umask is set to 027). Configuration and service restart are not necessary.

For non-Kerberos security clusters, a configuration must be added and the service must be restarted.


  • The umask value can be modified as required.
  • HDFS is a basic service, and Hive and HBase are built on top of it. Therefore, HDFS permission control must be configured before configuring other upper-layer services.
  • When permissions are enabled for HDFS, the service directories must be set up with appropriate permissions (such as /spark-history for Spark and /tmp/$user/ for YARN).
  • Sticky bit:

    A sticky bit can be set on a folder to prevent anyone other than the superuser, the file owner, or the directory owner from deleting files or folders inside it (even if other users have rwx permissions on that folder). For example:

    # Set the sticky bit by adding the numeral 1 as the first digit of the mode
    hadoop fs -chmod 1777 /tmp
    hadoop fs -chmod 1777 /spark-history
    hadoop fs -chmod 1777 /user/hive/warehouse
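Because HDFS uses the same mode notation as POSIX filesystems, the effect of mode 1777 can be previewed locally with the ordinary chmod; this is a minimal sketch using a hypothetical /tmp/sticky-demo directory (the path is an illustration, not part of the original example):

```shell
# Create a world-writable folder and set its sticky bit (the leading 1)
mkdir -p /tmp/sticky-demo
chmod 1777 /tmp/sticky-demo

# The mode string now ends in 't' instead of 'x': drwxrwxrwt
ls -ld /tmp/sticky-demo

# [ -k path ] tests the sticky bit; with it set, only root, the file
# owner, or the directory owner may delete entries inside the folder,
# exactly as described for the HDFS folders above.
[ -k /tmp/sticky-demo ] && echo "sticky bit set"
```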