Synchronizing HDFS ACLs and Sentry Permissions

The HDFS-Sentry plugin allows you to configure synchronization of Sentry privileges with HDFS ACLs for specific HDFS directories.


The integration of Sentry and HDFS permissions automatically keeps HDFS ACLs in sync with the privileges configured with Sentry. This feature offers the easiest way to share data among Hive, Impala, and other components such as MapReduce, Spark, and Pig, while setting permissions for that data with just one set of rules through Sentry. It maintains the ability of Hive and Impala to set permissions on views, in addition to tables, while access to data outside of Hive and Impala (for example, reading files off HDFS) requires table permissions. HDFS permissions for some or all of the files that are part of tables defined in the Hive Metastore will now be controlled by Sentry.

Sentry-HDFS synchronization consists of three components:
  • An HDFS NameNode plugin
  • A Sentry-Hive Metastore plugin
  • A Sentry Service plugin

With synchronization enabled, Sentry translates permissions on databases and tables into the corresponding HDFS ACLs on the underlying files in HDFS. For example, if a user group is assigned to a Sentry role that has SELECT permission on a particular table, that user group will also have read access to the HDFS files that are part of that table, and that access will be listed as an HDFS ACL when you list those files. Similarly, if the role has SELECT permission on a database, the group gains read access (again visible as an HDFS ACL) to the HDFS files of all tables in that database.

Note that when Sentry was enabled, the hive user/group was given ownership of all files/directories in the Hive warehouse (/user/hive/warehouse). Hence, the resulting synchronized Sentry permissions will reflect this fact. If you skipped that step, Sentry permissions will be based on the existing Hive warehouse ACLs. Sentry will not automatically grant ownership to the hive user.

The mapping of Sentry privileges to HDFS ACLs is as follows:

  • SELECT privilege -> Read access on the file.
  • INSERT privilege -> Write access on the file.
  • ALL privilege -> Read and Write access on the file.
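
As an illustration of this mapping, the following sketch grants SELECT on a table through Beeline and then inspects the resulting ACL on the table's warehouse directory. The role name, table name, group name, and JDBC URL are all hypothetical, and the exact ACL output depends on your cluster configuration:

```shell
# Grant SELECT on a (hypothetical) table to a Sentry role held by group 'analysts'.
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM" \
    -e "GRANT SELECT ON TABLE sales TO ROLE analyst_role;"

# With synchronization enabled, the SELECT privilege surfaces as a read ACL
# entry on the table's files under the warehouse prefix:
hdfs dfs -getfacl /user/hive/warehouse/sales

# Expect a group entry reflecting read access, along the lines of:
#   group:analysts:r-x
```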

Note that you must explicitly specify the path prefix to the Hive warehouse (default: /user/hive/warehouse) and any other directories that must be managed by Sentry. This procedure is described in the Enabling the HDFS-Sentry Plugin section below.

Triggering HDFS ACL Changes

Because URIs have no impact on the HDFS-Sentry plugin, you cannot manage all of your HDFS ACLs with the plugin; you must continue to use standard HDFS ACLs for data outside of Hive.

HDFS ACL changes are triggered on:
  • Hive DATABASE object LOCATION (HDFS) when a role is granted to the object
  • Hive TABLE object LOCATION (HDFS) when a role is granted to the object
HDFS ACL changes are not triggered by:
  • Hive URI LOCATION (HDFS) when a role is granted to a URI
  • Hive SERVER object when a role is granted to the object. HDFS ACLs are not updated if a role is assigned to the SERVER. The privileges are inherited by child objects in standard Sentry interactions, but the plugin does not trickle the privileges down.
  • Permissions granted on views. Views are not synchronized as objects in the HDFS file system.

For more information about granting privileges on URIs, see Granting Privileges on URIs.


Prerequisites

  • CDH 5.3.0 or higher
  • (Strongly Recommended) Implement Kerberos authentication on your cluster.
The following conditions must also be true when enabling Sentry-HDFS synchronization. Failure to comply with any of these will result in validation errors.
  • You must use the Sentry service, not policy file-based authorization.
  • HDFS extended access control lists (ACLs) must be enabled.
  • There must be at least one Sentry service dependent on HDFS.
  • The Sentry service must have at least one Sentry Server role.
  • The Sentry service must have at least one dependent Hive service.
  • The Hive service must have at least one Hive metastore role.

Enabling the HDFS-Sentry Plugin

  1. Go to the HDFS service.
  2. Click the Configuration tab.
  3. Select Scope > HDFS (Service-Wide).
  4. Type Check HDFS Permissions in the Search box.
  5. Select Check HDFS Permissions.
  6. Select Enable Sentry Synchronization.
  7. Locate the Sentry Synchronization Path Prefixes property or search for it by typing its name in the Search box.
  8. Edit the Sentry Synchronization Path Prefixes property to list HDFS path prefixes where Sentry permissions should be enforced. Multiple HDFS path prefixes can be specified. By default, this property points to /user/hive/warehouse and must always be non-empty. If you are using a non-default location for the Hive warehouse, make sure you add it to the list of path prefixes. HDFS privilege synchronization will not occur for tables and databases located outside the HDFS regions listed here.
  9. Click Save Changes.
  10. Restart the cluster. Note that it may take an additional two minutes after cluster restart for privilege synchronization to take effect.
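
After the restart (and the short synchronization delay), one quick sanity check is to inspect ACLs under a configured path prefix. This is a sketch assuming the default warehouse prefix; if ACL entries derived from your Sentry grants do not appear, synchronization has likely not taken effect yet:

```shell
# List ACLs recursively under the default warehouse prefix and look for
# group entries that correspond to your Sentry role grants.
hdfs dfs -getfacl -R /user/hive/warehouse | head -n 40
```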

Testing the Sentry Synchronization Plugins

The following tasks will help you ensure that Sentry-HDFS synchronization has been enabled and configured correctly:

For a folder that has been enabled for the plugin, such as the Hive warehouse, try accessing the files in that folder outside Hive and Impala. To do this, you should know which tables and databases those HDFS files belong to and the Sentry permissions on those tables.

  • Attempt to view or modify the Sentry permissions settings for those tables using one of the following tools:
    • (Recommended) Hue's Security application
    • HiveServer2 CLI
    • Impala CLI
  • Access the tables and databases directly in HDFS. For example:
    • List files inside the folder and verify that the file permissions shown in HDFS (including ACLs) match what was configured in Sentry.
    • Run a MapReduce, Pig, or Spark job that accesses those files. Pick any tool besides HiveServer2 and Impala.
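
The direct-HDFS checks above can be sketched as shell commands. The file path and jar location are assumptions for illustration; run the read commands as a user whose group holds the relevant Sentry privilege, then repeat them as a user without it and confirm they fail with a permission error:

```shell
# As a user in a group granted SELECT on the (hypothetical) table,
# reading a data file directly from HDFS should succeed:
hdfs dfs -cat /user/hive/warehouse/sales/part-00000 | head

# A non-Hive job against the same files exercises the same HDFS-level
# enforcement; for example, a MapReduce wordcount (jar path is hypothetical
# and varies by installation):
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
    wordcount /user/hive/warehouse/sales /tmp/sentry_sync_test_out
```

The same commands run as a user with no Sentry privilege on the table should be denied, confirming that the synchronized ACLs, not just Hive-level checks, are enforcing access.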