Configuring Apache HDFS Encryption

Additional Changes in Behavior with HDFS-Encrypted Tables

The following additional behavioral changes apply when working with HDFS-encrypted tables.

  • Users reading data from read-only encrypted tables must have access to a temporary directory that is encrypted with encryption at least as strong as that of the table.

  • By default, temporary data related to HDFS encryption is written to a staging directory identified by the hive.exec.stagingdir property set in hive-site.xml; the staging directory is created inside the folder associated with the table.
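As a sketch, the staging directory can be set in hive-site.xml as follows (the value shown is the conventional default; adjust the path for your deployment):

```xml
<!-- hive-site.xml: location of Hive's staging (temp) directory,
     created relative to the table's folder -->
<property>
  <name>hive.exec.stagingdir</name>
  <value>.hive-staging</value>
</property>
```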

  • As of HDP-2.6.0, Hive INSERT OVERWRITE queries require a Ranger URI policy to allow write operations, even if the user has write privileges granted through an HDFS policy. To fix failing Hive INSERT OVERWRITE queries:
    1. Create a new policy under the Hive repository.
    2. In the dropdown that displays Database, select URI.
    3. Update the path (Example: /tmp/*).
    4. Add the users and groups, then save the policy.
    5. Retry the insert query.
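For illustration, a query like the following (table and path names are hypothetical) would fail with a Ranger authorization error until a URI policy covering its target path, such as /tmp/*, grants write access:

```sql
-- Illustrative INSERT OVERWRITE; requires a Ranger URI policy
-- on the target path in addition to any HDFS write privileges.
INSERT OVERWRITE DIRECTORY '/tmp/encrypted-out'
SELECT * FROM sample_table;
```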
  • When using encryption with Trash enabled, table deletion operates differently from the default trash mechanism. For more information, see “Deleting Files from an Encryption Zone with Trash Enabled”.