Command Line Upgrade

Configure and Start Apache Ranger

This section describes how to upgrade the Apache Ranger service.

Preparing Your Cluster to Upgrade Ranger

If you are not currently using Ambari to manage your Hadoop cluster, you need to upgrade Ranger manually to the latest version. This section describes the steps you need to follow to prepare your cluster for the Ranger upgrade.

  1. Back up the following Ranger configuration directories:

    • Ranger Policy Administration Service

      /etc/ranger/admin/conf
    • Ranger UserSync

      /etc/ranger/usersync/conf
    • Ranger Plugins:

      • Hadoop

        /etc/hadoop/conf
      • Hive

        /etc/hive/conf
      • HBase

        /etc/hbase/conf
      • Knox

        /etc/knox/conf
      • Storm

        /etc/storm/conf
  2. Back up the Ranger Policy and Audit databases. Make sure to take note of the following details in the install.properties file (a combined backup sketch follows the dump commands below):

    • db_host

    • db_name

    • db_user

    • db_password

    • policy manager configuration

    • LDAP directory configuration

    • LDAP settings

    • LDAP AD domain

    • LDAP URL

      mysqldump -u root -p ranger > dest_dir/filename.sql
      mysqldump -u root -p ranger_audit > dest_dir/audit_filename.sql
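
    For example, the two backup steps can be combined into a short script such as the following sketch. The destination directory and the install.properties location are assumptions; adjust both for your environment.

      # step 1: back up the Ranger and plugin configuration directories
      mkdir -p /tmp/ranger-backup
      tar czf /tmp/ranger-backup/ranger-conf-backup.tar.gz \
          /etc/ranger/admin/conf /etc/ranger/usersync/conf \
          /etc/hadoop/conf /etc/hive/conf /etc/hbase/conf /etc/knox/conf /etc/storm/conf

      # step 2: record the database, policy manager, and LDAP settings needed after the upgrade
      grep -E '^(db_|policymgr|authentication_method|xa_ldap)' \
          /usr/hdp/current/ranger-admin/install.properties > /tmp/ranger-backup/ranger-install-settings.txt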

Stop the Ranger Services

Now that you have prepared your cluster for the Ranger upgrade, you need to stop the Ranger Admin and Ranger UserSync services. To stop the Ranger services, perform the steps described below.

  1. Stop the Ranger Policy Admin service. When the service is stopped, you receive an acknowledgement from the server that the service has been stopped.

    service ranger-admin stop
  2. Stop the Ranger UserSync service. When the service is stopped, you receive an acknowledgement from the server that the service has been stopped.

    service ranger-usersync stop
  3. Stop the applicable services using the Ranger plugin (HDFS, HBase, Hive, Knox, Storm).

    See Stopping HDP Services for more information.
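
Before continuing, you can optionally confirm that no Ranger processes are left running. This quick check is not part of the original procedure, and the grep pattern may need adjusting for your environment:

    # should print nothing if both services stopped cleanly
    ps -ef | grep -i -E 'ranger-(admin|usersync)' | grep -v grep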

Preparing the Cluster for Upgrade

Before you begin the upgrade process, you need to perform a series of steps to prepare the cluster for upgrade. These steps are described in the "Getting Ready To Upgrade" section of this guide, which you need to follow before continuing to upgrade Ranger. Some of these steps include:

  • Backing up HDP directories

  • Stopping all long-running applications and services

  • Backing up the Hive and Oozie metastore databases (a dump sketch follows this list)

  • Backing up Hue

  • Backing up specific directories and configurations
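
If the Hive and Oozie metastores are hosted in MySQL, the metastore backups can be taken with mysqldump in the same way as the Ranger databases above. The database names hive and oozie are assumptions; use the names configured in your cluster.

    mysqldump -u root -p hive > dest_dir/hive_metastore.sql
    mysqldump -u root -p oozie > dest_dir/oozie_metastore.sql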

Registering the HDP 2.5.0 Repo

After you have prepared your cluster for the upgrade, you need to register the HDP 2.5.0 repo. This requires you to perform the following steps:

  1. (Optional) The Ranger components should already have been installed at the beginning of the HDP upgrade process, but you can use the following commands to confirm that the Ranger packages have been installed:

    hdp-select status ranger-admin
    hdp-select status ranger-usersync

    If the packages have not been installed, you can use the install commands specific to your OS. For example, for RHEL/CentOS you would use the following commands to install the packages.

    yum install ranger_2_5_*-admin
    yum install ranger_2_5_*-usersync
  2. Select the Ranger Admin and Ranger UserSync versions you want to use.

    hdp-select set ranger-admin <HDP_server_version>
    hdp-select set ranger-usersync <HDP_server_version>
  3. Change ownership of /etc/ranger/admin/conf/ and /etc/ranger/usersync/conf/ to ranger:ranger:

    chown -R ranger:ranger /etc/ranger/admin/conf/
    chown -R ranger:ranger /etc/ranger/usersync/conf/
  4. Solr must be installed and configured before installing Ranger Admin or any of the Ranger component plugins.

    For information regarding installation and configuration of Solr, see Using Apache Solr for Ranger Audits.

  5. Update the install.properties file to migrate the database credentials properties and the POLICYMGR_EXTERNAL-URL property from HDP 2.2 to HDP 2.5.0 (an example snippet appears after this procedure).

    Table 3.4. Ranger_Admin install.properties names and values

    Property Name             Property Value
    DB_FLAVOR                 MYSQL (other options: ORACLE | POSTGRES | MSSQL | SQLA)
    db_root_user              root
    db_root_password          password of the db (for example, vagrant)
    db_host                   hostname of the database server
    policymgr_external_url    http://<hostname>:6080
    policymgr_http_enabled    true
    authentication_method     UNIX (other options: LDAP | ACTIVE_DIRECTORY | NONE)
    audit_store               solr
    audit_solr_urls           http://<solr_host>:6083/solr/ranger_audits

    Note

    When you migrate to the new version, you have to remove /usr/bin/ranger-admin, which points to the older version of the Ranger Admin start file, /usr/hdp/<version>/ranger-admin/ews/ranger-admin-services.sh. After you remove this file, you have to run setup again.

  6. Install the Ranger Admin component. Be sure to set the JAVA_HOME environment variable if it is not already set.

    cd /usr/hdp/current/ranger-admin/
    
    cp ews/webapp/WEB-INF/classes/conf.dist/ranger-admin-site.xml ews/webapp/WEB-INF/classes/conf/
    
    cp ews/webapp/WEB-INF/classes/conf.dist/ranger-admin-default-site.xml ews/webapp/WEB-INF/classes/conf/
    
    cp ews/webapp/WEB-INF/classes/conf.dist/security-applicationContext.xml ews/webapp/WEB-INF/classes/conf/
    
    ./setup.sh

    This should successfully install the Ranger Admin component.

  7. Start the Ranger Admin component.

    service ranger-admin start
  8. Configure and set up the Ranger UserSync component by migrating the properties from the HDP 2.2 install.properties file (POLICY_MGR_URL, SYNC_SOURCE, and LDAP/AD properties); an example snippet appears after this procedure.

    Table 3.5. Ranger_Usersync install.properties names and values

    Property Name     Property Value
    POLICY_MGR_URL    http://<hostname>:6080
    SYNC_SOURCE       unix
    SYNC_INTERVAL     5

    Note

    When you migrate to the new version, you have to remove /usr/bin/ranger-usersync, which points to the older version of the UserSync start file, /usr/hdp/<version>/ranger-usersync/ranger-usersync-services.sh. After you remove this file, you have to run setup again.

  9. Install the Ranger UserSync component. Be sure to set the JAVA_HOME environment variable if it is not already set.

    cd /usr/hdp/current/ranger-usersync/
    
    ./setup.sh
  10. Start the Ranger UserSync component.

    service ranger-usersync start
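
For reference, the migrated values from Table 3.4 and Table 3.5 might look like the following sketch in the HDP 2.5.0 install.properties files. The hostnames and password shown are placeholders, not values to copy verbatim.

    # ranger-admin install.properties (excerpt)
    DB_FLAVOR=MYSQL
    db_root_user=root
    db_root_password=vagrant
    db_host=mydb.example.com
    policymgr_external_url=http://rangerhost.example.com:6080
    policymgr_http_enabled=true
    authentication_method=UNIX
    audit_store=solr
    audit_solr_urls=http://solrhost.example.com:6083/solr/ranger_audits

    # ranger-usersync install.properties (excerpt)
    POLICY_MGR_URL=http://rangerhost.example.com:6080
    SYNC_SOURCE=unix
    SYNC_INTERVAL=5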

Install the Ranger Components

Next, you need to re-install each Ranger component to ensure you have the latest version. Because you have already upgraded your HDP stack, you only need to follow the instructions in the Non-Ambari Cluster Installation Guide to install each Ranger component. You must install the following Ranger components:

  • Ranger Policy Admin

  • Ranger UserSync

  • Ranger Plugins:

    • HDFS

    • HBase

    • Hive

    • Knox

    • Storm

Note

When installing each Ranger component, you also need to make sure you upgrade each individual component to version 2.5.0 before restarting each service.

With this release, Ranger has also added support for the following components:

  • Solr

  • Kafka

  • YARN

Restart the Ranger Services

Once you have re-installed each Ranger component, restart these components so that the new configurations are loaded in your cluster. This includes restarting the Policy Admin and UserSync components, the NameNode, and each Ranger plugin (a NameNode restart sketch follows the service commands below).

Note

Before restarting the NameNode, make sure to remove set-hdfs-plugin-env.sh from /etc/hadoop/conf. You need to re-enable the HDFS plugin after finishing the upgrade.

The Non-Ambari Cluster Installation Guide describes how you can start the following Ranger services:

  • Ranger Policy Admin service

    service ranger-admin start
  • Ranger UserSync service

    service ranger-usersync start
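
If the HDFS plugin is in use, the NameNode can be restarted with the same commands used later in the HDFS plugin section of this guide; the hadoop-daemon.sh path shown is this guide's example and may differ in your installation.

    su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop namenode"
    su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start namenode"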

Enable Ranger Plugins

The final step in the Ranger upgrade process requires you to re-enable the Ranger plugins. Although you are only required to enable HDFS in your cluster, you should re-enable all of the Ranger plugins because class names have changed for the 2.5.0 release, and to ensure smooth operation of Ranger services in your cluster.

Note

When you enable each Ranger plugin, be sure to remove all 2.2 class name values.

Note

Re-enabling a Ranger plugin does not affect policies you have already created. As long as you use the same database as the Policy store, all of your data remains intact.

To re-enable the Ranger plugins, follow the instructions in the Non-Ambari Cluster Installation Guide that describe editing the install.properties file and enabling each Ranger plugin.

Important

Before enabling the HDFS plugin, remove set-hdfs-plugin-env.sh from /etc/hadoop/conf. You need to re-enable this plugin after the upgrade is complete.
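
For example, you might move the file aside rather than delete it outright, so it can be referenced later if needed; the backup location below is an assumption.

    mv /etc/hadoop/conf/set-hdfs-plugin-env.sh /tmp/set-hdfs-plugin-env.sh.bak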

Enable KMS Configuration

Configure and set up Ranger KMS by editing the KMS install.properties file (an example snippet follows the table):

Table 3.6. KMS install.properties names and values

Property Name          Property Value
DB_FLAVOR              MYSQL
db_root_user           root
db_root_password       <db password>
db_host                <db hostname>
db_name                rangerkms (default)
db_user                rangerkms (default)
db_password            <kms db password>
hadoop_conf            /etc/hadoop/conf
POLICY_MGR_URL         http://<hostname>:6080
REPOSITORY_NAME        kmsdev
XAAUDIT.SOLR.ENABLE    false (default), change to true if desired
XAAUDIT.SOLR.URL       http://<solr_host>:6083/solr/ranger_audits
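
As written in install.properties, the values above might look like the following sketch; the hostnames and passwords are placeholders.

    DB_FLAVOR=MYSQL
    db_root_user=root
    db_root_password=RootPassw0rd
    db_host=mydb.example.com
    db_name=rangerkms
    db_user=rangerkms
    db_password=KmsPassw0rd
    hadoop_conf=/etc/hadoop/conf
    POLICY_MGR_URL=http://rangerhost.example.com:6080
    REPOSITORY_NAME=kmsdev
    XAAUDIT.SOLR.ENABLE=true
    XAAUDIT.SOLR.URL=http://solrhost.example.com:6083/solr/ranger_audits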

Configure and Start Apache Ranger on a Kerberized Cluster

Beginning with HDP 2.5.0, Kerberos authentication is supported for Ranger and its plugins. Use this section if you have an HDP 2.2 cluster in a kerberized environment with Ranger in simple authentication mode that you want to upgrade.

For additional information regarding Kerberos, refer to Kerberos Overview in the Hadoop Security Guide.

Create Keytabs and Principals

  1. Follow these steps to create keytabs and principals:

    For Ranger Admin:

    1. Create rangeradmin/<FQDN of Ranger Admin>@<REALM>.

    2. > kadmin.local
      > addprinc -randkey rangeradmin/<FQDN of Ranger Admin>
      Eg: addprinc -randkey rangeradmin/ranger-upgrade-0707-2.openstacklocal
      > xst -k /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>
      Eg: xst -k /etc/security/keytabs/rangeradmin.keytab rangeradmin/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
      > exit

    For Ranger Lookup:

    1. Create rangerlookup/<FQDN of Ranger Admin>@<REALM>.

    2. > kadmin.local
      > addprinc -randkey  rangerlookup/<FQDN of Ranger Admin>
      Eg: addprinc -randkey rangerlookup/ranger-upgrade-0707-2.openstacklocal
      > xst -k /etc/security/keytabs/rangerlookup.keytab rangerlookup/<FQDN of Ranger  Admin>@<REALM>    
      Eg: xst -k /etc/security/keytabs/rangerlookup.keytab rangerlookup/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM       
      > exit
      

    For Ranger Usersync:

    1. Create rangerusersync/<FQDN>@<REALM>.

    2. > kadmin.local
      > addprinc -randkey rangerusersync/<FQDN of Ranger usersync>
      Eg: addprinc -randkey rangerusersync/ranger-upgrade-0707-2.openstacklocal
      > xst -k /etc/security/keytabs/rangerusersync.keytab rangerusersync/<FQDN>@<REALM>
      Eg: xst -k /etc/security/keytabs/rangerusersync.keytab rangerusersync/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
      > exit
      

    For Ranger Tagsync:

    1. Create rangertagsync/<FQDN>@<REALM>.

    2. > kadmin.local
      > addprinc -randkey rangertagsync/<FQDN of Ranger tagsync>
      Eg: addprinc -randkey rangertagsync/ranger-upgrade-0707-2.openstacklocal
      > xst -k /etc/security/keytabs/rangertagsync.keytab rangertagsync/<FQDN>@<REALM>
      Eg: xst -k /etc/security/keytabs/rangertagsync.keytab rangertagsync/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
      > exit
      

    For Ranger KMS:

    1. Create rangerkms/<FQDN of Ranger Admin>@<REALM>

    2. > kadmin.local
      > addprinc -randkey rangerkms/<FQDN of Ranger Admin>
      Eg: addprinc -randkey rangerkms/ranger-upgrade-0707-2.openstacklocal
      > xst -k /etc/security/keytabs/rangerkms.keytab rangerkms/<FQDN of Ranger Admin>@<REALM>
      Eg: xst -k /etc/security/keytabs/rangerkms.keytab rangerkms/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
      > exit
      
  2. If the Kerberos server and admin are on different hosts, copy the keytab to the Ranger Admin host, assign ownership to the ranger user, and restrict the file permissions (an optional keytab verification sketch appears at the end of this procedure).

    scp <rangeradmin_keytab_file> <ranger_admin_host>:<new_path>
    chown ranger <rangeradmin_keytab_path>
    chmod 400 <rangeradmin_keytab_path>
    
  3. Use kdestroy to delete the Kerberos credentials cache file.

  4. Set the following properties and values in the /etc/hadoop/conf/core-site.xml file:

    Table 3.7. Properties for the /etc/hadoop/conf/core-site.xml file

    Property Name                   Property Value
    fs.defaultFS                    hdfs://ranger-upgrade-0707-2.openstacklocal:8020
    hadoop.security.authentication  kerberos
    hadoop.security.authorization   true
    hadoop.security.auth_to_local   the RULE entries shown in the example core-site.xml file below


    The following is an example of a core-site.xml file with the properties set for Kerberos:

    <configuration>
    
        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://ranger-upgrade-0707-2.openstacklocal:8020</value>
          <final>true</final>
        </property>
    
        <property>
          <name>hadoop.security.authentication</name>
          <value>kerberos</value>
        </property>
    
       <property>
          <name>hadoop.security.authorization</name>
          <value>true</value>
       </property>
    
    <property>
          <name>hadoop.security.auth_to_local</name>
          <value>RULE:[1:$1@$0](ambari-qa-cluster1@EXAMPLE.COM)s/.*/ambari-qa/
    RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
    RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/
    RULE:[2:$1@$0](nn@EXAMPLE.COM)s/.*/hdfs/
    RULE:[2:$1@$0](rangeradmin@EXAMPLE.COM)s/.*/ranger/
    RULE:[2:$1@$0](rangerusersync@EXAMPLE.COM)s/.*/rangerusersync/
    DEFAULT</value>
        </property>
    
    </configuration>
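
Before running setup again for the Ranger components, you can optionally confirm that the keytabs created in step 1 are readable and contain the expected principals. This check is not part of the original procedure; the paths and principal shown are the example values used above.

    klist -kt /etc/security/keytabs/rangeradmin.keytab
    klist -kt /etc/security/keytabs/rangerusersync.keytab
    # obtain and then discard a ticket to prove the keytab works
    kinit -kt /etc/security/keytabs/rangeradmin.keytab rangeradmin/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
    kdestroy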
    

Run Setup Again for Ranger Admin

  1. Stop the ranger-admin service.

  2. Add the following properties and values to the ranger-admin install.properties file:

    Table 3.8. Ranger-admin install.properties

    Property Name       Property Value
    spnego_principal    HTTP/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
    spnego_keytab       <HTTP keytab path>
    token_valid         30
    cookie_domain       <FQDN_OF_Ranger_Admin_Cluster>
    cookie_path         /
    admin_principal     rangeradmin/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
    admin_keytab        <rangeradmin keytab path>
    lookup_principal    rangerlookup/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
    lookup_keytab       <rangerlookup keytab path>
    hadoop_conf         /etc/hadoop/conf

  3. Execute the setup.sh script.

  4. Start the ranger-admin service.

  5. Stop the ranger-usersync service.

  6. Add the following properties and values to the ranger-usersync install.properties file:

    Table 3.9. Ranger-usersync install.properties and values

    Property Name        Property Value
    usersync_principal   rangerusersync/<FQDN>@<REALM>
    usersync_keytab      <rangerusersync keytab path>
    hadoop_conf          /etc/hadoop/conf

  7. Execute the setup.sh script.

  8. Start the ranger-usersync service.

  9. Stop the ranger-tagsync service.

  10. Add the following properties and values to the ranger-tagsync install.properties file:

    Table 3.10. Ranger-tagsync install.properties and values

    Property Name       Property Value
    tagsync_principal   rangertagsync/<FQDN>@<REALM>
    tagsync_keytab      <rangertagsync keytab path>
    hadoop_conf         /etc/hadoop/conf

  11. Execute the setup.sh script.

  12. Start the ranger-tagsync service.

  13. Stop the ranger-kms service.

  14. Add the following properties and values to the ranger-kms install.properties file:

    Table 3.11. Ranger-kms install.properties and values

    Property Name    Property Value
    kms_principal    rangerkms/<FQDN>@<REALM>
    kms_keytab       <rangerkms keytab path>
    hadoop_conf      /etc/hadoop/conf

  15. Execute the setup.sh script.

  16. Start the ranger-kms service.
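
For reference, the Kerberos additions from Table 3.8 and Table 3.9 might look like the following sketch when written into the respective install.properties files; Table 3.10 and Table 3.11 follow the same pattern. The FQDN, realm, and keytab paths are the example values used earlier in this section.

    # ranger-admin install.properties (Kerberos additions)
    spnego_principal=HTTP/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
    spnego_keytab=/etc/security/keytabs/spnego.service.keytab
    token_valid=30
    cookie_domain=ranger-upgrade-0707-2.openstacklocal
    cookie_path=/
    admin_principal=rangeradmin/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
    admin_keytab=/etc/security/keytabs/rangeradmin.keytab
    lookup_principal=rangerlookup/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
    lookup_keytab=/etc/security/keytabs/rangerlookup.keytab
    hadoop_conf=/etc/hadoop/conf

    # ranger-usersync install.properties (Kerberos additions)
    usersync_principal=rangerusersync/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
    usersync_keytab=/etc/security/keytabs/rangerusersync.keytab
    hadoop_conf=/etc/hadoop/conf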

Install and Enable the Ranger HDFS Plugin

This section documents how to install and enable the Ranger HDFS plugin. You might want to consider making similar changes for the other Ranger plugins that you are using.

The Ranger HDFS plugin is located at /usr/hdp/<version>/ranger-hdfs-plugin.

Follow these steps to install and enable the Ranger HDFS Plugin:

  1. Edit the relevant lines in the install.properties file:

    POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
    REPOSITORY_NAME=hadoopdev
    # audit properties (Solr and HDFS destinations available)
    
  2. Enter the following commands to enable the HDFS plugin:

    export JAVA_HOME=<JAVA Path>
    ./enable-hdfs-plugin.sh
  3. Stop and start the NameNode:

    su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop namenode"
    su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start namenode"
    
  4. In the custom repo configuration file, add the component user, for example, hdfs, as a value for each of the following properties:

    • policy.download.auth.users or policy.grantrevoke.auth.users

    • tag.download.auth.users

  5. Verify that the plugin communicates with Ranger admin using the Audit → plugins tab.

  6. Set the following properties in the hdfs-site.xml file:

    Table 3.12. hdfs-site.xml Property Names and Values

    Property Name                                               Property Value
    dfs.permissions.enabled                                     true
    dfs.permissions.supergroup                                  hdfs
    dfs.block.access.token.enable                               true
    dfs.namenode.kerberos.principal                             nn/_HOST@EXAMPLE.COM
    dfs.secondary.namenode.kerberos.principal                   nn/_HOST@EXAMPLE.COM
    dfs.web.authentication.kerberos.principal                   HTTP/_HOST@EXAMPLE.COM
    dfs.web.authentication.kerberos.keytab                      /etc/security/keytabs/spnego.service.keytab
    dfs.datanode.kerberos.principal                             dn/_HOST@EXAMPLE.COM
    dfs.namenode.keytab.file                                    /etc/security/keytabs/nn.service.keytab
    dfs.secondary.namenode.keytab.file                          /etc/security/keytabs/nn.service.keytab
    dfs.datanode.keytab.file                                    /etc/security/keytabs/dn.service.keytab
    dfs.https.port                                              50470
    dfs.namenode.https-address                                  ip-10-111-59-170.ec2.internal:50470 (example)
    dfs.datanode.data.dir.perm                                  750
    dfs.cluster.administrators                                  hdfs
    dfs.namenode.kerberos.internal.spnego.principal             ${dfs.web.authentication.kerberos.principal}
    dfs.secondary.namenode.kerberos.internal.spnego.principal   ${dfs.web.authentication.kerberos.principal}


    The following is an example of a hdfs-site.xml file with the properties set for Kerberos:

    <property> 
         <name>dfs.permissions</name> 
         <value>true</value> 
         <description> If "true", enable permission checking in
         HDFS. If "false", permission checking is turned
         off, but all other behavior is
         unchanged. Switching from one parameter value to the other does
         not change the mode, owner or group of files or
         directories. </description> 
    </property> 
     
    <property> 
         <name>dfs.permissions.supergroup</name> 
         <value>hdfs</value> 
         <description>The name of the group of
         super-users.</description> 
    </property> 
     
    <property> 
         <name>dfs.namenode.handler.count</name> 
         <value>100</value> 
         <description>Added to grow Queue size so that more
         client connections are allowed</description> 
    </property> 
     
    <property> 
         <name>ipc.server.max.response.size</name> 
         <value>5242880</value> 
    </property> 
     
    <property> 
         <name>dfs.block.access.token.enable</name> 
         <value>true</value> 
         <description> If "true", access tokens are used as capabilities
         for accessing datanodes. If "false", no access tokens are checked on
         accessing datanodes. </description> 
    </property> 
     
    <property> 
         <name>dfs.namenode.kerberos.principal</name> 
         <value>nn/_HOST@EXAMPLE.COM</value> 
         <description> Kerberos principal name for the
         NameNode </description> 
    </property> 
     
    <property> 
         <name>dfs.secondary.namenode.kerberos.principal</name> 
         <value>nn/_HOST@EXAMPLE.COM</value> 
         <description>Kerberos principal name for the secondary NameNode. 
         </description> 
    </property> 
     
    <property> 
         <!--cluster variant --> 
         <name>dfs.secondary.http.address</name> 
         <value>ip-10-72-235-178.ec2.internal:50090</value> 
         <description>Address of secondary namenode web server</description> 
    </property> 
     
    <property> 
         <name>dfs.secondary.https.port</name> 
         <value>50490</value> 
         <description>The https port where secondary-namenode
         binds</description> 
    </property> 
     
    <property> 
         <name>dfs.web.authentication.kerberos.principal</name> 
         <value>HTTP/_HOST@EXAMPLE.COM</value> 
         <description> The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
         The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP
         SPNEGO specification. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.web.authentication.kerberos.keytab</name> 
         <value>/etc/security/keytabs/spnego.service.keytab</value> 
         <description>The Kerberos keytab file with the credentials for the HTTP
         Kerberos principal used by Hadoop-Auth in the HTTP endpoint. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.datanode.kerberos.principal</name> 
         <value>dn/_HOST@EXAMPLE.COM</value> 
         <description> 
         The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real
         host name. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.namenode.keytab.file</name> 
         <value>/etc/security/keytabs/nn.service.keytab</value> 
         <description> 
         Combined keytab file containing the namenode service and host
         principals. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.secondary.namenode.keytab.file</name> 
         <value>/etc/security/keytabs/nn.service.keytab</value> 
         <description> 
         Combined keytab file containing the namenode service and host
         principals. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.datanode.keytab.file</name> 
         <value>/etc/security/keytabs/dn.service.keytab</value> 
         <description> 
         The filename of the keytab file for the DataNode. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.https.port</name> 
         <value>50470</value> 
         <description>The https port where namenode
         binds</description> 
    </property> 
     
    <property> 
         <name>dfs.https.address</name> 
         <value>ip-10-111-59-170.ec2.internal:50470</value> 
         <description>The https address where namenode binds</description> 
    </property> 
     
    <property> 
         <name>dfs.datanode.data.dir.perm</name> 
         <value>750</value> 
         <description>The permissions that should be there on
         dfs.data.dir directories. The datanode will not come up if the
         permissions are different on existing dfs.data.dir directories. If
         the directories don't exist, they will be created with this
         permission.</description> 
    </property> 
     
    <property> 
         <name>dfs.access.time.precision</name> 
         <value>0</value> 
         <description>The access time for HDFS file is precise upto this
         value.The default value is 1 hour. Setting a value of 0
         disables access times for HDFS. 
         </description> 
    </property> 
     
    <property> 
         <name>dfs.cluster.administrators</name> 
         <value> hdfs</value> 
         <description>ACL for who all can view the default
         servlets in the HDFS</description> 
    </property> 
     
    <property> 
         <name>ipc.server.read.threadpool.size</name> 
         <value>5</value> 
         <description></description> 
    </property> 
     
    <property> 
         <name>dfs.namenode.kerberos.internal.spnego.principal</name> 
         <value>${dfs.web.authentication.kerberos.principal}</value> 
    </property> 
     
    <property> 
         <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name> 
         <value>${dfs.web.authentication.kerberos.principal}</value> 
    </property>
    
  7. For Download Policy to be successful, use the Ranger UI to update the service configuration with the following custom properties:

    policy.download.auth.users=<Component service user>
    tag.download.auth.users=<Component service user>(if tag download)
    
  8. For Grant/Revoke for Hive and HBase to be successful, use the Ranger UI to update the service configuration with the following custom property:

    policy.grantrevoke.auth.users = <Component service user>
  9. For Test Connection and Resource Lookup to be successful, use the Ranger UI to add the lookup user to the permission list of the policies.
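
As a final sanity check after enabling the plugin and restarting the NameNode, you can confirm that the enable script placed the plugin configuration files in the Hadoop configuration directory. The exact file names can vary by version, so treat this as a rough check rather than a definitive test.

    ls -l /etc/hadoop/conf/ranger-hdfs-security.xml /etc/hadoop/conf/ranger-hdfs-audit.xml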