Configure and Start Apache Ranger
This section describes how to upgrade the Ranger service.
Prerequisites
When using MySQL, the storage engine used for the Ranger admin policy store tables MUST support transactions. InnoDB is an example of a storage engine that supports transactions. A storage engine that does not support transactions is not suitable as a policy store.
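If you are not sure which engine the existing policy store tables use, you can check before upgrading. A minimal sketch, assuming the policy store database is named ranger (substitute the db_name from your install.properties) and that you can connect as the database root user:
mysql -u root -p -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'ranger';"
Any table that reports MyISAM or another non-transactional engine should be converted (for example, ALTER TABLE <table> ENGINE=InnoDB;) before you continue.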
Preparing Your Cluster to Upgrade Ranger
If you are not currently using Ambari to manage your Hadoop cluster, you need to upgrade Apache Ranger manually to the latest version. This section describes the steps you need to follow to prepare your cluster for the Ranger upgrade.
Back up the following Ranger configuration directories (a sample backup sketch follows this list):
Ranger Policy Administration Service
/etc/ranger/admin/conf
Ranger UserSync
/etc/ranger/usersync/conf
Ranger Plugins:
Hadoop
/etc/hadoop/conf
Hive
/etc/hive/conf
HBase
/etc/hbase/conf
Knox
/etc/knox/conf
Storm
/etc/storm/conf
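A minimal backup sketch for these directories, assuming /tmp/ranger-upgrade-backup is an acceptable destination (use a more durable location in practice) and skipping components that are not installed on this host:
BACKUP_DIR=/tmp/ranger-upgrade-backup
mkdir -p "$BACKUP_DIR"
for conf in /etc/ranger/admin/conf /etc/ranger/usersync/conf /etc/hadoop/conf /etc/hive/conf /etc/hbase/conf /etc/knox/conf /etc/storm/conf; do
  # copy each existing conf directory, encoding its path into the backup name
  [ -d "$conf" ] && cp -a "$conf" "$BACKUP_DIR/$(echo "$conf" | tr / _)"
done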
Back up the Ranger Policy and Audit databases. Make sure to take note of the database connection details (host, database names, users, and passwords) recorded in the install.properties file.
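For a MySQL-backed deployment, a minimal dump sketch, reusing BACKUP_DIR from the sketch above and assuming the hypothetical database names ranger and ranger_audit (substitute the names recorded in your install.properties):
mysqldump -u root -p ranger > "$BACKUP_DIR/ranger-policy-db.sql"
mysqldump -u root -p ranger_audit > "$BACKUP_DIR/ranger-audit-db.sql"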
Stop the Ranger Services
Now that you have prepared your cluster for the Ranger upgrade, you need to stop the Ranger Admin and Ranger UserSync services. To stop the Ranger services, perform the steps described below.
Stop the Ranger Policy Admin service. When the service is stopped, you receive an acknowledgement from the server that the service has been stopped.
service ranger-admin stop
Stop the Ranger UserSync service. When the service is stopped, you receive an acknowledgement from the server that the service has been stopped.
service ranger-usersync stop
Stop the applicable services using the Ranger plugin (HDFS, HBase, Hive, Knox, Storm).
See Stopping HDP Services for more information.
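To confirm that the Ranger daemons have actually exited before continuing, a quick, informal check:
# no output means no Ranger processes are still running
ps -ef | grep -i ranger | grep -v grep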
Preparing the Cluster for Upgrade
Before you begin the upgrade process, you need to perform a series of steps to prepare the cluster for upgrade. These steps are described in the "Getting Ready To Upgrade" section of this guide, which you need to follow before continuing to upgrade Ranger. Some of these steps include:
Backing up HDP directories
Stopping all long-running applications and services
Backing up the Hive and Oozie metastore databases
Backing up Hue
Backing up specific directories and configurations
Registering the HDP 2.5.0 Repo
After you have prepared your cluster for the upgrade, you need to register the HDP 2.5.0 repo. This requires you to perform the following steps:
(Optional) The Ranger components should already have been installed at the beginning of the HDP upgrade process, but you can use the following commands to confirm that the Ranger packages have been installed:
hdp-select status ranger-admin
hdp-select status ranger-usersync
If the packages have not been installed, you can use the install commands specific to your OS. For example, for RHEL/CentOS you would use the following commands to install the packages.
yum install ranger_2_5_*-admin
yum install ranger_2_5_*-usersync
Select the Ranger Admin and Ranger UserSync versions you want to use.
hdp-select set ranger-admin <HDP_server_version>
hdp-select set ranger-usersync <HDP_server_version>
Change ownership of /etc/ranger/admin/conf/ and /etc/ranger/usersync/conf/ to ranger:ranger:
chown -R ranger:ranger /etc/ranger/admin/conf/
chown -R ranger:ranger /etc/ranger/usersync/conf/
Solr must be installed and configured before installing Ranger Admin or any of the Ranger component plugins.
For information regarding installation and configuration of Solr, see Using Apache Solr for Ranger Audits.
Update the install.properties file to migrate the database credentials properties and the policymgr_external_url property from HDP 2.4 to HDP 2.5.0.

Table 1.4. Ranger_Admin install.properties names and values

Property Name | Property Value |
---|---|
DB_FLAVOR | MySql (\|ORACLE\|POSTGRES\|MSSQL\|SQLA) |
db_root_user | root |
db_root_password | Password of db (eg: vagrant) |
db_host | Hostname where your db exists |
policymgr_external_url | http://<hostname>:6080 |
policymgr_http_enabled | true |
authentication_method | UNIX (LDAP\|ACTIVE_DIRECTORY\|UNIX\|NONE) |
audit_store | solr |
audit_solr_urls | http://<solr_host>:6083/solr/ranger_audits |

Note: When you migrate to the new version, you have to remove /usr/bin/ranger-admin, which points to the older version of the Ranger start file, /usr/hdp/<version>/ranger-admin/ews/ranger-admin-services.sh. After you remove this file, you have to run setup again.
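When migrating these values, it can help to pull them out of the HDP 2.4 copy of install.properties that you backed up earlier. A minimal sketch; the path below is a placeholder for wherever you stored the old file:
# list the property values worth carrying forward from HDP 2.4
grep -E '^(DB_FLAVOR|db_root_user|db_root_password|db_host|policymgr_external_url|policymgr_http_enabled|authentication_method|audit_store|audit_solr_urls)=' <path_to_backed_up_HDP_2.4_install.properties>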
Install the Ranger Admin component. Be sure to set the JAVA_HOME environment variable if it is not already set.
cd /usr/hdp/current/ranger-admin/
cp ews/webapp/WEB-INF/classes/conf.dist/ranger-admin-site.xml ews/webapp/WEB-INF/classes/conf/
cp ews/webapp/WEB-INF/classes/conf.dist/ranger-admin-default-site.xml ews/webapp/WEB-INF/classes/conf/
cp ews/webapp/WEB-INF/classes/conf.dist/security-applicationContext.xml ews/webapp/WEB-INF/classes/conf/
./setup.sh
This should successfully install the Ranger Admin component.
Start the Ranger Admin component.
service ranger-admin start
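To verify that the admin service came up, you can probe the web UI port from the admin host; this sketch assumes the default port 6080 from policymgr_external_url:
# expect HTTP 200 once Ranger Admin has finished starting
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:6080/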
Configure and set up the Ranger UserSync component by migrating the properties from the HDP 2.4 install.properties file (POLICY_MGR_URL, SYNC_SOURCE, and the LDAP/AD properties).

Table 1.5. Ranger_Usersync install.properties names and values

Property Name | Property Value |
---|---|
POLICY_MGR_URL | http://<hostname>:6080 |
SYNC_SOURCE | unix |
SYNC_INTERVAL | 5 |

Note: When you migrate to the new version, you have to remove /usr/bin/ranger-usersync, which points to the older version of the UserSync start file, /usr/hdp/<version>/ranger-usersync/ranger-usersync-services.sh. After you remove this file, you have to run setup again.

Install the Ranger UserSync component. Be sure to set the JAVA_HOME environment variable if it is not already set.
cd /usr/hdp/current/ranger-usersync/
./setup.sh
Start the Ranger UserSync component.
service ranger-usersync start
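If you want to confirm that UserSync started cleanly, tailing its log is a reasonable check; the log path below is an assumption and may differ in your installation:
tail -n 50 /var/log/ranger/usersync/usersync.log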
Install the Ranger Components
Next, you need to re-install each Ranger component to ensure you have the latest version. Because you have already upgraded your HDP stack, you only need to follow the instructions in the Non-Ambari Cluster Installation Guide to install each Ranger component. You must install the following Ranger components:
Ranger Policy Admin
Ranger UserSync
Ranger Plugins:
HDFS
HBase
Hive
Knox
Storm
Solr
Kafka
YARN
Note: When installing each Ranger component, you also need to make sure you upgrade each individual component to version 2.5.0 before restarting each service.
Restart the Ranger Services
Once you have re-installed each Ranger component, you then need to restart these components to ensure the new configurations are loaded in your cluster. This includes restarting the Policy Admin and UserSync components, NameNode, and each Ranger plugin.
Note: Before restarting the NameNode, make sure to remove the …
The Non-Ambari Cluster Installation Guide describes how you can start the following Ranger services:
Ranger Policy Admin service
service ranger-admin start
Ranger UserSync service
service ranger-usersync start
Enable Ranger Plugins
The final step in the Ranger upgrade process requires you to re-enable the Ranger plugins. Although you are only required to enable HDFS in your cluster, you should re-enable all of the Ranger plugins because class names have changed for the 2.5.0 release, and to ensure smooth operation of Ranger services in your cluster.
Note: When you enable each Ranger plugin, be sure to remove all 2.4 class name values.
Note: Re-enabling a Ranger plugin does not affect policies you have already created. As long as you use the same database as the policy store, all of your data remains intact.
To re-enable the Ranger plugins, follow the instructions in the Non-Ambari Cluster Installation Guide that describe editing the install.properties file and enabling each Ranger plugin.
Important: Before enabling the HDFS plugin, remove …
Enable KMS Configuration
Configure and set up Ranger KMS by editing the KMS install.properties file:
Table 1.6. KMS install.properties names and values
Property Name | Property Value |
---|---|
DB_FLAVOR | MYSQL |
db_root_user | root |
db_root_password | <db password> |
db_host | <db hostname> |
db_name | rangerkms (default) |
db_user | rangerkms (default) |
db_password | <kms db password> |
hadoop_conf | /etc/hadoop/conf |
POLICY_MGR_URL | http://<hostname>:6080 |
REPOSITORY_NAME | kmsdev |
XAAUDIT.SOLR.ENABLE | false (default), change to true if desired |
XAAUDIT.SOLR.URL | http://<solr_host>:6083/solr/ranger_audits |
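After updating install.properties, Ranger KMS is set up in the same way as the other components. A minimal sketch, assuming the component is installed under /usr/hdp/current/ranger-kms:
cd /usr/hdp/current/ranger-kms/
export JAVA_HOME=<JAVA Path>   # if not already set
./setup.sh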
Configure and Start Apache Ranger on a Kerberized Cluster
Beginning with HDP 2.5.0, Kerberos authentication is supported for Ranger and its plugins. Use this section if you have an HDP 2.4 cluster in a Kerberized environment with Ranger in simple authentication mode that you want to upgrade.
For additional information regarding Kerberos, refer to Kerberos Overview in the Hadoop Security Guide.
Create Keytabs and Principals
Follow these steps to create keytabs and principals:
For Ranger Admin:
Create rangeradmin/<FQDN of Ranger Admin>@<REALM>.
> kadmin.local
> addprinc -randkey rangeradmin/<FQDN of Ranger Admin>
Eg: addprinc -randkey rangeradmin/ranger-upgrade-0707-2.openstacklocal
> xst -k /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>
Eg: xst -k /etc/security/keytabs/rangeradmin.keytab rangeradmin/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
> exit
For Ranger Lookup:
Create rangerlookup/<FQDN of Ranger Admin>@<REALM>.
> kadmin.local
> addprinc -randkey rangerlookup/<FQDN of Ranger Admin>
Eg: addprinc -randkey rangerlookup/ranger-upgrade-0707-2.openstacklocal
> xst -k /etc/security/keytabs/rangerlookup.keytab rangerlookup/<FQDN of Ranger Admin>@<REALM>
Eg: xst -k /etc/security/keytabs/rangerlookup.keytab rangerlookup/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
> exit
For Ranger Usersync:
Create rangerusersync/<FQDN>@<REALM>.
> kadmin.local
> addprinc -randkey rangerusersync/<FQDN of Ranger usersync>
Eg: addprinc -randkey rangerusersync/ranger-upgrade-0707-2.openstacklocal
> xst -k /etc/security/keytabs/rangerusersync.keytab rangerusersync/<FQDN>@<REALM>
Eg: xst -k /etc/security/keytabs/rangerusersync.keytab rangerusersync/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
> exit
For Ranger Tagsync:
Create rangertagsync/<FQDN>@<REALM>.
> kadmin.local
> addprinc -randkey rangertagsync/<FQDN of Ranger tagsync>
Eg: addprinc -randkey rangertagsync/ranger-upgrade-0707-2.openstacklocal
> xst -k /etc/security/keytabs/rangertagsync.keytab rangertagsync/<FQDN>@<REALM>
Eg: xst -k /etc/security/keytabs/rangertagsync.keytab rangertagsync/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
> exit
For Ranger KMS:
Create rangerkms/<FQDN of Ranger Admin>@<REALM>.
> kadmin.local
> addprinc -randkey rangerkms/<FQDN of Ranger Admin>
Eg: addprinc -randkey rangerkms/ranger-upgrade-0707-2.openstacklocal
> xst -k /etc/security/keytabs/rangerkms.keytab rangerkms/<FQDN of Ranger Admin>@<REALM>
Eg: xst -k /etc/security/keytabs/rangerkms.keytab rangerkms/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM
> exit
If the Kerberos server and Ranger Admin are on different hosts, copy the keytab to the admin host, assign it to user ranger, and restrict its permissions:
scp <rangeradmin_keytab_file> <new_path>
chown ranger <rangeradmin_keytab_path>
chmod 400 <rangeradmin_keytab_path>
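Before moving on, you can verify each keytab by listing its entries and authenticating with it, shown here for the rangeradmin keytab:
# list the principals stored in the keytab
klist -kt /etc/security/keytabs/rangeradmin.keytab
# authenticate with the keytab; a silent exit means the keytab and principal agree
kinit -kt /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>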
Use kdestroy to delete the Kerberos credentials cache file.
Set the following properties and values in the /etc/hadoop/conf/core-site.xml file:

Table 1.7. Properties for the /etc/hadoop/conf/core-site.xml file

Property Name | Property Value |
---|---|
fs.defaultFS | hdfs://ranger-upgrade-0707-2.openstacklocal:8020 |
hadoop.security.authentication | kerberos |
hadoop.security.authorization | true |
hadoop.security.auth_to_local | RULE:[1:$1@$0](ambari-qa-cluster1@EXAMPLE.COM)s/.*/ambari-qa/ RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*// RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/ RULE:[2:$1@$0](nn@EXAMPLE.COM)s/.*/hdfs/ RULE:[2:$1@$0](rangeradmin@EXAMPLE.COM)s/.*/ranger/ RULE:[2:$1@$0](rangerusersync@EXAMPLE.COM)s/.*/rangerusersync/ DEFAULT |

See Creating Mappings Between Principals and UNIX Usernames in the Hadoop Security Guide.
The following is an example of a core-site.xml file with the properties set for Kerberos:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ranger-upgrade-0707-2.openstacklocal:8020</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>RULE:[1:$1@$0](ambari-qa-cluster1@EXAMPLE.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](nn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](rangeradmin@EXAMPLE.COM)s/.*/ranger/
RULE:[2:$1@$0](rangerusersync@EXAMPLE.COM)s/.*/rangerusersync/
DEFAULT</value>
  </property>
</configuration>
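You can sanity-check the auth_to_local rules without restarting anything: Hadoop ships a resolver class whose main method prints the short name a principal maps to.
# should report that the principal maps to "ranger" if the rangeradmin rule above is in effect
hadoop org.apache.hadoop.security.HadoopKerberosName rangeradmin/ranger-upgrade-0707-2.openstacklocal@EXAMPLE.COM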
Run Setup Again for Ranger Admin
Stop the ranger-admin service.
Add the following properties and values to the ranger-admin install.properties file:

Table 1.8. Ranger-admin install.properties

Property Name | Property Value |
---|---|
spnego_principal | HTTP/<FQDN_OF_Ranger_Admin_Cluster>@<REALM> |
spnego_keytab | <HTTP keytab path> |
token_valid | 30 |
cookie_domain | <FQDN_OF_Ranger_Admin_Cluster> |
cookie_path | / |
admin_principal | rangeradmin/<FQDN_OF_Ranger_Admin_Cluster>@<REALM> |
admin_keytab | <rangeradmin keytab path> |
lookup_principal | rangerlookup/<FQDN_OF_Ranger_Admin_Cluster>@<REALM> |
lookup_keytab | <rangerlookup keytab path> |
hadoop_conf | /etc/hadoop/conf |

Execute the setup.sh script.
Start the ranger-admin service.
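Taken together, the sequence for this step looks like the following sketch (ranger-usersync, ranger-tagsync, and ranger-kms below follow the same stop/edit/setup/start pattern from their own installation directories):
service ranger-admin stop
# add the Kerberos properties from Table 1.8 to install.properties, then:
cd /usr/hdp/current/ranger-admin/
./setup.sh
service ranger-admin start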
Stop the ranger-usersync service.
Add the following properties and values to the ranger-usersync install.properties file:

Table 1.9. Ranger-usersync install.properties and values

Property Name | Property Value |
---|---|
usersync_principal | rangerusersync/<FQDN>@<REALM> |
usersync_keytab | <rangerusersync keytab path> |
hadoop_conf | /etc/hadoop/conf |

Execute the setup.sh script.
Start the ranger-usersync service.
Stop the ranger-tagsync service.
Add the following properties and values to the ranger-tagsync install.properties file:

Table 1.10. Ranger-tagsync install.properties and values

Property Name | Property Value |
---|---|
tagsync_principal | rangertagsync/<FQDN>@<REALM> |
tagsync_keytab | <rangertagsync keytab path> |
hadoop_conf | /etc/hadoop/conf |

Execute the setup.sh script.
Start the ranger-tagsync service.
Stop the ranger-kms service.
Add the following properties and values to the ranger-kms install.properties file:

Table 1.11. Ranger-kms install.properties and values

Property Name | Property Value |
---|---|
kms_principal | rangerkms/<FQDN>@<REALM> |
kms_keytab | <rangerkms keytab path> |
hadoop_conf | /etc/hadoop/conf |

Execute the setup.sh script.
Start the ranger-kms service.
Install and Enable the Ranger HDFS Plugin
This section documents how to install and enable the Ranger HDFS plugin. You might want to consider making similar changes for the other Ranger plugins that you are using.
The Ranger HDFS plugin is located at
/usr/hdp/<version>/ranger-hdfs-plugin
.
Follow these steps to install and enable the Ranger HDFS Plugin:
Edit the relevant lines in the install.properties file:
POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
REPOSITORY_NAME=hadoopdev
Audit info (Solr/HDFS options available)
Enter the following commands to enable the HDFS plugin:
export JAVA_HOME=<JAVA Path>
./enable-hdfs-plugin.sh
Stop and start the NameNode:
su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop namenode"
su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start namenode"
In the custom repo configuration file, add the component user, for example, hdfs, as a value for each of the following properties:
policy.download.auth.users or policy.grantrevoke.auth.users
tag.download.auth.users
Verify that the plugin communicates with Ranger Admin using the Audit → Plugins tab.
Set the following properties in the hdfs-site.xml file:

Table 1.12. hdfs-site.xml Property Names and Values

Property Name | Property Value |
---|---|
dfs.permissions.enabled | true |
dfs.permissions.supergroup | hdfs |
dfs.block.access.token.enable | true |
dfs.namenode.kerberos.principal | nn/_HOST@EXAMPLE.COM |
dfs.secondary.namenode.kerberos.principal | nn/_HOST@EXAMPLE.COM |
dfs.web.authentication.kerberos.principal | HTTP/_HOST@EXAMPLE.COM |
dfs.web.authentication.kerberos.keytab | /etc/security/keytabs/spnego.service.keytab |
dfs.datanode.kerberos.principal | dn/_HOST@EXAMPLE.COM |
dfs.namenode.keytab.file | /etc/security/keytabs/nn.service.keytab |
dfs.secondary.namenode.keytab.file | /etc/security/keytabs/nn.service.keytab |
dfs.datanode.keytab.file | /etc/security/keytabs/dn.service.keytab |
dfs.https.port | 50470 |
dfs.namenode.https-address | Example: ip-10-111-59-170.ec2.internal:50470 |
dfs.datanode.data.dir.perm | 750 |
dfs.cluster.administrators | hdfs |
dfs.namenode.kerberos.internal.spnego.principal | ${dfs.web.authentication.kerberos.principal} |
dfs.secondary.namenode.kerberos.internal.spnego.principal | ${dfs.web.authentication.kerberos.principal} |

The following is an example of an hdfs-site.xml file with the properties set for Kerberos:

<property>
  <name>dfs.permissions</name>
  <value>true</value>
  <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description>
</property>
<property>
  <name>dfs.permissions.supergroup</name>
  <value>hdfs</value>
  <description>The name of the group of super-users.</description>
</property>
<property>
  <name>dfs.namenode.handler.count</name>
  <value>100</value>
  <description>Added to grow Queue size so that more client connections are allowed</description>
</property>
<property>
  <name>ipc.server.max.response.size</name>
  <value>5242880</value>
</property>
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
  <description>If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes.</description>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@EXAMPLE.COM</value>
  <description>Kerberos principal name for the NameNode</description>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>nn/_HOST@EXAMPLE.COM</value>
  <description>Kerberos principal name for the secondary NameNode.</description>
</property>
<property>
  <!-- cluster variant -->
  <name>dfs.secondary.http.address</name>
  <value>ip-10-72-235-178.ec2.internal:50090</value>
  <description>Address of secondary namenode web server</description>
</property>
<property>
  <name>dfs.secondary.https.port</name>
  <value>50490</value>
  <description>The https port where secondary-namenode binds</description>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
  <description>The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification.</description>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
  <description>The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.</description>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>dn/_HOST@EXAMPLE.COM</value>
  <description>The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name.</description>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/keytabs/nn.service.keytab</value>
  <description>Combined keytab file containing the namenode service and host principals.</description>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/etc/security/keytabs/nn.service.keytab</value>
  <description>Combined keytab file containing the namenode service and host principals.</description>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/dn.service.keytab</value>
  <description>The filename of the keytab file for the DataNode.</description>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
  <description>The https port where namenode binds</description>
</property>
<property>
  <name>dfs.https.address</name>
  <value>ip-10-111-59-170.ec2.internal:50470</value>
  <description>The https address where namenode binds</description>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>750</value>
  <description>The permissions that should be there on dfs.data.dir directories. The datanode will not come up if the permissions are different on existing dfs.data.dir directories. If the directories don't exist, they will be created with this permission.</description>
</property>
<property>
  <name>dfs.access.time.precision</name>
  <value>0</value>
  <description>The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.</description>
</property>
<property>
  <name>dfs.cluster.administrators</name>
  <value>hdfs</value>
  <description>ACL for who all can view the default servlets in the HDFS</description>
</property>
<property>
  <name>ipc.server.read.threadpool.size</name>
  <value>5</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
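After restarting the NameNode, you can confirm that a given property took effect with hdfs getconf, for example:
# should print "true" if the settings above were picked up
hdfs getconf -confKey dfs.block.access.token.enable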
For Download Policy to be successful, use the Ranger UI to update the service configuration with the following custom properties:
policy.download.auth.users=<Component service user>
tag.download.auth.users=<Component service user> (if tag download)
For Grant/Revoke for Hive and HBase to be successful, use the Ranger UI to update the service configuration with the following custom property:
policy.grantrevoke.auth.users = <Component service user>
For Test Connection and Resource Lookup to be successful, use the Ranger UI to add the lookup user to the permission list of the policies.