Known Issues in Apache Ranger

Learn about the known issues in Ranger, the impact or changes to the functionality, and the workaround.

CDPD-35657: Ranger Admin runs out of memory (OOM) when Usersync tries to delete existing group mappings from the Ranger database.
Workaround: None.
The default Ranger service cm_solr for the Ranger Solr plugin fails to be created.
Workaround: You must create it manually from the Ranger Admin UI.
CDPD-3296: Audit files for Ranger plugin components do not appear immediately in S3 after cluster creation
For Ranger plugin components (Atlas, Hive, HBase, etc.), audit data is updated when the applicable audit file is rolled over. The default Ranger audit rollover time is 24 hours, so audit data appears 24 hours after cluster creation.
To see the audit logs in S3 before the default rollover time of 24 hours, use the following steps to override the default value in the Cloudera Manager safety valve for the applicable service.
  1. On the Configuration tab in the applicable service, select Advanced under CATEGORY.
  2. Click the + icon for the <service_name> Advanced Configuration Snippet (Safety Valve) for ranger-<service_name>-audit.xml property.
  3. Enter the following property in the Name box:


  4. Enter the desired rollover interval (in seconds) in the Value box. For example, if you specify 180, the audit log data is updated every 3 minutes.
  5. Click Save Changes and restart the service.
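The steps above add a name/value pair to the ranger-&lt;service_name&gt;-audit.xml safety valve. As a sketch, the resulting fragment could look like the following; the property name shown is an assumption based on the HDFS audit destination (it is not given in this section and may differ for your audit destination), so verify it before use:

```xml
<!-- Example safety-valve entry: audit file rollover interval in seconds.
     The property name below is an assumption; confirm the correct name
     for your audit destination in the service's configuration. -->
<property>
  <name>xasecure.audit.destination.hdfs.file.rollover.sec</name>
  <value>180</value> <!-- roll audit files over every 3 minutes -->
</property>
```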
CDPD-12644: Ranger Key Names cannot be reused with the Ranger KMS KTS service
Key names cannot be reused with the Ranger KMS KTS service. If the name of a deleted key is reused, the new key can be created successfully and used to create an encryption zone, but data cannot be written to that encryption zone.
Workaround: Use only unique key names when creating keys.
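One way to follow this workaround is to derive key names that cannot collide with previously deleted ones, for example by appending a timestamp. The sketch below assumes a hypothetical key-name prefix and encryption-zone path; neither comes from the product documentation:

```shell
# Sketch: generate a unique key name instead of reusing a deleted key's name.
# The prefix "warehouse-key" and the zone path are illustrative examples only.
unique_key_name() {
  # Append a timestamp so the name never repeats an earlier one.
  printf '%s-%s' "$1" "$(date +%Y%m%d%H%M%S)"
}

KEY_NAME="$(unique_key_name warehouse-key)"
echo "$KEY_NAME"

# With the unique name, key creation and encryption-zone setup would then be:
#   hadoop key create "$KEY_NAME"
#   hdfs crypto -createZone -keyName "$KEY_NAME" -path /data/secure
```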
CDPD-17962: Ranger roles do not work after an upgrade from one CDP Private Cloud Base version to another.
Roles created prior to the upgrade work as expected; the issue affects only roles created after the upgrade, and authorization enforced via Ranger policies will not work for these new roles. This behavior is observed only on upgraded clusters; a newly installed cluster does not show it.
There are two possible workarounds to resolve this issue:
  1. Update database entries (Recommended):
    • select * from x_ranger_global_state where state_name='RangerRole';
    • update x_ranger_global_state set app_data='{"Version":"2"}' where state_name='RangerRole';
  2. Add a property in the safety valve under ranger-admin-site that bypasses the getAppDataVersion method: