Known Issues in Ranger

Learn about the known issues in Ranger, the impact or changes to functionality, and any workarounds.

CDPD-66092: Upgrade from CDP-7.1.8 to 7.1.9 fails and Ranger shows no policies at all, because Java patches are not applied during the upgrade. This causes the rolling restart of services to fail.
Skip applying any of the following Java patches that are NOT applicable to the underlying environment. In other words, do not apply a patch for a service definition that does not appear in the Ranger database.
Table 1. Java Patches and Related Service Definitions
Java patch                                          | Service definition
PatchForHiveServiceDefUpdate_J10027.java            | Hive
PatchForTagServiceDefUpdate_J10028.java             | Tag
PatchForHBaseServiceDefUpdate_J10035.java           | HBase
PatchForHdfsAddChainedPluginProvider_J10038.java    | HDFS
PatchForHdfsRemoveChainedPluginProvider_J10039.java | HDFS
PatchForOzoneServiceDefUpdate_J10041.java           | Ozone
PatchForOzoneServiceDefConfigUpdate_J10051.java     | Ozone
PatchForOzoneServiceDefUpdate_J10057.java           | Ozone
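The decision of which patches to skip can be sketched as follows. This is an illustrative helper, not shipped tooling: the patch-to-service mapping reproduces Table 1, and the set of installed service definitions is a placeholder for whatever you find in your own Ranger database.

```python
# Illustrative sketch: decide which Java patches to skip, based on which
# service definitions actually exist in the Ranger database.
# The mapping reproduces Table 1 above; the installed set is a placeholder.

PATCH_SERVICE_DEF = {
    "PatchForHiveServiceDefUpdate_J10027.java": "Hive",
    "PatchForTagServiceDefUpdate_J10028.java": "Tag",
    "PatchForHBaseServiceDefUpdate_J10035.java": "HBase",
    "PatchForHdfsAddChainedPluginProvider_J10038.java": "HDFS",
    "PatchForHdfsRemoveChainedPluginProvider_J10039.java": "HDFS",
    "PatchForOzoneServiceDefUpdate_J10041.java": "Ozone",
    "PatchForOzoneServiceDefConfigUpdate_J10051.java": "Ozone",
    "PatchForOzoneServiceDefUpdate_J10057.java": "Ozone",
}

def patches_to_skip(installed_service_defs):
    """Return the patches whose service definition is absent from the Ranger DB."""
    return sorted(
        patch for patch, service in PATCH_SERVICE_DEF.items()
        if service not in installed_service_defs
    )

# Example: a cluster whose Ranger database has no Ozone service definition.
print(patches_to_skip({"Hive", "Tag", "HBase", "HDFS"}))
```

For instance, if the Ranger database contains only the Hive, Tag, HBase, and HDFS service definitions, the three Ozone patches are the ones to skip.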
CDPD-60489: Jackson-dataformat-yaml 2.12.7 and Snakeyaml 2.0 are not compatible.
You must not use Jackson-dataformat-yaml through the platform for YAML parsing.
CDPD-56803: When there is no existing policy for a user and a revoke request comes from HBase, the following error occurs:
   hbase:001:0> revoke 'hrt_11'
 ERROR: org.apache.hadoop.hbase.coprocessor.CoprocessorException: HTTP 400 Error: processSecureRevokeRequest processing failed
	at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preRevoke(RangerAuthorizationCoprocessor.java:1309)
	at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preRevoke(RangerAuthorizationCoprocessor.java:1128)
	at org.apache.hadoop.hbase.master.MasterCoprocessorHost$162.call(MasterCoprocessorHost.java:1857)
	at org.apache.hadoop.hbase.master.MasterCoprocessorHost$162.call(MasterCoprocessorHost.java:1854)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
	at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preRevoke(MasterCoprocessorHost.java:1854)
	at org.apache.hadoop.hbase.master.MasterRpcServices.revoke(MasterRpcServices.java:2740)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:139)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
None
CDPD-56741: Improvement in the log message when JWT auth is not used
The following exception is printed only at startup and does not clutter the logs:
2023-05-30 06:18:40,127 ERROR org.apache.ranger.rms.security.RMSJwtAuthFilter: 
quasar-pibgzl-1.quasar-pibgzl.root.hwx.site-startStop-1]: 
Failed to initialize Ranger RMS JWT Auth Filter.
java.lang.Exception: RangerJwtAuthHandler: 
Mandatory configs ('jwks.provider-url' & 'jwt.public-key') are missing, must provide atleast one.
at org.apache.ranger.authz.handler.jwt.RangerJwtAuthHandler.initialize(RangerJwtAuthHandler.java:84) 
~[ranger-authn-2.4.0.7.1.9.0-186.jar:2.4.0.7.1.9.0-186]
 at org.apache.ranger.rms.security.RMSJwtAuthFilter.initialize(RMSJwtAuthFilter.java:77) 
~[ranger-rms-common-2.4.0.7.1.9.0-186.jar:2.4.0.7.1.9.0-186]
 at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
 at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImp
This exception is logged because the mandatory JWT auth filter configurations are not provided (expected in some environments, such as y-cloud). Even though the JWT auth filter fails to initialize, it falls back to Kerberos authentication, so there is no impact from an authentication perspective.
NA
CDPD-56738: Ranger RMS showing FileNotFoundException: /usr/share/java/oraclepki.jar in Oracle 19 setup
This is a warning log printed in the catalina.out file when the Ranger RMS server is initialized. The following exception is observed only in an Oracle 19 setup: FileNotFoundException: /usr/share/java/oraclepki.jar
NA
CDPD-55107: Not able to search using multiple user filters in the Access Audit tab
If you were using multiple user search filters on the Audit > Access tab of the Ranger Admin UI, that is no longer supported after upgrading to CDP-7.1.9. You can continue to search for a user with a single search filter.
None
CDPD-48975: Ranger KMS KTS to KMS DB migration: keys with the same name but different case are not migrated
KMS keys are not case sensitive.
No workarounds. Such key combinations are very rare, and the migration documentation was updated to recommend checking for such keys before starting the migration.
CDPD-58704: hadoop roll key / key delete command shows operation failed error when one KMS host is down, even when operation succeeds
In the case of a rollover or delete, the client sends one more request (the last one, after the delete request) to all registered KMS instances so that they clean their caches. If one KMS instance is stopped (not deleted), the client gets a runtime exception.
The runtime exception is simply returned on the client end for the stopped instances, but no functionality is broken.
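The behavior described above can be sketched as follows: the roll itself succeeds on a reachable KMS instance, and only the follow-up cache-invalidation fan-out to every registered instance surfaces a runtime error for the stopped one. This is an illustrative simulation of that sequence, not the actual Hadoop KMS client code:

```python
# Illustrative simulation (not the real Hadoop KMS client): a key roll
# succeeds, but the follow-up cache-invalidation call to every registered
# KMS instance raises for the instance that is stopped.

class KMSInstance:
    def __init__(self, name, up=True):
        self.name = name
        self.up = up
        self.cache_cleared = False

    def invalidate_cache(self, key_name):
        if not self.up:
            raise RuntimeError(f"{self.name}: connection refused")
        self.cache_cleared = True

def roll_key(instances, key_name):
    """Roll the key, then ask every instance to drop its cached version."""
    # The roll itself goes to a reachable instance and succeeds.
    rolled = any(inst.up for inst in instances)
    errors = []
    for inst in instances:          # cache invalidation fans out to ALL instances
        try:
            inst.invalidate_cache(key_name)
        except RuntimeError as e:
            errors.append(str(e))   # reported to the client, though the roll already succeeded
    return rolled, errors

instances = [KMSInstance("kms-1"), KMSInstance("kms-2", up=False)]
rolled, errors = roll_key(instances, "mykey")
print(rolled, errors)
```

This mirrors why the client prints an operation-failed error even though the roll or delete has already completed on the reachable instance.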
CDPD-41582: Atlas resource lookup: classification lookup for "entity-type" lists only "classification" for the following payload:

{"resourceName": "classification", "userInput": "", "resources": {"classification": []}}

The expectation is to return all the classifications, but the response has only "classification". The same happens for entity-label and entity-business-metadata.

None.
CDPD-42598: Kafka policy creation allowed with incorrect permissions.

When creating a Kafka policy from the UI, the permissions "Idempotent write" and "Cluster action" are not displayed because they are not applicable to the "topic" resource. However, when a policy for the "topic" resource is created with the "Idempotent write" and "Cluster action" permissions, the policy is created successfully. The expected behavior is that the policy creation fails, because these permissions are not applicable to the Kafka topic resource.

None.
CDPD-40734: User allowed to insert data into a Hive table when there is a deny policy on a table column.

A user is allowed to enter data into a table even if there is a deny policy present on one of the table columns.

Test scenario details:

Policy setup:
- Policy 1: all-access policy for the hrt_qa, hive, and impala users
  - Resources: database - *, table - *, column - *
  - Users: hrt_qa, hive, impala
  - Access: all access allowed
- Policy 2: policy on test_1.table_1 for hrt_5
  - Users: hrt_5
  - Resources: database - test_1, table - table_1, column - *
  - Access: all access allowed
- Policy 3: deny policy on test_1.table_1.c0 for hrt_5
  - Users: hrt_5
  - Resources: database - test_1, table - table_1, column - c0
  - Access: all access denied

Data setup:
- Database: test_1
- Table: table_1 (c0 int, c1 int)

The user is able to insert data into the table.

None.
CDPD-58860: After upgrading CDP-7.1.8 to CDP-7.1.9, cdp-proxy-token is missing from the knox-ranger policy.

As part of OPSAPS-67480, in 7.1.9 a default Ranger policy is added for the cdp-proxy-token topology, so that after a new installation of CDP-7.1.9 the knox-ranger policy includes cdp-proxy-token. However, upgrades do not add cdp-proxy-token to cm_knox policies automatically.

Manually add cdp-proxy-token to the knox policy, using Ranger Admin Web UI.
  1. Log in to Cloudera Manager > Ranger > Ranger Admin Web UI, as a Ranger administrator.
  2. On Ranger Admin Web UI > Service Manager > Resource > Knox, click cm_knox.
  3. In Knox Policies, open the CDP Proxy UI, API and Token policy.
  4. In Knox Topology*, add cdp-proxy-token.
  5. Click Save.
  6. Restart Ranger.
CDPD-68806: The revoke operation for users that receive permissions through a group or role does not function as expected.

The list command lists all the tables even after the user's permission is revoked, and the command does not add any deny policy to Ranger for that specific user.

For example:

user_1 is part of group_1

user_2 is part of role_1

Create a Ranger policy for group_1 and role_1 to access all the resources in HBase.

Try to revoke the permissions for user_1 and user_2 using the HBase shell global revoke command:

revoke 'user_1'

revoke 'user_2'

However, no deny policy is added for these users, and access is still granted to user_1 and user_2.

This behavior is currently not supported in the HBase shell and must be handled manually by changing the Ranger policy.
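One way to handle the manual Ranger policy change is to add an explicit deny policy for the affected users through the Ranger Admin REST API instead of the HBase shell. The sketch below only builds a candidate policy payload; the service name (cm_hbase), resource names, and access types are illustrative assumptions to adapt to your cluster, and the actual POST (shown commented out) assumes the public v2 policy API and admin credentials.

```python
# Hedged sketch: build a Ranger deny policy for users whose HBase shell
# 'revoke' did not take effect. Service name, resource values, and access
# types below are illustrative assumptions -- adapt them to your cluster.
import json

def build_deny_policy(service, users, table):
    return {
        "service": service,
        "name": f"deny-{'-'.join(users)}-{table}",
        "isEnabled": True,
        "resources": {
            "table": {"values": [table], "isExcludes": False},
            "column-family": {"values": ["*"], "isExcludes": False},
            "column": {"values": ["*"], "isExcludes": False},
        },
        "policyItems": [],
        "denyPolicyItems": [{
            "users": users,
            "accesses": [{"type": t, "isAllowed": True}
                         for t in ("read", "write", "create", "admin")],
            "delegateAdmin": False,
        }],
    }

policy = build_deny_policy("cm_hbase", ["user_1", "user_2"], "some_abc_table")
print(json.dumps(policy, indent=2))

# To create the policy (assumes the public v2 API and admin credentials):
# import requests
# requests.post("https://<ranger-admin-host>:6182/service/public/v2/api/policy",
#               json=policy, auth=("admin", "<password>"))
```

Creating the policy through the Ranger Admin Web UI (Service Manager > HBase > cm_hbase, with a deny condition for the users) achieves the same result without scripting.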
CDPD-64939: Impala behavior differences across 717 SP3 and 719 SP1 releases.
After upgrading from Cloudera Runtime 7.1.7 Service Pack 3 to 7.1.9 Service Pack 1, for a GRANT statement involving multiple columns in a table, the Impala service creates one Ranger policy for each column.
None
CDPD-68739: The revoke command does not work when using the HBase shell

While using the HBase shell, running the revoke command does not cancel the user permission. Users are able to perform actions even after running the revoke command.

For example:

A user is granted permission on a particular table using the HBase grant command:

hbase:003:0> grant 'user_1', 'RWCA', 'some_abc_table'

Then the global revoke command is executed for that user:

hbase:005:0> revoke 'user_1'

The permission for the user 'user_1' on the table 'some_abc_table' is not revoked.

None
CDPD-67238: Multiple Columns Revoke not generating policies with correct number of columns

As an example, when "revoke select(col1, col2, col3) on table demo.test from role Role3;" is run, the generated policy does not revoke the columns. Currently, the revoke statement revokes access only when a single column is specified.

None
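Because the issue description above notes that single-column revokes do take effect, one illustrative manual approach (not an official workaround) is to issue one revoke statement per column. The helper below simply expands a column list into the per-column statements; the table, role, and column names come from the example above.

```python
# Illustrative helper: expand a multi-column revoke into the per-column
# statements that do work, per the issue description above.

def per_column_revokes(columns, table, grantee):
    """Generate one single-column revoke statement per column."""
    return [
        f"revoke select({col}) on table {table} from role {grantee};"
        for col in columns
    ]

for stmt in per_column_revokes(["col1", "col2", "col3"], "demo.test", "Role3"):
    print(stmt)
```

Each generated statement can then be run individually so that a Ranger policy change is produced for every column.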
CDPD-69412: revokeAccess() behaves differently from secureRevokeAccess() after RANGER-4638
In a non-Kerberized environment, you must use the "!isPolicyResourceSameAsRevokedResource" condition within revokeAccess(), as it is used in secureRevokeAccess().
None