Fixed Issues
Review the list of Cloudera Manager issues that are resolved in Cloudera Manager 7.13.2 and its cumulative hotfixes.
Cloudera Manager 7.13.2 resolves issues and incorporates fixes from the cumulative hotfixes from 7.13.1.100 through 7.13.1.700. For a comprehensive record of all fixes in Cloudera Manager 7.13.1.x, see Fixed Issues 7.13.1.x.
Cloudera Manager 7.13.2
- OPSAPS-74276: RocksDB JNI library is loaded from the same location by multiple Ozone components
- 7.13.2.0
- OPSAPS-73808: Cloudera Manager is not propagating the Storage Container Manager Block Client port to OM roles
- 7.13.2.0
- OPSAPS-75236: Excessive INFO-level logs were printed during Ozone CLI operations
- 7.13.2.0
- OPSAPS-73164: Ozone's upgrade handlers were not properly added to the UpgradeHandlerRegistry
- 7.13.2.0
- OPSAPS-72718: The dn-container.log is not collected in the diag bundle
- 7.13.2.0
- OPSAPS-73304: Ozone Prometheus port conflict on freshly installed cluster
- 7.13.2.0
- OPSAPS-71329: The testDuplicateAndSnapshotClasses check failed
- 7.13.2.0
- OPSAPS-71561: Ozone canary does not handle S3 secret getting revoked
- 7.13.2.0
- OPSAPS-71897: Finalize Upgrade command fails post-upgrade with custom Kerberos setup, causing INTERNAL_ERROR with EC writes
- 7.13.2.0
- OPSAPS-73078: Cloudera Manager does not refer to the S3 Gateway TLS configuration when starting the S3 Gateway on the secure or insecure port
- 7.13.2.0
- OPSAPS-73383: SCM principal is hardcoded in the Ozone Manager
- 7.13.2.0
- OPSAPS-71342: Setting hdds.x509.max.duration to 0 shuts down Storage Container Manager, DataNodes, and Ozone Manager
- 7.13.2.0
- OPSAPS-76539: Cloudera Manager UI allowed adding multiple Spark 3 service instances within a single cluster
- Prevented multiple Spark 3 service instances within a single cluster by implementing a maxInstances flag in Cloudera Manager.
- OPSAPS-72316: Knox Gateway might crash when serving Hive and Impala clients under heavy load
- Performance issues with the PAM module affected Knox. Specifically, under heavy load, interaction with the libpam.so module might crash Knox Gateway.
- OPSAPS-75616: Logger safety valves for Cloudera 7.1.9 were incorrect
- Logger safety valves for Cloudera 7.1.9 did not apply correctly for Knox.
- OPSAPS-74281: Disabled the live Spark UI when the ENCRYPT_ALL_PORTS feature flag is enabled
- Enhanced the security of Spark by implementing new default configuration settings (a sketch follows):
  - spark.ui.enabled=false: prevents initiation of an HTTP service that can be accessed from external hosts.
  - spark.io.encryption.keySizeBits=256: the default Spark key size has been increased from 128 to 256 bits.
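A minimal sketch of the new defaults expressed as standard Spark configuration entries; Cloudera Manager normally manages these values, so the file placement is only illustrative:

```
# Illustrative spark-defaults.conf entries reflecting the new defaults
spark.ui.enabled=false                 # do not start the externally reachable live UI HTTP service
spark.io.encryption.keySizeBits=256    # I/O encryption key size raised from 128 to 256 bits
```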
- OPSAPS-74346, OPSAPS-74316: Enhancing Spark security
- Changed the generation of the spark.yarn.historyServer.address value to use the HTTPS address when SSL/TLS is enabled. The new spark3.network.crypto.enabled configuration property is now available to enable AES-based encryption. A sketch follows.
- OPSAPS-72254: UCL | FIPS Failed to upload Spark example jar to HDFS in cluster mode
- Fixed an issue with deploying the Spark 3 Client Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-env.sh. For more information, see the entry on the new Cloudera Manager configuration parameter spark_pyspark_executable_path, added to Livy for Spark 3, in Behavioral Changes In Cloudera Manager 7.13.2.
- OPSAPS-75290, OPSAPS-74994: The yarn_enable_container_usage_aggregation job is failing with "Null real user" error on Service Monitor
- The yarn_enable_container_usage_aggregation job fails with a "Null real user" error on Service Monitor when the YARN service runs on a compute cluster with Stub DFS, and when the PowerScale service runs in the cluster with the PowerScale DFS provider instead of HDFS.
- OPSAPS-73372: hbase-env.sh is incorrectly copied without variable substitution to dependent projects
- 7.3.2.0
- OPSAPS-74862: Unable to set HBase RPC mTLS key for clients in Cloudera Manager
- 7.3.2.0
- OPSAPS-76258: The Deploy client configuration and refresh operation fails after a CDH upgrade
- 7.3.2.0
- OPSAPS-71576: Default value for fe_service_threads increased to improve concurrency
- The default value for the fe_service_threads setting was 64. Starting with Cloudera Runtime 7.13.2, the default value is 128.
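For reference, fe_service_threads is an impalad startup flag; a sketch of the flag form with the new default (how the value is surfaced through Cloudera Manager is not shown here):

```
# impalad startup flag form; 128 is the new default, 64 was the previous one
--fe_service_threads=128
```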
- OPSAPS-74019, OPSAPS-72739: Query execution stability with temporary directories
- Queries failed with an execution error when using a compression library. This happened because the system attempted to use /tmp as a temporary folder for script execution, which was not permitted by default for this library, leading to query failures.
- OPSAPS-74044: Setting the catalog topic mode when disabling the local catalog
- Previously, unchecking the local_catalog_enabled checkbox in the Impala configuration page did not correctly trigger the necessary evaluators to set the catalog topic mode to full or disable the local catalog in impalad.
- OPSAPS-72905: Missing MemoryUsage counter in Impala Query Profile
- Previously, the MemoryUsage counter was missing from the Impala Query Profile in Cloudera Manager. This issue caused the memory_aggregate_peak metric to display incorrect values.
- OPSAPS-73880: Impala thrift definition update
- Previously, the Thrift files under if/impala/ were outdated, which could lead to compatibility issues with newer versions of Impala.
- OPSAPS-76290: HMS Metastore schema setup timeout
- Previously, the Create Hive Metastore database tables command frequently failed because the context preparation consumed most of the allocated 150-second timeout, leaving insufficient time for schema initialization.
- OPSAPS-72998: Missing Hive Metastore event API charts
- Previously, charts for Hive Metastore (HMS) event APIs, including get_next_notification, get_current_notificationEventId, and fire_listener_event, were missing from the Cloudera Manager Charts Library.
- OPSAPS-72930: Tez client configuration during upgrade
- Previously, the Tez client configuration was not automatically deployed during the upgrade process.
- OPSAPS-60161: Hive Metastore canary test failures
- Previously, the cloudera_manager_metastore_canary_test failed in environments with multiple Hive Metastore (HMS) nodes.
- OPSAPS-75843: Hive external table replication fails when ZooKeeper has a non-default service name
- Previously, Hive external table replication policies failed when ZooKeeper was configured with a customized principal or non-default service name. This issue is now fixed. You can successfully use a customized principal by adding the -Dzookeeper.sasl.client.username=[***ADD CUSTOMIZED PRINCIPAL***] key-value pair to the relevant advanced configuration property, as sketched below.
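A minimal sketch of the key-value pair, assuming it is appended to the Hive replication Java options; the principal value is hypothetical:

```
# Hypothetical customized ZooKeeper client principal
-Dzookeeper.sasl.client.username=custom-zk-principal
```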
- OPSAPS-70834: Multiple instances of Atlas replication policy are running at the same time
- Previously, multiple instances of an Atlas replication policy were running at the same time, which was incorrect. This issue is now fixed.
- OPSAPS-70681: Atlas client configuration at policy level
- Previously, you could set Atlas client-related properties only at the cluster level, which was not efficient. This issue is now fixed. You can now configure these properties at the replication policy level using the Cloudera Manager API. For example, you can set the following properties:

```
"atlasClientAdvanceConfigs": {
  "atlas.client.connectTimeoutMSecs": "12345",
  "atlas.client.readTimeoutMSecs": "12345"
}
```

- OPSAPS-70713: Error is displayed when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
- Previously, you could not create an Atlas replication policy between clusters if one or both the clusters used Dell EMC Isilon storage. This issue is now fixed.
- OPSAPS-71220: Replication History page displays incorrect status for Atlas replication
- Previously, when you ran Hive external table or Iceberg replication policies that included replicating Atlas metadata (also called composite replication), the Replication Policies page displayed success even if one of the replications failed. For example, if during the Iceberg replication policy run, the Atlas metadata replication failed, the page displayed the Successful status, which was incorrect. This issue is now fixed.
- OPSAPS-75080, OPSAPS-75125: Replication policies history page displays half the count of history than expected for composite replication
- Previously, the Replication Policy History page for a composite replication policy displayed half the number of job runs. The composite replication policies include Hive external table or Iceberg replication policies that also migrated Atlas metadata. This issue is now fixed. The page displays all the job runs.
- OPSAPS-74864: Iceberg composite replication policy displays all the options in the history list
- Previously, during an Iceberg composite replication policy job run, when Atlas replication failed but Iceberg replication continued, the Replication Policies page displayed all the available options in the History list, which was incorrect. This issue is now fixed.
- OPSAPS-76077, OPSAPS-75926: Hive external metadata-only replication of Ozone-backed tables fails for virtual views
- Previously, Hive on Ozone external metadata replication failed if the input regex matched any virtual views. This issue is now fixed. Virtual views are replicated by default. If you do not want to replicate the virtual views, add the DISALLOW_VIRTUAL_VIEWS_FOR_OZONE=true key-value pair to the relevant advanced configuration property, as sketched below.
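A minimal sketch of the opt-out key-value pair (the exact advanced configuration property that carries it is not named here):

```
# Skip virtual views during Hive on Ozone external metadata replication
DISALLOW_VIRTUAL_VIEWS_FOR_OZONE=true
```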
- OPSAPS-73218, OPSAPS-73219: Dry run for Ozone replication policies does not work as expected
- Previously, the Dry Run action for Ozone replication policies failed and led to data loss. This issue is now fixed. The Dry Run action is no longer available for Ozone replication policies when the Listing type is Incremental only or Incremental with fallback to full file listing.
- OPSAPS-71067: Wrong interval sent from the Replication Manager UI after Ozone replication policy submit or edit process
- Previously, when you edited the existing Ozone replication policies, the schedule frequency changed unexpectedly. This issue is now fixed.
- OPSAPS-74203: Incorrect parameters are displayed for HBase Snapshot operations
- Previously, incorrect parameters were displayed for HBase Snapshot operations on the Snapshot Policies page and in Cloudera Manager Server logs. The UI now properly interpolates the tableName and snapshotName into i18n message strings to display the correct parameters.
- OPSAPS-70822: Hive external table replication policy could not be saved on the ‘Edit Hive External Table Replication Policy’ window
- Previously, Replication Manager did not save the changes as expected when you clicked Save Policy after you edited a Hive replication policy using the edit option for the replication policy on the Replication Policies page. This issue is fixed.
- OPSAPS-72276: Cannot edit Ozone replication policy if the MapReduce service is stale
- Previously, you could not edit an Ozone replication policy in Replication Manager if the MapReduce service did not load completely. This issue is fixed.
- OPSAPS-71596, OPSAPS-69782: Exception appears if the peer Cloudera Manager's API version is higher than the local cluster's API version
- HBase replication using HBase replication policies in CDP Public Cloud Replication Manager between two Data Hub/COD clusters now succeeds as expected when all the following conditions are true:
- The destination Data Hub/COD cluster’s Cloudera Manager version is 7.9.0-h7 through 7.9.0-h9 or 7.11.0-h2 through 7.11.0-h4, or 7.12.0.0.
- The source Data Hub/COD cluster's Cloudera Manager major version is higher than the destination cluster's Cloudera Manager major version.
- The Initial Snapshot option is chosen during the HBase replication policy creation process and/or the source cluster is already participating in another HBase replication setup as a source or destination with a third cluster.
- OPSAPS-71424: The 'configuration sanity check' step ignores the replication advanced configuration snippet values during the Ozone replication policy job run
- Previously, the OBS-to-OBS Ozone replication policy jobs failed when the S3 property values for fs.s3a.endpoint, fs.s3a.secret.key, and fs.s3a.access.key were empty in Ozone Service Advanced Configuration Snippet (Safety Valve) for ozone-conf/ozone-site.xml, even when these properties were defined in Ozone Replication Advanced Configuration Snippet (Safety Valve) for core-site.xml. This issue is fixed.
- OPSAPS-75136, OPSAPS-75187, OPSAPS-75245, OPSAPS-75449: Kerberos ticket validation fails during HDFS replication
- Previously, Kerberos ticket validation failed during the HDFS replication policy run. This issue is now fixed because Kerberos ticket validation now checks the current cached tickets by utilizing the Kerby Credential Cache. This improvement also prevents a round-trip authentication request to the Key Distribution Center (KDC).
- OPSAPS-74314, OPSAPS-74636: HBase snapshot export always runs with the default client configuration
- Previously, when multiple HBase services existed in a cluster, the HBase export process used the default client configuration. This issue is now resolved because the export process prioritizes the correct HBase replication client configurations based on the set CLASSPATH value in the snapshot-hbase.sh file.
- OPSAPS-73217, OPSAPS-74665, OPSAPS-75303, OPSAPS-75444: Snapshot retention after incremental Ozone replication dry run
- Previously, the dry run process for the incremental Ozone replication policy did not delete the snapshot it created after the replication process was complete. This issue is now fixed. For information about this issue, see the corresponding Knowledge article: Technical Service Bulletin 2025-835: Dry run of incremental Ozone replication can cause failure to replicate some changes in Cloudera Replication Manager.
- OPSAPS-73138, OPSAPS-72435: Ozone OBS-to-OBS replication policies created incorrect directories in the target cluster
- Ozone OBS-to-OBS replication policies created incorrect directories in the target cluster even when no such directories existed on the source cluster. This issue is now resolved.
- OPSAPS-72447, CDPD-76705: Ozone incremental replication fails to copy renamed directory
- Ozone incremental replication using Ozone replication policies succeeded but could fail to synchronize nested renames for FSO buckets. When a directory and its contents were renamed between replication runs, the outer-level rename was synchronized but the contents under the previous name were not. This issue is fixed now.
- OPSAPS-74082: Ozone FSO to FSO replication failed on link buckets
- Previously, the Ozone replication policies for FSO to FSO buckets failed for link buckets if the link bucket was not in the s3v volume. This issue is now resolved.
- OPSAPS-74040: Ozone OBS replication fails due to pre-filelisting check failure
- During OBS-to-OBS Ozone replication, if the source bucket was a linked bucket, the replication failed during the Run Pre-Filelisting Check step, and the "Source bucket is a linked bucket, however the bucket it points to is also a link" error message appeared, even when the source bucket directly linked to a regular, non-linked bucket. The issue is now fixed.
- OPSAPS-73906, OPSAPS-73737, OPSAPS-73655, OPSAPS-74061: Cloud replication no longer fails after the delegation token is issued
- Previously, the replication policies were failing during incremental replication job runs if you chose the relevant option during the replication policy creation process. You can now set com.cloudera.enterprise.distcp.skip-delegation-token-on-cloud-replication to false in the advanced configuration snippet to ensure that HDFS and Hive external table replication policies replicating from an on-premises cluster to the cloud do not fail. When the advanced configuration snippet is set to false, the MapReduce client process obtains the delegation tokens explicitly before it submits the MapReduce job for the replication policy. By default, the advanced configuration snippet is set to true. A sketch follows.
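A minimal sketch of the advanced configuration snippet entry described above:

```
# false = the MapReduce client obtains delegation tokens explicitly before job submission (default: true)
com.cloudera.enterprise.distcp.skip-delegation-token-on-cloud-replication=false
```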
- OPSAPS-73142: The required configuration from replication safety valve is not accessed
- An Ozone replication policy with Incremental with fallback to full file listing option failed with Pre-Filelisting Check Failed with Error: target bucket has layout OBS, but [fs.s3a.endpoint, fs.s3a.secret.key, fs.s3a.access.key] properties are missing from the target Ozone service core-site.xml config error because the required configuration was not available in the required folders. To mitigate this issue, the required configuration parameters are now added automatically to the required folders during the Ozone replication policy run.
- OPSAPS-72756: The runOzoneCommand API endpoint fails during the Ozone replication policy run
- The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the getOzoneBucketInfo command. In this scenario, the Ozone replication policy runs also fail if the following conditions are true:
  - The source Cloudera Manager version is 7.11.3 CHF11 or 7.11.3 CHF12.
  - The target Cloudera Manager version is 7.11.3 through 7.11.3 CHF10, or 7.13.0.0 or later where the API_OZONE_REPLICATION_USING_PROXY_USER feature flag is disabled.
- OPSAPS-72468: Subsequent Ozone OBS-to-OBS replication policy runs do not skip replicated files during replication
- Replication Manager now skips the already replicated files during subsequent Ozone replication policy runs after you add the following key-value pairs to the relevant advanced configuration property (a sketch follows this list):
  - com.cloudera.enterprise.distcp.ozone-schedules-with-unsafe-equality-check = [***ENTER COMMA-SEPARATED LIST OF OZONE REPLICATION POLICY IDs OR ENTER all TO APPLY TO ALL OZONE REPLICATION POLICIES***]: skips the already replicated files when the relative file path, file name, and file size are equal, ignoring modification times.
  - com.cloudera.enterprise.distcp.require-source-before-target-modtime-in-unsafe-equality-check = [***ENTER true OR false***]: when you add both key-value pairs, subsequent Ozone replication policy runs skip replicating files when the matching file on the target has the same relative file path, file name, and file size, and the source file's modification time is less than or equal to the target file's modification time.
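A minimal sketch of both key-value pairs; the values shown (all and true) are illustrative choices from the ranges described above:

```
# Apply the unsafe equality check to all Ozone replication policies
com.cloudera.enterprise.distcp.ozone-schedules-with-unsafe-equality-check=all
# Additionally require source modification time <= target modification time
com.cloudera.enterprise.distcp.require-source-before-target-modtime-in-unsafe-equality-check=true
```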
- OPSAPS-67498: The Replication Policies page takes a long time to load
- Previously, the Replication Policies page took a long time to load. This issue is resolved.
- OPSAPS-69622: Cannot view the correct number of files copied for Ozone replication policies
- The last run of an Ozone replication policy did not show the correct number of files copied during the policy run when you loaded the page after the Ozone replication policy run completed successfully. This issue is fixed now.
- OPSAPS-70848: Hive external table replication policies succeed when the source cluster uses Dell EMC Isilon storage
- During the Hive external table replication policy run, the replication policy failed at the Hive Replication Export step. This issue is fixed now.
- OPSAPS-70909: Use specified users instead of "hive" for Ozone replication-related commands
- Starting from Cloudera Manager 7.11.3 CHF15, Ozone commands executed by Ozone replication policies are run by impersonating the users that you specify in the Run as Username and Run on Peer as Username fields in the Create Ozone replication policy wizard. The bucket access for OBS-to-OBS replication depends on the user with the access key specified in the fs.s3a.access.key property. When the source and target clusters are secure and Ranger is enabled for Ozone, specific permissions are required for Ozone replication to replicate Ozone data using Ozone replication policies.
- OPSAPS-71093: Validation on source for Ranger replication policy fails
- You were automatically logged out of the Cloudera Manager page when you created a Ranger replication policy. This was because the source cluster did not support the getUsersFromRanger or getPoliciesFromRanger API requests. The issue is fixed now. The required validation on the source completes successfully as expected.
- OPSAPS-72559: Incorrect error messages appear for Hive ACID replication policies
- Replication Manager now shows correct error messages for every Hive ACID replication policy run on the Replication Policies page as expected.
- OPSAPS-71544, OPSAPS-75166, OPSAPS-75182: Ranger replication policies failed for custom username
- Previously, when you used a custom username or Kerberos principal in the Ranger replication policy, the policy failed during the transformation step if the custom Ranger process user was set in Cloudera Manager. This issue is now fixed.
- OPSAPS-72509: Hive metadata transfer to GCS fails with ClassNotFoundException
- Hive external table replication policies from an on-premises cluster to the cloud failed during the Transfer Metadata Files step when the target was on Google Cloud and the source Cloudera Manager version was 7.11.3 CHF7, 7.11.3 CHF8, 7.11.3 CHF9, 7.11.3 CHF9.1, 7.11.3 CHF10, or 7.11.3 CHF11. This issue is fixed.
- OPSAPS-72446, OPSAPS-71565, OPSAPS-71566, OPSAPS-73405, OPSAPS-72860: Replication policy runs when the source or target cluster becomes available after it recovers from temporary node failures
- Hive replication policies and HBase replication policies can now recover from a temporary node failure on the source or target clusters and continue the replication policy job run. Alternatively, you can rerun the failed or aborted policies manually. To ensure that the RemoteCmdWork daemon continues to poll even in case of network failures or if Cloudera Manager goes down, you can set the remote_cmd_network_failure_max_poll_count = [***ENTER REMOTE EXECUTOR MAX POLL COUNT***] parameter, as sketched below.
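A minimal sketch of the parameter; the poll count value is hypothetical:

```
# Hypothetical maximum poll count for the RemoteCmdWork daemon
remote_cmd_network_failure_max_poll_count=30
```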
- OPSAPS-74279, OPSAPS-72439, OPSAPS-74265: HDFS and Hive external tables replication policies failed when using custom krb5.conf files
- HDFS and Hive external tables replication policies failed when using custom krb5.conf files. This is because the custom krb5.conf was not propagated to the required files. To mitigate this issue, complete the instructions provided in Step 13 in Using a custom Kerberos configuration path.
- OPSAPS-72978: The getUsersFromRanger API truncates the user list after 200 items
- The v58/clusters/[***CLUSTER***]/services/[***SERVICE***]/commands/getUsersFromRanger Cloudera Manager API endpoint no longer truncates the list of returned users at 200 items. A sketch of calling it follows.
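A hedged sketch of calling the endpoint; the host, port, credentials, and cluster/service names are hypothetical, and the assumption here is that the command is invoked as a POST request like other Cloudera Manager service commands:

```
# Hypothetical host, port, credentials, and names
curl -u admin:admin -X POST \
  "https://cm-host.example.com:7183/api/v58/clusters/Cluster1/services/ranger/commands/getUsersFromRanger"
```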
- OPSAPS-73602, OPSAPS-74360: HDFS replication policies to cloud failed with HTTP 400 error
- The HDFS replication policies to the cloud were failing after you edited the replication policies. This issue is fixed.
- OPSAPS-72804: For recurring policies, the interval is overwritten to 1 after the replication policy is edited
- Previously, when you edited an Atlas, Iceberg, Ozone, or a Ranger replication policy that had a recurring schedule on the Replication Manager UI, the Edit Replication Policy modal window appeared as expected. However, the frequency of the policy was reset to run at 1 unit where the unit depended on what you configured in the replication policy. For example, if you configured the replication policy to run every four hours, it was reset to one hour when you edited the replication policy. This issue is fixed.
- OPSAPS-72214: Cannot create a Ranger replication policy if the source and target cluster names are not the same
- You could not create a Ranger replication policy if the source cluster and target cluster names were not the same. This issue is fixed.
- OPSAPS-71853: The Replication Policies page does not load the replication policies’ history
- When the sourceService was null for a Hive ACID replication policy, the Replication Policies page failed to load the existing replication policies' history details and the current state of the replication policies. This issue is now fixed.
- OPSAPS-71256: The “Create Ranger replication policy” action shows 'TypeError' if no peer exists
- When you clicked the Create Ranger replication policy option and no peer existed, the TypeError: Cannot read properties of undefined error appeared. This issue is fixed now.
- OPSAPS-71459: Commands continue to run after Cloudera Manager restart
- Some remote replication commands continued to run endlessly even after a Cloudera Manager restart operation. This issue is fixed.
- OPSAPS-72573: Monitoring for Kudu tablet sizes and replica counts
- Previously, Cloudera Manager lacked integrated monitoring for Kudu tablet sizes and replica counts, making it difficult to track on-disk footprints or identify excessively large tablets.
- OPSAPS-75602: Issue with RANGER_C719 CSD becoming stale after upgrading Cloudera Manager
- Fixed an issue where the RANGER_C719 CSD could become stale
after upgrading Cloudera Manager from 7.13.1.600 with Cloudera 7.1.9 to 7.13.2.0 by fixing the following:
- OPSAPS-73498: Added Cloudera Manager side ranger-trino integration changes.
- OPSAPS-73152: Improved Ranger Admin Diagnostic collection command from Cloudera Manager scripts.
- OPSAPS-75556: After upgrading from 7.1.9 to 7.3.2.0, the dataset field type is set to boolean in the Solr managed-schema
- Fixed an issue where, after upgrading from Cloudera 7.1.9 to 7.3.2, the datasets field in the ranger_audits Solr collection schema was incorrectly set to the boolean type instead of key_lower_case with multiValued="true". This schema mismatch caused Ranger Admin to fail to load the Access Audit page on upgraded clusters. The upgrade process now updates the ranger_audits Solr schema so that the datasets field is created with the correct type and behaves consistently with fresh 7.3.2 deployments.
- OPSAPS-71619: Removed the mandatory validation for ranger.ldap.user.dnpattern
- Previously, when LDAP was configured as the external authentication type for Ranger Admin, the ranger.ldap.user.dnpattern parameter was mandatory. If it was not set, the Ranger Admin service failed to start, even though this parameter is rarely required and is ignored when LDAP bind DN/password and user search parameters are configured. This has been fixed by removing the mandatory validation for ranger.ldap.user.dnpattern, so the parameter is now optional and the service can start without requiring a dummy value.
- OPSAPS-69156: Fixed an issue with Java add-opens/add-modules/add-exports options
- Cloudera Manager components now consistently use the --add-opens=, --add-modules=, and --add-exports= syntax for Java options. This avoids cases where options passed via JAVA_TOOL_OPTIONS could be rejected (for example, when using --add-opens or --add-exports without =), improving compatibility across different Java runtimes. An example follows.
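An illustrative example of the consistent equals-sign form; the module and package names are only examples:

```
# Accepted form: --add-opens=<module>/<package>=<target>
export JAVA_TOOL_OPTIONS="--add-opens=java.base/java.lang=ALL-UNNAMED --add-exports=java.base/sun.nio.ch=ALL-UNNAMED"
```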
- OPSAPS-67197: Ranger RMS server shows as healthy without service being accessible
- Previously, Cloudera Manager reported the Ranger RMS server as healthy based only on the RMS process (PID), even when the RMS web service was not fully initialized and the service was inaccessible. The health check logic has been updated to use a Cloudera Manager web alert that verifies the Ranger RMS web endpoint instead of relying solely on the PID. This allows Cloudera Manager to more accurately detect when RMS is not accessible and helps users identify RMS availability issues faster.
- OPSAPS-72766: Ranger KMS tomcat context update
- Updated the default Tomcat context for Ranger KMS from /kms to / by changing the ranger.contextName property in ranger-kms-site.xml. This aligns the Ranger KMS context path with Cloudera configuration and simplifies access and integration.
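The change corresponds to the following ranger-kms-site.xml entry, shown in standard Hadoop-style property form for illustration:

```
<property>
  <name>ranger.contextName</name>
  <value>/</value><!-- previously /kms -->
</property>
```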
- OPSAPS-74083: JAAS configuration for Ranger services connecting to ZooKeeper with strict SASL enforcement
- When ZooKeeper was configured with strict SASL enforcement, Ranger Admin, Ranger Tagsync, and Ranger RAZ could not establish SASL-secured connections because no JAAS configuration was defined. This has been fixed by introducing a dedicated JAAS configuration file for these Ranger services and adding a Java option to reference this file, enabling successful SASL authentication with ZooKeeper.
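A generic sketch of the kind of JAAS client entry involved in SASL-secured ZooKeeper connections; every value here is hypothetical, and the actual file is generated by Cloudera Manager:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/hypothetical.keytab"
  principal="hypothetical-principal@EXAMPLE.COM"
  storeKey=true;
};
```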
- OPSAPS-74063: RANGER_RMS CSD changes for supporting multiple storage types
- Fixed an issue in the Ranger RMS CSD configuration for the 7.13.2.0 release to support multiple storage types by adding S3- and Ozone-specific HMS source service properties and updating the supported URI schemes and default HMS source service type.
- OPSAPS-74517: Service users denied access to Kafka topics
- Service users were denied access to internal Kafka Connect topics (connect-status, connect-secrets, connect-offsets, and connect-configs), generating a large number of access-denied audit entries for the streamsrepmgr and atlas service users. The default "connect internal - topic" policy for Kafka has been updated to include these service users, ensuring they can access the required internal topics and preventing further denied-access audit noise.
- OPSAPS-72249: Oozie database dump fails on JDK 17
- Previously, the Oozie database dump and load commands could not be executed from Cloudera Manager when using JDK 17. This issue is fixed now.
- OPSAPS-72767: The Install Oozie ShareLib command fails on FIPS and FedRAMP clusters
- Previously, the Install Oozie ShareLib command could not be executed on FIPS and FedRAMP clusters. This issue is fixed now.
- OPSAPS-75667: Oozie failed to start due to an insufficient minimum Java heap size setting
- Previously, the minimum heap size for Oozie was set to 256 MB, which could lead to out-of-memory errors during startup. This issue is fixed, and the minimum heap size has been increased to 1 GB to ensure reliable Oozie service startup and operation.
- OPSAPS-70948: The HttpFS Java option parameters set through Cloudera Manager are not being picked up
- Previously, HDFS HttpFS did not receive all Java-related options from Cloudera Manager. This issue is fixed now.
- OPSAPS-75733: Services are not enabled for Oozie on PostUpgrade
- During upgrades from Cloudera Manager 7.1.x to 7.3.x, the removal of 7.2.x upgrade handlers (as part of OPSAPS-74572) caused required service dependencies for Oozie to be unset. This issue is fixed; the fix restores the necessary dependency setup for Oozie during upgrades to 7.3.x, ensuring that all required services are properly enabled and configured post-upgrade.
