Cloudera Manager 7.11.3 Cumulative hotfix 2

Learn more about the Cloudera Manager 7.11.3 cumulative hotfix 2.

This cumulative hotfix was released on December 21, 2023.

New features and changed behavior for Cloudera Manager 7.11.3 CHF2 (version: 7.11.3.3-47960007):
Replicate Hive ACID tables and Iceberg tables in Dell EMC Isilon storage clusters using Hive ACID table replication policies and Iceberg replication policies respectively
You can use replication policies to replicate Hive ACID tables and Iceberg tables between CDP Private Cloud Base 7.1.9 or higher clusters on Dell EMC Isilon storage, using Cloudera Manager 7.11.3 CHF2 or higher versions.
Wait timeout for regenerating credentials in Active Directory (AD)
Cloudera Manager supports a new parameter, ad_wait_time_for_regenerate, that defines how long to wait after deleting an old principal so that the deletion can replicate to all AD servers. This ensures that the new principal is created successfully after the deletion completes. Set the timeout value according to your AD setup (the number of AD server replicas). If the timeout value is too low, a stale principal error might occur (ldap_add: Constraint violation (19) additional info: 000021C8: AtrErr: DSID-03200EB7, #1: 0: 000021C8: DSID-03200EB7, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 90290 (userPrincipalName)).
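A minimal sketch of setting this parameter through the Cloudera Manager API, assuming the property is exposed under this name through the standard /cm/config endpoint and that the API version (v54), host, port, credentials, and the example value are placeholders you replace for your deployment:
  curl -u admin:admin -X PUT -H "Content-Type: application/json" \
    -d '{"items":[{"name":"ad_wait_time_for_regenerate","value":"60"}]}' \
    "https://cm-host.example.com:7183/api/v54/cm/config"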
FIPS support for JDK11 in Kudu
Added FIPS support for JDK11 in Kudu.
FIPS support for JDK11 in Hive
Added the required JVM arguments in Hive processes in order to execute on a FIPS enabled cluster.
The following is the list of known issues and their corresponding workarounds shipped for Cloudera Manager 7.11.3 CHF2 (version: 7.11.3.3-47960007):
OPSAPS-69406: Cannot edit existing HDFS and HBase snapshot policy configuration
The Edit Configuration modal window does not appear when you click Actions > Edit Configuration on the Cloudera Manager > Replication > Snapshot Policies page for existing HDFS or HBase snapshot policies.
None.
OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add the Zeppelin or Knox service later to an existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add the Zeppelin or Knox service later to an existing cluster, you must manually add the respective service user to the livy_admin_users configuration on the Livy configuration page.
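For example, assuming the Zeppelin and Knox service users are named zeppelin and knox (hypothetical names; keep any users that are already present in the list), the configuration value is a comma-separated list such as:
  livy_admin_users = zeppelin,knox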

OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6 includes, by default, the property require_secure_transport=ON in the configuration file (/etc/my.cnf); this property is absent in MariaDB 10.4. The setting prohibits non-TLS connections, which leads to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not use the same connection.

To resolve the issue temporarily, comment out or disable the require_secure_transport line in the /etc/my.cnf configuration file.
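A minimal sketch of the edit (the section header in /etc/my.cnf can differ between installations):
  [mysqld]
  #require_secure_transport=ON
Then restart the database for the change to take effect, for example with sudo systemctl restart mariadb.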

OPSAPS-68452: Azul Open JDK 8 and 11 are not supported with Cloudera Manager

Azul Open JDK 8 and 11 are not supported with Cloudera Manager. To use Azul Open JDK 8 or 11 with Cloudera Manager RPMs/DEBs, you must manually create a symlink between the Zulu JDK installation path and the default JDK path.

After installing Azul Open JDK 8 or 11, run the following commands on all the hosts in the cluster:
Azul Open JDK 8
RHEL or SLES
# sudo ln -s /usr/lib/jvm/java-8-zulu-openjdk-jdk /usr/lib/jvm/java-8-openjdk
Ubuntu or Debian
# sudo ln -s /usr/lib/jvm/zulu-8-amd64 /usr/lib/jvm/java-8-openjdk
Azul Open JDK 11
For DEBs only
# sudo ln -s /usr/lib/jvm/zulu-11-amd64 /usr/lib/jvm/java-11
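To confirm that the symlink resolves to a working JDK (a quick check using the JDK 8 paths from the commands above):
# ls -l /usr/lib/jvm/java-8-openjdk
# /usr/lib/jvm/java-8-openjdk/bin/java -version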
OPSAPS-69340: The -Dlog4j.configurationFile system property is not working with the log4j library of the Cloudera Manager Server.

The incorrect notation used to define the log4j configuration file name (the -Dlog4j.configurationFile system property) prevents the Cloudera Manager Server from picking up updates made to the log4j.properties file.

Perform the following steps:
  1. Edit the /etc/default/cloudera-scm-server file by adding the following line:
    export CMF_JAVA_OPTS="-Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties $CMF_JAVA_OPTS"
  2. Restart the Cloudera Manager Server by running the following command:
    sudo systemctl restart cloudera-scm-server
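To verify the restart and watch for logging changes (the log path below is the default for package-based installations):
    sudo systemctl status cloudera-scm-server
    tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log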
CDPD-62464: Java process called by the nav2atlas.sh tool fails on JDK 8
While running the nav2atlas.sh script on Oracle JDK 8, an error message is thrown and the script returns code 0 even though the run is unsuccessful.
Install JDK 11 on the host. Make sure that it is not placed on the default path and is not set as the default JAVA_HOME. In a shell, set JAVA_HOME to the JDK 11 location and run the nav2atlas.sh script.
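A minimal sketch of the workaround, assuming JDK 11 was unpacked to the hypothetical location /opt/jdk-11 (substitute your actual path, and pass the script options you normally use):
  export JAVA_HOME=/opt/jdk-11
  export PATH=$JAVA_HOME/bin:$PATH
  ./nav2atlas.sh [***options***]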
CDPD-62834: Status of the deleted table is seen as ACTIVE in Atlas after the completion of navigator2atlas migration process
The status of a deleted table displays as ACTIVE in Atlas.
None
CDPD-62837: During the navigator2atlas process, the hive_storagedesc is incomplete in Atlas
Some of the attributes of the hive_storagedesc entity are not populated.
None
OPSAPS-69897: NPE in Ozone replication from CM 7.7.1 to CM 7.11.3
When you use source Cloudera Manager 7.7.1 and target Cloudera Manager 7.11.3 for Ozone replication policies, the policies fail with the Failure during PreOzoneCopyListingCheck execution: null error. This is because the target Cloudera Manager 7.11.3 does not retrieve the required source bucket information for validation from the source Cloudera Manager 7.7.1 during the PreCopyListingCheck command phase. This error occurs whenever the source Cloudera Manager version is lower than 7.10.1 and the target Cloudera Manager version is 7.10.1 or higher in an Ozone replication policy.
Upgrade the source Cloudera Manager to version 7.11.3 or higher.
OPSAPS-69481: Some Kafka Connect metrics missing from Cloudera Manager due to conflicting definitions
The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents Cloudera Manager from registering these metrics and results in Streams Messaging Manager (SMM) returning an error. The metrics also cannot be monitored in the Cloudera Manager chart builder or queried through the Cloudera Manager API.
Contact Cloudera support for a workaround.
The following is the list of fixed issues shipped for Cloudera Manager 7.11.3 CHF2 (version: 7.11.3.3-47960007):
OPSAPS-68689: Unable to emit the LDAP Bind password in core-site.xml for client configurations

If the CDP cluster has LDAP group to OS group mapping enabled, applications running in Spark or YARN fail to authenticate to the LDAP server when trying to use the LDAP bind account during the LDAP group search.

This is because the LDAP bind password was not passed to the /etc/hadoop/conf/core-site.xml file. This was intentional behavior to prevent leaking the LDAP bind password in a clear-text field.

To fix this issue, follow the instructions in the Emitting the LDAP Bind password in core-site.xml for client configurations section.

OPSAPS-60139: Staleness performance issue in clusters with a large number of roles
In large clusters, Cloudera Manager took a long time to display the Configuration Staleness icon after a service configuration change. This issue is now fixed by improving the performance of the staleness-checking algorithm.
OPSAPS-68722: Java heap size can now be configured
You can now customize the Java heap size in YARN Queue Manager. The default should be valid in most deployment scenarios; update the setting only if a given cluster has run into memory-management issues, and leave it unchanged otherwise.
OPSAPS-68217: Add post replication diff to compare files
You can now trace files that go missing during snapshot-based cloud replication. To trace and debug the issue, perform the following steps (a worked example of the key-value pairs follows the steps):
  1. Go to the Clusters > HDFS > Configuration tab.
  2. To enable the debug steps, complete the following steps:
    1. Search for the HDFS Replication Environment Advanced Configuration Snippet (Safety Valve) property.
    2. Add the following key-value pair, and Save the changes:

      SCHEDULES_WITH_ADDITIONAL_DEBUG_STEPS = [***comma-separated list of numerical IDs of all the applicable replication policies***]

    3. Search for the HDFS Replication Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml property.
    4. Add the following key-value pair, and Save the changes:

      com.cloudera.enterprise.distcp.post-copy-reconciliation.fail-on = MISSING_ON_TARGET

      The possible values for this parameter include MISSING_ON_SOURCE, MISSING_ON_TARGET, MISSING_ON_BOTH, ANY_MISSING, and NONE. The default is NONE.

  3. To enable extra logging, complete the following steps:
    1. Search for the HDFS Replication Environment Advanced Configuration Snippet (Safety Valve) property.
    2. Add the following key-value pair, and Save the changes:
      EXTRA_LOG_CONFIGS_$SCHEDULE_ID =
      log4j.rootLogger=INFO,console;
      hadoop.root.logger=INFO,console;log4j.appender.console=org.apache.log4j.ConsoleAppender;
      log4j.appender.console.target=System.err;log4j.appender.console.layout=org.apache.log4j.PatternLayout;
      log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n;
      log4j.logger.org.apache.hadoop.fs.azurebfs.services.AbfsIoUtils=DEBUG,console;
      log4j.logger.org.apache.hadoop.fs.azurebfs.services.AbfsClient=DEBUG,console;
      log4j.logger.distcp.SimpleCopyListing=DEBUG,console;log4j.logger.distcp.SnapshotDiffGenerator=DEBUG,console

      The extra debug logs are collected on HDFS in the $logDir/debug directory. For example, the logs might be located at hdfs://user/hdfs/.cm/distcp/2023-08-24_206/debug.
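For example, assuming two replication policies with hypothetical numerical IDs 12 and 57 (substitute the IDs of your own policies), the key-value pairs from step 2 would look like the following:

      SCHEDULES_WITH_ADDITIONAL_DEBUG_STEPS = 12,57
      com.cloudera.enterprise.distcp.post-copy-reconciliation.fail-on = MISSING_ON_TARGET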

OPSAPS-68855: Fix replication policy deletion for Hive ACID replication policies using Dell PowerScale Isilon clusters
Hive ACID replication policies can now be deleted successfully on CDP Private Cloud Base 7.1.9 or higher clusters with Dell EMC Isilon storage using Cloudera Manager 7.11.3 CHF2 or higher versions.
OPSAPS-68516: Ozone replication diagnostic bundle collection
Replication Manager now generates a diagnostic information bundle for Ozone replication policies.
OPSAPS-68698: Replication command type is incorrectly reported for Ozone incremental replication
When you create an Ozone replication policy using the “Incremental with fallback to full file listing” Listing type, the Ozone replication command type now correctly reports the file listing type for the run.

The first run of an Ozone replication policy creates a snapshot during the run, but it cannot calculate a snapshot diff because there is no previous snapshot. In this case, full file listing is used for the first run of the policy. This is now reported correctly as FULL_FILE_LISTING_FALLBACK.

OPSAPS-68856: Fix Hive ACID replication policy creation when using Dell PowerScale Isilon clusters
Hive ACID replication policies can now be created successfully on CDP Private Cloud Base 7.1.9 or higher clusters with Dell EMC Isilon storage using Cloudera Manager 7.11.3 CHF2 or higher versions.
OPSAPS-68995: Convert some DistCp feature checks from CM version checks to feature flags
To ensure interoperability between different cumulative hotfixes (CHFs), the NUM_FETCH_THREADS, DELETE_LATEST_SOURCE_SNAPSHOT_ON_JOB_FAILURE, and RAISE_SNAPSHOT_DIFF_FAILURES DistCp features are now published as feature flags instead of relying on Cloudera Manager version checks.
OPSAPS-68658: Source ozone service ID is used as target
Ozone replication policies no longer fail when the Ozone service name differs between the source and destination clusters, because Ozone replication now uses the destination Ozone service name during the path normalization process.
OPSAPS-68526: Iceberg Replication support for Dell PowerScale
Iceberg replication policies run successfully on Kerberos-enabled clusters on Dell EMC Isilon storage. For more information, see Adding custom Kerberos keytab and Kerberos principal for replication policies.

The repositories for Cloudera Manager 7.11.3-CHF2 are listed in the following table:

Table 1. Cloudera Manager 7.11.3-CHF2
RHEL 9 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat9/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat9/yum/cloudera-manager.repo
RHEL 8 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat8/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat8/yum/cloudera-manager.repo
RHEL 7 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat7/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat7/yum/cloudera-manager.repo
SLES 15
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/sles15/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/sles15/yum/cloudera-manager.repo
SLES 12
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/sles12/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/sles12/yum/cloudera-manager.repo
Ubuntu 20
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/ubuntu2004/apt
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/ubuntu2004/apt/cloudera-manager.list
IBM PowerPC RHEL 9
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat9-ppc/yum
IBM PowerPC RHEL 8
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat8-ppc/yum
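A usage sketch for RHEL-compatible hosts, assuming a package-based installation that you are upgrading to this cumulative hotfix (substitute your paywall credentials for username:password and pick the path that matches your OS), is to download the repository file and upgrade the Cloudera Manager packages:
  sudo wget https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.3-47960007/redhat8/yum/cloudera-manager.repo -P /etc/yum.repos.d/
  sudo yum clean all
  sudo yum upgrade cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent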