Known Issues in Cloudera Manager 7.7.1

OPSAPS-68689: Unable to emit the LDAP Bind password in core-site.xml for client configurations

If the CDP cluster has LDAP group to OS group mapping enabled, applications running in Spark or YARN fail to authenticate to the LDAP server when they try to use the LDAP bind account during the LDAP group search.

This is because the LDAP bind password is not passed to the /etc/hadoop/conf/core-site.xml file. This is intended behavior, to prevent the LDAP bind password from leaking in a clear-text field.

Set the LDAP Bind password through the HDFS client configuration safety valve.
  1. In the Cloudera Manager Admin Console, go to Clusters and click the HDFS service.
  2. Click the Configuration tab. Search for the HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml configuration parameter.

  3. Add an entry with the following values:
    • Name = hadoop.security.group.mapping.ldap.bind.password
    • Value = (Enter the LDAP bind password here)
    • Description = Password for LDAP bind account
  4. Click Save Changes to save the safety valve entry.

  5. Follow the instructions in Manually Redeploying Client Configuration Files to deploy the client configuration files to the cluster (a verification check is sketched below).
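
After the client configuration is redeployed, you can confirm that the property reached the client configuration files. The following is a minimal check, assuming the default client configuration directory /etc/hadoop/conf; adjust the path if your deployment differs.

grep -A1 "hadoop.security.group.mapping.ldap.bind.password" /etc/hadoop/conf/hdfs-site.xml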

OPSAPS-68452: Azul Open JDK 8 and 11 are not supported with Cloudera Manager

Azul Open JDK 8 and 11 are not supported with Cloudera Manager. To use Azul Open JDK 8 or 11 for Cloudera Manager RPM/DEBs, you must manually create a symlink between the Zulu JDK installation path and the default JDK path.

After installing Azul Open JDK 8 or 11, run the following commands on all the hosts in the cluster (a quick verification check follows the commands):
Azul Open JDK 8
RHEL or SLES
# sudo ln -s /usr/lib/jvm/java-8-zulu-openjdk-jdk /usr/lib/jvm/java-8-openjdk
Ubuntu or Debian
# sudo ln -s /usr/lib/jvm/zulu-8-amd64 /usr/lib/jvm/java-8-openjdk
Azul Open JDK 11
For DEBs only
# sudo ln -s /usr/lib/jvm/zulu-11-amd64 /usr/lib/jvm/java-11
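To confirm that the symlink resolves to a working JDK, you can run a quick check such as the following. This sketch uses the RHEL or SLES JDK 8 path from above; substitute the path that applies to your platform and JDK version.
readlink -f /usr/lib/jvm/java-8-openjdk
/usr/lib/jvm/java-8-openjdk/bin/java -version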
OPSAPS-62805: Kafka role log file retrieval fails and diagnostic bundles do not contain the Kafka broker role logs.

Kafka and Cruise Control role-level logs cannot be accessed due to a Log4j 2 issue.

There is no workaround for this issue.
OPSAPS-67152: Cloudera Manager does not allow you to update some configuration parameters.

Cloudera Manager does not allow you to set the dfs_access_time_precision and dfs_namenode_accesstime_precision configuration parameters to "0".

You cannot update dfs_access_time_precision and dfs_namenode_accesstime_precision to "0". If you enter "0" in these configuration input fields, the field is cleared and a validation error appears: This field is required.

To fix this issue, perform the workaround steps described in the KB article.

If you need any guidance during this process, contact Cloudera support.

OPSAPS-65213: Ending maintenance mode for a commissioned host with either an Ozone DataNode role or a Kafka Broker role running on it might result in an error.

You may see the following error if you end maintenance mode for the Ozone and Kafka services from Cloudera Manager while the roles on the host are not decommissioned.

Execute command Recommission and Start on service OZONE-1
Failed to execute command Recommission and Start on service OZONE-1
Recommission and Start
Command Recommission and Start is not currently available for execution.
To resolve this issue, use the Cloudera Manager API to take the host out of maintenance mode (a direct curl call is also sketched after these steps).
  1. Log into Cloudera Manager as an Administrator.
  2. Go to Hosts > All Hosts.
  3. Select the host for which you need to end the maintenance mode from the available list and click the link to open the host details page.
  4. Copy the Host ID from the Details section.
  5. Go to Support > API Explorer.
  6. Locate and click the /hosts/{hostId}/commands/exitMaintenanceMode endpoint for HostsResource API to view the API parameters.
  7. Click Try it out.
  8. Enter the ID of your host in the hostId field.
  9. Click Execute.
  10. Verify that the maintenance mode status is cleared for the host by checking the Server response code.

    The operation is successful if the API response code is 200.
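
Alternatively, you can call the same endpoint directly with curl instead of using the API Explorer. This is a minimal sketch that assumes the Cloudera Manager host name, the default TLS port 7183, the API version shown in your API Explorer, and admin credentials; substitute the values for your deployment and the Host ID you copied earlier.

curl -u admin:<password> -X POST "https://<cm_host>:7183/api/<api_version>/hosts/<hostId>/commands/exitMaintenanceMode"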

If you need any guidance during this process, contact Cloudera Support.

Cloudera bug: OPSAPS-64029
When Cloudera Manager is upgraded to 7.7.1 or later from an earlier version, Queue Manager (QM) is flagged as stale because of the new support for auto-configuration of QM with the YARN Resource Manager (RM).
Restart the QM role at a convenient time.
Cloudera bug: OPSAPS-63881: When CDP Private Cloud Base is running on RHEL/CentOS/Oracle Linux 8.4, services fail to start because service directories under the /var/lib directory are created with 700 permissions instead of 755.
Run the following command on all managed hosts to change the permissions to 755. Run the command for each affected service directory under /var/lib (the sketch below lists directories created with 700 permissions):
chmod -R 755 [***path_to_service_dir***]
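If you are not sure which directories are affected, you can first list the service directories that were created with 700 permissions. This sketch assumes the affected directories sit directly under /var/lib; review the output before changing any permissions.
find /var/lib -maxdepth 1 -type d -perm 0700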
Cloudera bug: OPSAPS-63838: Cloudera Manager is unavailable after failover
When high availability is enabled for Cloudera Manager, and there is a failover from the Active to the Passive server, the Cloudera Manager server may be unavailable for 15-20 seconds when failing back to the Active server.

Known Issues in Replication Manager

OPSAPS-64388 - Schedule creation API doesn't stop user from creating a bucket within a bucket
When the bucket paths in the source and target clusters are different, the replication policy creation API does not fail, but the Ozone replication fails with the Ozone File Listing Command Failed error.
Before you create an Ozone replication policy using the Cloudera Manager API, ensure that the bucket path, which includes the volume name and the bucket name, is the same in the target cluster as in the source cluster.
OPSAPS-64466 - JCKS way of authentication on Ozone causes YARN to go down on Auto-TLS cluster
During an Ozone replication policy job for OBS buckets, the YARN application goes down and does not restart when the authentication credentials for Auto-TLS are provided using the hadoop.security.credential.provider.path property whose value is the JKS file.
Configure fs.s3a.secret.key and fs.s3a.access.key in the Ozone Client Advanced Configuration Snippet (Safety Valve) for the ozone-conf.xml and ozone-site.xml files so that Ozone replication policies use the authentication credentials in these files for OBS bucket replication.
OPSAPS-64501 - Hive 3 replication | CMHA | Failover doesn't go to completion status on its own
This behavior is observed when high availability is enabled for both source and target clusters’ Cloudera Manager instances.

When you click Actions > Start Failover for a successful Hive ACID table replication policy on the Replication Policies page, the policy job does not transition to the failover status for a long time. When you click Actions > Revert/Complete failover for the same replication policy, the policy transitions to the failover complete state and is then eventually disabled.

OPSAPS-64879 - Replication policies with empty name are not shown on the UI
Replication policies with an empty name do not appear on the Replication Policies page.
Provide a unique replication policy name during replication policy creation.
OPSAPS-65104
Replication Manager does not work as expected when you upgrade from Cloudera Manager version 7.6.7 CHF2 to any Cloudera Manager version between 7.7.1 and 7.7.1 CHF13. If there were any Hive replication policies before the upgrade, Replication Manager does not respond after the upgrade.
If you are using Hive replication policies in Cloudera Manager 7.6.7 CHF2 or higher, you must upgrade only to Cloudera Manager 7.7.1 CHF14 or higher.

Log4j-1x remediation

CDP Private Cloud Base 7.1.7 SP1 and CDP Private Cloud Base 7.1.8 use Reload4j and do not contain the Log4j 1.x CVEs, but the Reload4j files are named log4j-1.2.17-cloudera6.jar. This still sets off scanners, but the retained log4j prefix made for an easy transition for dependencies. In CDP Private Cloud Base 7.1.7 SP2, the log4j-1.2.17-cloudera6.jar files were renamed to reload4j-1.2.22.jar in the CDP parcel and should not set off scanners.

The following remaining JARs are related to Cloudera Manager; they are present in 7.7.1 but have been removed in 7.6.7:

/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/jars/log4j-1.2.17-cloudera6.jar

/opt/cloudera/cm/cloudera-navigator-audit-server/log4j-1.2.17-cloudera6.jar

/opt/cloudera/cm/cloudera-navigator-server/jars/log4j-1.2.17-cloudera6.jar

/opt/cloudera/cm/cloudera-scm-telepub/jars/log4j-1.2.17-cloudera6.jar

/opt/cloudera/cm/common_jars/log4j-1.2.17-cloudera6.5e6c49dac2e98e54fc9a8438826fa763.jar

/opt/cloudera/cm/lib/log4j-1.2.17-cloudera6.jar

Workaround: To have every Log4j 1.x JAR replaced with one named reload4j, you must be on the latest cumulative hotfix of CDP Private Cloud Base 7.1.8, or on CDP Private Cloud Base 7.1.9, and the associated Cloudera Manager versions. (CDP Private Cloud Base 7.1.7 SP1 uses Reload4j, but the file name still says log4j.)
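
To check whether any JARs with the Log4j 1.x naming remain on a host, you can search the Cloudera installation directories. This is a simple check, assuming the default /opt/cloudera installation paths shown above.

find /opt/cloudera -name 'log4j-1.2*.jar' 2>/dev/null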