Known issues in 7.1.9 CHF 1

You must be aware of the known issues and limitations, the areas of impact, and the workarounds in Cloudera Runtime 7.1.9 CHF 1.

CDPD-68951: In 7.1.9 CHF 2 and lower versions, the ozone sh key list <bucket_path> command displays the isFile flag in a key's metadata as false even when the key is a file. This issue is fixed in 7.1.9 CHF 3. However, the metadata of pre-existing (pre-upgrade) keys cannot be changed.
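For illustration, a minimal sketch of the reported behavior; the volume, bucket, and key names are hypothetical and the JSON output is abbreviated:

    $ ozone sh key list /vol1/bucket1
    {
      ...
      "name" : "dir1/file1.txt",
      "isFile" : false,
      ...
    }

Here isFile is reported as false although dir1/file1.txt is a file.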
None
When using the S3A committer fs.s3a.committer.name=directory with fs.s3a.committer.staging.conflict-mode=replace to write to FSO buckets, the client fails with the following error:
DIRECTORY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to find parent directory of xxxxxxxx
    at org.apache.hadoop.ozone.om.request.file.OMFileRequest.getParentID(OMFileRequest.java:1008)
    at org.apache.hadoop.ozone.om.request.file.OMFileRequest.getParentID(OMFileRequest.java:958)
    at org.apache.hadoop.ozone.om.request.file.OMFileRequest.getParentId(OMFileRequest.java:1038)
    at org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCompleteRequestWithFSO.getDBOzoneKey(S3MultipartUploadCompleteRequestWithFSO.java:114)
    at org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCompleteRequest.validateAndUpdateCache(S3MultipartUploadCompleteRequest.java:157)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:378)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:568)
    at org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$1(OzoneManagerStateMachine.java:363)
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
This occurs because S3A uses multipart upload (MPU) to commit job results in a batch. The staging committer's replace mode deletes the target directory before completing the MPU. The problem is that FSO does not create intermediate directories during MPU; it does so only for regular file, directory, and key requests.
Use fs.s3a.committer.name=magic for the affected versions.
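As a minimal sketch, one way to select the magic committer for a Spark job is to pass the property on the command line (it can equally be set in core-site.xml); the application name is a placeholder:

    spark-submit \
      --conf spark.hadoop.fs.s3a.committer.name=magic \
      your-application.jar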
CDPD-63665: The root cause is that the Netty native working directory, configured by org.apache.ratis.thirdparty.io.netty.native.workdir with the default value ${OZONE_HOME}/temp (which resolves to /opt/cloudera/parcels/CDH-7.1.9-1.cdh7.1.9.p1.47064069/lib/hadoop-ozone/ in the cluster), is not available. The possible reason is that the SCM instance does not have permission to create a directory under hadoop-ozone.
Add the configuration to the Java options, specifying a new location where the SCM process has write permission: -Dorg.apache.ratis.thirdparty.io.netty.native.workdir=${OZONE_HOME}/temp.
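A minimal sketch, assuming a hypothetical writable location /var/lib/hadoop-ozone/netty and that the SCM process runs as the hdfs user:

    # create a directory the SCM process can write to (path and user are assumptions)
    mkdir -p /var/lib/hadoop-ozone/netty
    chown hdfs:hdfs /var/lib/hadoop-ozone/netty
    # then append to the SCM Java options:
    -Dorg.apache.ratis.thirdparty.io.netty.native.workdir=/var/lib/hadoop-ozone/netty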
OPSAPS-69539: CDP Runtime 7.1.9, from the base release through CHF 3, does not support Oracle JDK 8u401 or OpenJDK 1.8.0_402 (8u402); some services fail to start. This can be a problem on RHEL 9.x, because 8u402 is the default OpenJDK 8 installed by the OS.
The workaround is to install an earlier version of JDK 8, for example, Oracle jdk-8u291 (1.8.0_291) or OpenJDK 8u292 (1.8.0_292).
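After installing, you can verify which JDK the host resolves; the version string shown is only an example:

    $ java -version
    openjdk version "1.8.0_292"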
CDPD-60839: The KnoxShell client does not work with JDK 17 due to an incompatible Groovy dependency.
The Knox service itself works; however, KnoxShell is broken.
If you are using the KnoxShell client, do not upgrade to 7.1.9 CHF 1.
CDPD-61524: Ozone Storage Container Manager fails to start when upgrading from CDP Private Cloud Base 7.1.6 to 7.1.9 CHF 1. Also, if you have upgraded from CDP Private Cloud Base 7.1.6 to 7.1.7 or 7.1.8 and then to 7.1.9, the upgrade fails.
None. Cloudera recommends that you contact Cloudera Support before upgrading to CDP Private Cloud Base 7.1.9.
CDPD-62254: Ozone is not supported on SLES15 with CHF1.
If your cluster has Ozone, Cloudera recommends that you do not upgrade to 7.1.9 CHF 1.
QAINFRA-18371: Conflict while installing libmysqlclient-devel on SLES 15
You may see an error such as the following while installing the mysql-devel and libmysqlclient-devel packages for setting up MariaDB as a backend database on SLES 15:
    File /usr/bin/mariadb_config from install of MariaDB-devel-<version>.x86_64 conflicts with file from install of libmariadb-devel-3.1.21-150000.3.33.3.x86_64 (SLES Module Server Applications Updates)
While installing the mysql-devel and libmysqlclient-devel packages on SLES 15, use the zypper --replacefiles switch, or manually enter yes at the interactive prompt that appears when the files are being overwritten.
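For example:

    zypper install --replacefiles mysql-devel libmysqlclient-devel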
CDPD-62464: The Java process called by the nav2atlas.sh tool fails on JDK 8
While running the nav2atlas.sh script on Oracle JDK 8, an error message is thrown and the script returns code 0 even on an unsuccessful run.
You must install JDK 11 on the host. Make sure not to add it to the default PATH or to the system-wide JAVA_HOME. In a shell, set JAVA_HOME to the JDK 11 location and run the nav2atlas.sh script, as shown in the sketch below.
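A minimal sketch, assuming JDK 11 was unpacked to the hypothetical location /opt/jdk-11:

    # set JAVA_HOME only for this shell session, not system-wide
    export JAVA_HOME=/opt/jdk-11
    $JAVA_HOME/bin/java -version    # confirm that version 11 is reported
    ./nav2atlas.sh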
CDPD-62834: Status of a deleted table is seen as ACTIVE in Atlas after the completion of the navigator2atlas migration process
The status of a deleted table is displayed as ACTIVE.
None
CDPD-62837: During the navigator2atlas process, the hive_storagedesc entity is incomplete in Atlas
For the hive_storagedesc entity, some of the attributes are not populated.
None
CDPD-62935: If you are using the Knox Port Mapping feature, CDP Private Cloud Runtime 7.1.9 GA is not compatible with Cloudera Manager 7.11.3.2.
If you use the Knox Port Mapping feature and want to upgrade Cloudera Manager to 7.11.3 CHF 1, you must also upgrade CDP Runtime to CDP 7.1.9 CHF 1.
COMPX-7493: The YARN tracking URL shown in the command line does not work when Knox is enabled
When Knox is configured for YARN, the tracking URL printed in the command line of a YARN application, such as spark-submit, shows the direct URL instead of the Knox Gateway URL.
Upgrade CDP Runtime to CDP 7.1.9 CHF 2, and then perform the following steps:
  1. Open the Cloudera Manager Admin Console and go to the Knox service.
  2. Click the Knox Gateway Home URL.
  3. Copy the YARN Resource Manager Web UI V2 URL from the Knox Gateway Home page.

    For example, https://knox-gateway.example.com:8443/gateway/cdp-proxy/yarnuiv2/

  4. Open the Cloudera Manager Admin Console and go to the YARN service.
  5. Click the Configuration tab and search for resourcemanager_config_safety_valve.
  6. In the Resource Manager Advanced Configuration Snippet (Safety Valve) for yarn-site.xml property, add the following entry, specifying its value using the YARN Resource Manager Web UI V2 URL copied earlier (see the example after these steps):
    Name: yarn.web-proxy.gateway.url
    Value: <YARN Resource Manager Web UI V2 URL>
  7. Enter a Reason for Change and then click Save Changes.
  8. Restart YARN.
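For example, using the sample Knox Gateway URL shown in step 3, the property would be:

    Name: yarn.web-proxy.gateway.url
    Value: https://knox-gateway.example.com:8443/gateway/cdp-proxy/yarnuiv2/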
OPSAPS-69481: Some Kafka Connect metrics missing from CM due to conflicting definitions
The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents CM from registering these metrics. It also results in SMM returning an error. The metrics also cannot be monitored in CM chart builder or queried using the CM API.
Contact Cloudera support for a workaround.

Technical Service Bulletins

TSB 2023-702: Potential wrong result for queries with date partition filter for clusters in GMT+ timezone
In Cloudera Data Platform (CDP) Private Cloud Base 7.1.7 Service Pack (SP) 2 Cumulative Hotfix (CHF) 11, a fix was introduced in Hive Metastore (HMS) to address a parsing issue with date strings. This fix caused a regression in Hive clusters where the HMS time zone is set ahead of GMT for the following combination of tables and queries: a table that is partitioned on a DATE column and a SELECT query on that table containing a WHERE clause filter on the same DATE column. For such queries, during the partition pruning phase, the date string would be converted to a date without timezone and compared with the partition value retrieved by HMS. This causes wrong results (0 rows) because the date values do not match.
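A sketch of the affected pattern; the table and column names are hypothetical:

    -- table partitioned on a DATE column
    CREATE TABLE sales (id INT) PARTITIONED BY (sale_date DATE);
    -- a SELECT with a WHERE filter on the same DATE column; during partition
    -- pruning the date string may be shifted by the HMS time zone and then
    -- match no partition values, returning 0 rows
    SELECT * FROM sales WHERE sale_date = '2023-10-02';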

The regression was identified in CDP Private Cloud Base 7.1.7 SP2 CHF14, but it exists in CHF11 through CHF16, as well as in certain versions of 7.1.8 and 7.1.9.

This issue does not affect clusters where the time zones are behind GMT. For example, if the time zone of the cluster is set to USA/Los Angeles, which is 8 hours behind GMT, the date ‘2023-10-02’ remains ‘2023-10-02’ after converting to GMT (adding 8 hours). On the other hand, using Asia/Hong Kong time as an example, which is 8 hours ahead of GMT, the same date becomes ‘2023-10-01’ after converting to GMT (subtracting 8 hours), which leads to the wrong results.

Upstream JIRA
HIVE-27760
Knowledge article
For the latest update on this issue, see the corresponding Knowledge article: TSB 2023-702: Potential wrong result for queries with date partition filter for clusters in GMT+ timezone