Cumulative hotfix CDP Private Cloud Base 7.1.8.68-1 (Cumulative hotfix 27)

Learn more about the cumulative hotfix 27 for CDP 7.1.8. This cumulative hotfix was released on August 29, 2024.

Following is the list of fixes shipped for CDP Private Cloud Base version 7.1.8-1.cdh7.1.8.p68.56795344.

COMPX-16285: Backport YARN-6523
Optimized the system credentials sent in node heartbeat responses.
CDPD-73442: IMPALA-13313 Potential deadlock in ImpalaServer::ExpireQueries()
When idle_query_timeout was set in a session, new queries in that session stopped responding and failed. This deadlock occurred in long-running sessions and is now resolved.
CDPD-73423: Ranger - Upgrade Spring Framework to 6.1.12/6.0.23/5.3.39 due to CVE-2024-38808 and CVE-2024-38809
Upgraded the Spring Framework version to 5.3.39 due to CVE-2024-38808 and CVE-2024-38809.
CDPD-73065: Backport HIVE-25773 to CDH-7.1.8.x
Fixed an issue where column descriptors were not deleted even though the related partition was dropped.
CDPD-72703: Altering a Kudu table with per-range hash partitions might make the table unusable
Fixed an issue where altering a table with per-range hash bucketing by dropping or adding a particular number of columns made the table inaccessible for Kudu client applications.
CDPD-72621: HWC - Support default constraints while writing into a table
Added support for default constraints while writing into a table in Hive Warehouse Connector.
CDPD-72292, CDPD-72149: [Private Cloud Releases] Upgrade requireJS due to CVE-2024-38998 and CVE-2024-38999
Upgraded the RequireJS version due to CVE-2024-38998 and CVE-2024-38999.
CDPD-70357: [7.1.x] Do not call HMS to get list of pruned partitions when translated filter is empty
Minimized calls to the Hive Metastore (HMS) layer when retrieving the partition list by making only one call per table, regardless of how many times the table is referenced.
CDPD-69196, CDPD-69231: [718 CHF27] revokeAccess() behaves differently from secureRevokeAccess() after RANGER-4638
Ranger now supports creating policies for multiple columns in Grant / Revoke requests.
CDPD-66882: [718 CHF27] IMPALA-12554 Create only one Ranger policy for GRANT statement
Impala now creates only one Ranger policy for a GRANT statement when there are multiple columns specified. This reduces the number of policies created on the Ranger server.
CDPD-64940: [718 CHF27] RANGER-4585 Support multiple columns policy creation in ranger for Grant / Revoke request
Ranger policy creation for multiple columns is now supported for Grant / Revoke requests.
CDPD-63092: Avro - CVE-2023-39410
When deserializing untrusted data, it was possible for a reader to consume memory beyond the allowed constraints, leading to an out-of-memory error on the system. This issue affected Java applications using the Apache Avro Java SDK up to and including version 1.11.2. This issue is resolved by updating to Apache Avro version 1.11.3.
CDPD-58012: Spark SQL editor (Spark3 with Livy) does not reflect output when rendering array and map columns
Complex data types are now supported in the Spark SQL editor in Hue.
Common Vulnerabilities and Exposures (CVE) fixed in this release:
  • CVE-2024-38999 - RequireJS
  • CVE-2024-38998 - RequireJS
Table 1. Cloudera Runtime 7.1.8.68 (Cumulative Hotfix 27) download URL:
Repository Location: https://[[***USERNAME***]]:[[***PASSWORD***]]@archive.cloudera.com/p/cdh7/7.1.8.68/parcels/
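The credential placeholders in the repository URL are substituted with your Cloudera paywall username and password at download time. A minimal sketch of that substitution, assuming a POSIX shell (the default values shown for USERNAME and PASSWORD are hypothetical):

```shell
#!/bin/sh
# Build the authenticated parcel repository URL from credential
# variables. The defaults below are placeholders, not real credentials.
USERNAME="${USERNAME:-jdoe}"
PASSWORD="${PASSWORD:-secret}"
REPO_URL="https://${USERNAME}:${PASSWORD}@archive.cloudera.com/p/cdh7/7.1.8.68/parcels/"
echo "${REPO_URL}"
# The resulting URL can then be passed to a downloader, for example:
#   wget -r -np "${REPO_URL}"
```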

Technical Service Bulletin

TSB 2024-775: FileNotFoundException for Ozone Filesystem JAR during or after CDP installation or upgrade
A potential availability issue has been found with services that have an Ozone client dependency on the ozone-filesystem-hadoop3 fat JAR file when upgrading the Cloudera Data Platform (CDP) Private Cloud Base cluster from version 7.1.8 to 7.1.9. This issue may also affect service installations, runs, and restarts during or after the CDP Private Cloud Base installation or upgrade.
The following exception appears on the Cloudera Manager User Interface (UI) or in the log files of the respective service when an installation, upgrade, or other operation fails due to this issue: `java.io.FileNotFoundException: /path/to/ozone-filesystem-hadoop3-<version>.jar (No such file or directory)`.
The failure is caused by the broken symbolic link: /var/lib/hadoop-hdfs/ozone-filesystem-hadoop3.jar. This issue arises if the hdfs user already exists on the node before the Cloudera Runtime parcel activation. When the hdfs user already exists on the node, the Cloudera Manager agent skips the initialization related to Hadoop Distributed File System (HDFS), which includes creating the /var/lib/hadoop-hdfs directory. As the path is not created, the symbolic link cannot be created during the parcel activation process. This results in a series of broken symbolic links that point to the Ozone binaries.
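To check whether a node is affected, the symbolic links described above can be inspected directly. A minimal detection sketch, assuming a POSIX shell; the directory and link name are taken from this TSB, and the script only reports, never modifies, anything:

```shell
#!/bin/sh
# Report broken ozone-filesystem-hadoop3 symlinks in the directory
# named by the TSB (override by passing a directory as the first arg).
DIR="${1:-/var/lib/hadoop-hdfs}"
for link in "${DIR}"/ozone-filesystem-hadoop3*.jar; do
  # -L: the path is a symlink; ! -e: its target does not resolve
  if [ -L "${link}" ] && [ ! -e "${link}" ]; then
    echo "Broken symlink: ${link} -> $(readlink "${link}")"
  fi
done
```

If the loop prints nothing, the links on that node resolve correctly; for the supported remediation steps, refer to the Knowledge Article below.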
Knowledge article
For the latest update on this issue, see the corresponding Knowledge Article: TSB 2024-775: FileNotFoundException for the Ozone FS JAR during or after installation or upgrade