February 27, 2026 - Hotfix
Review the fixed issues and behavior changes in this release of Cloudera Data Warehouse on cloud.
Fixed issues
Review the fixed issues in this release of the Cloudera Data Warehouse service on cloud.
- DWX-22470: Fluentd out-of-memory errors when shipping audit logs from the coordinator pod
- Previously, in Cloudera Data Warehouse, the audit log Fluentd sidecar in the coordinator pod
could encounter out-of-memory (OOM) errors. This occurred because the Fluentd configuration
used a chunk_limit_size of 256 MB, which matched the container memory limit of 256 MB. Under
heavy audit log loads, buffering and flushing data caused Fluentd's memory usage to exceed the
container limit, leading to OOM kills and interruptions in audit log shipping.
This issue is now resolved by optimizing the Fluentd configuration to reduce memory usage and improve reliability under heavy audit log loads.
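The shape of such a fix can be sketched as a Fluentd buffer section that keeps chunk sizes well below the container memory limit. The values below are illustrative assumptions, not the shipped configuration:

```
<buffer>
  @type file
  # Keep individual chunks far below the 256 MB container memory limit
  # (illustrative values; not the exact shipped configuration)
  chunk_limit_size 8MB
  total_limit_size 64MB
  flush_interval 30s
  overflow_action block
</buffer>
```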
- DWX-22700, DWX-22330: Multiple configuration updates during a single update operation
- Previously, the Cloudera Data Warehouse user interface could fail to save
multiple configuration settings for a Virtual Warehouse when several values were updated at
the same time in a single update operation. Some keys were excluded from the update, resulting
in incomplete or inconsistent configurations.
This issue is resolved by ensuring that all modified configuration flags are saved correctly in a single update operation.
- DWX-22347: Database Catalog initialization failures due to short informer cache sync timeout in Cloudera Data Warehouse
- Previously, in Cloudera Data Warehouse environments, Database Catalog
initialization could fail with errors such
as:
error getting metastore-sys-init-job job for '<catalog-name>': timed out waiting for caches to sync Error Code : 9999
This issue occurred in private AKS/EKS clusters and other environments with higher control-plane or CCM round-trip latency. The default informer cache sync timeout of 10 seconds was insufficient for the Kubernetes informers used by Cloudera Data Warehouse control plane components. As a result, cache synchronization could not complete within the allotted time, causing initialization jobs to fail and leaving the Database Catalog in an error state.
This issue is now resolved by increasing the informer cache sync timeout.
- DWX-22235: Impala profile upload fails on Azure due to missing home directory for diagnostic tool user
- Previously, in Cloudera Data Warehouse on Azure, Impala profile uploads failed
when using the diagnostic tools image. This was caused by a configuration issue where the Azure
CLI, used during the upload process, required a writable directory for its configuration but
none was available.
This issue is now resolved.
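As an illustration of this class of fix, the Azure CLI can be pointed at a writable configuration directory through its documented AZURE_CONFIG_DIR environment variable. The path and commands below are assumptions for illustration, not the shipped change:

```shell
# Give the Azure CLI a writable configuration directory
# (illustrative path; not the shipped change)
export AZURE_CONFIG_DIR=/tmp/azure-cli-config
mkdir -p "$AZURE_CONFIG_DIR"

# The profile upload step can then invoke the CLI with this environment, e.g.:
#   az storage blob upload --account-name <account> --container-name <container> \
#     --name <blob> --file <impala-profile-file>
echo "Azure CLI config dir: $AZURE_CONFIG_DIR"
```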
- DWX-22253: Cluster autoscaler scale-down-delay-after-add causing unstable scaling behavior in cloud deployments
- Previously, in Cloudera Data Warehouse on AWS and Azure, the
scale-down-delay-after-add parameter was set to 1 minute, causing the cluster autoscaler to scale down too quickly after scaling up. This led to a continuous loop of scaling up and down, resulting in unstable cluster capacity and inefficient resource usage.
This issue has been resolved by increasing the scale-down-delay-after-add parameter to 5 minutes, allowing the system to stabilize after scaling up and improving cluster stability.
- DWX-22606: Insufficient ephemeral storage for Impala coordinator heap dumps and logs
- Previously, Impala coordinator pods in Cloudera Data Warehouse requested only 24
GB of ephemeral storage, which was insufficient to handle heap dumps, query profiles, and logs.
This could lead to disk pressure or pod eviction on coordinator nodes.
This issue is now resolved by increasing the requested ephemeral storage for Impala coordinator pods, ensuring stable operation without disk-related issues.
- CDPD-97080: Memory leak in global admissiond for cancelled queued queries
- Previously, a memory leak occurred in the global admissiond when queries in the admission queue were cancelled due to backpressure. The system identified the cancellation but did not remove the query from the admission state map. This issue is now resolved.
- CDPD-45130: Truncating excessive length queries to prevent database indexing errors
- Previously, large or complex SQL statements, such as lengthy INSERT
queries, were indexed by the Query Processor. This resulted in increased load times for the Job
Browser. You can now configure query truncation by using the
hue.query-processor.query.max-length property in the Query Processor configuration under the dasConf section. By default, no truncation is performed to ensure backward compatibility.
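For illustration only, the property could be set as follows. The surrounding structure and the value are assumptions; only the property name and the dasConf section come from this release note:

```
"dasConf": {
  "hue.query-processor.query.max-length": "10000"
}
```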
Behavior changes
This release of the Cloudera Data Warehouse service on Cloudera on cloud has the following behavior changes:
Summary: Impala audit logs disabled by default
Before this release: Impala coordinator audit logging was
enabled by default. The flagfile for the Impala coordinator included the
audit_event_log_dir setting, which generated additional audit log files in the
/opt/impala/logs/audit/ directory. These logs were created in addition to the
existing Ranger audit logs and were periodically shipped to cloud storage.
After this release: The
audit_event_log_dir setting is removed from the Impala coordinator
flagfile. As a result, these additional Impala audit logs are now disabled by default. If
required, you can re-enable coordinator audit logging by manually adding the setting
back to the Impala coordinator flagfile.
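For illustration, re-enabling the coordinator audit logs could look like the following flagfile entry. The directory comes from the note above; treating this single flag as sufficient is an assumption:

```
-audit_event_log_dir=/opt/impala/logs/audit
```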
Summary: Global Admission Controller default behavior changed
Before this release: The Global Admission Controller feature was enabled by default, which sometimes caused admissiond to run out of memory (OOM) when processing large numbers of queries on Iceberg tables with many small files.
After this release: The Global Admission Controller feature is now disabled by default to improve cluster stability and prevent admissiond OOM errors. Administrators can explicitly enable or disable the feature on the Virtual Warehouse Details page of the Cloudera Data Warehouse UI. Before enabling this feature, it is recommended to perform regular table maintenance, such as compacting tables or partitions and optimizing the partition strategy, to prevent OOM issues caused by high metadata volume and small-file sprawl.
Summary: Increased admission control service memory
Before this release: The default memory limit for the admission control service queue was 50 MB.
After this release: The default memory limit for the admission control service queue is increased to 1 GB. This change helps prevent query rejections caused by memory pressure in the admission control service.
