Cloudera on Cloud: July 2025 Release Summary

This release summary describes the major features introduced in the Cloudera Management Console, Cloudera Data Hub, and the data services of Cloudera on Cloud.

Cloudera AI

Cloudera AI 2.0.52-b27 introduces the following changes:

New Features / Improvements

Cloudera AI Platform

  • Added support for file storage replication in AWS Elastic File System (EFS), enhancing data redundancy and availability. For more information, see Configuring File Storage Replication on AWS.
  • In-place upgrades are now retriable, making the upgrade process more robust. For more information, see Upgrading Cloudera AI Workbenches.
  • Added support for the Poland Central and Italy North Azure regions.
  • Added support for Istio.
  • Added support for Customer-Managed Key (CMK) encryption in Cloudera AI Azure workbenches. For more information, see Enabling Customer Managed Keys on Microsoft Azure.
  • You can now change Persistent Volume Claim (PVC) sizes either on the Workbench Details page in the UI or through the CDP CLI. For more information, see Modifying workbench persistent volume size.
  • Added a liveness probe to the mlx crud application pod and implemented graceful shutdown to improve the stability and resilience of the application.
  • The system now automatically selects the Cloudera AI Registry if only one instance exists within a given tenant.
  • UI improvements make it easier to create Cloudera AI Registries with private clusters and to enable User-Defined Routing (UDR).
  • The UI now provides user-friendly guidance for the air-gapped Model Hub import functionality.

Cloudera AI Registry

  • Added CSI driver support for AI Registries, removing previous resource constraints. On Azure, you can now download any number of models in parallel without encountering resource limitations.
  • User-friendly and informative error messages are now displayed when users are unable to import a model to the AI Registry.
  • A new caching mechanism has been introduced in Model Hub, significantly reducing the time for pages to load.

Cloudera AI Inference service

  • Added support for the Nemotron Super 49B model.
  • Added support for the Riva ASR NIM (NVIDIA Inference Microservice), enabling advanced automatic speech recognition. The feature is compatible with Whisper and requires a 16-bit, mono, 16,000 Hz, uncompressed WAV file as input (a compatibility check is sketched after this list).
  • Added support for several new vLLM load formats: sharded_state, gguf, bitsandbytes, mistral, runai_streamer, and fastsafetensors. This broadens the model loading and quantization options available when serving models with vLLM.
  • Nemotron’s thinking mode is now user-configurable: you can explicitly activate this advanced reasoning capability by including "content": "detailed thinking on" in the system role of your prompt payload, giving you precise control over resource usage (see the request sketch after this list).
  • Implemented necessary validators for GPU instance types during the deployment of NVIDIA models to prevent misconfigurations.
  • Significantly improved the performance of the Cloudera AI Inference service: tokens are now cached, increasing UI responsiveness and decreasing network load.
  • The replica for endpoint logs and events is now automatically selected for any given model endpoint.
  • Added a Refresh button to various sub-sections of the model endpoint details page for easier data updates.
  • Added a force fetch button on the Model Hub UI for users to override cached values and ensure that the latest data is displayed.
  • Replaced the generic "Failed to Fetch" message with more user-friendly error messages when a user attempts to import a Hugging Face model that is not present in the Model Hub.
  • An alert box is now displayed in the UI to notify users when an Ingress Ready endpoint has a replica count of 0.
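
The WAV requirement for the Riva ASR NIM above can be checked before submitting audio. The following sketch uses only the Python standard library wave module to confirm that a file is uncompressed 16-bit PCM, mono, and 16,000 Hz; the file name is a placeholder, not part of this release.

    import wave

    def is_riva_asr_compatible(path: str) -> bool:
        """Return True if the file is an uncompressed 16-bit, mono, 16,000 Hz WAV."""
        # wave.open handles only uncompressed PCM WAV and raises wave.Error for
        # anything else, which covers the "uncompressed" part of the requirement.
        with wave.open(path, "rb") as wav:
            return (
                wav.getsampwidth() == 2        # 2 bytes per sample = 16-bit
                and wav.getnchannels() == 1    # mono
                and wav.getframerate() == 16000
            )

    print(is_riva_asr_compatible("speech_sample.wav"))  # placeholder file name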
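
For the Nemotron thinking mode, the sketch below shows how the "detailed thinking on" system message mentioned above might be sent, assuming the model is served behind an OpenAI-compatible chat completions endpoint. The endpoint URL, model ID, and token environment variable are placeholders, not values from this release.

    import os
    import requests

    BASE_URL = "https://<your-model-endpoint>/v1"    # placeholder endpoint
    API_TOKEN = os.environ["INFERENCE_API_TOKEN"]    # placeholder token variable

    payload = {
        "model": "nemotron-super-49b",               # placeholder model ID
        "messages": [
            # This system message activates the advanced reasoning mode.
            {"role": "system", "content": "detailed thinking on"},
            {"role": "user", "content": "Explain the trade-offs of client-side token caching."},
        ],
    }

    response = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Omitting the system message leaves the model in its default, non-thinking mode, so the extra resource usage of detailed reasoning is only incurred when you ask for it.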

ML Runtimes

  • Resource requests for several core Cloudera AI services have been increased. This change enhances performance and stability, providing a smoother experience without requiring any user intervention.

For more information about the Known issues, Fixed issues and Behavioral changes, see the Cloudera AI Release Notes.

Cloudera Data Catalog

Cloudera Data Catalog 3.1.2 introduces the following change:

You can now approve tags recommended by the Data Compliance Profiler before they are applied to your assets and synced to Apache Atlas. This lets you review tag suggestions and correct mistakenly applied tags, which would otherwise lead to unexpected changes in tag-based Apache Ranger policies.

For more information about the Known issues, Fixed issues and Behavioral changes, see the Cloudera Data Catalog Release Notes.

Cloudera Data Engineering

Cloudera Data Engineering 1.24.1-H1 is a hotfix release: it introduces no new features but delivers a set of fixes. For more information, see the Cloudera Data Engineering Release Notes.

Cloudera Data Flow

Cloudera Data Flow 2.10.0-h3-b3 delivers various fixes that optimize Flow Designer performance and stability. It also fixes an issue that caused disabled processors to start after a flow was stopped and then started. For more information, see the Cloudera Data Flow Release Notes.

Cloudera Management Console

The latest version of Cloudera Management Console introduces the following changes:

Support for Spain Central Azure region
The Spain Central Azure region is now supported. You can register Azure environments and provision Cloudera Data Hub clusters in this region. See the updated Supported Azure regions.

Secure binds for LDAP in FreeIPA
Secure authentication for LDAP (LDAPS, port 636) is now available by default on FreeIPA instances for new and existing environments. This means that secure binds are automatically used by FreeIPA, Data Lake, and Cloudera Data Hub within a Cloudera environment. For more information, see the Secure binds for LDAP in FreeIPA documentation.
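
As a quick way to verify that secure binds work against a FreeIPA instance, the following sketch attempts an LDAPS bind on port 636 using the third-party ldap3 Python package. The host name, bind DN, and password are placeholders and are not part of this release.

    import ssl
    from ldap3 import Connection, Server, Tls

    # Placeholders: substitute your FreeIPA host and a real bind DN/password.
    tls = Tls(validate=ssl.CERT_REQUIRED)  # verify the server certificate
    server = Server("freeipa.example.internal", port=636, use_ssl=True, tls=tls)
    conn = Connection(
        server,
        user="uid=binduser,cn=users,cn=accounts,dc=example,dc=internal",
        password="********",
    )

    if conn.bind():  # True when the secure bind on port 636 succeeds
        print("LDAPS bind succeeded")
        conn.unbind()
    else:
        print("LDAPS bind failed:", conn.result)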