November 18, 2022

This release of Cloudera Data Engineering (CDE) on CDP Private Cloud 1.4.1 includes the following features:

Custom NFS Storage Class

You can now specify the name of a custom NFS storage class when creating a CDE service. By default, CDE uses the platform's built-in storage class, discovered using the provisioner.
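
For example, here is a minimal sketch of how you might list the storage class names available on your cluster so you can pick the custom NFS one to supply; it assumes the Kubernetes Python client is installed and your kubeconfig points at the cluster hosting CDE, neither of which is part of CDE itself:

```python
# A minimal sketch, assuming the `kubernetes` Python client is installed
# and your kubeconfig points at the cluster hosting CDE.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# List storage classes so you can pick the custom NFS one to supply
# when creating the CDE service.
for sc in client.StorageV1Api().list_storage_class().items:
    print(sc.metadata.name, sc.provisioner)
```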

For more information, see Adding a Cloudera Data Engineering service.

Limit Resources (Technical Preview)

You can now set the maximum number of CPU cores and the maximum memory, in gigabytes, that a CDE service and its virtual clusters can use. A cluster can use resources up to the set capacity to run the submitted Spark applications.

For more information, see Adding a Cloudera Data Engineering service and Creating virtual clusters. For information about configuring resource pool and capacity, see Managing cluster resources using Quota Management (Technical Preview).

Workload Secrets

CDE now provides a secure way to create and store workload secrets for CDE Spark jobs. This is a more secure alternative to storing credentials in plain text embedded in your application or job configuration.
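
As an illustration only, the sketch below creates a secret through the CDE API with Python; the base URL, endpoint path, payload shape, and token handling are hypothetical placeholders, so consult the linked API reference for the actual contract:

```python
# Hypothetical sketch of creating a workload secret via the CDE API.
# BASE_URL, the /credentials path, the payload shape, and the token
# are placeholders; see the CDE API reference for the real contract.
import requests

BASE_URL = "https://<cde-virtual-cluster-endpoint>"  # placeholder
TOKEN = "<cde-access-token>"                         # placeholder

resp = requests.post(
    f"{BASE_URL}/api/v1/credentials",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "db-creds",                      # example secret name
        "type": "workload-credential",           # assumed type label
        "workloadCredential": {"secrets": {"dbPassword": "example"}},
    },
)
resp.raise_for_status()
```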

For more information, see Managing workload secrets with CDE Spark Jobs using the API.

Spark History Server

You can now use the Spark history server to troubleshoot Spark jobs. The Spark history server is a monitoring tool that displays information about completed Spark applications. It provides debugging information such as Spark configurations, DAG execution, driver and executor resource utilization, application logs, and job-, stage-, and task-level details.
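
For instance, completed applications can also be listed programmatically through the history server's standard Apache Spark monitoring REST API; in this sketch the host is a placeholder for your history server URL, and any authentication your deployment requires is omitted:

```python
# Query the Spark history server's standard monitoring REST API
# (/api/v1/applications is part of Apache Spark itself); the host
# below is a placeholder for your history server URL.
import requests

HISTORY_SERVER = "https://<spark-history-server-host>"  # placeholder

apps = requests.get(f"{HISTORY_SERVER}/api/v1/applications").json()
for app in apps:
    # Each entry includes the application id and name, among other fields.
    print(app["id"], app["name"])
```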

For more information, see Using Spark history server to troubleshoot Spark jobs.

Loading example jobs and sample data in new virtual clusters

CDE now provides an option to load in-product example jobs and sample data into new virtual clusters, making onboarding and learning easier for new customers.

For more information, see CDE example jobs and sample data.

Set default values for variables in the CDE job specification

Using the --default-variable flag, you can now set default values for variables in a job specification; the defaults replace matching strings in job values.
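
For example, here is a minimal sketch that invokes the CLI from Python; the job name, application file, and variable are made up, and the exact placeholder syntax that --default-variable substitutes inside job values is described on the linked page:

```python
# Sketch of supplying a default variable at job creation time via the
# CDE CLI. The job name, application file, and variable are examples;
# see the linked CLI documentation for the placeholder syntax used
# inside job values.
import subprocess

subprocess.run(
    [
        "cde", "job", "create",
        "--name", "etl-job",                     # example job name
        "--type", "spark",
        "--application-file", "etl.py",          # example application file
        "--default-variable", "env=production",  # default value for `env`
    ],
    check=True,
)
```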

For more information, see Creating and updating Apache Spark jobs using the CLI.

Job email alerts

You can now configure email notifications for SLA misses and job failures when submitting a job.
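
In an Airflow pipeline, for instance, standard Airflow alerting settings cover the same conditions; the minimal sketch below uses them, with the DAG id, email address, and SLA window as examples (SMTP must be configured for the alerts to be delivered):

```python
# Minimal Airflow DAG sketch wiring up email alerts for task failures
# and SLA misses; the DAG id, email address, and SLA window are examples.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "email": ["oncall@example.com"],  # example address
    "email_on_failure": True,         # alert when a task fails
    "sla": timedelta(hours=1),        # alert when a task misses this SLA
}

with DAG(
    dag_id="cde-job-alerts-example",
    start_date=datetime(2022, 11, 1),
    schedule_interval="@daily",
    default_args=default_args,
) as dag:
    BashOperator(task_id="run_step", bash_command="echo running")
```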

For more information, see Creating jobs and Automating data pipelines using Apache Airflow.