Cloudera Data Engineering service
Cloudera Data Engineering (CDE) is a serverless service for Cloudera Data Platform that allows you to submit jobs to auto-scaling virtual clusters. CDE enables you to spend more time on your applications and less time on infrastructure.
Cloudera Data Engineering allows you to create, manage, and schedule Apache Spark jobs without the overhead of creating and maintaining Spark clusters. With Cloudera Data Engineering, you define virtual clusters with a range of CPU and memory resources, and the cluster scales up and down as needed to execute your Spark workloads, helping to control your cloud costs.
The CDE service involves several components:
- Environment
- A logical subset of your cloud provider account, including a specific virtual network. For more information, see Environments.
- CDE Service
- The long-running Kubernetes cluster and services that manage the virtual clusters. The CDE service must be enabled on an environment before you can create any virtual clusters.
- Virtual Cluster
- An individual auto-scaling cluster with defined CPU and memory ranges. Virtual Clusters in CDE can be created and deleted on demand. Jobs are associated with clusters.
- Job
- Application code along with defined configurations and resources. Jobs can be run on demand or scheduled. An individual job execution is called a job run.
- Resource
- A defined collection of files, such as a Python file or application JAR, dependencies, and any other reference files required for a job.
- Job run
- A single execution of a job.
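The job, resource, and job run concepts above map onto the CDE command-line workflow. The following is a minimal sketch, assuming the CDE CLI is installed and already configured against a virtual cluster endpoint; the names `my-resource`, `spark-app.py`, and `my-job` are illustrative placeholders, not part of the product:

```shell
# Create a resource and upload an application file into it
# (resource and file names are placeholders).
cde resource create --name my-resource
cde resource upload --name my-resource --local-path spark-app.py

# Define a Spark job that mounts the resource, then trigger a job run.
cde job create --name my-job --type spark \
    --application-file spark-app.py --mount-1-resource my-resource
cde job run --name my-job
```

Each invocation of `cde job run` produces a separate job run, which can be inspected and troubleshot independently of the job definition.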
The CDE service differs from a Data Engineering Data Hub cluster in several ways, including the following:
| Feature | Cloudera Data Engineering | Data Hub DE Template |
| --- | --- | --- |
| Compute engines | Apache Spark | Apache Spark, Apache Hive |
| Deployment | Kubernetes | Virtual machines (cloud provider) |
| Resource management | Apache YuniKorn, Kubernetes | YARN |
| Troubleshooting | CDE deep analysis, Spark History Server | Spark History Server |
| Portability | Public/private cloud | Public cloud |
| Job submission | Managed API | Apache Livy |