Known issues and limitations in Cloudera Data Engineering on CDP Private Cloud

This page lists the current known issues and limitations that you might run into while using the Cloudera Data Engineering (CDE) service on CDP Private Cloud.

Changing LDAP configuration after installing CDE breaks authentication
If you change the LDAP configuration after installing CDE, as described in Configuring LDAP authentication for CDP Private Cloud, authentication no longer works.
Workaround: Re-install CDE after making any necessary changes to the LDAP configuration.
Gang scheduling is not supported
Gang scheduling is not currently supported for CDE on CDP Private Cloud.
HDFS is the default filesystem for all resource mounts
In CDP Private Cloud, HDFS is the default filesystem for all resource mounts, so unqualified paths in job arguments are resolved against HDFS rather than the local filesystem of the container.
Workaround: For any job that uses a local filesystem path as an argument to a Spark job, explicitly specify file:// as the scheme. For example, if your job uses a mounted resource called test-resource.txt, you would typically refer to it in the job definition as /app/mount/test-resource.txt. In CDP Private Cloud, specify it as file:///app/mount/test-resource.txt.
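As an illustration of the scheme-qualified form, Python's standard library can build the file:// URI from a mount path. This is a minimal sketch; the path /app/mount/test-resource.txt is the example resource path from above:

```python
from pathlib import Path

# Build a scheme-qualified URI for a file mounted from a CDE resource.
# Without the explicit file:// scheme, the unqualified path would be
# resolved against HDFS, the default filesystem for resource mounts.
local_path = Path("/app/mount/test-resource.txt")
uri = local_path.as_uri()
print(uri)  # file:///app/mount/test-resource.txt
```

Passing the scheme-qualified string (rather than the bare path) as the Spark job argument ensures the file is read from the local mount.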
The CDE CLI does not support credentials file or access keys
The CDE CLI in this release does not support credentials files (the credentials-file and credentials-profile options) or access keys (the access-key-id and access-key-secret options).
Workaround: Use the workload password authentication mechanism.
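For reference, workload password authentication is typically set up through the CDE CLI configuration file. The following is a minimal sketch of ~/.cde/config.yaml; the user name and endpoint URL are placeholders to be replaced with your own values:

```yaml
# ~/.cde/config.yaml -- placeholder values; substitute your CDP workload
# user name and your virtual cluster's Jobs API endpoint URL.
user: my-workload-user
vcluster-endpoint: https://<jobs-api-endpoint>/dex/api/v1
```

With this configuration in place, the CLI authenticates using the workload password (in typical setups, prompting for it interactively when it connects).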
The CDE virtual cluster quota is hard-coded to 100 CPUs and 10240 GB memory
Each CDE virtual cluster has a hard-coded maximum of 100 CPU cores and 10240 GB of memory.
Workaround: None.
Apache Ozone is supported only for log files
Apache Ozone is supported only for log files; it is not supported for job configurations, resources, and other artifacts.
Scheduling jobs with URL references does not work
Scheduling a job that specifies a URL reference does not work.
Workaround: Use a file reference, or create a resource and specify it instead.