Known issues and limitations in Cloudera Data Engineering on CDP Private Cloud Experiences

This page lists the current known issues and limitations that you might encounter while using the Cloudera Data Engineering (CDE) service on CDP Private Cloud Experiences.

HDFS is the default filesystem for all resource mounts
Workaround: For any job that uses local filesystem paths as arguments to a Spark job, explicitly specify file:// as the scheme. For example, if your job uses a mounted resource called test-resource.txt, the job definition would typically refer to it as /app/mount/test-resource.txt. In CDP Private Cloud, this must be specified as file:///app/mount/test-resource.txt.
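For example, a minimal sketch using the CDE CLI, assuming a hypothetical job named test-job that takes the resource path as a job argument:

# Pass the path with an explicit file:// scheme so that Spark resolves it
# on the local filesystem rather than on HDFS (the default).
cde job run --name test-job --arg file:///app/mount/test-resource.txt

Without the scheme, the path is resolved against HDFS and the job cannot find the file.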
The CDE CLI does not support credentials file or access keys
The CDE CLI in this release does not support using credentials files (credentials-file and credentials-profile options) or access keys (access-key-id and access-key-secret options).
Workaround: Use the CLI in insecure mode.
The CDE virtual cluster quota is hard-coded to 100 CPUs and 10240 GB memory
Each CDE virtual cluster is created with a hard-coded maximum of 100 CPU cores and 10240 GB of memory.
Workaround: None.
The default size of NFS volumes is hard-coded
The default external volume size is hard-coded to 100 GB for each virtual cluster and 20 GB for each component.
Workaround: None.
Submitting jobs from one virtual cluster to another is not supported
Workaround: Submit each job on the virtual cluster that contains it.
Apache Ozone is supported only for log files
Apache Ozone is supported only for log files. It is not supported for job configurations, resources, and so on.
Scheduling jobs with URL references does not work
Scheduling a job that specifies a URL reference does not work.
Workaround: Use a file reference, or create a resource and specify it in the job definition, as shown in the sketch below.
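For example, a minimal sketch using the CDE CLI, assuming a hypothetical resource named my-job-files and an application file my-app.py; the scheduling flags shown are assumptions and should be verified against the output of cde job create --help:

# Create a resource and upload the application file into it.
cde resource create --name my-job-files
cde resource upload --name my-job-files --local-path ./my-app.py

# Reference the uploaded file in the job definition instead of a URL.
cde job create --name my-scheduled-job --type spark \
  --mount-1-resource my-job-files \
  --application-file my-app.py \
  --schedule-enabled=true --cron-expression "0 0 * * *"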