CML software requirements for Private Cloud

To launch the Cloudera Machine Learning service, the Private Cloud host must meet several software requirements. Review the following CML-specific software requirements and, if necessary, contact your Administrator to make sure they are satisfied:
  1. If you are using OpenShift, the installed OpenShift Container Platform must be version 4.10, or version 4.8 if you are upgrading from 1.4.x to 1.5.2 rather than performing a fresh installation. For ECS, refer to the Hardware and Software Requirements section in CDP Private Cloud Experiences Installation Hardware Requirements and Managing a Private Cloud Experience Cluster 1.5.1.
  2. CML assumes it has cluster-admin privileges on the cluster.
  3. Storage:
    1. Persistent volume block storage per ML Workspace: 600 GB minimum, 4.5 TB recommended.
    2. 1 TB of external NFS space is recommended per workspace (depending on user files). If using embedded NFS, allocate 1 TB per workspace in addition to the 600 GB minimum (or 4.5 TB recommended) block storage.
    3. NFS storage must be routable (accessible) from all pods running in the cluster.
    4. For monitoring, recommended volume size is 60 GB.
  4. On OCP, CephFS is used as the underlying storage provisioner for any new internal workspace on PVC 1.5.1. A storage class named ocs-storagecluster-cephfs with the csi driver set to "" must exist in the cluster for new internal workspaces to be provisioned.
  5. A block storage class must be marked as default in the cluster. This may be rook-ceph-block, Portworx, or another storage system. Confirm the storage class by listing the storage classes (run oc get sc) in the cluster, and check that one of them is marked default.
  6. If external NFS is used, the NFS directory must be owned by the cdsw user, with permissions set accordingly. For details, see Using an External NFS Server in the Related information section at the bottom of this page.
  7. If CML needs access to a database on the CDP Private Cloud Base cluster, then the user must be authenticated using Kerberos and must have Ranger policies set up to allow read/write operations to the default (or other specified) database.
  8. Ensure that Kerberos is enabled for all services in the cluster. Custom Kerberos principals are not currently supported. For more information, see Enabling Kerberos for authentication.
  9. Forward and reverse DNS must be working.
  10. DNS lookups to sub-domains and the ML Workspace itself should work.
  11. In DNS, wildcard subdomains must be set to resolve to the master domain. The TLS certificate (if TLS is used) must also include the wildcard subdomains. When a session or job is started, an engine is created for it, and the engine is assigned to a random, unique subdomain.
  12. The external load balancer server timeout must be set to 5 minutes. Without this, creating a project in an ML workspace with git clone or through the API may result in API timeout errors. For workarounds, see Known Issue DSE-11837.
  13. If you intend to access a workspace over https, see Deploy an ML Workspace with Support for TLS.
  14. For non-TLS ML workspaces, WebSockets must be allowed on port 80 of the external load balancer.
  15. Only a TLS-enabled custom Docker Registry is supported. Ensure that you use a TLS certificate to secure the custom Docker Registry. The TLS certificate can be self-signed, or signed by a private or public trusted Certificate Authority (CA).
  16. On OpenShift, due to a Red Hat issue with OpenShift Container Platform 4.3.x, the image registry cluster operator configuration must be set to Managed.
  17. Check if storage is set up in the cluster image registry operator. See Known Issues DSE-12778 for further information.
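For the default block storage class requirement in step 5, the default is controlled by a standard Kubernetes annotation. The following is a hypothetical sketch for a rook-ceph-backed class; the class name and provisioner vary by storage backend, so verify both against your environment:

```yaml
# Hypothetical example of a block storage class marked as the cluster default.
# The provisioner shown assumes the rook-ceph operator in its default namespace.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rook-ceph.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Running oc get sc should then show this class with a (default) marker next to its name.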
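For the external NFS requirement in step 6, preparing the export typically means creating the directory, setting its ownership to the cdsw user, and exporting it. The sketch below is hypothetical: /srv/cml is a placeholder path, the subnet is a placeholder, and 8536 is the UID/GID conventionally associated with the cdsw user, so verify all three against your environment and the Using an External NFS Server documentation:

```
# Hypothetical sketch of preparing an external NFS export (run as root on the NFS server).
mkdir -p /srv/cml/workspace1
chown -R 8536:8536 /srv/cml/workspace1    # 8536 = assumed cdsw UID/GID; verify first

# Example /etc/exports entry (placeholder subnet), followed by `exportfs -ra`:
#   /srv/cml/workspace1  10.0.0.0/8(rw,sync,no_root_squash,no_subtree_check)
```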
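The 5-minute load balancer timeout from step 12 maps to different settings depending on the load balancer. As one hypothetical illustration, if your external load balancer were HAProxy, the relevant knob is the backend server timeout; adjust to your load balancer's equivalent:

```
# Hypothetical HAProxy backend fragment: raise the server timeout to 5 minutes
# so long-running requests (e.g. git clone during project creation) do not
# trigger API timeout errors. Backend name and server address are placeholders.
backend cml_workspace
    timeout server 300s
    server ws1 10.0.0.10:443 check
```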
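For steps 16 and 17, the image registry cluster operator's management state can be inspected and, if needed, set to Managed with oc. This is a sketch of the standard OpenShift approach; confirm it against your OpenShift version's documentation before running it:

```
# Check the current management state of the image registry operator.
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.managementState}'

# Set it to Managed if it is not already.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"managementState":"Managed"}}'

# Verify that storage is configured for the operator (step 17).
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.storage}'
```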
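The forward and reverse DNS requirement in steps 9 and 10 can be spot-checked from any host with cluster DNS visibility. A minimal sketch, using a hypothetical helper name:

```python
import socket

def dns_round_trip(hostname):
    """Verify forward and reverse DNS resolution for a hostname.

    Returns True when the hostname resolves to an IP address (forward lookup)
    and that IP address resolves back via a reverse (PTR) lookup.
    """
    try:
        ip = socket.gethostbyname(hostname)   # forward lookup
        socket.gethostbyaddr(ip)              # reverse lookup
    except OSError:                           # covers gaierror and herror
        return False
    return True
```

Run this against the ML Workspace hostname and a representative wildcard subdomain; both should return True before you provision a workspace.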
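For the TLS-enabled custom Docker Registry requirement in step 15, a self-signed certificate is one acceptable option. A minimal sketch with openssl (requires OpenSSL 1.1.1 or later for -addext; the registry hostname is a placeholder):

```shell
# Generate a self-signed TLS certificate for a custom Docker Registry.
# registry.example.com is a placeholder -- substitute your registry's hostname.
REGISTRY_HOST=registry.example.com

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=${REGISTRY_HOST}" \
  -addext "subjectAltName=DNS:${REGISTRY_HOST}"

# Inspect the certificate to confirm the subject is as expected.
openssl x509 -in registry.crt -noout -subject
```

If you instead use a private CA, the CA certificate must be trusted wherever the registry is pulled from.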

For more information on requirements, see CDP Private Cloud Base Installation Guide.