Key Differences between Cloudera Machine Learning and Cloudera Data Science Workbench

For existing Cloudera Data Science Workbench users, this topic will help highlight some key differences between Cloudera Data Science Workbench and its cloud-native counterpart, Cloudera Machine Learning.

How is Cloudera Machine Learning (CML) related to Cloudera Data Science Workbench (CDSW)?

CML expands the end-to-end workflow of Cloudera Data Science Workbench (CDSW) with cloud-native benefits like rapid provisioning, elastic autoscaling, distributed dependency isolation, and distributed GPU training.

CML runs its own distributed compute workloads natively, without requiring a separate CDH cluster for scale-out compute. It is designed to run on CDP in existing Kubernetes environments, such as managed cloud Kubernetes services (EKS, AKS, GKE) or Red Hat OpenShift, reducing operational costs for some customers while delivering multi-cloud portability.

Both products help data engineers and data science teams be more productive on shared data and compute, with strong security and governance. The two products share an extensive code base.

There is one primary difference:
  • CDSW extends an existing CDH cluster by running on gateway hosts and pushing distributed compute workloads to the cluster. CDSW requires and supports a single CDH cluster for its distributed compute, including Apache Spark.

  • In contrast, CML is self-contained and manages its own distributed compute, natively running workloads, including but not limited to Apache Spark, in containers on Kubernetes.

    Note: It can still connect to an existing cluster to leverage its distributed compute, data, or metadata (SDX).
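To illustrate the Spark-on-Kubernetes model that CML uses in place of Spark-on-YARN, the sketch below shows a standard upstream `spark-submit` invocation against a Kubernetes API server. CML normally handles this wiring itself; the API server address, container image, and script path here are placeholders, not CML-specific values.

```shell
# Hypothetical example: submit a PySpark job to a Kubernetes cluster.
# Spark launches the driver and executors as pods on the cluster.
spark-submit \
  --master k8s://https://my-k8s-api-server:6443 \
  --deploy-mode cluster \
  --name pi-estimate \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=my-registry/spark-py:v3 \
  local:///opt/spark/examples/src/main/python/pi.py
```

The `local://` scheme tells Spark the application file is already present inside the container image, which is the common pattern for containerized Spark jobs.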

Table 1. Key Differences

Architecture
  • CDSW: Requires a CDH or HDP cluster and runs on one or more dedicated gateway hosts on the cluster. There is a notion of one master host and multiple worker hosts.
  • CML: Self-contained; does not require an attached CDH/HDP cluster. No designated master and worker hosts; all nodes are ephemeral.

Security
  • CDSW: Kerberos authentication integrated via the CDH/HDP cluster. External authentication via LDAP/SAML.
  • CML: Centralised identity management using FreeIPA via the Cloudera Data Platform (CDP).

App Storage
  • CDSW: Project files, the internal PostgreSQL database, and Livelog are all stored persistently on the master host.
  • CML: All required persistent storage is on cloud-managed block storage, NFS, and a relational data store. For AWS, for example, this is managed via EFS.

Compute
  • CDSW: Python/R/Scala workloads run on the gateway hosts of the cluster. CDSW pushes distributed compute workloads, such as Spark-on-YARN, to the CDH/HDP cluster. No autoscaling.
  • CML: Python/R/Scala workloads run on the CDP/cloud-provider-managed K8s cluster. Spark-on-YARN is not supported; Spark-on-K8s is used instead. Workloads run on a dedicated K8s cluster provisioned within the customer environment. Autoscaling is provided via your cloud service provider: Kubernetes node-level autoscaling expands or contracts the cluster based on demand.

Packaging
  • CDSW: Available as a downloadable RPM and CSD. Spark is packaged with CDH.
  • CML: Available as a managed service on CDP. Spark on K8s is packaged with CML, with no dependency on an external cluster.

Data Access
  • CDSW: Data usually resides on the attached CDH/HDP cluster in HDFS, Hive, HBase, and so on.
  • CML: Data can reside on object storage such as S3, or on any pre-existing workload clusters registered with CDP.
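The node-level autoscaling described above is typically expressed as minimum/maximum bounds on a cloud node group, which the Kubernetes cluster autoscaler then grows or shrinks based on pending pods. A hypothetical eksctl fragment for AWS EKS is sketched below; the cluster name, region, instance type, and sizes are illustrative, not CML defaults.

```yaml
# Illustrative EKS node group with autoscaling bounds (not a CML-managed config).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ml-workspace-demo
  region: us-west-2
managedNodeGroups:
  - name: cpu-workers
    instanceType: m5.2xlarge
    minSize: 1      # cluster contracts to this floor when idle
    maxSize: 10     # cluster expands up to this ceiling under load
```

In a managed service such as CML, equivalent bounds are set through the provisioning UI rather than hand-written YAML.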