ML Runtimes

ML Runtimes are responsible for running data science workloads and for mediating access to the underlying cluster.

CML allows you to run any code via an interactive session, a scheduled job, or a deployed model or application. Data Scientists can use interactive sessions to explore data or develop a model. They can create jobs and schedule them to run at specified times, or productionize their work as a model that provides a REST endpoint or as an application that offers an interactive data dashboard for business users. All of these workloads run inside an ML Runtime container on top of Kubernetes.
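As a concrete sketch of the model workload: a CML model wraps an ordinary Python function that receives the request body as a dictionary and returns a JSON-serializable result; at deploy time you point CML at the file and the function name. The function and field names below are illustrative, not a fixed API.

```python
# model.py -- a minimal sketch of a function deployable as a CML model.
# CML serves the named function behind a REST endpoint and passes the
# parsed JSON request body in as `args`.
def predict(args):
    # "petal_length" is a hypothetical input field used for illustration.
    petal_length = float(args["petal_length"])

    # Toy rule standing in for a real trained model.
    species = "setosa" if petal_length < 2.5 else "versicolor"
    return {"species": species}
```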

Cloudera ML Runtimes are purpose-built to serve specific use cases. Each Runtime ships with a single editor (for example, Workbench or JupyterLab), a single language kernel (for example, Python 3.8 or R 4.0), and a tailored set of UNIX tools, utilities, language libraries, and packages.
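Because editor, kernel, and edition identify a Runtime, you can filter the Runtime catalog on those fields. The sketch below assumes the cmlapi Python client available inside a CML session; the filter keys and response fields may differ across CML releases.

```python
# A sketch using the cmlapi client from inside a CML session; the filter
# keys and response fields are assumptions -- verify them for your version.
import json

import cmlapi

client = cmlapi.default_client()

# Filter the Runtime catalog by kernel and editor; both values are
# illustrative and must match entries in your workspace's catalog.
response = client.list_runtimes(
    search_filter=json.dumps({"kernel": "Python 3.8", "editor": "JupyterLab"})
)
for runtime in response.runtimes:
    print(runtime.image_identifier)
```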

ML Runtimes have been open-sourced and are available in the cloudera/ml-runtimes GitHub repository. If you need to fully understand your Runtime environments or want to build a new Runtime from scratch, this repository contains the Dockerfiles that were used to build the ML Runtime container images.

A wide range of Runtimes is supported out of the box, covering the large majority of Data Science use cases; any special requirements can be satisfied by building a custom ML Runtime container image.
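A custom Runtime is typically built by extending one of the published images and adjusting its metadata, mirroring the Dockerfiles in the cloudera/ml-runtimes repository. The sketch below is an assumption-laden example: the base image tag, environment variable names, and label keys are modeled on the published Dockerfiles and should be checked against the repository before use.

```dockerfile
# Hypothetical custom Runtime: the base image tag, ENV names, and LABEL
# keys are assumptions -- confirm them in the cloudera/ml-runtimes repo.
FROM docker.repository.cloudera.com/cloudera/cdsw/ml-runtime-workbench-python3.8-standard:2021.06.1-b5

# Preinstall the library this Runtime is purpose-built for.
RUN pip install --no-cache-dir scikit-learn

# Override the edition metadata so the image is distinguishable in the
# Runtime catalog.
ENV ML_RUNTIME_EDITION="scikit-learn Edition" \
    ML_RUNTIME_SHORT_VERSION="1.0" \
    ML_RUNTIME_MAINTENANCE_VERSION="1" \
    ML_RUNTIME_FULL_VERSION="1.0.1" \
    ML_RUNTIME_DESCRIPTION="Workbench Python 3.8 with scikit-learn preinstalled"

LABEL com.cloudera.ml.runtime.edition=$ML_RUNTIME_EDITION \
      com.cloudera.ml.runtime.short-version=$ML_RUNTIME_SHORT_VERSION \
      com.cloudera.ml.runtime.maintenance-version=$ML_RUNTIME_MAINTENANCE_VERSION \
      com.cloudera.ml.runtime.full-version=$ML_RUNTIME_FULL_VERSION \
      com.cloudera.ml.runtime.description=$ML_RUNTIME_DESCRIPTION
```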

CML also supports quota management for CPU, GPU, and memory, limiting the resources each user can consume within the CML workspace.

Before ML Runtimes, CML offered similar functionality via Legacy Engines. These deprecated container images followed a monolithic architecture: all kernels (Python, R, Scala) and all seemingly useful packages and libraries were included in a single image.