Known Issues and Limitations

You might run into some known issues while using Cloudera Machine Learning.

DSE-14882 CML endpoint connectivity from DataHub and Cloudera Data Engineering

When CDP services (such as DataHub or Cloudera Data Engineering) connect to CML services on a workspace provisioned in a public subnet, traffic is routed out of the VPC first and then routed back in. On Private Cloud CML, traffic is not routed externally.

DSE-14652 Chrome browser warning when accessing ML workspace

Some browsers (Chrome 86 and higher) may display the following message when a user attempts to access a workspace that was configured without TLS:
The information you're about to submit is not secure.

Workaround: Accept and bypass the browser warning.

Explanation: Chrome 86 and higher displays this warning when a form submits to or redirects to an http:// address, which happens when SSO is used to connect to a workspace that was configured without TLS. The workspace remains fully functional in all respects if you accept and bypass the browser warning. It is not possible to enable TLS on a workspace that was created without TLS.

OPSAPS-59476 Ranger and RAZ enabled environments

When using Ranger- and RAZ-enabled environments in public cloud CML, run the following commands in the session terminal or inline in your user code before performing any other operations:
sed -i "s/http:/https:/g" /etc/hadoop/conf/core-site.xml
sed -i "s/http:/https:/g" /etc/hive/conf/core-site.xml
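
For example, you can then confirm that no plain http: endpoints remain in either file (a minimal check, assuming the default configuration paths shown above):

grep -n "http:" /etc/hadoop/conf/core-site.xml /etc/hive/conf/core-site.xml || echo "No http: endpoints remain"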

DSE-14606 Orphan EBS block volumes after deletion of ML workspace

If a CML workspace on AWS is deleted using the February 3, 2021 release (1.15.0-b72) of the CML Control Plane, orphan EBS block volumes may be left behind. The state of any orphan volumes appears as Available in the EC2 console (you can see the list of volumes by navigating to Services > EC2 > Volumes). The volumes have names similar to kubernetes-dynamic-pvc-<unique ID>, and are tagged with kubernetes.io/created-for/pvc/namespace: mlx. These orphan EBS volumes must be deleted to prevent cloud resource leaks.
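
The orphan volumes can also be located from the command line with the AWS CLI (a sketch, assuming the CLI is configured for the account and region that hosted the workspace; verify each volume before deleting it):

# List unattached (Available) volumes created for the mlx namespace
aws ec2 describe-volumes \
  --filters Name=status,Values=available Name=tag:kubernetes.io/created-for/pvc/namespace,Values=mlx \
  --query 'Volumes[].VolumeId' --output text

# After confirming a volume belonged to the deleted workspace, delete it
aws ec2 delete-volume --volume-id <volume-id>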

DSE-14519, DSE-14077 Upgrade not supported on NFS v4.x

Upgrading ML workspaces on Azure that are configured with external NFS services using the NFS v4.x protocol is currently not supported.

DSE-14355 Applications may not start after Kubernetes upgrade

In some cases, applications that were running before a Kubernetes upgrade may fail to start after Kubernetes is upgraded. Users with access to such applications must restart them manually.

DSE-13937 Transparent proxy supported only on AWS

Cloudera Machine Learning, when used on AWS public cloud, supports transparent proxies. Transparent proxy enables CML to proxy web requests without requiring any particular browser setup. In normal operation, CML requires the ability to reach several external domains. For more information, see: Outbound network access destinations.

DSE-13928 Cannot restrict application access

The authorization used by applications might not be up to date. For example, if a user is removed from a project in CDSW or CML (and therefore loses read access to the project and its applications), that user might still have access to an application if they accessed it before their access was revoked.

Workaround: When updating the permissions of a project that has applications, restart those applications to ensure that they use up-to-date authorization.

DSE-13741 Jupyter Notebook sessions do not time out

Jupyter Notebook sessions in legacy engine:8 through engine:13 do not exit after IDLE_MAXIMUM_MINUTES of inactivity. They run until SESSION_MAXIMUM_MINUTES is reached (seven days by default).

Workaround

You can change the configuration of your cluster to apply the fix for this issue. Change the editor command for Jupyter Notebook in every engine that uses it to the following:

# Convert the configured idle timeout from minutes to seconds
NOTEBOOK_TIMEOUT_SECONDS=$(python3 -c "print(${IDLE_MAXIMUM_MINUTES}*60)")
/usr/local/bin/jupyter notebook \
  --no-browser --ip=127.0.0.1 --port=${CDSW_APP_PORT} \
  --NotebookApp.token= \
  --NotebookApp.allow_remote_access=True \
  --NotebookApp.quit_button=False \
  --log-level=ERROR \
  --NotebookApp.shutdown_no_activity_timeout=300 \
  --MappingKernelManager.cull_idle_timeout=${NOTEBOOK_TIMEOUT_SECONDS} \
  --TerminalManager.cull_inactive_timeout=${NOTEBOOK_TIMEOUT_SECONDS} \
  --MappingKernelManager.cull_interval=60 \
  --TerminalManager.cull_interval=60 \
  --MappingKernelManager.cull_connected=True
This does the following:
  • Kills each running notebook after IDLE_MAXIMUM_MINUTES of inactivity
  • Kills the CDSW/CML session in which Jupyter is running after 5 minutes with no notebooks

Cloudera Bug: DSE-13741, DSE-6651

DSE-13629 Play button missing in CML sessions with ML Runtimes

For ML Runtimes sessions, the Play button might not display.

Workaround:

You can still run the session code by selecting Run > Run All or Run > Run Lines when the Play button is not displayed in the UI.

DSE-13573 Scheduled jobs do not run after switching over to ML Runtimes, and applications cannot be restarted

ML Runtimes is a new feature in the current release. Although you can now change your existing projects from Engine to ML Runtimes, we do not currently recommend migrating existing projects.

Applications and jobs created with engines might be impacted once their project is changed to use ML Runtimes, in the following ways:
  • You will be forced to change to ML Runtimes if you try to update the related Editor/Kernel settings of jobs, models, experiments, or applications.
  • Applications cannot be restarted from the UI in a migrated project unless the ML Runtime settings are updated for that application.

DSE-12065: Disable file upload and download

You cannot disable file upload and download when using the Jupyter Notebook.

DSE-8834: Remove Workspace operation fails

Remove Workspace operation fails if workspace creation is still in progress.

DSE-8407: CML does not support modifying CPU/GPU scaling limits on provisioned ML workspaces

When provisioning a workspace, CML currently supports a maximum of 30 nodes of each type: CPUs and GPUs. Currently, CML does not provide a way to increase this limit for existing workspaces.

Workaround:
  1. Log in to the CDP web interface at https://console.us-west-1.cdp.cloudera.com using your corporate credentials or any other credentials that you received from your CDP administrator.
  2. Click ML Workspaces.
  3. Select the workspace whose limits you want to modify and go to its Details page.
  4. Copy the Liftie Cluster ID of the workspace. It is in the format liftie-abcdefgh.
  5. Log in to the AWS EC2 console and click Auto Scaling Groups.
  6. Paste the Liftie Cluster ID into the search filter box and press Enter.
  7. Click the auto-scaling group with a name like liftie-abcdefgh-ml-pqrstuv-xyz-cpu-workers-0-NodeGroup. Note the 'cpu-workers' segment in the middle of the name.
  8. On the Details page of this auto-scaling group, click Edit.
  9. Set Max capacity to the desired value and click Save.

Note that CML does not support lowering the maximum number of instances of an auto-scaling group due to certain limitations in AWS.
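
The same change can also be scripted with the AWS CLI (a sketch, assuming the CLI is configured for the account and region hosting the workspace; the Liftie Cluster ID, group name, and capacity value below are placeholders based on the example above):

# Find the cpu-workers auto-scaling group for the workspace by its Liftie Cluster ID
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'liftie-abcdefgh')].AutoScalingGroupName" \
  --output text

# Raise the maximum capacity of the cpu-workers group
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name liftie-abcdefgh-ml-pqrstuv-xyz-cpu-workers-0-NodeGroup \
  --max-size 40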

SSO does not work if the first user to access an ML workspace is not a Site Admin

Problem: If a user assigned the MLUser role is the first user to access the ML workspace, the web application displays an error.

Workaround: Any user assigned the MLAdmin role must always be the first user to access an ML workspace.

API does not enforce a maximum number of nodes for ML workspaces

Problem: When the API is used to provision new ML workspaces, it does not enforce an upper limit on the autoscale range.

MLX-637, MLX-638: Downscaling ML workspace nodes does not work as expected

Problem: Downscaling nodes does not work as seamlessly as expected due to a lack of bin packing in the default Spark scheduler, and because dynamic allocation is not currently enabled. As a result, infrastructure pods, Spark driver and executor pods, and session pods are currently tagged as non-evictable using the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.
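
To see which pods are currently tagged as non-evictable, you can run a check similar to the following against the workspace cluster (a sketch, assuming kubectl access to the cluster and that jq is installed):

# List pods annotated as non-evictable by the cluster autoscaler
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | select(.metadata.annotations["cluster-autoscaler.kubernetes.io/safe-to-evict"] == "false") | "\(.metadata.namespace)/\(.metadata.name)"'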