ML Runtimes Release Notes

Known Issues and Limitations in ML Runtimes version 2025.01.1

You might run into some known issues while using ML Runtimes 2025.01.1.

In the PBJ R Runtime, messages generated in R (using IRkernel) are sent to the logs instead of being displayed in the UI. This is caused by the IRkernel itself, which writes stderr output to the logs rather than forwarding it to the UI; it is not a PBJ R Runtime issue.

When trying to plot additional content on an existing plot, PBJ R Runtimes throw an error. Plots can only be created using the plot function.

When embedding or language models are not configured, the following error message is displayed:

There seems to be a problem with the Chat backend, please look at the JupyterLab server logs or contact your administrator to correct this problem.

Workaround: Configure Cloudera Copilot with both language models and embedding models.

Due to a known issue, Cloudera AI Inference service is temporarily unavailable with Cloudera Copilot. When you try to use the Cloudera AI Inference service, an error is displayed.

When using Spark in R workloads that run on PBJ Workbench Runtimes, the environment variable R_LIBS_USER must be passed to the Spark executors, with its value set to /home/cdsw/.local/lib/R/<R_VERSION>/library.

For example, when using sparklyr with a PBJ Workbench R 4.3 Runtime, the correct way to set up a sparklyr connection is:
    library(sparklyr)
    config <- spark_config()
    config$spark.executorEnv.R_LIBS_USER <- "/home/cdsw/.local/lib/R/4.3/library"
    sc <- spark_connect(config = config)

When installing R or Python packages in a session, the kernel might not be able to load the package in the same session if a previous version of the package, or of its newly installed dependencies, was already loaded in that session. Such issues occur more often in PBJ R Runtimes, which automatically load basic R packages such as vctrs, lifecycle, rlang, and cli at session startup.

Workaround: Start a new session, then import and use the newly installed package there.

Upgrading the jupyter-client Python package to a version greater than 7.4.9 can temporarily break a project: workloads using PBJ Runtimes will not start if the installed jupyter-client version is greater than 7.4.9.

Workaround: Launch the same version of Python, but not on a PBJ Runtime (use either a Workbench or JupyterLab Runtime). Open a Terminal window and uninstall the jupyter-client package from the project by running pip3 uninstall jupyter-client. Verify the change by running pip3 list and checking that the jupyter-client version is less than 8.
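As a sketch, the workaround above amounts to the following Terminal commands; the grep filter is only a convenience for checking the remaining version:

```shell
# Run in a Terminal of a non-PBJ session (Workbench or JupyterLab Runtime).
# Remove the project-local jupyter-client package:
pip3 uninstall -y jupyter-client

# Verify that the jupyter-client version now reported is below 8:
pip3 list | grep jupyter-client
```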

Sessions
When starting a Notebook or a Console for a specific environment, the installed packages will be available, and the interpreter used to evaluate the contents of the Notebook or Console will be the one installed in the environment. However, the Conda environment is not "activated" in these sessions; therefore, commands like !which python return the base Python interpreter of the Runtime. The recommended ways to modify a Conda environment or install packages are the following:
  • Conda commands must be used with the -n or --name argument to specify the environment, for example conda -n myenv install pandas
  • When installing packages with pip, use the %pip magic to install packages in the active kernel’s environment, for example %pip install pandas
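For example, installing and then verifying a package in a named environment from a Terminal could look like the following sketch, where myenv is a hypothetical environment name:

```shell
# Install a package into a named Conda environment without activating it.
# "myenv" is a placeholder for your environment's name.
conda install -n myenv -y pandas

# Verify the package is present in that environment:
conda list -n myenv pandas
```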
Applications and Jobs
To start an Application or Job, first create a launcher Python script containing the following line: !source activate <conda_env_name> && python <job / application script.py>
When starting the Application or Job, select the launcher script as the "Script".
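As an illustration, the command that the launcher line runs is sketched below, with hypothetical placeholder names myenv and app.py; in the launcher Python script itself, the line is prefixed with !, the workbench shell escape:

```shell
# "myenv" and "app.py" are placeholders for your Conda environment
# and your Job/Application entry-point script.
source activate myenv && python app.py
```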
Models
Models are currently not supported for the Conda Runtime.
Spark
Spark is not supported in JupyterLab Notebooks and Consoles.
Spark workloads are supported in activated Conda environments in JupyterLab Terminals, or in Jobs or Applications.
The CDSW libraries for Python and R are not available for the Conda Runtimes.

When trying to add new ML Runtimes while using a custom root certificate, a number of error messages might appear in various places. For example, you might see the message "Could not fetch the image metadata" or "certificate signed by unknown authority". This is caused by the runtime-puller pods not having access to the custom root certificate in use.

Workaround:

  1. Create a directory at any location on the master node:

    For example:

    mkdir -p /certs/

  2. Copy the full server certificate chain into this folder. It is usually easier to create a single file with all of your certificates (server, intermediate(s), root):

     # Copy all certificates into a single file:
     cat server-cert.pem intermediate.pem root.pem > /certs/cert-chain.crt
  3. (Optional) If you are using a custom docker registry that has its own certificate, you need to copy this certificate chain into this same file:
    cat docker-registry-cert.pem >> /certs/cert-chain.crt
  4. Append the global CA certificates to this new file:

     cat /etc/ssl/certs/ca-bundle.crt >> /certs/cert-chain.crt
  5. Edit your deployment of runtime manager and add the new mount.

    Do not delete any existing objects.

    kubectl edit deployment runtime-manager

  6. Under volumeMounts, add the following lines.

    Note that the text is white-space sensitive - use spaces, not tabs.

    - mountPath: /etc/ssl/certs/ca-certificates.crt
      name: mycert
      subPath: cert-chain.crt # this must match the file name created in step 2

    Under volumes, add the following text in the same edit:

    - hostPath:
        path: /certs/ # this must match the folder created in step 1
        type: ""
      name: mycert
  7. Save your changes:

    :wq!

    Once saved, you will receive the message "deployment.apps/runtime-manager edited" and the pod will be restarted with your new changes.

  8. To persist these changes across cluster restarts, use the following Knowledge Base article to create a Kubernetes patch file for the runtime-manager deployment: https://community.cloudera.com/t5/Customer/Patching-CDSW-Kubernetes-deployments/ta-p/90241

Cloudera Bug: DSE-20530

Scala Runtimes should not appear as an option for Models, Experiments, and Applications in the user interface. Currently, Scala Runtimes only support Sessions and Jobs.

The Impyla package does not support the Python 3.12 editions of ML Runtimes (ML Runtimes version 2024.10 and higher). Therefore, Impyla cannot be used with these ML Runtimes to connect to Apache Impala.
