Migrating from an RPM-based Deployment to the Latest 1.9.1 CSD

This topic describes how to migrate from an RPM-based deployment to the latest 1.9.1 CSD and parcel-based deployment.

Make sure you read the Cloudera Data Science Workbench Release Notes relevant to the versions you are migrating from and to.
  1. Save a backup of the Cloudera Data Science Workbench configuration file located at /etc/cdsw/config/cdsw.conf.
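    For example, you can copy the file to a safe location outside /etc/cdsw (the destination path here is just illustrative):
    cp /etc/cdsw/config/cdsw.conf /root/cdsw.conf.backup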
  2. Stop Cloudera Data Science Workbench. Because the deployment is still RPM-based at this point, stop the application from the command line rather than through Cloudera Manager.
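    Run the following command on the master host:
    cdsw stop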
  3. If you applied any patches to your deployment, remove them. You can follow the substeps below or use the scripted sketch that follows them.
    1. Delete the two patch files: /etc/cdsw/patches/default/deployment/ingress-controller.yaml and /etc/cdsw/patches/default/deployment/tcp-ingress-controller.yaml.
    2. Delete every empty folder from the /etc/cdsw/patches directory.
    3. Delete the /etc/cdsw/patches directory if it is empty.
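    A minimal scripted equivalent of these three substeps, run as root (find's -delete implies a depth-first traversal, so empty subdirectories are removed before /etc/cdsw/patches itself is checked):
    rm -f /etc/cdsw/patches/default/deployment/ingress-controller.yaml /etc/cdsw/patches/default/deployment/tcp-ingress-controller.yaml
    find /etc/cdsw/patches -type d -empty -delete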
  4. (Strongly Recommended) On the master host, back up all your application data that is stored in the /var/lib/cdsw directory.
    To create the backup, run the following command on the master host:
    tar cvzf cdsw.tar.gz /var/lib/cdsw/*
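    To spot-check that the archive was written correctly, list its contents:
    tar tvzf cdsw.tar.gz | head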
  5. Uninstall the previous release of Cloudera Data Science Workbench. Perform this step on the master host, as well as on all the worker hosts.
    yum remove cloudera-data-science-workbench 
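    To confirm that the package was removed on each host, query the RPM database. The command should report that the package is not installed:
    rpm -q cloudera-data-science-workbench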
  6. Install the latest version of Cloudera Data Science Workbench using the CSD and parcel. Note that when you are configuring role assignments for the Cloudera Data Science Workbench service, the Master role must be assigned to the same host that was running as master prior to the upgrade.
    For installation instructions, see Installing Cloudera Data Science Workbench 1.9.1 Using Cloudera Manager. If you already have the wildcard DNS domain and block devices set up, you might be able to skip the first few steps.
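    For example, to verify before you proceed that the wildcard DNS entry resolves (cdsw.example.com is a placeholder for your actual domain; any subdomain should resolve to the master host):
    dig +short anything.cdsw.example.com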
  7. Use your copy of the backup cdsw.conf created in Step 1 to recreate those settings in Cloudera Manager by configuring the corresponding properties under the Cloudera Data Science Workbench service.
    1. Log into the Cloudera Manager Admin Console.
    2. Go to the Cloudera Data Science Workbench service.
    3. Click the Configuration tab.
    4. Use the search box to bring up the properties you want to modify. The list below pairs each cdsw.conf property with its corresponding Cloudera Manager property and a description.
    5. Click Save Changes.

      TLS_ENABLE

      Enable TLS: Enable and enforce HTTPS (TLS/SSL) access to the web application (optional). Both internal and external termination are supported. To enable internal termination, you must also set the TLS Certificate for Internal Termination and TLS Key for Internal Termination parameters. If these parameters are not set, terminate TLS using an external proxy.

      For more details on TLS termination, see Enabling TLS/SSL for Cloudera Data Science Workbench.

      TLS_CERT

      TLS_KEY

      TLS Certificate for Internal Termination, TLS Key for Internal Termination

      Complete path to the certificate and private key (in PEM format) to be used for internal TLS termination. Set these parameters only if you are not terminating TLS externally. You must also set the Enable TLS property to enable and enforce termination. The certificate must include both DOMAIN and *.DOMAIN as hostnames.

      Self-signed certificates are not supported unless they are fully trusted by clients; manually accepting an invalid certificate can cause connection failures for unknown subdomains. For details on certificate requirements and enabling TLS termination, see Enabling TLS/SSL for Cloudera Data Science Workbench.
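
      To confirm that a certificate includes both hostnames before you configure it (the file name here is illustrative), inspect its Subject Alternative Name field; the output should list both DNS:DOMAIN and DNS:*.DOMAIN entries:
      openssl x509 -in cdsw.pem -noout -text | grep -A1 "Subject Alternative Name"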

      TLS_ROOTCA

      If your organization uses an internal custom Certificate Authority, you can use this field to paste in the contents of your internal CA's root certificate file.

      The contents of this field are then inserted into the engine's root certificate store every time a session (or any workload) is launched. This allows processes inside the engine to communicate securely with the ingress controller.
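
      Before pasting the certificate contents, you can verify that you have the right file (the file name is illustrative):
      openssl x509 -in internal-root-ca.pem -noout -subject -issuer -dates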

      HTTP_PROXY

      HTTPS_PROXY

      HTTP Proxy, HTTPS Proxy

      If your deployment is behind an HTTP or HTTPS proxy, set the respective HTTP Proxy or HTTPS Proxy property to the hostname of the proxy you are using.
      http://<proxy_host>:<proxy_port>
      or
      https://<proxy_host>:<proxy_port>

      If you are using an intermediate proxy such as Cntlm to handle NTLM authentication, add the Cntlm proxy address to the HTTP Proxy or HTTPS Proxy fields. That is, either http://localhost:3128 or https://localhost:3128 respectively.

      If the proxy server uses TLS encryption to handle connection requests, you will need to add the proxy's root CA certificate to your host's store of trusted certificates. This is because proxy servers typically sign their server certificate with their own root certificate. Therefore, any connection attempts will fail until the Cloudera Data Science Workbench host trusts the proxy's root CA certificate. If you do not have access to your proxy's root certificate, contact your Network / IT administrator.

      To enable trust, copy the proxy's root certificate to the trusted CA certificate store (ca-trust) on the Cloudera Data Science Workbench host.
      cp /tmp/<proxy-root-certificate>.crt /etc/pki/ca-trust/source/anchors/
      Use the following command to rebuild the trusted certificate store.
      update-ca-trust extract

      ALL_PROXY

      SOCKS Proxy: If a SOCKS proxy is in use, set this parameter to socks5://<host>:<port>/.

      NO_PROXY

      No Proxy: Comma-separated list of hostnames that should bypass the proxy.

      Starting with version 1.4, if you have defined a proxy in the HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY properties, Cloudera Data Science Workbench automatically appends the following list of IP addresses to the NO_PROXY configuration. Note that this is the minimum required configuration for this field.

      The auto-appended list consists of 127.0.0.1, localhost, and the internal IP addresses shown below. In addition, include in this field any private Docker registries and HTTP services inside the firewall that Cloudera Data Science Workbench users might want to access from the engines; an example follows the list.

      "127.0.0.1,localhost,100.66.0.1,100.66.0.2,100.66.0.3,
      100.66.0.4,100.66.0.5,100.66.0.6,100.66.0.7,100.66.0.8,100.66.0.9,
      100.66.0.10,100.66.0.11,100.66.0.12,100.66.0.13,100.66.0.14,
      100.66.0.15,100.66.0.16,100.66.0.17,100.66.0.18,100.66.0.19,
      100.66.0.20,100.66.0.21,100.66.0.22,100.66.0.23,100.66.0.24,
      100.66.0.25,100.66.0.26,100.66.0.27,100.66.0.28,100.66.0.29,
      100.66.0.30,100.66.0.31,100.66.0.32,100.66.0.33,100.66.0.34,
      100.66.0.35,100.66.0.36,100.66.0.37,100.66.0.38,100.66.0.39,
      100.66.0.40,100.66.0.41,100.66.0.42,100.66.0.43,100.66.0.44,
      100.66.0.45,100.66.0.46,100.66.0.47,100.66.0.48,100.66.0.49,
      100.66.0.50,100.77.0.10,100.77.0.128,100.77.0.129,100.77.0.130,
      100.77.0.131,100.77.0.132,100.77.0.133,100.77.0.134,100.77.0.135,
      100.77.0.136,100.77.0.137,100.77.0.138,100.77.0.139"
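
      For example, to let engines reach a private Docker registry inside the firewall without going through the proxy, append your own entries after the required list (the registry hostname here is hypothetical):
      "<required IP list shown above>,docker-registry.internal.example.com"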

      NVIDIA_GPU_ENABLE

      Enable GPU Support: When this property is enabled, GPUs installed on Cloudera Data Science Workbench hosts will be available for use in its workloads. By default, this parameter is disabled.

      For instructions on how to enable GPU-based workloads on Cloudera Data Science Workbench, see Using NVIDIA GPUs for Cloudera Data Science Workbench Workloads.

  8. Cloudera Manager will prompt you to restart the service if needed.
  9. If the release you have just upgraded to includes a new version of the base engine image (see release notes), you will need to manually configure existing projects to use the new engine. Cloudera recommends you do so to take advantage of any new features and bug fixes included in the newly released engine. For example:
    • Container Security

      Security best practices dictate that engine containers should not run as the root user. Engines (v7 and lower) briefly initialize as the root user and then run as the cdsw user. Engines v8 (and higher) now follow the best practice and run only as the cdsw user. For more details, see Restricting User-Created Pods.

    • CDH 6 Compatibility

      The base engine image you use must be compatible with the version of CDH you are running. This is especially important if you are running workloads on Spark. Older base engines (v6 and lower) cannot support the latest versions of CDH 6. If you want to run Spark workloads on CDH 6, you must upgrade your projects to base engine 7 (or higher).

    • Editors

      Engines v8 (and higher) ship with the browser-based IDE, Jupyter, preconfigured. You can select it from the Start Session menu.

    To upgrade a project to the new engine, go to the project's Settings > Engine page and select the new engine from the dropdown. If any of your projects are using custom extended engines, you will need to modify them to use the new base engine image.
  10. (GPU-enabled Deployments) Remove nvidia-docker1 and Upgrade NVIDIA Drivers to 410.xx or higher
    Perform the following steps to make sure you can continue to leverage GPUs for workloads on Cloudera Data Science Workbench 1.6 (and higher).
    1. Remove nvidia-docker1. Cloudera Data Science Workbench (version 1.6 and higher) ships with nvidia-docker2 installed by default.
      Perform this step on all hosts that have GPUs attached to them.
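      For example, on a yum-based system (the exact package name can vary depending on how nvidia-docker1 was originally installed):
      yum remove nvidia-docker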
    2. Upgrade your NVIDIA driver to version 410.xx (or higher). This is required because nvidia-docker2 does not support older NVIDIA driver versions.
      • Stop Cloudera Data Science Workbench.

        Depending on your deployment, either stop the CDSW service in Cloudera Manager (for CSDs) or run cdsw stop on the Master host (for RPMs).

      • Reboot the GPU-enabled hosts, then install a supported version of the NVIDIA driver (410.xx or higher) on all GPU-enabled hosts. A quick driver check is shown after these steps.
      • Start Cloudera Data Science Workbench.

        Depending on your deployment, either start the CDSW service in Cloudera Manager (for CSDs) or run cdsw start on the Master host (for RPMs).
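
      After the hosts come back up, you can confirm the driver version on each GPU-enabled host (nvidia-smi ships with the driver):
      nvidia-smi --query-gpu=driver_version --format=csv,noheader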