Known installation issues

  • During the Cloudera Data Science Workbench startup process, you might see timeout errors such as the following:

    Pods not ready in cluster default ['role/<pod_name>'].

    This occurs because some pods take longer to start up than expected, and dependent processes time out while waiting for them. Restart the CDSW service to get past this issue.

    Cloudera Bug: DSE-6855
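    A sketch of the restart, assuming an RPM-based deployment where the cdsw CLI is available on the master node (verify the commands available in your release):

```shell
# Run as root on the CDSW master node.
cdsw status    # check which pods are not ready
cdsw restart   # restart the CDSW service and re-create its pods
```

    After the restart, run cdsw status again to confirm that all pods report as ready.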

  • Cloudera Data Science Workbench limits a single user to a maximum of 50 concurrent Sessions, Jobs, Models, Applications, and Spark executors running in parallel. Pods created through the Workers API also count toward this 50-pod limit.
  • Cloudera Data Science Workbench cannot be managed by Apache Ambari.
  • Apache Phoenix requires additional configuration to run commands successfully from within Cloudera Data Science Workbench engines (sessions, jobs, experiments, models). Workaround: Explicitly set HBASE_CONF_PATH to a valid path before running Phoenix commands from engines:
    export HBASE_CONF_PATH=/usr/hdp/hbase/<hdp_version>/0/
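    For example, in a session terminal or at the top of a job script, you might set the variable before launching the Phoenix SQL client. A sketch; the exact configuration path and the sqlline.py location depend on your HDP version, and the ZooKeeper quorum placeholder is an assumption here:

```shell
# Point Phoenix at a valid HBase configuration directory
# (substitute your actual HDP version for <hdp_version>).
export HBASE_CONF_PATH=/usr/hdp/hbase/<hdp_version>/0/

# Phoenix commands run from the engine can then locate HBase, e.g.:
/usr/hdp/current/phoenix-client/bin/sqlline.py <zookeeper_quorum>:2181
```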
  • Cloudera Data Science Workbench is not Highly Available

    The CDSW application does not have any built-in High Availability. However, you can set up a cold standby master node with the following procedure:


      1. Install CDSW.
      2. Remove the CDSW master node from the Cloudera Manager cluster.
      3. Add a new node to the Cloudera Manager cluster and configure/install it as a CDSW master node, taking care to include a block device for both /var/lib/cdsw and for Docker usage.

        You also need to ensure that this "new" master node has the correct DNS configurations.

      4. Copy the full /var/lib/cdsw from the original master node to this new master node and start CDSW.

        Everything should work as normal.

      5. Set up a process to copy the full /var/lib/cdsw from your primary master node to the standby master node.
      6. Test a down scenario by removing the current master node from Cloudera Manager.

        You will need to start the second master node and update the DNS and TLS settings.
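    The copy process in step 5 can be as simple as a scheduled rsync. A minimal cron sketch, assuming passwordless SSH from the primary master to the standby and a hypothetical standby hostname cdsw-standby:

```shell
# Cron entry (crontab -e on the primary master node):
# every hour, mirror /var/lib/cdsw to the standby master.
# --delete keeps the standby an exact copy of the primary;
# adjust the schedule to your tolerance for data loss on failover.
0 * * * * rsync -a --delete /var/lib/cdsw/ root@cdsw-standby:/var/lib/cdsw/
```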

    Cloudera Bug: DSE-799