CUDA Engine - Technical Preview
To make it easier for users to get started with GPUs on CDSW, the Cloudera engineering team is working on a new custom engine that comes with CUDA enabled out of the box. Previously, users were expected to build their own CUDA-capable engine image.
Enabling GPUs for CDSW with the CUDA engine
This section gives you a modified set of steps to follow when you want to use the CUDA engine to enable GPUs for CDSW workloads. If you have already set up your hosts for GPUs, you can skip ahead as noted in the next section.
Set up the CDSW hosts
The first few steps of this process are the same as those required for the traditional GPU setup process. If you have already performed these steps for an existing project that uses GPUs, you can move on to the next section: Site Admins: Add the CUDA Engine to your Cloudera Data Science Workbench Deployment
- Set Up the Operating System and Kernel
- Install the NVIDIA Driver on GPU Hosts - Make sure you are using a driver that is compatible with the CUDA engine (see the version check after this list).
- Enable GPU Support in Cloudera Data Science Workbench
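For example, before adding the engine, you can confirm on each GPU host that the driver loaded and note its version. This is a minimal check, assuming nvidia-smi is on the host's path:
# Run on each GPU host; prints the installed driver version and GPU model.
nvidia-smi --query-gpu=driver_version,name --format=csv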
Site Admins: Add the CUDA Engine to your Cloudera Data Science Workbench Deployment
Required CDSW Role: Site Administrator
- Sign in to Cloudera Data Science Workbench.
- Click Admin.
- Go to the Engines tab.
- Under Engine Images, add docker.repository.cloudera.com/cdsw/cuda-engine:10 to the list of images.
- Click Update.
Airgapped CDSW Deployments: Once these steps have been performed, the CUDA engine will be pulled from Cloudera's public Docker registry. If you have an airgapped deployment, you will also need to manually distribute this image to every CDSW host. For sample steps, see Distribute the Image.
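The linked steps are authoritative; as a rough outline (the tarball name below is illustrative), distribution usually means pulling the image on a host with registry access, saving it to a tarball, copying that tarball to each CDSW host, and loading it there:
# On a host that can reach Cloudera's public Docker registry:
docker pull docker.repository.cloudera.com/cdsw/cuda-engine:10
docker save -o cuda-engine-10.tar docker.repository.cloudera.com/cdsw/cuda-engine:10
# After copying cuda-engine-10.tar to each CDSW host:
docker load -i cuda-engine-10.tar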
Project Admins: Enable the CUDA Engine for your Project
Project administrators can use the following steps to make the CUDA engine the default engine for workloads within a particular project.
- Navigate to your project's Overview page.
- Click Settings.
- Go to the Engines tab.
- Under Engine Image, select the CUDA-capable engine image from the dropdown.
Test the CUDA Engine
You can use the following simple examples to test whether the new CUDA engine is able to leverage GPUs as expected.
- Go to a project that is using the CUDA engine and click New Session.
- Launch a new session with GPUs.
- Run the following command in the workbench command prompt to verify that the driver was installed correctly:
! /usr/bin/nvidia-smi
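# Optionally, also confirm the CUDA toolkit bundled with the engine. This assumes
# the engine image puts nvcc on the path, which is not documented for this preview:
! nvcc --version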
- Use any of the following code samples to confirm that the new engine works with common deep learning libraries.
PyTorch
!pip3 install torch
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
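If those assertions pass, a quick follow-up can confirm that work actually runs on the GPU. This is a minimal sketch, not part of the original sample:
import torch
x = torch.ones(1000, 1000, device="cuda")  # allocate the tensor directly on the GPU
y = x.matmul(x)                            # the matrix multiply runs on the GPU
print(y.device)                            # expect: cuda:0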
TensorFlow
!pip3 install tensorflow-gpu==2.1.0
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
device_lib.list_local_devices()
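To go a step further, you can request explicit GPU placement for a small computation; a minimal sketch for TensorFlow 2.x:
import tensorflow as tf
with tf.device("/GPU:0"):                # request explicit GPU placement
    a = tf.random.uniform((1000, 1000))  # random matrix created on the GPU
    b = tf.matmul(a, a)                  # matrix multiply on the GPU
print(b.device)                          # expect a device string ending in GPU:0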
Keras
!pip3 install keras
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
print(backend.tensorflow_backend._get_available_gpus())
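As a final end-to-end check, you can fit a toy model for one epoch and watch nvidia-smi for utilization; a minimal sketch, assuming the tensorflow-gpu backend from the previous sample is installed:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# A toy regression model; with tensorflow-gpu as the backend, Keras places it on the GPU by default.
model = Sequential([Dense(8, input_dim=4, activation="relu"), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(64, 4), np.random.rand(64, 1), epochs=1)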