Testing ML Runtime GPU Setup

You can use the following simple examples to test whether the new ML Runtime is able to leverage GPUs as expected.

  1. Go to a project that is using the Nvidia GPU edition of ML Runtimes and click Open Workbench.
  2. Launch a new session with GPUs.
  3. Run the following command in the workbench command prompt to verify that the driver was installed correctly (a framework-independent Python version of this check is sketched after this list):
    ! /usr/bin/nvidia-smi
  4. Use any of the following code samples to confirm that the new Runtime works with common deep learning libraries.
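
If you prefer to script the driver check from step 3, the same verification can be done from Python. This is a minimal, framework-independent sketch that is not part of the original samples; the nvidia-smi query flags shown are standard, but confirm they match your driver version.

import subprocess
# Ask the driver for the names of all visible GPUs, one name per line.
out = subprocess.check_output(
    ["/usr/bin/nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
    text=True,
)
gpus = [line for line in out.splitlines() if line.strip()]
assert gpus, "nvidia-smi did not report any GPUs"
print(gpus)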

PyTorch

!pip3 install torch==1.4.0
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
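
If the checks above pass, you can go one step further and run a small computation on the GPU. This follow-up is a minimal sketch, not part of the original sample:

import torch
# Allocate a tensor directly on the GPU and multiply it with itself.
x = torch.rand(1000, 1000, device="cuda")
y = x @ x
print(y.device)  # expected: cuda:0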

TensorFlow

!pip3 install tensorflow-gpu==2.1.0
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
device_lib.list_local_devices()
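
To confirm that operations are actually placed on the GPU, and not only that the device is listed, you can run a small computation with explicit device placement. This is a minimal sketch, not part of the original sample; note that in TensorFlow 2.1 the device-listing helper lives under tf.config.experimental:

import tensorflow as tf
print(tf.config.experimental.list_physical_devices('GPU'))
with tf.device('/GPU:0'):
    a = tf.random.uniform((1000, 1000))
    b = tf.matmul(a, a)
print(b.device)  # expected to end in GPU:0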

Keras

!pip3 install keras
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
print(backend.tensorflow_backend._get_available_gpus())
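
The snippet above relies on a private helper of standalone Keras and may not exist in newer, TensorFlow-bundled Keras installations. As an alternative, this minimal sketch (an assumption, not part of the original sample) uses tf.keras, which trains on whatever GPUs TensorFlow can see:

import numpy as np
import tensorflow as tf
from tensorflow import keras
# Keras trains on the devices TensorFlow reports, so check TensorFlow's GPU list.
assert tf.config.experimental.list_physical_devices('GPU')
model = keras.Sequential([keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='sgd', loss='mse')
model.fit(np.random.rand(64, 10), np.random.rand(64, 1), epochs=1, verbose=0)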