Testing GPU Setup
Use these code samples to test that your GPU setup works with several common deep learning libraries. The library versions you need depend on the particular GPU and the GPU driver version. These tests apply to GPU setups that use Legacy Engines.
- Go to a project that is using the CUDA engine and click Open Workbench.
- Launch a new session with GPUs.
- Run the following command in the workbench command prompt to verify that the driver was installed correctly:
! /usr/bin/nvidia-smi
- Use any of the following code samples to confirm that the new engine works with common deep learning libraries.
PyTorch
!pip3 install torch==1.4.0
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
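If the checks above pass, you can optionally run a small computation to confirm that CUDA kernels actually execute. This is an extra sanity check, not part of the original sample; the matrix sizes are arbitrary.
import torch
# Allocate two small matrices on the GPU and multiply them.
a = torch.rand(1000, 1000, device="cuda")
b = torch.rand(1000, 1000, device="cuda")
c = a @ b
# The result should report a CUDA device, for example cuda:0.
print(c.device)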
TensorFlow
!pip3 install tensorflow-gpu==2.1.0
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
device_lib.list_local_devices()
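TensorFlow 2.x also exposes a public API for the same check, which avoids the private device_lib module. This is an optional alternative, assuming TensorFlow 2.1 or later:
import tensorflow as tf
# List the physical GPU devices visible to TensorFlow.
gpus = tf.config.list_physical_devices('GPU')
assert len(gpus) > 0
print(gpus)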
Keras
!pip3 install keras==2.3.1
from keras import backend
# The private tensorflow_backend helper exists only in standalone Keras 2.3.x,
# so the Keras version is pinned above.
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
print(backend.tensorflow_backend._get_available_gpus())
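If the backend reports at least one GPU, you can optionally fit a tiny model to confirm that end-to-end training works. This is an extra check, not part of the original sample; the layer sizes and data are arbitrary.
import numpy as np
from keras import models, layers
# Train a minimal model for one epoch; with a working GPU build,
# TensorFlow places these operations on the GPU automatically.
model = models.Sequential([
    layers.Dense(8, activation='relu', input_shape=(4,)),
    layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=1)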