
Enabling CUDA, cuDNN & TensorRT for NVIDIA L4 GPU

This knowledge base (KB) article lists the supported versions, installation commands, and verification steps for enabling CUDA, cuDNN, and TensorRT on virtual machines powered by the NVIDIA L4 GPU.


1. Overview

The NVIDIA L4 GPU is based on the Ada Lovelace architecture and is optimized for:

  • AI/ML inference & training
  • Computer vision
  • Video transcoding
  • Generative AI workloads

To fully utilize the L4 GPU, the following NVIDIA software stack is required:

  • NVIDIA GPU Driver
  • CUDA Toolkit
  • cuDNN
  • TensorRT

2. Supported Versions for NVIDIA L4

2.1 GPU Driver (Minimum Requirement)

Component     | Supported Version
NVIDIA Driver | R535+ (Recommended: R550+)

⚠️ Older drivers do not support Ada Lovelace GPUs such as L4.
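The minimum-driver rule above is easy to script when automating VM checks. The helper below is a hypothetical sketch (not part of any NVIDIA tool) that parses a driver version string as reported by nvidia-smi, e.g. 550.54.14, and compares its major release against the R535 floor.

```python
def driver_meets_minimum(version: str, minimum_major: int = 535) -> bool:
    """Return True if an NVIDIA driver version string (e.g. '550.54.14')
    is at or above the given major-release floor."""
    major = int(version.split(".")[0])
    return major >= minimum_major

# Example: an R550 driver satisfies the R535+ requirement; R470 does not.
print(driver_meets_minimum("550.54.14"))  # True
print(driver_meets_minimum("470.82.01"))  # False
```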


2.2 CUDA Toolkit

CUDA Version | Support Status
CUDA 12.4    | ✅ Recommended
CUDA 12.3    | ✅ Supported
CUDA 12.2    | ✅ Supported
CUDA 11.x    | ❌ Not recommended

2.3 cuDNN

cuDNN Version | CUDA Compatibility | Status
cuDNN 9.x     | CUDA 12.x          | ✅ Recommended
cuDNN 8.9+    | CUDA 12.x          | ✅ Supported

2.4 TensorRT

TensorRT Version | CUDA Compatibility | Status
TensorRT 10.x    | CUDA 12.x          | ✅ Recommended
TensorRT 9.x     | CUDA 12.x          | ✅ Supported
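For automated environment checks, the tables in this section can be collapsed into a small lookup. The dictionary below is a hypothetical encoding of the recommendations above, not an official NVIDIA compatibility API; adjust it as the support matrix evolves.

```python
# Hypothetical encoding of the L4 support matrix from the tables above.
L4_SUPPORT_MATRIX = {
    "cuda":     {"recommended": "12.4", "supported": ["12.2", "12.3", "12.4"]},
    "cudnn":    {"recommended": "9",    "supported": ["8.9", "9"]},
    "tensorrt": {"recommended": "10",   "supported": ["9", "10"]},
}

def is_supported(component: str, version: str) -> bool:
    """Check whether a major[.minor] version appears in the supported list."""
    entry = L4_SUPPORT_MATRIX.get(component)
    if entry is None:
        return False
    return any(version.startswith(v) for v in entry["supported"])

print(is_supported("cuda", "12.4"))  # True
print(is_supported("cuda", "11.8"))  # False
```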

3. Installation Steps (Linux)

Assumption: the NVIDIA driver is already installed and the GPU is visible via nvidia-smi.


3.1 Verify GPU & Driver

nvidia-smi

Expected:

  • GPU Name: NVIDIA L4
  • Driver Version: 535+
  • No errors
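For scripted checks, nvidia-smi can emit machine-readable output with `nvidia-smi --query-gpu=name,driver_version --format=csv,noheader`. The sketch below shows one way to validate that one-line CSV against the criteria above; the sample string is illustrative, not captured from a specific VM.

```python
def parse_gpu_query(line: str) -> tuple[str, str]:
    """Parse one line of `nvidia-smi --query-gpu=name,driver_version
    --format=csv,noheader` into (gpu_name, driver_version)."""
    name, driver = (field.strip() for field in line.split(","))
    return name, driver

# Illustrative output line for an L4 VM.
name, driver = parse_gpu_query("NVIDIA L4, 550.54.14")
assert name == "NVIDIA L4"
assert int(driver.split(".")[0]) >= 535
```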

4. Installing CUDA Toolkit

Ubuntu 22.04 / 24.04

sudo apt update
sudo apt install -y build-essential
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit-12-4

Note: apt-key is deprecated on Ubuntu 22.04 and later; the cuda-keyring package above installs NVIDIA's repository and signing key in one step. On Ubuntu 24.04, replace ubuntu2204 with ubuntu2404 in the URL.

Set environment variables:

echo 'export PATH=/usr/local/cuda-12.4/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

Verify CUDA:

nvcc --version
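`nvcc --version` prints a banner whose last line reads like `Cuda compilation tools, release 12.4, V12.4.131`. When scripting the check, the release number can be pulled out with a small sketch like the one below (the sample banner line is illustrative).

```python
import re

def cuda_release(nvcc_output: str) -> str:
    """Extract the CUDA release (e.g. '12.4') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    if match is None:
        raise ValueError("no CUDA release found in nvcc output")
    return match.group(1)

sample = "Cuda compilation tools, release 12.4, V12.4.131"  # illustrative
print(cuda_release(sample))  # 12.4
```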

5. Installing cuDNN

sudo apt install -y libcudnn9-cuda-12 libcudnn9-dev-cuda-12

Verify cuDNN:

grep -A 2 CUDNN_MAJOR /usr/include/cudnn_version.h
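cudnn_version.h encodes the version in the `CUDNN_MAJOR`, `CUDNN_MINOR`, and `CUDNN_PATCHLEVEL` macros. A sketch that assembles those defines into a dotted version string (the header snippet below is illustrative):

```python
import re

def cudnn_version(header_text: str) -> str:
    """Build 'major.minor.patch' from the #define lines in cudnn_version.h."""
    parts = []
    for macro in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        match = re.search(rf"#define {macro} (\d+)", header_text)
        if match is None:
            raise ValueError(f"{macro} not found in header")
        parts.append(match.group(1))
    return ".".join(parts)

sample_header = """
#define CUDNN_MAJOR 9
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 0
"""
print(cudnn_version(sample_header))  # 9.1.0
```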

6. Installing TensorRT

sudo apt install -y tensorrt

This installs:

  • TensorRT runtime
  • TensorRT Python bindings
  • ONNX parser

Verify TensorRT (trtexec does not accept a --version flag; check the installed package version instead):

dpkg -l | grep -i tensorrt

For Python verification:

python3 -c "import tensorrt as trt; print(trt.__version__)"
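Beyond printing the version, the same import can drive a compatibility check. The helper below is a hypothetical sketch that validates a TensorRT version string against the support table in section 2.4 (9.x and 10.x supported on L4).

```python
def tensorrt_ok(version: str) -> bool:
    """Return True if a TensorRT version string (e.g. '10.0.1') has a
    major release listed as supported for the L4 (9.x or 10.x)."""
    major = int(version.split(".")[0])
    return major in (9, 10)

print(tensorrt_ok("10.0.1"))  # True
print(tensorrt_ok("8.6.1"))   # False
```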

7. Verification Summary

7.1 GPU

nvidia-smi

7.2 CUDA

nvcc --version

7.3 cuDNN

ldconfig -p | grep cudnn

7.4 TensorRT

trtexec --help

Note: the tensorrt package installs trtexec under /usr/src/tensorrt/bin/; call it by full path or add that directory to PATH if the command is not found.
