Resolving Podman GPU access troubles relating to nvidia-uvm
# /etc/systemd/system/nvidia-uvm-init.service
[Unit]
Description=Create nvidia-uvm device nodes (required by containers via CDI) on boot
Before=multi-user.target
[Service]
Type=oneshot
ExecStart=/usr/local/libexec/nvidia-uvm-init.sh
[Install]
WantedBy=multi-user.target
#!/usr/bin/env bash
# /usr/local/libexec/nvidia-uvm-init.sh
if modprobe nvidia-uvm; then
  # Get the major device number used by the nvidia-uvm driver
  D=$(grep nvidia-uvm /proc/devices | awk '{print $1}')
  # nvidia-uvm uses minor number 0; nvidia-uvm-tools uses minor number 1
  mknod -m 666 /dev/nvidia-uvm c "$D" 0
  mknod -m 666 /dev/nvidia-uvm-tools c "$D" 1
else
  exit 1
fi
cyqsimon commented Jan 5, 2026

This Gist documents how to resolve the following error when starting a Podman container:

Error: setting up CDI devices: failed to inject devices: failed to stat CDI host device "/dev/nvidia-uvm": no such file or directory

This happens when you've configured a Podman container that relies on an NVIDIA GPU to autostart, but the system is headless, so nothing on the host has triggered creation of /dev/nvidia-uvm and /dev/nvidia-uvm-tools by the time the container starts. CDI only injects existing host device nodes into the container; it does not create them, so the container fails to start. In my opinion this is a bug on NVIDIA's part, but whatever.
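
For reference, the failure is easy to confirm from a shell on the host before applying the fix. The CDI device name (nvidia.com/gpu=all) and the image below are placeholders; substitute whatever your container actually uses:

ls -l /dev/nvidia-uvm*   # "No such file or directory" on an affected system
podman run --rm --device nvidia.com/gpu=all <some-cuda-image> nvidia-smi   # fails with the CDI error above

Once nvidia-uvm-init.service has run, both commands should succeed.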

Code adapted from Matthieu's blog (Internet Archive). Thanks!
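
To deploy the two files above, something along these lines should work (paths match the files in this Gist):

sudo install -D -m 755 nvidia-uvm-init.sh /usr/local/libexec/nvidia-uvm-init.sh
sudo install -D -m 644 nvidia-uvm-init.service /etc/systemd/system/nvidia-uvm-init.service
sudo systemctl daemon-reload
sudo systemctl enable --now nvidia-uvm-init.service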
