This guide provides step-by-step instructions for setting up GPU passthrough in Proxmox VE, supporting both Intel and AMD systems with NVIDIA or AMD GPUs.
- Prerequisites
- Step 1: Configure GRUB for IOMMU
- Step 2: Load VFIO Kernel Modules
- Step 2.1: Fix EFI Boot Sync Issues (Proxmox)
- Step 3: Verify IOMMU is Working
- Step 4: Verify IOMMU Groups
- Step 5: Identify Your GPU Hardware
- Step 6: Blacklist GPU Drivers
- Step 7: Configure VFIO for Your GPU
- Step 8: KVM Configuration for GPU Compatibility
- Step 9: Update and Reboot
- Step 10: Verify GPU Passthrough Setup
- Step 11: Set Up Shared Storage (Optional)
- Step 12: Create VMs with GPU Passthrough
- Step 13: Performance Optimization
- 🛠️ Interactive Helper Script
- 🧯 Troubleshooting
- 🔒 Security Considerations
- 🧪 Hardware-Specific Examples
- CPU: Intel with VT-x + VT-d OR AMD with AMD-V + AMD-Vi
- Motherboard: UEFI firmware with IOMMU support
- GPU: Dedicated graphics card for passthrough
- RAM: 16GB minimum (32GB+ recommended for dual VMs)
- Storage: SSD recommended for VM storage
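Before touching the BIOS, you can confirm from Linux that the CPU advertises virtualization support at all. A minimal sketch (the `check_virt_flags` helper name is ours); it classifies a flags string like the `flags` line of /proc/cpuinfo, shown here against sample input so it runs anywhere:

```shell
# Hypothetical helper: classify hardware virtualization support from a CPU
# flags string (the "flags" line of /proc/cpuinfo).
check_virt_flags() {
  case " $1 " in
    *" vmx "*) echo "Intel VT-x present" ;;
    *" svm "*) echo "AMD-V present" ;;
    *)         echo "no hardware virtualization flags found" ;;
  esac
}

# On a real host: check_virt_flags "$(grep -m1 '^flags' /proc/cpuinfo)"
# Sample input for illustration:
check_virt_flags "fpu vme de svm sse2"   # prints: AMD-V present
```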
Enable the following in your BIOS/UEFI:
Intel Systems:
- Intel VT-x (Virtualization Technology)
- Intel VT-d (Directed I/O)
AMD Systems:
- AMD-V (SVM Mode)
- AMD-Vi (IOMMU)
Common Settings:
- Above 4G Decoding: ENABLED
- Re-Size BAR Support: ENABLED (if available)
- CSM (Compatibility Support Module): DISABLED
- Secure Boot: Can be enabled or disabled
Edit the GRUB configuration:
nano /etc/default/grub

Intel systems:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

AMD systems (the IOMMU is enabled by default on recent kernels):

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"

Or, for full IOMMU isolation, drop iommu=pt so host-owned devices are translated as well:

GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"
⚠️ If you're using Proxmox 8 with proxmox-boot-tool, do not run update-grub directly. Instead:
proxmox-boot-tool refresh
reboot

Add the required VFIO modules to ensure they load at boot:
nano /etc/modules

Add:
vfio
vfio_iommu_type1
vfio_pci

Update the initramfs:
update-initramfs -u -k all
proxmox-boot-tool refresh
reboot

If you see EFI-related warnings during update-initramfs, follow these steps:
1. Identify the EFI partition:

   lsblk -f

2. Mount it:

   mkdir -p /boot/efi
   mount /dev/sdXn /boot/efi

3. Install GRUB EFI:

   apt install grub-efi-amd64

4. Initialize the boot tool:

   umount /boot/efi
   proxmox-boot-tool init /dev/sdXn

5. Remount and refresh:

   mount /dev/sdXn /boot/efi
   proxmox-boot-tool refresh

6. Persist the mount:

   blkid /dev/sdXn
   # Add to /etc/fstab:
   # UUID=XXXX-XXXX /boot/efi vfat defaults 0 1

7. Reload systemd:

   systemctl daemon-reload
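The fstab entry in step 6 can be generated from the blkid output instead of typed by hand. A small sketch (the `efi_fstab_line` helper is ours, shown on a sample UUID rather than a live blkid call):

```shell
# Hypothetical helper: build the /etc/fstab entry for the EFI partition
# from its filesystem UUID (normally taken from `blkid /dev/sdXn`).
efi_fstab_line() {
  printf 'UUID=%s /boot/efi vfat defaults 0 1\n' "$1"
}

# On a real host you would append it:
#   efi_fstab_line "$(blkid -s UUID -o value /dev/sdXn)" >> /etc/fstab
# Sample UUID for illustration:
efi_fstab_line "ABCD-1234"
```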
Check that IOMMU is properly enabled:
Intel:

dmesg | grep -e DMAR -e IOMMU

Look for messages like:
DMAR: Intel(R) Virtualization Technology for Directed I/O
AMD:

dmesg | grep -e AMD-Vi -e IOMMU

Look for messages like:
AMD-Vi: Interrupt remapping enabled
Verify interrupt remapping:

dmesg | grep 'remapping'

Expected output:
- Intel:
DMAR-IR: Enabled IRQ remapping in x2apic mode
- AMD:
AMD-Vi: Interrupt remapping enabled
If interrupt remapping is not supported:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf

Check IOMMU group isolation:
find /sys/kernel/iommu_groups/ -type l | sort -n -t/ -k5

For better readability, use this script:
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
echo "IOMMU Group ${g##*/}:"
for d in $g/devices/*; do
echo -e "\t$(lspci -nns "${d##*/}")"
done
done

Your GPU and its audio device should ideally share an IOMMU group that is isolated from other critical devices.
Find your GPU's vendor and device IDs:
NVIDIA:

lspci -nn | grep NVIDIA

AMD:

lspci -nn | grep AMD
# OR
lspci -nn | grep Radeon

Any GPU:

lspci -nn | grep -E 'VGA|Display'

Example output:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106M [GeForce RTX 3060 Mobile / Max-Q] [10de:2520] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
Note the hardware IDs (e.g., 10de:2520 and 10de:228e) β you'll need these for the next step.
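The IDs can also be extracted mechanically. The sketch below runs a grep against a captured sample of the lspci output above (on a real host, pipe `lspci -nn` in directly); only the xxxx:xxxx bracket groups match, so class codes like [0300] are skipped:

```shell
# Sample lspci -nn output (captured above); stands in for a live call.
sample='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106M [GeForce RTX 3060 Mobile / Max-Q] [10de:2520] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)'

# Keep only vendor:device bracket pairs and strip the brackets.
printf '%s\n' "$sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]'
# prints:
# 10de:2520
# 10de:228e
```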
Prevent the host system from using your GPU by blacklisting drivers and ensuring VFIO modules load early.
NVIDIA:

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf

AMD:

echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf

Intel:

echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf
⚠️ Only blacklist i915 if you're not using Intel graphics for the host display. If your host relies on the Intel iGPU, skip this line.
echo "vfio" >> /etc/initramfs-tools/modules
echo "vfio_pci" >> /etc/initramfs-tools/modules
echo "vfio_iommu_type1" >> /etc/initramfs-tools/modules
echo "vfio_virqfd" >> /etc/initramfs-tools/modules

Note: on kernel 6.2 and newer (Proxmox VE 8), vfio_virqfd has been merged into the core vfio module and can be omitted.

Then regenerate and sync the boot environment:
update-initramfs -u -k all
proxmox-boot-tool refresh

Run one of the following commands depending on your GPU type:
- NVIDIA:

  lspci -nn | grep -E 'NVIDIA'

- AMD:

  lspci -nn | grep -E 'AMD|Radeon'

- Intel (for integrated GPUs):

  lspci -nn | grep -E 'VGA|Display'
Look for output like:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:2520]
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:228e]
Note the vendor:device IDs β in this example, they are 10de:2520 and 10de:228e.
Replace <vendor_id:device_id> with your actual GPU IDs:
echo "options vfio-pci ids=<vendor_id:device_id>,<vendor_id:device_id> disable_vga=1" > /etc/modprobe.d/vfio.conf

Example (NVIDIA):
echo "options vfio-pci ids=10de:2520,10de:228e disable_vga=1" > /etc/modprobe.d/vfio.conf

Edit your GRUB configuration:
nano /etc/default/grub

Update the line based on your platform:
Intel:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction vfio-pci.ids=<vendor_id:device_id>,<vendor_id:device_id> vfio-pci.disable_vga=1"

AMD:

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction vfio-pci.ids=<vendor_id:device_id>,<vendor_id:device_id> vfio-pci.disable_vga=1"

Note: on the kernel command line, module options need the module-name prefix, so use vfio-pci.disable_vga=1 rather than a bare disable_vga=1.
⚠️ Replace <vendor_id:device_id> with your actual GPU IDs from the previous step. Do not use placeholder values like 10de:0000.
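Instead of editing the line by hand, the ids can be spliced in with sed. A hedged sketch, demonstrated on a scratch copy of the file so nothing on the host changes (swap in your own IDs and /etc/default/grub when doing it for real):

```shell
ids="10de:2520,10de:228e"   # substitute your actual IDs
grub=$(mktemp)              # scratch stand-in for /etc/default/grub
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"' > "$grub"

# Append vfio-pci.ids just before the closing quote of the existing value.
sed -i "s/^\(GRUB_CMDLINE_LINUX_DEFAULT=\".*\)\"/\1 vfio-pci.ids=${ids}\"/" "$grub"
cat "$grub"
```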
Then apply changes:
update-initramfs -u -k all
proxmox-boot-tool refresh
reboot

Some GPUs include USB and UCSI controllers that may not bind to vfio-pci at boot, even with correct initramfs and GRUB configuration. To ensure full isolation, persist driver_override using udev.
cat <<EOF > /etc/udev/rules.d/99-vfio-override.rules
SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{device}=="0x1ad6", ATTR{driver_override}="vfio-pci"
SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{device}=="0x1ad7", ATTR{driver_override}="vfio-pci"
EOF

udevadm control --reload

For the current boot, bind the devices by hand once (addresses from this example host; substitute your own):

echo vfio-pci > /sys/bus/pci/devices/0000:0c:00.2/driver_override
echo vfio-pci > /sys/bus/pci/devices/0000:0c:00.3/driver_override
echo 0000:0c:00.2 > /sys/bus/pci/drivers/xhci_hcd/unbind
echo 0000:0c:00.3 > /sys/bus/pci/drivers/nvidia-gpu/unbind
echo 0000:0c:00.2 > /sys/bus/pci/drivers_probe
echo 0000:0c:00.3 > /sys/bus/pci/drivers_probe

✅ This is only needed once. After reboot, the udev rule ensures vfio-pci binds automatically.
You can verify binding with:
lspci -nnk | grep -A 3 '0c:00'

Add KVM options for better GPU compatibility:
Quieter logs:

echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf

Minimal:

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

What these options do:
- ignore_msrs=1: Allows the VM to boot with GPU drivers that access unsupported MSRs
- report_ignored_msrs=0: Reduces log spam from ignored MSR accesses
Apply all changes:
update-initramfs -u -k all
proxmox-boot-tool refresh
reboot

After reboot, verify that your GPU is bound to VFIO:
# Check that your GPU is using vfio-pci driver
lspci -k | grep -A 3 -i nvidia # For NVIDIA
lspci -k | grep -A 3 -i radeon   # For AMD

You should see:
Kernel driver in use: vfio-pci
Check VFIO devices:
ls /dev/vfio/

Expected output:
<group_number(s)> devices vfio
If you have a second storage device for shared access between VMs:
mkfs.ext4 /dev/nvme1n1
mkdir /mnt/shared-projects
mount /dev/nvme1n1 /mnt/shared-projects
echo "/dev/nvme1n1 /mnt/shared-projects ext4 defaults 0 2" >> /etc/fstab

NFS server:

apt update && apt install nfs-kernel-server
echo "/mnt/shared-projects *(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-kernel-server
exportfs -ra

Samba server:

apt update && apt install samba
cat >> /etc/samba/smb.conf << EOF
[shared-projects]
path = /mnt/shared-projects
browseable = yes
read only = no
guest ok = no
valid users = root
EOF
smbpasswd -a root
systemctl restart smbd

Once your GPU is bound to vfio-pci, you can assign it to a virtual machine. Below are configuration templates for both Windows and Linux VMs.
name: windows-gpu
agent: 1
bios: ovmf
boot: order=scsi0
cores: 20
machine: q35
memory: 32768
numa: 1
ostype: win11
scsihw: virtio-scsi-single
efidisk0: local-lvm:vm-120-disk-0,efitype=4m,size=4M
tpmstate0: local-lvm:vm-120-disk-1,size=4M,version=v2.0,pre-enrolled-keys=0
scsi0: local-lvm:vm-120-disk-2,cache=writeback,discard=on,iothread=1,size=512G
smbios1: uuid=1824e48d-4ffa-4c0d-a13e-e38707b6a8d0
vga: none
hostpci0: 0b:00.0,pcie=1,x-vga=1
hostpci1: 0b:00.1,pcie=1
usb0: host=1-1
args: -cpu host,+kvm,+hypervisor,vendor=1234567890ab

Allocate the disks referenced above:

pvesm alloc local-lvm 120 vm-120-disk-0 4M    # EFI vars
pvesm alloc local-lvm 120 vm-120-disk-1 4M    # TPM state
pvesm alloc local-lvm 120 vm-120-disk-2 512G  # Main disk

name: fedora-gpu
agent: 1
bios: ovmf
boot: order=virtio0
cores: 8
machine: q35
memory: 16384
numa: 1
ostype: l26
scsihw: virtio-scsi-single
efidisk0: local-lvm:vm-110-disk-0,efitype=4m,size=4M
tpmstate0: local-lvm:vm-110-disk-1,size=4M,version=v2.0
virtio0: local-lvm:vm-110-disk-2,cache=writeback,discard=on,iothread=1,size=128G
net0: virtio=DE:AD:BE:EF:00:02,bridge=vmbr0
vga: none
hostpci0: 0b:00.0,pcie=1,x-vga=1
hostpci1: 0b:00.1,pcie=1
args: -cpu host,+kvm,+hypervisor,vendor=1234567890ab

Allocate the disks referenced above:

pvesm alloc local-lvm 110 vm-110-disk-0 4M    # EFI vars
pvesm alloc local-lvm 110 vm-110-disk-1 4M    # TPM state
pvesm alloc local-lvm 110 vm-110-disk-2 128G  # Main disk

To improve performance and compatibility, especially with GPU passthrough, use these flags depending on your host CPU:
Intel:

cpu: host,flags=+pcid;+invtsc;+aes;+x2apic;+vmx

- +pcid and +invtsc: Required for Windows timekeeping and performance
- +aes: Enables AES-NI acceleration
- +vmx: Ensures virtualization extensions are exposed
- +x2apic: Improves interrupt handling
AMD:

cpu: host,flags=+topoext;+invtsc;+aes;+svm

- +topoext: Required for proper CPU topology in Windows
- +invtsc: Fixes time drift issues
- +aes: Enables AES-NI acceleration
- +svm: Ensures AMD virtualization extensions are exposed
⚠️ Always use cpu: host to expose native CPU features to the VM.
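Proxmox stores each of the templates above as plain key: value lines in /etc/pve/qemu-server/<vmid>.conf. The sketch below writes a trimmed version of the Linux template to a scratch file just to show the on-disk format; on a live host you would instead use qm, e.g. qm set 110 -hostpci0 0b:00.0,pcie=1,x-vga=1:

```shell
# Write a trimmed VM config to a scratch path (the real path would be
# /etc/pve/qemu-server/110.conf on the Proxmox host).
conf=$(mktemp)
cat > "$conf" <<'EOF'
name: fedora-gpu
bios: ovmf
machine: q35
vga: none
hostpci0: 0b:00.0,pcie=1,x-vga=1
hostpci1: 0b:00.1,pcie=1
EOF

grep -c '^hostpci' "$conf"   # prints: 2  (GPU + its audio function)
```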
Set CPU governor to performance:
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Disable CPU mitigations (optional, reduces security):
# Add to GRUB:
GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=off"

VM configuration tips:

- CPU: Use host-passthrough for best performance
- Memory: Enable balloon memory for dynamic allocation
- Storage: Use VirtIO SCSI with cache=writeback
- Network: Use VirtIO network adapter
This Interactive GPU Passthrough Helper CLI menu helps you inspect your system for GPU passthrough readiness. Paste the entire block into your terminal to run it.
clear
echo -e "\e[1;34m=== Proxmox GPU Passthrough Helper Menu ===\e[0m"
echo "Choose an option:"
echo "1) Extract GPU Vendor:Device IDs"
echo "2) Check VFIO Binding Status"
echo "3) List IOMMU Groups"
echo "4) Run All"
echo "5) Exit"
echo ""
read -p "Enter your choice [1-5]: " choice
case $choice in
1)
echo -e "\n\e[1;34m🔍 GPU Vendor:Device IDs:\e[0m"
lspci -nn | grep -E 'NVIDIA|AMD|Radeon|VGA|Display' | while read -r line; do
echo -e "\e[0;36m$line\e[0m"
echo "$line" | grep -o '\[....:....\]' | tr -d '[]' | while read -r id; do
echo -e "\e[1;32m→ ID: $id\e[0m"
done
done
;;
2)
echo -e "\n\e[1;34m🔧 VFIO Binding Status:\e[0m"
lspci -nnk | grep -A 3 -E 'NVIDIA|AMD|Radeon|VGA|Display' | grep 'Kernel driver in use' | while read -r line; do
echo -e "\e[1;33m$line\e[0m"
done
;;
3)
echo -e "\n\e[1;34m🧬 IOMMU Groups:\e[0m"
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
echo -e "\e[1;35mIOMMU Group ${g##*/}:\e[0m"
for d in "$g"/devices/*; do
echo -e "\t\e[0;36m$(lspci -nns "${d##*/}")\e[0m"
done
done
;;
4)
echo -e "\n\e[1;34m🔍 GPU Vendor:Device IDs:\e[0m"
lspci -nn | grep -E 'NVIDIA|AMD|Radeon|VGA|Display' | while read -r line; do
echo -e "\e[0;36m$line\e[0m"
echo "$line" | grep -o '\[....:....\]' | tr -d '[]' | while read -r id; do
echo -e "\e[1;32m→ ID: $id\e[0m"
done
done
echo -e "\n\e[1;34m🔧 VFIO Binding Status:\e[0m"
lspci -nnk | grep -A 3 -E 'NVIDIA|AMD|Radeon|VGA|Display' | grep 'Kernel driver in use' | while read -r line; do
echo -e "\e[1;33m$line\e[0m"
done
echo -e "\n\e[1;34m🧬 IOMMU Groups:\e[0m"
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
echo -e "\e[1;35mIOMMU Group ${g##*/}:\e[0m"
for d in "$g"/devices/*; do
echo -e "\t\e[0;36m$(lspci -nns "${d##*/}")\e[0m"
done
done
;;
5)
echo -e "\n\e[1;31mExiting...\e[0m"
;;
*)
echo -e "\n\e[1;31mInvalid choice. Please run again.\e[0m"
;;
esac

💡 Tip: You can paste this directly into your terminal. It will display a menu and run the selected diagnostic tool interactively.
GPU not binding to VFIO
- Verify IOMMU is enabled in BIOS
- Check hardware IDs are correct
- Ensure drivers are blacklisted
VM fails to start
- Check IOMMU group isolation
- Verify OVMF firmware is installed
- Check VM logs in Proxmox
Black screen in Windows
- Install latest GPU drivers in VM
- Enable Hyper-V enlightenments
- Set vendor_id in VM config
Poor performance
- Use CPU host-passthrough
- Allocate sufficient RAM
- Use VirtIO drivers for storage/network
# Check IOMMU status
dmesg | grep -i iommu
# Check VFIO binding
lspci -k | grep vfio
# Check VM resource usage
pvesh get /nodes/$(hostname)/qemu/{vmid}/status/current
# Monitor GPU usage in VM
nvidia-smi   # Inside Windows VM with NVIDIA drivers

- GPU passthrough may bypass some host protections
- Keep host and VMs updated with security patches
- Use strong passwords and SSH keys
- Regularly back up VM configs and data
- Apply firewall rules for shared storage
- Avoid exposing Proxmox web UI to the internet without protection
This laptop uses a muxless hybrid GPU setup. The Intel Iris Xe iGPU handles host display, while the NVIDIA RTX 3060 Mobile dGPU is passed through to the VM for compute or gaming.
GRUB Configuration:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction vfio-pci.ids=10de:2520,10de:228e vfio-pci.disable_vga=1"

VFIO Configuration:
echo "options vfio-pci ids=10de:2520,10de:228e disable_vga=1" > /etc/modprobe.d/vfio.conf

💡 Tip: Because the dGPU has no direct display output, use remote desktop tools like Moonlight or Parsec to interact with the VM.
This desktop setup allows flexible passthrough of either GPU to different VMs. Ensure each GPU, together with its audio function, sits in its own IOMMU group.
GRUB Configuration:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction vfio-pci.ids=10de:1c03,10de:10f1,10de:1e07,10de:10f7 vfio-pci.disable_vga=1"

VFIO Configuration:
echo "options vfio-pci ids=10de:1c03,10de:10f1,10de:1e07,10de:10f7 disable_vga=1" > /etc/modprobe.d/vfio.conf

🎯 IDs used:

- 10de:1c03 = GTX 1060 GPU
- 10de:10f1 = GTX 1060 Audio
- 10de:1e07 = RTX 2080 Ti GPU
- 10de:10f7 = RTX 2080 Ti Audio
🔧 Tip: You can assign one GPU per VM or switch dynamically using PCI passthrough and VM hooks.