NVIDIA Optimus Laptop GPU Passthrough (ASUS TUF FX505DT)

Optimus Laptop GPU Passthrough

This guide assumes that you have a CPU that supports hardware virtualization and IOMMU. If you have questions regarding the prerequisites of this guide, you can check the ArchWiki. In case you are still confused about the prior steps, they are covered below as well.

If you are using FX505DT, then you're in luck.

This guide assumes that you have installed qemu virt-manager ebtables dnsmasq vde2 iptables-nft edk2-ovmf using your distro's package manager.

Setting up IOMMU

Go to the UEFI (BIOS) and enable:

If you are on an AMD platform: AMD-Vi

If you are on an Intel platform: Intel VT-d

Manually enable IOMMU support by editing the kernel parameters; a sketch of how to do this through GRUB follows below.

If you are on an AMD platform, add these parameters: amd_iommu=on iommu=pt

If you are on an Intel platform, add these parameters: intel_iommu=on iommu=pt
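
As a minimal sketch, assuming you boot with GRUB (other bootloaders have equivalent settings), you would append the parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the GRUB config:

# /etc/default/grub (AMD example; use intel_iommu=on on Intel)
GRUB_CMDLINE_LINUX_DEFAULT="... amd_iommu=on iommu=pt"

$ sudo grub-mkconfig -o /boot/grub/grub.cfg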

After adding the parameters, reboot your system. Then run dmesg | grep -i -e DMAR -e IOMMU in your terminal.

It should output this (results may vary):

AMD
[    0.767328] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.767383] pci 0000:00:01.0: Adding to iommu group 0
[    0.767396] pci 0000:00:01.1: Adding to iommu group 1
...
[    0.767796] pci 0000:06:00.0: Adding to iommu group 5
[    0.768438] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.768643] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    0.962655] AMD-Vi: AMD IOMMUv2 loaded and initialized
Intel
[    0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL  BDW      00000001 INTL 00000001)
[    0.000000] Intel-IOMMU: enabled
[    0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a
[    0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da
[    0.028950] IOAPIC id 8 under DRHD base  0xfed91000 IOMMU 1
[    0.536212] DMAR: No ATSR found
[    0.536229] IOMMU 0 0xfed90000: using Queued invalidation
[    0.536230] IOMMU 1 0xfed91000: using Queued invalidation
[    0.536231] IOMMU: Setting RMRR:
[    0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]
[    0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]
[    0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]
[    0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]
[    0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC
[    0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    2.182790] [drm] DMAR active, disabling use of stolen memory

Check IOMMU groups validity

Check whether your dGPU is in its own IOMMU group using this script:

#!/bin/bash
# List every IOMMU group and the PCI devices it contains
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
It should look like this:
IOMMU Group 0:
  00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 1:
  00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 PCIe GPP Bridge [6:0] [1022:15d3]
IOMMU Group 2:
  00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 PCIe GPP Bridge [6:0] [1022:15d3]
IOMMU Group 3:
  00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 PCIe GPP Bridge [6:0] [1022:15d3]
IOMMU Group 4:
  00:01.7 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 PCIe GPP Bridge [6:0] [1022:15d3]
IOMMU Group 5:
  00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
  00:08.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Internal PCIe GPP Bridge 0 to Bus B [1022:15dc]
  06:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 61)
IOMMU Group 6:
  00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Internal PCIe GPP Bridge 0 to Bus A [1022:15db]
IOMMU Group 7:
  00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
  00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 8:
  00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 0 [1022:15e8]
  00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 1 [1022:15e9]
  00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 2 [1022:15ea]
  00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 3 [1022:15eb]
  00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 4 [1022:15ec]
  00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 5 [1022:15ed]
  00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 6 [1022:15ee]
  00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Device 24: Function 7 [1022:15ef]
IOMMU Group 9:
  01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117M [GeForce GTX 1650 Mobile / Max-Q] [10de:1f91] (rev a1)
  01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
IOMMU Group 10:
  02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
IOMMU Group 11:
  03:00.0 Non-Volatile memory controller [0108]: Micron Technology Inc Device [1344:5410] (rev 01)
IOMMU Group 12:
  04:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX210/AX211/AX411 160MHz [8086:2725] (rev 1a)
IOMMU Group 13:
  05:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Picasso/Raven 2 [Radeon Vega Series / Radeon Vega Mobile Series] [1002:15d8] (rev c2)
IOMMU Group 14:
  05:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor [1022:15df]
  05:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1 [1022:15e0]
  05:00.4 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1 [1022:15e1]
  05:00.6 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller [1022:15e3]

In this case, we have to take note of IOMMU Group 9 and the IDs associated with it.

Isolating dGPU

We already took note of our dGPU's IDs (in this case, 10de:1f91 and 10de:10fa). Now we check which driver the dGPU is using with lspci -nnk:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117M [GeForce GTX 1650 Mobile / Max-Q] [10de:1f91] (rev a1)
	Subsystem: ASUSTeK Computer Inc. Device [1043:109f]
	Kernel driver in use: nvidia
	Kernel modules: nouveau, nvidia_drm, nvidia
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
	Subsystem: ASUSTeK Computer Inc. Device [1043:109f]
	Kernel modules: snd_hda_intel

We can see that nvidia is using our dGPU; if you are using AMD, it should be amdgpu. We want this to be vfio-pci so that we can pass the device to a VM. In case you are using optimus-manager, I suggest looking through this guide.

To make the dGPU use vfio-pci by default, we have to edit the kernel parameters again. Add this:

rd.driver.pre=vfio-pci vfio-pci.ids=10de:1f91,10de:10fa
  • rd.driver.pre - loads the vfio-pci driver early in the initramfs
  • vfio-pci.ids - binds the listed device IDs to vfio-pci
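
For reference, assuming GRUB again, the full set of parameters from this guide on an AMD system would look like:

GRUB_CMDLINE_LINUX_DEFAULT="... amd_iommu=on iommu=pt rd.driver.pre=vfio-pci vfio-pci.ids=10de:1f91,10de:10fa"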

After editing the kernel parameters, we will load the vfio-pci driver early via mkinitcpio or dracut.

Dracut

Create the file /etc/dracut.conf.d/10-vfio.conf and add this line:

force_drivers+=" vfio_pci vfio vfio_iommu_type1 vfio_virqfd "

And then regenerate your initramfs. If you are on EndeavourOS, you can just call dracut-rebuild.
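
On other dracut-based setups, regenerating is a sketch like this (recent dracut versions support --regenerate-all):

$ sudo dracut --regenerate-all --force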

mkinitcpio

Edit /etc/mkinitcpio.conf

MODULES=(... vfio_pci vfio vfio_iommu_type1 vfio_virqfd ...)
HOOKS=(... modconf ...)

And then regenerate your initramfs.
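
With mkinitcpio, regenerating every preset is a one-liner:

$ sudo mkinitcpio -P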

After regenerating your initramfs, make sure to reboot.

Checking vfio-pci driver is being used

Run lspci -nnk again and it should now show:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117M [GeForce GTX 1650 Mobile / Max-Q] [10de:1f91] (rev a1)
	Subsystem: ASUSTeK Computer Inc. Device [1043:109f]
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau, nvidia_drm, nvidia
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
	Subsystem: ASUSTeK Computer Inc. Device [1043:109f]
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel

Or by calling dmesg | grep -i vfio:

...
[    2.131876] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[    2.132068] vfio_pci: add [10de:1f91[ffffffff:ffffffff]] class 0x000000/00000000
[    2.939817] vfio_pci: add [10de:10fa[ffffffff:ffffffff]] class 0x000000/00000000
[   14.566597] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
...

The dGPU is now bound to the vfio-pci driver. But maybe you want to use your GPU on the host to play games. No worries: there is a way to switch back, and a way to automate it.

Test reattaching and detaching the dGPU

We can test reattaching the dGPU with just a few commands:

$ sudo virsh nodedev-reattach pci_0000_01_00_0
$ sudo virsh nodedev-reattach pci_0000_01_00_1
$ sudo rmmod vfio_pci vfio_pci_core vfio_iommu_type1
$ sudo modprobe -i nvidia_modeset nvidia_uvm nvidia_drm nvidia

You may have noticed the name pci_0000_01_00_0 and wondered where it comes from. It is derived from the PCI address shown by lspci -nnk (the set of numbers before the device type): the address 01:00.0, prefixed with the 0000 domain, translates to pci_0000_01_00_0.
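
Rather than translating addresses by hand, you can also list the libvirt device names directly:

$ virsh nodedev-list --cap pci | grep 01_00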

We can then detach the dGPU again for VFIO use:

$ sudo rmmod nvidia_modeset nvidia_uvm nvidia_drm nvidia
$ sudo modprobe -i vfio_pci vfio_pci_core vfio_iommu_type1
$ sudo virsh nodedev-detach pci_0000_01_00_0
$ sudo virsh nodedev-detach pci_0000_01_00_1

You can also check which modules the dGPU is using with lsmod | grep nvidia and then remove them with rmmod.

Automating the switch

In your .bashrc or .zshrc, you can add this function:

Script
function gpu-switch() {
    # PCI addresses are hardcoded for this machine; adjust them to your dGPU
    if lspci -nk | grep -q "Kernel driver in use: nvidia"; then
        echo "dGPU found... detaching..."
        sudo rmmod nvidia_modeset nvidia_uvm nvidia_drm nvidia &> /dev/null
        echo "NVIDIA drivers removed!"
        sudo modprobe -i vfio_pci vfio_pci_core vfio_iommu_type1 &> /dev/null
        echo "VFIO drivers added!"
        sudo virsh nodedev-detach pci_0000_01_00_0 &> /dev/null
        sudo virsh nodedev-detach pci_0000_01_00_1 &> /dev/null
        echo "dGPU detached!"
        echo "dGPU is now VFIO ready!"
    else
        echo "dGPU not found... attaching..."
        sudo virsh nodedev-reattach pci_0000_01_00_0 &> /dev/null
        sudo virsh nodedev-reattach pci_0000_01_00_1 &> /dev/null
        echo "dGPU attached!"
        sudo rmmod vfio_pci vfio_pci_core vfio_iommu_type1 &> /dev/null
        echo "VFIO drivers removed!"
        sudo modprobe -i nvidia_modeset nvidia_uvm nvidia_drm nvidia &> /dev/null
        echo "NVIDIA drivers added!"
        echo "dGPU is now host ready!"
    fi
}

If you have a better way of implementing this, please share it ^-^

Installing the Virtual Machine

First, get the virtio drivers and a Windows 11 ISO. You can get the Windows 11 ISO from Microsoft, or use Tiny11 for less bloat; with Tiny11 you will have to install some dependencies yourself if you are using the VM for gaming. Those dependencies can be installed inside the VM.

  1. Click the Create a New Virtual Machine icon.
  2. Select Local install media and click Forward.
  3. Click Browse and look for the OS ISO.
  4. Put win11 in the Choose the operating system you are installing.
  5. Set your desired memory and CPUs.
  6. Set your desired image size.
  7. You can keep the name win11, but in this guide we will name it win11-gpu. Make sure to tick Customize configuration before install.

Editing XML/VM Config

  1. Make sure Hypervisor line is KVM
  2. Make sure Chipset is assigned to Q35
  3. Select UEFI x86_64: /usr/share/edk2/x64/OVMF_CODE.secboot.fd as your Firmware to ensure that Windows 11 installs properly.
  4. Go to CPUs and tick Copy host CPU configuration (host-passthrough). Expand Topology, tick Manually set CPU topology, and set Sockets to 1, Cores to the number of cores your CPU has, and Threads to the number of threads per core.

In my case, I have an AMD Ryzen 5 3550H with 4 cores and 8 threads, and I want to pass 6 vCPUs, meaning my Cores would be 3 and Threads would be 2.
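
As a sketch, the resulting CPU section of the XML would look something like this (values assume the 3550H example above):

<cpu mode="host-passthrough" check="none">
  <topology sockets="1" dies="1" cores="3" threads="2"/>
</cpu>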

Later on, we will improve the performance of our VM by CPU pinning, i.e. reserving CPU cores so that Linux won't use them.

  5. Go to SATA Disk 1 and select VirtIO.
  6. Click Add Hardware, select CDROM device for Device type, select the virtio driver ISO, then click Finish.
  7. Go to NIC and select virtio as the Device model.
  8. Add your dGPU by clicking Add Hardware > PCI Host Device and selecting the dGPU. Make sure you add all the devices in your dGPU's IOMMU group.

In this case, I pass 0000:01:00.0 and 0000:01:00.1.

  9. Click Begin installation.

Inside the Windows VM

During installation, Windows might prompt you that there are no drives. To load the VirtIO driver, click Load Drivers, then Browse > virtio-win > amd64 > win11 / win10 > OK > Next. After that, you should be able to install Windows normally.

After the initial setup, you can install all of the required drivers by opening the virtio CD drive and running virtio-win-guest-tools. After doing so, you can remove the CDROM hardware for the OS ISO and virtio ISO.

Install the NVIDIA drivers using NVCleanstall

  1. Open the Microsoft Store, search for App Installer, and update it.
  2. Open PowerShell and enter winget install nvcleanstall. After installation, run it.

Add virtual monitor

You want to add a virtual monitor to eliminate the need for a dummy HDMI plug.

Install scoop

Open PowerShell and enter:

> Set-ExecutionPolicy RemoteSigned -Scope CurrentUser # Optional: Needed to run a remote script the first time
> irm get.scoop.sh | iex

Install IDD

Open PowerShell as administrator:

> scoop install git
> scoop bucket add extras
> scoop bucket add nonportable
> scoop install iddsampledriver-ge9-np -g

You can configure the virtual monitor by editing C:\IddSampleDriver\option.txt

Looking Glass

Looking Glass eliminates the need for a physical monitor; we will use it to view the virtual monitor we just made.

  1. First, enable XML editing: Virtual Machine Manager > Edit > Preferences > Enable XML editing.
  2. Add this to your XML file:
<devices>
    ...
  <shmem name='looking-glass'>
    <model type='ivshmem-plain'/>
    <size unit='M'>32</size>
  </shmem>
</devices>

To compute the 32, take into account the resolution you are passing through; in my case it is 1920x1080.

width x height x 4 x 2 = total bytes
total bytes / 1024 / 1024 + 10 = total MiB
1920 x 1080 x 4 x 2 = 16,588,800 bytes
16,588,800 / 1024 / 1024 ≈ 15.82 MiB; 15.82 + 10 = 25.82 MiB

The result must be rounded up to the nearest power of two, and since 25.82 is bigger than 16, we choose 32.
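
Here is the same calculation as a small shell sketch (WIDTH and HEIGHT are the only inputs):

#!/bin/bash
# Compute the Looking Glass shmem size for a given resolution
WIDTH=1920 HEIGHT=1080
BYTES=$((WIDTH * HEIGHT * 4 * 2))             # two 32-bit framebuffers
MIB=$(( (BYTES + 1048575) / 1048576 + 10 ))   # ceil to MiB, plus 10 MiB overhead
SIZE=1
while [ "$SIZE" -lt "$MIB" ]; do SIZE=$((SIZE * 2)); done   # next power of two
echo "shmem size: ${SIZE} MiB"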

  3. Create a file at /etc/tmpfiles.d/10-looking-glass.conf and add this:
f    /dev/shm/looking-glass    0660    <username>    kvm    -
  4. Run this:
$ sudo systemd-tmpfiles --create /etc/tmpfiles.d/10-looking-glass.conf
  5. Boot up your VM.
  6. Install the IVSHMEM driver via Device Manager > System Devices > PCI standard RAM Controller > Update Driver, then choose the manual option and browse to win10 > amd64.
  7. Install looking-glass-host; it should run automatically, hidden in the system tray.
  8. Install looking-glass-client and then run looking-glass-client -s -m 97.

If your keyboard has an Insert key, you can omit -m 97. If it doesn't, -m 97 makes Right Ctrl your way to access the Looking Glass keybindings.

You can now see your second monitor using Looking Glass, but you cannot use it yet, since the VM has no mouse and keyboard to interact with. If you have a spare mouse and keyboard you can pass those through; if you only have one pair, fear not.

Quality of Life configuration

Passing keyboard/mouse using Evdev

To know what to pass, you can cat the device nodes found in /dev/input/by-path or /dev/input/by-id. They usually end in ...-event-mouse or ...-event-kbd.

When cat-ing these and moving the mouse or typing/clicking, you will see activity in the terminal, confirming you have the right device. In this case, I'm passing /dev/input/by-path/platform-i8042-serio-0-event-kbd for my keyboard and /dev/input/by-id/usb-Razer_Razer_Viper_Ultimate_000000000000-event-mouse for my mouse.
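
A quick way to list the candidates (paths differ per machine):

$ ls -l /dev/input/by-id/ /dev/input/by-path/ | grep -E 'event-(kbd|mouse)'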

Edit the XML and add these lines:

  <devices>
    ...
    <input type='evdev'>
      <source dev='/dev/input/by-id/MOUSE_NAME'/>
    </input>
    <input type='evdev'>
      <source dev='/dev/input/by-id/KEYBOARD_NAME' grab='all' repeat='on' grabToggle='ctrl-ctrl'/>
    </input>
    ...
  </devices>

Passing Audio I/O using JACK and PipeWire

You can use this guide from Looking Glass documentation. You can also use this guide. First, install qemu-audio-jack and pipewire-jack.

Then edit /etc/libvirt/qemu.conf and add user = <username>.

After doing so, you can use qpwgraph to look up your audio device's name. Also, take note of your user ID using the command id. This guide assumes that your UID is 1000, the audio device is GS3 Analog Stereo, and the VM name is win11-gpu.
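
For example, to get just the numeric UID used in the qemu:env lines below:

$ id -u
1000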

Edit the XML configuration of the VM

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <devices>
    ...
    <audio id="1000" type="jack">
      <input clientName="win11-gpu" connectPorts="GS3 Analog Stereo"/>
      <output clientName="win11-gpu" connectPorts="GS3 Analog Stereo"/>
    </audio>
    <sound model="ich9">
      <alias name="sound0"/>
    </sound>
    ...
  </devices>
  <qemu:commandline>
    <qemu:env name="PIPEWIRE_RUNTIME_DIR" value="/run/user/1000"/>
    <qemu:env name="PIPEWIRE_LATENCY" value="512/48000"/>
  </qemu:commandline>
</domain>

Keeping only one monitor (the virtual one)

Make sure to change the Video QXL device to Video None to allow the IDD virtual monitor to become the main monitor; it will then immediately use the dGPU.

Hook Scripts for automatic GPU switching

As mentioned earlier in this guide, we will now automate making the GPU VFIO ready and host ready.

Qemu hook script

First, set up the hook script for the VM so that you won't have to call gpu-switch every time you want to use the VM or play on your host OS after using the VM.

# mkdir /etc/libvirt/hooks
# sudoedit /etc/libvirt/hooks/qemu
qemu

Paste these lines:

#!/bin/bash

GUEST_NAME="$1"
HOOK_NAME="$2"
STATE_NAME="$3"
MISC="${@:4}"

BASEDIR="$(dirname $0)"

HOOKPATH="$BASEDIR/qemu.d/$GUEST_NAME/$HOOK_NAME/$STATE_NAME"
set -e # If a script exits with an error, we should as well.

if [ -f "$HOOKPATH" ]; then
    eval \""$HOOKPATH"\" "$@"
elif [ -d "$HOOKPATH" ]; then
    while read file; do
        eval \""$file"\" "$@"
    done <<< "$(find -L "$HOOKPATH" -maxdepth 1 -type f -executable -print;)"
fi
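
Libvirt only picks up a new hook script after the daemon restarts; on a systemd-based distro:

# systemctl restart libvirtd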

Detach dGPU start script

Create the start script to automatically detach the dGPU from the host OS:

# mkdir -p /etc/libvirt/hooks/qemu.d/<vm-name>/prepare/begin
# sudoedit /etc/libvirt/hooks/qemu.d/<vm-name>/prepare/begin/start.sh

In this case, the vm-name is win11-gpu

start.sh
#!/bin/bash

# Run only if the GPU is still bound to the NVIDIA driver
if lspci -nk | grep -q "Kernel driver in use: nvidia"; then
  # Unload NVIDIA kernel modules
  rmmod nvidia_modeset nvidia_uvm nvidia_drm nvidia &> /dev/null

  # Load VFIO kernel modules
  modprobe -i vfio_pci vfio_pci_core vfio_iommu_type1 &> /dev/null

  # Detach the GPU devices from the host
  virsh nodedev-detach pci_0000_01_00_0 &> /dev/null
  virsh nodedev-detach pci_0000_01_00_1 &> /dev/null
fi

exit 0

Reattach dGPU stop script

Create the stop script to automatically reattach the dGPU to the host OS:

# mkdir -p /etc/libvirt/hooks/qemu.d/<vm-name>/release/end
# sudoedit /etc/libvirt/hooks/qemu.d/<vm-name>/release/end/stop.sh
stop.sh
#!/bin/bash

# Run only if the GPU is currently bound to vfio-pci
if lspci -nk | grep -q "Kernel driver in use: vfio-pci"; then
  # Reattach the GPU devices to the host
  virsh nodedev-reattach pci_0000_01_00_0 &> /dev/null
  virsh nodedev-reattach pci_0000_01_00_1 &> /dev/null

  # Unload VFIO kernel modules
  rmmod vfio_pci vfio_pci_core vfio_iommu_type1 &> /dev/null

  # Load NVIDIA kernel modules
  modprobe -i nvidia_modeset nvidia_uvm nvidia_drm nvidia &> /dev/null
fi

# Kill all Looking Glass processes
killall looking-glass-client

exit 0

Autorun Looking Glass

To save you a few clicks, automatically run Looking Glass once the VM has started:

# mkdir -p /etc/libvirt/hooks/qemu.d/<vm-name>/started/begin/
# sudoedit /etc/libvirt/hooks/qemu.d/<vm-name>/started/begin/looking-glass.sh
looking-glass.sh
#!/bin/bash

DISPLAY=:0 sudo -H -u <username> /usr/bin/looking-glass-client -s -m 97 -F &> /dev/null & disown

Making scripts executable

You can make them executable individually using chmod +x, or just do:

# chmod +x /etc/libvirt/hooks/{qemu,qemu.d/win11-gpu/prepare/begin/start.sh,qemu.d/win11-gpu/release/end/stop.sh,qemu.d/win11-gpu/started/begin/looking-glass.sh}

CPU Pinning

Reserve 2 CPU cores for your host OS and the rest for your guest OS. This way, when your host OS needs CPU time, it won't use the pinned cores, and your guest OS can use its cores without interruption.

First, run lstopo or lscpu -e to see your CPU topology. [The original gist shows screenshots of the lstopo and lscpu -e output here.]

In my case, I pin physical cores 1-3, which map to logical CPUs 2-7, leaving logical CPUs 0-1 (core 0) for the host.

<vcpu placement="static">6</vcpu>
<iothreads>1</iothreads>
<cputune>
 <vcpupin vcpu="0" cpuset="2"/>
 <vcpupin vcpu="1" cpuset="3"/>
 <vcpupin vcpu="2" cpuset="4"/>
 <vcpupin vcpu="3" cpuset="5"/>
 <vcpupin vcpu="4" cpuset="6"/>
 <vcpupin vcpu="5" cpuset="7"/>
 <emulatorpin cpuset="0-1"/>
 <iothreadpin iothread="1" cpuset="0-1"/>
</cputune>
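
Once the VM is running, you can verify that the pinning took effect (assuming the VM name win11-gpu):

$ sudo virsh vcpupin win11-gpu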

VirtIOFS

You can add a Host Filesystem through VirtIOFS.

First, enable Shared Memory in your configuration. It should look like this:

<memoryBacking>
    <source type="memfd"/>
    <access mode="shared"/>
</memoryBacking>

Next, Add Hardware > Filesystem. Point the source path to the directory you want to share and set the target path to any name; the target name is what the share appears as inside the guest.


This guide assumes you have installed the virtio drivers, including the virtio storage driver. If not, install virtio-win-guest-tools.

Then install WinFsp.

Lastly, set the VirtIO-FS service to start automatically: services.msc > VirtIO-FS Service > right-click > Properties > Startup type > Automatic.

Then press Start under Service status.

Intel Bluetooth USB Passthrough Error 2

This assumes you have applied the JACK audio configuration above, since the qemu XML namespace on the domain element is required for qemu:capabilities.

<domain>
  <devices>
    ...
  </devices>
  <qemu:capabilities>
    <qemu:del capability="usb-host.hostdevice"/>
  </qemu:capabilities>
</domain>

[Windows] Automatic Login

To use Registry Editor to turn on automatic logon, follow these steps:

  1. Click Start, and then click Run.
  2. In the Open box, type Regedit.exe, and then press Enter.
  3. Locate the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon subkey in the registry.
  4. On the Edit menu, click New, and then point to String Value.
  5. Type AutoAdminLogon, and then press Enter.
  6. Double-click AutoAdminLogon.
  7. In the Edit String dialog box, type 1 and then click OK.
  8. Double-click the DefaultUserName entry, type your user name, and then click OK.
  9. Double-click the DefaultPassword entry, type your password, and then click OK. If the DefaultPassword value does not exist, add it first: on the Edit menu, click New, point to String Value, type DefaultPassword, press Enter, then double-click DefaultPassword, type your password in the Edit String dialog, and click OK.
  10. Exit Registry Editor.
  11. Click Start, click Shutdown, and then type a reason in the Comment text box.
  12. Click OK to turn off your computer.
  13. Restart your computer. You can now log on automatically.

Note

If no DefaultPassword string is specified, Windows automatically changes the value of the AutoAdminLogon key from 1 (true) to 0 (false), disabling the AutoAdminLogon feature.

If you have joined the computer to a domain, you should add the DefaultDomainName value, and the data for the value should be set to the fully qualified domain name (FQDN) of the domain, for example contoso.com.