Hands on: Broadcom’s acquisition of VMware has sent many scrambling for alternatives.
Two of the biggest beneficiaries of Broadcom’s price hikes, at least on the free and open source side of things, have been the Proxmox VE and XCP-ng hypervisors.
At the same time, interest in enterprise AI has taken off in earnest. With so many making the switch to these FOSS-friendly virtualization platforms, we figured at least some of you might be interested in passing a GPU or two through to your VMs to experiment with local AI workloads.
In this tutorial, we’ll be looking at what it takes to pass a GPU through to VMs running on either platform, and go over some of the more common pitfalls you may run into.
Limitations and prerequisites
Before we make any config changes to our hypervisors, it’s important to understand some of the limitations of these platforms. For one, this guide covers PCIe passthrough, which means dedicating an entire GPU to a single virtual machine – you won’t be able to share one card between multiple guests.
Some Nvidia cards do support vGPU partitioning on Proxmox via Nvidia’s proprietary driver; however, this requires a virtual workstation license to use and is therefore beyond the scope of this tutorial.
At the time of writing, XCP-ng lacks support for vGPU style partitioning on Nvidia and only supports PCIe passthrough to the VM.
We’ll be focusing primarily on Linux guests. It should work fine for other operating systems, including Windows Server, but your mileage may vary.
This guide assumes you are running either XCP-ng 8.2 with Xen Orchestra 5.92, or Proxmox VE 8.2, and have an Nvidia or AMD graphics card already installed in the system.
Enabling PCIe passthrough on XCP-ng
To kick things off, we’ll start with XCP-ng – a descendant of the Citrix XenServer project – as it’s the easier of the two hypervisors to pass PCIe devices through on, at least in this vulture’s experience.
By default, graphics cards get assigned to Dom0 (the management VM) and are used for display output. However, with a couple of quick config changes, we can tell Dom0 to ignore the card so the hardware can be used for acceleration in another VM. If you still need a local display, plan to drive it from another GPU, such as the CPU’s or motherboard’s integrated graphics.
Before you get started, make sure that an IOMMU is enabled in the BIOS. Short for I/O memory management unit, and often referred to as Intel VT-d or AMD-Vi, this is what the hypervisor uses to strictly control which hardware resources each guest VM can directly access, ultimately allowing a given virtual machine to communicate directly with the GPU.
On server and workstation hardware, an IOMMU is usually enabled by default. But if you’re using consumer hardware or running into issues, you may want to check your BIOS to ensure it’s turned on.
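If you want to confirm that Xen can actually see the IOMMU before going any further, one quick sanity check – assuming you’re working from a Dom0 shell with the standard xl toolstack, which ships with XCP-ng – is to query the hypervisor’s capabilities:

xl info | grep virt_caps

If the output includes directio (or hvm_directio), the IOMMU is up and passthrough should be possible.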
Next, connect to your XCP-ng host via KVM or SSH, as shown above, and drop to a command shell. From here we’ll use lspci to locate our GPU:
lspci -v | grep VGA
lspci -v | grep audio
If VGA isn’t working, try one of the following instead:
lspci -v | grep 3D
lspci -v | grep NVIDIA
lspci -v | grep AMD
You should be presented something like this:
03:00.0 3D controller: Nvidia Corporation GP104GL [Tesla P4] (rev a1)
Next, note down the ID assigned to the GPU’s graphics compute and audio outputs. In this case it’s 03:00.0. We’ll use this to tell XCP-ng to hide it from Dom0 on subsequent boots. As you can see in the command below, we’ve plugged our GPU’s ID in after the 0000: to hide that specific device from the management VM:
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"
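Note that many GPUs expose a separate audio function alongside the graphics device (our Tesla P4 doesn’t, which is why only one ID appears above). If yours does, you’ll likely want to hide that function from Dom0 as well – the hide parameter accepts multiple devices, each in its own set of parentheses. A rough sketch, assuming the audio function sits at 03:00.1:

/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)(0000:03:00.1)"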
With that out of the way we just need to reboot the machine and our GPU will be ready to be passed through to our VM.
reboot
Passing a GPU to a VM in XCP-ng
With Dom0 no longer in control of the GPU, you can move on to attaching it to another VM. Begin by spinning up a new VM in Xen Orchestra as you normally would. For this tutorial we’ll be using the latest release of Ubuntu Server 24.04.
Once your OS is installed in the new virtual machine, shut down the VM, head over to the VM’s “Advanced” tab in the Xen Orchestra web interface, scroll down to GPUs, and click the + button to select your card, as pictured above. It will appear as passthrough once added.
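If you’d rather not click through Xen Orchestra, the same attachment can be made from the host’s command line with the xe toolstack. A rough sketch, assuming your VM’s UUID and the device address we noted earlier – substitute your own values:

xe vm-param-set other-config:pci=0/0000:03:00.0 uuid=<vm-uuid>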
With that out of the way, you can go ahead and start up your VM. To test whether we passed through our GPU successfully, we can run lspci again, this time from inside the Linux guest VM:
lspci -v
If your GPU appears in the list, you’re ready to install your drivers. Depending on your OS and hardware, this may require downloading driver packages from the manufacturer’s website. If you happen to be running an Ubuntu 24.04 VM with an Nvidia card, you can simply run:
sudo apt update && sudo apt install nvidia-driver-550-server
And if you want the CUDA toolkit, you’d also run:
sudo apt install nvidia-cuda-toolkit
If you’re running a different distro or operating system, you will want to check out the GPU vendor’s website for drivers and instructions.
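Assuming the driver installed cleanly, a quick way to confirm the guest can actually talk to the card is to query it with Nvidia’s management utility after a reboot:

nvidia-smi

If the card shows up with its name, driver version, and memory, you’re in business. AMD users can get a similar readout from rocm-smi, if the ROCm stack is installed.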
Now that you’ve got an accelerated VM up and running, we recommend checking out some of our hands-on guides linked at the bottom of this story.
If things haven’t gone smoothly, check out XCP-ng’s documentation on device passthrough here.
Enabling PCIe passthrough on Proxmox VE
Enabling PCIe passthrough on Proxmox VE is a little more involved.
As with XCP-ng, we need to tell Proxmox not to initialize the graphics card we’d like to pass through to our VM. Unfortunately, it’s a bit of an all-or-nothing situation with Proxmox, as the way we do this is by blacklisting the driver module for our specific brand of GPU.
To get started, install your GPU in your server and boot into the Proxmox management console. Before we go any further, we need to make sure that Proxmox actually sees the card. For this we’ll be using the lspci utility to list our installed peripherals.
From the Proxmox management console, select your node from the sidebar, open the shell, as pictured above, and then type in:
lspci -v | grep VGA
lspci -v | grep audio
If nothing comes up, try one of the following:
lspci -v | grep 3D
lspci -v | grep NVIDIA
lspci -v | grep AMD
You should see a print out similar to this one showing your graphics card:
2e:00.0 VGA compatible controller: Nvidia Corporation AD102GL [L6000 / RTX 6000 Ada Generation] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: NVIDIA Corporation AD102GL [RTX 6000 Ada Generation]
2e:00.1 Audio device: NVIDIA Corporation AD102 High Definition Audio Controller (rev a1)
        Subsystem: NVIDIA Corporation AD102 High Definition Audio Controller
Now that we’ve established that Proxmox can actually see the card, we can move on to enabling the IOMMU and blacklisting the drivers. We’ll be demonstrating this on an AMD system with an Nvidia GPU, but we’ll share steps for AMD cards, too.
Enabling IOMMU
Before we can pass through our PCIe device, we need to enable the IOMMU both in the BIOS and in the Proxmox bootloader. As we mentioned in the earlier section on XCP-ng, the IOMMU is the mechanism the hypervisor uses to make the GPU available to the guest VMs running on the system. In our experience it should be enabled by default on most server and workstation equipment, but is usually disabled on consumer boards.
Once you’ve got an IOMMU activated in BIOS, we need to tell Proxmox to use it. From the Proxmox management console, open the shell.
The next bit depends on how you configured your boot disk. Usually Proxmox will default to Grub for single disk installations and Systemd-boot for installations on mirrored disks.
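If you’re not sure which bootloader your installation ended up with, proxmox-boot-tool can tell you – its status output lists each configured ESP along with whether it’s set up for grub or uefi (systemd-boot):

proxmox-boot-tool status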
For Grub:
Start by opening your Grub config file. Feel free to use your preferred editor if nano isn’t your thing.
nano /etc/default/grub
Then modify the GRUB_CMDLINE_LINUX_DEFAULT= line. For Intel CPUs, it should read:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
Meanwhile, for those with AMD CPUs, the line should look like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
The Proxmox team also recommends adding iommu=pt to boost performance on hardware that supports it, though it’s not strictly required:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
Save and exit, then apply it by updating the bootloader:
update-grub
For Systemd-boot:
The process looks a little different for Systemd-boot but is pretty much the same idea. Rather than the Grub config, we’ll be editing the kernel cmdline file.
nano /etc/kernel/cmdline
Then add intel_iommu=on if you’ve got an Intel CPU, or amd_iommu=on if you’ve got an AMD one, to the end of the first line. You can also add iommu=pt if your hardware supports it. On an Intel-based system, the file should look something like this:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
Save and exit, then apply the change by executing:
proxmox-boot-tool refresh
Adding the VFIO kernel modules
Next we need to enable a few VFIO modules by opening the module config file using your editor of choice:
nano /etc/modules
And paste in the following:
vfio
vfio_iommu_type1
vfio_pci
If you’re trying this on an earlier version of Proxmox – older than version 8.0 – you’ll also need to add:
vfio_virqfd
Once you’ve updated the modules, force an update and then reboot your system:
update-initramfs -u -k all
reboot
After you reboot your system, you can check that they’ve loaded successfully by running:
lsmod | grep vfio
If everything worked properly, you should see the three VFIO modules we enabled earlier appear on screen. Check out our troubleshooting section if you run into any problems.
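You can also double-check that the IOMMU itself came up after the reboot by grepping the kernel log, as the Proxmox docs suggest – look for DMAR lines on Intel systems or AMD-Vi lines on AMD ones:

dmesg | grep -e DMAR -e IOMMU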
Blacklist the graphics drivers
Now that we’ve got an IOMMU successfully configured, we need to tell Proxmox that we don’t want it to load our GPU drivers. This can be done by creating a blacklist config file under /etc/modprobe.d/.
For Nvidia GPUs:
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
For AMD GPUs:
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
Finally refresh the kernel modules and reboot by running:
update-initramfs -u
reboot
Passing through your graphics card to a VM
Okay, we’ve officially arrived at the fun part. At this point, we should be able to add our GPU to our VM and everything should just work. If it doesn’t, head down to our troubleshooting section for a few tweaks to try as well as resources to check out.
Start by creating a new VM. The process should be fairly straightforward, but there are a couple of changes we need to make under the System and CPU sections of the Proxmox web-based VM creation wizard, as pictured below.
Under the System section:
- Change Machine Type to q35
- Set OVMF (UEFI) as the BIOS type and add an EFI disk
- Ensure the “Graphic card” is set to Default
Next, under the CPU section, shown above, ensure that Type is set to Host to avoid compatibility issues that can crop up with some GPU drivers and runtimes.
Once your VM has been created, go ahead and start it and install your operating system of choice. In this case, we’re using Ubuntu 24.04 Server edition.
After you have your OS installed, shut down the VM, head over to its Hardware config page, click Add, and select PCI Device, as shown above.
Next, select Raw Device and pick the GPU you’d like to pass through to the VM from the drop down. Then tick the Advanced checkbox and ensure that both ROM-Bar and PCI-Express are checked. If the latter is grayed out, you probably didn’t set the machine type to q35 in the earlier step. Optionally, you can repeat these steps to pass through the GPU’s audio device, if it has one.
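If you’d rather script this than click through the web UI, the same passthrough can be configured with qm from the Proxmox shell. A rough sketch, assuming a VM ID of 100 and the device address from our earlier example – leaving off the function number (.0) passes through all of the card’s functions, including its audio device:

qm set 100 --hostpci0 0000:2e:00,pcie=1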
With our PCIe device added, we can go ahead and start our VM up and use lspci to make sure the GPU has been passed through successfully:
lspci -v | grep VGA
Again, if nothing comes up, try one of the following:
lspci -v | grep 3D
lspci -v | grep NVIDIA
lspci -v | grep AMD
From here, you can install your GPU drivers as you normally would on a bare metal system.
About that secure boot error when installing Nvidia drivers
During installation, you may run into an error, shown below, because UEFI Secure Boot is enabled by default for OVMF VMs in Proxmox.
You can either reboot and disable Secure Boot in the VM BIOS (press the Esc key during the initial boot splash), or you can set a password and enroll a signing key in your EFI firmware.
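If you’re not sure whether Secure Boot is actually what’s tripping up the driver install, you can check its state from inside the guest. On Ubuntu, the mokutil utility (usually present alongside shim-signed) will report it:

mokutil --sb-state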
Troubleshooting Proxmox GPU passthrough
If for some reason you’re still having trouble passing your GPU through to a VM, you may need to make a few additional tweaks.
Enabling unsafe interrupts
If you’re having trouble getting the VFIO modules set up, you may have to enable unsafe interrupts by creating a new config file under /etc/modprobe.d/. According to the Proxmox docs, this can lead to instability, so we only recommend applying it if you run into trouble:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/unsafe_interrupts.conf
Configuring VFIO passthrough for troublesome cards
If you’re still having trouble, you may need to more explicitly tell Proxmox to let the VM take control over the GPU.
Start by grabbing the vendor and device IDs for your GPU. They’ll look a bit like this: 10de:26b1 and 10de:22ba. Note that if you’re using a server card, you may only have one set of IDs. To identify the IDs for our GPU, we can again use lspci:
lspci -nn | grep VGA
lspci -nn | grep Audio
Again, if VGA doesn’t work, try 3D, Nvidia, or AMD.
You should get something like this back:
2e:00.0 VGA compatible controller [0300]: Nvidia Corporation AD102GL [L6000 / RTX 6000 Ada Generation] [10de:26b1] (rev a1)
2e:00.1 Audio device [0403]: Nvidia Corporation AD102 High Definition Audio Controller [10de:22ba] (rev a1)
Now we can add those IDs – in this case, 10de:26b1 and 10de:22ba – to a new kernel module config file using echo. Remember to change the ID numbers to your card’s actual IDs:
echo "options vfio-pci ids=10de:26b1,10de:22ba" > /etc/modprobe.d/vfio.conf
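On some systems the host driver can still grab the card before vfio-pci gets a chance to bind to it. One common workaround is to add a softdep line to the same file so the GPU driver is only loaded after vfio-pci – swap nouveau for nvidia, amdgpu, or radeon as appropriate for your card:

echo "softdep nouveau pre: vfio-pci" >> /etc/modprobe.d/vfio.conf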
We can now refresh our kernel modules and reboot:
update-initramfs -u
reboot
To check that the vfio-pci driver has been loaded, we can execute the following and scroll up until you see your card:
lspci -nnk
If everything worked correctly, you should see something like this (note that vfio-pci is listed as the kernel driver in use), and you can head back up to the previous section to configure your VM:
2e:00.0 VGA compatible controller [0300]: Nvidia Corporation AD102GL [L6000 / RTX 6000 Ada Generation] [10de:26b1] (rev a1)
        Subsystem: Nvidia Corporation AD102GL [RTX 6000 Ada Generation] [10de:16a1]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
2e:00.1 Audio device [0403]: Nvidia Corporation AD102 High Definition Audio Controller [10de:22ba] (rev a1)
        Subsystem: Nvidia Corporation AD102 High Definition Audio Controller [10de:16a1]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
Putting your GPU accelerated VM to work
Now that you’ve got a GPU-accelerated VM, how about checking out one of our other AI-themed tutorials for ways you can put it to work…
We’re already hard at work on more AI and large language model-related coverage, so be sure to sound off in the comments with any ideas or questions you might have. ®
Editor’s Note: Nvidia provided The Register with an RTX 6000 Ada Generation graphics card to support this story and others like it. Nvidia had no input into the content of this article.