FOR PEOPLE HAVING THIS ERROR: bdsDxe: failed to load Boot0002 "UEFI QEMU QEMU HARDDISK"
uncheck the "Pre-Enroll keys" option and it will boot via uefi!
pls vote this up. I googled for 5hrs to find the source of the problem.
System: asus z590p, 11900k, 64gb kingston 2666.
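For reference, the same option can be set from the Proxmox CLI when creating the EFI disk. A minimal sketch, assuming VM ID 100 and a storage named local-lvm (adjust both to your setup):

qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0   # 0 = leave Secure Boot keys un-enrolled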
Excellent guide.
Do not forget to deselect Device Manager->Secure Boot Configuration->Attempt secure boot in the VM UEFI BIOS when installing TrueNAS. Access it by pressing the "Esc" key during the boot sequence. Otherwise you will get access denied on the virtual installation disk.
5 months later, this comment just saved me some headache.
@@wirikidor MERCI !!!
I literally just disabled secure boot and it worked (as now it's just UEFI and no disk space is needed). Hopefully that doesn't screw me down the road.
an hour of headache could've been solved by scrolling down. fml
"Don't virtualize truenas"
*Chuckles in 4 virtualized truenas servers in production*
STOP SAYING TH....
Wait.... nevermind :-D
Just like Stockton Rush always said.
REAL Men ALWAYS test in production.
@@sarahjrandomnumbers Lmao rip
I have been on the fence if I wanted to do truenas on bare metal or virtualize it and this sentence and Jeff's quick explanation on why made me feel a lot better about doing it.
@@shinythings7 you really don't lose much juice virtualizing anything nowadays.
Proxmox really should just make these options available in the UI.
Truly. I just don't think these things occur to them when they are processing feature adds and the like. They can be slow to adopt, like Debian, which is what it's based on.
Right? They have MOST of the UI, they just need the initialization bit to be UI-driven as well.
A full-featured product like Proxmox should have all of its functions available through its UI; "popping under the hood" with a terminal is an ugly solution, no matter how powerful it might be.
It's stupid easy in ESXi, too bad Broadcom killed it.
@@Solkre82That's where I'm coming from too. Moving from esxi to Proxmox - if my passthrough setup can be replicated in PVE...
@@manekdubash5022 I'm sure it can, just not as simple. I archived my ESXi 8 ISOs and Keys so I'm not worried about moving for a few years.
Who knows, Broadcom might decide to do good.. HAHAHAHA my sides hurt!
I just have to say, I spent hours trying to get my GPU to passthrough correctly, and your one comment on Memory Ballooning just fixed it! Thank you so much! I didn't even see anything about that mentioned in any of the official documentation!
Wish after so many years there was a simple gui option for this. Appreciate the guide!
These tutorials are so much more useful than Network Chuck's, and you don't seem like a shill trying to sell me something constantly.
Network Chuck is only good for ideas not how-to guides. He’s more of a cyber influencer to me.
This is actually such a good point. I barely/rarely watch Network Chuck anymore. He just feels fake to me now. Almost unwatchable. I haven't seen one of his videos in months.
seems like a good starting point for newbies or kids. I won't knock him for making the stuff sound exciting but I definitely grew out of his style.
I can't fucking stand that guy. "Look at my beard! Look, I'm drinking coffee! Buy my sponsored bullshit!"
@@johndroyson7921 he's what got me into networking/homelab. He made it fun and entertaining, but now that I am getting more knowledgeable about this stuff, I watch him less and less
I've been waiting for this. I already have 2 Erying systems as my Proxmox cluster, after your first video on this, and they've been working perfectly for me, but when you originally said you couldn't get HBA passthrough to work properly, I held off buying a 3rd, as I wanted the 3rd for exactly what you've done in this video, and to have a 3rd node for Ceph. Now that I can see you figured it out using a SATA card, I'm off to order all the bits for the 3rd node.
Thank You, and after I order everything, I'll pop into your store to buy some glassware to show some appreciation.
Jeff - Just wanted to give an extreme thank you for the quality and content of your videos. I just finished up my TrueNAS Scale build using your guidance and it worked like a charm. I did use an Audheid as well, but the K7 8-bay model. I went with an LSI 9240-8i HBA (flashed P20 9211-8i IT Mode) and the instructions on Proxmox 8 you provided were flawless and easily had my array of 4TB Toshiba N300's available via the HBA in my TrueNAS Scale VM. Lastly, a shout out to your top-notch beer-swillery as I am an avid IPA consumer as well! (cheers)
Thank you for sharing your experience! It was incredibly helpful in getting GPU passthrough to work. However, I needed to make a few adjustments:
In Proxmox 8, /etc/kernel/cmdline does not exist. Instead, I entered the settings in /etc/default/grub as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.modeset=0 intel_iommu=on iommu=pt video=efifb:off pci=realloc vfio-pci.ids=10de:1d01"
It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere. These are crucial because many motherboards use shadow RAM for PCIe Slot 1, which can hinder GPU passthrough if not configured properly. With this setup, I believe all your GPUs should function correctly. Additionally, I had to blacklist the NVIDIA drivers.
hey, nice addition indeed! what about the audio card? this is my pain... can you give me some hints about that? thx in advance.
"It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere." So where do these parameters get added/edited?
@@w33dp0w3r if you have GPU passthrough you can use the monitor (HDMI/DP) for audio, or pass through a USB card (like I did). Some monitors have an audio out port on them, but it only works over HDMI or DP.
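For the GRUB route described a few comments up, the apply-and-verify sequence looks roughly like this (a sketch for a GRUB-booted Intel host; the exact parameter list depends on your hardware):

nano /etc/default/grub            # edit the GRUB_CMDLINE_LINUX_DEFAULT= line
update-grub                       # write the new kernel command line into the boot config
reboot
dmesg | grep -e DMAR -e IOMMU     # should show DMAR / "IOMMU enabled" lines afterwards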
For anyone who was confused like me there are 2 bootloaders, GRUB and Systemd-boot.
/etc/kernel/cmdline only exists with Systemd-boot and this bootloader is used when Proxmox is installed on ZFS.
Therefore, anyone with UEFI and not booting from ZFS should follow the GRUB instructions.
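If you are not sure which case applies to your install, proxmox-boot-tool will tell you. A rough sketch of the systemd-boot path (parameters are the usual Intel example):

proxmox-boot-tool status          # reports whether GRUB or systemd-boot manages the boot partitions
nano /etc/kernel/cmdline          # append intel_iommu=on iommu=pt to the single existing line
proxmox-boot-tool refresh         # rewrites the boot entries; then reboot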
As always, you're Jeff... Is there a situation where you aren't Jeff? Like maybe Mike? Or Chris?
I kind of like being Jeff.
@@CraftComputing Yeah it would be weird if you woke up as Patrick from STH.
That would be weird. I'd be a whole foot shorter.
@@CraftComputingDepends if you're cosplaying as an admin that day or not
@@CraftComputingme too
Another little addition to this. It seems that you still need to add GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" to the /etc/default/grub boot cfg file if using the legacy grub boot menu. The legacy grub boot menu is still the default if installing ext4 onto a single drive.
Can we just take a step back and marvel at how not only is this all possible, but it also won't cost a dime in software?
it's possible but at a cost. you'll sacrifice quite a lot in performance: the GPU will be working at maybe 50%, and NVMe drives connected through M.2 slots at 1/4 of full speed.
Are you planning a video on USB and or PCI passthrough to LXC containers? Something about cgroups and permissions never could get it to work.
6:55 Did the Path change? I only have install.d, postinst.d and postrm.d in the /etc/kernel directory.
Hey Jeff, I had issues passing through a GPU with the exact same hardware until I pulled the EFI ROM off the GPU and loaded it within the VM config PCI line. Adding the flag bootrom="" to the line in the VM config, pointing to the rom, should do it. I think this is because the GPU gets ignored during the motherboard EFI bootup so the VROM gets set to legacy mode. When trying to pass it into an EFI VM it won't boot since the VROM doesn't boot as EFI
Could you explain a little more on how you got that working? I still can't get GPU passthrough working on my 11900h ES erying mobo.
Also did you mean "romfile=" ?
After looking at his documentation, I think you're onto something here.
@@boredprince bootrom="" seemed to be the wrong parameter and removed the GPU from the hardware. romfile seemed to be accepted but the VM failed to startup. So not sure this is the fix (for me).
I had to do this too for my system. I think I used a WinPE image + GPU-Z to pull the rom off the card and then in the config for my VM i used the following:
hostpci0: 09:00,pcie=1,x-vga=1,romfile=GP104_Fixed.rom
SR-IOV and IOMMU are completely orthogonal features and enabling one will not magically make the other work. SR-IOV simply lets the kernel use a standard way of telling PCI-E devices to split themselves into virtual functions. SR-IOV does not require an IOMMU, and IOMMU does not require SR-IOV.
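As an aside, creating SR-IOV virtual functions on a supported NIC is just a sysfs write; a small sketch, with the interface name as a placeholder:

cat /sys/class/net/enp1s0f0/device/sriov_totalvfs    # how many VFs the card supports
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs # create 4 VFs
lspci -nn | grep -i "virtual function"               # each VF appears as its own PCIe device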
6:45 -- for systems on PVE 8.2, you'll want to modify the grub boot settings at /etc/default/grub: append the same iommu text to the string value assigned to GRUB_CMDLINE_LINUX_DEFAULT, then execute update-grub.
Hey Jeff, quick tip: you can use the YouTube sections in the timeline to add timings so people can easily skip to where they need help.
The SponsorBlock extension allows you to skip ads and see where you should start, try it
This tutorial series is top notch. Thank you so much, Jeff!
@CraftComputing
Can't see the dmesg log in the description? ...and it's not attached to the Google link??!?!
Heh... funny story. I was working on getting Intel's UHD SR-IOV to work, so I would do a video on QuickSync passthrough, and I nuked my Proxmox install. Hadn't captured the dmesg yet 😂😭
@@CraftComputing OMG 😭😱 Well I'm sure you tried all the kernel parameters there are. Just thought it would be fun to have a look 🤪
@@CraftComputing There is a possibility that you need to load the graphics BIOS separately first before your passthrough works correctly. If you'd like to try it again later let me know, or if you want more information; there is a good YT video doc from Unraid about this problem.
I've run it like this for many years now, currently with 2 different 1060 GTX cards... for both I needed to dump the GPU BIOS and give it to the QEMU engine as information to load.
This also fixes many issues with the passthrough in combination with the audio device, and fixed problems with VM reboots or resets where the card would just hang and freeze in its old state.
With the GPU BIOS given to QEMU/KVM all these problems get solved and the hardware is resettable for the guest, which solves many problems.
Been waiting for this. All the pcie passthrough write ups are old and outdated, and the only one that worked for me on prox 7.4 was yours.
Tutorials: update-grub
Proxmox 8.0: "What's a grub?"
@@CraftComputing exactly!
Quickly, for clarification's sake: q35 means UEFI and i440fx or whatever is BIOS boot?
Half the tutorials say to do one or the other, and this is the first time I have heard it mentioned otherwise, unless I just forgot 😅.
@@lilsammywasapunkrock
Both machine types support bios and uefi.
The primary difference between q35 and i440fx is that q35 uses PCI-e while i440fx uses the old PCI.
If I remember correctly, I was able to use PCI-e passthrough with i440fx but only for one device at a time.
I personally don't see any point in using i440fx in modern systems with modern host operating systems.
^^^ Bingo
Thank you for this video, it was very helpful! In particular the comment about memory ballooning not being supported and why was a HUGE help, I had not seen that mentioned anywhere else. Also the need to map the audio as well as video was a helpful point.
FYI, the instructions don't work if you're using GRUB. These instructions appear to be specific to systemd-boot.
You'll need to look in /etc/default/grub rather than /etc/kernel/cmdline to make the kernel command line changes.
You're a damn wizard! :v Thxx Mr Magical Pants!
For your next tutorial I'd love to see you get some VMs running with their storage hosted on the truenas VM!
You definitely CAN pass through your primary GPU to a VM...
I've been running a setup like this for a few years now. The 'disadvantage' is that a monitor for the Proxmox host is not available anymore, and until the VM boots, the screen says 'loading initramfs'.
Yes, definitely - and the Proxmox UI is used through SSH from another device anyway, as it usually isn't a thing to run the UI on the Proxmox server's GPU itself.
It can be handy though to have another means of connecting a GPU to the system if the SSH-interface is messed up - I use a thunderbolt eGPU in such circumstances...
Been searching for this for the past week or so. Love your work Jeff. Cheers
Me too, since the upgrade failed on my HP Z440 with a Xeon 2690 and a Tesla M40 24G. Cheers
Thank you! Every time I'm stuck on a project in my home lab, you tend to have just the video I need and explain it very well!
would be interested in an LXC tutorial with GPU passthrough / sharing to it... especially with something like an Intel NUC with only 1 integrated GPU, or maybe just sharing / passthrough of the integrated GPU in general
it's not passthrough for lxc, it'd be just using the host gpu directly in a virtual environment. it's the same kernel
I had to reinstall proxmox for the first time in over a year. This guide was very much needed today. Thanks
There is no 'cmdline' in /etc/kernel :(
I created the /etc/kernel/cmdline file as well as edited GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub. Not sure which one ended up making iommu work though
Thanks Jeff, you saved me a LOT of frustrating research :-) I just managed to passthrough a couple of network interfaces to a microvm within my NixOS server, and it just took me a couple of hours, I expected to spend all night on it :-D
Thank you for this. I couldn't get hardware transcoding working properly. I turned off ballooning on the VM and BAM! It works. HUZZAH!
12:48 -- This source mentions IRQ remapping, which I think actually does allow the primary monitor of the server and a VM to 'share' the GPU. Have not tested it yet.
You ever get PCIE pass through working for the x16 slot? Looking forward to part 4 😊
there is no /etc/kernel/cmdline
mine toooo... root@pve1:~# ls /etc/kernel/cmdline
ls: cannot access '/etc/kernel/cmdline': No such file or directory
Sierra Nevada is one of the best beers out there, hazy little thing is amazing
Can someone help me? At 14:50 you mention the vfio config. You show ####.####,####.#### . Which hex IDs are those? From the graphics card and the audio controller? Or the graphics card and the subsystem? Which IDs do you choose? In your written tutorial you don't specify it either... please? Thank you!
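In case it helps anyone with the same question: those are vendor:device ID pairs, normally one for the GPU function and one for its HDMI audio function. A sketch of how to find and use them (the IDs below are examples only, not the ones from the video):

lspci -nn | grep -iE "vga|audio"             # e.g. 01:00.0 VGA [10de:1c82] and 01:00.1 Audio [10de:0fb9]
# add them to /etc/modprobe.d/vfio.conf:
options vfio-pci ids=10de:1c82,10de:0fb9
update-initramfs -u -k all                   # then reboot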
Hi Jeff,
Are there any drawbacks (i.e. Performance) not blacklisting your GPU from the host Proxmox O.S.? Currently I have GPU pass through working but I didn't black list that GPU from the host O.S. and everything seems to be working without issues.
Thanks!
Same here. I did everything except the Proxmox blacklist and got it working in a Win11 VM.
I also checked the "PCI Express" box on the pass-through model in Proxmox for the video card. It did not work without this.
Additionally, my 1070 GTX needed a dummy HDMI plug (or external monitor) to initialize correctly.
If you can convert a video or see apps use cuda without crashing the VM then no, you are completely golden.
Great video! Waiting for one about SR-IOV, I tried using virtual functions on my Intel I350-T4 NIC and got nowhere with it
Flash vBIOS to force GPU into UEFI mode and disable Legacy mode at boot ?
Do you need to alter any of those CLI strings depending on chipset connected PCIe lanes vs. direct CPU lanes ?
I've followed the instructions, but as soon as I add my HBA as a PCI device being passed through, my VM will just boot loop saying no boot device found. I checked the boot order and made sure it only had the LVM where TrueNAS was installed, but it still does this. If I remove the PCI device, TrueNAS boots fine.
Hey Jeff, have you ever tried Unraid? Would like to know your point of view on it
a particular reason not to pass through disks before installing is to make it easier not to mess up the installation drive, so it's good advice indeed
I really like this series on Proxmox
I've moved all of my hypervisor duties from Unraid to Proxmox, but I gotta give kudos to Unraid for how easy they make hardware passthrough. A single checkbox to prepare the device for passthrough, reboot, then pass that bish through. Echoing the wishes from other commenters that Proxmox adds the passthrough prep steps to the GUI. There's a thousand different guides for passthrough on Proxmox and 1000 different ways to do it, it's hard to know which is correct or best.
Man, I ran TrueNAS in a VM for years now. I never ran into issues.
I know this is older, but is there a reason you didn't select the PCI Express Option when adding the Passthrough and Primary (for GPU)? (Timestamp 11:00 and 15:50)
Hi mate, at 14:48 when you add the ids, does it matter if you put them as xxxx:xxxx or xxxx.xxxx?
I have a 3700x and a p1000. Is it not possible to use this p1000 for plex transcoding since proxmox requires a display?
I do have a quadro k620 that could go into an x1 slot with an adapter. Would this resolve my issue?
Thank you for the write-up, especially addressing upfront EFI vs legacy boot config for IOMMU (intel_iommu=on).
Great video 👍
Kindest regards, neighbours and friends.
Efi booted host, cards don't have efi firmware on them, so the vbios doesn't get mirrored into memory.
Get a dump of the vbios, and add it as a vbios file in the pci device section of your VM config.
DOH! You're probably right.
I would love an explanation of this comment or further resources. I don't understand efi, vbios, why and how that gets mirrored, or really anything that was said.
@@dozerd42 when a physical system boots, it copies the contents of your video card bios (vbios) into main system memory, into the memory region reserved for communicating with the card.
Some cards have a uefi firmware in addition or instead of a traditional vbios.
Without it though, the card won't initialize the display output during boot.
In this case, the cards didn't initialize during boot at all, so providing the video bios to the VM gives it an opportunity to initialize the card on its own.
While you can technically usually boot cards without supplying it, what will often happen is that the in memory copy will become overwritten in some cases - like if that memory region is needed for texture storage at some point.
When that happens it's necessary to reload the vbios from the card, but if you don't supply the vbios separately, sometimes this reload fails, which will hard lock your host.
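If you would rather dump the vBIOS from the Proxmox host instead of using GPU-Z under Windows, something along these lines generally works (PCI address and filename are placeholders):

cd /sys/bus/pci/devices/0000:09:00.0
echo 1 > rom                              # unlock the ROM for reading
cat rom > /usr/share/kvm/gpu-dump.rom     # Proxmox resolves romfile= relative to /usr/share/kvm
echo 0 > rom                              # lock it again
# then reference it in the VM config, e.g.: hostpci0: 09:00,pcie=1,x-vga=1,romfile=gpu-dump.rom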
Unfortunately, I'm receiving the error "No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync."
Wahoo!! Your directions worked! Thanks. I'm installing Ollama LLM on a VM and want to passthrough the GPU, which worked thanks to you! I'm using an Intel based i7 Dell 3891, GTX 1650, and current Proxmox.
This has been a life saver. I finally was able to passthrough my 6700 XT for jellyfin hardware encoding.
On the "Proxmox isn't the best tool for ZFS file server duties argument".. that's mostly right, however, your friends at 45 drives' Houston UI (running in cockpit) does a solid job at all the missing responsibilities you listed that TrueNAS typically handles. I personally still prefer TrueNAS myself, but you can run the Houston UI webgui and standard Proxmox webgui on the same box.
I prefer separation of concerns and staying as close to default settings and usage as possible in order to be able to update much more easily.
So if I needed or wanted to use ZFS (which I currently don't), I'd have gone for TrueNAS, possibly in a VM. I don't feel as comfortable with Proxmox (I am currently managing VMs and containers by hand or through Cockpit on my Ubuntu set up), though while it works, it's not that robust depending on what you do and it also requires a ton of manual work.
Does someone know why I don't have a cmdline file (/etc/kernel/cmdline)? There is none. I have installed Virtual Environment 8.0.3 and also tried 8.1.2
the same issue in 8.1.10. Where the heck is cmdline?🤔
Great video, I enjoy your server content a lot when it's this kind of set up.
Just started to follow your tutorial and already at the beginning I encounter an issue: there is no cmdline file in /etc/kernel... What am I supposed to do?
It's awesome having a homelab, but not as awesome when you've put the server in a mildly inaccessible spot, headless. Especially when you follow a PCI passthrough tutorial and the system reboots and doesn't come back.
Great video! I wrote a hookscript a while ago to aid in PCIe passthrough. I found it useful to use specifically with a Ryzen system with no iGPU. It dynamically loads and unloads the kernel and vfio drivers so when say a windows gaming VM is not in use, the Proxmox console will re-attach when the VM stops. Could be useful for other devices too! If anyone is interested let me know, I'll try to point you to the Github gist. I don't think YouTube likes my comment with an actual link. :)
What's the name of the repo? We'll just search for it.
@@jowdyboy Yes, seconded - sounds useful. Any idea if it works with NVidia?
I use it with Nvidia, I've tried to post several comments, but I'm assuming they keep getting flagged.
What's the repo name?
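Since the gist itself keeps getting filtered, here is a rough sketch of the same idea built on Proxmox's hookscript mechanism (not the commenter's actual script; the PCI address is a placeholder):

#!/bin/bash
# /var/lib/vz/snippets/gpu-hook.sh -- attach with: qm set <vmid> --hookscript local:snippets/gpu-hook.sh
vmid="$1"; phase="$2"; dev="0000:0a:00.0"
case "$phase" in
  pre-start)                                      # hand the GPU to vfio-pci before the VM starts
    modprobe vfio-pci
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
    [ -e /sys/bus/pci/devices/$dev/driver ] && echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    echo "$dev" > /sys/bus/pci/drivers_probe
    ;;
  post-stop)                                      # give it back so the host console can re-attach
    echo "" > /sys/bus/pci/devices/$dev/driver_override
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/unbind
    echo "$dev" > /sys/bus/pci/drivers_probe
    ;;
esac
exit 0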
12:42 helped me to root cause my problem: Proxmox was using my NVIDIA Quadro P2000 as my primary display source. In the BIOS, I had to go to Advanced -> AMD PBS -> Primary Graphics Adaptor and set it to D-Sub. My mobo has onboard graphics but my CPU does not.
1. Make sure that the VM UEFI is set to EFI mode and not CSM mode. The EFI should be loading the drivers for the card at boot time; CSM could stop the GPU from passing through.
2. If you have two identical GPUs, consider cross flashing the vbios with one from competing AIB with the same specs. The new vbios will change the pcie id for the card without changing the functionality, letting you split up the two cards under iommu
The latter isn't needed: as they are in different slots they have different bus IDs and thus should never collide with IOMMU. You are still able to assign them to different VMs.
@@FlaxTheSeedOne but not to use one on the host and one for passthrough
@@Momi_V you don't need to use one for the host. turn on the serial console if your cpu doesn't have an integrated gpu.
@@omegatotal I know. But the original comment provides a way to solve the "two identical GPUs" issue (which is inherent to this method of passthrough, not just on Proxmox where serial is an option) that also applies to other virtio passthrough scenarios (like a desktop/workstation virtualization setup). And it's not solved by different slots (which the comment I replied to implied), though I must admit pcie id is not the right term, vendor/device id is a more accurate name
@@FlaxTheSeedOne I would have thought that too but it contradicts what was said in the video and seems to be a quirk of the Chinese motherboard with mobile cpu
Hi. I am currently investigating the idea of creating a Proxmox server to run various things, including MacOS, since I definitely need/want that one for audio. I can't really find a clear answer, so I feel like asking you this: is it feasible to have low-latency audio on a VM? Not remotely, locally of course, through a USB audio interface. I feel like PCI passthrough on a dedicated USB card can give me something viable, but I'm not completely sure. Maybe I can just pass through my USB controller on the motherboard? But in the end, will it provide me something usable for realtime audio treatment, as in "I plug my guitar into the audio interface, and I hear its sound, processed by the computer, on my loudspeakers in real-time with a low latency, under, say, 15/30ms"?
i always kinda like iommu as a name
it's a mouthful but at least it's not easily confused with the many other acronyms
i remember on some of the low end aorus gaming boards it used to be under overclocking settings > cpu > miscellaneous cpu settings
I always think that IOMMU is just the thing that Doctor Strange battles in the movie.
I was able to passthrough an RTX A2000 with my Eyring i9 12900H motherboard . I populated 2 of the 3 nvme ports though.
At first, adding a GPU to one of my VM's also did not work as you pointed out.
I made it work by deleting that VM ( Debian 12 ), creating it again from scratch, BUT before the first boot, add PCI device and select your GPU.
Go through the installation process, and once done, lspci showed my GTX 1060 6G in the list.
Hope this helps anyone else looking for this.
Just getting into my own homelab after watching for a while. Got an old ThinkCentre that I'm going to have a tinker with before fully migrating a Windows 11 PC with Plex etc. This video series is great
I was surprised that GPU passthrough to Debian or Windows based VMs worked out of the box on my machine. I never configured anything inside Proxmox. I made sure that the UEFI BIOS was set up correctly. But that was it. Has been running great for months. (I'm using an AMD 5900X on an MSI X570 Gaming Plus with a 1080 Ti)
So, somewhat silly question for @craftcomputing and the hive mind. Do you need (or should you use) dummy plugs on each of the graphics cards (pcie or integrated) in virtualized environments like this. Will this help their respective "systems" function better?
One important thing I ran into installing TrueNAS Scale on Proxmox 8.0.4.
When you add the EFI storage disable Pre-Enroll keys.
Failure to do so can cause the error: bad shim signature
THANK YOU THANK YOU !!!!!! Was pounding my head on the wall trying to figure that one out.....
@@deanolivas3011 I was right there doing the same thing 3 nights ago. Gave up came back the next day and after working through a bunch of suggestions ran into this at the bottom of one trunas forum... I figured I would share this with anyone watching the video...
I've had ballooning enabled on proxmox 7 and it still worked. I wonder if ballooning knows which areas need to be directly mapped and still works normally.
Thank you sir! Just by adding a new physical NIC to TrueNAS, my write speed increased by 3x on my ZFS pool! I had saturated the single NIC I had on board with a lot of LXCs and VMs
In my proxmox setup all I did was add iommu=on and selected the device on vm. Didn't have to do any of blacklists or anything. Maybe that's your issue.
Depends on the GPU. On the Enterprise grade video cards like the Nvidia Tesla P4 you have to use their special video drivers to make it work. Open source drivers won't work at all.
Thanks for this vid; unfortunately I didn't get it to work. I'm using a Dell Optiplex 7050 running an Intel Core i5-7500T and booting in EFI. If I put in your commands, I get a message that the system is booted up in EFI but no grub-efi-amd64 was found.
Even my /etc/kernel/cmdline file was empty the first time...? What could have gone wrong? I need IOMMU for a VM....
Using a Dell Optiplex 7090 and I too am missing a /etc/kernel/cmdline file. Did you ever get this to work?
This is just what I was looking for.
I am a single home user, and running a separate machine for NAS apart from my main workstation makes little sense. Having a single powerful PC with Proxmox, with one instance of TrueNAS and another for a Windows / Linux OS, will make things a lot simpler. I can also offload docker instances from my NAS over to Proxmox and manage them independently.
Just one question though, how's proxmox on PC in terms of power management. Once I shutdown my workstation PC, will the overall power consumption go down to a comparable level of a commercial grade NAS?
Not sure if it will help, but try adding iommu=pt to the kernel command line and verify that the GPU is in its own IOMMU group. There might be a third PCIe device from the CPU, which should be fine.
Also I saw in the b-roll you didn't add the audio device 01:00.1. Not sure if you did, but you need to add every device in the IOMMU group for PCIe passthrough to work, from what I heard. I saw that it was using the snd_intel kernel module. That might be an issue but not sure.
Anyways hope it helps.
Checking the all devices box is the same as passing through the audio separately
I did verify the GPU was in its own IOMMU group.
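A quick way to eyeball the grouping from the host shell, for anyone following along (just a small helper loop):

for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=$(basename "$(dirname "$(dirname "$d")")")          # the group number
  printf 'IOMMU group %s: ' "$g"; lspci -nns "$(basename "$d")"
done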
My install of Proxmox on a Dell R530 is EFI but it does not have a file at /etc/kernel/cmdline. There is a cmdline in /proc but that can't be edited. Running 8.0.4.
i had nvidia-smi working fine for my quadro but plex wasn't doing hw transcode. after throwing some semi-stale additional virtualization tweaks at the wall, the real thing was that i used my distro's packaged nvidia driver - which didn't auto include libcuda1 and libnvidia-encode1. eventually figured it out from spelunking the plex debug logs, looks like those two extra packages are enough to get the full hw transcode going, but i'll update here if i notice anything else.
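On a Debian-based VM the two packages named above come from the non-free repo; assuming that setup, the fix is roughly:

apt install libcuda1 libnvidia-encode1   # the NVENC/NVDEC userspace bits Plex needs on top of the driver
nvidia-smi                               # an active transcode should now show up as a Plex process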
FYI Many Ryzen chipset drivers have a bug in their passthrough code. There was an early version that worked, then a new version that didn't, and a newer one that did. I spent hours troubleshooting making sure everything was right in all the configs nothing. Did a BIOS update and perfect in a moment. I was on a x470 chipset with a Ryzen 2700.
Darn it. I should have done this video. I got it working about a month ago. Great information!! So many people discouraged me from doing it as they said it wouldn't work. It works great for me.
Great instructions. My understanding is that to pass any optical drives through to VMs, I would need to do this using a PCIE/SATA controller? I knew I would have to do this to access my disk shelf with my HBA when I rebuild everything. All these lonely sata ports on my motherboard.
Also, thanks for explaining why you are installing truenas on proxmox. It was something I was always curious about.
Here again. I am finally rebuilding my server and JBOD. Reviewing the video helped me figure out why my disks weren't showing up in TrueNAS. Also realized afterwards that I hadn't clicked on Disks. :D
Not sure if I missed it and it was addressed in the video but my scenario is similar to what's done in the video, 1 VM with TrueNAS passing through the SATA controller to the drives for the sweet sweet ZFS setup and another VM to host all my home server stuff like jellyfin, qbittorrent and elasticsearch.
In this case, what would be the best way to connect the ZFS pool between one VM to another?
12:43 Not true, you can pass through the Intel iGPU even if you don't have any other GPU in the system. You do of course have to set the VFIO driver for it at boot, and you will lose video for Proxmox. But as you do everything through the web UI or SSH, you don't need video most of the time. You can always reboot to a kernel without the VFIO driver bound to the iGPU if you lose network connection or need to fix something. There is also GPU partitioning, which certain Intel GPUs support. Then you can use one GPU for both Proxmox and even multiple VMs. That is a bit more hardcore for now though.
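One way (of several) to do the 'bind the iGPU to VFIO at boot' part is driverctl; a sketch, assuming the usual Intel iGPU address of 00:02.0, and not necessarily what the video shows:

apt install driverctl
driverctl set-override 0000:00:02.0 vfio-pci    # persists across reboots
driverctl unset-override 0000:00:02.0           # undo it when you want host video back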
I am beginning to feel like Proxmox just outperforms everything, even VMware ESXi, which I have used. I think at some point I am gonna build a "virtualization" server and move my TrueNAS from bare metal to virtual metal. But since I need a software server more urgently, Proxmox is gonna have to take a back burner, but I'll still watch for the education.
I've found that sometimes using the "All Functions" option is what is actually causing the failure. Just adding the secondary device manually is more compatible.
what about installing a custom kernel to get the full ROCm features? seems extremely complicated and i haven't found a guide on how to patch a custom kernel in proxmox...
Been looking at converting the home lab to Prox, but can't find information on virtual storage options. Currently running vSAN and the performance has been outstanding. Looking for something on the Open side of the table that will do auto tiering and other functionality between SSD and HDD. Any recommendations?
An impressively to-the-point and yet detailed tutorial!
Thank you for this update. This is one of the more challenging tasks for me in Proxmox, and I was only successful through sheer dumb luck the last time I did this.
The good news? It's still deployed, and the only thing I have changed is the GPUs and storage controller.
@craftcomputing Do you have an updated version of this? My passthrough was working but now it's not. Everything is still setup the same...
Excellent tutorial on PCI passthrough.
Could you mention how to pass through the motherboard SATA and an NVMe drive?
Ok Jeff, I have the Erying 11th 0000 1.8GHz i7 ES motherboard and I gave it the old college try. I followed your tutorial (for grub) as well as played with settings and also followed a few other tutorials out there (they all seem to be slightly different). No luck. I was able to pass the iGPU through but not my Nvidia GTX 1660S card. I even tried blacklisting and passing through all of the items in the same pci group (VGA, audio, USB, etc.). At that point, it borked my install and I threw in the towel. Too bad, would be really nice to have proxmox on this MB but I need to pass through the GPU to Plex. Unfortunately, everywhere I found where some said they successfully passed through a GPU on an Erying motherboard, there were little to no details on how it was done (BIOS settings, proxmox settings, etc.). So I went back to my Windows 10 install with VMWare workstation to run VM's as needed.
I’ll try this tutorial. The other tutorials don’t seem to let me pass through an embedded graphics card.
I'm trying to learn how to install and implement VM's and ultimately build a homelab, just for self satisfaction and knowledge.
was there going to be a vid about changing the repo and getting rid of the stock error message?
The VM doesn't boot once the PCIe device is attached.
@12:45 That is not true. You can pass-thru your iGPU. I have been doing this for several years now. I am using 3x NUC8 as my PVE cluster and passing through the iGPU to my Emby and Jellyfin.
ahh nice, the only thing that happens to me is a crash. any idea why I can't mount the SATA controller (PCIe device)?
Seems you were able to fix the Erying motherboard not being able to pass through a PCIe card. How did you do it?