Proxmox 8.0 - PCIe Passthrough Tutorial

  • Published: 7 Feb 2025
  • Grab yourself a Pint Glass, designed and made in house, at craftcomputing...
    Virtualization is great, but sometimes you just need access to physical hardware. If only there were a way to allow a virtual machine bare-metal access to PCIe cards in your server. OH WAIT! THERE IS! Whether you need access to a storage controller, graphics card, network card, or any other PCIe device, this is the video for you.
    But first... What am I drinking???
    Sierra Nevada (Chico, CA) Torpedo Imperial IPA (8.6%)
    Written Documentation can be found here: drive.google.c...
    Links to items below may be affiliate links for which I may be compensated
    Parts from today's build:
    AUDHEID 8-Bay NAS Chassis: amzn.to/47iCsxb
    ERYING i7-11700B 8-Core (Non-ES): s.click.aliexp....
    Leven DDR4 2x16GB 2666 UDIMM: amzn.to/3OmCnjv
    Flex ATX 1U 500 Watt: amzn.to/3Qw9EeB
    Silicon Power A60 1TB: amzn.to/44XANM1
    ASM1064 8-Bay SATA Controller w/ Cables: amzn.to/3KAqULZ
    Follow me on Mastodon @Craftcomputing@hostux.social
    Support me on Patreon and get access to my exclusive Discord server. Chat with myself and the other hosts on Talking Heads all week long.
    / craftcomputing
  • Science

Comments • 485

  • @Skukkix23
    @Skukkix23 1 year ago +52

    FOR PEOPLE HAVING THIS ERROR: bdsDxe: failed to load Boot0002 "UEFI QEMU QEMU HARDDISK"
    Uncheck the "Pre-Enroll keys" option and it will boot via UEFI!
    Please vote this up; I googled for 5 hrs to find the source of the problem.
    System: ASUS Z590-P, 11900K, 64GB Kingston 2666.

  • @oddholstensson212
    @oddholstensson212 1 year ago +55

    Excellent guide.
    Do not forget to deselect Device Manager->Secure Boot Configuration->Attempt Secure Boot in the VM UEFI BIOS when installing TrueNAS. Access it by pressing the "Esc" key during the boot sequence. Otherwise you will get access denied on the virtual installation disk.

    • @wirikidor
      @wirikidor 10 months ago +3

      5 months later, this comment just saved me some headache.

    • @maconly34
      @maconly34 8 months ago

      @@wirikidor THANK YOU !!!

    • @alexmoore4926
      @alexmoore4926 7 months ago

      I literally just disabled Secure Boot and it worked (as now it's just UEFI and no disk space is needed); hopefully that doesn't screw me down the road.

    • @Maxw3llTheGreat
      @Maxw3llTheGreat 5 months ago

      an hour of headache could've been solved by scrolling down. fml

  • @tormaid42
    @tormaid42 1 year ago +32

    Wish after so many years there was a simple GUI option for this. Appreciate the guide!

  • @iamthesentinel584
    @iamthesentinel584 8 months ago +7

    I just have to say, I spent hours trying to get my GPU to passthrough correctly, and your one comment on Memory Ballooning just fixed it! Thank you so much! I didn't even see anything about that mentioned in any of the official documentation!

  • @jttech44
    @jttech44 1 year ago +150

    "Don't virtualize truenas"
    *Chuckles in 4 virtualized truenas servers in production*

    • @CraftComputing
      @CraftComputing 1 year ago +49

      STOP SAYING TH....
      Wait.... nevermind :-D

    • @sarahjrandomnumbers
      @sarahjrandomnumbers 8 months ago +11

      Just like Stockton Rush always said.
      REAL Men ALWAYS test in production.

    • @jttech44
      @jttech44 8 months ago +1

      @@sarahjrandomnumbers Lmao rip

    • @shinythings7
      @shinythings7 6 months ago +1

      I had been on the fence about whether to run TrueNAS on bare metal or virtualize it, and this sentence plus Jeff's quick explanation of why made me feel a lot better about doing it.

    • @jttech44
      @jttech44 6 months ago +1

      @@shinythings7 you really don't lose much juice virtualizing anything nowadays.

  • @mistakek
    @mistakek 1 year ago +19

    I've been waiting for this. I already have 2 Erying systems as my Proxmox cluster after your first video on this, and they've been working perfectly for me. But when you originally said you couldn't get HBA passthrough to work properly, I held off buying a third, as I wanted it for exactly what you've done in this video, and to have a third node for Ceph. Now that I can see you figured it out using a SATA card, I'm off to order all the bits for the third node.
    Thank you, and after I order everything, I'll pop into your store to buy some glassware to show some appreciation.

  • @rgibson1hrg7a
    @rgibson1hrg7a 1 month ago +1

    Thank you for this video, it was very helpful! In particular the comment about memory ballooning not being supported and why was a HUGE help, I had not seen that mentioned anywhere else. Also the need to map the audio as well as video was a helpful point.

  • @brentirwin10
    @brentirwin10 1 year ago +4

    Thank you for this. I couldn't get hardware transcoding working properly. I turned off ballooning on the VM and BAM! It works. HUZZAH!

  • @DrNoCDN
    @DrNoCDN 1 year ago +6

    Jeff - Just wanted to give an extreme thank you for the quality and content of your videos. I just finished up my TrueNAS Scale build using your guidance and it worked like a charm. I did use an Audheid as well, but the K7 8-bay model. I went with an LSI 9240-8i HBA (flashed P20 9211-8i IT Mode) and the instructions on Proxmox 8 you provided were flawless and easily had my array of 4TB Toshiba N300's available via the HBA in my TrueNAS Scale VM. Lastly, a shout out to your top-notch beer-swillery as I am an avid IPA consumer as well! (cheers)

  • @ryanmoore1016
    @ryanmoore1016 10 months ago

    Thank you! Every time I'm stuck on a project in my home lab, you tend to have just the video I need, and you explain it very well!

  • @scuzzy2142
    @scuzzy2142 1 year ago +53

    These tutorials are so much more useful than Network Chuck's, and you don't seem like a shill trying to sell me something constantly.

    • @sirdewd2197
      @sirdewd2197 1 year ago +17

      Network Chuck is only good for ideas, not how-to guides. He's more of a cyber influencer to me.

    • @JamesMowery
      @JamesMowery 11 months ago +8

      This is actually such a good point. I barely watch Network Chuck anymore. He just feels fake to me now; almost unwatchable. I haven't seen one of his videos in months.

    • @johndroyson7921
      @johndroyson7921 11 months ago +8

      Seems like a good starting point for newbies or kids. I won't knock him for making the stuff sound exciting, but I definitely grew out of his style.

    • @citypavement
      @citypavement 8 months ago +3

      I can't fucking stand that guy. "Look at my beard! Look, I'm drinking coffee! Buy my sponsored bullshit!"

    • @Oschar157
      @Oschar157 5 months ago

      @@johndroyson7921 He's what got me into networking/homelab. He made it fun and entertaining, but now that I'm getting more knowledgeable about this stuff, I watch him less and less.

  • @harry4516
    @harry4516 7 months ago +13

    Thank you for sharing your experience! It was incredibly helpful in getting GPU passthrough to work. However, I needed to make a few adjustments:
    On my Proxmox 8 install, /etc/kernel/cmdline does not exist. Instead, I entered the settings in /etc/default/grub as follows:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.modeset=0 intel_iommu=on iommu=pt video=efifb:off pci=realloc vfio-pci.ids=10de:1d01"
    It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere. These are crucial because many motherboards use shadow RAM for PCIe slot 1, which can hinder GPU passthrough if not configured properly. With this setup, I believe all your GPUs should function correctly. Additionally, I had to blacklist the NVIDIA drivers.
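    As a sketch, that blacklisting step usually looks something like this (the modprobe.d file name is arbitrary, and the exact driver list depends on the card):
      # /etc/modprobe.d/blacklist-gpu.conf
      blacklist nouveau
      blacklist nvidia
      # then apply the GRUB change and rebuild the initramfs:
      update-grub
      update-initramfs -u -k all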

    • @w33dp0w3r
      @w33dp0w3r 7 months ago

      Hey, nice addition indeed! What about the audio card? That is my pain... can you give me some hints about that? Thanks in advance.

    • @mattp3437
      @mattp3437 5 months ago

      "It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere." So where do these parameters get added/edited?

    • @61212323
      @61212323 4 months ago

      @@w33dp0w3r If you have GPU passthrough, you can use the monitor (HDMI/DP) for audio, or pass through a USB card (like I did). Some monitors have an audio-out port on them, but it only works with HDMI or DP.

    • @airwolf_hd
      @airwolf_hd 3 months ago +1

      For anyone who was confused like me: there are two bootloaders, GRUB and systemd-boot.
      /etc/kernel/cmdline only exists with systemd-boot, and that bootloader is used when Proxmox is installed on ZFS.
      Therefore, anyone with UEFI and not booting from ZFS should follow the GRUB instructions.
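      If in doubt about which bootloader is in use, Proxmox ships a helper that reports it; a quick check along these lines (output wording varies by version):
        proxmox-boot-tool status
        # lists the ESPs and whether they are set up for systemd-boot or GRUB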

  • @fanaticdavid
    @fanaticdavid 1 year ago +7

    This tutorial series is top notch. Thank you so much, Jeff!

  • @henderstech
    @henderstech 9 months ago

    I had to reinstall Proxmox for the first time in over a year. This guide was very much needed today. Thanks!

  • @shawnhaywood4199
    @shawnhaywood4199 7 months ago +1

    Wahoo!! Your directions worked! Thanks. I'm installing the Ollama LLM on a VM and wanted to pass through the GPU, which worked thanks to you! I'm using an Intel i7-based Dell 3891, a GTX 1650, and current Proxmox.

  • @chromerims
    @chromerims 1 year ago +1

    Thank you for the write-up, especially addressing upfront EFI vs legacy boot config for IOMMU (intel_iommu=on).
    Great video 👍
    Kindest regards, neighbours and friends.

  • @marc3793
    @marc3793 1 year ago +124

    Proxmox really should just make these options available in the UI.

    • @cjmoss51
      @cjmoss51 1 year ago +4

      Truly. I just don't think these things occur to them when they are processing feature adds and the like. They can be slow to adopt, like Debian, which is what it's based on.

    • @TwiggehTV
      @TwiggehTV 1 year ago +22

      Right? They have MOST of the UI; they just need the initialization bit to be UI-driven as well.
      A full-featured product like Proxmox should have all of its functions available through its UI. "Popping under the hood" with a terminal is an ugly solution, no matter how powerful it might be.

    • @Solkre82
      @Solkre82 1 year ago +13

      It's stupid easy in ESXi; too bad Broadcom killed it.

    • @manekdubash5022
      @manekdubash5022 11 months ago +5

      @@Solkre82 That's where I'm coming from too. Moving from ESXi to Proxmox, if my passthrough setup can be replicated in PVE...

    • @Solkre82
      @Solkre82 11 months ago

      @@manekdubash5022 I'm sure it can, just not as simply. I archived my ESXi 8 ISOs and keys, so I'm not worried about moving for a few years.
      Who knows, Broadcom might decide to do good... HAHAHAHA, my sides hurt!

  • @snakeychantey8521
    @snakeychantey8521 1 year ago +1

    Been searching for this for the past week or so. Love your work Jeff. Cheers

    • @18leines
      @18leines 1 year ago

      Me too, since the upgrade failed on my HP Z440 with a Xeon 2690 and Tesla M40 24G. Cheers

  • @thecameratherapychannel
    @thecameratherapychannel 1 year ago

    Thank you sir! Just by adding a new physical NIC to TrueNAS, the write speed on my ZFS pool tripled! I had saturated the single onboard NIC with a lot of LXCs and VMs.

  • @iriolavagno4060
    @iriolavagno4060 1 year ago +1

    Thanks Jeff, you saved me a LOT of frustrating research :-) I just managed to pass through a couple of network interfaces to a microVM within my NixOS server, and it took me only a couple of hours; I expected to spend all night on it :-D

  • @jafizzle95
    @jafizzle95 3 months ago +1

    I've moved all of my hypervisor duties from Unraid to Proxmox, but I gotta give kudos to Unraid for how easy they make hardware passthrough: a single checkbox to prepare the device for passthrough, reboot, then pass that bish through. Echoing the wishes of other commenters that Proxmox add the passthrough prep steps to the GUI. There are a thousand different guides for passthrough on Proxmox and a thousand different ways to do it; it's hard to know which is correct or best.

  • @DJCarlido
    @DJCarlido 1 year ago +8

    Another little addition to this: it seems you still need to add GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" to the /etc/default/grub boot config if using the legacy GRUB boot menu. The legacy GRUB boot menu is still the default when installing ext4 onto a single drive.
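    A minimal sketch of that edit, for reference; the update-grub step is required for it to take effect:
      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
      # then:
      update-grub
      reboot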

  • @AlexJoneses
    @AlexJoneses 5 months ago

    Sierra Nevada is one of the best beers out there; Hazy Little Thing is amazing.

  • @TheSolidSnakeOil
    @TheSolidSnakeOil 8 months ago

    This has been a lifesaver. I was finally able to pass through my 6700 XT for Jellyfin hardware encoding.

  • @livtown
    @livtown 1 year ago +6

    Hey Jeff, quick tip: you can use the YouTube chapters feature in the timeline to add timings so people can easily skip to where they need help.

    • @TomiWebPro
      @TomiWebPro 1 year ago

      The SponsorBlock extension lets you skip ads and see where you should start; try it.

  • @lilsammywasapunkrock
    @lilsammywasapunkrock 1 year ago +7

    Been waiting for this. All the PCIe passthrough write-ups are old and outdated, and the only one that worked for me on Prox 7.4 was yours.

    • @CraftComputing
      @CraftComputing 1 year ago +12

      Tutorials: update-grub
      Proxmox 8.0: "What's a grub?"

    • @lilsammywasapunkrock
      @lilsammywasapunkrock 1 year ago +1

      @@CraftComputing Exactly!
      Quickly, for clarification's sake: q35 means UEFI, and i440fx or whatever is BIOS boot?
      Half the tutorials say to do one or the other, and this is the first time I have heard it mentioned otherwise, unless I just forgot 😅.

    • @danilfun
      @danilfun 1 year ago +2

      @@lilsammywasapunkrock
      Both machine types support BIOS and UEFI.
      The primary difference between q35 and i440fx is that q35 exposes PCIe while i440fx uses the old PCI bus.
      If I remember correctly, I was able to use PCIe passthrough with i440fx, but only for one device at a time.
      I personally don't see any point in using i440fx on modern systems with modern host operating systems.

    • @CraftComputing
      @CraftComputing 1 year ago

      ^^^ Bingo
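
      For reference, the machine type discussed above is set per VM; a sketch using qm, where VMID 100 is just an example:
        qm set 100 --machine q35
        # equivalent line in /etc/pve/qemu-server/100.conf:
        machine: q35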

  • @SomeDudeUK
    @SomeDudeUK 1 year ago

    Just getting into my own homelab after watching for a while. I've got an old ThinkCentre that I'm going to tinker with before fully migrating a Windows 11 PC with Plex etc. This video series is great.

  • @LAWRENCESYSTEMS
    @LAWRENCESYSTEMS 1 year ago +3

    This was helpful, as I don't run Proxmox and many people have commented on my XCP-ng videos saying how much easier Proxmox handles this vs XCP-ng, but in reality they are very similar: both require finding the devices and making changes via the command line just to get it working.

  • @brycedavey1252
    @brycedavey1252 1 year ago +1

    Great video, I enjoy your server content a lot when it's this kind of set up.

  • @thatonetimeatbandcamp
    @thatonetimeatbandcamp 1 year ago +79

    As always, you're Jeff... Is there a situation where you aren't Jeff? Like maybe Mike? Or Chris?

    • @CraftComputing
      @CraftComputing 1 year ago +51

      I kind of like being Jeff.

    • @SP-ny1fk
      @SP-ny1fk 1 year ago +11

      @@CraftComputing Yeah it would be weird if you woke up as Patrick from STH.

    • @CraftComputing
      @CraftComputing 1 year ago +27

      That would be weird. I'd be a whole foot shorter.

    • @jonathanzj620
      @jonathanzj620 1 year ago +4

      @@CraftComputing Depends if you're cosplaying as an admin that day or not

    • @JeffGeerling
      @JeffGeerling 1 year ago +27

      @@CraftComputing me too

  • @johnwhitney1344
    @johnwhitney1344 1 year ago +1

    I really like this series on Proxmox.

  • @RealVercas
    @RealVercas 1 year ago +3

    SR-IOV and IOMMU are completely orthogonal features and enabling one will not magically make the other work. SR-IOV simply lets the kernel use a standard way of telling PCI-E devices to split themselves into virtual functions. SR-IOV does not require an IOMMU, and IOMMU does not require SR-IOV.
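    To illustrate the distinction: creating SR-IOV virtual functions goes through sysfs (or driver options) and involves no VFIO at all. A sketch, assuming an SR-IOV-capable NIC at the example address 0000:02:00.0:
      # how many VFs the device supports
      cat /sys/bus/pci/devices/0000:02:00.0/sriov_totalvfs
      # create 4 virtual functions
      echo 4 > /sys/bus/pci/devices/0000:02:00.0/sriov_numvfs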

  • @TheFrantic5
    @TheFrantic5 1 year ago +14

    Can we just take a step back and marvel at how not only is this all possible, but it also won't cost a dime in software?

    • @TheDimanoid999
      @TheDimanoid999 3 months ago

      It's possible, but at a cost: you'll sacrifice quite a lot in performance. The GPU will be working at maybe 50%, and NVMe drives connected through M.2 slots at 1/4 of full speed.

  • @Man0fSteell
    @Man0fSteell 1 year ago

    It took a good number of hours to figure things out, but in the end it was worth it! I'm using GPU passthrough to run some language models locally.

  • @darkenaxe
    @darkenaxe 1 year ago

    To the point and yet full of details. Impressive tutorial!

  • @deathpie5000
    @deathpie5000 1 year ago

    Hey Jeff, I'm from Central Oregon and have been watching your channel for quite a while now. Thank you so much for the videos; please, more Proxmox videos, show any and everything. Great content :) I'm trying to learn all the ins and outs of Proxmox.

  • @MaxVoltageTech
    @MaxVoltageTech 1 year ago

    Darn it, I should have done this video! I got it working about a month ago. Great information!! So many people discouraged me from trying, saying it wouldn't work. It works great for me.

  • @SytheZN
    @SytheZN 1 year ago +3

    For your next tutorial I'd love to see you get some VMs running with their storage hosted on the truenas VM!

  • @cldpt
    @cldpt 1 year ago +1

    A particular reason not to pass through disks before installing is to make it easier not to mess up the installation drive, so it's good advice indeed.

  • @mikequinn8780
    @mikequinn8780 1 year ago +4

    Are you planning a video on USB and/or PCI passthrough to LXC containers? Something about cgroups and permissions; I never could get it to work.

  • @timdenis6788
    @timdenis6788 1 year ago +3

    You definitely CAN pass through your primary GPU to a VM...
    I've been running a setup like this for a few years now. The 'disadvantage' is that a monitor on the Proxmox host is no longer available, and until the VM boots, the screen says 'loading initramfs'.

    • @m.l.9385
      @m.l.9385 1 year ago

      Yes, definitely. The Proxmox UI is accessed from another device anyway, as running the UI on the Proxmox server's own GPU usually isn't a thing.
      It can be handy, though, to have another way of attaching a GPU to the system in case the SSH interface is messed up; I use a Thunderbolt eGPU in such circumstances...

  • @MatthewHill
    @MatthewHill 1 year ago +6

    FYI, the instructions don't work if you're using GRUB. These instructions appear to be specific to systemd-boot.
    You'll need to look in /etc/default/grub rather than /etc/kernel/cmdline to make the kernel command line changes.
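    The two variants side by side, as a sketch (intel_iommu=on stands in for whatever parameters you are adding):
      # systemd-boot (ZFS installs): append to the single line in /etc/kernel/cmdline
      root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
      # then: proxmox-boot-tool refresh

      # GRUB (most other installs): edit the default line in /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
      # then: update-grub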

    • @OwO-ek6yd
      @OwO-ek6yd 1 year ago

      You're a damn wizard! :v Thxx Mr Magical Pants!

  • @Glitch-Vids
    @Glitch-Vids 1 year ago +25

    Hey Jeff, I had issues passing through a GPU with the exact same hardware until I pulled the EFI ROM off the GPU and loaded it via the PCI line in the VM config. Adding the flag bootrom="" to that line, pointed at the ROM file, should do it. I think this is because the GPU gets ignored during the motherboard EFI bootup, so the VROM gets set to legacy mode. When trying to pass it into an EFI VM, it won't boot since the VROM doesn't boot as EFI.

    • @slackergonewrong
      @slackergonewrong 1 year ago

      Could you explain a little more how you got that working? I still can't get GPU passthrough working on my 11900H ES Erying mobo.
      Also, did you mean "romfile="?

    • @rakhanreturns
      @rakhanreturns 1 year ago

      After looking at his documentation, I think you're onto something here.

    • @mattp3437
      @mattp3437 5 months ago

      @@slackergonewrong bootrom="" seemed to be the wrong parameter and removed the GPU from the hardware. romfile seemed to be accepted, but the VM failed to start up. So not sure this is the fix (for me).

    • @jtracy54
      @jtracy54 2 months ago

      I had to do this too for my system. I used a WinPE image + GPU-Z to pull the ROM off the card, and then in the config for my VM I used the following:
      hostpci0: 09:00,pcie=1,x-vga=1,romfile=GP104_Fixed.rom

  • @Tterragyello
    @Tterragyello 5 months ago +2

    6:45 -- For systems on PVE 8.2, you'll want to modify the GRUB boot settings at /etc/default/grub: append the same IOMMU text to the string value assigned to GRUB_CMDLINE_LINUX_DEFAULT, then execute update-grub.

  • @FelipeBudinich
    @FelipeBudinich 7 months ago

    FYI: while it's fine to run a TrueNAS VM with PCIe passthrough of a SATA controller, the problem you can stumble upon is IOMMU groups.
    If you try this and you can't separate the SATA controller from an IOMMU group that holds other important components (say, the APU), it may crash the Proxmox host. I just tested this on an X300-STX motherboard with a Ryzen 4750G, and the SATA controller basically shares its IOMMU group with almost everything; no amount of grub parameters and blacklists allowed me to get this going. I was just expecting too much of a DeskMini X300 😆
    You COULD just enable Samba on Proxmox, but that would be a very bad security risk (as VMs would get access to the host filesystem).

  • @greenprotag
    @greenprotag 1 year ago

    Thank you for this update. This is one of the more challenging tasks for me in Proxmox, and I was only successful through sheer dumb luck the last time I did this.
    The good news? It's still deployed, and the only things I have changed are the GPUs and the storage controller.

  • @Jan12700
    @Jan12700 8 months ago +2

    6:55 Did the path change? I only have install.d, postinst.d and postrm.d in the /etc/kernel directory.

  • @subrezon
    @subrezon 1 year ago +3

    Great video! Waiting for one about SR-IOV; I tried using virtual functions on my Intel I350-T4 NIC and got nowhere with it.

  • @ProjectInitiative
    @ProjectInitiative 1 year ago +5

    Great video! I wrote a hookscript a while ago to aid in PCIe passthrough. I found it useful specifically on a Ryzen system with no iGPU: it dynamically loads and unloads the kernel and VFIO drivers, so when, say, a Windows gaming VM is not in use, the Proxmox console re-attaches when the VM stops. Could be useful for other devices too! If anyone is interested, let me know and I'll try to point you to the GitHub gist; I don't think YouTube likes my comment with an actual link. :)

    • @jowdyboy
      @jowdyboy 1 year ago +3

      What's the name of the repo? We'll just search for it.

    • @ccoder4953
      @ccoder4953 1 year ago

      @@jowdyboy Yes, seconded; sounds useful. Any idea if it works with Nvidia?

    • @ProjectInitiative
      @ProjectInitiative 1 year ago

      I use it with Nvidia. I've tried to post several comments, but I'm assuming they keep getting flagged.

    • @98f5
      @98f5 6 months ago

      What's the repo name?
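
      A minimal sketch of the hookscript idea described at the top of this thread; the PCI address, driver handling, and script path are assumptions, not the commenter's actual gist:
        #!/bin/bash
        # /var/lib/vz/snippets/gpu-hook.sh
        # register with: qm set <vmid> --hookscript local:snippets/gpu-hook.sh
        vmid="$1"; phase="$2"
        dev="0000:01:00.0"   # example GPU address
        case "$phase" in
          pre-start)
            # hand the GPU to vfio-pci before the VM starts
            modprobe vfio-pci
            echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
            echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind 2>/dev/null || true
            echo "$dev" > /sys/bus/pci/drivers_probe
            ;;
          post-stop)
            # return the GPU to the host so the console can re-attach
            echo "$dev" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null || true
            echo > /sys/bus/pci/devices/$dev/driver_override
            echo "$dev" > /sys/bus/pci/drivers_probe
            ;;
        esac
        exit 0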

  • @ozmosyd
    @ozmosyd 1 year ago

    Exactly what I had been looking for. Thanks for sharing.

  • @mattp3437
    @mattp3437 5 months ago

    OK Jeff, I have the Erying 11th-gen 0000 1.8GHz i7 ES motherboard, and I gave it the old college try. I followed your tutorial (for GRUB), played with settings, and also followed a few other tutorials out there (they all seem to be slightly different). No luck. I was able to pass the iGPU through, but not my Nvidia GTX 1660S card. I even tried blacklisting and passing through all of the items in the same PCI group (VGA, audio, USB, etc.). At that point it borked my install and I threw in the towel. Too bad; it would be really nice to have Proxmox on this MB, but I need to pass the GPU through to Plex. Unfortunately, everywhere I found someone saying they successfully passed through a GPU on an Erying motherboard, there were little to no details on how it was done (BIOS settings, Proxmox settings, etc.). So I went back to my Windows 10 install with VMware Workstation to run VMs as needed.

  • @bigun89
    @bigun89 1 year ago

    I got this exact functionality working with an Nvidia P400 on Proxmox 7. I hadn't upgraded to 8 for fear of going through this again. Now I may have to take the dive.

  • @AV-th6kn
    @AV-th6kn 1 year ago

    Quality stuff again. I was excited when I saw the thumbnail, thinking I would finally see how to properly pass through an NVMe SSD to a TrueNAS VM. Unfortunately, that didn't happen this time.
    Hope you will cover that as well sometime, and if you could explain how to get the TrueNAS VM to put the HDDs to sleep, that would be just the cherry on top.
    Cheers Jeff.

  • @robertchamberlin2362
    @robertchamberlin2362 1 year ago

    Perfect video, thanks a bunch! I got GPU passthrough working on my Dell Precision T3600 with my GTX 8800.

  • @ozmosyd
    @ozmosyd 1 year ago

    The Proxmox piece was good, but I loved, loved, loved... the beer review. Hopefully peeps understand now why the "Brits" drink "REAL" beer at room temp. The way you described the experience of tasting the brew was pure class. Now on to the PCIe passthrough vid.
    Love ya work chap! Stella job.

  • @Riyazatron
    @Riyazatron 11 months ago

    I love your videos.
    They educate me a lot.
    What I've also learnt is that for Plex and Jellyfin you don't need to run a VM just for that; it's simple to run them in an LXC container, which is more efficient for my use case.
    Correct me if my understanding of Proxmox is wrong (after all, I'm a noob here), but LXC containers have full access to the hardware that Proxmox has. So, for example, where you have to blacklist the hardware in Proxmox for VM passthrough, you don't for containers?
    The only thing I'm struggling with is WiFi card passthrough on my silly setup. I don't think it'll work in an LXC container, but I'm struggling in a VM too.
    I had planned to use my Proxmox setup as follows:
    The hardware connects to my internet, with OPNsense as my router/firewall etc., and a second NIC going to the switch. It is also bridged in Proxmox.
    Then a second LXC or VM runs OpenWrt to provide WiFi in AP mode (the card is compatible with OpenWrt in AP mode; that's been checked). I struggle with that.
    It also runs Jellyfin and Plex in two different containers.
    I mainly use Plex but have been playing around with Jellyfin recently.
    I also have another container for Pi-hole. I am looking at AdGuard too, but I think they're both DNS sinkholes.
    All these units use about 6 to 12 W depending on demand, with a peak of 28 W when I was doing silly stuff.
    The TrueNAS Proxmox server is separate, and I have a Proxmox Backup Server running too. This is all because of your simple tutorials. Really appreciate the work you put in.

  • @Skyverb
    @Skyverb 9 months ago

    This worked like a charm for me!
    I turned a spare gaming laptop into a remote-access gaming server.
    For me the graphics card worked, and I removed the errors on my Nvidia card by not adding the sub-devices of the card, like the USB-C and audio devices, as advised in this tutorial. It gives an error saying I added the card twice if I do.

  • @ChrisJackson-js8rd
    @ChrisJackson-js8rd 1 year ago +3

    I always kinda liked IOMMU as a name.
    It's a mouthful, but at least it's not easily confused with the many other acronyms.
    I remember on some of the low-end Aorus gaming boards it used to be under Overclocking Settings > CPU > Miscellaneous CPU Settings.

    • @kahnzo
      @kahnzo 1 year ago +1

      I always think that IOMMU is just the thing that Doctor Strange battles in the movie.

  • @GeoffSeeley
    @GeoffSeeley 1 year ago

    It's possible to pass through just one of two identical cards using the driverctl package; it's easier than adding kernel options and blacklists.
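    A sketch of that approach, assuming a card at the example address 0000:01:00.0 (driverctl overrides persist across reboots):
      apt install driverctl
      driverctl set-override 0000:01:00.0 vfio-pci    # bind to vfio-pci now and on every boot
      driverctl list-overrides                        # verify
      driverctl unset-override 0000:01:00.0           # hand the card back to its normal driver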

  • @zr0dfx
    @zr0dfx 1 year ago +1

    Did you ever get PCIe passthrough working for the x16 slot? Looking forward to part 4 😊

  • @DavidAshwell
    @DavidAshwell 1 year ago +2

    On the "Proxmox isn't the best tool for ZFS file server duties" argument: that's mostly right. However, your friends at 45 Drives have the Houston UI (running in Cockpit), which does a solid job at all the missing responsibilities you listed that TrueNAS typically handles. I personally still prefer TrueNAS myself, but you can run the Houston UI webgui and the standard Proxmox webgui on the same box.

    • @igordasunddas3377
      @igordasunddas3377 1 year ago

      I prefer separation of concerns and staying as close to default settings and usage as possible, in order to be able to update much more easily.
      So if I needed or wanted to use ZFS (which I currently don't), I'd go for TrueNAS, possibly in a VM. I don't feel as comfortable with Proxmox (I currently manage VMs and containers by hand or through Cockpit on my Ubuntu setup); while it works, it's not that robust depending on what you do, and it also requires a ton of manual work.

  • @Doc_Chronic
    @Doc_Chronic 1 year ago

    Thank you so much for this! Just what I was looking for

  • @kodream316
    @kodream316 1 year ago +4

    I would be interested in an LXC tutorial with GPU passthrough / sharing, especially with something like an Intel NUC with only one integrated GPU, or maybe just sharing/passthrough of the integrated GPU in general.

    • @derekzhu7349
      @derekzhu7349 11 months ago

      It's not passthrough for LXC; you'd just be using the host GPU directly in a virtual environment. It's the same kernel.

  • @kedu20
    @kedu20 5 months ago

    Hi mate, at 14:48 when you add the IDs, does it matter if you put xxxx:xxxx or xxxx.xxxx?

  • @nte0631
    @nte0631 1 year ago +1

    I've followed the instructions, but as soon as I add my HBA as a passed-through PCI device, my VM just boot-loops saying no boot device found. I checked the boot order and made sure it only had the LVM where TrueNAS was installed, but it still does this. If I remove the PCI device, TrueNAS boots fine.

  • @SaifBinAdhed
    @SaifBinAdhed 1 year ago +1

    I was able to pass through an RTX A2000 with my Erying i9 12900H motherboard. I populated 2 of the 3 NVMe ports, though.

  • @Tterragyello
    @Tterragyello 5 months ago

    12:48 -- This source mentions IRQ remapping, which I think actually does allow the primary monitor of the server and a VM to 'share' the GPU. I have not tested it yet.

  • @blastedflavor3604
    @blastedflavor3604 1 year ago +2

    Man, I've run TrueNAS in a VM for years now. I never ran into issues.

  • @KomradeMikhail
    @KomradeMikhail 1 year ago +2

    Flash the vBIOS to force the GPU into UEFI mode and disable Legacy mode at boot?
    Do you need to alter any of those CLI strings depending on chipset-connected PCIe lanes vs. direct CPU lanes?

  • @stevanazlen
    @stevanazlen 1 year ago +1

    At first, adding a GPU to one of my VMs also did not work, as you pointed out.
    I made it work by deleting that VM (Debian 12) and creating it again from scratch, BUT adding the PCI device and selecting the GPU before the first boot.
    After going through the installation process, lspci showed my GTX 1060 6G in the list.
    Hope this helps anyone else looking for this.

  • @josephcwallace
    @josephcwallace 4 months ago

    It was kind of funny (and, sadly, very relatable) when you went through the whole process but still had to admit at the end that it may not work... I appreciate the effort :)

  • @THEMithrandir09
    @THEMithrandir09 2 months ago

    I had ballooning enabled on Proxmox 7 and it still worked. I wonder if ballooning knows which areas need to be directly mapped and still works normally.

  • @TedPhillips
    @TedPhillips 7 months ago

    I had nvidia-smi working fine for my Quadro, but Plex wasn't doing HW transcode. After throwing some semi-stale additional virtualization tweaks at the wall, the real issue turned out to be that I had used my distro's packaged NVIDIA driver, which didn't automatically pull in libcuda1 and libnvidia-encode1. I eventually figured it out by spelunking the Plex debug logs. It looks like those two extra packages are enough to get full HW transcode going, but I'll update here if I notice anything else.
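    For reference, on a Debian-based guest that amounts to something like the following (package names as reported above; they come from the non-free repository):
      apt install libcuda1 libnvidia-encode1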

  • @clausdk6299
    @clausdk6299 1 year ago +1

    @CraftComputing
    Can't see the dmesg log in the description... and it's not attached to the Google link?!?!

    • @CraftComputing
      @CraftComputing 1 year ago +2

      Heh... funny story. I was working on getting Intel's UHD SR-IOV to work, so I would do a video on QuickSync passthrough, and I nuked my Proxmox install. Hadn't captured the dmesg yet 😂😭

    • @clausdk6299
      @clausdk6299 1 year ago

      @@CraftComputing OMG 😭😱 Well, I'm sure you tried all the kernel parameters there are. Just thought it would be fun to have a look 🤪

    • @RKuntz-hx6hc
      @RKuntz-hx6hc 1 year ago

      @@CraftComputing There is a possibility that you need to load the graphics BIOS separately first before your passthrough works correctly. If you'd like to try it again later, let me know, or if you want more information; there is a good YT video doc from Unraid about this problem.
      I've run it like this for many years now, currently with two different GTX 1060s... for both I needed to dump the GPU BIOS and give it to the QEMU engine to load.
      This also fixes many issues with passthrough in combination with the audio device, and it fixed problems with VM reboots or resets where the card would just hang and freeze in its old state.
      With the GPU BIOS given to QEMU/KVM, all these problems get solved and the hardware is resettable for the guest, which solves many problems.

  • @dotanuki3371
    @dotanuki3371 10 months ago

    I set up VGA passthrough (as we called it then) back in 2013. I ran Xen, with one GPU for a Windows VM, another GPU for a Linux VM, and a cheap GPU for the console on the host/dom0.
    Back then it was really messy with card and driver support. Nvidia supported it on Quadro but not on GeForce, so some people took a soldering iron to their GeForce cards to get them to identify as Quadro cards. Then it worked. I used AMD, which worked for setting it up, but not for tearing it back down cleanly, as the driver didn't manage to reset properly. As a result, if I needed to reboot any of the VMs, I needed to reboot the whole system.
    Still, I could play Windows games in a VM with only a ~2% performance drop, and some charming artifacting in the top-left corner, while leaving anything serious to Linux, without having to reboot. Though if not for the tinkering in and of itself, I should have done what I recommended on the forums: "just get two computers".

    • @98f5
      @98f5 6 months ago

      I remember soldering a few GeForce cards to trick them into being Quadros, lol. Those were the days.

  • @jamespadgett5761
    @jamespadgett5761 1 year ago +1

    My install of Proxmox on a Dell R530 is EFI, but it does not have a file at /etc/kernel/cmdline. There is a cmdline in /proc, but that can't be edited. Running 8.0.4.

  • @bwm9637
    @bwm9637 1 year ago

    So if I understand it right: the primary video card cannot be used, so all the stories of passing through the Intel video built into a 12th-gen Intel processor are not possible unless you have a second video card? So my passive 1215U PC cannot do passthrough because it cannot host another video card? Sooooo many hours spilled?
    Or is there another solution? Anyone???? Please, because USB, sound, and six network NICs all pass through 😢😢

  • @blkspade23
    @blkspade23 10 months ago

    I've found that sometimes using the "All Functions" option is what actually causes the failure. Just adding the secondary device manually is more compatible.

  • @NorthhtroN
    @NorthhtroN 1 year ago

    FYI: if you are passing through a storage controller and running into slow boot times on your VM, try disabling ROM-Bar on the passed-through device.
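    That toggle is the ROM-Bar checkbox in the PCI device dialog, which corresponds to rombar=0 on the hostpci line; a sketch, where the VMID and device address are examples:
      # /etc/pve/qemu-server/100.conf
      hostpci0: 0000:03:00.0,pcie=1,rombar=0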

  • @aidenmoyer4911
    @aidenmoyer4911 4 days ago

    You can actually use your only video output as a PCI device in Proxmox, no extra config required.

  • @gtbarsi1103
    @gtbarsi1103 1 year ago +2

    One important thing I ran into installing TrueNAS Scale on Proxmox 8.0.4:
    when you add the EFI storage, disable Pre-Enroll keys.
    Failure to do so can cause the error: bad shim signature.

    • @deanolivas3011
      @deanolivas3011 1 year ago +1

      THANK YOU THANK YOU !!!!!! Was pounding my head on the wall trying to figure that one out.....

    • @gtbarsi1103
      @gtbarsi1103 1 year ago

      @@deanolivas3011 I was right there doing the same thing 3 nights ago. Gave up, came back the next day, and after working through a bunch of suggestions ran into this at the bottom of one TrueNAS forum thread... I figured I would share it with anyone watching the video...

  • @renhoeknl
    @renhoeknl 1 year ago +3

    I'd like to see more:
    * Sharing an Nvidia card between multiple VMs using MIG
    * On a system running Proxmox, using a VM as a gaming desktop on the machine itself

  • @TheMaevian
    @TheMaevian 10 days ago

    What is the advantage over adding the kernel options directly, or adding them to GRUB?

  • @edmundzed9870
    @edmundzed9870 1 month ago

    Hi Jeff, thanks for your info; hope you have time for this. My new(er) version of Proxmox is giving me a headache. The install went fine following your previous instructions. The passthrough,
    not so much. One thing I noticed is that my "nano /etc/kernel/cmdline" comes up with an empty file!? while yours shows: "root=ZFS=rpool/ROOT/pve-1 boot=zfs"
    Continuing with the instructions, I got an error after "update-initramfs -u -k all".
    I don't have the message here (it's on another computer), but it was "EFI boot not found, skipping..."
    After a lot of googling I have it working now, but I am afraid there is still something not quite right.
    One difference between your system and mine: I have one drive and ext4.

  • @JoseJavierAlvarezRodriguez
    @JoseJavierAlvarezRodriguez 1 year ago +1

    To be able to use the GPU, I needed to enable the PCIe option in /etc/default/grub on the line GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on modprobe.blacklist=amdgpu" (because I'm using an AMD 5600). After adding that line, remember to execute grub-mkconfig -o /boot/grub/grub.cfg and restart :)

  • @DublinV1
    @DublinV1 1 year ago +3

    Hi Jeff,
    Are there any drawbacks (i.e. performance) to not blacklisting your GPU from the host Proxmox OS? Currently I have GPU passthrough working, but I didn't blacklist the GPU from the host OS and everything seems to be working without issues.
    Thanks!

    • @T3hBeowulf
      @T3hBeowulf 1 year ago

      Same here. I did everything except the Proxmox blacklist and got it working in a Win11 VM.
      I also checked the "PCI Express" box on the passthrough device in Proxmox for the video card; it did not work without this.
      Additionally, my GTX 1070 needed a dummy HDMI plug (or an external monitor) to initialize correctly.

    • @ericneo2
      @ericneo2 1 year ago +1

      If you can convert a video or see apps use CUDA without crashing the VM, then no, you are completely golden.

  • @werewolfman007
    @werewolfman007 1 year ago +2

    Hey Jeff, have you ever tried Unraid? I would like to know your point of view on it.

  • @davidschutte2368
    @davidschutte2368 1 year ago +1

    I was surprised that GPU passthrough to Debian- or Windows-based VMs worked out of the box on my machine. I never configured anything inside Proxmox; I just made sure the UEFI BIOS was set up correctly. It has been running great for months. (I'm using an AMD 5900X on an MSI X570 Gaming Plus with a 1080 Ti.)

  • @johnvanwinkle4351
    @johnvanwinkle4351 9 months ago

    Thanks for the great video! I am hoping to try this on my Dell R720 with a Windows VM.

  • @nadpro16
    @nadpro16 11 months ago

    Thank you for explaining why you virtualize your file server. I do it through the CLI on Proxmox and wondered why you would do it through a VM, but HW passthrough of the SATA controller makes sense. I'm even thinking about trying how you do yours now.

  • @burnbrighter
    @burnbrighter 9 months ago +1

    Three questions: 1. Were any Torpedos harmed in the making of this video? 2. Did you film this fast enough to not get warm beer? 3. Your beer glass seems to be leaking during the making of this video; the beer is magically disappearing sequentially throughout. What is happening?

  • @michaelwaterman3553
    @michaelwaterman3553 1 year ago

    Thanks Jeff, great tutorial!

  • @dwrout
    @dwrout 5 months ago +1

    I have followed this process on a couple of Proxmox servers (a Chinese Machinist Xeon MB and a SuperMicro i9). Each time, the only way I could get the Nvidia GPU to pass through successfully was to set up the VM with SandyBridge as the CPU type and the BIOS set to SeaBIOS.

  • @LetsChess1
    @LetsChess1 9 months ago +1

    I know this is 7 months old. However, I spent the last month trying to figure out why I couldn't pass my GPU through to my VMs, and I finally figured it out; this might be why you weren't able to pass yours through. I have no idea what it does, but a random Reddit post gave me the answer. I had to run this in my Proxmox shell (with the VM's ID substituted in):
    qm set <vmid> -args '-global q35-pcihost.pci-hole64-size=512G'
    No idea what it does, but it fixed everything.

  • @dunknow9486
    @dunknow9486 1 year ago

    Excellent tutorial on PCI passthrough.
    Could you cover how to pass through the motherboard SATA controller and an NVMe drive?

  • @JamalIgus-op4sy
    @JamalIgus-op4sy 1 year ago

    I searched the whole internet, including AI, and nothing worked. THANK YOU SO SO MUCH for this VIDEO!!!!

  • @BlkRider
    @BlkRider 1 year ago

    12:43 Not true; you can pass through the Intel iGPU even if you don't have any other GPU in the system. You do, of course, have to bind the VFIO driver to it at boot, and you will lose video output for Proxmox. But as you do everything through the web UI or SSH, you don't need video most of the time. You can always reboot into a kernel without the VFIO driver bound to the iGPU if you lose network connectivity or need to fix something. There is also GPU partitioning, which certain Intel GPUs support; then you can use one GPU for both Proxmox and even multiple VMs. That is a bit more hardcore for now, though.

  • @declanmcardle
    @declanmcardle 1 year ago

    I'll try this tutorial. The other tutorials don't seem to let me pass through an embedded graphics card.

  • @smalltimer4370
    @smalltimer4370 1 year ago +3

    There is no 'cmdline' in /etc/kernel :(

    • @ouya_expert
      @ouya_expert 10 months ago

      I created the /etc/kernel/cmdline file as well as editing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub. Not sure which one ended up making IOMMU work, though.

  • @WebGeeky
    @WebGeeky 1 day ago

    You did not mention anything about the UUID, which I presume you have already prepared for your system!