Split A GPU Between Multiple Computers - Proxmox LXC (Unprivileged)

  • Published: 18 Jun 2024
  • This video shows how to split a GPU between multiple computers using unprivileged LXCs. With this, you can maximise your GPU usage, consolidate your lab, save money, and remain secure. By the end you will be able to have hardware transcoding in Jellyfin (or anything) using Docker (a minimal compose sketch follows the timestamps below).
    LXC Demo:
    github.com/JamesTurland/JimsG...
    Recommended Hardware: github.com/JamesTurland/JimsG...
    Discord: / discord
    Twitter: / jimsgarage_
    Reddit: / jims-garage
    GitHub: github.com/JamesTurland/JimsG...
    00:00 - Introduction to Proxmox LXC GPU Passthrough (Unprivileged)
    03:03 - Proxmox Setup & Example
    04:25 - Getting Started (Overview of Configuration)
    12:56 - Full Walkthrough
    15:20 - Starting LXC
    17:10 - Deploying Jellyfin with Docker
    23:17 - Quad Passthrough
    24:57 - Outro
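
    A minimal docker-compose sketch of the kind of Jellyfin deployment covered here, assuming the linuxserver.io Jellyfin image and an Intel render node exposed at /dev/dri (image tag, paths and ports are illustrative, not taken from the video):
      services:
        jellyfin:
          image: lscr.io/linuxserver/jellyfin:latest
          environment:
            - PUID=1000          # user/group that owns the config and media
            - PGID=1000
            - TZ=Europe/London
          volumes:
            - ./config:/config   # Jellyfin configuration
            - ./media:/media     # media library
          devices:
            - /dev/dri:/dev/dri  # hand the host's card/render nodes to the container for VAAPI/QSV transcoding
          ports:
            - 8096:8096
          restart: unless-stopped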

Comments • 153

  • @stefanbondzulic8001
    @stefanbondzulic8001 4 months ago +30

    This is quickly becoming my favorite channel to watch :D Great stuff! Can't wait to see what you have for us next!

    • @Jims-Garage
      @Jims-Garage  4 months ago +5

      Haha, thanks for the feedback. Next step is network shares on LXC. Then onto clusters on LXC with GPU shared.

    • @darthkielbasa
      @darthkielbasa 4 months ago

      The “eat like an American…” wall hanging got me. The content is secondary.

  • @Mitman1234
    @Mitman1234 4 months ago +27

    For anyone else struggling to determine which GPU is which, run `ls -l /dev/dri/by-path`, and cross reference the addresses in that output with the output of `lspci`, which will also list the full GPU name.
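
    A minimal way to try that (the PCI address shown is just an example):
      ls -l /dev/dri/by-path               # symlinks named after PCI addresses, e.g. pci-0000:03:00.0-render -> ../renderD128
      lspci | grep -iE 'vga|display|3d'    # lists the same PCI addresses alongside the full GPU model names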

    • @massivebull
      @massivebull 1 month ago +1

      I've watched the video twice trying to figure this out - your comment saved me a lot of headaches - thanks a lot!

  • @georgec2932
    @georgec2932 4 months ago +8

    Spent the last couple of weeks trying to achieve this myself and couldn't - had to stick with a privileged container. This worked perfectly first time, thank you Jim!

    • @Jims-Garage
      @Jims-Garage  4 months ago +2

      Nice work! Enjoy the added security :)

  • @jafandarcia
    @jafandarcia 5 hours ago +1

    I struggled with AMD iGPU passthrough for Jellyfin and you were very kind to help. In my case it did not work with a regular VM, but with this it was a breeze to set up Jellyfin with HW transcoding. The only hiccup was that the Debian 12 LXC image did not work, but Ubuntu did (latest Proxmox, fully updated). Thanks again, your walkthroughs are really helpful!

  • @Spider210
    @Spider210 3 months ago +2

    Finally Subscribed to your channel! Thank YOU!

  • @SamuelGarcia-oc9og
    @SamuelGarcia-oc9og 4 months ago +4

    Thank you. Your tutorials are some of the best, very well explained and functional.

  • @happy9955
    @happy9955 3 months ago +1

    Great video on Proxmox. Thank you, sir!

  • @markwiesemann5654
    @markwiesemann5654 4 months ago +1

    Came from the Selfhosted Newsletter a few days ago and I am loving it. Great video, and I will definitely try it as soon as I have time.

  • @MarcMcMillin
    @MarcMcMillin 4 months ago +1

    This is great stuff! Thanks Jim :-)

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      Thanks. It's a really good feature of LXCs.

  • @BromZlab
    @BromZlab 4 months ago +3

    Nice Jim 😀. You keep making great content👌🤌

  • @TheRealAaronJordison
    @TheRealAaronJordison 3 months ago +3

    I just used this guide to get hardware encoding working in an unprivileged Immich LXC container, through docker compose (after a lot of work). Thank you so much for your great and comprehensive guides.

    • @Jims-Garage
      @Jims-Garage  3 months ago

      Great stuff, well done ✅

  • @bassjmr
    @bassjmr 4 months ago +2

    Great video. I did a similar thing ages ago to pass a couple of printers through to an unprivileged LXC CUPS print server! It was a headache to figure everything out at the time hehehe

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Ooh, that's a great use case. I like it.

  • @drbyte2009
    @drbyte2009 4 months ago +2

    I really love your channel Jim. I learn(ed) a lot from you !!
    I would love to see how to get the low power encoding working 🙂

  • @SB-qm5wg
    @SB-qm5wg 4 months ago +3

    Your github is a pot of gold. TY sir

  • @robertyboberty
    @robertyboberty 6 days ago +1

    Hardware passthrough to LXC is definitely something I want to explore. I have a few services running in an Alpine QEMU and the footprint is small but I would prefer to have one LXC per service

    • @robertyboberty
      @robertyboberty 6 days ago +1

      I started down the hardware passthrough rabbithole with CUPS. Network printing is another use case

  • @IsmaelLa
    @IsmaelLa 4 months ago +4

    My weekend project right here. I run Unraid in a VM with some docker containers running in it. I want to move all containers outside the Unraid VM. Now I can test this and also share the iGPU, rather than passing it straight through to a single VM. NICE!

    • @Jims-Garage
      @Jims-Garage  4 months ago +2

      Absolutely, it's pretty huge being able to share the iGPU between LXCs

  • @pkt1213
    @pkt1213 1 month ago +1

    Great video. I am going to play with this this week so both Jellyfin and Plex have access to the GPU. Maybe other stuff eventually.

  • @gamermerijn
    @gamermerijn 4 months ago +1

    Congrats, good stuff. You may want to check out how to run docker images as LXC containers, since they are OCI compliant. It would remove an abstraction layer, but instead of compose it would be set up with ansible.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      Good suggestion, something I can check out later. Thanks

  • @YannMetalhead
    @YannMetalhead 11 days ago +1

    Great tutorial.

  • @georgebobolas6363
    @georgebobolas6363 4 months ago +2

    Great Content! Would be nice if you elaborated more on the low power encoder in one of your next videos.

  • @scorpjitsu
    @scorpjitsu 4 months ago

    Do you make your own thumbnails? Yours are top tier!!!

  • @wusaby-ush
    @wusaby-ush 4 months ago +1

    I can't believe I'm seeing this, you are the best

  • @autohmae
    @autohmae 4 months ago +2

    I also run my Kubernetes test env. in LXC on my laptop, makes a lot of sense.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      That's great. I'm hoping to do similar for GPU sharing.

    • @autohmae
      @autohmae 4 months ago +1

      @@Jims-Garage You've already figured out the hard part.
      13:34 - in practice, by the way, it doesn't matter, as long as the host is newer or the same and you load any kernel modules you might need. Linux mostly adds new functionality; as Linus always says: "don't break user space". I was able to run a Debian 2/Hamm LXC container on a modern Linux kernel, aka Debian 12. Not like I've never done this before - I was running Linux containers before LXC existed, before I ever touched VMs, on Debian Woody with Linux-VServer.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      @@autohmae wow, that's impressive. Thanks for sharing

    • @autohmae
      @autohmae 4 months ago

      @@Jims-Garage well, it's supposed to work 🙂

  • @alexanderos8209
    @alexanderos8209 4 months ago +1

    I just discovered your series and it is amazing. I have been trying to do something similar on my homelab for a year now and kept failing. I already had some id maps in place for my mounts (more in my next comment on that video), but you essentially solved what I was struggling with and had nearly given up on.
    Now Jellyfin is HW transcoding on my NUC lab host and I am so happy with it :D
    One more thing that I am currently struggling with - and you might have an idea / solution / future video: Docker Swarm does not seem to work inside an LXC container. Containers get deployed but are not accessible via the ingress network.
    Anyway, thanks again. I am looking forward to the new videos while watching the back catalog.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      Great work 👍
      Firstly, don't use the KVM image, use a standard cloud image (there's an issue). Let me know if that solves it.

    • @alexanderos8209
      @alexanderos8209 4 months ago

      @@Jims-Garage Thank you - I got it working in a Debian 12 LXC container.
      Some of the IDs needed to be different, but now it is merged with my LXC mounts and everything is working.
      If I could now only get Docker Swarm to work (but this is a known problem in LXC - it works fine in a VM).

  • @FacuTopa
    @FacuTopa 4 months ago

    What is the command to get the gid or uid when you mention the LXC namespace or the host namespace?
    Great video, I hope this helps me solve the HWA issue.
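
    In case it helps, the lookups are typically done with standard tools - run these inside the LXC for the container namespace and on the Proxmox host for the host namespace (a sketch, not commands taken from the video):
      getent group video render   # numeric gids of the video and render groups in that namespace
      id root                     # uid/gid of the user that will access the device
      ls -ln /dev/dri             # numeric owner/group on the card and render nodes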

  • @giorgis1731
    @giorgis1731 2 months ago +2

    this is way cool ! LXC all the way

    • @Jims-Garage
      @Jims-Garage  2 months ago +1

      Thanks, it's a great tool to have.

  • @sku2007
    @sku2007 4 months ago +1

    2:40 - actually, for some Intel GPUs it is possible to split between VMs. But I didn't benchmark it and had no use for it, so I went for a privileged LXC at the time I was setting up mine. Now I'm considering redoing it unprivileged, thanks for the video!

    • @Jims-Garage
      @Jims-Garage  4 months ago

      It was. Unfortunately it's now discontinued...

    • @sku2007
      @sku2007 4 months ago +1

      @@Jims-Garage right, there are lots of tiny differences between Intel GPUs. I had it running with a 7700K about a year ago; I think this would still work today if the hardware supports it (?)
      I also played around with a DVA Xpenology VM - unfortunately the 7700 iGPU is too new for that.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      @@sku2007 my understanding is that you have to use sr-iov now.

    • @vitoswat
      @vitoswat 4 months ago

      @@Jims-Garage as long as you have an older GPU it works, but it is quite limited. On a mini PC with an i5-10500T I was able to split the iGPU into 2 GVT devices. The interesting part is that even if you assign vGPUs to VMs you can still use the real iGPU in LXCs. Of course performance will suffer this way, but for a load like transcoding it is perfectly fine.
      I suggest you give it a try.

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 4 months ago +1

      There is a video where a passthrough nvidia GPU is split between vms.

  • @mercian8051
    @mercian8051 4 months ago +1

    Great video! How does this work with nvidia drivers with a GPU? Does the driver need to be installed on the host and then in each LXC?

  • @tld8102
    @tld8102 6 days ago

    Amazing - I'll use this for my iGPU. Are there any other devices apart from the GPU's video and render nodes? Can I not pass all of the functions to the LXC or virtual machine? On my system the iGPU is in the same IOMMU group as the USB controllers and such, so I can't pass it through to a VM. Would it be possible to share the iGPU among VMs?

  • @pr0jectSkyneT
    @pr0jectSkyneT 26 days ago

    I tested this out and Jellyfin worked great in a Proxmox LXC container also with Intel A380 passthrough. Can you please make a guide on how to get it running on Plex? I could not get Plex working with Hardware Acceleration for the life of me.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 4 months ago +1

    Impressive. I wonder if it's as simple with an AMD iGPU on an XCP-ng hypervisor - probably not. But it is amazing to share an iGPU like this; needing multiple graphics cards is ridiculous. Seems like this solves GPU sharing in general 🤔

    • @Jims-Garage
      @Jims-Garage  4 months ago

      It should work on Proxmox with an iGPU in almost exactly the same way, I've no experience with xcp-ng though... SR-IOV is also another way to do it but consumer devices don't typically support it.

  • @nicholaushilliard6811
    @nicholaushilliard6811 4 months ago +1

    Thanks for sharing your knowledge.
    Two questions, if you know the answer:
    1. Can Proxmox install the Nvidia Linux drivers over Nouveau and still share the video card?
    2. If one adds a newer headless GPU like the Nvidia L4, can you use it as a secondary or even primary video card in a VM or CT?

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Yes to both. Follow the same procedure and mount the additional GPU.

  • @binarydesk8442
    @binarydesk8442 29 days ago

    Is this possible with LXD?

  • @lachlanvanderdrift7013
    @lachlanvanderdrift7013 1 month ago

    How exactly do I get this running with a user other than root? You said you could do this via something you mentioned at the start of the tutorial, but I can't seem to figure it out. Please help hahaha

  • @copytoothpaste
    @copytoothpaste 2 months ago +1

    How does it work with dedicated GPUs? Do I need to install the driver on the Proxmox Host or in the LXC? Do I need to specify the card in the docker compose or is the ID enough? Do I need the Container Toolkit for Docker? I really like your content, one of the best channels right now about selfhosting, but haven't found a solution to this.

    • @Jims-Garage
      @Jims-Garage  2 months ago +1

      The video is using a dedicated intel arc a380 GPU. For Nvidia you should be able to follow the same process. I believe most modern OS will have drivers but you might need to add them.

    • @copytoothpaste
      @copytoothpaste 2 months ago

      @@Jims-Garage Thank you for the answer. I'll try it.

  • @olefjord85
    @olefjord85 4 months ago +1

    Really awesome! But how is this working on the technical level without GPU virtualization at all?

    • @Jims-Garage
      @Jims-Garage  4 months ago

      The LXC is sharing access with the host's GPU
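
      Concretely, the sharing boils down to a device-cgroup allowance plus bind mounts of the host's DRM nodes in the container's .conf, roughly like this (226:0 / 226:128 are the usual card0/renderD128 numbers - confirm yours with ls -l /dev/dri):
        lxc.cgroup2.devices.allow: c 226:0 rwm
        lxc.cgroup2.devices.allow: c 226:128 rwm
        lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
        lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
      The lxc.idmap entries discussed elsewhere in the thread then map the video/render group ids so the unprivileged container can actually use those nodes.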

  • @zag1964
    @zag1964 4 months ago +2

    You do have an error in your github notes. After carefully following the directions and c/p from your notes I thought it odd when no /etc/subguid could be found. Still I proceeded but the container wouldn't start. After looking around a bit I noticed that /etc/subguid should have been /etc/subgid. After fixing the issue the container started just fine. Regardless, great video and you gained a new sub. Thanks..

    • @mnejmantowicz
      @mnejmantowicz 4 months ago +1

      OMG! Thank you for that! I've been pulling my hair out.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Thanks! I will fix this now.

  • @sebgln
    @sebgln 3 months ago +1

    Hello, is it possible on the same PVE node to have a split GPU for LXC and for a VM? Thanks for this good video.

    • @Jims-Garage
      @Jims-Garage  3 months ago +1

      Not possible with the same GPU, as a VM requires that the GPU not be loaded by the host. Dual GPU would work.

    • @sebgln
      @sebgln 3 months ago +1

      @@Jims-Garage that was what it seemed to me, thanks. (I am French and you are easy to understand)

  • @PODLine
    @PODLine 4 months ago +2

    What you say 6 minutes into the video about the /etc/subgid file is wrong. These entries are not mappings but ranges of gids: a start gid and a count.
    I'm still trying to get my head dialled in on the lxc.idmap entries in the .conf file. Getting closer. Thanks for the video.
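
    For anyone else reading: an /etc/subgid entry is <name>:<first gid>:<count>, e.g.
      root:100000:65536    # root may map 65536 consecutive gids starting at host gid 100000
    The lxc.idmap lines in the container's .conf then decide how that range (plus any individually delegated gids such as video/render) is laid out inside the container.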

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      The subgid is a moot point if you're running as root and can be skipped

    • @PODLine
      @PODLine 4 months ago

      @@Jims-Garage, what about adding root to the video and render groups on the host (@12:30)...is that necessary? This is a weird step to me.

  • @edwardrhodes4403
    @edwardrhodes4403 4 months ago +1

    Is there a way to do the opposite? As in consolidate multiple GPUs, RAM etc. into one server? I have 2 laptops and an external GPU I want to connect together to combine their compute to then be able to redistribute it out to multiple devices similar to this video. Is it possible?

    • @Jims-Garage
      @Jims-Garage  4 months ago

      I don't think so. The closest I could imagine is pooling the resources into a Kubernetes cluster or docker swarm.

  • @Alkaiser88
    @Alkaiser88 3 months ago

    Jim, in your video, why is it that after you edit the conf file and boot up the 104 container, running ls -l /dev/dri shows the render node with group ssh 226, 129 - shouldn't it be render 226, 129?

    • @Alkaiser88
      @Alkaiser88 3 months ago

      On my CT the render group is 106, but when I edit the conf file and use
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 62
      lxc.idmap: g 106 104 1
      lxc.idmap: g 107 100107 65428
      it fails to boot.
      It only works if I use
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 62
      lxc.idmap: g 107 104 1
      lxc.idmap: g 108 100108 65428
      but again, /dev/dri shows up in group _ssh for me instead of render on my CT.
      Do we need to edit the conf file before the first boot to have render mapped to gid 107?
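
      A likely explanation (based on how lxc.idmap works, not confirmed in the thread): each line is <type> <container id> <host id> <count>, and the group mappings must cover the container's gid range without overlap. In the first block, g 45 100045 62 already covers container gids 45-106, so the extra g 106 104 1 overlaps and the container refuses to start; the second block maps container gid 107 to host gid 104 instead, which is why the device then shows up under gid 107 (_ssh) rather than 106 (render). To land on gid 106, shorten the preceding range by one:
        lxc.idmap: u 0 100000 65536
        lxc.idmap: g 0 100000 44
        lxc.idmap: g 44 44 1
        lxc.idmap: g 45 100045 61
        lxc.idmap: g 106 104 1
        lxc.idmap: g 107 100107 65429
      (This assumes the host's render gid is 104 and video gid is 44, as elsewhere in the thread.)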

    • @rotesblut9904
      @rotesblut9904 1 month ago

      Hello, have you figured it out? How do you change the group of renderD128 to render?

  • @theunsortedfolder6082
    @theunsortedfolder6082 4 months ago +1

    I did not quite catch this - is this a way that only works with many LXCs + Docker inside, or many LXCs + anything inside? That is, can I run, say, 4 Debian LXC containers and, in each of them, one Windows 10 VM? If so, it is interesting and great! Otherwise (LXC + Docker)... isn't it already possible to share the GPU with every Docker container after installing the NVIDIA container toolkit and passing --gpus all?

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Unfortunately you cannot have a windows LXC. You could use this for a Linux desktop though with GPU acceleration. E.g., you could have a Linux gaming remote client

    • @theunsortedfolder6082
      @theunsortedfolder6082 4 months ago

      @@Jims-Garage so, you are saying: yes, it is not exclusive to LXC + Docker, and anything running in the LXC can access the GPU? If so, what would one get, just for the sake of having it: Proxmox > LXC (Debian with GPU) > Cockpit > Windows VM > GPU-intensive app like a game or CAD software?

  • @MrRobot-ek1ih
    @MrRobot-ek1ih 2 months ago +1

    Great guide. I just got this working for two LXC and Jellyfin. I am trying to use Plex in a Docker container but can't get the hardware transcoding to work. Can anyone help?

    • @Jims-Garage
      @Jims-Garage  2 months ago

      Check the docs here, it's what I use. Almost identical: github.com/linuxserver/docker-plex
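
      For anyone following that link, the relevant compose bits look roughly like this with the linuxserver.io Plex image (values are placeholders, same /dev/dri node as in the video):
        services:
          plex:
            image: lscr.io/linuxserver/plex:latest
            network_mode: host
            environment:
              - PUID=1000
              - PGID=1000
              - VERSION=docker
            volumes:
              - ./config:/config
              - ./media:/media
            devices:
              - /dev/dri:/dev/dri   # needed for Intel/AMD hardware transcoding
            restart: unless-stopped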

    • @narkelo
      @narkelo 1 month ago +1

      @@Jims-Garage great video! I got it working with Jellyfin just like in your video, but under Plex (using the link you provided) I get "No VA display found for device /dev/dri/renderD128". In the Plex transcoder settings it recognizes the iGPU, and "lshw" in the container also sees the iGPU. Any ideas you can share would be a big help. Thanks!

    • @Jims-Garage
      @Jims-Garage  1 month ago

      @@narkelo It's likely to be permissions with the Plex user. Try running as root, then dial it back if that works.

  • @mg3299
    @mg3299 13 days ago +1

    Is there a chance this setup could be broken by a future update? That being said, is it safer to pass the GPU and HDD through to a VM, so you won't have to worry about your passed-through hardware no longer being passed through?

    • @Jims-Garage
      @Jims-Garage  13 days ago

      Yes, kernel updates can break this without following specific procedures. VMs don't have that problem.

    • @mg3299
      @mg3299 13 days ago +1

      @@Jims-Garage do you have the specific procedures so it won't break when there's a kernel update?

    • @Jims-Garage
      @Jims-Garage  13 days ago

      @@mg3299 there's a handy script here, but do take time to understand it. github.com/tteck/Proxmox

    • @mg3299
      @mg3299 13 days ago

      @@Jims-Garage are you referring to the hardware acceleration script? If so, I am reading the script and, correct me if I am wrong, but I believe it requires the container to be privileged, which is not a good thing.

  • @zabu1458
    @zabu1458 2 months ago

    Did I miss a previous step? I have no /dri folder under /dev "ls: cannot access '/dev/dri': No such file or directory"

    • @zabu1458
      @zabu1458 2 months ago +3

      Not sure if I should just edit my comment, but... I'm just dumber than I thought. I had a GPU passed through to a VM. I removed the GPU from that VM's hardware and shut it down, but since it's been a while I forgot that I had also edited GRUB so Proxmox wouldn't load/use the GPU itself.
      I just removed the extra stuff from this line in /etc/default/grub:
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
      so it would be back at
      GRUB_CMDLINE_LINUX_DEFAULT="quiet"

    • @hiteshhere
      @hiteshhere 2 months ago +1

      @@zabu1458 Thanks for taking the time to share this. It helped me resolve mine :)

  • @ronny-andrebendiksen4137
    @ronny-andrebendiksen4137 3 months ago

    I lost SSH and terminal login access after updating my container. How do I get it back?

    • @zapatista8784
      @zapatista8784 3 months ago

      me too. how did you solve it?

  • @ewenchan1239
    @ewenchan1239 4 months ago +1

    Three questions:
    1) Have you tried gaming with this, simultaneously?
    2) Have you tested this method using either an AMD GPU and/or a NVIDIA GPU?
    3) Do you ever run into a situation where the first container "hangs on" to the Intel Arc A380 and wouldn't let go of it such that the other containers aren't able to access said Intel Arc A380 anymore?
    I am asking because I am running into this problem right now with my NVIDIA RTX A2000 where the first container sees it and even WITHOUT the container being started and in a running state -- my second container (Plex) -- when I try to run "nvidia-smi", it says: "Failed to initialize NVML: Unknown Error".
    But if I remove my first container, then the second container is able to "get" the RTX A2000 passed through to it without any issues.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      1. No, not sure how I'd test it. Would have to be Linux desktop environment I assume.
      2. No, but the process should be identical, it's not intel specific.
      3. No, haven't seen that issue. As per the video I created 4 and all had access and survived reboots etc

    • @ewenchan1239
      @ewenchan1239 4 months ago +1

      @@Jims-Garage
      1. I would think that if you ran "apt install -y xfce4 xfce4-goodies xorg dbus-x11 x11-xserver-utils xfce4-terminal xrdp", you should be able to at least install the desktop environment that you can then remote into and install Steam (for example) and then test it with like League of Legends or something like that -- something that wouldn't be too graphically demanding for the Arc A380, no?
      2. The numbers for the cgroup2 stuff that you have to add to the .conf change depending on whether it's an Intel (i)GPU (or dGPU) vs. NVIDIA.
      i.e. with my Nvidia RTX A2000, I don't have that RenderD128 option or whatever it is that it corresponds to.
      3. Are you able to test passing the same GPU between from a CT to a VM and back?
      This is the issue that I am running into right now with my A2000 where my VM won't release the GPU, even after the VM has been stopped.
      The CT will report back (when I try to run "nvidia-smi") "Failed to initialize NVML: Unknown Error".
      However, prior to shutting down my LXC container and starting the VM, the CT is able to "see" and use said A2000 (as reported by "nvidia-smi") when I am running a GPU accelerated CFD application.
      Shut down the CT, start the VM, run the same GPU accelerated CFD application, shut down the VM, and start the CT again -- that same GPU accelerated CFD application now won't load/utilize said A2000 and "nvidia-smi" will give me that error.
      So I am curious if you're running into the same thing, if you were to try and pass the GPU back and forth between VM CT.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      @@ewenchan1239 I could do that by installing a desktop or game I think.
      I think the issue you're facing is that because you're using a VM for passthrough you're likely blacklisting devices and drivers. This would stop the host being able to share the GPU with the LXC

    • @ewenchan1239
      @ewenchan1239 4 months ago

      ​@@Jims-Garage
      "I think the issue you're facing is that because you're using a VM for passthrough you're likely blacklisting devices and drivers. This would stop the host being able to share the GPU with the LXC"
      But you would think that when the VM is stopped, it would release the GPU back to the host, so that you can use it for something else, e.g. a LXC.

  • @cachibachero1
    @cachibachero1 2 months ago +1

    After days of struggling between guides on the internet I was able to install the NVIDIA drivers on the host. I have tried to install the drivers in the lxc without success. How did you get yours to work?
    Thank you for the answer, and thank you for the awesome guide.

    • @Jims-Garage
      @Jims-Garage  2 months ago

      I'm using an intel arc a380 GPU. The drivers are baked into the OS. It's definitely possible with Nvidia though, I'll try to find some instructions.

  • @mdkrush
    @mdkrush 19 days ago +1

    What if I want to add multiple GPUs?

    • @Jims-Garage
      @Jims-Garage  18 days ago +1

      That should be possible, you'd need to follow the same process and add the other devices. I haven't ever done it though (perhaps in future).
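
      In practice that means repeating the conf entries for the second card's nodes - usually card1/renderD129 (the minor numbers below are the common defaults, not confirmed for any particular system; check with ls -l /dev/dri):
        lxc.cgroup2.devices.allow: c 226:1 rwm
        lxc.cgroup2.devices.allow: c 226:129 rwm
        lxc.mount.entry: /dev/dri/card1 dev/dri/card1 none bind,optional,create=file
        lxc.mount.entry: /dev/dri/renderD129 dev/dri/renderD129 none bind,optional,create=file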

  • @texasermd1
    @texasermd1 4 months ago +1

    Would there be a use case for a higher-end card like a spare RTX 3070?

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      This solution is GPU agnostic, you can use whatever you want.

  • @systemmodmen2157
    @systemmodmen2157 4 months ago +1

    Can I share my GTX 1650 between a couple of VMs or not?

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Yes, there is a hack for it using vGPU. For an LXC you can follow this video (but it's Linux only).

    • @systemmodmen2157
      @systemmodmen2157 4 months ago

      I forgot an important detail: one of the VMs is a Windows VM, and this PC is under my TV. Can I access the GPU over HDMI and play directly from it or not? Thanks for the response @@Jims-Garage

  • @ckthmpson
    @ckthmpson 4 months ago +1

    Is this simplified if one were to go with a privileged container?

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      A privileged LXC doesn't require the idmap, you can simply mount
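
      For comparison, the privileged variant is just the device rules and the mount, with no idmap lines at all (a sketch, assuming the usual 226:0/226:128 minors):
        lxc.cgroup2.devices.allow: c 226:0 rwm
        lxc.cgroup2.devices.allow: c 226:128 rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir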

    • @ckthmpson
      @ckthmpson 4 months ago +1

      @@Jims-Garage Thanks. Might try the unprivileged method...just seems like a rather complicated process which would be simplified in the privileged scenario. Do realize the security implications.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      @@ckthmpson if it's simply for internal applications you're probably okay

  • @basdfgwe
    @basdfgwe 4 months ago +1

    Can I ask why you're running Docker inside of an LXC container?

    • @Jims-Garage
      @Jims-Garage  4 months ago +2

      Why not? Simplifies deployment as I have all of the compose files ready. You could do it manually.

    • @basdfgwe
      @basdfgwe 4 months ago +1

      @@Jims-Garage Does containerising inside of a container provide any advantage? Don't get me wrong, I have docker containers running on Unraid, which is running on Proxmox... But my reason is: I made a mistake putting my storage on Unraid, and shifting away from Unraid is going to cost 000s.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      @@basdfgwe think of the LXC as a virtual machine. It's the same as running a standard docker instance.

    • @texasermd1
      @texasermd1 4 months ago

      What would this look like with a high-end GPU like an RTX 3070?

    • @PODLine
      @PODLine 4 months ago

      I do the same as Jim and it makes perfect sense (to me). As a starting point, you could see Docker as app containers and LXC as OS containers.

  • @thebullshittersvonmatterho8512
    @thebullshittersvonmatterho8512 4 months ago +1

    Is Jim ai generated?

    • @Jims-Garage
      @Jims-Garage  4 months ago

      "No, he is real" - JimBotv2.0

  • @ewenchan1239
    @ewenchan1239 4 months ago

    So I've been playing around with this some more, and found that if I deleted the VM, and was ONLY running LXC containers (right now, I am using all privileged containers -- haven't tested with unprivileged containers yet) -- I am able to have multiple LXC containers do different things with my RTX A2000.
    Going to be testing with gaming next, so we'll see.
    But yeah - it would appear that I can't have both VMs and CTs on the same host, sharing a GPU.
    I can either have ONE VM using the GPU at a time, or I can have NO VMs on the host that use the GPU at all and at least a few LXC containers sharing the one GPU.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      Yes, makes sense as a VM requires isolation of the hardware, a LXC doesn't.

    • @ewenchan1239
      @ewenchan1239 4 months ago +1

      @@Jims-Garage
      But the crazy thing is that you would think that when the VM ISN'T running, that the LXC should be or ought to be able to use the "free" GPU that isn't being used/tied to a VM anymore.
      That doesn't appear to be the case.
      It wasn't until I removed said VM, did it "release" the GPU back over to the LXC containers.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      @@ewenchan1239 I could be wrong but it sounds like you aren't blacklisting the drivers and device completely. To my knowledge the LXC wouldn't work with hardware passthrough if you were as the host won't be loading drivers

    • @ewenchan1239
      @ewenchan1239 4 months ago

      @@Jims-Garage
      "I could be wrong but it sounds like you aren't blacklisting the drivers and device completely."
      I'm at work right now, so I'll have to pull my config files later, when I get back home.
      *edit*
      Here are the config files:
      /etc/modprobe.d/nvidia.conf
      blacklist nvidia
      blacklist nouveau
      blacklist vfio-pci
      /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream nofb nomodeset initcall_blacklist=sysfb_init video=vesafbff,efifbff vfio-pci.ids=10de:2531,10de:228e disable_vga=1"
      /etc/modprobe.d/vfio.conf
      options vfio-pci ids=10de:2531,10de:228e disable_vga=1
      /etc/modprobe.d/kvm.conf
      options kvm ignore_msrs=1
      /etc/modprobe.d/iommu_unsafe_interrupts.conf
      options vfio_iommu_type1 allow_unsafe_interrupts=1
      /etc/modprobe.d/pve-blacklist.conf
      blacklist nvidiafb
      blacklist nvidia
      blacklist nouveau
      blacklist radeon
      /etc/modules
      vfio
      vfio_iommu_type1
      vfio_pci
      vfio_virqfd
      nvidia
      nvidia-modeset
      nvidia_uvm
      Yeah...so that's what I have, in my config files.
      As far as I can tell, it's complete (because it works for both VMs and CTs, just not being able to pass the GPU back and forth between said VM(s) and CT(s)). But between CTs, not a problem.

    • @ewenchan1239
      @ewenchan1239 4 months ago

      @@Jims-Garage
      "To my knowledge the LXC wouldn't work with hardware passthrough if you were as the host won't be loading drivers"
      Updated my previous comment.
      With the config information that I just shared, it works for both VMs and CTs - just not when they exist on the same host, at the same time.

  • @ziozzot
    @ziozzot 3 months ago +2

    Does not work for me. FFmpeg gives this error: [AVHWDeviceContext @ 0x642ff9562240] No VA display found for device /dev/dri/renderD128.
    Device creation failed: -22.
    [h264 @ 0x642ff954c540] No device available for decoder: device type vaapi needed for codec h264.
    Stream mapping:
    Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_vaapi))
    Stream #0:2 -> #0:1 (aac (native) -> aac (native))
    Device setup failed for decoder on input stream #0:0 : Invalid argument

    • @Jims-Garage
      @Jims-Garage  3 months ago +1

      What are you trying to pass through?

    • @ziozzot
      @ziozzot 3 months ago

      @@Jims-Garage I tried passing through the iGPU without success. I then attempted it with a privileged container, and it works. I installed Jellyfin directly in the LXC without Docker. Probably there is an issue with the permissions.

    • @ziozzot
      @ziozzot 3 months ago +2

      With the help of ChatGPT I figured out the config that works for me:
      lxc.cgroup2.devices.allow: c 226:0 rwm
      lxc.cgroup2.devices.allow: c 226:128 rwm
      lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 59
      lxc.idmap: g 104 104 1
      lxc.idmap: g 105 100105 65431

  • @peteradshead2383
    @peteradshead2383 4 months ago +2

    You have solved one of my little problems. I moved Jellyfin from one server to another and Frigate VA worked, but Jellyfin was giving me an error:
    Stream mapping:
    Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_amf))
    Stream #0:1 -> #0:1 (aac (native) -> aac (libfdk_aac))
    Press [q] to stop, [?] for help
    [h264_amf @ 0x557e719b81c0] DLL libamfrt64.so.1 failed to open
    double free or corruption (fasttop)
    Could not work out what it was - it was from a backup, so the same configs etc. Looked at your notes and there was an oops: I had forgotten to run `usermod -aG render,video root`.
    Now all working again.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Awesome, glad it's fixed