Split A GPU Between Multiple Computers - Proxmox LXC (Unprivileged)

  • Published 27 Nov 2024

Comments • 275

  • @stefanbondzulic8001 · 9 months ago +39

    This is quickly becoming my favorite channel to watch :D Great stuff! Can't wait to see what you have for us next!

    • @Jims-Garage · 9 months ago +5

      Haha, thanks for the feedback. Next step is network shares on LXC. Then onto clusters on LXC with GPU shared.

    • @darthkielbasa · 9 months ago

      The “eat like an American…” wall hanging got me. The content is secondary.

  • @georgec2932 · 9 months ago +15

    Spent the last couple of weeks trying to achieve this myself and couldn't - had to stick with a privileged container. This worked perfectly first time, thank you Jim!

    • @Jims-Garage · 9 months ago +3

      Nice work! Enjoy the added security :)

  • @TheRealAaronJordison · 8 months ago +8

    I just used this guide to get hardware encoding working in an unprivileged Immich LXC container, through Docker Compose (after a lot of work). Thank you so much for your great and comprehensive guides.

    • @Jims-Garage · 8 months ago

      Great stuff, well done ✅

  • @Mitman1234 · 9 months ago +46

    For anyone else struggling to determine which GPU is which, run `ls -l /dev/dri/by-path`, and cross reference the addresses in that output with the output of `lspci`, which will also list the full GPU name.
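    A worked example of that cross-reference (the by-path entry name below is a hypothetical sample; on a real host, take it from the `ls -l /dev/dri/by-path` output):

    ```shell
    # A by-path entry encodes the PCI address of the GPU that owns the node,
    # e.g. pci-0000:00:02.0-render -> ../renderD128
    entry="pci-0000:00:02.0-render"   # sample name, not live data

    # Strip the "pci-0000:" prefix and "-render" suffix to get the lspci address
    addr=$(echo "$entry" | sed -E 's/^pci-0000:([0-9a-f:.]+)-render$/\1/')
    echo "$addr"   # prints 00:02.0

    # On the host you would then look up the full GPU name with:
    #   lspci -s "$addr"
    ```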

    • @massivebull · 6 months ago +1

      I've been rewatching the video twice trying to figure this out - your comment saved me a lot of headaches - thanks a lot !

    • @0mnislash79 · 2 months ago +2

      i dont have a dri folder :(

    • @chris582 · a month ago

      @0mnislash79 Either a VM is already using the GPU via passthrough, or you have nomodeset set in GRUB, preventing the graphics drivers from loading. Or you're not using Intel.
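      A quick way to check the nomodeset case mentioned here, run on the Proxmox host (a sketch; it prints the flag if the kernel was booted with it, so /dev/dri would never be created):

      ```shell
      # If "nomodeset" appears in the kernel command line, the DRM driver never
      # loads and the /dev/dri directory is never populated
      grep -o nomodeset /proc/cmdline || echo "nomodeset not set"
      ```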

  • @IsmaelLa · 9 months ago +4

    My weekend project right here. I run Unraid in a VM with some Docker containers running in it. I want to move all containers out of the Unraid VM. Now I can test this and also share the iGPU, instead of it going straight to a single VM. NICE!

    • @Jims-Garage · 9 months ago +2

      Absolutely, it's pretty huge being able to share the iGPU between LXCs

  • @SamuelGarcia-oc9og · 9 months ago +5

    Thank you. Your tutorials are some of the best, very well explained and functional.

    • @Jims-Garage · 9 months ago +1

      You're very welcome!

  • @Wolf-cj1yz · a month ago

    This video and channel are excellent. I'm sure it took you some time to make this video but you've saved the community a tenfold amount of time. Thanks a ton lol

  • @12gark · a month ago

    I'm commenting just to thank you very much. For some reason I had issues using the root user on the LXC, but when I created another user and added it to the group, it worked flawlessly. I would never have been able to do something like that on my own, thanks a lot.

  • @markwiesemann5654 · 9 months ago +1

    Came from the Selfhosted Newsletter a few days ago and I am loving it. Great video, and I will definitely try that as soon as I have time.

    • @Jims-Garage · 9 months ago +1

      Awesome! Thank you!

  • @happy9955 · 8 months ago +2

    Great video of Proxmox outside. Thank you, Sir!

    • @Jims-Garage · 8 months ago

      You're welcome 😁

  • @BromZlab · 9 months ago +3

    Nice Jim 😀. You keep making great content👌🤌

    • @Jims-Garage · 9 months ago

      Thanks 👍

  • @nvmeku · 5 months ago +2

    Thank you for this tutorial, it works. Just want to let you know, it also works with the HandBrake Docker container: in the compose file, add GROUP_ID=107 to the environment section. Intel QSV is detected!
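    The GROUP_ID value is just the numeric GID of the group that owns /dev/dri/renderD128 on the host; 107 is this commenter's value, not a universal one. A minimal parsing sketch (the group line below is sample data, not read from a live system):

    ```shell
    # The third colon-separated field of an /etc/group entry is the numeric GID
    line="render:x:104:root"          # sample entry; on a real host use:
                                      #   getent group render
    gid=${line#*:*:}                  # drop "render:x:"
    gid=${gid%%:*}                    # drop the member list
    echo "$gid"                       # prints 104
    ```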

    • @Jims-Garage · 5 months ago

      Awesome, thanks for letting me know

  • @Dragonpyro85 · 2 months ago

    If you're deploying Jellyfin in a Portainer LXC on Proxmox, you will need to create a Custom Template in the Portainer WebUI and add the YAML contents to the Docker Compose section. This will create the container and deploy it in the Stack. From here, changes can be made to the docker-compose YAML and the container can be updated as necessary. I am still new to Linux, Docker, and Portainer, so I am sure there is an easier way to do this, but after having a difficult time finding the YAML for Jellyfin in the LXC, this is what worked for me. All other steps from the video worked like a charm!
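    A minimal sketch of the kind of compose file described above, written out with a heredoc so it can be pasted into the Portainer custom-template editor. The image tag and host paths are assumptions; the devices entry is what hands the shared render node through to the container:

    ```shell
    # Write a hypothetical Jellyfin compose file; adjust paths to your setup
    cat > /tmp/jellyfin-compose.yml <<'EOF'
    services:
      jellyfin:
        image: jellyfin/jellyfin
        devices:
          - /dev/dri/renderD128:/dev/dri/renderD128   # render node shared into the LXC
        volumes:
          - /srv/jellyfin/config:/config              # hypothetical host paths
          - /srv/media:/media:ro
        restart: unless-stopped
    EOF

    # Confirm the device mapping made it into the file
    grep -c 'renderD128' /tmp/jellyfin-compose.yml   # prints 1 (one matching line)
    ```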

    • @luis449bp · 12 days ago

      If you use LXC, it's better to just create a plain Debian LXC and add the Jellyfin repository. It's easier to update and manage that way.

  • @ff34jmr · 9 months ago +2

    Great video. I did a similar thing ages ago to passthrough a couple of printers to an lxc unprivileged cups printer server! Was a headache to figure everything out at the time hehehe

    • @Jims-Garage · 9 months ago

      Ooh, that's a great use case. I like it.

  • @SB-qm5wg · 9 months ago +3

    Your github is a pot of gold. TY sir

    • @Jims-Garage · 9 months ago

      Thanks 👍

  • @marcbrown3922 · a month ago +1

    You are a top man and this is a top channel. I just love the way you explain things and break things down.

    • @Jims-Garage · a month ago

      @@marcbrown3922 thanks, really appreciate it

  • @michaelhopkins256 · a month ago +1

    Thanks Jim, this video was great and helped me figure out and fix issues with my Jellyfin LXC. I could not get iGPU decoding to function prior to watching and following your video instructions.

    • @Jims-Garage · a month ago

      @@michaelhopkins256 awesome, glad to hear it

  • @Spider210 · 8 months ago +2

    Finally Subscribed to your channel! Thank YOU!

    • @Jims-Garage · 8 months ago +1

      Thanks for subbing!

  • @pkt1213 · 6 months ago +1

    Great video. I am going to play with this this week so both Jellyfin and Plex have access to the GPU. Maybe other stuff eventually.

    • @Jims-Garage · 6 months ago

      Great stuff 👍

  • @jafandarcia · 5 months ago +1

    I struggled with AMD iGPU passthrough with Jellyfin and you were very kind to help. In my case it did not work with a regular VM, but with this it was a breeze to set up Jellyfin with HW transcoding. The only hiccup was that the Debian 12 LXC image did not work, but Ubuntu did (latest Proxmox, fully updated). Thanks again, your walkthroughs are really helpful!

    • @Jims-Garage · 5 months ago

      That's great, good job 👍

    • @codexclusiveNL · 5 months ago

      Could you help me out?

    • @jafandarcia · 5 months ago

      @codexclusiveNL Follow the steps that Jim laid out. Depending on the setup, some films will also work or not depending on the browser or device; for me, even the groups Jim has in the video are the same. My setup is a Beelink mini PC with an AMD 5650.

  • @zyxwvutsrqponmlkh · 26 days ago

    eyeballs extra round, data about gpu sharing in proxmox burned directly into my soul.

  • @chris582 · a month ago

    I followed this and the NAS share guide expecting to run into some trouble. However, I found it surprisingly painless aside from a missing /dev/dri due to nomodeset. When I removed that, it worked like a charm.

  • @drbyte2009 · 9 months ago +2

    I really love your channel Jim. I learn(ed) a lot from you !!
    I would love to see how to get the low power encoding working 🙂

    • @Jims-Garage · 9 months ago +3

      Coming soon!

  • @autohmae · 9 months ago +2

    I also run my Kubernetes test env. in LXC on my laptop, makes a lot of sense.

    • @Jims-Garage · 9 months ago +1

      That's great. I'm hoping to do similar for GPU sharing.

    • @autohmae · 9 months ago +1

      @@Jims-Garage You've already figured out the hard part.
      13:34 in practice by the way it doesn't matter. As long as the host is newer or the same and you load any kernel modules you might need. Linux mostly adds new functionality, as Linus always says: "don't break user space". I was able to run Debian 2/Hamm LXC container on a modern Linux kernel aka Debian 12. Not like I've never done this before. I was running Linux containers before LXC existed, before I ever touched VMs. On Debian Woody with Linux-VServer.

    • @Jims-Garage · 9 months ago

      @@autohmae wow, that's impressive. Thanks for sharing

    • @autohmae · 9 months ago

      @@Jims-Garage well, it's supposed to work 🙂

  • @MarcMcMillin · 9 months ago +1

    This is great stuff! Thanks Jim :-)

    • @Jims-Garage · 9 months ago +1

      Thanks. It's a really good feature of LXCs.

  • @gamermerijn · 9 months ago +2

    Congrats, good stuff. You may want to check out how to run docker images as LXC containers, since they are OCI compliant. It would remove an abstraction layer, but instead of compose it would be set up with ansible.

    • @Jims-Garage · 9 months ago +2

      Good suggestion, something I can check out later. Thanks

    • @RudyBleeker · 2 months ago +1

      @@Jims-Garage and @gamermerijn I was wondering about this as well. What's the added benefit of running Jellyfin in Docker when you can just install it in the LXC directly?

    • @Jims-Garage · 2 months ago

      @@RudyBleeker mainly security and density. VMs use a separate kernel to the host, especially good for internet facing containers. Furthermore, 1 backup of the VM and all of my containers are covered (albeit you can install docker in an LXC).

    • @RudyBleeker · 2 months ago +1

      @Jims-Garage I know all about the difference between VMs and LXC. But in the video you installed Jellyfin in Docker, that Docker runs in LXC, on a bare metal Proxmox host, correct? There was no talk about VMs, or did I miss something?
      So I'm curious why you chose to install the Docker runtime in LXC and run Jellyfin through that, instead of installing the Jellyfin packages in the LXC directly using Jellyfin's official repos. Adding Docker into the mix just introduces another layer of (very minimal) overhead and complexity if you ask me.

  • @sku2007 · 9 months ago +1

    2:40 Actually, for some Intel GPUs it is possible to split between VMs, but I didn't run any benchmarks on it and had no use for it, so I went for a privileged LXC at the time I was setting up mine. Now I'm considering redoing it unprivileged, thanks for the video!

    • @Jims-Garage · 9 months ago

      It was. Unfortunately it's now discontinued...

    • @sku2007 · 9 months ago +1

      @Jims-Garage Right, there are lots of tiny differences between Intel GPUs. I had it running with a 7700K about a year ago; I think this would still work today if the hardware supports it (?)
      I also played around with a DVA Xpenology VM; unfortunately the 7700 iGPU is too new for that.

    • @Jims-Garage · 9 months ago

      @@sku2007 my understanding is that you have to use sr-iov now.

    • @vitoswat · 9 months ago

      @Jims-Garage As long as you have an older GPU it works, but it is quite limited. On a mini PC with an i5-10500T I was able to split the iGPU into 2 GVT devices. The interesting part is that even if you assign vGPUs to VMs, you can still use the real iGPU in LXCs. Of course performance will suffer this way, but for a load like transcoding it is perfectly fine.
      I suggest you give it a try.

    • @BoraHorzaGobuchul · 9 months ago +1

      There is a video where a passthrough nvidia GPU is split between vms.

  • @giorgis1731 · 8 months ago +2

    this is way cool ! LXC all the way

    • @Jims-Garage · 8 months ago +1

      Thanks, it's a great tool to have.

  • @robertyboberty · 5 months ago +1

    Hardware passthrough to LXC is definitely something I want to explore. I have a few services running in an Alpine QEMU and the footprint is small but I would prefer to have one LXC per service

    • @robertyboberty · 5 months ago +1

      I started down the hardware passthrough rabbithole with CUPS. Network printing is another use case

  • @georgebobolas6363 · 9 months ago +2

    Great Content! Would be nice if you elaborated more on the low power encoder in one of your next videos.

    • @Jims-Garage · 9 months ago +2

      Noted!

  • @robertodepetro1996 · 2 months ago +1

    excellent video, thanks!

    • @Jims-Garage · 2 months ago

      @@robertodepetro1996 thanks 👍

  • @wusaby-ush · 9 months ago +1

    I don't believe what I'm seeing, you are the best.

  • @YannMetalhead · 5 months ago +1

    Great tutorial.

    • @Jims-Garage · 5 months ago +1

      Thanks 👍

  • @eximo5346 · a month ago +2

    Great video, thank you. Just wondering: since 8.2, do you still need to map the device IDs? LXC now has device passthrough; I deployed a Jellyfin LXC via the helper scripts, which uses device passthrough, and it worked out of the box.
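    For reference, the Proxmox VE 8.2+ container device passthrough mentioned here is a single entry in /etc/pve/lxc/&lt;id&gt;.conf (also exposed in the GUI under Resources > Add > Device Passthrough). A sketch; the GID below is an example value, not universal:

    ```
    dev0: /dev/dri/renderD128,gid=104
    ```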

    • @Jims-Garage · a month ago

      @@eximo5346 I did not know that about 8.2, I'll have to check it out. Thanks.

    • @TinkerLynx · 5 days ago

      Please look into it, it sadly makes a few of your videos obsolete.

  • @alexanderos8209 · 9 months ago +1

    I just discovered your series and it is amazing. I have been trying to do something similar on my homelab for a year but kept failing. I already had some ID maps in place for my mounts (more in my next comment on that video), but you essentially solved what I was struggling with and had nearly given up on.
    Now Jellyfin is HW transcoding on my NUC lab host and I am so happy with it :D
    One more thing that I am currently struggling with - you might have an idea / solution / future video:
    Docker Swarm seems not to work inside an LXC container. Containers get deployed but are not accessible via the ingress network.
    Anyway, thanks again. I am looking forward to the new videos while watching the back catalog.

    • @Jims-Garage · 9 months ago +1

      Great work 👍
      Firstly, don't use the KVM image, use a standard cloud image (there's an issue). Let me know if that solves it.

    • @alexanderos8209 · 9 months ago

      @Jims-Garage Thank you - I got it working on a Debian 12 LXC container.
      Some of the IDs needed to be different, but now it is merged with my LXC mounts and everything is working.
      If only I could now get Docker Swarm to work (but this is a known problem in LXC - it works fine in a VM).

  • @mercian8051 · 9 months ago +3

    Great video! How does this work with nvidia drivers with a GPU? Does the driver need to be installed on the host and then in each LXC?

    • @Jims-Garage · 9 months ago +1

      Yes it does

    • @mg3299 · 29 days ago +1

      @Jims-Garage I installed the drivers on the host and it is working. But when I install the drivers in the LXC and run nvidia-smi, it says "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver." I followed all the steps and made sure all the IDs match my system.

    • @normlypak · 7 days ago

      @mg3299 Hey, have you figured it out?

  • @nicholaushilliard6811 · 9 months ago +1

    Ty for sharing your knowledge
    Two questions if you may know the answer?
    1. Can Proxmox install Nvidia linux drivers over Nouveau and still share the video card?
    2. If one adds a newer headless GPU like the Nvidia L4, can you use this as a secondary or even primary video card in a VM or CT?

    • @Jims-Garage · 9 months ago

      Yes to both. Follow the same procedure and mount the additional GPU.

  • @kterstal · 4 months ago +2

    Excellent, thanks! Was able to get it working with the iGPU. Is the process the same for a NVIDIA GPU?

    • @Jims-Garage · 4 months ago

      It should be, just change the mappings to suit. Thanks a lot for the donation!

    • @yairabc1 · 27 days ago +1

      @Jims-Garage Could you elaborate please? I'm trying to share an Nvidia card. I see it in the LXC, but when trying to use it with Jellyfin I'm getting a playback error.

    • @Jims-Garage · 27 days ago +1

      @yairabc1 As for Intel, you should match the drivers on the host and in the LXC. Then it'll be a case of finding and matching the device IDs. It should be the same process as in the video, albeit instead of 128/9 it might be 130. Each machine is different depending on hardware.

  • @Szala89r · 3 months ago +1

    Hi @Jim's Garage, have you managed to run X11 with nvidia/intel acceleration on it? Something that would allow to run e.g. steam games?

    • @Jims-Garage · 3 months ago

      @@Szala89r I haven't as I don't have a card to test it with unfortunately. My understanding is that you wouldn't have graphical output though, just GPU accelerated workloads

  • @zag1964 · 9 months ago +3

    You do have an error in your github notes. After carefully following the directions and c/p from your notes I thought it odd when no /etc/subguid could be found. Still I proceeded but the container wouldn't start. After looking around a bit I noticed that /etc/subguid should have been /etc/subgid. After fixing the issue the container started just fine. Regardless, great video and you gained a new sub. Thanks..

    • @mnejmantowicz · 9 months ago +1

      OMG! Thank you for that! I've been pulling my hair out.

    • @Jims-Garage · 9 months ago

      Thanks! I will fix this now.

  • @PODLine · 9 months ago +2

    What you say 6 minutes into the video about the /etc/subgid file is wrong. These entries are not mappings but ranges of GIDs: a start GID and a count.
    I'm still trying to get my head around the lxc.idmap entries in the .conf file. Getting closer. Thanks for the video.
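    To illustrate that point with made-up but arithmetically consistent values: /etc/subgid entries are start-plus-count ranges, while the lxc.idmap lines do the actual mapping. A sketch pinning container GID 107 to host GID 104 (the render-group mapping from the video; your IDs may differ):

    ```
    # /etc/subgid on the host -- a range (start GID 100000, count 65536), not a map
    root:100000:65536

    # /etc/pve/lxc/<id>.conf -- map container GIDs 0-106 into the high range,
    # pin GID 107 to host GID 104, then continue the high range for the rest
    lxc.idmap: g 0 100000 107
    lxc.idmap: g 107 104 1
    lxc.idmap: g 108 100108 65428
    ```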

    • @Jims-Garage · 9 months ago +1

      The subgid mapping is a moot point if you're running as root and can be skipped

    • @PODLine · 9 months ago

      @@Jims-Garage, what about adding root to the video and render groups on the host (@12:30)...is that necessary? This is a weird step to me.

  • @pr0jectSkyneT · 6 months ago +2

    I tested this out and Jellyfin worked great in a Proxmox LXC container also with Intel A380 passthrough. Can you please make a guide on how to get it running on Plex? I could not get Plex working with Hardware Acceleration for the life of me.

    • @fretbuzzly · 25 days ago

      Same with Plex. I followed the instructions and everything worked. In Plex I can now explicitly select the iGPU for transcoding, but it still doesn't use it and only uses the CPU. I even checked the XML (mentioned in some other post I found) and it specifically identifies the iGPU, but again it doesn't use it. I presume this is a Plex for Linux problem. If I find a solution I'll post back here.

    • @pr0jectSkyneT · 25 days ago

      @fretbuzzly I ended up using the tteck Proxmox helper scripts to create a Plex LXC. It works with hardware acceleration.

    • @DevX94 · 11 days ago

      @@fretbuzzly Did you find a solution? I have the same issue

  • @scorpjitsu · 9 months ago

    Do you make your own thumbnails? Yours are top tier!!!

  • @rudypieplenbosch6752 · 9 months ago +1

    Impressive. I wonder if it's as simple with an AMD iGPU on an xcp-ng hypervisor - probably not. But it is amazing to share an iGPU like this; needing multiple graphics cards is ridiculous. Seems like this solves GPU sharing in general 🤔

    • @Jims-Garage · 9 months ago

      It should work on Proxmox with an iGPU in almost exactly the same way, I've no experience with xcp-ng though... SR-IOV is also another way to do it but consumer devices don't typically support it.

  • @Danilo_TI · 3 months ago +1

    Thanks a lot! No need to activate IOMMU in GRUB?

    • @Jims-Garage · 3 months ago +1

      @@Danilo_TI no, because it's simply sharing the host's hardware, not passing it through.

    • @Danilo_TI · 3 months ago +1

      @Jims-Garage Can I do the same thing for USB storage?

    • @Jims-Garage · 3 months ago +1

      @@Danilo_TI you should be able to share it yes

  • @stanislavtodorov8705 · 4 months ago

    Hey Jim, thanks for your thorough tutorials. Most of my home lab setup is done with the help of your videos. Following this specific tutorial a question arises: since the Jellyfin CT is using the host's hardware, should I enable the GPU passthrough prior to sharing my iGPU to the Jellyfin CT, or doing the groups mapping trick is enough? By 'sharing' the iGPU does it mean I can still use it for the proxmox host (if I have to connect a display straight to my server and access the proxmox CLI)?

    • @Jims-Garage · 4 months ago

      By following this video the host and the CTs can use the iGPU. If you do a passthrough then only that VM can use it (the host cannot).

    • @stanislavtodorov8705 · 4 months ago

      That's awesome! And thanks for the swift response. In fact, thank you for all the awesome work you do!

    • @stanislavtodorov8705 · 4 months ago

      @Jims-Garage Mapped the uid and gid values as you did;
      1. I am able to see my renderD128 device in the LXC: crw-rw---- 1 nobody _ssh 226, 128 Jul 4 07:53 renderD128;
      2. The lspci command in the LXC shows: 00:02.0 VGA compatible controller: Intel Corporation CometLake-S GT2 [UHD Graphics 630] (rev 03)
      3. Installed intel-gpu-tools and ran intel_gpu_top - Failed to initialize PMU! (Permission denied)
      4. Installed Jellyfin and ffmpeg6, then proceeded to enable transcoding, following the official Jellyfin guide.
      Trying to check the supported QSV / VA-API codecs gives me the following output:
      root@Jellyfin:~# /usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128
      Trying display: drm
      Failed to open the given device!
      I will appreciate any help.

    • @stanislavtodorov8705 · 4 months ago +1

      @Jims-Garage Never mind, I had a mistake in my GID mappings. In your video you map GID 107 (LXC) to 104 (host). In my case I had to map 106 to 104. I slightly changed my mappings and everything works like a charm now! Thank you once again for the sublime tutorial.

    • @Jims-Garage · 4 months ago

      @@stanislavtodorov8705 you're most welcome. Perhaps the difference between iGPU and dGPU

  • @copytoothpaste · 7 months ago +1

    How does it work with dedicated GPUs? Do I need to install the driver on the Proxmox Host or in the LXC? Do I need to specify the card in the docker compose or is the ID enough? Do I need the Container Toolkit for Docker? I really like your content, one of the best channels right now about selfhosting, but haven't found a solution to this.

    • @Jims-Garage · 7 months ago +1

      The video is using a dedicated intel arc a380 GPU. For Nvidia you should be able to follow the same process. I believe most modern OS will have drivers but you might need to add them.

    • @copytoothpaste · 7 months ago

      @@Jims-Garage Thank you for the answer. I'll try it.

  • @iroesstrongarm · a month ago +1

    For anyone struggling because they installed Jellyfin in the container not through docker, it won't work. You don't have permissions to the GPU as the jellyfin user. There are ways around it, but doing it through docker, as done here, is likely the best option, and it's what I plan to do personally at this point.

    • @dominicd4647 · 16 days ago

      Any refutation of this anyone? I installed Jellyfin directly in Proxmox, not through Docker. Could it work?

    • @iroesstrongarm · 16 days ago

      @dominicd4647 using the method in the video, no, but I went back and solved it so I could install it directly in the LXC instead of docker.

    • @dominicd4647 · 16 days ago

      @@iroesstrongarm Would you mind please give me a few hints in how to do that? I'm pretty new in all this home server/proxmox things. There are some videos that show how to do it in a privileged container. I don't know how much I should fear going that way.

    • @iroesstrongarm · 16 days ago +1

      @@dominicd4647 I'm going to attempt to link the forum thread that goes over it. Hopefully YT doesn't delete it because its a link. Going to do it in a follow up comment to this one so you can tell me if you see it or not.

    • @dominicd4647 · 16 days ago

      @@iroesstrongarm I don't see any link.

  • @heikowillenbacher8773 · a month ago

    Thank you for your video.
    Can you also make a short video where you pass your Arc A380 to an lxc?
    I have a Jellyfin server running in Proxmox and would like to pass my Arc through to it. Can you help me with this?

  • @M8B2L8 · a month ago

    Great video! Can I first run the proxmox helper scripts by tteck and then share the hosts igpu to individual containers ? (I plan to run plex and docker LXC with the *arr stack in the docker LXC)

  • @olefjord85 · 9 months ago +1

    Really awesome! But how does this work on a technical level without any GPU virtualization at all?

    • @Jims-Garage · 9 months ago

      The LXC is sharing access with the host's GPU

  • @PleaseBeNice17 · a month ago

    Hi, I want to clarify: shouldn't the group for renderD129 at 16:09 belong to render and not ssh?

  • @tobi061 · 11 days ago +1

    Any chance this would work with an Nvidia GPU and CUDA usage? Would the nvidia-smi command work?

    • @Jims-Garage · 11 days ago

      @@tobi061 yes, it should do

    • @tobi061 · 10 days ago +1

      Thanks for the prompt reply. It didn't work out of the box; it seems NVIDIA needs additional devices to be mapped in the container:
      lxc.cgroup2.devices.allow: c 226:0 rwm
      lxc.cgroup2.devices.allow: c 226:128 rwm
      lxc.cgroup2.devices.allow: c 195:0 rwm
      lxc.cgroup2.devices.allow: c 195:255 rwm
      lxc.cgroup2.devices.allow: c 509:0 rwm
      lxc.cgroup2.devices.allow: c 509:1 rwm
      lxc.cgroup2.devices.allow: c 234:1 rwm
      lxc.cgroup2.devices.allow: c 234:2 rwm
      lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
      lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file

  • @imorganmarshall · 4 months ago +1

    Seems like a powerful feature. How do you do this in the UI?

    • @Jims-Garage · 4 months ago

      @@imorganmarshall I don't think you have access to all of the necessary options (might be wrong)

    • @Scott-f9 · 4 months ago +1

      @@Jims-Garage 8.2 does

    • @Jims-Garage · 4 months ago +1

      @@Scott-f9 ahh that's great news. I'll perhaps revisit this with a gui version

  • @FacuTopa · 9 months ago

    What is the command to get the GID or UID when you mention the LXC namespace or host namespace?
    Great video, I hope this helps me solve the HWA.

  • @nikhilrups · 12 days ago

    Hi, any idea how to do this for a CD-ROM/Blu-ray drive device? I am trying to run ARM (Automatic Ripping Machine) on an unprivileged LXC, but have failed so far. Logically, the process should be similar. I am able to run it on a privileged LXC, but would prefer unprivileged (for obvious reasons). Thanks in advance.

  • @weysinchathamazight9956 · 2 months ago +1

    Not working for me. The Docker setup in my Proxmox LXC is not using the iGPU; there is no activity on the engines while playing different content :( I don't know what I am doing wrong.

    • @Jims-Garage · 2 months ago

      @@weysinchathamazight9956 probably the groups, I used a dGPU, change it to match iGPU

  • @sebgln · 9 months ago +1

    Hello, is it possible on the same PVE node to have a split GPU for both LXC and a VM? Thanks for this good video.

    • @Jims-Garage · 9 months ago +1

      Not possible with the same GPU as VM requires the GPU is not loaded by the host. Dual GPU would work.

    • @sebgln · 9 months ago +1

      @@Jims-Garage that was what it seemed to me, thanks. (I am French and you are easy to understand)

  • @ashtouareg3330 · 2 months ago

    Hello Jim, thanks for your instructions. I really appreciate you sharing your valuable knowledge. I did follow your instructions, but the sharing fails at the end with a permission error message, even when I update the configuration file for the LXC (my /etc/pve/lxc/1xx.conf) with the line: lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file. Any help please?

  • @133col · a month ago +1

    Regardless what I do, I am getting the following errors when starting the LXC after completing all the steps with CORRECT numbers:
    run_buffer: 571 Script exited with status 17
    lxc_setup: 3948 Failed to run autodev hooks
    do_start: 1273 Failed to setup container "110"
    sync_wait: 34 An error occurred in another process (expected sequence number 4)
    __lxc_start: 2114 Failed to spawn container "110"
    TASK ERROR: startup for container '110' failed
    This line is causing it - when removing the below, the LXC boots:
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
    Why does that line keep the LXC from booting?

    • @133col
      @133col หลายเดือนก่อน

      Replying to my own comment. No idea why, but it turned out QSV works for me without that particular line in the conf file. Using Jellyfin without docker, straight in the LXC, on a lightweight N100 system. Transcoding at 1080p to hevc does in fact use QSV according to the logs, at less than 10% of CPU utilization (2 cores assigned, means
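
      For what it's worth, the mount entry on its own is not sufficient in the video's setup - it is paired with cgroup2 allow rules for the device's major:minor numbers, and the node must already exist on the host. A rough illustration of the full device block (the 226:x numbers come from ls -l /dev/dri on the host and will vary per system):

```
# Check major:minor first with: ls -l /dev/dri (on the Proxmox host)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

      A missing allow rule or a wrong minor number is one common cause of hook failures at container start.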

  • @SiggyT827
    @SiggyT827 2 หลายเดือนก่อน

    At 16:03 is the group for renderD129 supposed to be _ssh? It's the same on my system and in Emby I'm getting a permission denied error, so I'm wondering if those 2 are related

  • @DragoMorke
    @DragoMorke หลายเดือนก่อน +1

    I tried LXCs, but I had the issue that when passing through mount points containing the data, I could not chmod files on the mount point inside the LXC.
    Since Nextcloud (in Docker) uses chown, I could not get it to work.
    I switched to a full VM and enabled SR-IOV (there is an i915 SR-IOV DKMS driver) on the Intel iGPU, so I have many (virtual) GPUs to pass through.
    That works for me.

    • @Jims-Garage
      @Jims-Garage  หลายเดือนก่อน

      @@DragoMorke thanks for the comment. I think ultimately that a VM is a better solution over LXC for nextcloud.

  • @motominis
    @motominis 4 หลายเดือนก่อน

    Hey tried to follow exactly, but I get stuck after editing the config file. Afterwards when I spin back up the container, I get nothing on console. So as you mention it might be a permission issue, but I think I did everything!

  • @DudeItsDallyBoy
    @DudeItsDallyBoy 2 หลายเดือนก่อน

    The only issue with this is mounting NFS shares. I have yet to find a way to mount an NFS share into an unprivileged LXC and then recreate the container on another node that has a GPU.

  • @MrRobot-ek1ih
    @MrRobot-ek1ih 7 หลายเดือนก่อน +1

    Great guide. I just got this working for two LXC and Jellyfin. I am trying to use Plex in a Docker container but can't get the hardware transcoding to work. Can anyone help?

    • @Jims-Garage
      @Jims-Garage  7 หลายเดือนก่อน

      Check the docs here, it's what I use. Almost identical: github.com/linuxserver/docker-plex

    • @narkelo
      @narkelo 6 หลายเดือนก่อน +1

      @@Jims-Garage great video! I got it working with Jellyfin just like in your video, but under Plex(using the link you provided) I get "No VA display found for device /dev/dri/renderD128" in the Plex transcoder settings it recognizes the iGPU, "lshw" in the container also sees the iGPU. any ideas you can share would be a big help. thanks!

    • @Jims-Garage
      @Jims-Garage  6 หลายเดือนก่อน

      @@narkelo It's likely t o be permissions with the Plex user. Try running as root then dial it back if that works.

  • @tld8102
    @tld8102 5 หลายเดือนก่อน

    Amazing - used it for my iGPU. Are there any other devices that are part of the GPU in addition to video and render? Can I not pass all the functions to the LXC or virtual machine? On my system the iGPU is in the same IOMMU group as the USB controllers and such, so I can't pass it through to the VM. Would it be possible to share the iGPU among VMs?

  • @edwardrhodes4403
    @edwardrhodes4403 9 หลายเดือนก่อน +1

    Is there a way to do the opposite? As in consolidate multiple GPUs, RAM etc. into one server? I have 2 laptops and an external GPU I want to connect together to combine their compute to then be able to redistribute it out to multiple devices similar to this video. Is it possible?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      I don't think so. The closest I could imagine is pooling the resources into a Kubernetes cluster or docker swarm.

  • @mg3299
    @mg3299 5 หลายเดือนก่อน +1

    Is there a chance this setup can be broken by a future update? That being said, is it safer to pass through the GPU and HDD to a VM so you won't have to worry about the passthrough breaking?

    • @Jims-Garage
      @Jims-Garage  5 หลายเดือนก่อน

      Yes, kernel updates can break this without following specific procedures. VMs don't have that problem.

    • @mg3299
      @mg3299 5 หลายเดือนก่อน +1

      ⁠@@Jims-Garagedo you have the specific procedures so it won’t break when there’s a kernel update?

    • @Jims-Garage
      @Jims-Garage  5 หลายเดือนก่อน

      @@mg3299 there's a handy script here, but do take time to understand it. github.com/tteck/Proxmox

    • @mg3299
      @mg3299 5 หลายเดือนก่อน

      @@Jims-Garage are you referring to the hardware acceleration script? If yes, I am reading the script and, correct me if I am wrong, I believe the script requires the container to be a privileged container, which is not a good thing.

  • @wellloaded2157
    @wellloaded2157 11 วันที่ผ่านมา

    I mean... thank you, but: why has nobody ever thought to create a script for all this? Maybe two, one on the host and one in the LXC. This would be a huge time saver.

  • @MnemonicCarrier
    @MnemonicCarrier หลายเดือนก่อน +1

    To only focus on the groups you're interested in: cat /etc/group | grep -E "video|render"
    To only focus on video PCI devices: lspci | grep -i vga

    • @Jims-Garage
      @Jims-Garage  หลายเดือนก่อน +1

      @@MnemonicCarrier thanks 👍

  • @theunsortedfolder6082
    @theunsortedfolder6082 9 หลายเดือนก่อน +1

    I did not catch this quite right - so is this a way that works only with many LXC+Docker inside, or many LXC+anything inside? That is - can I run, say, 4 LXC Debian containers and in each one of them one Windows 10 VM? If so - it is interesting and great! Otherwise (LXC+Docker)... isn't it already possible to share the GPU with every Docker container after installing the NVIDIA CUDA container toolkit and passing --gpus all?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      Unfortunately you cannot have a windows LXC. You could use this for a Linux desktop though with GPU acceleration. E.g., you could have a Linux gaming remote client

    • @theunsortedfolder6082
      @theunsortedfolder6082 9 หลายเดือนก่อน

      @@Jims-Garage so, you are saying: yes, it is not exclusive to LXC+Docker, and anything running in an LXC can access the GPU? If so, what would one get just for the sake of having it: proxmox > LXC (debian with gpu) > cockpit > windows VM > gpu intensive app like a game or CAD software?

  • @Alkaiser88
    @Alkaiser88 9 หลายเดือนก่อน

    Jim, in your video why is it that after you edit the conf file and boot up the 104 container, when you run ls -l /dev/dri the render device is showing group _ssh 226, 129 - shouldn't it be render 226, 129?

    • @Alkaiser88
      @Alkaiser88 9 หลายเดือนก่อน

      on my CT the render group is 106 but when I try to edit the conf file and use
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 62
      lxc.idmap: g 106 104 1
      lxc.idmap: g 107 100107 65428
      it fails to boot.
      it only works if I use
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 62
      lxc.idmap: g 107 104 1
      lxc.idmap: g 108 100108 65428
      but again its showing the /dev/dri is in group _ssh for me instead of render on my CT
      do we need to edit the conf file before the first boot to have render grouped to 107?

    • @rotesblut9904
      @rotesblut9904 6 หลายเดือนก่อน

      hello, have you figured it out? How do you change the group of renderD128 to render?
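
      For anyone fighting the idmap arithmetic: the g entries must tile the container's GID range 0-65535 with no gaps or overlaps, which is why moving the mapped render GID from 107 to 104 also changes the two surrounding ranges. A hypothetical Python helper (not from the video) that generates a valid set of lines for any mappings:

```python
def idmap_lines(gid_map, total=65536, shift=100000):
    """Build lxc.idmap entries for /etc/pve/lxc/<id>.conf.

    gid_map maps container GID -> host GID, e.g. {44: 44, 107: 104}
    (video and render). All other IDs are shifted by `shift` so the
    g-ranges tile 0..total-1 exactly, which LXC requires.
    """
    lines = [f"lxc.idmap: u 0 {shift} {total}"]
    prev = 0
    for ct_gid in sorted(gid_map):
        if ct_gid > prev:  # unprivileged range below the mapped GID
            lines.append(f"lxc.idmap: g {prev} {shift + prev} {ct_gid - prev}")
        lines.append(f"lxc.idmap: g {ct_gid} {gid_map[ct_gid]} 1")  # pass-through GID
        prev = ct_gid + 1
    if prev < total:  # remainder of the range
        lines.append(f"lxc.idmap: g {prev} {shift + prev} {total - prev}")
    return lines

print("\n".join(idmap_lines({44: 44, 107: 104})))
```

      With {44: 44, 107: 104} this reproduces the six lines from the video; with a render GID of 104 inside the container it yields the g 104 104 1 / g 105 100105 65431 variant instead. Remember the mapped host GIDs also need matching entries in /etc/subgid (root:44:1, root:104:1).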

  • @posalab
    @posalab 5 หลายเดือนก่อน

    Is it possible to do the same thing with an external disk drive and an unprivileged LXC?
    I'm trying to set up a Proxmox Backup Server in this scenario and back up to a USB external disk drive. I managed to install PBS with no problems, but failed multiple times at the USB hard drive passthrough...
    If anyone has some useful hints it would be nice...

  • @muhammedakyuz9126
    @muhammedakyuz9126 2 หลายเดือนก่อน +1

    is it possible to do this on multiple windows or macos vms?

    • @Jims-Garage
      @Jims-Garage  2 หลายเดือนก่อน +1

      No, unfortunately this method only works on Linux.

    • @muhammedakyuz9126
      @muhammedakyuz9126 2 หลายเดือนก่อน

      @@Jims-Garage thankyou

  • @NotSneaky
    @NotSneaky หลายเดือนก่อน

    I can't get this to work with renderD129, which is an Nvidia A2000. Works great with the iGPU, which is 128.
    For Plex I can't get it to work at all... it keeps using the CPU.

  • @sohail579
    @sohail579 4 หลายเดือนก่อน +1

    If I am using an Nvidia card, do I need to install the drivers for it on the Proxmox host first?

    • @Jims-Garage
      @Jims-Garage  4 หลายเดือนก่อน

      You'll need the same drivers on the host and LXC.

    • @sohail579
      @sohail579 4 หลายเดือนก่อน +1

      @@Jims-Garage I have 2 GPUs a 4090 for my gaming VM and a Quadro P2000 for my LXCs for Plex, Frigate etc.. will the driver on the host cause me issues with my 4090 and VM passthrough?

    • @Jims-Garage
      @Jims-Garage  4 หลายเดือนก่อน

      @@sohail579 you should be fine, drivers are device specific. I imagine the same driver will be used for both of those cards. A 4090 is wild for a gaming VM. Jealous!

    • @sohail579
      @sohail579 4 หลายเดือนก่อน

      @@Jims-Garage Thanks - it used to be a 3090 Ti but I found a deal I couldn't refuse, so I sold the 3090 and took the plunge on the 4090. I have plenty of cores with my TR Pro 5975WX. It's probably not the best thing, but I have 1 box which does it all as I'm only home-labbing - and let me tell you, you have been a godsend. I have really been ramping up what my server does since I came across your channel. You explain so well, keep up the good work... now to go figure out the NVIDIA drivers

  • @mitchelwilson5605
    @mitchelwilson5605 5 หลายเดือนก่อน

    Has anyone gotten this working without running JF in Docker? Or can anyone point me to documentation for commands/configurations for JF for the "group_add" and "devices" variables from the YAML for Docker Compose?
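
    Not an answer for a bare-metal Jellyfin install, but for reference the Docker Compose side of it looks roughly like this - the image name and GID here are assumptions, so check your host's render group first with getent group render:

```yaml
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest   # example image, adjust to taste
    group_add:
      - "104"                                    # host render group GID
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128  # the device mapped into the LXC
```

    A bare-metal install skips Compose entirely; there you would add the Jellyfin service user to the render group instead (usermod -aG render jellyfin, or your distro's equivalent).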

  • @zabu1458
    @zabu1458 7 หลายเดือนก่อน

    Did I miss a previous step? I have no /dri folder under /dev "ls: cannot access '/dev/dri': No such file or directory"

    • @zabu1458
      @zabu1458 7 หลายเดือนก่อน +3

      Not sure if I should just edit my comment, but... I'm just dumber than I thought. I had a gpu passthrough to a vm. I just removed the gpu from the hardware of that vm and shut it down. But since it's been a while I forgot that I actually had to edit GRUB so proxmox won't load/use the GPU itself.
      i just removed the extra stuff from this line from /etc/default/grub:
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
      so it would be back at
      GRUB_CMDLINE_LINUX_DEFAULT="quiet"

    • @hiteshhere
      @hiteshhere 7 หลายเดือนก่อน +1

      @@zabu1458 Thanks for taking the time to share this. It helped me resolve mine :)

    • @Bruno-vz8vk
      @Bruno-vz8vk 5 หลายเดือนก่อน

      @@zabu1458 Very interesting, I have the same problem. When I change my grub and restart, it won't add the /dri folder in /dev, but my frigate LXC won't start... I effectively tried multiple tutorials to do GPU passthrough... might I have to do another action to see the /dri again?

    • @Bruno-vz8vk
      @Bruno-vz8vk 5 หลายเดือนก่อน

      Found ! had to edit /etc/modules and remove :
      vfio
      vfio_iommu_type1
      vfio_pci
      vfio_virqfd

  • @durgeshkshirsagar5160
    @durgeshkshirsagar5160 3 หลายเดือนก่อน +1

    I followed your guide but when I played a video, it said fatal error. My processor is a 4670S and the GPU is a GTX 950.

    • @Jims-Garage
      @Jims-Garage  3 หลายเดือนก่อน

      @@durgeshkshirsagar5160 did you make sure you changed the mapping for your Nvidia GPU and that each machine has the same drivers?

    • @durgeshkshirsagar5160
      @durgeshkshirsagar5160 3 หลายเดือนก่อน +1

      @@Jims-Garage I changed the mapping as per your guide, but I do not have the drivers installed on either the Proxmox host or the LXC. Sorry if this is required to make it work.

    • @Jims-Garage
      @Jims-Garage  3 หลายเดือนก่อน +1

      @@durgeshkshirsagar5160 for intel this video works (drivers are installed in the kernel). For Nvidia you need to install drivers on Proxmox host and inside the LXC

    • @durgeshkshirsagar5160
      @durgeshkshirsagar5160 3 หลายเดือนก่อน

      @@Jims-Garage Thank you for the help.

  • @ckthmpson
    @ckthmpson 9 หลายเดือนก่อน +1

    Is this simplified if one were to go with a privileged container?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +1

      A privilegeled LXC doesn't require the idmap, you can simply mount

    • @ckthmpson
      @ckthmpson 9 หลายเดือนก่อน +1

      @@Jims-Garage Thanks. Might try the unprivileged method...just seems like a rather complicated process which would be simplified in the privileged scenario. Do realize the security implications.

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      @@ckthmpson if it's simply for internal applications you're probably okay

  • @pnwscitech1589
    @pnwscitech1589 3 หลายเดือนก่อน +1

    How can this be modified to split the GPU between VMs?

    • @Jims-Garage
      @Jims-Garage  3 หลายเดือนก่อน +1

      You can't with consumer gear. You either need an old GPU with a driver hack or an Intel iGPU with GVT-g (but it's deprecated AFAIK)

    • @pnwscitech1589
      @pnwscitech1589 3 หลายเดือนก่อน +1

      @@Jims-Garage Thanks! I'm planning to use an Nvidia Tesla P4 card. I tried following a Craft Computing tutorial, but some of the repositories aren't available anymore. I'm bummed...

    • @Jims-Garage
      @Jims-Garage  3 หลายเดือนก่อน

      @@pnwscitech1589 Tesla can be split from what I know, grid or something. I don't have one to test though unfortunately

  • @dfleiva
    @dfleiva 3 หลายเดือนก่อน +2

    Just wanted to add that there is a simpler way of doing this by placing the following in the .conf file instead of the other lines including the idmap lines: dev0: /dev/dri/card0,gid=44,uid=0 and dev1: /dev/dri/renderD128,gid=105,uid=0

    • @Jims-Garage
      @Jims-Garage  3 หลายเดือนก่อน

      @@dfleiva thanks, I'll test that out

    • @fortedexe8273
      @fortedexe8273 24 วันที่ผ่านมา

      Awesome, thank you. I always find gold in the comments :D
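
      For reference, on newer Proxmox releases the devN entries replace the whole cgroup2/allow/idmap block with two lines in /etc/pve/lxc/1xx.conf. The GIDs are host-dependent - check them with getent group video render - so treat this as an illustrative sketch, not exact values:

```
dev0: /dev/dri/card0,gid=44,uid=0
dev1: /dev/dri/renderD128,gid=104,uid=0
```

      Proxmox then handles the ID mapping for those device nodes itself, which is why the idmap lines are no longer needed.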

  • @brick4667
    @brick4667 4 หลายเดือนก่อน

    Can you shed some light on why the container would start but then not show anything on the console but a black screen? In my case I'm running an unmanic container on Debian 12 and followed the guide and while I don't get any errors, my Console is just a black screen (but the container shows up on my network - it's unreachable though)

    • @Jims-Garage
      @Jims-Garage  4 หลายเดือนก่อน

      Can you SSH?

    • @brick4667
      @brick4667 4 หลายเดือนก่อน

      @@Jims-Garage Alright, so strangely enough it must just take a very long time to start up, because going back to the container console after a while does present a prompt.
      However, now I cannot access the app interface (in this case - unmanic) via IP:PORT, but the IP does show up on the network

  • @champ666ZA
    @champ666ZA 4 หลายเดือนก่อน

    how can we do this for an NFS share on an Unprivileged LXC?

  • @yoshidis4
    @yoshidis4 5 หลายเดือนก่อน +1

    I think this might need updating. I followed this exactly but it didn't work.

    • @Jims-Garage
      @Jims-Garage  5 หลายเดือนก่อน +1

      Worth hopping onto Discord, this still works for me.

    • @yoshidis4
      @yoshidis4 5 หลายเดือนก่อน

      @@Jims-Garage Thanks but no thanks, that app needs my phone number for some reason, I don't want to get robocalls from them. Do you have anything better set up, like Slack?

  • @GeorgeHirst93
    @GeorgeHirst93 2 หลายเดือนก่อน

    Hmmm, I'm not sure what I'm doing wrong here.
    I followed all the steps and can see renderD128 showing up with the ls -l /dev/dri command in my LXC. I can also see it if I run the same command in the client within Portainer. But when I enable hardware acceleration in Jellyfin I get a fatal error.
    The only thing that puzzles me is when I was looking for the group ID for my GPU, it was 65534 rather than 107.
    If anyone has any thoughts I'd be grateful. It's a big change from running CasaOS on a Pi 😂

    • @GeorgeHirst93
      @GeorgeHirst93 2 หลายเดือนก่อน +1

      I fixed it. Just needed to reboot proxmox to get the permissions to work. I was then getting the group 107 as expected

    • @Jims-Garage
      @Jims-Garage  2 หลายเดือนก่อน +1

      Awesome, glad you fixed it!

  • @lachlanvanderdrift7013
    @lachlanvanderdrift7013 7 หลายเดือนก่อน

    How exactly do I get this running with a user other than root? You said that you could do this somewhere near the start of the tutorial, but I can't seem to figure it out. Pls help hahaha

  • @mdkrush
    @mdkrush 6 หลายเดือนก่อน +1

    What if I want to add multiple GPUs?

    • @Jims-Garage
      @Jims-Garage  5 หลายเดือนก่อน +1

      That should be possible, you'd need to follow the same process and add the other devices. I haven't ever done it though (perhaps in future).

  • @systemmodmen2157
    @systemmodmen2157 9 หลายเดือนก่อน +1

    Can I share my GTX 1650 between a couple of VMs or not?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      Yes, there is a hack for it using vGPU. For an LXC you can follow this video (but it's Linux only).

    • @systemmodmen2157
      @systemmodmen2157 9 หลายเดือนก่อน

      I forgot an important thing: one of the VMs is a Windows VM, and this PC is under my TV. Can I access the GPU over HDMI and play directly from it or not? Thanks for the response @@Jims-Garage

  • @ronny-andrebendiksen4137
    @ronny-andrebendiksen4137 9 หลายเดือนก่อน

    I lost SSH and terminal login access after updating my container. How do I get it back?

    • @zapatista8784
      @zapatista8784 8 หลายเดือนก่อน

      me too. how did you solve it?

  • @ewenchan1239
    @ewenchan1239 9 หลายเดือนก่อน +1

    Three questions:
    1) Have you tried gaming with this, simultaneously?
    2) Have you tested this method using either an AMD GPU and/or a NVIDIA GPU?
    3) Do you ever run into a situation where the first container "hangs on" to the Intel Arc A380 and wouldn't let go of it such that the other containers aren't able to access said Intel Arc A380 anymore?
    I am asking because I am running into this problem right now with my NVIDIA RTX A2000 where the first container sees it and even WITHOUT the container being started and in a running state -- my second container (Plex) -- when I try to run "nvidia-smi", it says: "Failed to initialize NVML: Unknown Error".
    But if I remove my first container, than the second container is able to "get" the RTX A2000 passed through to it without any issues.

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      1. No, not sure how I'd test it. Would have to be Linux desktop environment I assume.
      2. No, but the process should be identical, it's not intel specific.
      3. No, haven't seen that issue. As per the video I created 4 and all had access and survived reboots etc

    • @ewenchan1239
      @ewenchan1239 9 หลายเดือนก่อน +1

      @@Jims-Garage
      1. I would think that if you ran "apt install -y xfce4 xfce4-goodies xorg dbus-x11 x11-xserver-utils xfce4-terminal xrdp", you should be able to at least install the desktop environment that you can then remote into and install Steam (for example) and then test it with like League of Legends or something like that -- something that wouldn't be too graphically demanding for the Arc A380, no?
      2. The numbers for the cgroup2 stuff that you have to add to the .conf changes depending on whether it's an Intel (i)GPU (or dGPU) vs. NVIDIA.
      i.e. with my Nvidia RTX A2000, I don't have that RenderD128 option or whatever it is that it corresponds to.
      3. Are you able to test passing the same GPU between from a CT to a VM and back?
      This is the issue that I am running into right now with my A2000 where my VM won't release the GPU, even after the VM has been stopped.
      The CT will report back (when I try to run "nvidia-smi") "Failed to initialize NVML: Unknown Error".
      However, prior to shutting down my LXC container and starting the VM, the CT is able to "see" and use said A2000 (as reported by "nvidia-smi") when I am running a GPU accelerated CFD application.
      Shut down the CT, start the VM, run the same GPU accelerated CFD application, shut down the VM, and start the CT again -- that same GPU accelerated CFD application now won't load/utilize said A2000 and "nvidia-smi" will give me that error.
      So I am curious if you're running into the same thing, if you were to try and pass the GPU back and forth between VM CT.

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      @@ewenchan1239 I could do that by installing a desktop or game I think.
      I think the issue you're facing is that because you're using a VM for passthrough you're likely blacklisting devices and drivers. This would stop the host being able to share the GPU with the LXC

    • @ewenchan1239
      @ewenchan1239 9 หลายเดือนก่อน

      ​@@Jims-Garage
      "I think the issue you're facing is that because you're using a VM for passthrough you're likely blacklisting devices and drivers. This would stop the host being able to share the GPU with the LXC"
      But you would think that when the VM is stopped, it would release the GPU back to the host, so that you can use it for something else, e.g. a LXC.

  • @binarydesk8442
    @binarydesk8442 6 หลายเดือนก่อน

    Is this possible with LXD?

  • @cachibachero1
    @cachibachero1 7 หลายเดือนก่อน +1

    After days of struggling between guides on the internet I was able to install the NVIDIA drivers on the host. I have tried to install the drivers in the lxc without success. How did you get yours to work?
    Thank you for the answer, and thank you for the awesome guide.

    • @Jims-Garage
      @Jims-Garage  7 หลายเดือนก่อน

      I'm using an intel arc a380 GPU. The drivers are baked into the OS. It's definitely possible with Nvidia though, I'll try to find some instructions.

  • @basdfgwe
    @basdfgwe 9 หลายเดือนก่อน +1

    Can i ask why you're running docker inside of a lxc container ?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +2

      Why not? Simplifies deployment as I have all of the compose files ready. You could do it manually.

    • @basdfgwe
      @basdfgwe 9 หลายเดือนก่อน +1

      @@Jims-Garage Does it provide any advantage, containerising inside of a container? Don't get me wrong, I have Docker containers running on unRAID, which is running on Proxmox... But my reason is: I made a mistake putting my storage on unRAID, and shifting from unRAID is going to cost 000s.

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +1

      @@basdfgwe think of the LXC as a virtual machine. It's the same as running a standard docker instance.

    • @texasermd1
      @texasermd1 9 หลายเดือนก่อน

      What would this look like with a high end GPU like an RTX 3070?

    • @PODLine
      @PODLine 9 หลายเดือนก่อน

      I do the same as Jim and it makes perfectly sense (to me). As a starting point, you could see docker as app containers and lxc as OS containers.

  • @texasermd1
    @texasermd1 9 หลายเดือนก่อน +1

    Would there be a use case for a higher end card like a spare RTX 3070?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +1

      This solution is GPU agnostic, you can use whatever you want.

  • @thebullshittersvonmatterho8512
    @thebullshittersvonmatterho8512 9 หลายเดือนก่อน +1

    Is Jim ai generated?

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน

      "No, he is real" - JimBotv2.0

  • @ewenchan1239
    @ewenchan1239 9 หลายเดือนก่อน

    So I've been playing around with this some more, and found that if I deleted the VM, and was ONLY running LXC containers (right now, I am using all privileged containers -- haven't tested with unprivileged containers yet) -- I am able to have multiple LXC containers do different things with my RTX A2000.
    Going to be testing with gaming next, so we'll see.
    But yeah - it would appear that I can't have both VMs and CTs on the same host, sharing a GPU.
    I can either have ONE VM using the GPU at a time, or I can have NO VMs (at all, on the host, that uses the GPU), and at least a few LXC containers, sharing the one GPU.

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +1

      Yes, makes sense as a VM requires isolation of the hardware, a LXC doesn't.

    • @ewenchan1239
      @ewenchan1239 9 หลายเดือนก่อน +1

      @@Jims-Garage
      But the crazy thing is that you would think that when the VM ISN'T running, that the LXC should be or ought to be able to use the "free" GPU that isn't being used/tied to a VM anymore.
      That doesn't appear to be the case.
      It wasn't until I removed said VM, did it "release" the GPU back over to the LXC containers.

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +1

      @@ewenchan1239 I could be wrong but it sounds like you aren't blacklisting the drivers and device completely. To my knowledge the LXC wouldn't work with hardware passthrough if you were as the host won't be loading drivers

    • @ewenchan1239
      @ewenchan1239 9 หลายเดือนก่อน

      @@Jims-Garage
      "I could be wrong but it sounds like you aren't blacklisting the drivers and device completely."
      I'm at work right now, so I'll have to pull my config files later, when I get back home.
      *edit*
      Here are the config files:
      /etc/modprobe.d/nvidia.conf
      blacklist nvidia
      blacklist nouveau
      blacklist vfio-pci
      /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream nofb nomodeset initcall_blacklist=sysfb_init video=vesafb:off,efifb:off vfio-pci.ids=10de:2531,10de:228e disable_vga=1"
      /etc/modprobe.d/vfio.conf
      options vfio-pci ids=10de:2531,10de:228e disable_vga=1
      /etc/modprobe.d/kvm.conf
      options kvm ignore_msrs=1
      /etc/modprobe.d/iommu_unsafe_interrupts.conf
      options vfio_iommu_type1 allow_unsafe_interrupts=1
      /etc/modprobe.d/pve-blacklist.conf
      blacklist nvidiafb
      blacklist nvidia
      blacklist nouveau
      blacklist radeon
      /etc/modules
      vfio
      vfio_iommu_type1
      vfio_pci
      vfio_virqfd
      nvidia
      nvidia-modeset
      nvidia_uvm
      Yeah...so that's what I have, in my config files.
      As far as I can tell, it's complete (because it works for both VMs and CTs, just not being able to pass the GPU back and forth between said VM(s) and CT(s)). But between CTs, not a problem.

    • @ewenchan1239
      @ewenchan1239 9 หลายเดือนก่อน

      @@Jims-Garage
      "To my knowledge the LXC wouldn't work with hardware passthrough if you were as the host won't be loading drivers"
      Updated my previous comment.
      With the config information that I just shared, it works for both VMs and CTs - just not when they exist on the same host, at the same time.

  • @hristijanangelov2051
    @hristijanangelov2051 3 หลายเดือนก่อน +1

    Update: I fixed it... it was my mistake:
    root:44:1
    rooot:104:1
    :/
    lxc_map_ids: 245 newgidmap failed to write mapping "newgidmap: gid range [107-108) -> [104-105) not allowed": newgidmap 228560 0 100000 44 44 44 1 45 100045 62 107 104 1 108 100108 65428
    lxc_spawn: 1795 Failed to set up id mapping.

  • @ljsmith8456
    @ljsmith8456 16 วันที่ผ่านมา

    *Edit* Tried the above for a JellyFin-LXC install and a fresh LXC-Docker-Ubuntu-JellyFin; neither is working
    This looks exactly like what I'm after, tried loads of other tutorials without success. Followed the video meticulously but I'm getting this error when trying to start the container:
    lxc_map_ids: 245 newgidmap failed to write mapping "newgidmap: gid range [44-45) -> [44-45) not allowed": newgidmap 3570 0 100000 44 44 44 1 45 100045 62 107 104 1 108 100108 65428
    lxc_spawn: 1795 Failed to set up id mapping.
    __lxc_start: 2114 Failed to spawn container "105"
    TASK ERROR: startup for container '105' failed

  • @ziozzot
    @ziozzot 9 หลายเดือนก่อน +2

    Does not work for me. FFmpeg gives this error: [AVHWDeviceContext @ 0x642ff9562240] No VA display found for device /dev/dri/renderD128.
    Device creation failed: -22.
    [h264 @ 0x642ff954c540] No device available for decoder: device type vaapi needed for codec h264.
    Stream mapping:
    Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_vaapi))
    Stream #0:2 -> #0:1 (aac (native) -> aac (native))
    Device setup failed for decoder on input stream #0:0 : Invalid argument

    • @Jims-Garage
      @Jims-Garage  9 หลายเดือนก่อน +1

      What are you trying to pass through?

    • @ziozzot
      @ziozzot 9 หลายเดือนก่อน

      @@Jims-Garage I tried passing through the iGPU without success. I then attempted it with a privileged container, and it works. I installed Jellyfin directly in the LXC without Docker. Probably there is an issue with the permissions.

    • @ziozzot
      @ziozzot 9 หลายเดือนก่อน +2

      with the help of ChatGPT I figured out the config that works for me:
      lxc.cgroup2.devices.allow: c 226:0 rwm
      lxc.cgroup2.devices.allow: c 226:128 rwm
      lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 59
      lxc.idmap: g 104 104 1
      lxc.idmap: g 105 100105 65431

    • @GeorgeHirst93
      @GeorgeHirst93 2 หลายเดือนก่อน

      @@ziozzot What did you ask ChatGPT to do? 😂 I've tried a couple of things to help with permissions and I can't see where I'm going wrong