Put your gaming GPU to work! Remote machine learning on Windows with Docker and WSL2 from anywhere.
- Published Jul 2, 2024
- In today's video we explore sharing an NVIDIA GPU from Windows 11 by running a containerized workload with Docker Desktop for Windows, then connecting it over Tailscale to Immich - a self-hosted photo library - to hardware-accelerate its machine learning tasks.
Personal accounts are always free on Tailscale and can include up to 3 users and 100 devices. Get started today at tailscale.com/yt
Supporting Links:
- tailscale.com/blog/docker-tai...
- github.com/tailscale-dev/dock...
- immich.app/docs/guides/remote...
- docs.docker.com/desktop/gpu/
===
00:00 - Start
01:11 - Object Search Rules!
02:06 - Machine Learning in Action
04:41 - Preparing Windows
07:37 - Hardware-accelerated Docker on Windows!
09:25 - Remote Machine Learning
12:27 - Bringing it all together
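For reference, the remote ML side described above can be sketched as a Compose file. This is a minimal, hedged sketch based on the Immich remote ML and Tailscale Docker guides linked above; the image tag, volume names, and auth key variable are assumptions and may differ in your setup:

```yaml
# Sketch: immich-machine-learning with GPU access, joined to a tailnet
# via a Tailscale sidecar container. Tags and env names follow the
# linked guides but may differ for your versions.
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: immich-ml
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}                      # auth key from the Tailscale admin console
      - TS_EXTRA_ARGS=--advertise-tags=tag:container  # as discussed in the comments below
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - tailscale-state:/var/lib/tailscale
    cap_add:
      - NET_ADMIN

  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release-cuda
    network_mode: service:tailscale   # share the sidecar's network namespace
    volumes:
      - model-cache:/cache
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  tailscale-state:
  model-cache:
```

With this up, the Immich server elsewhere on the tailnet points its machine learning URL at the `immich-ml` tailnet hostname.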
Thanks for the description, my Immich processing time is now reduced by 40x using my desktop GPU vs the i5-1235U in my NAS!
This is awesome! I just started using Immich a couple days ago. I will definitely be trying this out. Thank you for sharing.
Extremely practical and useful show and tell this, thank you!
Wow! This is huge. Thank you so much.
You are always on point. Immich and GPU sharing is a perfect idea👍🏻.
Nice, I will have to try this out. Thanks!
This tutorial is excellent! I could see this becoming a regular thing. It would be great if we could have the gaming rig in sleep mode to save power, and then activate it whenever one or more services need AI compute... Thanks a lot
This is what I'm doing right now.
Great video. My use case is running Synergy over Tailscale with Windows, Mac, and Linux machines on different wired networks.
Is it possible to do the same for video editing/rendering?
Great guide, but I am getting "Error: only 0 Devices available, 1 requested. Exiting." on the docker run -it --gpus=all ... command with my RTX 4080 :/ Any idea why?
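That error usually means Docker can't see the GPU at all rather than anything Immich-specific. A couple of layer-by-layer checks, assuming a current NVIDIA Windows driver and Docker Desktop with the WSL2 backend (the CUDA image tag below is just an example):

```shell
# 1. Inside the WSL2 distro: the Windows driver should expose the GPU here.
#    If this fails, update the NVIDIA driver on the Windows side
#    (do not install a Linux driver inside WSL).
nvidia-smi

# 2. From Docker: run a throwaway CUDA container and ask it for the GPU.
#    If this fails while step 1 works, check Docker Desktop > Settings >
#    General > "Use the WSL 2 based engine".
docker run --rm --gpus=all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi
```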
Hey, is it possible to switch to Passkey(Yubikey) login from Google login in TS?
Awesome video as always. I tried something similar but with Home Assistant and Ollama. Both are connected to my tailnet, and I can reach both from my laptop (also connected to my tailnet), but I can't for the life of me reach (ping) the Ollama tailnet instance from within the Home Assistant terminal, and subsequently the Ollama integration fails when I try to connect using the tailnet IP or URL. Anyone know why that is? Does the Home Assistant add-on only allow traffic one way into the tailnet?
Hi Alex, followed your setup but there seems to be no connection between Immich and the machine learning instance. You mention at the end that your Immich instance couldn't see the ML instance because of the 'container' tag. My ML instance is not shared into the tailnet but native, would this still apply? If I remove this line '- TS_EXTRA_ARGS=--advertise-tags=tag:container' from my docker compose config the container doesn't connect to the tailnet anymore :( Any idea how I get the Immich instance into my tailnet without the tag?
I've got Immich running on my QNAP in Container Station, and I now have Tailscale installed on my PC. I followed this video, installed Docker, and have immich_machine_learning running in Docker for Windows. I tried sharing, but since Immich is just running in a container there's no way to "paste" the link. I must be missing something here. I guess I need to run Tailscale in Docker and somehow merge it with the immich_machine_learning container? Honestly, I'm totally lost lol
See the supporting resources. We assume that Immich is on your tailnet already. Look at the Linux host section for Immich, you’ll need to get that sorted on your qnap end
Got to ask what model the speakers are in the background?
Asking the real questions am I right?
They’re a pair of KEF LS50s I got myself as a graduation milestone, with my first real pay cheque after a career-change comp-sci MSc course 9 years ago.
- Alex
@Tailscale you know it 😜 I'm contemplating the same LS50 Wireless II, or the LSX II as they have better connectivity
Passive speakers never need software updates though ;)
When I run a smart search, my Docker Desktop displays this runtime error on my Windows PC: "CUDA driver version is insufficient for CUDA runtime version; GPU=21219207". Has anyone seen this before?
Are you on a recent GPU? Drivers?
@Tailscale I am on the most recent drivers via the GeForce Experience GUI
dislike, because docker