hey, not often do i comment on yt videos, but just wanted to say thank you so much! this was so thorough and well presented, it made learning so much easier. now got my GPU crunching all the smart search and facial recog. that it can handle thanks to this! you're awesome.
Thanks for the walkthrough, my Immich processing time is now 40x faster using my desktop GPU vs the i5-1235U in my NAS!
This tutorial is excellent! I could see this becoming a regular thing. It would be great if we could have the gaming rig in sleep mode to save power, and then activate it whenever one or more services need AI compute... Thanks a lot
This is awesome! I just started using Immich a couple days ago. I will definitely be trying this out. Thank you for sharing.
Extremely practical and useful show and tell this, thank you!
I love watching your videos so much ❤ straight to the point
You are always on point. Immich and GPU sharing is a perfect idea👍🏻.
Wow! This is huge. Thank you so much.
Nice, I will have to try this out. Thanks!
Thank you, got it working.
Awesome, is this possible with self-hosted Headscale? 👍
Hi Alex, followed your setup but there seems to be no connection between Immich and the machine learning instance. You mention at the end that your Immich instance couldn't see the ML instance because of the 'container' tag. My ML instance is not shared into the tailnet but native, would this still apply? If I remove this line '- TS_EXTRA_ARGS=--advertise-tags=tag:container' from my docker compose config the container doesn't connect to the tailnet anymore :( Any idea how I get the Immich instance into my tailnet without the tag?
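Editor's note: when a node advertises a tag, the tailnet's ACL policy decides what can reach it, and a default "allow all" policy stops applying once custom rules exist. A minimal, hypothetical policy snippet that keeps a `tag:container` node reachable might look like the following. The tag name comes from the video; port 3003 is Immich ML's default and is an assumption here — adapt to your own tailnet (Tailscale policy files are HuJSON, so comments are allowed):

```json
{
  "tagOwners": {
    "tag:container": ["autogroup:admin"]
  },
  "acls": [
    // let tagged containers reach the rest of the tailnet
    { "action": "accept", "src": ["tag:container"], "dst": ["*:*"] },
    // let other devices (e.g. the Immich server) reach the tagged ML container on its port
    { "action": "accept", "src": ["*"], "dst": ["tag:container:3003"] }
  ]
}
```

Removing `TS_EXTRA_ARGS=--advertise-tags=tag:container` fails because the auth key was likely generated with that tag attached; either keep the tag and allow it in the ACLs as above, or mint a new auth key without tags.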
Great guide but I am getting "Error: only 0 Devices available, 1 requested. Exiting." on that docker run -it --gpus=all ... command on my RTX 4080 :/. Any idea why pls?
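Editor's note: "only 0 Devices available" usually means Docker cannot see the GPU at all — most often the NVIDIA Container Toolkit is missing (Linux), or the WSL2 backend / NVIDIA driver is out of date (Windows). If you prefer Compose over `docker run --gpus=all`, the equivalent reservation looks roughly like this; the image tag is an assumption, check Immich's own CUDA compose file:

```yaml
services:
  immich-machine-learning:
    # CUDA build of the ML container (tag is an assumption; see Immich docs)
    image: ghcr.io/immich-app/immich-machine-learning:release-cuda
    deploy:
      resources:
        reservations:
          devices:
            # equivalent of `docker run --gpus=all` for one GPU
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

If this fails with the same error, verify `nvidia-smi` works on the host first, then that the container runtime can see the GPU.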
This is what I'm doing it right now
Have you found whether there is a minimum VRAM requirement for these models? This is the kind of thing I would love to use a server I built from spare parts for. It uses a GTX 1070, and while it seems to meet the software requirements, I'm not sure if the 8GB of VRAM is enough.
Can you go over the tags in the last part? I cannot connect to my Windows 11 machine from inside the Immich container running on my Unraid system.
Great video. My use case is running Synergy over Tailscale with Windows, Mac, and Linux machines that sit on different wired networks.
Possible to do the same for video editing/rendering?
It would be nice to see how the immich server is set up so that it is within your tailnet. I'm trying to follow the Immich instructions for Unraid (using docker compose) and then add a tailscale sidecar service. I just can't seem to get the immich server to resolve where the redis and postgres services are using the tailscale sidecar method.
Are you using the unraid compose plugin? It gets a bit confusing if you're not. -Alex
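Editor's note: with the sidecar pattern, immich-server shares the Tailscale container's network namespace, so `redis`/`database` resolve through whatever DNS that namespace uses. A common gotcha is the sidecar accepting MagicDNS and clobbering Docker's embedded DNS; setting `TS_ACCEPT_DNS=false` often restores service-name resolution. A rough, untested sketch (auth key is a placeholder; redis/postgres must be on the same compose network as the sidecar):

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: immich
    cap_add: [NET_ADMIN]
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx   # placeholder: your own auth key
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_ACCEPT_DNS=false           # keep Docker's DNS so `redis`/`database` resolve
    volumes:
      - ./ts-state:/var/lib/tailscale

  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    network_mode: service:tailscale   # share the sidecar's network namespace
    environment:
      - DB_HOSTNAME=database
      - REDIS_HOSTNAME=redis
```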
I don't want my gaming PC running 24/7. What happens when I add a new picture to Immich and the PC is offline? Does it fall back to 'regular' Immich, or hold the job until the remote machine is online?
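Editor's note: if the ML endpoint is unreachable, the job fails rather than silently falling back, though failed jobs can be re-queued from the admin UI. Recent Immich releases reportedly accept a comma-separated list of machine-learning URLs tried in order, which would give you a local CPU fallback; verify against the current Immich docs before relying on it. Hostnames below are hypothetical:

```yaml
services:
  immich-server:
    environment:
      # hypothetical: try the remote GPU box first, then the local CPU container
      - IMMICH_MACHINE_LEARNING_URL=http://gpu-pc.your-tailnet.ts.net:3003,http://immich-machine-learning:3003
```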
Hey, is it possible to switch from Google login to passkey (YubiKey) login in Tailscale?
What is the location of the model-cache volume on Windows?
Awesome video as always. I tried something similar but with Home Assistant and Ollama. Both are connected to my tailnet, and I can reach both from my laptop (also on the tailnet), but I can't for the life of me reach (ping) the Ollama tailnet instance from within Home Assistant's terminal, so the Ollama integration fails when I try to connect using the tailnet IP or URL. Anyone know why that is? Does the Home Assistant add-on only allow traffic one way into the tailnet?
I've got Immich running on my QNAP in Container Station, and I now have Tailscale installed on my PC. I followed this video, installed Docker, and have immich_machine_learning running in Docker on Windows. I tried sharing, but since Immich is just running in a container there's no way to "paste" the link. I must be missing something here. I guess I need to run Tailscale in Docker and somehow merge it with the immich_machine_learning container? Honestly, I'm totally lost lol
See the supporting resources. We assume that Immich is on your tailnet already. Look at the Linux host section for Immich; you'll need to get that sorted on your QNAP end.
Got to ask what model the speakers are in the background?
Asking the real questions am I right?
They’re a pair of KEF LS50 I got myself as a graduation milestone with my first real pay cheque after a career-change comp sci MSc course 9 years ago.
- Alex
@Tailscale you know it 😜 I'm contemplating the same: the LS50 Wireless II, or the LSX II since they have better connectivity.
Passive speakers never need software updates though ;)
When I run a smart search my docker desktop displays this error "CUDA driver version is insufficient for CUDA runtime version ; GPU=21219207" Runtime Error on my windows PC. Has anyone seen this before?
Are you on a recent GPU? Drivers?
@Tailscale I am on the most recent drivers via the GeForce Experience GUI.
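Editor's note: "CUDA driver version is insufficient for CUDA runtime version" means the host driver advertises an older CUDA version than the container's runtime was built against; the fix is usually a host driver update (GeForce Experience being current doesn't guarantee the WSL2 side sees it, a reboot or a manual driver install can help). Conceptually the check is just a version-tuple comparison, sketched here (not Immich code):

```python
def cuda_driver_sufficient(driver_cuda: str, runtime_cuda: str) -> bool:
    """True if the driver's CUDA version (top-right corner of `nvidia-smi`)
    is at least what the container's CUDA runtime was built against."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(driver_cuda) >= parse(runtime_cuda)

# A driver exposing CUDA 12.2 can serve an 11.8 runtime, but not a 12.4 one:
print(cuda_driver_sufficient("12.2", "11.8"))  # True
print(cuda_driver_sufficient("12.2", "12.4"))  # False
```

Comparing tuples rather than raw strings matters: "12.10" is newer than "12.9", which a lexicographic string compare would get wrong.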
Too bad it's an NVIDIA GPU and not an Intel Arc one. NVIDIA locks down their GPUs too much compared to Intel.
dislike, because docker