I am following your Proxmox series, and I must say it is very helpful. Above all, the "info" you provide along with the steps adds more value to your videos. Thanks for this one, and good luck with future ones.
I keep coming back to these videos because they're super informative, and I've gone from just following along without understanding to gradually learning and applying what I've picked up here to do other stuff! So a big thank you!!
Using tags instead of notes is also pretty handy for IPs and ports.
That way you can hover over the little bubble next to your CT name and boom, there is your endpoint.
Another great video! I'm LOVING this series.
Man, this tutorial series is just THE best! Thank you so much!
Hey! I followed these steps, but it seems like certain services aren't able to access media handled by other apps. For example, my Plex LXC container doesn't seem to be able to see any content moved/written by Radarr.
How do you handle the lifecycle of each individual container? How do you keep them updated? It looks a bit chaotic.
I converted tteck’s bash scripts to Ansible playbooks. Much easier going forward, especially for installing things like Overseerr, which only needs that much vCPU and RAM because of the build step, which I can skip altogether with Ansible.
Can you provide that script?
Yes, please provide.
Please share.
The man is sharpening the scripts before sharing, I hope.
Do you need to adjust ownership rights for /zpool/media in the node shell for it to be writable in the other LXCs (chown 100000)?
A tutorial for this! I can't create directories within the Deluge command line due to permissions on the ZFS pool!
Yeah, I think so, but I can't seem to find how.
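For anyone stuck at this step, a rough sketch of what's usually meant, assuming default unprivileged containers (which map container root to host UID 100000) and the /zpool/media path used in the series:
# On the Proxmox host (node shell), not inside the container.
# Unprivileged LXCs map container UID/GID 0 to host 100000 by default,
# so the shared directory must be writable by that mapped ID.
chown -R 100000:100000 /zpool/media
# Quick check: the numeric owner/group should now read 100000 100000.
ls -ln /zpool/media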
Is there any advantage to using an LXC for each service vs running all the ARR stack services in Docker inside a VM? What about resources and management: a single VM with Docker vs four separate LXCs?
I don’t see any benefit myself. You can just create one Docker stack with all your services and start it up in less than 30 seconds. None of this manual interaction with mount points.
@@Boburto I was thinking the same thing. Feels like it would be simpler to just run all the ARR stuff in a docker stack with a single compose file vs a bunch of separate LXCs.
I've been looking to move from Docker to Proxmox, just to try something new...
I don't think I'll be moving anymore, tbh, since creating an *arr + torrent client stack is so simple. One button and the entire thing's deployed, on the same network (or you can define another network), with all directories the same and hardlinks working.
Much simpler, and I can't really see Proxmox having a performance benefit either, personally.
You can do this with docker compose, kubernetes manifest, or podman quadlet.
@Novaspirit Tech or anyone else with a solution, could you please help with the following issue:
I went through this video (and the previous one on setting up a NAS), but the Samba share isn't the ZFS pool itself, just a subvol within that pool. However, following the subsequent videos, Deluge and the others can only access the rest of the ZFS pool and not the subvol. How do I ensure that either the Samba share covers the entire ZFS pool, or that Deluge can access the subvol that Samba is sharing?
In which video did you set up the Sonarr mount directory, etc.? I can't seem to find it. Thanks!
At 7:34 into the video, you use Dolphin to access the SMB share and create a folder. Is there some way to use Dolphin file browser with Proxmox?
Haven't tried it, but you could try installing KDE on top of Proxmox. However, then you have to deal with the additional overhead of having a GUI and the risks associated with that.
There are more steps involved in adding a GUI, but that's the basic idea.
Great videos; thanks to you and the people here filling in some of the gaps.
Edit: never mind, double, triple, and quadruple check your spelling.
So would this utilise the network when it is downloading?
Unfortunately this is 3 months late; I would have loved to use LXC containers. But I had decided to use Unraid for some containers and to manage my NAS, and since then I've realised that Unraid sucks for VMs, but I'm stuck on it now because it's managing my NAS. I'm now using Unraid for this stack so that it's not using the network when it writes to disk.
I can't wait to see the end products of all your LXC stuff!!!
Yes, I second that. It will be fun if he makes a summary video showcasing all the stuff he has planned, once the series is over.
Great Proxmox series! Greetings! :D
Thanks for the demo and info, awesome video, have a great day
Why would you use this method with LXCs rather than running all these services in Docker and managing them via Portainer?
Great video. I opted for one of those GL.iNet routers with VPN and AdGuard to connect my Proxmox machine to; it provides redundancy and guarantees the VPN won't leak. Alternatively, I use a Docker container for Deluge with OpenVPN, which is also a neat solution. I am struggling to hardware transcode with the Jellyfin LXC container (TurnKey) and my AMD iGPU though; I'd love to see a video about that.
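On the transcoding point, the usual route for an LXC is to pass the host's /dev/dri nodes into the container config; a rough sketch under those assumptions (container ID 110 is just an example, and the Jellyfin user inside the container still needs to be in the video/render groups):
# On the Proxmox host: expose the iGPU render nodes to the container.
cat >> /etc/pve/lxc/110.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF
pct stop 110 && pct start 110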
Are you going to add Home Assistant as well? Also doing a Lancache server for games, updates and so forth would be great!
Yes, I will be doing that too, as part of a different series from this media server one.
@@NovaspiritTech Will be looking forward to it as well.
This is exactly what I’m trying to do
How do you mount point to the deb-nas instead of the zpool? The type is "directory".
You need to mount it on the host first and then assign it in the configuration for each container.
I actually built this all in Portainer and found it difficult, so I used CasaOS. It has its struggles too, but I found that using Portainer and CasaOS to manage the ARR stack and deployment was really easy once I figured out the problems.
I use your Gluetun setup for my VPN tunnel with Deluge,
and I use WireGuard to access it from the outside.
But what happens if the tunnel goes down on the OpenWrt side? I think it would leak your IP/DNS. What would you do when that happens? I was setting it up, then simulated the tunnel going down; I ended up setting up Gluetun in docker-compose with the whole ARR stack. If you can explain how to set up a kill switch on OpenWrt in case the tunnel goes down, that would be great.
Hey, may I have the Docker Compose file?
For those facing an issue where the downloaded content can't be opened in Jellyfin or in the SMB folder:
it's a permission issue caused by Deluge.
Here's how I fixed it:
In the Deluge LXC / Console
run: nano /etc/systemd/system/deluged.service
set the UMask value to 000: UMask=000
run: systemctl daemon-reload
run: systemctl restart deluged
thank you, it worked
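The same change can also be made without editing the unit file in place, via a systemd drop-in override (a sketch; the drop-in survives package updates):
# In the Deluge LXC console: create a drop-in that relaxes deluged's umask.
mkdir -p /etc/systemd/system/deluged.service.d
cat > /etc/systemd/system/deluged.service.d/umask.conf <<'EOF'
[Service]
UMask=000
EOF
systemctl daemon-reload
systemctl restart deluged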
I know you're taking certain steps for this video's purposes, but wouldn't it serve one best to install all the ARRs in one VM or LXC as opposed to running a container for each?
I like the idea of running each app in its own LXC; it's what I'm doing now. I didn't like having all the apps in the same place on my previous home server (a Raspberry Pi 4).
I don't want the whole setup to break due to an update of one app (which will probably never happen), but if each app runs standalone, it's better in my opinion.
Wouldn't one VM take more resources, though? I believe this approach has resource constraints.
Hi Don, I love the videos and the time you put into explaining how things work. I have a question, though: I have set up multiple VPN accounts (3, to be exact) so the system can bounce back and forth, and this works fine. But I do notice that from time to time Deluge will report my public IP instead of the VPN's. Is there a way to prevent this, or to set up a firewall rule so that anything connecting through OpenWRT without an established VPN connection gets dropped into a sinkhole until the VPN connection is up?
How did you connect the folders with each other? Plex doesn't find the folder where the movie is downloaded.
I understand how to install the whole ARR stack and configure everything; there's just one thing I'm missing: the shared storage, which is why I found your other video about setting up a Samba share on Proxmox. I followed it and it works. Thanks for that! However, the result of that video is a share named (in your case) "public" on your Samba LXC. How do you connect the ARR stack to that share? I see you do a mount for Deluge at 4:39, which I assume is the connection, but I don't see any "public" share in that command. You use "/zpool/media" - where does the media folder come from?
Your videos are great and i appreciate the time, work and effort you put into them. Keep up the good work!
Just to add: the timestamp is for this video. Also, I tried the mount command with "/media" because my Samba share is "media", but that doesn't work: mp1: unable to hotplug mp1: directory '/zfs-media/media' does not exist. I know I'm missing something, I just don't know what.
Having the exact same issue; I've done it over like 5 times across 2 days.
Literally just figured it out: my mount point was /spool/subvol-100-disk-0/share, "spool" being the name I gave the pool. The subvol was there (maybe I didn't rename it to media?), and "share" is the one created in the vid.
Hope that helps.
@@Cughin Nice one. I was stuck on this for ages!!
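If you're not sure which host path to bind-mount, a sketch of how to find the dataset's real mountpoint and attach it (container ID 105 and the paths are just examples; use whatever zfs list shows for your Samba share):
# On the Proxmox host: list datasets and where they are mounted,
# e.g. a subvol like /spool/subvol-100-disk-0 rather than the pool root.
zfs list -o name,mountpoint
# Bind that host path into the container at /mnt/media.
pct set 105 -mp0 /spool/subvol-100-disk-0/share,mp=/mnt/media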
I've been using the ARR services in Docker; is there a benefit to running them in LXCs rather than Docker?
At what point did I hiccup if I'm having to run chmod -R 777 /zpool/media every time Deluge downloads something so that Plex can see the content?
Same, I can't move or add files unless I chmod 777 every time.
@@dillonkifer8006 ChatGPT gave me some suggestions about modifying shared group permissions and things like that, but none of it was covered here, so it left me really confused. It's fine for the time being, so I just queue up a few downloads and, when they're done, run it and make sure everything's going smoothly.
Somebody on the Discord told me to match the permissions, but I'm struggling to figure that bit out.
@@dillonkifer8006 I wanted to come back to this since I cobbled together a working scenario.
Make sure to click the cog to show advanced settings.
Sonarr/Radarr > Settings > Media Management.
In the bottom area, check the chmod option, set the numbers to 777, and save!
So far it's been going for an hour or two without manual intervention, but I'm triple-checking I didn't skip bits.
@@Calliico Thanks for that. I'm very new to these things and following Don's steps here, but I'm also having issues with content downloaded from Deluge to the SMB share giving access denied. The files do show in Jellyfin, but I can't play them, nor open them in the SMB share itself, as it says access denied.
I checked the settings you mentioned in Sonarr/Radarr and see a checkbox (not checked) for Set Permissions, so I checked that. I also changed chmod folder to 777 and left chmod Group blank. I will try getting some new stuff from Deluge and see if it works. Many thanks for the help.
@@Calliico Thanks.
Here's how I fixed it for every download.
On the Deluge LXC / Console:
run: nano /etc/systemd/system/deluged.service
set the UMask value to 000: UMask=000
run: systemctl daemon-reload
run: systemctl restart deluged
Hey, I have a couple of questions about the series, but first I want to say thank you.
I'm facing a problem: when I tell Radarr or Sonarr to download anything, Deluge downloads it and sets the owner and group to root with no read permissions for anyone else. That causes a problem where I can't open any movie and Plex can't read it either.
Do you have a solution, or can anyone help me, please?
Found the solution for my problem; it's in the comments, from the user @Calliico. You can find it if you scroll to the bottom.
Two more things: chmod 777 the whole media folder just to make everything go smoothly, and in Radarr and Sonarr there is an option under Settings -> Media Management.
Click Show Advanced and check the box for Create Empty Folders so they can grab the media from the download folder.
@@89ajworld Thanks.
Here's how I fixed it for every download.
On the Deluge LXC / Console:
run: nano /etc/systemd/system/deluged.service
set the UMask value to 000: UMask=000
run: systemctl daemon-reload
run: systemctl restart deluged
I am running 4 Proxmox nodes. Three are just whatever; the 4th is running TrueNAS with 14TB of drives passed in. Could I "mount" to the path of my TrueNAS SMB shares?
What happens if you reboot the system? Are the 10.50.50.XXX IPs static? Or not static, but they just don't change?
It's a curiosity that arose when you configured the IPs.
I'm thinking about integrating OpenWrt into my system to do it like this, but I have a doubt at that point.
You should at least make static leases for those hosts so your port forwarding doesn't break.
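On OpenWrt that can be done from LuCI (Network > DHCP and DNS > Static Leases) or from the shell; a sketch with a made-up hostname, MAC, and IP:
# On the OpenWrt container's shell: pin a container's lease so port forwards keep working.
uci add dhcp host
uci set dhcp.@host[-1].name='sonarr'
uci set dhcp.@host[-1].mac='AA:BB:CC:DD:EE:FF'
uci set dhcp.@host[-1].ip='10.50.50.10'
uci commit dhcp
/etc/init.d/dnsmasq restart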
If I have TrueNAS on my Proxmox where I'm using a mirror for my HDDs, should I just mount them into the NAS server directory, and if so, how do I authenticate?
Everything worked for me up until accessing the Prowlarr web interface. OpenWRT recognizes the LXC, gives no errors when setting up the port forwarding, no errors that I can find in Proxmox, but I can't access the web interface, it just sits and spins until the connection times out. No issues with Deluge, Sonarr, or Radarr. Any suggestions?
I can access Prowlarr's web UI if I don't establish the connection to openWRT, but it stops working when I switch back to vmbr1 and set up the port forwarding.
Hi, how do you resolve the nobody:nogroup problem on the mounted folder?
I got this instead of root:root.
This issue doesn't let Radarr (or any app) access the mounted folder :(
Thank you for your help.
I mounted the NFS share:
# pct set YOUR_CT_ID -mp0 /mnt/share/,mp=/mnt/share
After that: # chmod 777 /mnt/share
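For the share itself, the usual pattern is to mount it on the Proxmox host first and then bind it into the container; a rough sketch (the server address, export paths, and container ID 120 are placeholders, and cifs-utils is needed for the SMB variant):
# On the Proxmox host: mount the NAS export, then bind it into the LXC.
mkdir -p /mnt/share
mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/share
# ...or, for an SMB share (e.g. from TrueNAS), something like:
# mount -t cifs //192.168.1.50/media /mnt/share -o username=smbuser,uid=100000,gid=100000
pct set 120 -mp0 /mnt/share,mp=/mnt/share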
This is just what I needed. Do you have any advice on how I would do this same process, but move the "deluge" side of things to a seedbox? I'm trying to keep torrenting of my Linux ISOs off my home network for the sake of privacy. I presume I'd use some sort of sync between my Proxmox server and the seedbox, but I'm not sure what's best. Thanks for the video! Stuff like this makes Proxmox a bit easier for noobs like me.
Did you ever figure it out?
I'm having some kind of permissions issue where Radarr & Sonarr won't let me add a root folder for my mounted Unraid share. I get the following error: "Unable to add root folder
Folder '/mnt/Unraid-Media/Movies/' is not writable by user 'root'". It's odd, because I use this same mount on other LXCs and don't run into this issue. Any idea?
Did you find a fix for this? I'm having the same issue.
@@PandaTaco Yes, it's because it was running in an unprivileged LXC. There are ways to map the LXC permissions to the Proxmox host permissions, but it's honestly a huge pain. My recommendation is to just create a new container that is privileged. It's a larger security vulnerability, though.
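For completeness, the idmap route mentioned above looks roughly like this (a sketch that maps container UID/GID 1000 straight through to host 1000; container ID 130 is just an example, and the host also needs "root:1000:1" added to /etc/subuid and /etc/subgid):
# On the Proxmox host: append an idmap to the container's config.
cat >> /etc/pve/lxc/130.conf <<'EOF'
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
EOF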
I enjoy your content, thank you. After creating a mount point for Deluge, it says the container startup has failed. A bit stuck, any help? Update: I had skipped the Jellyfin setup since I wasn't going to use it; if you are like me and can't start a container after adding a mount point, go watch that video for the command-line information needed to make it work.
I would use Docker for this. I would use Gluetun and route all of my traffic through it. Keep the stack updated, and if the tunnel goes down, the services go down.
Curious about Gluetun.
What does that achieve? I've seen it's basically a VPN client.
Could Tailscale/Headscale not replace that and be easier?
Or is Gluetun solely for downloading your "Linux ISOs"?
Why use Docker when Proxmox has LXCs?
Ease, @@edwardCactus.
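For anyone wondering what the Gluetun pattern looks like, a minimal compose sketch (the provider, credentials, and addresses are placeholders; containers attached via network_mode share Gluetun's network namespace, so if the tunnel drops they lose connectivity instead of leaking):
# Write a small compose file and bring the stack up.
cat > docker-compose.yml <<'EOF'
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad        # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=changeme      # placeholder credentials
      - WIREGUARD_ADDRESSES=10.64.0.2/32    # placeholder address
    ports:
      - 8112:8112                           # Deluge web UI, exposed via gluetun
  deluge:
    image: linuxserver/deluge
    network_mode: "service:gluetun"         # all Deluge traffic rides the VPN
    depends_on:
      - gluetun
EOF
docker compose up -d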
Can you make a tutorial on how to bind my Synology NAS as the primary disk space? At my house, Proxmox, Sonarr, and so on are installed and running locally on an Intel NUC. I want my movies and such to be saved to my NAS, where I have much more disk space.
thank you so much for this!
I also follow this with a lot of interest, Don!
1. What I don't get is: did you make a directory on your zpool named media?
2. Can we access this media share also through the Samba share?
Refer back to my Jellyfin video. I did this so there's less overhead on the CPU, instead of sharing the files using the SMB protocol on the same Proxmox VE. Yes, it's also attached to my NAS, so the media is browsable.
@@NovaspiritTech Watched the jellyfin video again, it's clear to me now 🙂 Thnx !!
No matter what I tried, I couldn't get hardlinks to work with this setup.
When I installed the ARRs through TrueNAS apps, everything worked fine.
Anyone having issues with that as well?
I run Proxmox on my NUC, and Sonarr inside Proxmox. I want to use my Synology NAS as file storage, so if I download something with Sonarr, I want to save it directly to my NAS.
Can you make a video, or explain to me how to configure Proxmox the right way to do that?
Can you help me please? I created my SMB share; I can see it and remove and change files. But when Deluge downloads to the mounted share, the files don't show up anywhere else. Using the file browser in Deluge, they're there, and I can see and play my downloads from the mounted drive, but they don't appear anywhere else. What am I doing wrong? 😢 PLEASE HELP
Is it possible to run these LXCs through a VPN? I see people setting this up through Docker, but I'm trying to see how it would be possible the LXC way, since everything is separate.
Is it too early to ask for the next part showing FlareSolverr and indexer integration? Also, based on your judgement and decisions, I would like to hear your choices for a self-hosted form filler (not a password manager), those browser plugins that help you fill the same data set into forms again. Hope to see something new soon. Good luck.
You could have used any host with a browser and assigned it to the new bridge. Then access OpenWRT on the LAN side, rather than opening port 80 on the WAN.
You could have also used VLANs rather than a new bridge.
Good vid though!
Interesting. Why not just put everything in one VM with Docker, then use ufw on that Linux VM?
I think it can use 0.0.0.0 as the IP as well, since all the services are on the same "interface" (vmbr1).
Also, is it possible to use the GUI to create the mount points instead of the command line, or is it not capable of mounting a ZFS pool in that fashion?
Very, very cool. Is it adding a lot of resource usage with the containers?
Also, with a 4-port gigabit Ethernet PCIe card, you could set up one port to be the "LAN" for your Proxmox box (vmbr2 or 3, with a static internal LAN IP), then have vmbr0 and vmbr1 as passthrough to your OpenWRT build. Also, set it as the first bootable container when your Proxmox box starts up (under the Options tab, Start/Shutdown order) so it starts before the other containers and keeps them from going wonky.
Make sure the WAN firewall is bulletproof, because it could be exposed to the internet without proper rules in place.
Keep 'em coming!!!!
Correct, you will need to use the command line for the ZFS pool; otherwise you can set up mount points with the GUI on standard mounts.
Usage-wise it's really not bad. Direct streams from Jellyfin are great, but I can only do about 2 transcodes at a time due to a weak GPU.
I've run into a strange issue. I installed the Samba share (as described in the 'Setting Up NAS Server On Proxmox' video) and added an extra share (as described in the 'iGPU Transcoding In Proxmox with Jellyfin Media Center!' video). I can access the 'media' share in Windows and create folders, but Deluge only sees the root of the share (/mnt/media), and if I create a sub-folder (Downloads) and change this in Deluge as well, the error in Deluge doesn't go away. So, in Deluge this doesn't work: /mnt/media/Downloads ... but this does: /mnt/media. This is probably an authorization inheritance issue, but I can't fix it. Does anyone have any idea?
Fixed it. I hadn't correctly mapped the container to the partition on the SSD, and the partition didn't have the correct permissions.
I am building a server and was thinking TrueNAS SCALE; what OS would you use?
I need a NAS, Jellyfin, Home Assistant, torrents, and a VPN.
Thanks
If you already have a NAS with its own OS/system to manage storage, then you don't need TrueNAS SCALE; in that case just mount your NAS path over the network and use it that way. If access speed to the storage matters, then go for a native TrueNAS SCALE OS with that same system handling the containers/VMs of your choice. Also, down the line you might feel the need for hardware acceleration in Jellyfin for video decoding, so I would recommend either choosing a strong system with hardware encoding/decoding support or using a Docker container on Linux rather than TrueNAS. In your stack, Home Assistant is not resource hungry, nor are torrents or the VPN; Jellyfin is the only one needing high resources compared to those three. So with some hardware decoding support, I suppose you are good with any choice of OS. If not, then go for separate systems and manage them accordingly. Good luck!
@@iuhere So for my needs, would you choose Proxmox or TrueNAS SCALE? I am using a 6-core Xeon X5675 server and will be buying ECC RAM and a GPU. I am also considering a Windows gaming VM for my Shield.
Cheers
@@gasmoney9319 I would choose Proxmox for your case.
Is there a script to configure the ARR services in a single Docker container stack?
Yes, but why would you use Docker containers when Proxmox has LXCs?
@edwardCactus If I want to replicate, back up, or move the stack (all the ARRs), I would have to do that for all the LXC containers, but with a single VM and all the ARRs in Docker, all I would have to do is back up or move the single VM.
@@mindshelfpro To each his own, but it seems redundant. The overhead added does not provide enough value, in my opinion. Backing up / moving each container is not a daunting process, and running a container in a VM as opposed to directly on the hypervisor just seems unnecessary.
I have decided to create only one VM running Ubuntu Server and deploy all these services in containers... I have found this approach consumes less vCPU and RAM on the host machine.
How would a VM end up using less CPU and RAM than LXC containers? You would have the overhead of the VM, right?
@@basdfgwe1 A VM with all the services on it probably uses less than making an individual LXC for each service.
If using a VM instead of LXCs, how would you share the zpool on Proxmox with the VM to write the media files?
@@cetrockz Great question. I ended up just using individual LXCs, but I imagine you could follow a similar process using a mount point for a VM.
These videos have helped me so much getting everything up and running the way I wanted. I was able to shut down the Windows VM that was running all my ARR services, saving a heap of drive space, CPU, and RAM... I have noticed there is now a FlareSolverr helper script; curious, has anyone tried it?
Never mind, installed it, tested, quick...
Having been in talent management for over a decade, I can say that your content is good enough to grow. You should do better with the hashtags (#) or the keywords, and you will be good to go 👍
Very good my friend
Nice video! But let's not gloss over the trust involved in pasting script commands from some website into a hypervisor root shell...
Thank you.
The ARRs are so bad. It’s incredible to me that in 2024 this is the best we have. I’ve spent at least as many hours configuring, troubleshooting, and managing the ARRs as I have enjoying them, and I still have to download manually for certain shows, especially daily shows, which Sonarr is too slow to find.
Sounds like your sources are the ones that are bad; I don’t have any issues with them myself.
@@ElmokillaXDK Nah, that ain’t it. I’ve had them working fine for months, everything automated from overseerr all the way through Discord end to end, and then one day, boom, they stop working and try to delete my entire library, without even an update or any change. Then the support forums go silent and you can’t get anyone to even comment on mappings, logs, or anything. They’re great when they work, but the whole setup is too brittle and unpredictable.
@@ryanmalone2681 What? That happened because of something you did. Sonarr has never deleted a thing in my library; your setup was just a little too janky, and I’m sure you just blamed Sonarr for this with no logs or anything, which is probably why nobody replied to your post.
@@ElmokillaXDK Seriously, everything was working fine for months, then all of a sudden I’m getting Discord notifications that all this content is being deleted. Luckily I noticed straight away. I wasn’t even logged in. It even did it on my backup server, which is only on once a week.
@@ElmokillaXDK One would need to turn the container back on to get the logs and let it delete god knows how much of my 100TB library. I had added all the mappings.
Please do the same arr stack on the pi hosted series
I think the Pi might be a little underpowered for the stack; not sure, but the Pi can't handle too much load, I suppose. 🤔
Lol.. Who needs Whisparr when you have ...Hub
Like a pirate, lmao. I WONDER WHAT THESE PROGRAMS COULD BE USED FOR!???
arrrr
Sourcing and downloading Linux ISO’s, obviously.
@@hyperprotagonist Finally, someone with true intent and the correct answer 😊
Pity you didn't explore using the new SDN routing options rather than installing OpenWrt. Nothing against OpenWrt, but its routing functionality is not really needed anymore.
In my setup the torrent manager runs on my wii running wii-linux-ngx