Just in case… you show the password used for web access to your main instance of Pi-hole. In case that was a "real" password, it is now compromised. FYI.
Hehe, good catch. Luckily I only used it for like 2 or 3 locally hosted services that require only a password. For everything public or requiring a full login I use Vaultwarden. I do appreciate the heads up though!
Running a Proxmox cluster with high availability is one option; this makes sure that nothing is lost when one node goes down, and it also means you don't really need to back anything up. Another option is to set up a Proxmox Backup Server, which lets you back up a complete node all at once. I believe it also has a few more options than just backing up individual VMs to a separate storage location. I guess installing Proxmox on a mirrored ZFS pool is also an option.
Well, having a cluster set up is indeed ideal. For home use with less infrastructure it depends; there are many options available. You can always just set up a Proxmox Backup Server and, after a reinstall, import the complete node once you've added it back as a storage option. You can use TrueNAS in this case, but if you are good with the command line, you can also just set up a ZFS pool within Proxmox with a couple of extra drives. After a reinstall you can simply search for the pool, import it, and restore all VMs.
Probably due to Pi-hole not being able to bind to port 53. Check whether port 53 is already in use on the VM with $ sudo lsof -i :53 . I believe some Linux distros come with their own resolver, which is not ideal. You can disable the service, but depending on the distro this can cause issues. Especially with Ubuntu I believe you will have to add a couple more lines to the docker-compose file, but for that I'll refer you to the docker-pi-hole GitHub page.
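On Ubuntu specifically, the resolver holding port 53 is usually systemd-resolved's stub listener. A common workaround (assuming your distro actually runs systemd-resolved) is to turn the stub listener off:

```ini
# /etc/systemd/resolved.conf
[Resolve]
DNSStubListener=no
```

Then restart it with systemctl restart systemd-resolved, and make sure /etc/resolv.conf points at a reachable resolver (for example the Pi-hole itself) so the host can still look up names.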
Friend, I can't install a VM on my Supermicro X9DBL-iF with 2x E5-2470 v2; it won't let me add a 20-core / 40-thread VM: "The product of vCPUs, cores and threads must not exceed 255 on this system." It's for TrueNAS Scale. Do you have an idea? Thanks!
I did all the setup with Pi-hole: I added my Pi-hole's address as the local DNS server on my router (I didn't change it on my PC as you did in the video). I did the setup with NGINX (had to run it in a separate LXC container, and added it to the Portainer agent by adding a new environment in Portainer), added "port.lan" exactly as you did in NGINX, and Pi-hole's DNS record has the domain "port.lan" pointed at my NGINX IP address. But once I try to access "port.lan" nothing happens (can't find the site). Any ideas?
This is impossible to say; I cannot help you with this, sorry. It mostly depends on your personal devices, setup and config. More likely than not this is just DNS or IP leaking, either from your PC or from your router (and only if your setup is actually correct). To start debugging, you should first disable IPv6 on your PC and directly use the Pi-hole DNS. If you have a Pi-hole DNS record for port.lan pointing to NPM, and a proxy host to the correct ip:port on NPM, it should work. If it does, evaluate the network traffic using something like traceroute. If it does not work, check your record and check whether the IPs are actually correct.
@@matthiasbenaets Hey, I resolved the issue by installing Pi-hole in an LXC container (without Docker) and it's now working super smoothly. The only thing I'm having a hard time with is getting the NGINX reverse proxy to serve Pi-hole's address with the /admin at the end. Thank you again for the help and Merry Christmas!
Hello, great video. Could you do a tutorial on setting up, from scratch all the way to the end, how to create an NGINX website, and also use Nginx Proxy Manager to get it hosted online?
I'm having the same issue your friend is having. I'm trying to figure out how to use Proxmox and TrueNAS to make an automated media server with Jellyfin; it's definitely a lot more complex than I thought. If anyone in the comment section is willing to help me, plzzz let me know, I have a Discord. 😂
What is the point of presenting hardware at the beginning of the video, with details on the importance of ECC RAM, if you then install Proxmox in a virtual machine (vda) and not on bare metal, which would avoid a useless, resource-hungry layer of encapsulation? It's probably because I'm too old or too stupid.
The initial install in a VM is purely because I don't have a spare system available, nor am I able to capture the video output while doing it. For a video this is more user-friendly to follow. I never recommend virtualizing Proxmox unless it's inside Proxmox itself for testing purposes. Proxmox is a type 1 hypervisor, i.e. a bare-metal hypervisor.
Your video started great, but gradually you seemed to assume far too much. A more comprehensive tutorial with exact steps from Portainer onwards would be helpful. For example, when you showed us how to set up a TrueNAS share with Nextcloud, you failed to show us how to get the uid. You also assumed that the Portainer setup went without hiccups, but it didn't. It would help if you informed us that we needed an account for Portainer and could use only five nodes for free. As already stated, the amount of effort put into the first half of the video was superb. I do not usually comment, but after wasting time trying to work around what you started, it was only fair that I said something. I ended up building Nextcloud on an Ubuntu server on Proxmox by watching a LearnLinuxTV tutorial.
Hi, thanks for the feedback. I tried to fit as much info as possible in as little time as possible. My aim in these guides is not a hand-holding experience but rather a teaching one. UIDs and such are just one Google search away, and this is not a Linux tutorial. All the services presented here are ones you and others might find useful in a homelab; that does not mean all of them will get a full-blown tutorial, but rather some tips and tricks, especially since not everyone will use them.

Care to elaborate on the Portainer and Nextcloud issues? I can't recall that you need an extra account for Portainer, unless you want to use the EE version (which, again, is not really relevant when starting out). If your only issue with Nextcloud was the mounting of the SMB share in the LXC, you could have also used the Dockerfile instead (from the GitHub repo); this will install Nextcloud with the needed Samba packages. Alternatively you can also just install them manually in the Docker container. The SMB option should then become available.

My method maybe wasn't too clear, since the uid might not be the same depending on the VM/LXC used, but this way it does not require people to learn how to use custom Dockerfiles or run the same commands every time they pull the latest image. If you don't understand the usage, here's a quick explanation as to why it's uid 33: the persistent data generated by the Nextcloud container is created by user "www-data" (at least in my case). To prevent any future permission issues, I mount the SMB shares as this user. To find this uid and gid I can simply run $ id -u www-data, or with flag -g for the group.
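As a concrete sketch of that mount (the share path, mount point and credentials file below are made up; uid/gid 33 is www-data on Debian-based systems):

```
# /etc/fstab entry in the LXC - mounts the share as www-data so nextcloud can write to it
//truenas.lan/nextcloud  /mnt/nextcloud  cifs  credentials=/root/.smbcredentials,uid=33,gid=33  0  0
```

The /mnt/nextcloud path can then be bind-mounted into the Nextcloud container as its data directory.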
Hi, thanks for the kind words. I did not go into this further because imo it's only relevant for people who want to access it over the internet. This is only 1 out of 3 possible scenarios, the others being using a local IP or setting up the container with the Pi-hole network and using a local DNS. In most situations neither of those needs the extra security. You did remind me to re-enable this for my personal setup, so thanks! Of course this is highly recommended when making it available to the internet. For anyone else interested and reading this, I'll add a short explanation in the description under notes.
Absolutely fantastic, saved me from some guides by influencers that just gave the wrong advice. Thanks!
Hi Matthias, why I only just found your video is beyond me, as I have been struggling with all these topics separately for the last few weeks. Turns out all the answers were already here! Currently following this amazingly well-put-together guide, thank you so much for this.
Thanks! Glad you found it informative
Same here, thanks a lot for this great guide, it's the best starting point for any beginner.
I hope there will be a part 2 related to security.
Man, the amount of times I have searched this exact title... Thank you so much for the content!
Surpassing video 👍
1:04:11 -- qm set 100 -scsiX /dev/disk/by-id/devicenameZZ for passing through a storage device directly to VM 100
45:19 and 1:18:04 -- Async IO options: default (io_uring), io_uring, native, threads . . . I shall have to try these.
It's one of the best tutorials for a homebrew solution.
You sir do not understand how much I appreciate this video. The amount of effort you have put into creating this tutorial is beyond the thumbs up button, the subscribe, and commenting my appreciation. I hope you are able to continue this and I will support this kind of content as much as I can. I am so glad to see this, made my year. You sir are the GOAT!!!!!
Thank you very much 😃
Thank you very much for putting this guide together! I feel like I can finally get up to speed with everything else now because of this homelab setup guide.
The network stack in Docker was very informative. Going to use that in my home lab, and will integrate Unbound into it, and maybe Vaultwarden.
Matthias, you came up on my YouTube autoplay literally while I was finishing up the build for my server. I'd been planning to do a Proxmox / TrueNAS build with nearly all the services you showed off here. This has been beyond valuable. And your ability to navigate all these installs and configs essentially live without any major hiccups is extremely impressive. Thanks so much.
Thanks for the compliment Luke!
You put a lot of work into this and deserve more views. My personal way of doing stuff at home now is in fact to not expose much anymore. It's just too hard and tiresome to work on security around these services (or any services). So in my own setup, I place a WireGuard VPN and don't expose other ports and things: VPN in, use the services. In its own way it simplifies a fair amount. This is less nice if you want to share things, or give more to users, but equally, you can choose to allow them VPN access.
Still appreciate all the stuff here, really good video.
Thanks for the kind words!
If security is a high priority, a VPN is indeed a safe bet. Personally I also only make a few things available over the internet. Something like basic auth or a service like Authelia with 2FA can make it pretty robust as well imo.
This is INSANE. Thank you so much for making this video! You're a lifesaver!!!!!!!
Great and comprehensive guide, thanks a lot Matthias!
Incredible community contribution. Thank you
Thanks from Egypt
I learned a lot and applied it on my server, thanks
Amazing guide my friend. Thank you so much.
Plz make more Homelab vids 😭 This was such a good guideeeee
20:22 You can use Ventoy: flash the thumb drive, then just drop whole ISOs onto the main partition; it will let you select one of them at boot
Useful video, very nicely explained
very funny video! i laughed very hard! Thank you for this entertainment! Keep on going!
Interesting setup thanks for sharing.
Proxmox + TrueNAS are too much for my mini PC's RAM. I do similar things with Debian 12 + Portainer + docker compose + a Traefik reverse proxy.
REALLY NICE STUFF!
Could tell right away from the accent :p
This video is great, thanks!
Great video, thank you
Great video. Thanks for sharing. I hit the subscribe button. Would a container solution like Nextcloud work as well?
Hey Matthias, thank you so much for the video, if it wasn't for this I would have probably taken much much longer to figure this all out on my own. I'm done setting up Wireguard but I'm a bit lost on how to connect a PC to it. The readme gives me a hint of how to do it but I just don't know if I should recreate the container with the peers info (private/public keys) or if I can just edit that into the config file on the server.
Thanks, glad to hear! I have to be honest: I haven't messed with WireGuard for a while. It's something you set up and forget about (even when you use it daily), so it has been a while since I messed with the container. If I recall correctly, the easiest way was to edit the compose file (adding a peer) and just rebuild the container. With the Docker container, I created a persistent volume where all the configs are stored and easily accessible. To connect a PC, I pretty much just copy the .conf file (not messing with the keys); in the official GUI application you can just use that. If you are using the CLI tool, just follow the official guide and use the keys. If you are using a 3rd-party tool, I can't really help, but you will probably just have to fill out the content of the keys as well. Note that with VPNs there are many variables that can mess with a valid connection: IP, DNS, firewall, IP forwarding, etc., so you need some knowledge to get it actually running. Not something I can fully explain in a YT comment, but best of luck.
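For anyone following along, a minimal sketch of that compose approach, assuming the linuxserver.io WireGuard image (the SERVERURL and peer names below are placeholders):

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - SERVERURL=vpn.example.com    # your public IP or DDNS hostname
      - PEERS=laptop,phone           # add a name here and recreate to generate a new peer
    volumes:
      - ./config:/config             # peer .conf files (and QR codes) end up in here
    ports:
      - "51820:51820/udp"
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```

After docker compose up -d, the generated peer_laptop/peer_laptop.conf in the config volume is the file you import into the WireGuard app on the client.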
@@matthiasbenaets Oh man, thanks for the quick reply. Just as an update: I managed to figure out how to copy the conf file from the remote. I didn't get at first that the scp command was to be run on the Proxmox instance; then to copy stuff you need to log in either with an SSH key or as the root user (which is disabled by default). Using the conf file in WireGuard was actually easier than I thought. Now I can't seem to create an NGINX container because port 443 is already allocated (I guess I can't have both Pi-hole and NGINX running on the same VM?)
@@virtualnk5825 Great, and yes, you are correct: Pi-hole already uses 80 and 443 since you are routing all your traffic through it.
@@matthiasbenaets One last question if I may: would you or anyone know why our Pi-hole's memory usage is so high? I have added more RAM to the Debian container, and Proxmox shows that it's not taking more than 25% memory, but Pi-hole shows 83.2%, and in the video it's also at something like 76%. My guess is that Pi-hole sees the memory usage across the whole server? (At this point I only have WireGuard installed on this container, so TrueNAS is the only thing that could affect it.)
@@virtualnk5825 I think it's just a side effect of being containerized. I never had any issues with memory. Memory is memory: if something needs more, it will be allocated, and the rest is used as much as possible for caching but freed up when needed.
Hello Matthias,
Great instructional video, it was easy to follow and easy to understand.
Question though: are you able to have TrueNAS Scale run SMART tests while it is virtualized? I am able to run SMART tests on the hypervisor (Proxmox) but not in the VM itself.
Thanks
Thanks, and yes: pass through the SATA controller or an HBA and SMART tests work just fine.
@@matthiasbenaets Thank you for confirming.
Great video! How did you solve the copy/paste problem between your home computer and the Proxmox console?
In the webui, copy/paste (to my knowledge) only works with the Proxmox and LXC shells, not normal VMs. Just use the normal Ctrl+Shift+V (or just middle mouse on Linux). For general VMs I recommend just using SSH.
20:35 ventoy my love
Hi Matthias, first off, thank you so much for this video. It is beyond helpful, and has let me get my homelab up and running. When I attempt to bring up the Nginx Proxy Manager compose stack, I get errors that the ports are already in use. Pi-hole has some of the same port settings: ports 80 and 443 are used by both. How would you fix this?
edit: I fixed this by running in a different lxc (different ip) so there were no port conflicts, would still be interested to know how to run it in same lxc as pihole.
Yeah, you can't really remap these ports because they are used for the HTTP(S) protocol. I'd just use separate machines/VMs for these two services.
Hey man, I'm running into an issue with Pi-hole and NPM. They are fighting over port 443. Any advice? - Solved, along with some other issues; I'll post my solutions in this thread if anyone is interested!
If anyone is having this issue, change the ports on pihole to 8080:80 and 4443:443
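In compose terms that remap looks something like this (a sketch; only the ports section of the pihole service is shown):

```yaml
services:
  pihole:
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"     # admin UI moves to http://<host-ip>:8080/admin
      - "4443:443/tcp"    # frees 80/443 on the host for nginx proxy manager
```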
If you are having an issue w/ NPM reaching your proxmox installation and throwing 401, it's because of the certs. Navigate to /etc/pve/local in your PVE installation and grab the .pem and .key files. A simple cat and copy/paste to your desktop is fine. Then navigate to SSL Certs in NPM and add custom certs using the files you just grabbed. Then navigate back to proxy hosts and assign the SSL cert you just created to your proxmox proxy host! Should work now!
Ryzen Pro CPUs work with ECC RAM
@redpillaussie9441
Great video Matthias - do you have any tips on the best methods and practices (and the most secure) to remotely connect to the Proxmox VE server web console UI without connecting through some sort of central pipe, like a VPN or Cloudflare or such? I want to connect to my Proxmox web UI remotely as I travel a lot, and I don't want messy subscriptions either.
Great video, I loved how you went step-by-step. However, I am having trouble accessing the Pi-hole as you explained at the timestamp 1:44:00; can you please advise how to access it? Again, thank you so much for this great video.
It depends on whether you're behind a proxy manager or not, and whether it's your DNS or not. Normally it's in the subdirectory /admin, so: /admin. If it goes to a blank website afterwards, just load the IP address.
@@matthiasbenaets I was finally able to access it using the same IP as Portainer with /admin on it. Thank you for your help. My next question: I have dedicated hardware running pfSense which hands out DHCP and DNS, so my SOHO looks like Internet > pfSense > switch > my devices and server (Proxmox). In this case, how would you set up Pi-hole? Again, thank you for all your help.
@@lakshya238 Personally I would set up Pi-hole on the pfSense box (if possible; not sure, I only know OPNsense, which has a plugin). If it's on another machine, give that machine a static IP in pfSense or work with DDNS, and point the DNS server address to the Pi-hole machine.
@@matthiasbenaets I see and agree; however, the problem I am facing is that Docker is assigning its own IPs (172.x.x.x) with its own DHCP, which is a pain because I really do not know how to assign IPs to these containers and get them working. Please advise if you know what I should do.
Hi Matthias, amazing tutorial to begin with; however, I have been losing my mind a bit trying to get IOMMU to work. I should have it on my R5 3600 and ASRock B550M Steel Legend, should I not? I have updated the BIOS, done all the file edits from this tutorial, rebooted, etc.
The B550 chipset isn't really ideal for this, but it should work (though the grouping might be a bit strange). I recommend that you go over the official wiki page for PCI passthrough; it might be insightful. If I had to guess, it's probably just a setting in the BIOS that is either not enabled or not specifically set to enabled (not auto). Just check that SVM is enabled. On some boards you might also need to enable IOMMU via the NBIO options under AMD CBS.
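To verify from the Proxmox shell whether it worked, a small script like this (a sketch; lspci comes from the pciutils package) lists every IOMMU group and its devices:

```shell
#!/bin/sh
# Lists IOMMU groups and the PCI devices inside each one.
# Printing the fallback message means IOMMU is not active yet.
groups_dir=/sys/kernel/iommu_groups
found=0
for g in "$groups_dir"/*/; do
  [ -d "$g" ] || continue
  found=1
  for d in "$g"devices/*; do
    addr=$(basename "$d")
    # lspci -nns resolves a PCI address like 0000:01:00.0 to a readable name
    printf 'group %s: %s\n' "$(basename "$g")" "$(lspci -nns "$addr" 2>/dev/null || echo "$addr")"
  done
done
[ "$found" -eq 1 ] || echo "no IOMMU groups found - check BIOS (SVM / IOMMU) and kernel cmdline"
```

A device you want to pass through should ideally sit in its own group; if a whole bunch of devices share one group, that's the B550 grouping quirk mentioned above.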
Thanks @@matthiasbenaets! It was hidden under the AMD CBS options. I was aware that B550 isn't ideal, but I had most components lying around from previous upgrades and needed a home server.
What are the specs of you machine? Awesome video btw.
Thanks! Nothing special due to budget constraints: Ryzen 9 5900X (12c/24t), 64GB ECC RAM, 250GB boot NVMe, 2x 1.6TB SATA SSD (mirrored, for VMs) and 3x 8TB HDD (RAIDZ, for Proxmox backups and general storage), plus an RX 580 passed through for Adobe CC. A few more cores and more RAM would be great. Also maybe a SAS HBA and a 10G NIC would be useful in the future. But you can already do a lot with way less than this.
@@matthiasbenaets Sweet, I have 32GB of RAM and a Ryzen 7, so I've been debating upgrading.
I was looking at this type of configuration for my setup; thank you for sharing this video, it helped a lot. One question about the container creation: why not create it using a disk inside the NAS instead of local storage?
That way the container and Docker would have some kind of redundancy in case of failure.
What do you think?
Hi Marco, I assume you mean docker containers (not LXC). My current setup looks like this: Proxmox installed on a 128GB NVMe with TrueNAS in local-lvm. All my LXC containers are stored on a second 128GB NVMe. This already separates them from the boot drive. The LXC containers (and thus the docker containers) have a frequent backup to my TrueNAS ZFS pool for added redundancy, as you mentioned. For docker containers that need to always have the latest data backed up, I just set up a cron job to rsync to a very small striped pool on a slow spinning disk. I guess to make it fully redundant I might also need to set something like this up for the cloud, but I haven't had any time to figure that out.
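For reference, a cron entry for that kind of rsync job could look something like this (the paths here are made up for illustration, not my actual setup):

```shell
# hypothetical paths: mirror the latest docker appdata to the small
# striped scratch pool every night at 03:00 (add via `crontab -e`)
0 3 * * * rsync -a --delete /opt/docker/appdata/ /mnt/scratch/appdata/
```

The -a flag preserves permissions and timestamps, and --delete keeps the target an exact mirror rather than accumulating stale files.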
Hi Matthias, thank you for the effort in making this video. It is most helpful for me. However, I'm struggling with one part: I'm building my cloud LXC container. First I created an unprivileged container, but then I couldn't mount my TrueNAS share. After reading your notes I saw I had to create a privileged container, and although the nesting option is checked but greyed out, I can't install docker containers; this gives me an error. Can you tell me what I'm doing wrong?
When creating a privileged container it indeed shows nesting as checked but greyed out, so it probably is no longer enabled. Open the container's options, and under Features enable it again. If you want to use shares as well, maybe also enable SMB/CIFS. If you still receive errors, it might be due to an LXC from a distro that needs extra steps. For me Debian pretty much always works.
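The same features can also be enabled from the Proxmox host shell with pct; the container ID 101 here is just an example:

```shell
# stop the container, enable nesting (for docker) and CIFS mounts
# (for SMB shares), then start it again
pct stop 101
pct set 101 --features nesting=1,mount=cifs
pct start 101
```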
@@matthiasbenaets thnx for the quick response and help that did it. Enjoyed following your video!
Thank you.
I have a workstation laptop with three disks: two NVMe and one HDD. Can anyone please suggest a storage setup for a home server? Should I install Proxmox on one disk and create a ZFS pool from the remaining two?
Why plex vs jellyfin?
Just in case… you show the password used for web access on your main instance of Pi-hole. In case that was a "real" password, it is now compromised. FYI.
hehe good catch, luckily I only used it for 2 or 3 locally hosted services that require only a password. For everything public or requiring a full login I use Vaultwarden. I do appreciate the heads up though!
Backing up a container/VM is easy. But how do you backup the proxmox host?
Running a Proxmox cluster with high availability is one option; this makes sure that nothing is lost when one node goes down, which also means you don't really need to back anything up. Another option is to set up a Proxmox Backup Server; this way you can back up a complete node all at once. I believe it also has a few more options than just backing up individual VMs to a separate storage location. I guess installing Proxmox on a mirrored ZFS pool is also an option.
How do you back up your Proxmox machine, to restore it as fast as possible if the Proxmox boot disk dies? And what do you say to Ceph instead of TrueNAS?
Well, having a cluster set up is indeed ideal. For home use with less infrastructure it depends; there are many options available. You can always just set up a Proxmox Backup Server, add it as a storage option after a reinstall, and import the complete node. You can use TrueNAS in this case, but if you are good with the command line, you can also just set up a ZFS pool within Proxmox with a couple of extra drives. After a reinstall you can simply search for the pool, import it, and restore all VMs.
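As a rough sketch of the pool-within-Proxmox approach (the pool name "tank" and the device names are assumptions, adjust for your disks):

```shell
# create a mirrored pool directly on the Proxmox host
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
# register it as a storage backend so VMs/backups can use it
pvesm add zfspool tank-storage --pool tank

# after a reinstall: scan attached disks for existing pools,
# then bring the old pool back in
zpool import          # lists importable pools
zpool import tank
```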
Which hypervisor did you use to install Proxmox on a single machine?
QEMU/KVM
Great video. I have one issue: when I try to install Pi-hole I get "port 53 in use". Any idea how to fix that?
Probably due to Pi-hole not being able to bind to port 53. Check on the VM if port 53 is already being used with $ sudo lsof -i :53 . Some Linux distros come with their own resolver, which is not ideal. You can disable the service, but depending on the distro, this can cause issues. Especially with Ubuntu, I believe you will have to add a couple more lines to the docker-compose file, but for that I'll refer you to the docker-pi-hole GitHub page.
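On Ubuntu specifically, the built-in resolver holding port 53 is usually systemd-resolved's stub listener. One common way to free the port looks roughly like this (a sketch, assuming a default resolved.conf; double-check against the docker-pi-hole docs for your version):

```shell
# see what currently owns port 53
sudo lsof -i :53

# turn off only the stub listener, keeping systemd-resolved running
sudo sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved

# point the host's resolv.conf at the full resolver configuration
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
```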
Friend, I can't install a VM on a Supermicro X9DBL-iF with 2x E5-2470 v2. Adding a 20-core/40-thread VM fails: "The product of vCPUs, cores and threads must not exceed 255 on this system." TrueNAS Scale. Do you have an idea? Thanks!
Try allocating fewer cores/threads to the VM: 1 vCPU × 20 cores × 40 threads = 800 vCPUs, which is well over the 255 limit.
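The limit applies to the product of the three values in the VM's CPU settings, so a quick sanity check with the numbers from the error:

```shell
# TrueNAS SCALE multiplies virtual CPUs × cores × threads per vCPU
vcpus=1; cores=20; threads=40
echo $((vcpus * cores * threads))   # 800, far above the 255 cap
```

Something like 2 vCPUs × 10 cores × 2 threads = 40 stays well under the limit while still roughly matching the physical topology (2 sockets, 10 cores each, SMT).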
Can it be installed via nix? Declaratively?
very useful for me
don't have any home lab though XD
I did the whole setup with Pi-hole. I added my Pi-hole's address as the local DNS server on my router (I didn't change it on my PC as you did in the video). I did the setup with NGINX (had to run it in a separate LXC container, added it to the Portainer agent by adding a new environment in Portainer), added "port.lan" exactly as you did in nginx, and Pi-hole has a DNS record pointing the domain "port.lan" at my nginx IP address. But when I try to access "port.lan" nothing happens (can't find the site). Any ideas?
This is impossible to say; I cannot help you with this, sorry. It mostly depends on your personal devices, setup and config. More likely than not, this is just DNS or IP leaking, either from your PC or from your router (and only if your setup is actually correct). To start debugging this, you should first disable IPv6 on your PC and directly use the DNS of Pi-hole. If you have a Pi-hole DNS record for port.lan pointing to NPM, and a proxy host to the correct ip:port on NPM, it should work. If it does, you should evaluate the network traffic using something like traceroute. If it does not work, check your record and check if the IPs are actually correct.
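To verify the record itself, you can query the Pi-hole directly and compare it to your default resolver (IPs here are placeholders for your own setup):

```shell
# ask the Pi-hole (192.168.1.2 as an example) what port.lan resolves to;
# it should answer with the NPM host's IP
nslookup port.lan 192.168.1.2

# compare against whatever resolver your machine actually uses
nslookup port.lan
```

If the first query answers correctly but the second fails, your PC is not actually using the Pi-hole as its DNS server.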
@@matthiasbenaets hey, I resolved the issue by installing Pi-hole in an LXC container (without docker) and it's now working super smoothly. The only thing I'm having a hard time with is getting the reverse proxy with nginx to point at Pi-hole's address with the /admin at the end. Thank you again for the help and Merry Christmas!
Hello, great video. Could you do a tutorial, setting up from scratch all the way to the end, on how to create an Nginx website and use Nginx Proxy Manager to get it hosted online?
I'm having the same issue your friend is having. I'm trying to figure out how to use Proxmox and TrueNAS to make an automated media server with Jellyfin; definitely a lot more complex than I thought. If anyone in the comment section is willing to help me, plzzz let me know, I have a discord. 😂
How do I get Proxmox dark mode? 😳
browser addon: Dark Reader
What is the point of presenting hardware at the beginning of the video, with details on the importance of ECC RAM, if you then install Proxmox in a virtual machine (vda) and not on bare metal, which would avoid a useless, resource-hungry layer of encapsulation? It's probably because I'm too old or too stupid.
The initial install in a VM is purely because I don't have a spare system available, nor am I able to capture the video output while doing it. For a video, this is more user friendly to follow. I never recommend virtualizing Proxmox unless it's inside Proxmox itself for testing purposes. Proxmox is a type 1 hypervisor, i.e. a bare-metal hypervisor.
Your video started great, but gradually you seemed to assume far too much. A more comprehensive tutorial with exact steps from Portainer onwards would be helpful. For example, when you showed us how to set up a TrueNAS share with Nextcloud, you failed to show us how to get the uid. You also assumed that the Portainer setup went without hiccups, but it didn't. It would help if you informed us that we needed an account for Portainer and could use only five nodes for free. As already stated, the amount of effort put into the first half of the video was superb. I do not usually comment, but after wasting time trying to work around what you started, it was only fair that I said something. I ended up building Nextcloud on an Ubuntu server on Proxmox by watching a learnlinux tv tutorial.
Hi, thanks for the feedback. I tried to fit as much info as possible in as little time as possible. My aim with these guides is not a hand-holding experience but rather a teaching one. uids and such are just one Google search away, and this is not a Linux tutorial. All the services presented here are ones you and others might find useful in a homelab; that does not mean all of them will get a full-blown tutorial, rather some tips and tricks, especially since not everyone will use them. Care to elaborate on the Portainer and Nextcloud issues? I can't recall that you need an extra account for Portainer, unless you want to use the EE version (which again, is not really relevant when starting out). If your only issue with Nextcloud was mounting the SMB share in the LXC, you could have also used the Dockerfile instead (from the GitHub repo); this installs Nextcloud with the needed samba packages. Alternatively, you can just install them manually in the docker container; the SMB option should then become available. My method maybe wasn't too clear, since the uid might not be the same depending on the VM/LXC used, but this way it does not require people to learn how to use custom Dockerfiles or run the same commands every time they pull the latest image. If you don't understand the usage, here's a quick explanation as to why it's uid 33: the persistent data generated by the nextcloud container is created by the user "www-data" (at least in my case). To prevent any future permission issues, I mount the SMB shares as this user. To find this uid and gid I can simply run $ id -u www-data, or with flag -g for the group.
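Putting the uid part together, a sketch might look like this (the server IP, share name, mount point and credentials are all placeholders, not from the video):

```shell
# find the uid/gid the nextcloud data is owned by, inside the container
id -u www-data   # typically 33 on Debian-based images
id -g www-data

# mount the TrueNAS share so files appear owned by that same user,
# avoiding permission clashes with the nextcloud container
mount -t cifs //192.168.1.10/nextcloud /mnt/nextcloud \
  -o username=ncuser,password=secret,uid=33,gid=33
```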
Great vid, thx - one question:
since you are using Nextcloud and OnlyOffice exposed to the internet, why don't you use a JWT token for OnlyOffice?
hi, thanks for the kind words. I did not go into this further because imo it's only relevant for people who want to access it over the internet. This is only 1 out of 3 possible scenarios, the others being using a local IP or setting up the container on the Pi-hole network and using a local DNS record. In most situations, neither of those needs the extra security. You did remind me to re-enable this for my personal setup, so thanks! It is of course highly recommended when making it available to the internet. For anyone else interested and reading this, I'll add a short explanation in the description under notes.