Well, in the 4th quarter of 2022 I went from a single spare PC (5600G, 32GB RAM) and 2 Pi 4s to a full 48U rack, 2 Dell R620s (dual E5-2650L v2 [10c/20t], 64GB each), and the existing hardware I mentioned from before the rack... Hard to say what my favorite thing is.
*if you don't want to pay for a bunch of streaming services, have spotty service due to under-powered ISP/cheap consumer equipment, and/or have your ISP, companies like Google and Meta, and the government harvesting all your data.
Having built my first rack-mounted homelab this past year, I really like what you've done with yours. I included cable-mgmt as a design req't and that helps me understand a lot of the decisions you made during your upgrade. I like what you're doing on your channel...keep up the great work in 2023!
I love it! I’m starting to build a home lab with old barracuda hardware I was given. Upgrading from just a raspberry pi 4. 2023 is gonna be a good year!
I went to a rack this year and started the UI journey... and actually currently building my first dedicated homelab server... storage was mainly what I cared about before... but starting to enjoy thoughts of running some more services locally. Thanks for the great videos this year.
My favorite thing? The Supermicro servers with IPMI and lots of RAM! This is a very nice and very capable HomeLab... and even more so because of the way you're handling it with Kubernetes, Ansible and so on. It's nicely laid out, don't worry about what others think of it!

I'm an old-timer Sysadmin / Network Engineer who did many things in my life, including using mainframes like VAX and big old IBMs. I used to have a Sun workstation on my desk and rolled out the first BIND DNS for the utility I worked for! I installed the first version of Slackware using diskettes on a dual Pentium Pro system! I installed the first Cisco routers, which looked like they came out of a garage with cards hooked up with flat cables (the AGS / MGS etc.), routing for a big network of SynOptics hubs (we had 18,000 PCs, 450 routers, etc.)! A bit after, we started to use Kalpana switches (later bought by Cisco).

I'm now converting to containers and orchestration, state management and CI/CD! For the last 20 years I did virtualization, first with VMware; after a while I converted to Xen (on CentOS) and just a bit after, to KVM. I did a lot of scripting for automation, now replaced with Ansible. I found your very good videos wandering around on YouTube: I'm learning a lot from you and I like youngsters sharing their great passion for tech! I've learned all my life and I still do every day; that's the essence of what we do. Kudos for the very good information you're sharing!
My main project for the past year was to make custom length IEC power cables. I wanted to get rid of all extra slack in my racks. Almost all of my racks are mobile racks for live music and a few hifi vintage racks. I shortened all my power cables to the perfect length in my racks, and replaced the normal IEC plug with a locking plug. These locking IEC plugs are a godsend.
I'm so excited to be starting my proper HomeLab this year, too! I have a server lying around that a friend gifted me, and a couple of Raspberry Pis. I'm planning on more security for my flat as well as more NAS storage, and better/faster storage too (mentioned in another comment). I'm also thinking about the enclosure, as my HomeLab will also be the heating element for my terrarium. But 27°C seems like a dream (and the perfect heat) :D I was always fascinated by racks and rackmounted gear. I'm also thinking about rackmounting my PC, even though I do love it standing on my desk, too!
TechnoTim, I really enjoyed your video! You have a lot to be proud of and share. By way of background, I have built many large data centers, even receiving a national ASHRAE award. I've been running my home lab since 1993 and am going to be making a few changes to my 42U home lab cabinet as a result of your video. Thank you. The white rack and cabinet lighting is something I remember seeing at eBay's data centers. Visibility in the rack makes work more efficient with fewer mistakes.

I do have a couple of suggestions for you. #1 I would get a 4" ladder or, more likely, basket tray to manage your umbilical cord line from the wall to the rack. (Cabling does best in established pathways.) You could convert the resulting excess cable length into a service loop on top of the rack to allow the cabinet to still move around. Maybe secure the basket tray to the wall and ceiling, suspended just above the cabinet? I installed a 4" wide x 2" tall white basket tray a while back that worked out really well. #2 It looked like your Philips Hue lines may be breaking one of my personal DC rules: never "bundle" 110/220 and UTP together. I have many sad networking stories and A/B test results that inform this opinion. You have done an excellent job of separating out the two cable types otherwise!
Thank you so much! I had no idea what to do with the umbilical cord! This is a great idea! I am just trying to figure this all out with little to no experience besides the internet 😀
@@TechnoTim Your server room is amazing. I've never seen a server room or data center that was perfect (especially the ones I've done). ha ha. You have zero to be apologetic about. I kinda envy you brave content creators in that you benefit from a crowd sourced advisory committee. Big fan of your channel.
Very nice setup 👍

I picked up an Intel NUC 9 Extreme this year to use as my virtual server host. I moved over 64GB of RAM and a 1TB Samsung 970 EVO Plus. It is pretty quiet and does not take up as much space as the pair of smaller NUCs that I was using. The Dell PowerEdge R210 II that I have was upgraded to 32GB of RAM and has a 1TB Samsung SSD boot drive and a 4TB WD Green drive to back up my VMs from the NUC 9. I sold my old 4th gen NUC and moved the 7th gen NUC downstairs to be my Plex server. I am happy with this; the Plex server is now running Windows Server 2019 Datacenter with a pair of Samsung SSDs and 32GB of RAM. The NUC 9 was a steal at 50% off list price.

The NUC 8 that was my day-to-day Hyper-V server has been rebuilt with 48GB of RAM and a spare 1TB Samsung 970 EVO Plus. It is running Windows 11 Pro and is my go-to machine for browsing the web and listening to podcasts. I can also spin up the odd VM on it for testing before moving them to the NUC 9.

I do not have anything fancy as far as switches. I invested in a Cisco CBS250 16-port fanless switch and got a TP-Link Deco mesh Wi-Fi kit with 3 units in the box. This sorted out my Wi-Fi issues and also sped up data transfer between my machines upstairs and my Plex server and Dell backup server. I like that the NUC 9 has a pair of 1Gb Intel NICs on board. My next project is to get pfSense virtualized under Hyper-V and see how that performs. I would love to get a second Cisco switch to replace the cheap switch that I have connected to the Dell PowerEdge. The NUC 9 with its 45W 9750H i7, with 6 cores and 12 threads, is a great small system for a Hyper-V server. I would like to run some Cat6 between my router and my office upstairs and my Dell server under the stairs, but for the moment the mesh Wi-Fi will have to do. I was using homeplugs before, and I was getting transfers of 10MB per second between my systems upstairs and my Dell server under the stairs. Now I am getting up to 52MB per second.
When you could be moving 1.5TB of files, a 5x improvement in data transfer speed is a great improvement.
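The back-of-the-envelope math for that upgrade works out like this (a quick sketch using the 1.5TB and 10MB/s vs. 52MB/s figures from the comment above):

```python
# Rough transfer-time comparison for moving 1.5TB of files,
# using the before/after speeds mentioned above (10MB/s vs. 52MB/s).
TOTAL_MB = 1.5 * 1_000_000  # 1.5TB expressed in MB

def hours_at(mb_per_s: float) -> float:
    """Return transfer time in hours at a given MB/s rate."""
    return TOTAL_MB / mb_per_s / 3600

before = hours_at(10)  # homeplug adapters: roughly a day and three quarters
after = hours_at(52)   # mesh Wi-Fi: roughly a working day

print(f"before: {before:.1f} h, after: {after:.1f} h, speedup: {before / after:.1f}x")
```

That's about 42 hours shrinking to about 8, which is why the jump feels so dramatic for bulk copies.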
'especially PBS' is about the most Minnesotan thing anyone from there can say. Being from Lake Wobegon, naturally it's MPR for me. ;) Happy Holidays Tim!
First a Jeff Geerling upload, then a TechnoTim upload. Definitely a good Christmas eve right here. Now to get another Pi-Hole up and running as failover.
13:25 Guys, I am just a random who clicked on this video, but could someone help me understand what one would use this much storage for? Like, even if you record and store a bunch of videos in 4K, how is this justified? Is there something I am missing?
Tim, you have been a great inspiration to me and my home lab journey over the last few years. Happy Holidays and Happy New Year! Can’t wait to see what 2023 brings for you. Cheers!
I just ordered a UDM SE. I'm slowly building a few new systems to replace what my OPNsense-powered router/firewall can do (like local DNS override -- hello PiHole) before the UDM arrives, at which point my entire infrastructure is getting nuked (from space) and I am starting over. I would love a walkthrough of your current config and setup, firewall rules, etc. if you're looking for a video idea before sometime next week when my UDM arrives. Thanks for what you do; I've implemented a lot of your ideas. Now if only I could get my wife to get an RGB server rack LOL. Merry Christmas and Happy 2023!
I like tech a lot and have been looking into the whole home server rack with patch panels, switches, etc., because my wife and I are purchasing a house and for the 1st time I'll be able to run my own home networking. I'm a low-voltage tech by trade, so it's nice that I can run my own stuff and not hire someone to run my cables. I look forward to seeing other things I can add to my rack when I get one. I'd love to have a setup inspired by yours! Love the video! Anything I should keep in mind while installing my stuff? Tips, tricks, ideas, etc.
I loved your tour; you inspire homelabbers to improve or iterate on ideas and try new things. What I would change in your setup as a next upgrade is to have a "Core-Distribution-Access" layered design, without the complexity of a data center.
*Slaps server rack.* "You can fit soooo much networking in the front of this baby." Lol, I love your setup. I never understood the power and networking in the back. It all seems much more workable when it's all accessible. Also, Draco and Hydra may be the coolest server names I have heard! I have my PC (Caspar), my daughter's PC (Melchior) and my mini home lab/NAS (Balthazar) on my home network. Named after the three wise men. Inspired by Evangelion.
I've been waiting for this year's tour! It's amazing to see how much your setup changed in a year. The editing also improved a lot since last year. Proud of you boi.
Looks great and is always a work in progress. If you ever touch the cables running to the rack, consider 'mounting' them to the ceiling. That would give you clearance to walk around the rack.
Tim, I highly recommend using a 'cable comb' it will allow you to have perfectly aligned cables and will make your home lab look that much better. Happy Christmas!
Tim, very nice setup for a home network rack. Mine's not much different, but definitely less compute and storage. I am somewhat lucky to be able to do my home lab in an area where the cost of power is the second cheapest in the nation at 2.6 cents per kilowatt-hour. Where and how are you forcing the hot air out, though? Are you having to install any active cooling?
Nice. I like that Sysrack. I like that you have a fan control on the rack. I have a NavePoint rack right now and my fans are either on or off. They are so loud I do not run them. I just ordered a UPS-PDU Pro; looking forward to getting it.
I did not check if you use Home Assistant yet, but wouldn't it be better to have your own Zigbee antenna and connect your Hue/Zigbee lights directly to your server (through zigbee2mqtt) instead of using the Philips Hue hub? That way you don't rely on Philips' network, don't need 2 hubs, your lights work without internet, and turning them on doesn't depend on the speed of your internet connection.
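For anyone curious, the zigbee2mqtt route suggested above needs only a small configuration file. This is a minimal sketch; the serial port and broker address are assumptions for a typical setup with a USB Zigbee coordinator and a local MQTT broker (e.g. Mosquitto):

```yaml
# Minimal zigbee2mqtt configuration.yaml sketch (port/address are examples)
homeassistant: true        # publish MQTT discovery info for Home Assistant
permit_join: false         # enable temporarily only while pairing new bulbs
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://localhost:1883   # assumes a local Mosquitto broker
serial:
  port: /dev/ttyUSB0       # path to your Zigbee coordinator stick
```

Once the bulbs are paired to the coordinator, they show up in Home Assistant via MQTT discovery with no Hue hub or internet dependency.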
Very nice. Yeah enclosed rack does trap that heat - one way to turn it into a positive is to duct heat out of room. My summer temps are high. On my 24u open frame I got thick grow plastic + magnetic tape to create rear side "panels" to help contain heat right at servers' exhaust. Then picked up inexpensive infinity booster fan + duct to exhaust heat outside with AC window kit. Sits right behind 3x 2u servers. Works well to not keep recirculating heat. That and a 20' king air house exhaust fan sitting in window 5ft away from rack to cool at night. :) Not sure your heat situation in summer but perhaps outside exhaust solution could help with enclosed rack. AC works too. On another note - slim CAT6 cables patch work making me want to spend $.
Love your setup. I have just one question: on average, how much power does this setup pull? I did have two Dell servers running Proxmox but had to retire one due to rising electricity prices. Have a lovely Christmas.
I have been racking gear for years through my IT travels including in corporate data centers. The networking and cabling is usually done in the back as all the connections for the servers are in the back. Most people need all the U's in their rack and cannot justify the cost or the loss of the U's to do everything in the front. You lost two U's for the keystone pass through and then the cost of that setup. Some gear you can share U's by putting one in the front and one in the back. Assuming that is possible here you also lose U's for that. Doing this in the home is fine but you will not see this in an actual data center or corporate setup.
Beautiful job, nice and clean. It would be nice to see a video about power usage in the future. Where I'm at, power is very expensive, so I use Raspberry Pis, NUCs and soon a ZimaBoard for my home lab hardware. Keep up the good work.
Great update and upgrades. I'm slowly moving away from Ubiquiti as an edge device option and back to pfSense/OPNsense. Ubiquiti just lacks creature comforts like HAProxy and other simple services. I will miss the pretty UI graphs, but I just dump everything into Grafana. I currently have a UDM-Pro that I'll be selling to cover the UNVR to run my cameras and Protect. I'll virtualize the controller to manage my APs and SSIDs. Got a killer deal on a Netgate XG-7100 that will be running pfSense/OPNsense; it'll handle my VLANs and DHCP duties as well as the previously mentioned HAProxy.
Great video. I like the organization/labeling. Something I'd recommend is a label maker. I really like the Brother P-touch; it can print wire labels as well as larger labels.
Hey, are you running Home Assistant? If so, does the Unifi power distribution integrate into it? I would like to use Home Assistant to monitor its power usage, which I am currently doing with a Kasa power bar, but that's obviously not rack-mount.
With the release of K8s 1.26 it is now possible to have a Service of type LoadBalancer with mixed TCP and UDP protocols. That allows me to have a 3-node Pi 4 K3s HA cluster (thanks to your Ansible playbook) and finally run PiHole inside K3s. Longhorn currently (version 1.3.2) has some breaking changes with K8s 1.25 and up; I had to use the master branch of Longhorn to get it to work on K8s 1.26. They plan to have it fixed in release 1.4.0.
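As a sketch of what that 1.26 change enables (the MixedProtocolLBService feature going GA), a single LoadBalancer Service can now carry Pi-hole's TCP and UDP DNS ports together; the name and selector here are hypothetical:

```yaml
# Hypothetical Pi-hole Service mixing TCP and UDP on one LoadBalancer,
# possible since MixedProtocolLBService went GA in Kubernetes 1.26.
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns
spec:
  type: LoadBalancer
  selector:
    app: pihole        # assumed pod label
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
```

Before 1.26 this typically meant two Services (one per protocol) sharing an IP via load-balancer-specific annotations.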
I could be wrong, but wouldn't hooking up the other redundant supply on the storage shelf share the load between both power supplies, therefore cutting the use of each in half and basically using the same amount of power? Other than that, it looks good. Overcomplicating cable management IMHO just hinders the time spent making repairs or replacing cables.
This is an excellent video. I'm looking at moving my home equipment from a bookshelf into a rack soon. I'd be very interested in a video on the PDU Pro, particularly with failover with multiple UDM Pros. Thanks for addressing the RGB lighting as well. I've definitely fumbled around in a few racks that were a bit too dim or had areas that fell into shadow. Lighting strips seem like a great way to solve that problem. Perhaps the RGB color could be tied to rack temp for added utility???
LOL - my homelab is a single 14c PC running Proxmox with 2.5g ethernet connected. I am upgrading my network to 10g and I have a Synology NAS (8 drives) - but I don't consider that part of my homelab. I have an office PC, a gaming PC and a handful of project systems like mini PCs and PIs for playing around. My wife thinks all this is overkill. I have to admit, I've never seen RGB in a rack before - nice touch.
The Amazon listing for your rack says it is 42U - but unless you are a giant, there's something fishy going on. My 42U rack here is significantly taller than I am, and I'm not short. I'm not sure how tall you are - but I would guess your rack is only about 36U. The rest looks really good - better than most of the racks at work. The only thing that would get you in trouble is your "umbilical" - that needs to be suspended up high enough to walk under. Best would be to have a little piece of cable tray stick out from the wall, and just have a round-over on the end to control bend radius and let it all drop to the rack. Easy to just drop new cables into the tray as needed in the future then.
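The height reasoning above is easy to check, since a rack unit is a standardized 1.75 inches of rail space (frame, casters and top panel add a few inches beyond this):

```python
# Convert rack units to approximate rail-space height (1U = 1.75 inches).
# The cabinet frame adds a few inches on top of this figure.
def rack_inches(u: int) -> float:
    return u * 1.75

for u in (36, 42):
    inches = rack_inches(u)
    print(f"{u}U ≈ {inches:.1f} in ≈ {inches / 12:.2f} ft of rail space")
```

So a 42U rack has over six feet of rail space alone, while 36U is around five and a quarter feet, which is why eyeballing it against a person standing next to it is a reasonable sanity check.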
Love your setup! I was just wondering if you could share what your monthly electricity bill looks like? I have very similar equipment that I'm afraid to turn on 😂 because of the electricity bill.
Tim, you should consider solar. Yes, I know, Minnesota. But you'd be surprised. If you have the ability to clear the snow off your panels in the winter, you could easily offset your server room with a 5-6kW system. DM me if you want to see mine. Matt Ferrell of Undecided has a large YouTube channel; he's in Massachusetts and is able to offset his Tesla Model 3 with his system. Coincidentally, he's also a software developer!
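A rough sanity check of the offset claim; the array size, average sun-hours, and server-room draw below are illustrative assumptions, not measurements from the video:

```python
# Back-of-the-envelope solar offset estimate.
# Assumptions (not from the video): a 5kW array, ~4 average peak-sun-hours
# per day year-round, and a server room drawing a constant 300W.
ARRAY_KW = 5.0
PEAK_SUN_HOURS = 4.0      # rough annual average; Minnesota winters are worse
SERVER_WATTS = 300.0

annual_production_kwh = ARRAY_KW * PEAK_SUN_HOURS * 365
annual_server_kwh = SERVER_WATTS / 1000 * 24 * 365

print(f"production:  ~{annual_production_kwh:.0f} kWh/yr")
print(f"server room: ~{annual_server_kwh:.0f} kWh/yr")
print(f"offset covered: {annual_production_kwh >= annual_server_kwh}")
```

Under these assumptions the array produces roughly 7,300 kWh a year against about 2,600 kWh of server load, so even with pessimistic winter production there is plenty of headroom.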
Excellent video - thanks for sharing! However, I haven't seen any earthing cables within your server room (rack, walls, etc.), which in your case would improve your protection against surges, eddy currents, static electricity, etc. Maybe you covered them so nicely that I haven't noticed them 😎
@Techno_Tim1 Glad to watch, but considering you lost your account verification, I'll pass. An hour-old account too. Hope you have all your YouTube footage backed up if you lost your account. /s Backup is key!!
Love what you do in your homelab and wish I had the time to do what you do. I live in Australia and I'm finding that my under-the-stairs rack is getting very warm with the doors closed and 4 rack fans running - once I open the doors under the stairs it cools down dramatically, both in winter and summer - though our winter is not like the US 🙂 Do you have any air conditioning for the amount of equipment you have, and what would you recommend? It seems to be very hard to get refrigerated air conditioning to cool a small space (approx 4-5 cubic metres). Any suggestions for something inexpensive that can be picked up in Australia?
Tim, you the man. This is all very nice. I think the cabling looks great. If Tom Lawrence or Mac Telecom redid the wires, I would wonder what could be done better! I hate to be "that guy", but my only concern - and I couldn't quite see - is the umbilical cord coming in on the top: is that going to rub on that metal edge? Won't happen overnight, but that is my only thought. And part of the reason it's my only thought is I am learning from you! Lol, I have no idea what I'm doing. Lol. Thanks for everything.
I'm not a fan of "power up front" because I feel like it adds complexity to maintain the rack. If all of your power cables are strictly in the back, including PDUs, then you don't have to open the front at all to deal with power issues. I'm not sure I'll get a Unifi PDU, but if I do, it will go in the back of the rack. That said, I like what you've done with your rack. It does look very nice, and there's a lot of things you can do just from the front.
Hi, how did you ground your patch panels? They seem to have shielded keystones and you are running PoE through them; this sometimes causes leaks into the rack. Have you dealt with that? Also, why not a single dual-socket node? Doing separate 1U servers is going to draw more power, use more components, etc. Lastly, in your previous video talking about UPSes, you showed they have numerous sockets on them, producing simulated or true sine waves, with grouped outlets as well. So, by using a separate PDU, does that take away that advantage? I thought there were PDUs where you can get a dedicated inlet for each outlet, so as to make full use of the UPS. Last question: as a software engineer, do you feel like the homelab takes away too much time from your actual career, or does it feel like an addition? Thank you. Just asking, nice video man.
The best setup for a homelab is 2x ITX X570 boards with IPMI/iBMC that support ECC RAM. 2x because of failover/HA/updating/rebooting, and you need 2 anyway for the storage. Consumer boards are usually limited to 8x SATA + 1x NVMe. In short, that means an ASRock Rack X570 with an AMD 5600/5800, because it supports ECC RAM, has 2x 10G ports and IPMI. No graphics card is needed nowadays, because all the boxes (Shield Pro/Apple TV/etc.) decode themselves. So use that graphics card slot for an x16-to-4x-NVMe bifurcation card for ZFS cache/metadata vdev/log devices! Simply put some cheap Optane modules there.

Get a lot of RAM; 128GB is best, because you need a lot of RAM for virtualization and ZFS. Put a lot of 20TB HDDs in it, in at least ZFS RAIDZ2 or RAID 10 (RAIDZ1 is slow and will probably get abandoned at some point; RAID 5 is okay, but RAID 10 is actually the fastest - you just lose a lot of space). Connect both servers together over one 10GbE port and use the second port for the switch. Best of all, if you have the ability to split up your 10GbE port for virtualization (SR-IOV on Intel NICs, for example), use that instead of virtual adapters, because it offers hardware acceleration.

You can spec the second server lower and simply use it as a backup server. That means less memory, like 32GB (you can do less, but you still need a bit, because you'll move the VMs from the other server to it when you reboot/upgrade the other server). Just keep the CPU the same generation in both servers: in the main you can use a 5800, in the second a 5600... the same generation, because you want to pass the CPU flags through to the VMs for sometimes better efficiency, meaning setting the CPU type to host. So in the end, you will have a relatively cheap main server and an even cheaper backup server, both running as a Proxmox cluster. Power-wise you can even do some scripts to turn the second server on/off just for backups.
Otherwise you actually don't need that much space for the backup server (but keep in mind, don't use Proxmox Backup Server; use normal Proxmox on it - I just call it a backup server). So not much storage, and the storage doesn't need to be fast on the secondary/backup server either: a RAID 5 or RAIDZ2 will do, or even - though I don't generally recommend it - RAIDZ1 to save money. The whole task of the secondary server is to run some important VMs from the main server when you work on the main server, and to keep long-term backups. That's it. The rack thing is something for companies, not for a homelab 😂
Btw, as an example, I do run my homelab that way.

Main server: ASRock Rack X570D4I, AMD 5800X, 128GB ECC memory, 8x 20TB as RAID 10, plus a bifurcation card with 2x RAID 1 (one for the metadata vdev on Optane, the other for the log on Optane). A 2TB NVMe 980 Pro in the NVMe slot, split up with LVM (a 600GB ext4 partition for LVM containers, Docker containers, etc., and 1.4TB as a ZFS cache device). One NIC in SR-IOV mode for VMs (OPNsense and so on), and the other NIC back-to-back to the other server. 1x 250GB NVMe SSD in a USB 3 enclosure for the Proxmox OS.

The second (backup Proxmox node): ASRock Rack X570D4I, AMD 5600X, 3x 20TB HDDs in RAIDZ1 (max L2 cache limited to 8GB) for long-term backup storage, a 250GB NVMe for Proxmox, and 2x 2TB cheap SATA SSDs (Kingston A500 or something like that) in a simple mdadm RAID 1 with ext4, as VM storage for when I need to work on the main server.

Why do I use RAIDZ1 for backup storage instead of RAID 5 ext4 storage? Well, ext4 has no safety features against long-term bit rot etc., meaning your data can get corrupted in very old backups. I don't think anyone needs more; that's already a high-end setup for a homelab. The only thing that was really freaking expensive was the 20TB HDDs, but you can get one nowadays for like 300-350 USD. And most people probably don't even need that much storage; I just have a ton of data. Additionally, you don't really need to back up everything: as my main server has a ton of data and a ton of storage, I use the second one only for VM backups and so on. Mainly the second server is just there to bridge the downtime of the first server. As I have OPNsense running, if I had only one server I would have no internet when the server goes down 😂 The backup job is just an additional task. Absolutely everything is running on that server (Zigbee, OPNsense, etc.), so you basically run everything on it and you don't need any other dedicated hardware in your house.
ZFS on the main server is blazing fast thanks to the special vdev/log/cache devices. Additionally, I have datasets for Samba with shadow copies and ZFS snapshots, so you have backups that way as well. Cheers
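To make the space tradeoff in the comments above concrete, here is a rough usable-capacity comparison for an 8x 20TB pool under the layouts discussed (raw parity math only; ZFS metadata and slop-space overhead are ignored):

```python
# Rough usable-capacity comparison for an 8x 20TB pool.
# Ignores ZFS metadata/slop overhead; raw parity math only.
DRIVES = 8
SIZE_TB = 20

def usable(layout: str) -> int:
    if layout == "mirror":   # striped mirrors ("RAID 10"): half the drives
        return DRIVES // 2 * SIZE_TB
    if layout == "raidz1":   # one drive's worth of parity
        return (DRIVES - 1) * SIZE_TB
    if layout == "raidz2":   # two drives' worth of parity
        return (DRIVES - 2) * SIZE_TB
    raise ValueError(layout)

for layout in ("mirror", "raidz1", "raidz2"):
    print(f"{layout}: ~{usable(layout)}TB usable of {DRIVES * SIZE_TB}TB raw")
```

So mirrors give up roughly 80TB of a 160TB raw pool in exchange for the fastest rebuilds and random I/O, while RAIDZ2 lands in between on capacity with double-parity safety.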
Thanks for sharing. At 2:38, what's the name of that whole silver panel? It looks awesome for cable management. And how did you hang the Raspberry Pi and Philips Hue hubs on it?
@TechnoTim thanks for all your videos and the support you provide! Your homelab looks wonderful in any color scheme you activate :) Following your guides I'm running Proxmox, WireGuard, Nextcloud, Uptime Kuma and some more of your homelab recommendations. Maybe you could point me (and also others) in the right direction - I'd like to set up Bitwarden as a VM on my Proxmox server. However, I'd love not to have to use Docker... Since there's no how-to anywhere, do you have an idea how to find out what is needed to set up Bitwarden manually? Kind regards from Berlin, Tim
What's your favorite thing in your HomeLab?
The number of *CORES* 😎
And most importantly a silent setup 🔕
My Jellyfin server. It's open source, doesn't require an internet connection, and handles all my media without issue.
I love my Unifi USW-24 POE
The 3 silent mini-pcs I got from school for free
"honestly honey, this is what you need to get internet on your phone nowadays"
Honey, we need to take up some space for the internet
It’s all fun and games until Plex/Jellyfin goes down while watching a show.
I've found a different way to accomplish this. It involves divorce, but I'm finding it effective.
@@NotASingleGoodNameLeft Yeap, been there.
We are in AWE of your home lab! You make homelabbers proud! Maintain the high calibre of work that you've been doing. Have a wonderful Christmas!
Thank you! You too!
Dude. It looks so good!! Well done!
Thanks David!
Favorite thing in my homelab - my cluster of mini PCs (which is most of my lab currently). Thanks for being one of the best parts of my 2022!
The level of how nice and clean it looks is outrageous. I hate it, it's just amazing and I want it XD
From France with Love. Absolute art you've done there.
Great video Tim, and love your HW! Looks awesome!
I wish you a merry Christmas and all the best in 2023 to you and your family!
Thank you so much!
Great network tour! Power in the front is very practical. Great work! Merry Christmas!
Your videos always get me hyped to do upgrades on my own rack. Thanks for making such great content.
Glad you like them!
Very nice setup 👍
I picked up an Intel NUC 9 Extreme this year to use as my Virtual server host. I moved over 64GB of RAM and a 1TB Samsung 970 EVO Plus.
It is pretty quiet and does not take up as much space as the pair of smaller NUC's that I was using.
The Dell PowerEdge R210 II that I have was upgraded to 32GB of RAM and has a 1TB Samsung SSD boot drive and a 4TB WD Green drive to back up my VMs from the NUC 9.
I sold my old 4th gen NUC & moved the 7th gen NUC downstairs to be my Plex server.
I am happy with this now the Plex server is running Windows Server 2019 Datacenter with a pair of Samsung SSD's and 32GB of RAM.
The NUC 9 was a steal at 50% off list price.
The NUC 8 that was my day to day Hyper-V server has been rebuilt with 48GB of RAM and a spare 1TB Samsung 970 EVO Plus.
It is running Windows 11 Pro and is my go to machine for browsing the Web and listening to podcasts.
I can also spin up the odd VM on it for testing before moving them to the NUC 9.
I do not have anything fancy as far as switches go. I invested in a Cisco CBS250 16-port fanless switch and got a TP-Link Deco mesh Wi-Fi kit with 3 units in the box.
This sorted out my Wi-Fi issues and also sped up data transfers between my machines upstairs, my Plex server, and my Dell backup server.
I like that the NUC 9 has a pair of 1Gb Intel NICs on board.
My next project is to get Pfsense virtualized under Hyper-V and see how that performs.
I would love to get a second Cisco switch to replace the cheap switch that I have connected to the Dell PowerEdge.
The NUC 9 with its 45W 9750H i7 (6 cores and 12 threads) is a great small system for a Hyper-V server.
I would like to run some CAT6 between my router and my office upstairs and my Dell server under the stairs but for the moment the mesh Wi-Fi will have to do.
I was using HomePlug adapters before, but I was getting transfers of 10MB per second between my systems upstairs and my Dell server under the stairs.
Now I am getting up to 52MB per second.
When you could be moving 1.5TB of files, a 5x improvement in data transfer speed is significant.
Bro paid 1000 dollars for windows server 💀💀💀
@@Logilype I paid for the hardware. I already had the software.
@@Logilype the normal barebone price for the Intel NUC 9 Extreme I7QNX is on Amazon for £950. You have to add RAM & SSD plus OS to that.
'especially PBS' is about the most Minnesotan thing anyone from there can say. Being from Lake Wobegon, naturally it's MPR for me. ;) Happy Holidays Tim!
Great homelab! You should add 10Gb/s Ethernet between all servers so you can experiment with clustering and other cool features.
First a Jeff Geerling upload, then a TechnoTim upload. Definitely a good Christmas eve right here.
Now to get another Pi-Hole up and running as failover.
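For anyone wondering how a failover Pi-hole is commonly wired up: one approach (an assumption on my part, not necessarily what this commenter uses) is a floating virtual IP with keepalived, so clients always query a single DNS address. A minimal sketch, with placeholder interface and addresses:

```conf
# /etc/keepalived/keepalived.conf on the primary Pi-hole
# (eth0 and the 192.168.1.x addresses are placeholder assumptions)
vrrp_instance PIHOLE_DNS {
    state MASTER          # on the secondary Pi: state BACKUP
    interface eth0
    virtual_router_id 53
    priority 150          # on the secondary Pi: a lower value, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.2/24    # point your DHCP server's DNS option at this IP
    }
}
```

If the primary Pi goes down, the backup claims 192.168.1.2 within a second or two and clients never notice. Something like gravity-sync can keep the blocklists aligned between the two (also an assumption, not from the comment).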
Yes! Check out Geerling's video too!
13:25 Guys, I am just a random viewer who clicked on this video, but could someone help me understand what one would use this much storage for? Even if you record and store a bunch of videos in 4K, how is this justified? Is there something I am missing?
Tim, you have been a great inspiration to me and my home lab journey over the last few years. Happy Holidays and Happy New Year! Can’t wait to see what 2023 brings for you. Cheers!
Thank you!
I just ordered a UDM SE. I'm slowly building a few new systems to replace what my OPNsense-powered router/firewall can do (like local DNS overrides; hello, PiHole) before the UDM arrives, at which point my entire infrastructure is getting nuked (from space) and I am starting over. I would love a walkthrough of your current config and setup, firewall rules, etc., if you're looking for a video idea before sometime next week when my UDM arrives. Thanks for what you do; I've implemented a lot of your ideas. Now if only I could get my wife to get an RGB server rack LOL. Merry Christmas and Happy 2023!
I like tech a lot and have been looking into a whole home server rack with patch panels, switches, etc., because my wife and I are purchasing a house and for the first time I'll be able to run my own home networking. I'm a low-voltage tech by trade, so it's nice that I can run my own cables and not hire someone to do it. I look forward to seeing other things I can add to my rack when I get one. I'd love to have a setup inspired by yours! Love the video! Anything I should keep in mind while installing my stuff? Tips, tricks, ideas, etc.
I loved your tour, and you inspire homelabbers to improve, iterate on ideas, and try new things. What I would change in your setup as a next upgrade is a Core-Distribution-Access layer design, without the complexity of a data center.
Nice rack! So glad I found you in 2022 and looking forward to the new year.
*Slaps server rack. "You can fit soooo much networking in the front of this baby." Lol I love your setup. I never understood the power and networking in the back. It all seems much more workable when it's all accessible. Also, Draco and Hydra may be the coolest server names I have heard! I have my PC (Caspar), my daughter's PC (Melchior), and my mini home lab/NAS (Balthazar) on my home network. Named after the three wise men, inspired by Evangelion.
Your Christmas tree looks wicked with all those switches and servers hanging. Merry Christmas!
I've been waiting for this year's tour! It's amazing to see how much your setup changed in a year. The editing also improved a lot since last year. Proud of you boi.
Thank you so much, on all fronts!
Impressive setup! Also, who cares what people think as far as whether or not they like the choices you made. All that matters is what you enjoy.
Thank you! Sometimes I feel like I have to defend my decisions!
Looks great, and it's always a work in progress. If you ever touch the cables running to the rack, consider mounting them to the ceiling. That would give you clearance to walk around the rack.
This was so amazing, so glad to see the updated video
Tim, I highly recommend using a 'cable comb' it will allow you to have perfectly aligned cables and will make your home lab look that much better. Happy Christmas!
Thank you! I had no idea how to google this and I had no idea what they were called!
You're very welcome, keep rockin' :)
As soon as I can get a dedicated space for my study room, I will start referencing your setup to make my server room like this!
Thanks for the demo and info. Nice Rack! Have a great day
Thanks Tim, great inspiration for a smaller scale project. Btw, loving power in front.
I love the all white led use. So bright
Tim, very nice setup for a home network rack. Mine is not much different but definitely has less compute and storage. I am somewhat lucky to be able to do my home lab in an area where the cost of power is the second cheapest in the nation at 2.6 cents per kilowatt-hour. Where and how are you forcing the hot air out, though? Are you having to install any active cooling?
This is insane. If I saw this in someone's house, I'd definitely think they were some sort of alien informant. Nuts.
Nice. I like that Sysrack, and I like that you have fan control on the rack. I have a NavePoint rack right now and my fans are either on or off; they are so loud I do not run them. I just ordered a UPS-PDU Pro and am looking forward to getting it.
For me, network in the front, power in the back, other than that awesome rack! Subscribed :)
I wish I had room and budget to do the same in my own business ;-) Happy new year and thank's for all the great content you created in 2022.
I didn't check whether you use Home Assistant yet, but wouldn't it be better to connect your Hue/Zigbee lights directly to your server through a Zigbee antenna and zigbee2mqtt, instead of using the Philips Hue hub? That way you don't rely on Philips' network, don't need two hubs, your lights work without internet, and turning them on doesn't depend on your internet connection speed.
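For reference, cutting out the Hue hub with zigbee2mqtt roughly comes down to a config like this (a sketch; the adapter path and broker address are assumptions that depend on your coordinator stick and MQTT setup):

```yaml
# zigbee2mqtt configuration.yaml (values here are assumptions)
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://localhost:1883   # your MQTT broker
serial:
  port: /dev/ttyUSB0              # your Zigbee coordinator stick
homeassistant: true               # enable Home Assistant auto-discovery
permit_join: false                # set true only while pairing the Hue bulbs
```

Once reset, Hue bulbs pair directly with the coordinator and work without the Philips hub or an internet connection.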
Cant wait for the Services & Apps video!
Love the Gear, The Channel, The Attitude,
Keep up the Good work ! Subscribed !
Happy 2023, from nyc
Built my homelab inspired by your videos. Thanks for the great content :)
Merry Christmas Tim, you rock sir..
Liking the video before I even watch it because I know it's good.
Very nice. Yeah enclosed rack does trap that heat - one way to turn it into a positive is to duct heat out of room. My summer temps are high. On my 24u open frame I got thick grow plastic + magnetic tape to create rear side "panels" to help contain heat right at servers' exhaust. Then picked up inexpensive infinity booster fan + duct to exhaust heat outside with AC window kit. Sits right behind 3x 2u servers. Works well to not keep recirculating heat. That and a 20' king air house exhaust fan sitting in window 5ft away from rack to cool at night. :) Not sure your heat situation in summer but perhaps outside exhaust solution could help with enclosed rack. AC works too. On another note - slim CAT6 cables patch work making me want to spend $.
Great idea! I wish I could find a clean way to seal a duct to my block window vent! Any ideas?
Love your setup! I have just one question: on average, how much power does this setup pull? I had two Dell servers running Proxmox but had to retire one due to rising electricity prices. Have a lovely Christmas.
I have been racking gear for years through my IT travels, including in corporate data centers. The networking and cabling is usually done in the back, as all the connections for the servers are in the back. Most people need all the U's in their rack and cannot justify the cost, or the loss of U's, of doing everything in the front. You lost two U's for the keystone pass-through, plus the cost of that setup. With some gear you can share U's by putting one unit in the front and one in the back; assuming that is possible here, you lose U's for that too. Doing this at home is fine, but you will not see it in an actual data center or corporate setup.
And this is the exact reason why I've avoided racks. The possibilities are infinite.
You've encouraged me to build one for my mom. I already have some gear here to install as soon as her construction is finished.
Beautiful job, nice and clean. It would be nice to see a video about power usage in the future. Where I'm at, power is very expensive, so I use Raspberry Pis, NUCs, and soon a ZimaBoard for my home lab hardware. Keep up the good work.
It looks great Tim. Thanks for the great content this year, I always enjoy your videos. Keep it up. Merry Christmas to you and your family 👍
Thank you! Merry Christmas!
It'd be great to see you add solar to make the lab green. Maybe add some batteries as time goes on.
Great update and upgrades. I'm slowly moving away from Ubiquiti as an edge device option and back to Pfsense/OPNsense. Ubiquiti just lacks creature comforts like HAproxy and other simple services. I will miss the pretty UI graphs but I just dump everything into grafana.
Currently I have a UDM Pro that I'll be selling to cover the UNVR to run my cameras and Protect. I'll virtualize the controller to manage my APs and SSIDs. Got a killer deal on a Netgate XG-7100 that will be running pfSense/OPNsense; it'll handle my VLAN and DHCP duties as well as the previously mentioned HAProxy.
Awesome setup Tim, love the upgrade.
Next up, 10gbit lan?
Great video. I like the organization/labeling. Something I'd recommend is a labeler; I really like the Brother P-touch. It can print wire labels as well as larger labels.
I love it!! 😍 I learned quite a bit from your content, thank you. Wishing you amazing holidays!
Hey, are you running Home Assistant? If so, does the UniFi power distribution unit integrate with it? I would like to use Home Assistant to monitor its power usage, which I am currently doing with a Kasa power bar, but that's obviously not rack-mount.
With the release of K8s 1.26, it is now possible to have a Service of type LoadBalancer with mixed TCP and UDP protocols. That allows me to have a 3-node Pi 4 K3s HA cluster (thanks to your Ansible playbook) with PiHole finally inside K3s.
Longhorn 1.3.2 currently has some breaking changes with K8s 1.25 and up. I had to use the master branch of Longhorn to get it to work on K8s 1.26. They plan to have it fixed in release 1.4.0.
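For context, the mixed-protocol Service that 1.26 made possible looks roughly like this (a sketch; the names, namespace, and selector labels are assumptions, not the commenter's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns
  namespace: pihole          # assumed namespace
spec:
  type: LoadBalancer
  selector:
    app: pihole              # assumed pod label
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
    - name: dns-udp          # TCP and UDP on one Service: GA in K8s 1.26
      protocol: UDP
      port: 53
      targetPort: 53
```

Before MixedProtocolLBService went GA in 1.26, DNS like this needed two separate LoadBalancer Services sharing an IP.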
I could be wrong, but wouldn't hooking up the other redundant supply on the storage shelf share the load between both power supplies, cutting the use of each in half? You'd basically be using the same total amount of power. Other than that, it looks good. Overcomplicating cable management IMHO just hinders repairs and cable replacement.
Sick setup Tim!
Merry Christmas to you and your family!
I have everything in the front also. If done proper it looks good. I think your layout looks great.
This is an excellent video. I'm looking at moving my home equipment from a bookshelf into a rack soon. I'd be very interested in a video on the PDU Pro, particularly with failover with multiple UDM Pros. Thanks for addressing the RGB lighting as well. I've definitely fumbled around in a few racks that were a bit too dim or had areas that fell into shadow. Lighting strips seem like a great way to solve that problem. Perhaps the RGB color could be tied to rack temp for added utility???
LOL - my homelab is a single 14c PC running Proxmox with 2.5g ethernet connected. I am upgrading my network to 10g and I have a Synology NAS (8 drives) - but I don't consider that part of my homelab. I have an office PC, a gaming PC and a handful of project systems like mini PCs and PIs for playing around. My wife thinks all this is overkill. I have to admit, I've never seen RGB in a rack before - nice touch.
That's awesome man. Nice work!
The Amazon listing for your rack says it is 42U - but unless you are a giant, there's something fishy going on. My 42U rack here is significantly taller than I am, and I'm not short. I'm not sure how tall you are - but I would guess your rack is only about 36U.
The rest looks really good - better than most of the racks at work. The only thing that would get you in trouble is your "umbilical" - that needs to be suspended up high enough to walk under. Best would be to have a little piece of cable-tray stick out from the wall, and just have a round on the end to control bend-radius and let it all drop to the rack. Easy to just drop new cables into the tray as needed in the future then.
I really love that Unifi PDU. Sadly they haven't made an Australian version... yet. Hopefully they will one day soon.
This is an awesome setup. My setup is much simpler but so are my needs. I'm really digg'n on the enclosed rack.
Love your setup! I was just wondering if you could share what your monthly electricity bill looks like. I have very similar equipment that I'm afraid to turn on 😂 because of the electricity bill.
The Storinator! It's getting ready to take on some new roles! But I haven't done any upgrades on this, BESIDES..... perfect ;D
Nice work man! We should hang out. I'm in south Minneapolis too, by Powderhorn park. I want to see your setup up close!
do you have a tutorial on how to setup the practical rgb? the motion sensing triggering the rgb power and color.. that would be awesome!
Tim, you should consider solar. Yes, I know, Minnesota. But you'd be surprised. If you have the ability to clear the snow off your panels in the winter, you could easily offset your server room with a 5-6kW system. DM me if you want to see mine. Undecided with Matt Ferrell is a large YouTube channel; he's in Massachusetts and is able to offset his Tesla Model 3 with his system. Coincidentally, he's also a software developer!
Excellent video - thanks for sharing! However, I haven't seen any earthing cables within your server room (rack, walls, etc.), which in your case would increase your protection against surges, eddy currents, static electricity, etc. Maybe you covered them so nicely that I haven't noticed them 😎
Love it! Happy Holidays
Love the rack. Personal fan of wide racks giving more room on the sides, but great work!
@Techno_Tim1 Glad to watch, but considering you lost your account verification, I'll pass. An hour-old account too. Hope you have all your YouTube footage backed up if you lost your account. /s Backup is key!!
Love what you do in your homelab and wish I had the time to do the same. I live in Australia, and I'm finding that my under-the-stairs rack is getting very warm with the doors closed and 4 rack fans running; once I open the doors under the stairs, it cools down dramatically, both in winter and summer (though our winter is not like the US 🙂).
Do you have any air conditioning for the amount of equipment you have, and what would you recommend? It seems very hard to get refrigerated air conditioning to cool a small space (approx. 4-5 cubic metres).
Any inexpensive options that can be picked up in Australia?
Holy patch cables batman. Looks great but what all do you have hooked up that you need ~48 ethernet cables throughout your house?
A wonderful server rack, but why did you replace your Detroit patch panel with a keystone patch panel?
Tim. You the man. This is all very nice. I think the cabling looks great. If Tom Lawrence or Mac Telecom redid the wires, I would wonder what could be done better!
I hate to be "that guy," and I couldn't quite see, but my only concern is the umbilical cord coming in on the top: is that going to rub on that metal edge? It won't happen overnight, but that is my only thought. And part of the reason it's my only thought is that I am learning from you! Lol I have no idea what I'm doing. Lol
Thanks for everything.
Thank you! I'll see what I can do about the cord!
Awesome set up Tim
Looks awesome. Only suggestion: you need some rack blanks to pretty it up.
I'm not a fan of "power up front" because I feel like it adds complexity to maintain the rack. If all of your power cables are strictly in the back, including PDUs, then you don't have to open the front at all to deal with power issues. I'm not sure I'll get a Unifi PDU, but if I do, it will go in the back of the rack. That said, I like what you've done with your rack. It does look very nice, and there's a lot of things you can do just from the front.
Hi, how did you ground your patch panels? They seem to have shielded keystones and you're running PoE through them. This sometimes causes leaks into the rack; have you dealt with that? Also, why not a single dual-socket node? Doing separate 1U servers is going to draw more power, use more components, etc. Lastly, in your previous video about UPSes, you showed they have numerous sockets themselves, produce simulated or true sine waves, and have grouped outlets as well. So by using a separate PDU, do you lose that advantage? I thought there were PDUs where you can get a dedicated inlet for each outlet, to make full use of the UPS. Last question: as a software engineer, do you feel like the homelab takes too much time away from your actual career, or does it feel like an addition? Thank you, just asking. Nice video, man.
You are a good one; I really enjoy every video of yours! I'm very jealous of your 45Drives storage 🤗👍
The best setup for a homelab is 2x ITX X570 boards with IPMI/iBMC that support ECC RAM.
2x because of failover/HA/updating/rebooting, and you need 2 anyway for the storage. Consumer boards are usually limited to 8x SATA + 1x NVMe.
In short, that means an ASRock Rack X570 with an AMD 5600/5800, because it supports ECC RAM and has 2x 10G ports and IPMI.
No graphics card is needed nowadays, because all the boxes (Shield Pro, Apple TV, etc.) decode on their own.
So use that graphics card slot for an x16-to-4x4 NVMe bifurcation card for ZFS cache/metadata vdev/log devices! Simply use some cheap Optane modules there.
Get a lot of RAM; 128GB is best.
Because you need a lot of RAM for virtualization and ZFS.
Put a lot of 20TB HDDs in it, in at least RAID-Z2 or RAID 10 (RAID-Z1 is slow and will probably be abandoned at some point; RAID 5 is okay, but RAID 10 is actually the fastest, though yes, you lose a lot of space).
Connect both servers together over one 10GbE port; use the second port for the switch.
Best of all, if you have the ability to split up your 10GbE port for virtualization (SR-IOV on Intel NICs, for example), use that instead of virtual adapters.
Because that offers hardware acceleration.
You can spec the second server lower to use it simply as a backup server. That means less memory, like 32GB (you can do with less, but you still need a bit, because you'll move the VMs from the other server onto it when you reboot/upgrade the other server).
Just keep the CPU the same generation in both servers. That means the main one can use a 5800 and the second a 5600...
Same generation, because you want to pass the CPU flags to the VMs for sometimes better efficiency, i.e. setting the CPU type to host.
So in the end, you will have a relatively cheap main server and an even cheaper backup server, both running as a Proxmox cluster.
Power-wise, you can even use some scripts to turn the second server on/off just for backups.
Otherwise, you actually don't need that much space on the backup server (but keep in mind: don't use Proxmox Backup Server, use normal Proxmox on it; I just call it a backup server).
So not much storage, and the storage doesn't need to be fast on the secondary/backup server; that means RAID 5 or RAID-Z2, and even though I don't generally recommend it, to save money you can use RAID-Z1 there.
The whole task of the secondary server is to run some important VMs from the main server while you work on the main server, and to keep long-term backups.
That's it.
The rack thing is something for companies, not for a homelab 😂
Btw as example, i do run my homelab that way.
Main server:
ASRock Rack X570D4I
AMD 5800X
128GB ECC memory
8x 20TB as RAID 10 + the bifurcation card in 2x RAID 1 (one mirror for the metadata vdev, on Optane, and the other for the log, on Optane)
2TB NVMe 980 Pro in the NVMe slot, split up with LVM (a 600GB ext4 partition for LXC containers, Docker containers, etc., and 1.4TB as a ZFS cache device)
Using one NIC in SR-IOV mode for VMs (OPNsense and so on)
And the other NIC connected back-to-back to the other server.
1x 250GB NVMe SSD in a USB 3 enclosure for the Proxmox OS.
The second (backup Proxmox node):
ASRock Rack X570D4I
3x 20TB HDDs in RAID-Z1 (L2ARC limited to a max of 8GB) (for long-term backup storage)
AMD 5600X
250GB NVMe for Proxmox.
2x 2TB cheap SATA SSDs, Kingston A500 or something like that (simply as VM storage if I need to work on the main server), simply in RAID 1 with mdadm and ext4.
Why do I use RAID-Z1 for backup storage instead of RAID 5 with ext4?
Well, ext4 has no safety features against long-term bit rot and the like. That means your data can get corrupted on very old backups.
I don't think anyone needs more; that's already a high-end setup for a homelab.
The only things that were really freaking expensive are the 20TB HDDs.
But nowadays you can get one for like 300-350 USD.
And most people probably don't even need that much storage; I just have a ton of data.
Additionally, you don't really need to back up everything. As my main server has a ton of data and a ton of storage, I use the second one only for VM backups and so on.
Mainly, the second server is just there to bridge the downtime for the first server.
As I have OPNsense running, if I had only one server, I would have no internet when the server goes down 😂
The backup job is just an additional task.
Absolutely everything is running on that server (Zigbee, OPNsense, etc.), so you basically run everything on it and don't need any other dedicated hardware in your house.
ZFS on the main server is blazing fast thanks to the special vdev/log/cache.
Additionally, I have datasets for Samba with shadow copies and ZFS snapshots, so you have backups that way as well.
Cheers
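The main server's pool layout described above (8x 20TB in striped mirrors, mirrored Optane for the special and log vdevs, an NVMe partition for cache) could be built roughly like this. A sketch only: the device names are placeholders, and in practice you'd use /dev/disk/by-id paths rather than sdX names.

```shell
# Substitute your own disk IDs before running anything like this.
zpool create tank \
  mirror sda sdb  mirror sdc sdd \
  mirror sde sdf  mirror sdg sdh              # 8x 20TB as RAID 10 (striped mirrors)
zpool add tank special mirror nvme0n1 nvme1n1 # metadata (special) vdev on mirrored Optane
zpool add tank log     mirror nvme2n1 nvme3n1 # SLOG on mirrored Optane
zpool add tank cache   nvme4n1p2              # L2ARC on the 980 Pro's 1.4TB partition
```

Mirroring the special vdev matters: unlike cache and log, losing the metadata vdev loses the pool.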
Too big, Out of control, Crazy; yes. ...and I love it.
Thanks for sharing. At 2:38, what's the name of that whole silver panel? It looks awesome for cable management. And how did you hang the Raspberry Pi and Philips Hue hubs there?
Epic setup, thanks for sharing! Can't wait for your services video. Thanks!
Insane gear/setup. Question: Are you using Solar Power yet?
@TechnoTim thanks for all your videos and support you provide!
Your homelab looks wonderful in any color scheme you activate :)
Following your guides, I'm running Proxmox, WireGuard, Nextcloud, Uptime Kuma, and some more of your homelab recommendations.
Maybe you could point me (and others) in the right direction: I'd like to set up Bitwarden as a VM on my Proxmox server.
However, I'd love to avoid Docker... Since there's no how-to anywhere, do you have any idea how to find out what's needed to set up Bitwarden manually?
Kind regards from Berlin,
Tim
You should upgrade your motion sensor to a mmWave one. It's much more sensitive, so basically just you standing there would be enough.
I built mine with power in the front and back. It looks neat, so not a bad thing. 😊
10:08 yes please
Also great vid