Great video! Why would you use 2 old SSDs? 4 NVMe drives would be great for a FreeBSD RAID0 stripe across them all, with the rest of the space for bhyve virtualization. Then another RAID for the spinning disks. I'd like it if you used one 64-core EPYC, or duals; most people are rocking 128 cores these days in dual-CPU setups. For NVMe, do you have PCIe 4.0 lanes for some Samsung 990 NVMes on the bifurcation card?
Nice video, thanks a lot. One question: the H12SSL manual says that 4 DIMMs must be placed in the furthest slots, while you have them installed in the closest slots. I wonder if you did that deliberately to check whether it works, or by mistake? Thanks!
This will probably be my next big server upgrade in a few years. Right now I am on dual Haswell 18-core CPUs, 5x Nvidia P4 GPUs, and about 15 or so drives, some SSDs, some HDDs. I am doing a bunch of VDI/cloud gaming stuff for fun mostly, but eventually I want to increase my per-core speed and my PCIe connectivity. I think board/CPU combos are around $500-700 USD? Not bad. I'll have to save up though. Obviously a big investment for a hobbyist vs a professional/content creator.
@@ktzsystems Understood. May your job pay you well, your hobbies pay for themselves, or at least may you learn valuable lessons along the way. Mostly my home lab adventures have been fun, frustrating, and HIGHLY educational. Because I do not create content, fun & education was always the goal. With maturity, I have come to occasionally appreciate the challenge.
@ktzsystems do you have the build list in a Google Spreadsheet format? I'd like to build the same thing you did for my home lab but I'm curious about the raid card, PSU and cable you purchased to wire up the drives?
Amazing Python script! I have some issues with an Asrock Rack Romed28t motherboard and Noctua fan speed control too (seems the MB isn't able to speed up the fans when under load).
I've got my TrueNAS server running a 16-core EPYC with 128GB of RAM. I love it. Need to migrate to SCALE though so I can get my A2000 working for encoding Plex streams. Not that it's necessary.
As a heads up, NAND actually prefers to run warm; it can increase lifetime. You want to keep the controller below its throttle point, but the NAND itself can be hot, and that's preferred. 60C is more than reasonable.
Huh. That's super interesting, thanks for sharing. I'm still mildly concerned about the PCIe slot area being so congested; moving those U.2 drives to the front in a caddy, connected via the SlimSAS connector on the board, would free up a lot of space.
Good price on these CPUs, I had no idea an EPYC was this attainable. Aren't they rather slow though? They boost to half the speed of the regular 12th/13th-gen Intel chips, and the RAM is also starting to hover at around half the speed of DDR5. I currently have a refurb dual-CPU Xeon workstation and it's barely usable. Are there homelab workloads that take advantage of slower processing but more cores/threads? The main advantage I can see is the huge number of PCIe lanes for add-on cards and drives.
Single threaded they aren't lightning fast, you're right. But they can crunch a lot all at once, and connect a lot of things together with all the PCIe bandwidth. I've just ordered an upgrade for my media box with an i5 13600K for the speed. I think eventually this EPYC system will end up in a colo; it'd be so great for that.
Hey @ktzsystems, I really enjoyed this, clear audio, informational and very good explanations about your choices and bifurcation. Thanks! 👏🏻😎 instant sub 🤘🏻
Love the video. Can I ask about the SATA DOM ports? Do the SSDs you're using for the boot drives in this build get powered by the DOM ports, or did you add power from the PSU?
I had a similar power issue when putting in a RTX2070 into my system passed through to a VM, I found that the default Nvidia driver that the Ubuntu server script installed was a bit old so I updated the Nvidia driver and the idle power state dropped down to 4 watts with fan at 0 rpm.
@@ktzsystems I haven't seen that kind of implementation, because each vendor has a separate library for managing the GPU (CUDA for Nvidia, ROCm for AMD).
I'd love to have more pcie lanes but I'm betting that my current 13700k in my server does a better job for the game servers or at least the ones that have just a few main threads.
Interesting, which card do you use to put 2 NVMe drives on with bifurcation? Probably not PCIe 5.0, right? I'm building a Genoa system as well and thinking about Kioxia drives, but the PCIe 5.0 ones are still very expensive, and I don't think PCIe 5.0 is possible on such a card; I was only able to find single-drive cards. I'm using a Fractal 2XL case, and the Icy Dock bays are ridiculously expensive: about 450 euro for a 6-bay NVMe enclosure, a piece of metal with some U.3 adapters 😮. 200 would be more reasonable.
Is it a bad idea if I use EPYC for data storage/NAS? I was thinking of buying normal hardware, but the price of a second-hand EPYC and motherboard is quite cheap. My only worry is that if I cheap out on the hardware by buying the EPYC, I might end up paying more in power bills.
it is almost certainly overkill for basic NAS / storage only. but if you're looking to do VMs, and experiment more in a homelabby type fashion then it might be worth it.
the sliger case doesn't officially support it but i screwed in a single 80mm fan. since the video i have removed it as the fan control script coupled with nvidia power management discussed in the video have kept idle temps under control.
Can I ask: I have mounted the CPU fan the opposite way around, i.e. fan facing the rear. Will this affect the VRM temps, and will it be OK as it is, or will I have to turn the heatsink around? I don't really want to do that because of how sensitive the EPYC socket can be. Also, reading the script, it says it can be dangerous and smoke the CPU; is the script safe to run?
Brother, I bought the same motherboard and CPU and need to build a server, but I can't buy the case; I'm very tired of searching the internet and it's much too expensive. What is an alternative solution?
How did you get your Arc GPU properly passed through to your VM running Plex in docker? I have mine (a380) passed through from Proxmox to a Ubuntu 22.04 VM. The drivers are installed in the VM so that the card shows up in intel_gpu_top, but the VM crashes if I try to transcode with the card. I also get errors in dmesg about resizable BAR not working and only allocating 256MB to the card. Any tips?
The bios doesn’t have a resizeable bar option (yet). I read - forget where - it is coming soon so until it does I’m not passing through and just using the card for quicksync duties.
Are you saying that with everything assembled and running your power usage is under 150 watts? Is that an estimate or meter reading ? If it’s real I will be replacing my Dell R720s.
175w with everything assembled at a "base idle" which means both GPUs loaded, a handful of app containers and VMs not doing anything intensive, and 10 HDDs spinning.
It's a shame the 7402 only has a 2.8GHz base and 3.35GHz boost. Mind you, I have a dual setup with 2x 7302, so I still maintain a high core count: 16 cores/32 threads each, making 32 cores/64 threads. My bugbear is that despite the fact that it has an Above 4G Decoding setting in the BIOS, there's no ReBAR setting, and the board only supports PCIe Gen 3.

It took me a while to work out that a slot only sees 4 drives if you bifurcate it into x4x4x4x4, and I had to buy proprietary Supermicro cables for the U.2 drives and SAS to 2x 4-port SATA. I don't know if U.2 is faster, but I bought an Icy Dock converter to turn an M.2 into a U.2 for one of the U.2 ports as the boot drive, and an Intel U.2 drive for the other port. Originally I thought the onboard M.2 port was a SATA type, but I was wrong.

I'm gutted one of the 10TB NAS drives upped and died, and at the moment I can't afford to replace it :(. I'm wondering how good the dual-SP3 EPYC Gigabyte motherboard is, but they are typically £800-£1000 instead of £250-£400. Oh, and if you see cheap EPYC 7001 or 7002 CPUs, make sure they're vendor unlocked; the cheaper ones are usually cheap because they're vendor locked, typically to Dell systems. Initially you may also need a retro SVGA monitor to get into the BIOS via the onboard BMC port before changing to the discrete card. As far as I can tell the BMC doesn't work with Windows 11.

Oh, and I only got 15 out of 16 RAM slots running, so instead of 128GB it's got 120GB. I suspect CPU mounting pressure or a faulty chip; maybe it just needs swapping to the second CPU socket. But I've killed this type of socket in the past trying to upgrade, so I'm leaving it alone.
I searched for the detailed specifications of the EPYC 7402 processor, and combined with the information shown in your video, I guess you purchased retired hardware used by the CCP to monitor the Chinese population. Anyway, you are satisfied that you made a good deal, evil or not. I personally will never purchase e-waste decommissioned by the Chinese government.
FYI I will be doing a follow up Q&A video in a few days. Leave a comment with your question below and I'll do my best to address it in the upcoming video.
After messing with all those cables for the drives in that case... does the price point of the HL15 feel easier to swallow?
How's the vibration noises / sound dampening with all those spinning disk drives?
Alex, these videos are awesome! Thank you for sharing your journey with the world. I would love for you to go through an introduction to using NixOS for your perfectmediaserver setup. I've been happily plodding away on Debian/Ubuntu for years and have everything scripted for myself, but I'm very interested in the idea of the OS as code. I currently have my basic setup all covered with Ansible playbooks, but NixOS is still very intriguing to me (I know that you are a big Ansible user, so I'd also like to know what ends up in your config or flakes vs. what you configure with Ansible). Not surprisingly, we are currently running the same mobo and CPU combo from the same eBay seller (although mine is in a 2U Supermicro 826 case and JBOD'd to a couple of other Supermicro chassis). Keep up the great work!
@@rubylaserz you can bet on this being a theme on the channel soon enough (it already is on the podcast - selfhosted.show)
good to see you knocking around again mate.
Thanks! Now, I’ll have to check out the podcast ;)
It's impressive that you can play RimWorld while also presenting!
I just taped down the mouse button ;)
@@ktzsystemsOhhhh smart.
I have that Icy Dock with 6 2.5" bays. The fan isn't too noisy, and it has a built-in fan controller: there is a switch at the back for full speed, half, and off. I run it at half speed and it's silent.
i have the same icy dock and put tiny noctua fans on it to fix the noise
You sir have done what I tell everyone who thinks they want to build a workstation/HEDT box with a Threadripper or Xeon: just get an EPYC. You save money, and get more PCIe lanes and more threads. Great build. Boring is good!
Couldn't agree more regarding "boring is good". I am suffering with my W680, DDR5 ECC UDIMM RAM and i5-12500. I wish I had gone for a true server platform despite a few more watts. The RAM ultimately caused restarts, so much so that I replaced the CPU, PSU and motherboard, not knowing that I should run the RAM at 3600MT/s or, even better, at 3200MT/s, because otherwise it's unstable.
No traces of the random reboots in the BMC, journal or kernel (or really any) log, and when running Memtest86 for hours, it just worked flawlessly.
3 months of time lost... I wish my build was boring.
Absolutely incredible video. This is my dream system. Been reading everywhere that the 7302/7402 drew like 100W at idle. Great to hear yours was doing 40-50W at idle.
Though I don't have a wall meter, according to HWMonitor both of my 7302s run around 55W at idle, roughly 25-35C, using those same 92mm-fan, 6-heatpipe heatsinks from the video with Thermal Grizzly Kryonaut Extreme compound. Running Cinebench R23 hits 140W on CPU1 and 52W on CPU2 at 45-55C (ambient is pretty cold; however, I have seen 60-65C in the past), with 27 passes and a score of 35341, beating the Ryzen Threadripper 2990WX by over 5000 points.
@@danthompsett2894 some good numbers there! these modern chips are so good.
@@ktzsystems do you have any data for how much the 7402 can draw under full load? I have a 7551P and noticed online literature suggesting max power draw could be over 300W, but I'm not sure how that's possible at 180W TDP, and my chip never seems to want to pull more than 130W anyway.
Came for the server build content, stayed for the ziploc bag reviews
A few months back I stumbled upon your video where you 3D printed the Dell OptiPlex rackmount; after that I subscribed and went back and watched almost all your other videos. This PC build is almost exactly what I would like to do, except I would make a few changes so it fits my needs! These kinds of builds are cool, and hopefully it fulfills your homelab needs until you grow! Thanks for making these cool videos!
Really looking forward to your comparison of this case to the HL-15. Really enjoy your videos, thank you for your efforts.
You don't need a Python script to control your fans. Just use IPMI to set the min fan speed to 250 RPM. I have the same motherboard. ipmiutil sensor -g fan -n 42 -l 250 -N 192.x.x.x -U user -P password
That is for sensor 42, which is FAN2. You will need to set it for each fan whose minimum speed you want to set.
I'd have to disagree: if you're not monitoring your CPU temperatures and adjusting fan speed based on them, you could burn out your CPUs. I think that would be a good temporary setup, but a custom Python script you could add as a cronjob would be more effective in the end.
Plus, the Python script should email you immediately if the CPU temperature crosses a threshold, for manual intervention. What if you had 128 cores going full blast mining etc. and the thermal paste wears out after a year?
@@syleishere I have an H12SSL-i. Using IPMI I set the min fan speed, because I am using 140mm fans which don't need to spin fast. The motherboard ramps the fan speed up and down for the 2 thermal zones automatically based on temperatures. And if the CPUs do get too hot, an annoying audible alarm will go off and you will have to unplug the power supply to make it go away. As long as you are using 4-pin fans, the motherboard does a fine job of changing fan speed as a function of internal temperatures.
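For anyone weighing the script-based approach debated in this thread, here is a minimal, hypothetical sketch of a temperature-driven fan curve in Python. The thresholds, duty values, and the commented-out ipmitool call are all assumptions for illustration, not taken from the video's actual script (Supermicro raw fan commands vary by board generation):

```python
# Sketch of a temperature-driven fan curve, as discussed in the thread above.
# The curve values below are illustrative assumptions only.

def fan_duty(cpu_temp_c: float) -> int:
    """Map a CPU temperature (deg C) to a fan duty cycle percentage."""
    curve = [(40, 25), (55, 40), (65, 60), (75, 80)]  # (temp ceiling, duty %)
    duty = 100  # anything above the last ceiling runs full blast
    for ceiling, d in curve:
        if cpu_temp_c <= ceiling:
            duty = d
            break
    return duty

def alert_needed(cpu_temp_c: float, limit_c: float = 85.0) -> bool:
    """True when the temperature crosses the manual-intervention threshold."""
    return cpu_temp_c >= limit_c

if __name__ == "__main__":
    for temp in (35.0, 60.0, 90.0):
        duty = fan_duty(temp)
        print(f"{temp:.0f}C -> {duty}% duty, alert={alert_needed(temp)}")
        # A real cronjob/daemon would now push the duty to the BMC, e.g.
        # (board-specific, shown only as an assumption):
        # subprocess.run(["ipmitool", "raw", "0x30", "0x70", ...])
        # and send an email when alert_needed(temp) is True.
```

Whether this beats simply raising the IPMI minimum-speed thresholds as suggested above is a trade-off: the script reacts to load, the threshold approach has fewer moving parts.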
Thank you for the video, really interesting to watch and it explained quite a lot; I'm excited about my build now. Dipping my feet into homelab stuff, I just bought an H11SSL-i mobo with a 7551P and 128GB RAM. I have so much more info now!
I have that board/CPU/memory combo from Tmug on eBay, and it is great for what I am using them for. One is in an Unraid box, where it might be overkill, and the other one is in my ESXi system, where it is perfect for hosting a bunch of VMs for homelabbing.
You are a gentleman and a scholar. I am actually planning to buy this motherboard in a combo on eBay with an EPYC 7402P CPU so this gives me a nice preview of this system board. Not planning to use that chassis though, I'm planning to use a Rosewill RSV-L4412U.
I bought a 7302 and 128GB memory from the same seller a few months ago. They were very responsive to questions as well.
I bought a similar combo with 256GB of memory, and I had a great experience with the seller and the build.
At a military data center that did not allow modification of the equipment to attach an air filter, I cut open-cell washable filter material to friction-fit into the air intake. The combination of the friction fit and intake airflow held the filter in place. In your case you may need a bit of Velcro to hold the filter in place.
On the H12 motherboard, you can change the fan settings, which might stop the ramp-up/down. Try optimal or even heavy workload. Your Noctuas will still be quiet. Better than messing with raw BMC scripting, if it works. Nice video sir.
more videos about self hosting please. Keep up the great tutorials.
Loved this review of your hardware. Nice to see what a really upscale build can do for a person. I'd love to have something like this one day, because you've got a lot of PCIe lanes. Very nice.
Had a couple of cases with removeable motherboard trays way back (last century in fact) and they were great!
Removable motherboard trays were a feature on many PC cases in the late 90s
i'm partying like it's 1997 over here then!
@@ktzsystems Yeah, it was a nice feature I really miss. Beats building the PC on the motherboard box.
I love my Sliger case. This one is pretty cool but for the cost of the case I can buy a R730xd. For the cost of the motherboard combo I can fill the R730xd full of fancy drives and upgrades. That being said I kinda wish I had a CX4712 for my eventual homelab build for my home. It would be cool to have a quiet case since it will be close to my bedroom. I prefer Dell's iDrac over other brand's LOM though.
Just came across your channel today, great work! I really enjoyed the video, gonna go check out some more of your stuff :)
I first came across the motherboard tray design back in the late 1990s, InWin had a case that had that feature. I wish more cases would re-examine that ability.
Wow, I have never seen something closer to my new NAS, haha. Your spec sheet is almost the same as what I just built out, even down to the case. I am super happy with the Sliger case.
Awesome! How are you liking the build?
@@ktzsystems I am really liking mine. I went a little more insane than you did... I have an 8-bay 2.5" caddy full of 4TB SSDs and four 4TB NVMe drives, then ten 10TB SATA drives.
Oooh yeah, I waited for this video. Your first one on that board a couple of days ago was the first video I watched from you. Instant sub. Your videos are really good.
The case is absolutely impressive! 175 watts is still a lot, though. With a bad energy contract here in Germany that would be around 600€ per year in electricity costs alone; even with a good one it would be around 450€.
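The cost estimate above is easy to check. A quick sketch, where the per-kWh rates (roughly 0.39 €/kWh for a bad German contract, 0.29 €/kWh for a good one) are assumptions back-calculated from the figures quoted, not stated in the comment:

```python
# Annual electricity cost of a constant 175 W draw at assumed German
# household rates (EUR per kWh); the rates are illustrative guesses.
def annual_cost_eur(watts: float, eur_per_kwh: float) -> float:
    kwh_per_year = watts / 1000 * 24 * 365  # 175 W -> 1533 kWh per year
    return kwh_per_year * eur_per_kwh

print(round(annual_cost_eur(175, 0.39)))  # 598, close to the ~600 EUR figure
print(round(annual_cost_eur(175, 0.29)))  # 445, close to the ~450 EUR figure
```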
Thank you for doing a build video without once mentioning bloody "gaming".
Thoroughly refreshing.
Thank you, sir.
The terminal is the only video game I need :p
Casually like $8000 AUD worth of unobtainium in Australia.
Nice gear! I like even your Cologne number plate in the background ;-)
I love the Jezza reference! Awesome build, Alex!
Nicely done video. Great machine. Thanks for taking your time to do this wonderful content.
Removable mobo trays used to be a common thing; god knows why they died out (probably cost/complexity), they made life so much easier. Good selection of parts here.
I have a similar setup just waiting for storage, and 100 percent agree about the case. I love it, but the only pain was wiring up the hard drives.
BTW, I've watched a billion similar vids and have been really looking for an upgrade like this for my 200TB array. I pulled the trigger on a very similar config. Thanks for the recommendation!
Yes, hello Alex, can you give more details on the boot drive configuration? At @1:55 in the video you mention the two SSDs will be on the two `dumb ports` on the motherboard. How will you configure them in a RAID? Will this be done during the install with mdraid, is there software RAID on the motherboard, etc.? Thank you in advance.
The removable fan on the CPU cooler is from the server world. That way you can remove the fan quickly and easily without having to fight with the cooler, just the fan; on some systems you can even replace a failed fan without turning off the system. For places screaming about five-nines uptime it can be a good thing.
Lots of older cases had motherboard trays. My InWin Q500s I've had since my 486DX2 days; I still have 2 of them. One has my older Unraid 5.0.6 running 4TB or smaller drives, either internally in the case or in my SATA2 hot-swap drive enclosures; it has a 3-bay and a 4-bay setup in the 5.25" bays.
Thank you for this video!!!!! I really appreciate it. Hopefully at some point soon I will have an Epyc workstation similar to this one.
That case looks amazing actually. twin hot swappable bays for 6 2.5's each? That's clean.... I like it.
And it looks downright low-cost next to the HL15...
14:40 FROM UNDERNEATH?? I never even thought to look there due to the absurdity; I assumed they were rivets. It was easier taking apart the cage holding the 10 adapters (which isn't even needed).
What a coincidence! I just started an EPYC build myself with very similar components! I just received my Supermicro H12SSL-NT, an almost identical motherboard with a few differences like onboard dual 10GbE and 2 SlimSAS connectors. I paired it with the same 8x 32GB DDR4 3200MT/s RDIMMs as well. I went with a 7302 CPU, a Rosewill RSV-L4500 case, and a Sparkle Intel Arc A380 low-profile card (the cooler still has a width of 2 slots 😢). The only thing left for me to buy is storage and an RTX 3090 or two.
You’ll just have to hang the GPU out of the bottom slot so it overhangs into the ether!
@@ktzsystems I think I found a downside with this setup that hopefully gets resolved in a future BIOS update: Resizable BAR isn't currently available as an option to enable in the BIOS. From what I've read, that impacts the performance of Intel Arc cards big time, including their transcoding performance. Supermicro is allegedly working on a BIOS update to enable it, so hopefully that is released soon.
Did you ever solve this ReBAR issue? Any links you can share? @@tehsnipes123
@@tehsnipes123 BIOS 3.0 was released back in July which supposedly includes ReBAR. Do you know if it's working as expected?
@@jasonmako343 Yep I gave it a shot and ReBar is working great! Shoutout to Supermicro for updating the BIOS
Ha! Freezer Bags, yep, I have three. Cables, Screws, and random small parts :).
Also, it feels weird to hear someone say, "during Covid" and knowing exactly what you mean.
I've also just realised you're the Tailscale guy whom I've been watching over and over this week. No idea why it didn't click... I was like " this guy looks familiar..."
That's me!
I have the Icy Dock 4-drive cage for a 5.25" bay; my storage server is using two of them and they are great. You can easily replace the fans with Noctua fans of the same size and never hear them.
love your videos. keep up the great work!
For example, I have had bad experiences with the Crucial MX500. Both the first unit and the replacement had write errors under ZFS after a few weeks. But this may also have been due to the OpenZFS bug.
I have a few MX500s without issues. Yes, it could have been the OpenZFS bug. Try reformatting it and redoing the ZFS pool; it might work just fine.
Thank you for the video and awesome information. I am looking at this build to upgrade my HomeLab habit (currently have a couple of HP Elitedesk 800 G3 sff).
My use case will be the following:
- Plex with Arr’s and associated apps
- HomeAssistant and associated apps
- Frigate and Associated apps
- Local LLM
- Retro Gaming
Is there a list of full build parts (like PSU)? I am currently on OMV but looking at TrueNas or Unraid. Any suggestions? Thanks again!
Great video. Why would you use 2 old SSDs? Four NVMe drives would be great for a FreeBSD RAID0 stripe across them all, with the rest of the space for bhyve virtualization. Then another RAID for the spinning disks. I'd like it if you used one 64-core Epyc, or duals; most people are rocking 128 cores these days in dual-CPU setups.
For NVMe, do you have PCIe 4.0 lanes for some Samsung 990 drives on the bifurcation card?
B&H Photo Video sells the Intel Arc Pro A40. I purchased one from them for around $200.
Nice video, thanks a lot. One question: the H12SSL manual says that 4 DIMMs must be placed in the furthest slots, while you have them installed in the closest slots. I wonder if you did that deliberately to check if it works, or did so wrongly? Thanks!
This will probably be my next big server upgrade in a few years. Right now I am on dual Haswell 18-core CPUs, 5x Nvidia P4 GPUs, and about 15 or so drives, some SSDs, some HDDs. I'm doing a bunch of VDI/cloud gaming stuff for fun mostly, but eventually I want to increase my per-core speed and my PCIe connectivity. I think board/CPU combos are around $500-700 USD? Not bad. I'll have to save up though. Obviously a big investment for a hobbyist vs a professional/content creator.
Make no mistake it’s a big investment for me too!!
@@ktzsystems Understood. May your job pay you well, your hobbies pay for themselves, or at least may you learn valuable lessons along the way. Mostly my home lab adventures have been fun, frustrating, and HIGHLY educational. Because I do not create content, fun & education was always the goal. With maturity, I have come to occasionally appreciate the challenge.
nice rimworld gameplay in the background
Winner!
@ktzsystems do you have the build list in a Google Spreadsheet format? I'd like to build the same thing you did for my home lab but I'm curious about the raid card, PSU and cable you purchased to wire up the drives?
You can get a custom filter from DEMCiflex; just sketch up the shape you need and they'll make it up for you.
Amazing Python script! I have some issues with an ASRock Rack ROMED8-2T motherboard and Noctua fan speed control too (it seems the motherboard isn't able to speed up the fans when under load).
wow, i purchased almost this whole setup last week also... i went with bunch of optane drives though
That's what I'm looking at.
FYI: That case has space for two SSD on top of the power supply, those could be your boot drives.
Indeed it does!
I've got my TrueNAS server running a 16-core Epyc with 128GB of RAM. I love it. Need to migrate to Scale though so I can get my A2000 working for encoding Plex streams. Not that it's necessary.
As a heads up, NAND actually prefers to be warm, and running warm can increase its lifetime. You want to keep the controller below its throttle point, but the NAND can run hot, and that's preferred. 60°C is more than reasonable.
Huh. That’s super interesting, thanks for sharing.
I'm still mildly concerned about the PCIe slot area being so congested; moving those U.2 drives to the front into a caddy, connected via the SlimSAS connector on the board, would free up a lot of space.
Total nonsense.
At that cost it should have 12 bays for the HDDs and a backplane.
Good price on these CPUs, I had no idea an Epyc was this attainable. Aren't they rather slow though? They're half the speed of what the 12th/13th-gen regular Intel chips boost to. The RAM is also hovering in the half-speed range compared to DDR5. I currently have a refurb dual-CPU Xeon workstation and it's barely usable. Are there homelab workloads that take advantage of slower processing but more cores/threads? The main advantage I can see is the huge amount of PCIe lanes for add-on cards and drives.
Single threaded they aren’t lightning fast you’re right. But they can crunch a lot all at once. And connect a lot of things together with all the pcie bandwidth.
I’ve just ordered an upgrade for my media box with an i5 13600k for the speed.
I think eventually this epyc system will end up in a colo. it’d be so great for that.
Great tips along the way
I've heard some horror stories about getting bifurcation working, so it's nice to hear it worked right out of the box!
Is the RAM specific to this motherboard, or can it be any DDR4? Congratulations on the video!
Hey @ktzsystems, I really enjoyed this, clear audio, informational and very good explanations about your choices and bifurcation. Thanks! 👏🏻😎 instant sub 🤘🏻
Love the video. Can I ask about the SATA DOM ports? Do the SSDs you're using as boot drives in this build get powered by the DOM ports, or did you add power from the PSU?
They could be powered with adapters from the dom port but in this case I ran a power cable from the PSU
@@ktzsystems Thank you
Subscribed because of the RimWorld gameplay in the background panel 😁
I have the same motherboard but cannot get it to POST. Did you have the GPU installed when you got it to POST the first time around?
I had a similar power issue when putting an RTX 2070 into my system, passed through to a VM. I found that the default Nvidia driver that the Ubuntu server script installed was a bit old, so I updated the Nvidia driver and the idle power state dropped down to 4 watts with the fan at 0 rpm.
Would love to see the Ollama / Llama / ML setup on the system. Try to do dual GPU, I think the MB can support it.
would that be sane to attempt with 1 nvidia gpu and 1 arc?
@@ktzsystems I haven't seen that kind of implementation, because each vendor has a separate library for managing the GPU (CUDA for Nvidia, ROCm for AMD).
@@ktzsystems from what i've read you want as much vram as possible so you can load the whole llm into the vram so it is performant. 3090's are king.
And I'm getting a Dell R410 while you're rocking this beast. What's the proper English phrase to say "jealous"?
You bastard.
What’s with the cool german numberplate in the back?
I'd love to have more pcie lanes but I'm betting that my current 13700k in my server does a better job for the game servers or at least the ones that have just a few main threads.
Likely the 13700K will win single-threaded.
Wait, so server motherboards follow the same ATX and E-ATX sizing standards that desktop motherboards follow?
Interesting. Which card do you use to put 2 NVMe drives on with bifurcation? Probably not PCIe 5, right? I'm building a Genoa system as well and thinking about Kioxia drives, but the PCIe 5 ones are still very expensive, and I don't think PCIe 5 is possible on such a card; I was only able to find single-drive cards. I'm using a Fractal 2XL case. The Icy Dock bays are ridiculously expensive: about 450 euro for a 6-bay NVMe unit that's a piece of metal with some U.3 adapters 😮. 200 would be more reasonable.
I just added a link to the description as well. It’s this one.
amzn.to/3P2tg8C
Is it a bad idea if I use Epyc for data storage / NAS? I was thinking of buying normal hardware, but a second-hand Epyc and motherboard is quite cheap. My only worry is that if I save on the hardware by buying the Epyc, I might end up paying more in power bills.
it is almost certainly overkill for basic NAS / storage only. but if you're looking to do VMs, and experiment more in a homelabby type fashion then it might be worth it.
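On the power-bill worry, a quick back-of-the-envelope sketch helps. The 175 W idle figure comes from elsewhere in this thread; the 40 W comparison point and the $0.15/kWh rate are invented placeholders, so plug in your own numbers:

```python
# Rough annual electricity cost for a server idling at a given wattage.
# The default rate is a hypothetical example; substitute your local $/kWh.

def annual_cost(watts: float, rate_per_kwh: float = 0.15) -> float:
    """Return the yearly electricity cost in the rate's currency."""
    kwh_per_year = watts / 1000 * 24 * 365  # convert W to kWh over a year
    return kwh_per_year * rate_per_kwh

# e.g. an Epyc box idling around 175 W vs a small NAS build around 40 W
print(f"175 W: ${annual_cost(175):.0f}/yr")  # 175 W for a year ≈ 1533 kWh
print(f" 40 W: ${annual_cost(40):.0f}/yr")
```

At those assumed numbers the Epyc costs roughly $180/yr more to idle, so it takes a few years for the power delta to eat a cheap-hardware saving.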
I don't know about the 870 Evo SSD, but I lost a 970 Evo Plus SSD last year with one year of light use; it had faulty firmware.
What size is the rear fan?
The Sliger case doesn't officially support it, but I screwed in a single 80mm fan. Since the video I have removed it, as the fan control script coupled with the Nvidia power management discussed in the video have kept idle temps under control.
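For anyone curious what a fan control script like the one discussed boils down to, the core is just a temperature-to-duty-cycle curve whose output gets written to the BMC (typically via ipmitool raw commands). A minimal sketch of that mapping, with breakpoints invented for illustration rather than taken from the actual script:

```python
# Hypothetical temperature -> fan duty curve; breakpoints are examples only.
# A real script would read the CPU temperature (e.g. from lm-sensors) and push
# the resulting duty cycle to the BMC, with a failsafe that restores full
# speed if the script dies.

CURVE = [(40, 20), (55, 35), (65, 60), (75, 100)]  # (temp °C, duty %)

def duty_for_temp(temp_c: float) -> int:
    """Return the fan duty % for a temperature, stepping up through the curve."""
    duty = CURVE[0][1]  # floor: minimum duty below the first breakpoint
    for threshold, d in CURVE:
        if temp_c >= threshold:
            duty = d
    return duty

print(duty_for_temp(70))  # between 65 and 75 °C -> 60
```

The "it can smoke the CPU" warning in scripts like this is presumably about exactly that failsafe: if the controller crashes while the fans are pinned low, nothing ramps them back up under load.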
Can I ask: I have mounted the CPU fan the opposite way around, i.e. fan facing the rear. Will this affect the VRM temps, and will it be OK as it is, or will I have to turn the heatsink around? I don't really want to do that because of how sensitive the Epyc socket can be. Also, on reading the script it says it can be dangerous and smoke the CPU; is the script safe to run?
Brother, I bought the same motherboard and CPU. I need it as a server, but I can't buy the case; I'm very tired of searching the internet and it's much too expensive. What is the alternative solution, bro?
How did you get your Arc GPU properly passed through to your VM running Plex in docker? I have mine (a380) passed through from Proxmox to a Ubuntu 22.04 VM. The drivers are installed in the VM so that the card shows up in intel_gpu_top, but the VM crashes if I try to transcode with the card. I also get errors in dmesg about resizable BAR not working and only allocating 256MB to the card. Any tips?
The bios doesn’t have a resizeable bar option (yet). I read - forget where - it is coming soon so until it does I’m not passing through and just using the card for quicksync duties.
I'm looking to upgrade my lab from 3x Xeon E5 2696v3 18c 256GB DDR3 to 3x Epyc 7402 with 256GB DDR4. How good is your Epyc build against the Xeons?
I don’t have the xeons any longer. But if you want me to run some specific benchmarks I’d be happy to.
Is there a reason you don't go with IPPC-3000 fans from Noctua with your server?
Doing the same build to replace an older Supermicro dual processor Xeon setup.
Are you saying that with everything assembled and running your power usage is under 150 watts? Is that an estimate or a meter reading?
If it’s real I will be replacing my Dell R720s.
175w with everything assembled at a "base idle" which means both GPUs loaded, a handful of app containers and VMs not doing anything intensive, and 10 HDDs spinning.
Bro, what kind of power supply is that? Please attach the eBay affiliate link, I want to buy that.
It's a shame the 7402 only has a 2.8GHz base and 3.35GHz boost. Mind you, I have a dual setup with 2x 7302, so I still maintain a high core count: 16 cores / 32 threads each, making 32 cores / 64 threads. My bugbear is that despite the BIOS having an Above 4G Decoding setting, there's no ReBAR setting, and the board only supports PCIe gen 3.

Yeah, it took me a while to work out that it only sees 4 drives if you bifurcate the slot into x4x4x4x4, and I had to buy proprietary Supermicro cables for the U.2 drives and two SAS-to-4x-SATA breakouts. I don't know if U.2 is faster, but I bought an Icy Dock converter to turn an M.2 into a U.2 for one of the U.2 ports as the boot drive, and an Intel U.2 drive for the other port. Originally I thought the onboard M.2 port was a SATA type, but I was wrong. I'm gutted one of the 10TB NAS drives upped and died, and at the moment I can't afford to replace it :(.

I'm wondering how good the dual-SP3 Epyc Gigabyte motherboard is, but they're typically £800-£1000 instead of £250-£400. Oh, and if you see cheap Epyc 7001 or 7002 CPUs, make sure they're vendor unlocked; the cheaper ones are typically cheap because they're vendor locked, usually to Dell systems. Initially you may need a retro SVGA monitor to get into the BIOS via the onboard BMC port before changing to the discrete card. As far as I can tell the BMC doesn't work with Windows 11. Oh, and I only got 15 out of 16 RAM slots running, so instead of 128GB it's got 120GB; I suspect CPU mounting pressure or a faulty chip, or maybe it just needs swapping to the second CPU socket. But I've killed this type of socket in the past trying to upgrade, so I'm leaving it alone.
You would think that today, parts of a PC that do nothing would not draw anything, maybe 1 watt for being borderline standby.
For this price point, it should include an HDD backplane.
Is that MrSamuelStreamer playing in the background lmfao??? Edit: I think I saw AmbiguousAmphibian's logo. Either way, RimWorld funny.
Did you get them shipped to the uk or the us?
USA 🇺🇸 as that’s where I live don’t let the accent fool you ✌️
My understanding is the problem was with the NVMe version.
What power supply did you use for this?
$400 feels a bit much for what you get
creamy a.f!
tugm is good :)
Do you live in Germany ?
I live in North Carolina.
I searched for the detailed specifications of the EPYC 7402 processor, and combined with the information shown in your video, I guess you purchased retired hardware used by the CCP to monitor the Chinese. Anyway, you are satisfied that you made a good deal, regardless of whether it was evil or not. I personally will never purchase e-waste decommissioned by the Chinese government.
post your source for this claim, I am interested.
Great video! Inspires me to want to build something similar.