"But I already own a Dremel so..." The start of many emergency room tales.
I referenced this video at the ER. Told them to like and subscribe.
Red shirt Jeff approved...just sayin'
🙌Thanks for including us in the build! 🙌
Awesome work 💪🏼
woah
Sweet piece of hardware - investigating now!
Mr. jetkvm, if I back your product, do you promise to deliver? The reviews all seem fantastic, but most come with a warning that Kickstarter projects can just disappear overnight with your money.
Where can I buy a JetKVM??
Can't wait to see what functional 1U jank you can dremel together for your 200K special!
I'm currently in the process of building a 3 x 1U cluster myself, and I relate so much. The thermal constraints, the search for the right motherboard/CPU combo, cable management, enough 2.5" space, the noise constraints, *the chassis itself*... What a headache. But what fun it is at the same time
A Dremel to the heatsink is fine... no issues there. If you had taken the Dremel to the motherboard like LTT did on his pfSense build from a couple of years ago and destroyed 4 motherboards, then that would be a horse of a different color.
This is so not hacky, well done.
LTT -> LHNT: Linus How Not To! or LSTT: Linus Sketchy Tech Tips.
I think most people watch the show as comedy: how will Linus frack something up horribly today?
LTT is a quack factory masquerading as a tech channel.
You got a Dremel, right... notch that little tab... sheesh... lol
Such an awesome little build with TONS of HP shoehorned in!
And for someone that is budget minded, not getting tens of thousands of dollars' worth of sponsored drives and hardware... buying it yourself is within reason.
Keep 'em coming!!!!
Bro, love this. Brett is getting back to the homelab roots!!!! Nice build, 10000% approved.
Told you ;)
Best server I ever built was an HP Beats laptop from 2015 😂😂😂
Every kernel update I have to go and re-enable the USB ethernet adapter 😂😂😂
I think it's perfect because if I can get this thing running for a hundred days at a crack with no downtime then I could get legitimate hardware running perfectly.
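If the adapter just needs its driver kicked after every kernel update, a small script can at least automate the ritual. This is a minimal sketch, assuming a common Realtek-based USB NIC; the interface name and driver module are placeholders, so substitute your own (check `ip link show` and `lsusb`). If it's an out-of-tree vendor driver, packaging it with DKMS so it rebuilds on each kernel update is the more permanent fix.

```bash
#!/usr/bin/env bash
# Hypothetical helper: kick a USB NIC back to life after a kernel update.
# Interface name and driver module are assumptions; adjust for your hardware.
IFACE="${1:-enx00e04c680001}"   # placeholder MAC-based interface name
DRIVER="r8152"                  # common USB 2.5GbE chipset driver

sudo modprobe -r "$DRIVER" 2>/dev/null   # unload a half-broken module
sudo modprobe "$DRIVER"                  # reload it fresh
sudo ip link set "$IFACE" up             # bring the link back up
sudo dhclient -v "$IFACE"                # re-acquire a DHCP lease
```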
Every time I see a 1U build I'm like "that's a cool system and a totally unreasonable amount of human suffering". Even 2U has certain imposed limits, so I think I'll use those for a practical minimum viable idea. Thanks for the video.
Smart
Yeah, I was looking at building my own OPNsense box and trying to decide 1U or 2U. I decided on 2U because it isn't nearly as hard to plan for. The added benefits to part compatibility, noise, and cooling can't be overstated. Instead of dealing with 40mm fans you can use 80mm fans. You also get to use half-height cards in a regular chassis, which means most add-in cards will just work without any issues. Full-height cards aren't all that difficult either, since you can use a riser and still handle some of the bigger cards like GPUs, even if they are thick. Yes, 2U is 2x as much space as 1U, but 2" is not all that much when installed in a home lab. The benefits you get by going with 2U are just worth it.
@@chaosfenix It's also much, much more versatile to go with a 2U chassis; so many more build configs can fit in one. I hate single-use computers, so any time I can retire a machine and re-deploy it for friends/family as something useful to them, that has major value to me.
Not so much future-proofing fallacy as 'upcycling value in future'.
@@FrenziedManbeast Well, and to this point, you can usually deploy more powerful hardware without resorting to jet-engine fans. In this build it was being pushed thermally with a relatively low-power CPU. If it was 2U you could fit a ton more cooling and as such go with something that has 2x as many cores. Are you really gaining that much density with a 1U if you can fit 2x as many cores and 2x as many add-in cards in the 2U system?
Yeah, a 4u that you can stuff full of consumer parts and large fans... will be quiet and expandable and will run the entirety of a homelab for years. Build something fast that can idle down to low power... and virtualize/containerize the heck outta it!
I always like the challenge of a 1U form factor that doesn't scream. I used a small 1.5U server case as an enclosure to fit in a Pelican 1490 briefcase with a monitor mounted in the lid; I was still able to use an SFX PSU if I'd wanted to, though case availability is much lower at that size.
I'd look at doing a Proxmox cluster out of prebuilt NUC form factor computers (using USB-C [40Gbps?] as the cluster networking). See if you can get similar performance for around the same price with less noise/power/space etc.
Great video, keep it up bro!
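For anyone curious about that USB-C cluster-network idea: on Linux boxes with Thunderbolt/USB4 ports, the `thunderbolt-net` module gives you IP over a TB cable, and Proxmox can then run its cluster traffic over that link. A rough sketch follows; the interface name and addresses are assumptions and vary by machine.

```bash
# Sketch: point-to-point IP over Thunderbolt between two Proxmox nodes.
# Interface name (thunderbolt0) and addresses are assumptions.
echo thunderbolt-net >> /etc/modules   # persist across reboots
modprobe thunderbolt-net

# Node 1 (node 2 would use 10.99.0.2/30)
ip addr add 10.99.0.1/30 dev thunderbolt0
ip link set thunderbolt0 up mtu 65520  # thunderbolt-net allows a very large MTU

# Then build the cluster over that link:
# pvecm create mycluster --link0 10.99.0.1    (first node)
# pvecm add 10.99.0.1 --link0 10.99.0.2       (joining node)
```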
The 8/4/4 adapter is useful in many ways, and it's also good on AM4 APUs.
That’s actually a pretty sweet build. The number of drives you’re fitting in is pretty dope.
You’re pretty dope
Think I'd try an add a fan in there. Even just a small 40mm, one or two in the front grill. Some air flow thru the case will go a long way.
5:54 😂 “Turbo nerd shit”
Nice little project here. Thanks for showing us the build.
There's always going to be compromises in a home-built 1U. I think you did a great job in addressing those, dude! Perhaps a bit of creative 3D printing will be able to support that dual 25G NIC? Overall you nailed that 99% of the way :)
lol yeah I REALLY need to invest some time into learning some 3d design
Nahh. It's a flat piece with a few tabs bent over.
Mock it up with some cereal box cardboard and then cut it out of a scrap of sheet metal.
You can get a square big enough to make 2-3 of them from your local hardware store for $3 if you don't want to use a random piece of scrap.
3D printers are for art and prototyping.
Do not print flat sheets with them.
Smart build, that's a great use of bifurcation on this chipset. I have the same case sitting idle for about 3 years now... Hmm
Only Brett is stubborn enough to make this all work. Nice job!
Compromise 1 - Expensive as balls.
Big tru
@@RaidOwl but who am I to talk. I spent like 5-7 grand upgrading my home network to enterprise level 10gig to edge with whole home wifi 7 - so carry on you magnificent bastard and we will keep watching in awe and appreciation.
@@DenofLore Yeah, but you won't have to make another purchase like that for like, 8-10 years. Great investment imo.
As an "all at once" cost maybe.
On the other hand, look at what an all SSD NAS with inferior specs and lots of compromises costs. 😂
How expensive are balls these days? I got mine for free
Love the video, so good to get some useful advice about a realistic home build.
Spent way too much time trying to find that board support list. In the future, please link stuff like that in your video or in the description. Much appreciated!
This is very cool, definitely worth a follow! 💯
Great vid! I'm speccing out a very similar build now. The only difference: I want ECC memory and a faster CPU.
I'm currently deciding between three mobos:
Gigabyte X870E Aorus Master
Asrock X870E Taichi - Level1Techs has a great video.
Asrock X870E Nova WiFi - has 5(!) M.2 slots, 3 PCIe slots, 5Gbps onboard, and at $330, it's one of the better values in the line.
I like how your rack chassis looks like unifi gear.
Take a look at some of the X670 or X670E boards; they share a very similar feature set, just lacking some things like USB 4 for the most part. You can get the same amount of NVMe space. Unless you really need features like 10G networking or a 5G RJ45 port, there's no reason to look past B650, honestly. Most of the VRMs are overkill, the E variants especially.
@@xiraijuakara5988 Thanks! 10G networking via the chipset would be the only benefit for me. But even that doesn't work, because I think none of the 10G RJ45 mobos support ECC memory! Dang.
Now I'm between B650 or Threadripper. lol
My dude, undervolt. You’ll be thankful over time. It’s so easy to do.
I've had lots (unfortunately) of builds with problems similar to what you found with the ConnectX-4. Protip: use some zip ties or some twist ties. It's not crucial by any means, but I've found that accidental bumps happen, which can halt/freeze the system and in some cases break the card.
Of course it's not Crucial, it's Mellanox/nVidia...
Great video as always fam. Quick question. What is that 48 bay NAS in your rack? I was looking for something with that many bays in the front. Thanks (UNAS Pro Enterprise) lol
InWin IW-RS436-07
I really like the idea of 1U servers even with the potential increase in cost in favor of more space. I have, however, been buying the cheapest 2U server cases I can find (usually $40-50 on Reddit). For storage I go with a big boi server because space is king.
What a coincidence, today I was planning a 1U NAS and followed an AM4 path to the exact same Dynatron CPU cooler, complete with Amazon reviewers warning about having to chop it with a Dremel to fit your motherboard.
My challenge is trying to get an ECC system with 10GbE that idles under 20 watts. Due to the surge in electricity costs in many western nations, homelabbers have swallowed up the global supply of low-power Fujitsu, Supermicro, and ASRock Rack mini-ITX server motherboards that support ECC.
I just built almost the exact same system, in a 2U. At idle the 7600 with a Noctua NH-L9a sat at around 51-52C; a -25 undervolt dropped it 15 degrees. I added an A310 for transcoding though; the system sits around 60-65W with a couple transcodes and direct streams in Plex.
How much power does it consume when idle? I was planning to build one of these, but 60-65W? That costs crazy money in Australia 😢
@@fandywinata254 if you aren’t running multiple transcodes at once get an N100 PC. They run under 25w at full load. More than capable for the average user. I’m in Aus too btw. My build in the comment above will cost no more than $500 a year to run in electricity.
Very cool build. I've been meaning to do something similar with the same CPU, as I bought a lot of 5 B650 motherboards off of FB Marketplace for $60 and got 2 of the boards (which just had bent pins) working, with this CPU as the test subject lol...
Are you planning to make one volume on the NVMe drives and another on the SATA drives? I know Unraid can do volumes spanned across different types of disks, but I wasn't sure if Proxmox could do it too.
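For what it's worth, Proxmox can't pool mixed disk types into one Unraid-style volume out of the box; the usual pattern is separate ZFS pools per media type, each registered as storage. A minimal sketch, with hypothetical device paths:

```bash
# Sketch: one fast pool on the NVMe mirror, one bulk pool on the SATA SSDs.
# Device paths are placeholders; check `lsblk` first.
zpool create -o ashift=12 fast mirror /dev/nvme0n1 /dev/nvme1n1
zpool create -o ashift=12 bulk raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Register both pools with Proxmox as VM/container storage
pvesm add zfspool fast -pool fast -content images,rootdir
pvesm add zfspool bulk -pool bulk -content images,rootdir
```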
You are not a home-lab unless you are undervolting... 😂 Just kidding, cool build!
lol yeah, kicking around a video idea of undervolting everything to see if it's worth it.
@@RaidOwl Great vid idea! Also some pointers for us noobs to undervolting would be great. How to know when you've gone far enough.
Also, what's the power savings at idle vs high load? Thx m8.
@RaidOwl It's a bit of a lottery, as results depend on silicon quality. It tends to work better with higher-end parts, as they typically use better-binned chips.
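Since it is a lottery, the only honest way to validate a curve-optimizer offset is a long soak test while watching temps and clocks. A rough sketch, assuming `stress-ng` and `lm-sensors` are installed: if clocks at the same temperature rise versus stock, the undervolt is paying off; any crash or machine-check error in the logs means back off a few counts.

```bash
# Sketch: crude undervolt stability soak (assumes stress-ng + lm-sensors).
stress-ng --cpu "$(nproc)" --timeout 60m &

# Watch CPU temp and effective clocks while it runs
watch -n 2 'sensors | grep -i tctl; grep MHz /proc/cpuinfo | head -4'

# Afterwards, scan the kernel log for machine-check errors (instability)
dmesg | grep -i mce
```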
That's a nice server!
That's a nice comment!
Hah - I love ghetto servers! I followed along on the original build you did in the case and adapted it for my own scenario. I went with some used Supermicro D-1521 boards (yeah yeah, CPU etc.), replaced the stock InWin fans with some quiet Noctua fans (fine for the low-power Xeons), and dropped in 4 Intel enterprise 4TB SATA drives, a 1TB M.2 boot drive, the Intel ARC A310 Sparkle card, and 128GB of ECC RAM. These are super as low-power Ceph storage nodes; they run Plex with transcoding perfectly, Pi-hole, and a few other low-demand LXCs under Proxmox. I wound up building three of them, and total power draw is around 120W when they're all running heavy I/O loads. Thanks for the tips! Total cost for the trio of servers (minus the storage) was under $1200. I did find a great used deal on the SATA SSDs tho, all less than 2% wear, about $180 each.
That seems like a very nice system. What’s the idle consumption?
@@KS-wr8ub Idle is harder to determine, since they aren't ever really idle with Ceph. I've seen them down around 35W each when I was setting them up. The whole rack hovers around 220W, and that's the three servers, another Proxmox TerraMaster F8 Plus, an IP-capable power distributor, a 10GbE 8-port switch, and probably a few watts of parasitic draw from the UPS.
Nice video, you just gave me an idea... I should 3D print a rack, and then 3D print some 1U or 2U chassis.
I have no idea why this popped into my head lol
PSA: Always, always, always wear safety glasses (or some form of eye protection) when using a Dremel.
Yep 👍🏼
I hope you slapped a fan on top of that 25GbE NIC to keep it cool, because otherwise you'll get transmission errors.
Unless the case has fans blowing directly on it, it will overheat with moderate traffic.
I have some little 40mm Noctuas to zip tie to it
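If anyone wants numbers rather than vibes before and after strapping a fan on, ConnectX cards will report their own temperature. A sketch of two ways to read it; the MFT device path is a placeholder, so list `/dev/mst/` after starting `mst`:

```bash
# Sketch: read a ConnectX NIC's temperature.
# Recent kernels expose an mlx5 hwmon sensor:
sensors | grep -iA2 mlx5

# Or with the NVIDIA/Mellanox MFT tools installed:
mst start
mget_temp -d /dev/mst/mt4117_pciconf0   # placeholder device path; see /dev/mst/
```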
Personally, I went with a Ryzen 5 7600X (it was the same price as the 7600 where I live), an ASUS PRO B650M-CT (this was before all the warranty drama...) and 2x32GB of cheap Crucial RAM. Ended up housing it all in a Silverstone 2U chassis.
Couldn't be happier! (Other than idle wattage, maybe? It pulls around 40 watts with all the useless stuff disabled in the BIOS.)
Looking forward to this server's stability update :)
After being disappointed by the ASRock Rack B650D4U, I'm now on the Supermicro H13SAE-MF, which not only has IPMI but also 3 PCIe slots and audio connectors. Pretty nice board for an AM5 server, especially with 192GB of RAM.
1U is generally a bad choice for a quiet server, to be honest, and most people have plenty of rack space and can snag a 4U chassis without much trouble, aside from them being more expensive than what you have here.
Which really is the main upside: you can get modest compute in a rack for cheap and have it not sound like a hairdryer.
Maybe 3D print a lip you can slide to the side of the card and that has an adjustable screw underneath? Kind of like GPU support, but with a way to "hug" the network card.
Yeah I would’ve gone 2U for more flexibility of PCIe attached devices. I’m building one at the moment and will go Intel i5 for the iGPU for Plex as a backup, an A380 for my daily driver VM, and the rest will be Proxmox VMs and containers. Still should be pretty low power even with 2 GPUs for my use case. I have separate machines for work, editing, and different types of gaming. I have another Proxmox server that is just the same thing running as backup, but no homelabbing as I want to keep it reliable. This one is just for my daily driver and messing around with mostly ephemeral VMs and containers.
I see a very slim NAS here, if you replace the 2 NVMe drives with SATA expansion cards in M.2 format. But that's the great thing about open platforms: you can do almost anything that comes to mind.
I think you and @HardwareHaven should have a 1U server build challenge
Nicely done! 👍🏻
"Who the hell needs 100 gig networking in their homelab..."
Been rocking 100 Gbps Infiniband in my basement since 2018-2019-ish timeframe.
Just finished two CFD runs which took about 65000 seconds each (about 18 hours apiece, against 107.4 million elements/cells).
Really really nice build.
The best server is the one that fits your use case. Everyone has a different use case. My perfect server is the one I have now, and it blows yours out of the water for my use case.
Good
"You cant beat that" Hold my beer! You could have purchased a used R630 for around $160 shipped to you, added a video card and had 8 2.5 drive bays, add some ram, and some beefier cpu's and call it a day, all in a 1u FF with dual Power supplies, dual cpu's etc.. Nice little project none the less, and it includes IPMI.
Now do the power draw
@@RaidOwl Right now, with 3 VPS servers at idle, it's 70 watts. Granted, they are not under load, but when I ran the tests it pulled at most 150 watts at peak. Also no video card, so deduct that as well.
Cool, sounds like a solid setup
I thought you were gonna trim off some of those transistors 😂 💜
lmao
Got the exact same adapter for my ConnectX-4. Could not stand the wasted PCIe lanes, especially on an ITX board.
PCIe riser card 😎 you just solved so many problems for me 🙂
I don't see dedicated cooling for that U.2 drive. You are going to have a bad time unless you fix that. I jumped into the world of enterprise U.2 stuff this year, and the biggest issue I had (besides the expensive-ass cables...) was needing to actively cool every drive. You can get away with just case airflow for certain brands if you are letting them sleep most of the time; Micron and HGST/WD are not those brands... I'd bet if you look up that drive it pulls 20-25W of power while "active" (and 10-15W while "idle"). It will absolutely kill itself without active cooling. And not a gentle, quiet Noctua 40mm boy either; I'm actually not sure you can do it quietly with 40mm fans unless they scream.
Nah, that's where I cook eggs.
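Egg jokes aside, the drive will tell you whether it's actually cooking: NVMe devices expose composite and per-sensor temperatures over SMART. A quick sketch using `nvme-cli`, assuming the U.2 drive shows up as `/dev/nvme0`:

```bash
# Sketch: watch the U.2 drive's own temperature sensors under load.
# Assumes nvme-cli is installed and the drive is /dev/nvme0.
nvme smart-log /dev/nvme0 | grep -i temp

# Sample every 30s during a sustained transfer to catch the worst case
while true; do
  printf '%s ' "$(date +%T)"
  nvme smart-log /dev/nvme0 | grep -im1 temperature
  sleep 30
done
```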
Funny, I just got an ASRock B650M-HDV and an AMD 7600 to upgrade my NAS. But I did go with a 2U case from SilverStone, the RM43-320-RS. Maybe a review of your old servers, to see what was going wrong with them, would be interesting.
Hey, I have one of those TeamGroup drives. They are surprisingly not that bad.
That's a sexy build and I wanna copy it, but dunno about Dremel-ing down the heatsink. Already have a B650M PG RIPTIDE, 7900, 64GB RAM, 2TB NVMe, dual 10Gb SFP+ NIC, quad 2.5 GbE NIC, and an HDPLEX GaN 250W PSU sitting around in a 2U Rosewill case doing nothing.
Yea def go 2U...so much easier.
A cheaper way to go about this sort of build is with an X470/570 board and something like a Ryzen 5700G or even better, 5700GE. The GE is a 35W SKU with slightly lower burst clocks and it's an OEM-only processor, but it can do low power and at least with X470+, you aren't limited at all in your I/O options; I have 40Gb Infiniband and a Tri-Mode HBA on mine.
4:24 You could have put a big tower cooler there and just drilled a hole through the chassis, no? xD
No cuz the whole point is to keep it 1U since the idea is to have 3 of them stacked.
Not a mix of hardware that I would have chosen for myself but my requirements differ. Honestly for a 1U pizza box that thing is pretty damn good. I'd be interested to see how stable everything is bifurcating that PCIe slot with all of that burst bandwidth hogging drive/ethernet stuff. I hope we get a followup video showcasing what you land on as far as software load.
Yep there is no such thing as the 'perfect' server since everyone's needs can be so different.
I have these same cases but using an AsRock Rack ROMED4ID-2T motherboard and some Epyc 7F32 CPUs. For cooling I'm using a Dynatron L18 AIO and it's actually pretty quiet. I'm using some dual QSFP ConnectX-3 Pro cards (these are routers) but I'm thinking of building another in this case with the same motherboard but an Epyc 7402P and an RTX 4000 Ada Generation with the single-slot mod.
You could have set a negative curve in PBO, undervolting your CPU by around 30 counts (~100mV). That helped me gain performance instead of losing it, since my 5600X no longer overheats like before and can maintain its turbo boost longer.
One more server.. next one is the perfect one..
Geez, I wish there was a chassis like that available that didn't cost as much as a CPU and motherboard combined.
About the $190 for that case and PSU you "can't beat": you can beat it really well, actually. I have a refurbished Dell PowerEdge R630, 10 cores per CPU and dual 16-lane PCIe, and it's 1U with 8 2.5" drive bays, all for $180 including shipping, on Amazon of all places. But it does drink power.
Turbo nerd is 100GbE or why even bother. Dual 25GbE... pshaw... freaking piker! :)
I thought your previous cluster was good. I was thinking of building one as well; what was wrong with it?
Unstable
It's got character, yeah
While watching this I went to the 'Egg to see what they had in stock for CPUs. They had an open-box 7700 for $219. 2 more cores for 10% more. That'd have been handy perhaps, but probably not needed. But still!
Do you have a link to the 90-degree PCIe riser card?
Potentially dumb question: Would a JetKVM or something similar offer me anything useful if the server doesn't have an iGPU or dedicated graphics card? Running a Ryzen 5600X in my home server, and I would be satisfied if I just got a console or something I could pop into. Not sure if even that's possible though.
Nope that wouldn’t work
@@RaidOwl appreciate the response! You don't know how long that question has been rattling around in my skull and Google just turned up nothing. I can finally rest lol.
Hi mate, can you help me make a decision... HP Z4 G4 vs Dell T5820, which one is worth buying in 2024?
Out of sincere curiosity, what benefit do you get using the KVM vs remote desktop?
It lets me access the BIOS and install other operating systems.
I like that you always try to offer some different perspectives, including refurb options.
What riser card did you use for this build? I got the PCIe card, but you don't have the riser as part of the parts list.
Just needs some EDSFF SSDs on the front... I like this form factor for SSDs.
Would you consider unregistered ECC memory worth the added price for this server?
ECC is cool but not worth it to go out of your way for
You're not really homelabbing unless there is a Dremel involved. Looking at you Hardware Haven...
😂😂😂
For 700 dollars I would just run something like my personal all-time favourite, DL360p G8s, and maybe even G9s.
Do Mellanox NICs have issues with Proxmox?
I've never had issues, no
Are you having any problems/issues setting up or using the AMD iGPU instead of an Intel iGPU for transcoding?
Not natively but I haven't tried passthrough yet
Unfortunately AMD is like...a decade behind Intel when it comes to encoders...
Heck even Nvidia is way behind Intel.
QuickSync really is THAT good. It sucks that Intel bifurcation options are usually lacking. It makes this kind of thing so much harder to build.
@@RaidOwl Thanks for the reply. I'd be interested to know if it works in a Proxmox VM running Docker for things like Jellyfin/Emby and Frigate. Great video and setup btw.
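A quick way to answer that before committing to a full Jellyfin stack is to pass `/dev/dri` into the VM or LXC and check whether VAAPI encoding works at all on the 7600's iGPU. A sketch, assuming `vainfo` (libva-utils), Mesa's VA-API driver, and `ffmpeg` are installed inside the guest; `input.mkv` is a placeholder file:

```bash
# Sketch: VAAPI sanity check for the AMD iGPU inside the guest.
# Requires /dev/dri passed through to the VM/LXC.
vainfo --display drm --device /dev/dri/renderD128   # should list encode entrypoints

# Test a hardware H.264 encode and throw away the output
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -hwaccel_output_format vaapi -i input.mkv \
  -c:v h264_vaapi -b:v 4M -f null -
```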
I added IPMI to my server last year with an ASRock Rack PAUL add-in card, since my current server is fully headless (5950X). No GPU to even connect to.
For an additional 30 USD, you could've gotten a Minisforum BD790i SE, which has a Ryzen 9 7940HX 16-core/32-thread CPU, for 329 USD. It supports bifurcation also.
Oooo good call
Why not 8600G? 7600X and 7700X have pretty high idle draws
PCIe 4.0 and fewer PCIe lanes.
I'm not a computer person, but I need to know how this would do as a Llama 3 AI system if I wanted to have that and only that running. Any advice?
Home servers shouldn't be 1U. Just makes them loud, expensive and inconvenient for home use.
Just use 2U, and all of a sudden all your standard parts fit.
What idle power draw do you get with the system?
In Proxmox, around 45W.
May I ask what the average power consumption of this setup is?
Around 45W in Proxmox.
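For anyone chasing that 45W lower, the usual software knobs are worth a pass; measure at the wall before and after, since results vary a lot by board and BIOS. A sketch of common steps on a Debian-based Proxmox host (note that `powertop --auto-tune` can make USB peripherals flaky, so test afterwards):

```bash
# Sketch: common idle-power tuning on a Proxmox host.
apt install powertop linux-cpupower

powertop --auto-tune                  # enable runtime power management across devices
cpupower frequency-set -g powersave   # prefer the power-saving CPU governor

# Check whether PCIe ASPM is active; NICs/NVMe often block deep idle states
cat /sys/module/pcie_aspm/parameters/policy
```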
I love 1U builds…..
Would the little GPU fit in a 2U without the adapter?
Yeah it has a half height bracket too
"Best" is relative, It all depends on what you are wanting to do with it I would say it is VVEERRYY good bang for the buck.
Haha yeah but “best” sounds better in the title/thumbnail
What about the fan noise?
I will never transcode videos on my 1U rig: it's loud, it's hands-on, and it's expensive. But enjoy it!
ASRock Rack does the same board with IPMI, but surely for more than 69 bucks 😅
Is it the prettiest server in the rack? Yes it is.
Kinda curious what this system would do as a pfSense box as well 😅😅😅
Could def fit a quad 1G NIC 🤔
AMD GPU and Plex?
6 hours later and it reached US$1,014,887 on Kickstarter.
But Brett! This isn't the greatest server ever made. That one would have had me write a better comment, as I'm only writing this to boost engagement. 😅
The price is acceptable, but 60W at idle is too much.
All that effort, but you didn't buy an ECC-supporting MB :(
I don’t need it
@@RaidOwl Yeah, it's fine for 99% of homelabbers. But it'd be an even better build for people like me who require ECC but like to build their own low-power server!