One day apart in 2 videos: Linus "I want all my LAN-PCs in 1U, so I don't waste 1 rack-slot" - Wendell "1U is dead cause 2U is more efficient" XD
Enterprise server vs bespoke gaming chassis. However, I'm not sure why Linus didn't just get a second rack, move the networking equipment, do five 4U chassis, and avoid the headache. Those are easy to obtain and let him run more common components.
In fairness, Linus isn't using the chassis fans for any significant amount of cooling, so that negates half of the reasons Wendell suggests that 1u is dead.
Linus isn't a technical person.
@@wiziek Nowadays he basically outsources all of the knowledge and throws either his money or his influence at the wall.
@@handlealreadytaken His ideas were unsustainable and ended up at
"I need 1 rack per computer", which pretty quickly devolves into an explosion of racks...
Prob best not to buy a new rack every time he has a new idea :P
@1:39 the 1U servers aren't dead, they're just huddled together in 2U chassis for warmth.
In a nutshell: 1U chassis is dead, long live 2U chassis.
Having spent some time with multi-node chassis-based systems like this, my vote for a collective noun for a group of servers goes to "a cacophony."
Alternatively: "A tinnitus of servers".
I am getting such a kick out of these replies
How about a "whatt?!" Because you can't hear anything over the fans
A *MULTIPLICITY* of Nodes/Servers ?
"Nuisance" or "Pain in the Ass" sound about right for when you have to troubleshoot them. For those rare times when everything is okay? Hairdryers is already taken by some GPU's. And in US English IDK any short word for Vacuum cleaner. But when you have whole rack of them you certainly need some protective platforms, like on Aircraft Carriers, when jets are taking off. When those fans spin up on every unit at the same time, you do have most important building block of Wind Tunnel. And yes - there are Wind Tunnels (or at least Wind Simulators) that use a lot of PC fans, so that you can control the flow and strength of the wind with good granularity and create uneven Wind to simulate for example Urban environment.
Legend has it headphone users ears are still bleeding.
A. Scream of servers?
A gaggle of those servers would certainly murder my power bills, and my ear drums.
A racket of servers. A reference to the fact that they're in racks but also to the noise.
Finally, more server content! Please make them more frequently!
We've had a few multi-node chassis from Supermicro running for several years now. Mainly 2U quad-nodes (I believe TwinPros).
While having multiple nodes packed so densely in a single chassis is great, it comes with a major downside:
The nodes often share a single backplane (which is partitioned). So if you have a failure there, you are screwed. Additionally, if you have an issue with an onboard controller, you are screwed as well: you need to replace the whole node, as you cannot simply install a backup RAID card / HBA.
While yes, these things are great, you should be aware of the downsides of some of these models. Ours always ran great without any issue until I bricked an onboard controller - after half a day, and many tries, I was able to recover it, but it made me very aware of the downsides :-)
@Chloiber Do you know if server racks with shared PSUs and cooling fans exist, to centralize components? Maybe one standard-height rack with two nodes per U and three or five shared PSUs. For further energy optimisation the systems could be liquid cooled and the rack could be powered by 400 volt direct current.
Everything is built in these days. You're lucky if you can replace a processor or memory. (And now there's Stupid(tm) to prevent changing the processor.)
Thoughts on doing walk-through of your data center / "server room"? Would be interesting to see what you're running for day-to-day.
I get that it's not really your thing, but I think many of us would be interested in seeing builds like these that are optimized for energy efficiency / low noise instead.
(Is your avatar Lain with a crown of roses??)
Also yes, I would like to see that. I think a bunch of us (maybe even the majority?) don't have a noise-insulated server room at home!
The server room is built to contain the sound. They don't care how loud the servers are as long as vibration is controlled.
@@jmwintenn sort of true, but systems that need fans running at full speed constantly spend a lot of power budget on cooling and not computing.
@@morosis82 Well, having the fans at 100% all the time makes no sense, whether for power efficiency or for wear, especially on the bearings. When Wendell entered the server room, you could hear one of the servers constantly cycling back and forth between two fan speeds -> not full fan speed.
When the "new" one gets turned on, the fans spin up to full speed (PCs do that, too) and then reduce that speed after successful initialization.
For fan speeds in general: a certain minimum fan speed is necessary so the fans can spin at all. I've never seen a 10k RPM fan be able to spin at 1k RPM. (1U server fans can go up to over 20k RPM.)
The combination of density and heat production makes such loud and truly "moving" fans necessary.
@@bernds6587 Unfortunately Supermicro doesn't have good fan curve controls... because they don't care.
I had to write an IPMI hack script which does it on our NVMe server because they offer no customization.
Their solution is "Oh, it's 1C over threshold? Time for 100% fan until it's cool enough and then back to 25% for 5 minutes" - way more irritating than keeping the fans a little higher and holding steady.
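For anyone curious what that kind of workaround looks like: below is a minimal sketch of such an IPMI hack script, assuming an X10/X11-era Supermicro BMC that accepts the community-documented raw commands for fan mode (0x30 0x45) and zone duty cycle (0x30 0x70 0x66). The BMC address, credentials, sensor name, and temperature thresholds are placeholders, and none of this is an official Supermicro API - verify the raw commands against your board generation before trying it.

```python
#!/usr/bin/env python3
"""Rough sketch of a Supermicro fan-duty override via raw IPMI commands.

Assumptions (not an official Supermicro API):
  * X10/X11-era BMC accepting the community-documented raw commands
    0x30 0x45 (fan mode) and 0x30 0x70 0x66 (zone duty cycle).
  * ipmitool is installed and the BMC is reachable over lanplus.
"""
import subprocess
import time

# Placeholder BMC address and credentials.
BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.10", "-U", "ADMIN", "-P", "secret"]

def raw(*args):
    # Send a raw IPMI command and return its stdout.
    return subprocess.run(BMC + ["raw", *args], check=True,
                          capture_output=True, text=True).stdout

def set_fan_mode_full():
    # Mode 0x01 = "Full": disables the firmware's own curve so our duty sticks.
    raw("0x30", "0x45", "0x01", "0x01")

def set_zone_duty(zone, percent):
    # Zone 0 = CPU/system fans, zone 1 = peripheral fans; duty is 0-100 (%).
    raw("0x30", "0x70", "0x66", "0x01", f"0x{zone:02x}", f"0x{percent:02x}")

def cpu_temp():
    # Parse the "<name> | <value>" output; sensor name/format may vary by BMC.
    out = subprocess.run(BMC + ["sensor", "reading", "CPU Temp"], check=True,
                         capture_output=True, text=True).stdout
    return float(out.split("|")[1].strip())

if __name__ == "__main__":
    set_fan_mode_full()
    while True:
        t = cpu_temp()
        # Gentle steps instead of the firmware's 25% <-> 100% flip-flopping.
        duty = 35 if t < 55 else 50 if t < 65 else 75 if t < 75 else 100
        for zone in (0, 1):
            set_zone_duty(zone, duty)
        time.sleep(30)
```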
a whole restaurant of servers ?
I'll show myself out
You got a gen-u-wine laugh out of me!
it's always nice when Wendell is excited about something
A serfdom of servers 😅
Also, I hope you had hearing protection while in your comms room! That node was SUPER loud!
Wendell, my first thought on this was "huh, that kinda looks like a horizontal blade setup". What are your thoughts on that comparison? Are blades going to make a comeback?
1u never made sense to me for the reasons mentioned for going 2u in this video. Take it to its logical extreme though and you're back to blades of some sort!
It arguably could have made sense in some extreme circumstances back when Intel was limiting everyone to 4 cores per socket. For customers looking to run a couple of hundred or thousand cores it could save them the cost of building a new physical space. But that was quite a while back now lol.
Everything old is new again.
"Definitely think you'll find that appealing"
god fucking dammit😂
Do you think eventually we'll move to 4U equivalents? With that, one power supply failure would still leave 3 PSUs for 4 systems, which would proportionally offer more power per system and provide redundancy even with one unit down. They could also use fans that are larger again.
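Roughly quantifying that point: here's a back-of-the-envelope sketch. The 1600 W PSU rating is purely an illustrative assumption, not a spec from the video.

```python
# Surviving PSU capacity per node after one PSU failure, assuming each supply
# is rated at 1600 W (illustrative figure) and power is shared evenly.
PSU_W = 1600  # assumed rating

configs = {
    "2U, 2 nodes, 2 PSUs": (2, 2),
    "4U, 4 nodes, 4 PSUs": (4, 4),
}
for name, (nodes, psus) in configs.items():
    surviving_w = (psus - 1) * PSU_W
    print(f"{name}: {surviving_w / nodes:.0f} W per node with one PSU down")
# -> 800 W/node for the 2U pair vs 1200 W/node for the 4U quad:
#    losing one supply hurts proportionally less as the shared pool grows.
```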
9:42 You can feel the current limiting making the fans start up slowly! Beauty!
I know I'll probably never get to work with anything like this, but it's still fun and interesting to watch. It's like I'm on a field trip to a data center and the technician is trying to make everything fun and engaging for the students :D
You may not be able to afford this but used enterprise stuff can be had extremely cheap and you can have almost as much fun. ;-)
Some of the older x10 platforms from Supermicro are getting somewhat affordable these days, the twin family of servers aren't crazy anymore.
Both HPE and Dell sell a lot of servers in the 1U form factor. For example, the HPE ProLiant line has a lot of cheaper 1U configurations like the DL325. No, it's not used in a datacenter, but there's a huge use case for racks outside of a datacenter. Enterprise customers need racks but they don't have an entire datacenter. 1U is not dead at all in the SMB space.
Every time I see a new upload, I'm excited. I can't say the same about ANY other channels on YT. I love what you're doing Wendell-never stop!
Wendell doing the sillies when he's excited 🙂 Love it! Also the plural of servers should be a sounder of servers (a group of wild boar is called a sounder) because they make such a racket!
So Server Cadres based around 1U Servers are going the way of the Dodo and instead we'll have some sort of Irish based Server Cadre Datacenters around "U2" nodes :P
Big agreement - a 2U chassis with 2U redundant PSUs and a full 2U cooling system combined with doubled-up 1U internals makes much more sense for space utilization and redundancy.
It also comes down to whether you are single-tenant or multi-tenant and how the SLAs are structured. Those 1Us are damn cheap, we swap them out like underwear :) They are also very interesting if the stuff you run doesn't need a lot of compute, like webservers and such. For database servers you are usually running a 4U server since you need the PCIe slots.
9:41 Sounds you don't want to hear when you are at the back of a messy rack. Happened to me last week when I was trying to clean up some old shit at the back of a rack and all of a sudden our Pure Storage starts sounding like a jet taking off as I knocked a PSU out :D.
This server just screams VDI at me.
Is there a spot for a fourth gpu? Frontier says it uses 4 gpus per cpu, is this the same chassis? Also, what is meant by "Frontier has coherent interconnects between CPUs and GPUs" -wikipedia, Are these interconnects physical?
What do you hear when you put your ear up next to a 1U server fan?
Nothing from then on.
I gave up thinking about dense 1U servers myself over a decade ago because I'd run out of power long before rack space in every cabinet. Even in this video you're not able to plug more than one of these into your circuit lol. So I standardized more on 2U setups for all the reasons you gave: fans for airflow, more room for storage and cards, or GPUs, etc. Plus it's easier to work on than some ultra-dense 2-servers-in-1U setup. Thanks for the video!
So we came full circle and blade centers are cool again?
Dang I was excited for Wendell to look one of the cray ex liquid cooled nodes.
I love how the sound of the fans comes together for a kind of screams of the damned from far away in old horror movies sound, very season appropriate. The hardware's pretty damned dope too.
banshee fans
Suddenly I wonder if there's a supercomputer or other cluster named "Banshee"
Is there a benefit to go even further with a "4U 4-Node" configuration? Or are there some diminishing returns after a 2U 2-Node config?
The returns are virtually fully realized with 2U because it gets you 89mm of height for decent-sized fans. 3U would get you 120mm, but servers rely so much more on pressure that going up from 80mm to 120mm fans would see very little benefit. Noise reduction would be most of it, and the industry has already come to terms with noise from racks.
3U or taller would get you full PCI card height perpendicular to the mainboard, but angled adapters and risers have gotten around that for a decade now.
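A rough way to see the flow-vs-pressure trade-off is the classic fan affinity scaling (flow ~ rpm x d^3, static pressure ~ rpm^2 x d^2). The sketch below uses illustrative RPM figures for typical 1U/2U/3U fan sizes; real fans vary a lot by model, so treat the ratios as first-order only.

```python
# First-order fan affinity scaling for geometrically similar fans:
#   flow     Q ~ rpm * d^3
#   pressure P ~ rpm^2 * d^2
# RPM figures below are illustrative assumptions, not measured specs.
fans = {
    "40 mm (1U, ~20k rpm)": (40, 20000),
    "80 mm (2U, ~9k rpm)": (80, 9000),
    "120 mm (3U, ~3k rpm)": (120, 3000),
}
base_d, base_rpm = 40, 20000  # reference: a screaming 1U fan
for name, (d, rpm) in fans.items():
    q = (rpm / base_rpm) * (d / base_d) ** 3
    p = (rpm / base_rpm) ** 2 * (d / base_d) ** 2
    print(f"{name}: relative flow ~{q:.1f}x, relative static pressure ~{p:.1f}x")
# The bigger, slower fans win big on flow but give up static pressure,
# which is why the jump from 80 mm to 120 mm buys mostly noise reduction.
```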
@Level1Techs serious question: why are we using PSUs in servers? We used to have rack or cage level DC power fed to the servers on DC busses. It was safe, centralised, efficient and could be triple redundant. It left 100% of the space in every server for doing work and every server could be yanked out for maintenance without affecting the others.
Wendell and LTT anthony should collab. Talk about general server stuff, linux distros and how to dominate the world.
That’ll never happen. The powers that be would never let that much nerd power collect in one room
That would be EXTREMELY cool. Hope it happens someday
I've also seen the side-by-side HP Left & Right GPU 4RU servers. Basically this is a change in blade chassis form factor and capital investment.
So this basically is the comeback of the BladeServer just on a smaller scale?
We still have a six-blade system from Intel in the basement for testing purposes, some features are really cool. Failed node? No worries, the chassis will automatically relocate the virtual drive to a spare blade and boot it back up, almost no downtime.
Blades share way more. This is just power and cooling being shared
@8:20 - "it's an older cord, but it checks out..."
Haven't blades been following this principle for like.. ever?
Isn't it just a cut down 2u style "blade server" box? Obviously the blades in this 2u are horizontal and the original blades were vertical (with 8+ blades) and if I recall didn't have space for a graphics card... but still.
That said, I guess if you put the thing on its side and made the "box" square and then had space for multiple "blades" you'd still not get any extra density because you'd still need multiple sets of redundant power supplies. As backplanes are much less of a thing now, with such high speed serial network cards, you'd also not gain much if you used some kind of backplane system either.
One of my past clients consolidated down from about 40 racks to 20 by snagging a few c6000 blade chassis and virtualizing a lot of their older hardware; 16 bays for servers per chassis in 10U of rack space is some pretty solid density. This type of 2-node setup probably makes more sense from an engineering perspective, but I always appreciated how scalable the blade chassis design was.
If you have a free bay to populate, or are upgrading one of the blades, you just plop the new one in and away you go. No need to re-rack or fiddle around with rails, re-run cables, etc. That said, it does suffer from the size restrictions of a blade chassis, which is even smaller than a 1U server, so fan pressure and the other issues Wendell raised are still a problem.
His systems are for massing GPUs. This little 2U thing is one of the few ways to do this without having to sell body parts. For you and me, who care about general-purpose computing, blades have been the way to go for decades. (But it does often mean settling for vendor lock-in, and once they know you're on the hook, the deep discounts go away.)
watching Wendell booting up a server being blasted by the air is like watching a kid in a giant candy store for the first time in their life :D
And here is Linus (LTT) just now building five 1u gaming systems ;)
To be fair a gaming computer doesn’t need redundancy or anywhere near as much cooling, which is what this video is about. Linus outsources the cooling to an external radiator anyway.
Linus’ new gaming computer is stupid for many reasons, and while the 1U rack case is definitely one of them, a 2U case wouldn’t have been any better. The issue there is insisting on stationary PCs in the first place.
The premise of the video was that he needed something unobtrusive for his children to game on. Instead of a server closet we know he won’t take proper care of, the solution is to just get them macbooks with thunderbolt docks instead. Plug it in at home and it’s a decent gaming rig, bring it to school and it’s a good study computer. With actually good parental controls. Unless you actually need a full-power workstation, desktop PCs are almost never the right answer today.
@@СусаннаСергеевна i know, the timing is just funny. one day linus is building five 1u gaming rackmount systems, and the day after there's Wendell saying 1u is dead :)
But of course it's two entirely different situations, especially since Wendell is talking enterprise, and Linus, as advanced as it may be, is still talking about home usage.
I have that same rack monitor, but some idiot cut the cord to the Monitor as well as the keyboard / mouse combo. the VGA was a PITA, but standard.... and I had both parts. The keyboard is not standard, and I am missing the connectors. I really wish I had a way to figure out the pinout because 8 wires seems like it should be 2 PS/2 connectors.
Reading your thumbnail, Linus is crying over his recent build. From a continent away I can hear "Why, Wendell? Why?" 🤣
Clicking the link and commenting here for your engagement. Cheers bud, keep up the great work!
Redundancy is everything; 7x HPE DL360 with dual 800W PSUs has been a lifesaver many times. EPYC 24-core, 512GB of RAM and 6x 1.92TB of storage in vSAN. No, 1U servers will live a long time. :)
No *modern* 1U server will live a long time. (I have plenty from the long long ago that still work perfectly. But they don't draw more power than my entire neighborhood.)
You always have great informative videos. Some are a little too complex for me, a non-IT guy. I now know I need a chassis (not rack mount) server and the server should have E1.S drives... maybe start with 6-7 TB drives... don't know where to buy.
If Wendell says it - I believe it. He might be wrong, but do I really care? It comes down to opinion, and his arguments are reasonable. That is all I care about. Please, Wendell, try having as many children as you can. We need more people like you.
Well, a server is a box, the plural of which is boxen. And two oxen are called a yoke. So that server could be a yoke of boxen. But I suppose for more than two it would be a herd. A herd of boxen.
box·en | \ ˈbäksən \
Definition of boxen
archaic
: of, like, or relating to boxwood or the box
@@AndirHon I prefer the Jargon file definition:
boxen: pl.n.
[very common; by analogy with VAXen] Fanciful plural of box often encountered in the phrase ‘Unix boxen’, used to describe commodity Unix hardware. The connotation is that any two Unix boxen are interchangeable.
Imagine how his mind will explode the first time he sees a BladeCenter.
Would be nice to know what kind of use cases we could use these servers for in 5/6 years when they get decommissioned and get into the hands of homelabs….
I have an old 4 node system that I use as a Virtualization cluster
I’m sorry in advance.
But can it run Crysis?
Floor space was the limiting factor a long time ago; now you can put together a board from off-the-shelf components, have it made in China, run it through a pick-and-place factory, and you'll get your custom board if you are really tight on space. Now power and cooling are the most limiting factors. Think a few years back, when you had to offer each and every customer a full server because virtualization wasn't a big factor. Now you run 100-400 virtual servers in a 2-4U unit. Before this you put as many FPGAs (those $10,000-$200,000 "CPUs") in one case as you physically could, and if you really wanted to run huge loads you could always press the real out button in Xilinx Vivado. Now you have access to virtual cloud F1 instances ($8,000-$50,000 "CPUs") and virtual cloud GPUs.
AHAH, jokes on you wendell, my 4U ATX compliant consumer grade server will NEVER DIE :D
7:41 From here on out you can hear the maddening sound of an SCP being nearby.
I just realized the music you use gives me "Contraption Zack" vibes, if you remember that game from the DOS days.
I paused @7:23 and accidentally discovered your next video's thumbnail. Editor Autumn, you're welcome.
That was actually one of the contenders for this video lol! Fun fact, all the thumbnails are created with assets from the video it is being made for. ~ Editor Autumn
The thing is, if you colocate and use a lot of power, it does not really matter if you use 1U or 2U; it's going to cost you almost the same, because the primary cost will be power.
If you have a colo or DC that allows delivering a lot of power to the rack, then it is not about optimizing cost, but rather just a quest for how many you can put in a single rack or a few adjacent racks, so they are all connected over a very fast network.
I rent a rack in Germany, and I am limited by power and network. I cannot put in more servers, because I do not have enough power in the rack, or ports in the switches. I even have a few empty units, because I am basically at the limit. I cannot switch everything from 1U to 2U, but if I can cram more into 1U by upgrading to higher density, and/or replace 2x 1U with a 2U that is actually more efficient, I will definitely do it. We use a lot of Kubernetes for compute, Ceph for storage, and a few hosts for virtualization (Proxmox).
2U dual-node is definitely more interesting than blade systems. Blade systems were always too expensive, requiring too much licensing and special setups. A hybrid like this, without an expensive chassis, is perfect.
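To illustrate the "power and ports bind before space does" point, here's a toy budget check. Every number (rack power feed, switch port count, per-chassis wattage and port usage) is an assumption for illustration, not the commenter's actual rack.

```python
# Which constraint limits the rack first? All figures below are assumed.
RACK_POWER_W = 8000   # assumed usable power feed for the rack
SWITCH_PORTS = 48     # assumed ToR switch port budget
SPACE_U = 42          # standard full-height rack

options = {
    "1U single-node": {"u": 1, "watts": 450, "ports": 2, "nodes": 1},
    "2U dual-node":   {"u": 2, "watts": 800, "ports": 4, "nodes": 2},
}
for name, o in options.items():
    by_power = RACK_POWER_W // o["watts"]
    by_ports = SWITCH_PORTS // o["ports"]
    by_space = SPACE_U // o["u"]
    chassis = min(by_power, by_ports, by_space)
    print(f"{name}: power allows {by_power}, ports allow {by_ports}, "
          f"space allows {by_space} -> {chassis} chassis "
          f"({chassis * o['nodes']} nodes)")
# With these assumed figures, power (and then ports) caps the rack long before
# the 42U of space does, which is exactly the situation the comment describes.
```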
I saw that!!! You didn't screw in the rail screws :P
If rack mount, is it "a scream of servers"???
I like the compute density, but that backwards mounting is a deal-killer for me.
Given how much of a 'rats nest' the rear of a server rack often is, I really don't think I want to deal with that every time I have a failure or need to do something with it.
So basically we're moving back to blade chassis?
Maintaining these will be a cramped and warm hot-aisle job.
Always thought "fleet" was already a thing for servers, though maybe a "flight" given they make so much noise you'd think they're going to take off any moment.
Wendell and Steve, sit down nerds the chosen ones are on screen
We've been using 2U for a while: 1U is for hardware and the other U is for making grilled cheese sandwiches, and the top is for hot drinks or a hot plate. The boss thinks we are always busy; yeah, we are busy running prime & disktest so the food cooks faster. LOL 🤣
4U with silent 120mm fans will be nice.
There's a bunch of cases on the market for this now! Some even support liquid cooling. Sliger makes some (expensive though).
7:40 THIS LOOKS AND SOUNDS LIKE AN INTRO TO A HORROR MOVIE
2U is honestly dead too; the datacenter I work with is moving completely to HPE Synergy 12000 frames. These can be configured with 12 blade modules, each hosting dual 28-core Xeons with up to 4.5 TB of RAM and a T4 accelerator card. Thus 10U will hold 24x 28-core Xeons, 54TB of RAM and 12 T4 cards. Everything runs on VMs, and in the networking of the unit there is zero trust between the internal machines.
If the size of the datacenter is a concern, though, they should be looking into 52U racks. Just doing that will increase the capacity of your site by around 25%.
A 1RU Intel server (thinking Dell PowerEdge 650) can have 2x 40 core Xeon Platinums, 8TB RAM, 3x T4s or A2s, and dedicated 4x 25Gb Ethernet. In 10RU, that's 800 cores (40 cores x 2 sockets x 10 servers), 80TB RAM, 30 GPUs, and 100Gb of dedicated networking per node.
Different scenarios and use cases call for different requirements. 1RU servers are not dead. 2RU servers are not dead. Blades are not dead. None of them should die - to help give you the ability to get a solution that best fits your environment.
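Putting the two comments' own figures side by side per 10U of rack space - this just re-computes the numbers each commenter quoted, nothing measured independently:

```python
# Per-10U density using only the figures quoted in the two comments above.
options = {
    "HPE Synergy 12000, 12 blades":   {"cores": 12 * 2 * 28, "ram_tb": 12 * 4.5, "accels": 12},
    "10x 1U PowerEdge (2x 40c each)": {"cores": 10 * 2 * 40, "ram_tb": 10 * 8.0, "accels": 10 * 3},
}
for name, o in options.items():
    print(f"{name}: {o['cores']} cores, {o['ram_tb']:.0f} TB RAM, {o['accels']} accelerators")
# Both land in the same ballpark on raw density; the real differences are in
# shared chassis, networking, licensing, and per-node serviceability.
```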
All I can think is that the Frontier supercomputer shares a name with the worst ISP I've ever had the misfortune of dealing with.
Thank you for the informative video,
Cheers mate
A "Banquet of Servers" maybe :o?
I can't be the only one who finds the tone of server fans very peaceful after they come down from full tilt and settle at that lower volume. I have fully fallen asleep sitting next to a full rack of servers with their fans at that nice low drone.
Well, that's why you don't sleep next to that thing, because all it takes is a heavy workload to wake you up in the middle of the night 🤣🤣
I fell asleep on a helicopter flight....
You can get used to anything over time.
2U has always been more efficient... a 2U fan can simply move more air - period. My former employer resisted this almost to their last breath. With two 150W CPUs in the box, their hand was forced. Originally, the only 2U boxes existed because that was the only way to get 2 power supplies, but there are plenty of tiny PSUs these days. (The system shown here _could_ be done in 1U, as there are 1kW 1U PSUs, but air cooling it would be difficult.)
(To do 1U for our systems would require a load of 15k RPM fans - $30/ea, not $3 - and they'd last a year, not 3-5. And they needed solid copper heatsinks, which were 100x more expensive than aluminum.)
HP C7000 has entered chat
Well, Linus will be gutted, he's just built a 1U home rig.
Ok but there are different use cases and needs for servers. We aren't just deploying multi-gpu compute units in the data center. I'm sure 1U will continue to be a thing just fine for a very long time to come.
I'd agree if you need GPUs in your servers, but that's still a niche use case. Otherwise, not much I see changes. People have been cramming super hot CPUs into 1U for a long time and they will continue to do so; nothing has really changed. Of course, that's assuming you don't just move to AWS or GCP.
It's not as niche as it used to be.
With the prevalence of multi node servers I agree that 1u servers are a dying breed. I am guessing you already covered the topic but do blade servers really have a purpose these days?
In short: the main advantages of blade systems are still relevant.
Shared redundant power and cooling.
Though blade systems also tend to add shared management as well as networking.
Whatever happened to the Dell chassis with 4 nodes in them?
This one server, in 2u of rack space has more compute power than my entire house with several servers and gaming desktops in it
Isnt this the idea behind blades?
Do they make these multi-node boxes in 3U or 4U sizes too, but crammed with 1U subnodes?
I figure 1U spaces will still be used in racks at the very least
*opens server room door* I can hear the children screaming!
This is definitely a "why didn't they think of this before" thing. Fans are why 3U is my favourite form factor for DIY rack-case builds. Unfortunately, 3U is kind of a rarity.
They have thought of this before, and at even greater density. Dell's current lineup includes the PowerEdge FX, which has 4 slots (half-width 1U blades), but the concept goes back a few years to the PowerEdge M-series.
Blade servers all over again^^
IMHO, 1U servers are a legacy from an era when the CPU and all other pieces used 150W total, with 24 PCIe lanes tops. Right now, 1U stuff is just left for networking and any nodes that don't have to go full bore, and the biggest ones will move to bigger chassis. IMHO 3U will be the next popular size, as it's a compromise between the two previous approaches, packed full of devices, either disks or GPUs. Something like mining racks, but standardized as plug and play.
Whatever happens, I will raise a toast to the death of those 1U-sized screaming monsters; let them burn in hell.
They are the same cables I have throughout my house :) Cool video :)
how about a service of servers
Yes, "Murder" is the plural for crows. Servers?: a "noise"?
Sounds like my colleague's laptop with a "couple" of chrome tabs open
Let me be straightforward: this uses something where I asked myself long ago why it isn't done... probably because the tech wasn't ready back then. But yeah, fewer separate PSUs can be a giant advantage. Then again, I am just a simple software developer, and in most cases I don't even know what actual hardware I am working on or what my work will be deployed on... so what do I know, right? Thinking about getting an older server to play around with at home, but the noise would need to be at levels agreeable to my neighbours.
Blade servers have been around for a long time. I guess you haven't heard about those? en.wikipedia.org/wiki/Blade_server
@@hidedollar5818 Still not quite the same thing overall, but yeah, somehow I spaced out on those, maybe because I've never seen one in person or done anything with one.
@@johanneskaramossov5103 What was shown in this video exists only because of demand, not because the technology wasn't there. Big Data, AI, autonomous vehicles, etc., which all demand massive amounts of compute power, have just blown up in demand.
And nowadays we have so-called "hyperscalers" (Google, Facebook/Meta, Amazon, Microsoft) which pretty much don't buy directly from companies like Supermicro; they buy directly from so-called "white label" manufacturers, which design and actually manufacture those servers and switches.
I mean, what is happening at those hyperscalers will happen, with a delay, in traditional datacenters. And that's how a Supermicro server like the one in this vid has come to be in L1's hands.
It is all about economics! If nobody needs it, there is no point in creating it.
It's of course a servitude of servers, but that is only when they are local, otherwise it's a cirrus of servers.
Plural of servers? A Ruckus.
If it is not water cooled it isn't a modern supercomputer node... Frontier uses water cooling, as do all modern exascale supercomputers.
Why not take it a step further and do the same with a 4U server? Or if power and cooling are density hogs, then why not build racks with power rails and fans on the front door?
A whole restaurant of servers?