I will get one for my homelab in about 10 years when they are used and more affordable 👍
We are going to have an EDSFF option for the homelab in a video later this month. Stay tuned for that.
Love the content on this channel, but can never afford any of the hardware. Now the HP DL360 G8's are finally affordable, 11+ years later after release :)
Patrick, this is so far from serving the home that after I watched it I removed all the mini pcs from my 42u cabinet, sat in it and cried. Beauty of a beast nonetheless!
I run three old Dell servers in my home lab. Between the three: 104 CPUs, 597GB of RAM, and about 7TB of drive storage, which doesn't include a PC running a 16TB storage array. (8TB available, RAID-1 config, upgrading when my animals stop being sick and I can afford another pair of drives.)
What I have, I think, is an already overkill home lab. (This hardware is just stuff that was bound for the boneyard when the company I work for was retiring hardware.)
I get how exciting this hardware is, what kind of performance you get, and the kinda "fun" you can get into, but the whole "Serve the Home" theme... who runs THIS level of hardware at HOME? Who can AFFORD this kind of equipment? I went looking for a price tag just to feel my stomach drop, only to find I have to contact retailers to get a price.
Also, when you want to review the hardware but have to lean on suppliers to give you "hot off the press" hardware... That's... That's not HOME use.
This channel should be renamed to "Serve the Enterprise" or "You can own this in 12 years"....
Most things covered on STH are nowhere near home pricing :)
@@BryanSeitz That's my point. This isn't serving the "home" audience by ANY stretch of the imagination. The name of this channel is very much misleading.
i will wait 10 years and then youtube will be like remember this video and i can finally afford one
i don't think so
I'm so used to the old studio's sound, something seems off
Agreed, the sound has a hollowness to it.
It should sound a bit different. We went from a Sennheiser MKH 416 sitting just off frame in the sitting set to using a lav mic here. This is lav mic #11 (DPA 4071) and it sounds at least somewhat better. We still need a bit more sound treatment, but the set is much larger and it is a different type of mic. The stationary shotgun microphone does not work as well when I am walking around and turning my head.
Yeah, I just compared the frequency response between the old and new studio using some past videos. For the new one, there is a considerable high-cut on frequencies over 1k; almost half the energy is attenuated over the typical vocal range compared to the old studio. The EQing on the lapel mic might be different, or maybe it's a different make/model.
Sounds like going from stereo to mono
I said this in the first video of the new set.
4:36 You are wrong; those two screws make it possible to attach EDSFF drives internally. We use them to attach those drives to a water-cooling cold plate in our custom servers. Also, externally we use them with latchless handles (they're held by friction from leaf springs pushing them onto the cold plate). I can give you more info if you're interested.
Sir, can you please tell me what these server machines are used for?
@@murtaza9669 Minecraft
Crazy that all that is packed into 1u, this single server could probably have replaced a rack of servers when I worked for a company that did web/cloud hosting some time ago.
Your comments at the end made me think of a cool idea for a video: Compare the performance of an entire 42U rack of dual socket Intel E5 v1 servers to a single 1U dual socket Epyc Genoa server. I honestly think we've come to the point where a single 1U server replaces the compute of an entire rack. Would be interesting to do the tests to find at which generation it's break even.
Exactly what I had in mind too while watching this video.
Replace the compute, sure.
But try getting 10+ PETAbytes of storage on that 1U.
Then try AFFORDING it.
There is still a place for larger machines - for at least a few more years.
BTW - you're not looking at a fair comparison anyway, the E5 v1 is a 10+ year old server CPU platform. At least compare to fairly CURRENT stuff.
@@bricefleckenstein9666 You seem to be missing the point of my comment. The entire premise is to find out in what length of time has the industry been able to compress an entire rack worth of compute down to a single 1U server. Storage density has increased as well, but it is a much more linear, rather than exponential progression, so is much less interesting which is why I didn't mention it.
It's very much possible, because going over the network is slow in comparison, so multiple servers is always a disadvantage.
Probably the best interconnect at the time was InfiniBand?
@@semosesam Then why not go back to the ORIGINAL Xeons of the Pentium II generation?
Why stop at the E5?
And you're somewhat wrong about storage density, I've USED the original ST-506 and that was a SMALL drive in physical size when it came out (14" was the norm at the time).
It gets worse than that - the MAIN memory on a Bendix G-15 (I worked on one when I was in college) was a magnetic DRUM that wasn't a ton smaller than a washing machine drum (about 3/4 the diameter of a typical apartment dryer drum, IIRC, or about half the diameter of the ones you find in most laundromats).
Held ballpark 4k or something tiny like that of storage.
And the G-15 did take up about the same size as a 42U rack, so the "compress an entire rack" happened a LONG time ago.
Would really like to see more short-depth rack servers. Custom or off the shelf doesn't really matter. Something that can fit in a typical home lab rack.
Next week’s video will be a lower power short depth server with 25GbE.
@9:04 That little sponge under the M.2 slots is holding a thermal probe sensor on a compliant flex PCB and sponge! So cool.
Always cool to see EDSFF drives in their natural habitat. I work in the storage industry and have a lot of hands on time with EDSFF, but only with specialized test equipment or test benches.
We are going to have another cool EDSFF system later this month.
I still remember Lisa Spelman introducing them as the "data ruler" and wondering how long the name would stick.
You can certainly tell they used Quanta as the ODM/OEM supplier for this case / chassis
This thing was at the 2018 Flash memory summit. Amazing it took this long to come out. Obviously much newer internals though.
"all you do is just push this little tab down and then you pray and then the little latch comes up" LOL
:-)
The sound in the new studio is completely different. It's actually weird, hahahah. There are some moments when it even sounds like an AI, especially in the moments when Patrick is not on camera.
Still working on it. 11 mics later switching from shotgun to lav is a pain. 100% of the video was recorded on that set with that lav setup though.
I think whatever is happening with the sound has to do with noise suppression or auto leveling. The highs in his voice seem to be getting muted.
I would take a hot-swap bay or BOSS over an internal M.2 any day; no need to take the node offline to replace it.
You should get a GN Modmat! Unironically a great way to cover the workbench.
That is a great idea. I should send Steve a note
I like the moving blanket. Very home oriented. Keep it!
EDSFF needs to become a more common standard for nvme drives, just imagine how many of those things you could fit in a 2u case if it was entirely those drives along the front
Totally
You had me at hot swapping those EDSFF hard drives. Sounds pretty spendy but I am certainly curious!
0:48 An electrostatic discharge mat would have been more appropriate, but sure, generate static electricity between the sensitive components and the blanket. It's also kind of like you're saying it isn't that big of a deal... I hope the warranty on your parts covers that.
According to Super User, industry experts estimate that static can cause product losses of up to 33%.
Patrick: counter top protection. Get an office floor protector (used for rolling office chairs). You can cut it to size for your counter top.
That is an awesome idea
Do Gigabyte and Dell partner? This looks so close to a Dell PowerEdge server, both with the internals and even the diagram on the underside of the cover.
Actually, you will probably see more HPE Cray servers sold that are based on Gigabyte designs.
@ServerTheHome The front bays for the NVMe drives are actually U.3 since they rely on PCIe Gen5
Hrm. More cores isn't always the answer if the single core performance is lower. But still a neat box.
Also... What software can realistically run a SDS using that much performance. Maybe vsan?
Totally true. There are lower core-count SKUs for higher per-core performance, and also lower core-count, high-cache Genoa-X parts for a lot of applications that can take advantage of that type of chip.
I like the move away from VGA. It should be obsolete in the server field by now. Maybe HDMI is a lot better to use than Mini-DP because you can use any TV or monitor with it.
Maybe the problem is the BMC chip itself; the BMC should implement HDMI instead, limiting resolution to just 1080p at 30fps to keep the bandwidth suitable for remote management.
I would guess that the latching mechanism effectively gives OEMs a chance to advertise when the chassis is in the rack.
Aside from that, they would just look generic, or god forbid, the storage manufacturers would be the most recognizable part of the faceplate. Seems pretty petty, but I wouldn't be surprised if that was the thinking.
That is 100% the idea, and it was driven by a few very large OEMs.
Why not just allow the addition of plastic cover plates that snap on for branding?
@@thedopplereffect00 then you'd be able to replace your dell sleds with HP sleds and we all know customers can't really have nice things if they are not tied to specific vendors. ;)
You’ve really streamlined your presenting and it’s a massive improvement thank you!
Thanks. Always a work in progress
Thanks for the info. I glossed over the rest of it, but was really interested in the SSD drive. I was just thinking the other day how putting an M.2 stick in a small bay would make sense.
We have a video coming that will be on point for that later this month
9:30 Also, we built a 1U 1-petabyte storage server using 32 Intel ruler 32TB SSDs, and it is also fully water cooled :)
Sorry, but I need to do it after hearing it 4 times in a row. It's Bèrgamo, not Bergàmo :-)
The background music was kinda too loud during the last 5 minutes. Made it difficult to understand.
Been waiting to see something like this since seeing the EDSFF draft.. this is hawt
The audio is weird in this video. It sounds muffled. Patrick might want to work on that for the new studio.
11 different lav microphones thus far. I am also recovering from a cold though :-/
Would love to see a reasonably priced (home lab territory), low power, 8-16 core x64 cpu with lots of pcie lanes...would love me some nvme storage options
Epyc 8004 is probably the best bet if you can get your hands on one. There is an 8-core 80W chip, but once you add in motherboard and RAM it's not cheap. It will be great when these get dumped on the market in the future, though. I'd love one for the NAS rebuild that I'm planning.
As interesting as it is to see these system, who is going to put these in their home?
So, fully loaded, this rack would be what, $100K?
What do you mean by fully loaded? You can easily crank up one of these systems to 50k. The number of servers per rack depends a bit on airflow design of the data center, and a lot on the amount of power you can draw per rack.
my favorite type of computer hardware (Server hardware) :) love the video
This thing is bonkers - but those EDSFF drives would actually be awesome for System Z. The current interface cards only allow 1 U.2 drive per adapter card, which can be limiting.
Great point! Have an awesome weekend.
7:53 so the fans are not hot-swappable, as in easy-peasy - but, if you wanted to attempt it, COULD you swap the fans while the system is "hot"?
probably depends on the BMC or whatever is monitoring the fans. assuming it is OK with the tamper switch (lid lifted), and it considers fan with 0RPM a warning and waits until the affected components are out of temp spec before shutting things down, you might get away with it.
You should create a new channel named ServeTheBusiness. This is way beyond the homelab. These types of server are for high availability - hyperconvergence.
Why? The STH main site is by far the largest storage, server, and networking review site out there and has been for almost a decade (it just turned 15 but the first 5 years it was smaller.) We were doing 8-way GPU servers back in 2016-2017 before they were fashionable.
I had to check whether this video is at x1.25 due to the voice pitch.
Wish that you would've added referral links for where to buy all the add-ons you are reviewing.
Definitely going to take some getting used to the new layout and little differences in style with all the changes, but I like it so far. Also looking forward to getting to play with these in 10 years or so. Just finally got my hands on some Intel Scalable servers to run with some front-facing NVMe; can't wait for those that have it standard to really come down. It's slightly overkill for most, but it will be good to see it more accessible to all market segments.
Thought it was a 2U blade server chassis or something at first
“We tried some lower TDP CPU’s like in the 300W range”
Gosh I know that’s reality but I’m still getting used to hearing it
This thing looks like it could easily fit 24 E1.S bays, and being dual CPU, PCIe lanes won't be a problem. Wouldn't some grills on that top metal plate help a bit with cooling, or would they mess up the airflow? Looks nice, I will buy a few of these once I get rich!!
I've used Gigabyte servers before. Support kind of sucks.
Like the new studio. The audio threw me off at first because it sounded so much better.
you need to fix the sound on the new set, i was hoping you would notice sooner
the density is mind-boggling. I remember bringing up Nehalem-EX systems (Stoutland platform, Quanta QSSC-S4R) and we had to be careful to split the power supplies across different breakers otherwise we would trip them. Those were 4U with four power supplies. Now we can do that in 1U, and there's still more CPU and memory power!
Maybe a custom sized desk mat could protect the workbench
Totally would love to, but I have no idea how to go about doing that.
HOLY CRAP!!! I think many businesses will be challenged to afford a beast like this! Definitely never going to see anything remotely like this in any home I'm ever going to be in....
Lovely presentation. But, I didn't understand a word of it.
Hot-swappable NVMe drives? Just pair them with 40GbE and you're off to the races!
Those power supplies are crazy. You actually need 2 dedicated 120V circuit breakers to run this one unit? Do you recommend 240V?
yes and yes
Very nice , Thank you for the video !
Glad you liked it!
Ah. This explains all those used large capacity 2.5" drives on the market. I thought they were pulled out of laptops but some of them seem too big. Didn't realize people used them in rack mount servers too. The removable caddies on servers always reminds me of Dave Bowman removing HAL 9000's cartridges - somehow those cartridges seem too large these days - in the imagination of the late 1960s that would have been the size of miniaturized computer storage units. If they'd made the movie today the cartridges would be long, thin things the size of M.2 cards.
lol we tried very hard to make latch part of the spec like the original Intel ruler had...sigh
I know :(
Patrick, buddy, are you ok (read that in Elmo's voice)? This is semi-sleep state level of energy for you. Audio is a little muffled.
Good pickups. Absolutely not. I got very sick two weeks ago and this is the first full video I have recorded after being somewhat recovered. On the audio side, different microphones / types. This has taken 11 lav mics to get anywhere near the MKH 416.
@@ServeTheHomeVideo Hope you're feeling 100% soon! Seems like everyone's had some sort of virus the past few weeks, and somehow I was spared!
@JeffGeerling thank you sir. I hope you do not catch this.
@@ServeTheHomeVideo I hope you are back to yourself soon. Be well. You probably know way more about this than me anyway, but I have good luck with wireless Sennheiser lavaliers. Shure is the other major player.
Great content as always!!
Appreciate it!
I have some u.3 carriers that fit 4 nvme drives in each one. I wonder if it will work with this
mini HDMI is absurd for a data center/racked server situation. If they want to move away from VGA it should be to DP. Could be full sized, could be mini, could be over USB-C, doesn't matter much. But should not be HDMI
I don't know... For business use, I have to go back over ten years for the last time I actually used storage in servers... All storage is done by dedicated and redundant powerhouses like NetApp or similar... so this hardware will make our storage solutions faster in the future, which is great... but when stability is more important than raw performance, I really prefer a separated storage/compute situation.
More for keeping all the VMs and snapshots..
Maybe a hyperconverged system...for example VMWARE vsan etc..
Serve the Home, or Serve the Data Center? I don't know many people that would get this for their home. A little disappointed that you're not focusing on home-use equipment.
Strange comment. The STH main site has been the largest server review site for a decade? We have 13 folks working on STH so we have plenty of people to do both.
The price for this server makes it out of reach for most home users. That's the point of the comment. @@ServeTheHomeVideo
Did you wrestle that blanket from @CuriousMarc?
Many moving blankets from the move to AZ.
Something not being discussed here... Gigabyte as a brand typically takes about a 1% performance hit in exchange for a 5-15% lifespan longevity bonus. It's been that way on their desktop systems for almost the life of the brand, and they've brought the concept over to the server space to compete with the bigger brand names that tend to try to get you to turn over a whole datacenter every time the 3-5 year mfg extended warranty purchases run out.
this is a great unit if you want the sound of a jet engine in your home 24/7 :)
How much does the R183-Z95 cost?
Audio sounds like the RTX noise cancellation thing was turned on(?). Sorta hollow
Completely different type of mic. Different size set. Still working on the audio, but it is actually 10x better than it was.
Patrick, get yourself a sheet of UHMW for the top of that workbench. It's super durable and stays looking decent even once it's scratched up.
Really good idea. I like the wood finish though. The problem is also that heavier 4U GPU servers and such will gash the wood in the one second something slips or someone is not paying attention. You are right that we should cover it with something else.
Finally a computer I can code and play pong on!
4:29 - amen. specs are specs for a reason.
What brand screw driver are you using @10:06?
Serve the home? That looks like true overkill, unless your home is also a Resort. Thanks. No mention of price? Clearly configuration would impact that but say the price the way you would use it?
Geniuses came up with the idea to place NVMe SSDs vertically in 1 unit :) BRAVO
My friends most likely would try to turn that server into a laptop.
SLOW DOWN PLS! The speed of this video and Patrick talking is so exhausting
Performance is very dependent on the CPUs you use. Well... DUH
That server is Freaking Beautiful!
1U servers = loud and lots of heat. Especially when sandwiched between other servers. It was always the 1U server having problems.
You need a Huge Anti-Static Mat for that New Desk :D
But two 1Gb/s Ethernet ports is a joke. With those NVMe drives there should be 10Gb/s for Ceph storage or any other distributed storage.
Usually you would use OCP NIC 3.0 NICs for the data path (e.g. Ceph) or add-in cards. The 1GbE is just for the management interface.
Would not want pcie lanes wasted on anything else.
Hmm why 2 CPU for storage tho , plus EPYC, feels wrong ~.~
The future of YouTube is Patrick from STH. The best in enterprise.
Fans powered from the storage backplane, LOL...
why would the usb port be on a rack ear???
It frees up faceplate space for more drives when you try to put 10x or 12x 2.5" drives on a 1U faceplate.
Won't these NVMe drives fry being so close to each other? They usually get very hot.
Isn't 6x E1.s with a max of 7.68TB each a much lower storage density than 2x U.2 with 61.44TB each?
E1.S can go higher, but you are right that 7.68TB is common. Sometimes density is not just the capacity, but also the number of devices.
I believe this six pack of drives is impossible to keep below throttling temp. Under heavy io load this server becomes useless then
Is this why the price of SSDs is skyrocketing lol
do they offer other rear daughter cards?
it is LONG. Where does one install it?
SSDs are fully crap once used, and they break...
How much?
"Damaging the work bench"?
It's not a work bench if you're worried about doing work on it.
You should see the big gashes wood gets from servers.
Only 2x1 Gb/s LAN ports for this monster?!!! Are they serious?!!! 🤷
Totally. Those end up being management ports. The OCP NIC 3.0 and PCIe Gen5 x16 expansion slots are for the high speed NICs.
Rather than deal with 3 different types of storage for 17 drives, why not one that has 24x E1.S slots in a 1U like a Super Micro SSG-121E-NES24R? U.2 seems like dead tech already...
Where does a consumer even buy stuff like this?
Yeah that'll never be in an average homelab.
“This is Patrick..”, why not “I’m Patrick..”?
Started doing it 4 years ago, so it is a thing now.