You're so genuinely excited that you're losing your breath. This is why I subscribe.
And I have to do these videos at 4:30-5AM before drinking coffee.
Man I wish Dell did engineering work like this on consumer machines.
They do. But consumer machines are made to be _cheap._
@@jfbeam Actually they don't; their XPS desktop machines sometimes have the weirdest design decisions, and their XPS laptops have had throttling issues for many generations now...
don't forget their AlienPlasticWare
@@TH-camGlobalAdminstrator Because *they are designing them to be cheap.* When reducing manufacturing costs is your main focus, all kinds of oddities will surface.
They are. In the server world it's not unusual to have non-standard components, interfaces, motherboard layouts, etc. etc.
They're not considered pre-manufactured e-waste there, because servers tend to be used quite long and often find their way into the second hand market (sometimes more than once).
So when Dell does their weird design quirks with the motherboard that goes all the way to the front I/O in a consumer PC, that is bad, because consumers don't usually wholesale replace their fleet of machines; instead they upgrade or change individual components of a computer, which is made impossible if manufacturers like Dell don't adhere to any of the standards. It's fine in servers though, because most of the time when you buy a server you buy at least a combo of chassis, mainboard, PSU, and storage backplane, and that's also how they get sold on. And if I say any more about this I'll just keep repeating what I already said.
I'm definitely a fan of the newer PowerEdge with the iDRAC 9, although I am a bit biased because that's basically all we deploy. I've got two in my homelab, one running Proxmox, the other TrueNAS.
iDRAC9 is GOAT
Yes, EVERYONE does crazy expensive markup on RAM and storage. As you say, it's an easy carrot, and easily the most profitable part of the system. Many times have I told Dell, Lenovo, etc. to ship it with ZERO memory because of their bullshit pricing. (Cisco is infamous for that as well. But Cisco just doesn't want to stock memory.)
I was playing with Dell's configurator on an R960, and it was quite fun seeing the price of the server go up by $18,000 for each 256 GB RAM stick added....
As the proud owner of three older Dell PowerEdge machines: I freaking love them. Nice and easy to service, and they've been rock solid reliability-wise, with hardware support that you just never see in any other server vendor's back-catalog.
The only downside is the 1U stuff is a *bit* loud. My two R410s sound like a pair of idling Pratt & Whitney J58 turbojets lol
I watch these from the bottom of my poverty pit so I can plan for my lottery win.
Or get it in 8 years when it's $200. I wonder how quiet it is.
it is not
@@mechy2k2000 Server equipment is not quiet. The question is instead just how loud it is. Having damaged my hearing by working with servers, I can safely say that you should always use hearing protection when working with more than one server at a time. And even then it's worth using those earplugs just for the peace of mind. My hearing was shot before I turned 40...
@@ServeTheHomeVideo Is it as loud as a Dell R710/R730 when idle, in a cool room?
@@mechy2k2000 It is worse. These CPUs are power-hungry mtfs; they can climb up to 700W each. Dell environmental certs state these servers can be as loud as 72 dB. HPE and Lenovo show similar numbers
My Dell R720XD and Dell R620 are my workhorses in my homelab.... these servers are solid
Agree same config #dellpower #edging
I like this intro format MUCH better than the rewind thing you were testing before. The rewind thing can work once in a while, but it's kinda jarring seeing it back to back if binge watching STH
Trying different versions now
I have always loved the engineering that Dell puts into their products.
this is sarcasm or?
@@scudsturm1 tbh I agree with him
@@scudsturm1 Not sarcasm, no
Dell PowerEdge servers might just have the longest life compared to any other server. I myself have transitioned to only using Dells in my homelab after trying out HP, Cisco, Lenovo, and Supermicro
What made you prefer Dell over Supermicro?
@@jannikmeissner Don't know about OP Divyansh, but I did the same. Basically, parts availability is HUGE, plus the build quality of the Dells is much better if you're working on them all the time. Other niceties such as firmware updating (Dell DRM or Lifecycle Manager) are a godsend.
Because you haven't tried IBM servers
@@eng3d Pretty sure it still won't change my answer, because again, parts are not the easiest to find. Dell is in a recursive loop: people buy more Dell partly because of the easy parts availability, which causes more parts to be available on the second-hand market, and so on. Also, iDRAC is so frikin good; I think the only advantage HPE has over Dell is the trial console viewer in the cheaper versions of iLO.
@@eng3d I have tried/used IBM servers; they're basically like HP in that getting firmware, updates, and parts is a PITA. Actually a little worse than HP in the parts market, as you can find HP parts a little easier. But both suck for microcode and other support compared to Dell.
I have an R720XD in my "homelab". Such a powerhouse
as an R720 owner, I can attest, absolutely legendary system
R730 here but I want an R830 now for the quad socket lol
Patrick, you forgot about the type of viewer that watches just to learn about the industry they went to school for, got an okay job for a while, and then the company closed down, and then they couldn't get a foot in the door again, so they had to work in a different field, and NOW watch STH to reminisce and dream about the amazing field they long to be a part of again....... oh, and you forgot about people like me that just like learning about servers LOL :P
(that first type of viewers is me too, if you didn't figure out already lol)
Small note: they also changed the BOSS card from the previous gen to NVMe instead of SATA
Inside and out, the R760 looks just like my R750. Your note on power usage is interesting: my R750 uses almost 400 watts at IDLE with two Gold CPUs and 8 or 16 RDIMMs installed. That's crazy; more than twice what my T620 pulled when running 30 virtual machines. The thing the video didn't touch on, seeing as these machines are going to end up, like you said, in home labs in five or seven years, is the noise. The sound of the server, equipped with so-called very high performance, or "gold," fans is unlivable in a home situation. I have a case open with Dell on my R750 to have them allow the fans to spin down below about 37%. I have taken to replacing the gold fans with standard performance fans just to be able to use the server for a few hours as a swing host while doing maintenance on my R740, which has the same fan speed issue but is a little quieter overall. Dell's servers are amazing pieces of engineering for the data center and for those who can afford them. But the last couple of generations, and the ones to come, are not going to be welcome in the home lab.
The poor idle numbers could be partially due to poor efficiency of the power supply at the extreme low end of utilization. If the top end will never approach 1400 watts, it's better to get a lower-spec PSU to better match the load.
@6:36 I speculate it's due to Dell's compact cooling architecture as opposed to Microsoft's huge and complicated heat pipe system. I suppose there's no IPMI script to force the later gen PowerEdge fans to idle at 5%? Doing that did wonders for my R730. It went from unbearable to being able to live with it in my bedroom
@@AlexKidd4Fun You make a good point regarding efficiency of the power supplies at lower output, but how long would it take to recoup the cost of electricity and the few percent efficiency gains after spending for new power supplies. "Greenness" aside, the outlay is not worth it.
@@primeral Nope. Dell firmware ignores any changes made through iDRAC, IPMItool, or anything else.
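To put rough numbers on the payback question raised above, here is a minimal back-of-the-envelope sketch. Every input (watts saved, electricity price, PSU cost) is an illustrative assumption, not a measured figure:

```python
# Hypothetical PSU right-sizing payback estimate; all inputs are assumptions.
watts_saved = 30            # assumed idle savings from a better-matched PSU
hours_per_year = 24 * 365   # server runs around the clock
price_per_kwh = 0.15        # assumed electricity price in $/kWh
psu_pair_cost = 400.00      # assumed cost of a pair of lower-rated PSUs

annual_savings = watts_saved / 1000 * hours_per_year * price_per_kwh
print(f"annual savings: ${annual_savings:.2f}")                    # ~$39/year
print(f"payback time: {psu_pair_cost / annual_savings:.1f} years")  # ~10 years
```

At roughly a decade to break even under these assumptions, the "the outlay is not worth it" conclusion above holds.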
I have an R720 and it still works.
Another great video, I appreciate the talking speed, it's fast enough to keep me engaged throughout the video!
Thank you. Have a great weekend!
@@ServeTheHomeVideo you too!
Thanks for the hint about RAM prices on Dell's online configurator.
NP. On the main site we show the SSD pricing as well
@@ServeTheHomeVideo Dell's UK website charges about £2K per 64GB module. Maybe that's not a mistake but just crazy pricing! I will turn to Supermicro
One of the biggest things that bugs me about Dell is the front drive bays. For the love of god, just include all usable caddies!! This gripe might come from having hundreds of Dell systems from colocation, where those things just pile up in the trash.
Great point
As far as idle power, I've found Dell servers consume a huge amount of power in their fans. (On much older models, there are undocumented IPMI commands to manually control fan speeds. They obviously removed them once people found them. I can reduce the power consumption of mine by ~100W!!! by slowing the buggers down.) Also, storage: the more storage you stack in there, the more power it's going to need, especially for non-SSDs.
But seriously, who's going to buy one of these and leave it sitting idle!
I found that too; I was able to get my R7515 down to about 100-140W idle with hardware mods
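For context on the undocumented IPMI fan commands mentioned above: on iDRAC7/8-era machines (R720/R730 and similar) the widely shared raw override looks roughly like the sketch below, wrapped here in Python via subprocess. The iDRAC address and credentials are placeholders, and as the reply above notes, newer firmware ignores these writes entirely:

```python
import subprocess

# Placeholders: point these at your own iDRAC's address and credentials.
IPMI = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "root", "-P", "calvin"]

def set_fans_manual(percent: int) -> None:
    """Disable the iDRAC's automatic fan curve, then pin all fans to a duty cycle."""
    # raw 0x30 0x30 0x01 0x00 -> turn off automatic fan control
    subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x01", "0x00"], check=True)
    # raw 0x30 0x30 0x02 0xff <hex> -> set all fans (0xff) to <hex> percent duty
    subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x02", "0xff", f"0x{percent:02x}"],
                   check=True)

def set_fans_auto() -> None:
    """Hand control back to the iDRAC -- do this before putting load on the box."""
    subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x01", "0x01"], check=True)

set_fans_manual(20)  # ~20% duty cycle; keep an eye on your temperatures
```

Handy for a quiet homelab idle, but hand control back to the iDRAC before loading the machine.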
Thanks for this video, I hope you also do a review of the Dell R7625
I think we did the R7525 so maybe the R7625 if we get a chance
Can you do a video about how E3.S is being used for CXL memory expansion?
We can, but it will not happen in this SPR server. Sapphire Rapids does not support Type-3 CXL devices. AMD does.
@@ServeTheHomeVideo thank you
Got 3 at work with an Oracle RAC cluster running on them
These are beasts 🤯
I much prefer this intro to the "lets rewind".
I am still loving my R730XD
I don’t even know what I’d do with one of these but it sure is a beaut
Great swan song for Intel exiting the space
LOL, here I am still rocking an R720. It's too slow for most of my jobs but has too much memory (128GB) to justify replacing yet. I've moved everything else to my TR2950X server and now the R720 just sits there as my firewall/backup host.
That is a high power firewall and backup host!
I have an R630 and I am very happy with it, besides the fact it hates GPUs; I have a T400 in it for LLMs and video transcoding. Yeah, I know it's over the power limit for the PCIe, because Dell says 35W for the dual riser, but it's the only PCIe device in that riser so it should be fine. I bought my R630 for 600€ two years ago, inclusive of two damaged PowerVaults. It had a single E5-2603 v3, 128GB of ECC 2133 RAM in 32GB modules, a PCIe RAID controller for the PowerVaults, an internal PERC H330 Mini, a 4-port 10Gbit SFP+ NIC, the 8-bay chassis, and iDRAC8 Enterprise as well.
To my surprise, in the slightly dented PowerVaults were 12 SAS 4TB 10K drives each, and only 2 of the 24 drives weren't operational. At first I thought the backplane might be damaged, but after testing I confirmed those 2 drives are dead. I asked the seller about the drives and he said I could have them; it was his fault for not checking for the drives. So I bought them as well.
I spent another 600€ on 2 new CPUs and RAM modules for the second socket and 8 SSDs. Now my system has 2 Xeon E5-2683 v4s and 256GB RAM. In the R630 are all 500GB SATA SSDs for the OS and my LLM and services, in RAID 10 for the fastest performance and some protection against data loss. The NAS and backup part lives on the PowerVaults; I mean, nearly 35TB is more than enough. Each PowerVault is set up in RAID 6 and both PowerVaults are copies of each other. I think that is safe enough, for now.
I love these videos but they're always just a distant dream. What makes the USA second hand data center hardware market so accessible that I just don't see in my country or most other countries for that matter?
US prices are affordable even in third-world countries. In my country, dumb server refurbishers price 15-20-year-old junk, which never sells, at more than the cost of new US parts.
I want to move to USA just for homelabbing.
You could get stuff shipped from the US.
@ The issue is just the weight of the (mostly) steel chassis. If a company could replace them with lighter materials, I bet the overall price would decrease and the machines would be more homelab-friendly. I bought an HP C7000 chassis which weighs 70 kg with all swappable components removed, and ended up paying more for the shipping than for the machine itself. The vendor was Bargainhardware, shipped from the UK to Sweden.
@@alvinnorin8820 Just do a weekend trip to Holland. There's loads of hardware here as well due to so many datacenters.
One of the universal laws of computing is that *ALL* 1U servers suck. Either you use them without rear cable management, which means you have to unplug all the cables every time you wish to work on the internals (which just sucks), or you use cable management, and then if you need to unplug a cable for some reason, well, that really sucks.
Awesome! Just landed and saw this. Made me smile. Have a great weekend
Those power supplies are amazing. Who's the manufacturer of those 1.4 kW PSUs? Can you please upload photos of the label on STH?
I don't know about the R760, but in past gens I've seen Dell use PSUs from Emerson, Delta, and Lite-On.
3:18 The OEM is Delta Electronics.
Left you a comment regarding the consumption, but for some reason YouTube seems to be censoring it?
Anyhow, I have a single-socket Intel Gold 5317 Dell R450 deployed, consuming 190-200W out of the box doing nothing - idling in Windows.
Contacted support and they told me they had this configured as "Max Performance" out of the box.
This setting can be changed under a power profile in iDRAC/BIOS to "tame" the power consumption - this was what support advised.
Consumption went down to around 110-130W under load.
That is totally correct. Dell configured that power option but it is what we used for the performance runs
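For anyone wanting to make that same power-profile change remotely rather than in BIOS setup, here is a hedged sketch against iDRAC9's Redfish API. The endpoint and the SysProfile attribute are Dell's published BIOS attribute names, but verify them for your generation; the address and credentials are placeholders:

```python
import requests

IDRAC = "https://192.0.2.10"   # placeholder iDRAC9 address
AUTH = ("root", "calvin")      # placeholder credentials; change in production

# Stage the BIOS System Profile change from "PerfOptimized" (Max Performance)
# to the DAPC performance-per-watt profile.
r = requests.patch(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios/Settings",
    json={"Attributes": {"SysProfile": "PerfPerWattOptimizedDapc"}},
    auth=AUTH,
    verify=False,  # iDRACs commonly ship with self-signed certificates
)
r.raise_for_status()
# The staged change only takes effect after a BIOS configuration job and a
# reboot (e.g. schedule it from the iDRAC web UI or with racadm jobqueue).
```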
I think future models should be modular like this one ☝🏻
That looks quite fancy compared to my PowerEdge 2950 III which I used to host Minecraft servers etc. in 2013-2017
Nice review, clearly a great system. I had the sense (from operational experience with the PowerEdge line) that Dell works in a way I prefer relative to other leading vendors. This review underscores my perception. Many thanks.
I love the front direct-access iDRAC USB connector on the PowerEdge servers. Very convenient when racking and stacking a lot of them, or troubleshooting. Did take me a while before I noticed this feature 😅
The 2nd hand market is also very healthy when it comes to Dell, tons of stuff on eBay and even buying parts through Dell has been reasonably priced. The only exception to that is when you have to buy Dell storage.
What's the diff between OpenManage and iDRAC? Also, do they just add a number in front for the EPYC systems? I've been looking at the R7715 2U, which looks really similar to this. Do you have any idea if the new PERC 12 likes Micron NVMe drives?
I've been cruising along using Micron & Intel SATA SSDs on Dell 14th-gen servers; anyone know if non-Dell NVMe also works for these latest models?
I really want to see the liquid cooling option
Agreed!
I was watching Jeff G installing a Rasp Pi on a 4090, then my mouse somehow wandered towards your thumbnail. Your yellow T-shirt thumbnail started animating wildly! WTF. Patrick "screaming" to get attention from Jeff!... Wait, I am getting back to those first!
Ha! I was chatting with Jeff last night while I was finishing up reviewing this video.
Nice piece of hardware, thank you for this review. I love Dell's servers, but it's a shame they're based on Intel CPUs again; I hoped for EPYC servers. I'm waiting for tower versions anyway, because they're much quieter than the 1U/2U etc... Actually, I have a T630, but modified with Noctua fans. I still hope that Dell brings out dual-EPYC tower servers.
Regarding the heat sink size comparison, are the Microsoft servers of the same form factor?
Comparing total mass and fin area would be really interesting, as well as the total fan rotor area / CFM in each case so the cooling systems can be completely compared.
Great content Pat, but please slow down and enunciate a little more. I literally have to rewind to understand. Great work buddy!!
Hey Patrick, love the video, but I think the background music is a touch loud this time?
I agree. The drafts were louder. This is John's second public video for STH. We need to work a bit on the music/ levels for #3.
Well this explains all of those $200 R730s with V4 Xeons that were on the used market a month ago that I passed up on and missed out 🤡😢......
Yeah, R730s are a great deal. I would skip the R740 and watch for R750s, as the R740 is PCIe Gen 3 (same as the R730) and the R750 brings Gen 4. Plus, the performance difference between the Intel CPUs is minimal. AMD's EPYC really took the lead from ~2017, and Intel has been making very small IPC improvements since ~2012 or so.
14:26 - Yes there is the normal Markup Game from the Big OEMs, but DDR5 has been way more expensive from launch than DDR4 and is slowly coming down.
HPE's "Product Bulletin" tool has pricing data and I've been collecting it and wrote a script to scan the data for a given SKU. Here is the 64GB DDR5:
IPL-2022-12-14:P50312-B21 HPE 64GB 2Rx4 PC5-4800B-R Smart Kit $5193.00
IPL-2023-02-03:P50312-B21 HPE 64GB 2Rx4 PC5-4800B-R Smart Kit $5193.00
IPL-2023-04-10:P50312-B21 HPE 64GB 2Rx4 PC5-4800B-R Smart Kit $4421.00
IPL-2023-05-10:P50312-B21 HPE 64GB 2Rx4 PC5-4800B-R Smart Kit $4068.00
IPL-2023-06-13:P50312-B21 HPE 64GB 2Rx4 PC5-4800B-R Smart Kit $4068.00
So you can see the 64GB DIMM WAS over 5 grand "List Price" and has dropped twice since then down to 4 grand.
During the same time period, the DDR4-3200 (Ice Lake) equivalent was 3500 initially and has dropped to 3100 now.
So DDR5 is still more expensive, but when you factor in the Markup Game (These are all "List Price" which almost no one actually pays), it's right now only 20-30% more expensive depending on the DIMM Model/Size.
Oh and as Patrick said, the DDR5 is 50% faster right out of the gate.
So if you want the speed, a 20-30% price increase to get 50% more performance isn't terrible.
If you want the cheaper price, buy the older model. R750 (or HPE DL380 Gen10 Plus would be equivalent)
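The commenter's actual script isn't shown, but a minimal sketch of scanning collected Product Bulletin lines (in exactly the format quoted above) for a given SKU could look like this:

```python
import re
import sys

# Matches lines like:
#   IPL-2023-04-10:P50312-B21 HPE 64GB 2Rx4 PC5-4800B-R Smart Kit $4421.00
LINE = re.compile(r"^IPL-(\d{4}-\d{2}-\d{2}):(\S+)\s+(.+?)\s+\$([\d,]+\.\d{2})$")

def price_history(lines, sku):
    """Yield (date, description, price) for each price-list entry matching sku."""
    for raw in lines:
        m = LINE.match(raw.strip())
        if m and m.group(2) == sku:
            date, _, desc, price = m.groups()
            yield date, desc, float(price.replace(",", ""))

if __name__ == "__main__":
    sku = sys.argv[1] if len(sys.argv) > 1 else "P50312-B21"
    for date, desc, price in price_history(sys.stdin, sku):
        print(f"{date}  {desc}  ${price:,.2f}")
```

Usage would be something like `cat ipl-*.txt | python price_scan.py P50312-B21`, printing the list-price history shown above.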
Golden Cove server chips at idle? Yeah, that sounds about right. Just shows how efficient EPYCs really are! I am actually going to watch your review of Sapphire Rapids to see the comparison
Hearing a new PowerEdge compared to a Rolls or whatnot is bizarre. The 750-2950s I dealt with progressed from Isetta to Murano CrossCab.
Man, that's a crazy expensive server. Would be interesting to hear who this thing is for. If you're a big fish, surely you'll just custom-design your own servers rather than buying these, and if you're a small fish, $16k+ is a lot of money versus going for more budget-friendly options
$16k base is high, but as we discussed, it seems like it is priced for very large discounts. To custom-design servers, most outfits need to be very large. We will cover the OCP Regional Summit on the main site next week
$16k base? Let's discuss the price when fitted with a pair of Platinum CPUs, stuffed full of RAM, and engorged with all front panel slots FULL of LARGE capacity NVMe drives. Buy the server, or buy a house in Georgia, your choice... :)
Just some feedback: the music choice for this episode was a little distracting. I think much more mellow/soft music would be a better fit, kinda like what Wendell uses in Lvl1Tech videos
This is video #2 from our new editor John. We are still working on tweaking the setup. Always a process. The next video on the publishing queue will be one of Alex's again
May I ask who the PSU supplier is? It looks like a customized design for Dell.
I hope Apple will bring back the Xserve with Apple Silicon (or a Mac server?)
This looks like work lol
All we buy are Dell PowerEdge R series servers for our clients. Of course, we manage them as well. iDRAC is just overall better to use than many competing products like HP iLO, especially with SolarWinds MSP. I just wish Dell had an R series alternative featuring AMD EPYC processors that was as compelling and complete a solution. Dell is very good at engineering their own solutions, and if they gave an AMD system the same care they do Intel ones, they would probably come up with something great.
I do hope that their quality has improved. It wasn't that long ago that if you bought a rack of Dell R series servers you'd get 20% DOA after 1 month. Once you got past the DOA rate they'd last forever, but that initial quality was terrible. Worked for 2 hyperscalers after that, and the white box servers they ordered weren't amazing (engineered for price, after all) but the quality was soooo much better than Dell.
Oh and yeah, no one pays list. If you aren't getting 70-80% discounts you are paying too much…
20% failure? that sounds like a very high number. Does Dell not do burn ins before they ship stuff to customers?
4:31 Tetris-level rear side
Had one of these at home; I think it was the number of RAM slots or something, but it would always idle around 300 watts lol
How the hell do you keep track of all these acronyms?
The force is strong
thank you for not saying "let's back up a little" this time, honestly it was a bit unnecessary
We pulled the rewind out of this one
@@ServeTheHomeVideo Good call, I think. Surely putting a good 10-15 second "hook" at the beginning is important for viewer retention and stuff, but that rewind part just slows down the pace a bit unnecessarily, in my opinion. Obviously it's a rather small detail, but you know, just trying to give (I think) positive feedback to a channel that I like :)
Big fail from Dell, actually, in not using the U.3 interface and tri-mode for the new PERC for the front bays. They should really have taken the Broadcom MegaRAID 9670-24i for the new PERC to allow all 24 front bays to be SATA/SAS/NVMe.
Part of me wishes there is a chassis that supported 8x 3.5" drives and 8x 2.5" drives in the front.
T550 Tower model has an 8x 3.5" + 8x 2.5" NVMe! But obviously it's a tower instead of a rack form factor.
@@Goodman_4629 Wow, for a 500-series Dell the T550 is no slouch. It is a bummer it's a tower though. Not that I'm worried about density in my silly little setup. Honestly the R720 I'm running is overkill as it is.
I am not sure, but I think you like this server 😃
Imagine calling a Dell server the Rolls Royce of servers in a world where IBM Z-Power servers exist.
Fun fact, we have Power 9 servers that are IBM branded and made by Supermicro. On the Z-Power side that is basically a separate market at this point. R760 is to servers what the Rolls Royce is to sedans. IBM Z is the hovercraft option
How do I get a sales rep?
Heatsinks that big? Intel said screw it with power efficiency in an effort to stay close to AMD?
I love the Dell poweredge #edging
Just fill all storage slots with Optane P5800X, for the LOLs
From a homelab point of view, these aren't going to be suitable, in my opinion. The latest gen is at least 2x the price of the last gen. The noise from the fans is also high unless you mod the server, which, due to there being no cables for the fans, looks impossible on this server. Then the idle power consumption… I wonder if the R7615 is different… That said, I'm super happy with my R7515 and plan on getting another.
I think you are right. That is a big challenge with the new generation of servers. Also, the power consumption is increasing by a ton. Next year we will have 500W TDP CPUs from AMD and Intel and NVIDIA's Grace Superchip is already at 500W.
@@ServeTheHomeVideo Thanks for the reply! The other thing that goes with power consumption is heat. My single server heats up a room noticeably; it'd be interesting to think of the 500W TDP CPUs in terms of heat dissipation. Will you be looking at the water-cooled options for the latest Dells too?
Dell is very proud of their parts. SSDs are highway robbery.
Let's see you actually go through their web system configuration page for a specific/custom config and not end up with something you don't want, or not get what you need.
RIP homelabbers, 600 watt idle is insane.
Holy hell, the power usage is nuts, and Intel selling out does not bode well for this platform. They are essentially owned by AMD in almost all market segments now, and for years to come; particularly in servers they are losing massive market share and mindshare, and for good reasons too. At least Dell is using less proprietary parts. #mkt driven decision tree
I'm the third type of guy: I watch but I can't afford it even after 5 years 😄😄
I don't see any difference in terms of engineering from HPE or Lenovo🤔
Can't wait to put one of these PSUs in my PC if Intel ever launches its ATX12VO spec xD
The BOSS SSD trays always look crooked
A lot of times they get bumped during racking, too, and then I go to power on the system and it doesn't power up, or complains about one of the cards missing
Welp, in 5-10 years' time that server will only cost £10-20, or even be given away for free by companies who want to get rid of 'em, as these things go obsolete extremely fast.
Starting at $16,004.73
We talk about pricing in the key lessons learned and in the main site article
Microsoft servers? I didn't know Microsoft was in the business of selling servers.
Now, if you had a Dell with EPYC CPUs instead, that would really be a beast: lower powered, and more of everything at a fraction of the cost of those 2 Platinum Intels...
Yeap. And that's a 1U heatsink. 1U is tough with modern 1000 core systems. (One of the systems we designed needed solid copper heat sinks and 15k fans for a 1U chassis. 2U needed none of that crap.)
HPE has done removable fan partitions for years now. It always frustrated me that our Dells didn't
Re: the iDRAC section and choosing BIOS settings without spamming a key on reboot... seriously, I feel like a cave person when I'm trying to multitask, miss the moment to press the key, and need to reboot a slow server to try again
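There is a way around the key-mashing on anything with a reasonably modern BMC: a standard Redfish one-time boot override into BIOS setup. A sketch against an iDRAC follows; the address and credentials are placeholders, while the Boot override fields are standard Redfish schema:

```python
import requests

IDRAC = "https://192.0.2.10"   # placeholder iDRAC address
AUTH = ("root", "calvin")      # placeholder credentials

# Standard Redfish one-time boot override: the next reboot lands directly in
# BIOS/System Setup, no F2-mashing over a laggy virtual console required.
r = requests.patch(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
    json={"Boot": {
        "BootSourceOverrideTarget": "BiosSetup",
        "BootSourceOverrideEnabled": "Once",
    }},
    auth=AUTH,
    verify=False,  # self-signed iDRAC certificate
)
r.raise_for_status()
# Then power-cycle the host (iDRAC UI, racadm serveraction powercycle, etc.)
# and it drops straight into System Setup.
```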
Nothing fancy.....
This is not a server for home, by any stretch of the imagination.
Too power hungry, inefficient and way way overpriced!!
Jesus loves us God bless everyone!11111!!!!!!
first
You are today's big winner!