Personally I feel like the glory days of rackmount servers for the homelab are behind us. Power consumption is already way too high and only getting higher, and the systems are getting to be complete overkill, with a single host being capable of what 3-4 older boxes used to be. On top of that, newer desktop CPUs are crazy fast and efficient, and high-capacity DIMMs let you pack a ton of RAM into a tiny system. I had a 24U rack with a few R*20 systems, a disk shelf, multiple switches, a tape library, etc. Sold it all and replaced it with an admittedly expensive but tiny SFF Ryzen system in a NAS case that idles at ~80 W with half a dozen disks, and I have not looked back at all. Although I did keep the tape library, that thing's dope af.
That’s like 60% correct… you’re forgetting something: effing software bloat everywhere!! No matter how beefy your system is, the software you run on it will be the ultimate bottleneck!! Even Linux does it, not only Windows. I’ve found that only FreeBSD is the exception, but then again, nothing is ever made for it.
@seansingh4421 lol what a 💩 take 😂
@seansingh4421 You forgot your meds today, mate.
Agree. I currently run TrueNAS on an R730, and with California’s electricity prices I’m very much doing the math on other builds right now. I pay $0.43/kWh (average). That means I pay ~$350/year to run it 24/7. It begins to make sense to buy almost anything else and come out ahead.
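For anyone who wants to plug in their own numbers, here's a minimal sketch of that cost math; the ~93 W average draw is back-calculated from the ~$350/year figure above, not a measured value.

```python
# Minimal sketch of the 24/7 running-cost math.
# The ~93 W average draw is an assumption back-calculated from the ~$350/year
# figure in the comment above, not a measured number for an R730.
def annual_cost(avg_watts: float, price_per_kwh: float) -> float:
    """Annual electricity cost of a machine running 24/7."""
    kwh_per_year = avg_watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

print(f"${annual_cost(93, 0.43):.0f}/year at $0.43/kWh")   # ~$350/year
print(f"${annual_cost(200, 0.43):.0f}/year at $0.43/kWh")  # a heavier 200 W load: ~$753/year
```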
I agree for the most part! But there are very legitimate options for building a rackmount server with standard consumer/prosumer ATX parts. It's like the best of both worlds.
Edit: just realised I missed the whole point of your comment lol. I fully agree after rereading it: enterprise computing is still far too overkill for a homelab, especially with current consumer hardware.
The platinum YouTube plaque on the wall behind you isn't level, Tom. My OCD is taking over.
LOVE your videos, thank you for all the hard work you do for us for free.
We run a shitload of refurbished R740s for only a third of the price of new R750s, without any performance hit. I love building clusters with 20 or 24 R740s instead of 16 R750s: better performance for my customers, less investment. And I can also get them in a few days instead of weeks or months.
Roughly how much are these now in the US? Mine cost quite a lot, but that's because I'm in Asia and at the mercy of local Dell agents; I got it new, though. It's about 4 years old now.
I had one of these at a company. We got a $40k server for $16k as it was stock that Dell needed to get rid of. It was an excellent server.
I chose to go the second-hand HP route. I've had about a dozen over the past 10 years spanning Gen6 - Gen10. The good news is that my therapist says we are close to uncovering the early traumatic event that sent me down the road to self-flagellation.
Nothing wrong with HP. They're still my go-to.
Sure, what's wrong with paywalled firmware updates?
Hence I run commodity hardware.
I have purchased a good number of R740XDs from Tech Supply Direct. I do a little extra analysis when deciding which CPUs to use, by creating a quick spreadsheet of the CPUs and prices. Then I Google each one and click the CPU benchmark link to see what the multithread and single-thread ratings are. Once that's complete, I do some sorting to find the best performance per price. It's a bit of an eye-opener, as you can usually see substantial price differences between some models with very little difference in actual performance.
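That spreadsheet exercise is easy to script, too. Here's a rough sketch of the sorting step; the models, prices, and benchmark scores below are made-up placeholders, not real listings.

```python
# Rough sketch of the performance-per-dollar sort described above.
# Prices and benchmark scores are made-up placeholders, not real quotes.
cpus = [
    {"model": "CPU A", "price_usd": 120, "multithread": 16500, "single_thread": 1900},
    {"model": "CPU B", "price_usd": 300, "multithread": 19000, "single_thread": 2100},
    {"model": "CPU C", "price_usd": 90,  "multithread": 14000, "single_thread": 1750},
]

# Sort by multithread benchmark points per dollar, best value first.
for cpu in sorted(cpus, key=lambda c: c["multithread"] / c["price_usd"], reverse=True):
    ratio = cpu["multithread"] / cpu["price_usd"]
    print(f'{cpu["model"]}: {ratio:.0f} multithread points per dollar')
```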
Still rocking an R720; it's been in service for around 8 years now and runs around 110-140 W at low load. Currently it just runs my pfSense VM. There's not much else I can use it for without it being too slow to be reasonable: most AI programs won't run due to missing instruction sets, there's no hardware video decoding so I can't make it a decent Plex box without investing in a GPU (I already have an Intel CPU server that I use for hardware decoding anyway), and it's way too slow for most other tasks that would benefit from lots of cores at a low clock, such as 3D modeling of large fluid simulations (which also needs a GPU).
Plan to replace them in a couple of years. I had 10-year-old machines running where the purple tabs would turn pinkish-orange and disintegrate if you even looked at them. Running backups on a few machines was enough to push the hardware RAID card to the point of crashing the machine. Swapping it out with a spare broke most of the tabs I touched on the other machine and a few on the new one (I swapped the hardware NIC out too, because Intel and MAC addresses are a pain to remap on all the VMs). Other than having to repaste the CPUs on a few machines in the stack (and the endless number of mid-day power supply swaps at the data center), they were fairly decent machines. The first 5 years were flawless.
Food for thought.
Big thank you, Tom! Way back when you started covering XCP-ng and XOA, I started playing around with some random hardware. I ended up running a primary Norco box and a better secondary Supermicro for my main FreeNAS setup, with the master replicating across. This has been running for over 5 years now.
I also bought an R740xd new from local Dell agents (way too expensive, but I got 5-year 4-hour support), and that's been my XCP-ng VM compute cluster.
However, Dell for a long time couldn't verify whether TrueNAS would work with the existing PERC controller. I think you've answered that for me, as ideally I'd like to move my aging Norco to another Dell solution.
Your content on pfSense, FreeNAS and XCP-ng took my homelab to the next level, and now I have two racks.
Happy to hear the videos helped you get that all set up!
Looks like iDRAC 9 is a lot snappier than 7 and 8. Good to see.
Love these servers! Have two of them in my home lab running vSphere! Great video!
I've got one R740 running in my homelab. It's a non-XD version. I even picked the version without drive bays, because I won't use them anyway. My storage is 2 Intel 4TB NVMe drives installed internally.
The best part of the machine is that it idles at 56 W.
Specs:
1x Xeon 6132 (14c/28t @ 2.6 GHz)
192GB DDR4-2400
2x Intel P4510 4TB on an AliExpress U.2-to-PCIe adapter PCB
Dell rNDC SFP+ NIC
1x 750W PSU
Running the newest Proxmox. But I only use it for testing, so it's off when I don't need it.
I kinda wish we, as a community, had a good pool of info on reusing these chassis.
I'd love to know how far back I can go if I want NVMe connectivity, and I'm willing to strip the chassis down to the backplane/cages.
I recently got my hands on a bunch of 4/8TB U.2 drives for very little money, and now I'm getting sticker-shock at just how much money it's going to cost to even connect them.
An $80 cage + $25 cable is hard to justify to attach a $50 drive...
Even on eBay? Used to be able to get drives on Amazon with basically free caddies…
@farmeunit A caddy doesn't help to actually connect up the drive.
@Prophes0r I understand, just giving an example of acquiring parts. Not sure about this model, but typically you can buy parts to make them fit the port.
@farmeunit Make what fit what port?
The point of my post was that there isn't much info on reusing/rebuilding older chassis with modern parts.
Which ones can be refit with NVMe backplanes? What are those part numbers? Are they compatible only with the original vendor's PCIe switch chips, or do they work with anything from the same family? Does the chassis require the original motherboard/BMC to use the backplanes? Do they have proprietary power pinouts or something else stopping you from gutting it and cramming in a different motherboard? Can the fans be replaced? Do they still provide enough airflow without being screamers?
Are there any "wild" mods for using these with NVMe?
Example: I saw a project someone was working on, where they were looking at a backplane for a 2U chassis that supported 4x NVMe drives on the rightmost slots. The SAS/NVMe circuits on the backplane were physically separate. The power was on the NVMe side. So they were trying to source a bunch of these backplanes to see if they could slice off the NVMe parts, make some custom power cables, and stuff 6 of them into a single chassis to make all 24 ports work with U.2 drives.
Dunno if they ever got anywhere with that project. But it's interesting.
We are at the point where there are several generations of chassis out there that just aren't useful anymore in their original configuration. Hundreds of watts of power draw for a Haswell Xeon is not worth it.
How can we get cool stuff, and reduce waste in the process?
Tom, if you were speccing out a new XCP-ng system, would you prefer an all-in-one like the unit you have here (host + storage), or a host (compute server) plus separate storage, like an iX Systems box with NFS shares?
It would definitely be a great backup machine you could have on a few hours a day or a few days a month.
There are quite a few people using these as homelab machines; bang for buck they are honestly pretty good servers. I went EPYC for my homelab upgrades, but I gave the R730/R740 and the XD versions a good look before deciding.
Nice one! But what type of clients are using this? Software development companies for development work, or are they running production services?
Got the R740xd a couple of months ago; I'm running Unraid on it and installed an Arc A770 with some mods to make it fit.
"Really fast NVME", like how fast? Would be great to see what you achieved with ZFS. Fast like one nvme drive (vlog raidz1)?
I run an R340 in my homelab. Same generation but quite a bit lower in power usage. Only downside is they top out at 64GB of RAM but for homelab use they're excellent.
Now I want one. My R710, when I still used it, sometimes used 300 W.
Tom, I'm curious, what would stop you from using this as a primary system?
I prefer HPE over Dell; the Gen10 ProLiant DL380 is a good homelab choice if you're sticking with HPE parts only, and it's very quiet in a supported configuration. If you're using third-party PCIe cards and drives, you can override the fan curve with iLO's REST API, with up to a 50% reduction in fan curve ramping.
I've been running a DL325 Gen10 in my lab; it's a bit loud being 1U, but it holds tons of RAM and I got it dirt cheap - $500 new in box.
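For anyone poking at this, iLO 5 speaks standard Redfish, so you can at least read the current fan readings before and after applying any override. Below is a hedged sketch using the generic Redfish Thermal resource; the host and credentials are placeholders, the exact resource path can vary by firmware, and the fan-curve override call the comment mentions isn't shown here since it isn't something I can confirm from public docs.

```python
# Read current fan readings from iLO via the standard Redfish Thermal resource.
# Host and credentials are placeholders; verify=False only because iLO ships
# with a self-signed certificate by default.
import requests

ILO_HOST = "https://ilo.example.lan"   # placeholder hostname
AUTH = ("admin", "changeme")           # placeholder credentials

resp = requests.get(f"{ILO_HOST}/redfish/v1/Chassis/1/Thermal",
                    auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()

for fan in resp.json().get("Fans", []):
    # Field names follow the Redfish Thermal schema; older or newer firmware may differ.
    print(fan.get("Name"), fan.get("Reading"), fan.get("ReadingUnits"))
```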
I can confirm Proxmox compatibility. I run eight of them in an SMB cluster.
That’s so kewl, I'm jealous!!
Any Supermicro equivalents to this?
I wish that network module was available as a pcie card!
Two questions: how much did you pay for the server, and which one is better, XCP-ng or PVE?
I prefer XCP-ng th-cam.com/video/et54DxAC2uM/w-d-xo.htmlsi=uwrR4YkMSQPwCrJI
Gotta disagree with the Intel vs Broadcom argument. My experience is that Intel requires offloads to get the performance they do, and my annoyance is when those "smarts" aren't very smart and instead get in the way. A simple recent example is an Intel NIC and VLANs: I have an 82599ES that I can in no way get to pass VLANs (trunk) running Proxmox (currently 8.2.7). A BCM5709 (onboard in the same system) works perfectly fine, as does a BCM57840 (in a different Proxmox node).
I don't need my hardware trying to be smart, failing at it, and wasting my time. For high pps Intel is supposed to be faster (I can't say the system name, but the software maker insisted on Intel), but in every case where we have it, all of those offloads are pointless because the NICs are plugged into access ports and the network devices deal with anything fancy (VLANs, VXLAN, etc.). Maybe with VMware the drivers handle things properly, but the Linux drivers are crap (I made the mistake of doing VLANs on a Windows machine once for a Hyper-V setup and will never touch that again).
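For context, the usual way to trunk VLANs to guests on Proxmox is a VLAN-aware Linux bridge, roughly like the /etc/network/interfaces sketch below (interface names and addresses are placeholders). When a setup like this works on a Broadcom port but not the 82599, toggling the Intel NIC's VLAN offloads with ethtool -K is a common first troubleshooting step.

```
# /etc/network/interfaces sketch for a VLAN-aware bridge
# (interface names and addresses are placeholders)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With a bridge like this, guest VLAN tags are set per-VM on the virtual NIC rather than on the bridge itself.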
Rackmount certainly isn't as great a deal as it used to be, but getting the desktop equivalent of an R740xd with 256 GB of RAM and dual Xeon 6152s isn't going to be as cheap, IMHO. I could be wrong!
Why do you use XCP-ng over PVE?
He has explained many times that both are mature, solid software and everyone is free to choose; he prefers XCP-ng. Similar thing with pfSense vs OPNsense.
I have a video on that here th-cam.com/video/et54DxAC2uM/w-d-xo.htmlsi=uwrR4YkMSQPwCrJI
Business relationships, combined with what's most familiar.
Funny timing, I have about 10 of these I'd like to sell! Send people my way! Unless you'd like some options for your clients.
Any way these can be less power hungry, like 100 W?
I got lucky in picking up a free T640 with 8 x 1.94TB Intel SSDs. Picked up a BOSS card for boot and then used the rest of the PCIe slots for NVMe drives. I have two x16 slots left for future expansion in the front with U.2 drives. It had two 4110s, but I swapped them for two 5120s for core density. With all-SSD storage it runs in the low 200 watts for the homelab. Fun chassis.
Hi there, is it quite loud or is it fine for a homelab?
@electrocyper It's decently quiet for a server. The fans are all 80 or 90 mm if I remember correctly, so that helps with the tone. For my workloads the fans stay close to system idle speeds; the bigger heatsinks and air space of a tower probably keep it cool enough for me. I have mine in a spare bedroom, and shutting the door is enough to mute any noise it's making.
Too much power consumption plus maintenance for backup storage. You can use a cloud backup solution as a paid service; much cheaper and hassle-free.
Who makes your glasses?
Unless it's decommissioned servers for lab use, it's hard to recommend Dell hardware with all the proprietary limitations in the hardware and the BIOS whitelisting. Just get a Supermicro server and avoid the headache altogether.
Nice!
Intel CPUs? What would Wendell say about that? Unless you pay per core like in VMware, I don't see any reason to choose Intel these days.
The price is still a bit more on the Dell servers with AMD.
@LAWRENCESYSTEMS Oh, for the last servers I built on Dell earlier this year, AMD was the cheapest option by a margin of around 8% per core. Those were both faster cores, and they consumed way less power.
That was with the discounts though, which to be fair are hidden and follow no sense or logic, reaching something like 60% on some servers compared to Dell's pricing on their website. I just write up the server specs and contact one of their sales reps each time; I never go with their list pricing.
Strongly recommend you look at the R7515s. AMD EPYC, one CPU.
Dude, you got a Dell.
Does your back hurt? Because mine does and I actually get that reference!
I wish I had gotten an R740xd instead of my T7820 for my main home server... It's the same hardware (minus the NIC daughterboard and iDRAC), but I didn't plan on wiring my condo or getting a rack at the time, so I went with the tower for noise reasons.
Now, technically speaking there is a rack conversion kit for it...but I've been trying to find one for nearly a year. I do love the R730xd with the 12 3.5" bays that I'm using for TrueNAS though.
Yeah… a very small server at home… 😊 I want to buy twelve units… thank you for your information, Billy Gates Junior!!!!
PCIe Gen 3 and Intel-based CPUs.
XD
First 🎉
Lol... Cascade Lake? In 2024? So basically a 10+ year old platform. Slow DDR4, only 28 lanes of PCIe Gen3, and up to 28 cores per CPU.
Whyyyyyyyyyyyyyyyyyy?
I guess it depends on the workload you're going to run. I'd probably not get them for an AI cluster but they will host regular VMs just fine.
There's nothing wrong with DDR4, especially on a server hosting VMs, and those CPUs have plenty of power for a lab server. The sweet spot for price/performance in homelabs is usually 1-2 generations behind. Not sure where you get 10 years from; the server/CPU are 7 years old. Regardless, it's a great server for hosting VMs.
@lucky64444 2667 MHz DDR4, to be clear. And Skylake originally came out in 2015, so apologies... only 9+ years old. I mean, my own cluster is still Cascade Lake, and sure, it's fine? But in 2024 there are much better price/performance/efficiency options for server-class platforms.
Also... all these chips took quite a hit from the Spectre/Meltdown fixes.
A 28-core Skylake is slower than a 5950X. Just, ya know, for comparison.
Nice!