That ending was well worth the watch
in all seriousness when i saw that ending as he began to apply paste ot the second cpu i thought "no he isnt really," and then the legend did it and i laughed like a hyena for 30 seconds straight. **liked and subscribed**
23:29 I laughed so hard I fell off my chair. And after sitting down again, I laughed even more. That was not Xeon. That was Epic 🤣
Craft's wife: "Hey hun, how come the checking account is empty?"
Craft: ummmmmm
At first thought you were drawing a heart with the thermal paste xD
OMG... the "after credits scene" is one of the best things I've seen recently!
The perfect thermal paste application at the end of the video. Very well done!
love it!
Beat me to it, I laughed my arse off. Now I'm tempted to do that with all my future builds, lol
This method gives you 100,000 more IOPS!
I'll use it as a template for my builds! 😆
And the cat shows up right when Jeff needs a cat
Cats are good at that
13:39 - That... That was absolutely hilarious.
@linustechtips
I think we can improve on it ...by dropping them SAS drives!!
On one hand I hate that the projects don't finish in a single video... on the other, I love that I get to see the miscalculations, mistakes, or simply bad luck you run into doing these things, and we also learn from your mistakes! Thank you! 😊
Hey, I'd love to see a whole video about power consumption! That would be insanely informative. Especially comparing Sandy to Ivy, with GPU, without GPU, with GPU installed but blacklisted, with GPU blacklisted but a VM started and then shut down. Please, that would make a hell of a video!
I recently upgraded from a single E5-2620 v1 to a 2630 v2. I was surprised to find that the power consumption was nearly identical on my HP DL380p both at idle and fully loaded. This is just my personal experience, I'd expect better results with higher-tier CPUs.
Would also love to see some data on power on some of these servers and setups.
LGA2011 is terrible for idle power budget.
I'm glad to see that even as a YouTuber who definitely knows what he's doing, you still make the same mistakes as us.
Cheers !
You've got a better track record of ordering parts than the people who order fiber trunks at my work. Last time they were 40-50ft over and this time they were 20ft short.
This video helped me find the optimal thermal paste application pattern!
Drives moved from 6 screws to 4 (from 3 down to 2 on each side) with the addition of extra platters... this has been standard for about 7 years.
90% of the server software stuff goes over my head, but servers are just so dang interesting.
Me too, I understand the hardware but don’t do software.
When I saw the stack of hard drives, I was like, OK Linus, lololol
0:13 The 4 clacks of the CPUs hitting the table contact side down made me cry
All the little mishaps here reminded me of my journey upgrading my 3D printer recently. I feel your pain. Also, I thought you were going to demonstrate how that thermal paste application was actually fine, but it was even better
My server stack is full of hopes, dreams, and partial research
My best recommendation is to buy Cisco or IBM server hardware if you can afford it. Dell hardware if you can't. Nothing older than an M3 for Cisco or an R720 for a Dell. IBM will just be anything with a Xeon E5 or E7 v0 series CPU. Don't buy Lenovo servers if you want to be able to safely update BIOS and BMC remotely.
@@mndlessdrwer Buying the server is only part of the problem. The bigger problem is, where to find organs to sell, to buy a house, where to house your server rack? :p
u mean server rack
@@syn4x Nopony cares!
@@LeonSteelpaw learn to spell grammar
bUt CaN iT rUn LiNuX
Just for you, I'm going Hyper-V
Rifk
What about crysis?
@@CraftComputing My, my. Windows Server 2016 licenses still don't come exactly cheap, but they can be easier to manage drivers and firmware on compared to the Linux variants. My go-to solution is just to keep a Windows-To-Go image around for updating more troublesome hardware firmware with.
@@mndlessdrwer I don't have an issue with Hyper-V like most people. It works for my needs, and runs on a W10 Pro box. Have an older i7 with 32GB of RAM, RAIDed storage, and the OS lives on an SSD. Have 4 Debian VMs because they run well compared to Ubuntu, with pfSense, a UniFi controller and a web server, and a spare for projects.
Stumbled onto you a few weeks ago.
I love that you include your mistakes. You're 100% relatable to anyone who does this stuff for a living. And your beer choice is almost always top notch.
Cheers, bruv!
Love how you also show the problems that you face, it makes your videos all the more genuine and interesting! Plus it shows a great quality to your character, keep up the great content!
Not editing the mistakes makes this very relatable, well done!
Can confirm that using the grommets as standoffs works fine, I had the same issues when I built mine.
I also took some 3M heavy-duty double-sided tape and attached it to the inside of the case cover, 2 small strips per drive. This was probably overkill but I figured better to err on the side of caution.
I can say that this works with shucked Western Digital drives with no problem.
Thanks again for your content.
Nice blue color! Would look nice in my purple Sun rack 😍
Awwww.... I love my APC NetShelter, but I'd love a Sun Sparc Rack!
Yes, as your second rack you should get a Sun rack! Only issue is I have no cage nuts. Instead I have tapped holes, so many rail kits won't fit 😭
Oooo, that would definitely be a deal breaker. Rails are a MUST!
To mount my Supermicro 2U FreeNAS server I used my angle grinder to make slits for the rails to fit 🙈 and my Apple Xserve needed its center holes drilled out. Maybe if I keep the rack for a long time (I like it a lot and it looks good) I will make cage-nut rails for the rack. I can easily pull out the rail strips while the rack stays intact.
And it came with the complete power distribution kit! I can feed it with 4x 16A 230 volt, and it has 2 full-size strips with 64x C13!
His wife is canceling all their credit cards and closing the bank account tied to eBay tomorrow...
Still have the option for money orders to AliExpress.
Packed bags and left years ago.
@Collin 👉😏
LOL
23:49 Jeff had enough of always being the nice guy, huh?
Just because I'm nice doesn't mean I don't have a sense of humor.
Now we know if we don't hear from Craft after this video, his wife has sent him to the doghouse :D
Can confirm, and the WiFi out here is terrible. Maybe if I bought a really nice outdoor WiFi6 AP.......
It's not exile, it's an excuse for a man shed/studio/climate controlled doghouse
Any chance of getting a quick tutorial on the LCARS screen you had in the background previously? Didn't see it on your video list, granted I have only subbed for one day so far
@@CraftComputing next video: running metal-shielded exterior-grade fiber to the doghouse.
www.lcars47.com
lol the end. Nice!
OK, I do server installs and upgrades on a daily basis. The first thing to always do is planning, second is more planning, third is checking and then ordering, and fourth is completing the upgrade/install.
But these will serve you quite well for a bit of time. These machines have been cropping up all over the show and they are still good to go for quite some time.
I recently set up 2 x HP EliteDesk G1 boxes with i7 CPUs and 32GB of RAM running Proxmox, plus my old firewall was converted to Proxmox Backup (HP Compaq 8000, 4-core CPU with 16GB RAM and a 14TB Seagate IronWolf), and Proxmox Backup works AMAZINGLY well.... And all my drives and cards fit too. Your cat is awesome also!!!
Tom, just the tip.... ROFL
I used to run XCP-ng on my main server (Xeon E5-2650 v4, 64GB RAM, 2x NVMe and 6x 10TB HGST), but after seeing you work with Proxmox I tested it and am now using it instead. I can highly recommend the CPU by the way. Thanks for a very good video 👍
You will probably be happy with it for a while.
I run a DL380p G8 and a DL360 G7 with a small ProDesk 600 G2. Honestly pretty happy.
I bought the same server. It DOES support SAS drives for the 8 disks on the LSI controller. You could buy a physical adapter that converts SATA to SAS, but you can also do what I did and Dremel out the power and SATA connector ends to allow for the SAS pinout. Lastly, the server can support two SSDs near the power supply, bringing the drive count to 14! I want to be clear that this is a very unique circumstance and that typically you would have needed another RAID card.
Best part about buying used Dell or HPE is the docs and parts reference. Nothing better than an exact part number.
Congratulations on reaching 200k subscribers. Your videos are always exciting to watch, and your explanations are straightforward, and not overly complicated. Keep up the great work 🍻🍻
"anything that can go wrong, will go wrong"
Especially while recording it.
I, for one, really appreciate the honesty and the showing of the mistakes. These are issues that, even having built several servers, I would not have thought to consider. Such as risers being of different heights even for 1U servers. Or the hard drive mounting being for the third hole that the hard drives didn't even have. I definitely wouldn't have thought of that. So, by showing these, the value you're providing to us only goes up.
Would the remote video editing be enough to make its own video? I find that very interesting and would really like to see both how you go about setting it up, but also how it actually is to use and edit remotely. What would you say (or find) to be the most important pre-reqs for doing such a setup, etc.?
And I would certainly have watched a time lapse of the “off camera” work, but understand if that would be too much of a PITA to do with needing to blur out links etc.
So glad you pointed out these Hyve servers. They're amazing and amazingly inexpensive.
Let's see how much longer they'll stay inexpensive. I'm picking up 2 rn
@@hooami6245 He showed these a few months ago and they haven't gone up or become harder to find.
My homelab:
2 x HP DL360p Gen 8's, with 1 x Xeon E5-2620 in each and 48GB of RAM.
2 x HP MSA60's with 12 x Seagate Constellation 3TB SAS drives. 24 total drives.
Both DL360P's are connected to both MSA's using SAS cables and they're both using HP P822 RAID cards.
1 x i5-4570T, 8GB of RAM and a GTX 980 in my old Antec 400 for Plex, with a direct 1Gb link back to each server for direct storage.
For live migrations I have a 10Gb Link between each server and another 10gb link between each server and my workstation.
VM's have a dedicated 1Gb Intel quad port NIC.
2 x APC UPS 1000's.
Amazing video - where did you purchase your Star Trek t-shirt from, please?
I am watching everything I can to build a Plex server with the most space available, as I currently have 15+ externals hosting my movies and figure I should build a media server, looking to go the dual Xeon route with 4K direct play. Love the channel, very informative.
I bought myself a Supermicro board with a Xeon D-1537 SoC.
It has 8 cores, 16 threads, currently 64GB of unregistered ECC RAM, an LSI SAS2116 chip with 16 ports of SAS/SATA, and built-in IPMI.
It runs Proxmox; the LSI chip is passed through via PCIe to a fileserver VM, where CentOS runs a ZFS dataset on 8 IronWolf Pro drives.
An HP NX3031-based network card connects it to the network with a 10GbE DA cable.
It is capable of 128GB of memory using registered ECC, so I have some room to grow.
It's protected by a BlueWalker PowerWalker 1500 UPS that has saved me twice already.
Dude. You rock. I would have been 3 pints deep after realizing I ordered the wrong drives. I can't wait to see these up and running!
dude, that pride starship shirt is badass
I love it! Need one
Nothing says real-world problems like this 😂 keep up the good work dude, also that thermal paste application was perfect
BuT tHe ThErMaL pAsTe!!!!11!!
Can't believe I'm getting fooled into thinking this guy knows what he's doing every time... Guess that's part of the fun
Tacos fall apart and we still love them!
You've been an inspiration to evolve my homelab. I started with a Dell R520 and bought a GRID K2 to do what you did for remote gaming, but these are gone and now I have a Dell R730XD with a Quadro M4000, 96GB of RAM and a mix of whatever SAS & SATA drives I could find. Fun fact: I bought a male-to-female extension SATA cable and could fit a spare 4TB 3.5" SATA drive I had too 😊
For backup and for running some of the 20 VMs I have a QNAP TS-563 with the QM2 10Gb dual NVMe card directly connected to the Dell.
In Greece it's not as easy to find such cheap servers as you have in the US. Taxes and shipping are too expensive..
Know I'm a little late to the game here, but FYI a few SAS drives in the Chenbro is not an issue. 8 of the ports are on an LSI SAS2008 controller.
I love how the drink gets lower with each edit cut... cheers!
Don't blacklist the drivers. Use driverctl to bind a specific PCI card to the vfio driver for passthrough. While I haven't done this with video cards, it works great for two identical HBA cards where I want to use one for Proxmox and the other passed through to a VM. Should work the same for video cards.
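For anyone who wants to try it, here is a minimal sketch of the driverctl approach described above, assuming driverctl is installed and the vfio-pci module is available; the PCI address is a placeholder, so substitute whatever lspci reports for your card:

```sh
# Find the PCI address of the card you want to pass through (the address below is an example).
lspci -nn | grep -i -e vga -e sas

# Persistently bind that device to vfio-pci instead of its normal host driver.
driverctl set-override 0000:03:00.0 vfio-pci

# Verify the override, or remove it later.
driverctl list-overrides
driverctl unset-override 0000:03:00.0
```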
Yeah my server stack is a DS920 and fits in a cabinet with the cable modem. It’s mostly Plex but I’m wanting to try pihole.
I used to be a sysadmin in a storage corp test lab. 4 servers in a day is easy. I've done 10, with 35 in that week, for prototyping. Then again, I didn't have to deal with mounting hard drives. I had drives on sleds and a backplane.
4 servers is easy if all of the parts are already guaranteed to fit.
I usually don't comment or thumbs up videos but I loved the end.
Also, you make me want to spend lots of money, as my home lab consists of an RPi4 and a repurposed A10 APU laptop that runs my Pi-hole, UniFi, RAID, and network shares. You keep showing me more things I want to do and don't have the means to :-)
The end is pure gold!
Damn, I'm hypnotized by the level of your beer... slowly but constantly going down...
12:22 Every time, man. Every. Single. Time. I feel you on this one.
This is pretty much why I bought three Dell R620s... no hassle, it just works, and they have three expansion slots since I need at least two: one for an FC HBA card and one for a 10Gb NIC for my VMware cluster.
I've never done it, but as far as I know there is no need to blacklist the video driver; the card can be stubbed out with pci-stub. Of course, if the host doesn't use an AMD card it's easier to disable the whole driver.
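As a rough sketch of that pci-stub idea (the vendor:device IDs below are just examples, and the GRUB step assumes a Debian/Proxmox-style system), it could look something like this:

```sh
# Look up the vendor:device IDs of the GPU and its audio function,
# e.g. [10de:13c2] and [10de:0fbb] -- yours will differ.
lspci -nn | grep -i -e vga -e audio

# Have pci-stub claim those devices at boot by appending their IDs to the kernel
# command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet pci-stub.ids=10de:13c2,10de:0fbb"
# then regenerate the bootloader config and reboot:
update-grub
reboot
```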
I love the taste in the music, it's relaxing.
My setup? A Dell R200 with a Core 2 Duo processor and 2GB RAM, and an IBM x3650 M3 for my "NAS". Need to get a lot more high-capacity 2.5" drives, 'cause I'm running out of space FAST.
For lower-performance bulk storage you might want to look into shucking portable Seagate external drives. Or if you're crazy add a USB3 card and connect USB drives externally.
I currently have a full server rack with an HP Gen8 DL360p and a Cisco 2960-S 24-port gigabit switch. I have three desktop computers in the rack in my office as well. Two battery backups, although they don't keep my server up due to some unknown issue. Eventually I will be shutting down one of my computers and retiring it as it's old and outdated. I have a 12-port keystone panel with other items as well. I need more servers lol.
Great video, feel sorry for all the inconveniences. For the next build maybe you should consider the HP DL360 G8 servers; they offer a pretty good deal with great features at a great price, with 4 LFF or 8 SFF bays.
I personally get them pre-configured from the esiso inc guys on eBay (no promo) with the RAM that I want, because it tends to be cheaper than buying it separately, and maybe a bit lower-end CPU, then consider upgrading if I can get the CPUs cheaper separately, so I end up with twice the CPUs.
To play around I got myself one with 2x E5-2640 and 64GB of RAM for $190 + 4 trays at $7-8 each. But the ones with the E5-2650 v2 and 64GB I think are ~$284.
Happy shopping, again great videos and hopefully everything is OK with the flooding that you had.
🍻 cheers
Thermal paste application was perfect.
Wow, 4 servers in one day... *applause*
I'm currently running dual X5680s on a Supermicro board. I also have 48GB of triple-channel ECC memory.
The Linus dig was Super Bowl commercial worthy. 🤣
Purchased 3 servers in all during lockdown; what I haven't spent on fuel and going away I have certainly made up for. HP Gen8 MicroServer, Dell T620 and a Dell R730 (oh yeah, and a few Dell workstations for pfSense). Now to start selling the old servers and getting some funds back. I only live in a flat so noise is a concern; the Dell R730 is just about as loud as some of my switches in the kitchen cupboard, but the T620 is dead quiet in my use and lives in the bedroom, lol. I still have my old HP MicroServer N36L and I'm not sure that will ever go; it is in a bit of a mess so not worth much, but with 8GB of RAM it's still useful as a lab.
Are you sleeping on the couch yet?????
My server stack is an old HP StorageWorks 1600 G2 as a NAS, an HP DL380 G7 as a game server, and an old Dell CS24-SC to run a very, very old project. I need a couple more blades to finish the rack though.
Ordered SAS instead of SATA... yeah... I've done that. Recently. Luckily my storage server is a tower with plenty of PCIe slots. Since it's no longer running a host card for my Nvidia Tesla units, I just replaced that with a SAS controller and got some SATA-to-SAS adapters for the power connections. Another note on the Tesla units... We moved to a house without a basement and my wife won't let me run the jet engines out of my networking (formerly linen) closet... something about wanting to be able to sleep without ear bleeds or something. I'm thinking I'm going to have to build an air-conditioned shed with soundproofing just to house all my servers and run a 10Gbps line out to it.
Currently 1 Bottlerocket server that was purchased for firewall purposes.
1 Barracuda that was free from work, which runs an instance of Linux and Pi-hole directly on the hardware.
1 HP DL380 G7 that is my virtualization server, 64GB of RAM with 3 hard drive pools.
1 HP DL380 G7 with 32GB of RAM and 8x 300GB drives that was serving as a backup server.
1 Dell R610 with 64GB of RAM and 6x 900GB drives which is going to be used for home security, as it will be set up with some networkable cameras around the property.
The HP units were sourced at under 200 dollars apiece, pre-configured with working RAM and drives.
The Dell unit was 45 dollars from a won bid, and the drives cost 28 dollars per drive, which was less expensive than the average.
The Bottlerocket server was 80 dollars and has a Celeron processor, but for the firewall traffic it monitors so far it has done well. Paired with 8GB of RAM and running pfSense it has done the job.
The Barracuda was of course free; its specs are also a Celeron processor and 8GB of RAM. Pi-hole runs nicely on it and provides the whole-home solution for ad blocking.
I feel your pain. I'm in a similar process building an HA Proxmox cluster but I try to avoid some of your challenges by sticking with 2U rackmount cases. So far, so good. We'll see.
Just can't catch a break when building servers!
Hang in there Jeff!
I love when I'm at my computer desk getting frustrated as hell and one of my cats climbs into my lap. Sometimes it feels like they know when they need to interfere lol.
A wife/GF would be better...then again usually they just come to tell you something else is broken haha
"You like Segway's" hahahaha made my day lol
I like the new thermal paste technique!
Oh dear, the thermal paste application is exactly my type of petty
I think those servers are *great*, but running HP/Dell/Lenovo servers might give you better availability of parts, compatibility, and documentation. Also, with most of them you can still get firmware and IPMI updates for still-supported hardware.
I ran 2x Dell R410, R610, R710, 2x DL380 G6s, 3x DL380 G8s, 1x Supermicro (something something...), and a DS4243 connected directly to the supermicro for large disk storage.
And yes, my power bill was disgusting. :-) Great build, Jeff!
'Today we build 4 servers!'
Me, installs RAID for the first time, ever.
Honestly, those Zeus servers from eBay are amazing. Grabbed one myself with 2x 10-core processors and never looked back.
Worth watching for the epilogue alone! :-)
I get the storage. Still wondering about the processing power. Seems like overkill. You went into great detail about how they will be backed by failover for never-ending service. I do question your virtual machines. Do they need that level of power?
Ok, I LOL'd at the cut scene at the end.
Good video, can't wait to see the setup running! I used to have a nice HA setup with 3 R610's and a 12-bay R510 for storage. After heating my room for many years I went to a single tower Lenovo ST550 with soon-to-be dual Xeon Silvers :)
That was the greatest ending ever XD
I'm running 1 Haswell PC as my server. It's running Proxmox, with a GTX 760 passed through to macOS / Windows (when I want to switch; as long as the other is off you can pass it through to both) and XPEnology with 2x 16TB Seagate Exos (shucked from Expansions) passed through to that in RAID 1.
Pi-hole, nginx, Apache, Plex and Minecraft servers all run in LXC containers too.
You could keep the SAS drives and just use adapters, and get a SAS HBA card and use breakout cables... :)
Dude …. Gear freak…. Love your channel
I am currently running 4 Dell PowerEdge R720s with 2 Quadro 4000s in each, and also 2x Xeon E5-2630 v2 CPUs in each.
Thanks for all your videos, definitely have learned a lot from your channel. keep the great content coming!
Missing the beer, but the reply to Tom's Tech was worth it... with foam.
Jeff, nice builds, have a great weekend!!
Fun fact: PCI slots are spaced 0.6 inches apart last I recall checking. So you're probably off by a slot with that PCIe riser.
You order like I used to... Double check first and second to make sure it's right.
I try to avoid hit drives as they have been the worst drives I've had in terms of breakdown. I would spin them up under load, bench, and do a full write and read to see if you get problems... Mine were new and only have 3-4k hours on them.
Personally, if I had those units, I would attach them to a DAS. All high speed. Maybe even PXE??
If I don't have local storage, or at least linked it doesn't feel right.
(Self primary backup)
Then of course it's all in the HA.
All my servers have a DAS/SAN. Each, only now doing a share. (Next week).
Then I have a surprising backup which I got last week (costing around the same as one of your CPUs, 45W TDP under load).
Did you select SATA due to SAS running costs?
It is inviting for a $167 12-bay... Can't get prices like that here.
I lol'ed hard at the end. Thank you for that.
You can get a DL Gen7 that supports DDR3 for even less. DL Gen8 are popping up cheap on eBay as well. However, it supports DDR4 and RAM can be pricey as of today. I would definitely go with the Proliants. I wonder what’s the reason behind your choice Jeff. (Edit: typo)
HP servers can be incredibly nice to work on, but ongoing support for them can be quite tedious if you aren't willing to pay out the nose for a support contract to download their SPP. Like, you can download individual firmware packages and update them, but it is more annoying than I'd like. Cisco and Oracle are, unfortunately, the same way about their driver and firmware support, with Dell and other third party companies being more open with their firmware downloads.
Ah yes, what I was waiting for: drinking beer and watching server builds.
Shame about your problems. At least Rambo is there to cheer you up! :D I did a sort of copy of the combo server build you did back in January 2020, but I ran into problems of my own... Like accidentally ordering the micro ATX version of a Chinese X79 motherboard (the Machinist one to be exact) and ending up with only one PCIe slot instead of three. Luckily I had a PCI-to-PCIe adapter lying around so I could plug in both the LSI card and a graphics card to set it up for the first time. Still need to finish setting up Proxmox though, as I have some other issues also.
When I look back, it would perhaps have been a better idea to order a Supermicro motherboard as that has its own video output integrated, but then I would have to sacrifice the M.2 NVMe slot.
Keep up the good work!
I was in the middle of creating a custom Windows image when this video popped up. Time to spend more money!!! lol
My home lab:
1. Unifi DMP
2. Unifi 24port non-POE Switch
3. Unifi 5 port POE+ Switch
4. Mikrotik 10gig 5 ports switch
5. Dell PowerEdge R610 - 6x 900GB SAS drives, dual Intel Xeon E5645 2.40GHz (12M cache, 5.86 GT/s), 128GB DDR3 ECC RAM, running Windows 10 Pro (will be switching to Server 2012), Dell 10Gb dual SFP+ card
6. Dell PowerEdge R710XD - 12x 3TB SAS drives, dual Intel Xeon E5645 2.40GHz (12M cache, 5.86 GT/s), 128GB DDR3 ECC RAM, with Proxmox for VMs, 10Gb dual SFP+ card
7. Chenbro NR12000 - 12x 10TB SATA drives, Hyve Zeus dual 2011-0 1U barebones. I think I would use it as a desktop or some type of editing system.