Have you thought about building a 1U server?
I have, but since my rack is located right next to my desk I am worried about those 40mm fans driving me insane with their noise output. And upgrading them to Noctua fans costs as much as another server so I settled on one 2U and 2 4U servers.
Yup yup, have 2u left in current rack. Concerned about power in room and house at some point.
Absolutely, I've avoided a rack at home for noise and mostly depth reasons. But I love the idea of home racks. Did a few 4U rackmounts at the office a few years ago, but used generic cases so the noise wasn't an issue.
But I've never stopped looking for a quiet(ish) short depth server.
First time viewer, this was a great video! Subscribed! Looking forward to more.
Lack of 10Gb networking would be my one concern. Lots of new switches come with at least a few 10Gb ports, and using those for the server uplink can get you a good performance boost if you have multiple devices hitting the server at once.
Also, couldn't you get an AM4 board? If you could, it would be more efficient.
2U can be far quieter than 1U, with more component choices; if you go 3U you get full-height PCI slots, and 4U is for many disks or multiple GPUs. Plus you will need a 10G network for very fast VM migration between nodes (1G feels slow)
I actually valued your “1U over” joke... 👏🏼
Haha! Thanks! I almost cut out my reaction to my own dumb joke, but then I left it.
@@TechnoTim was the best!!
Me too 😂👍
@@TechnoTim I heard it and then realised RU laughing would work also.
Bad puns are okay if someone is not making them all the time.
You installed the RAM in dual channel mode. With 4 DIMMs, you should have 2 DIMMs on each side of the CPU. Normally A2, B2, C2 and D2. But it depends on the board.
Dude that's awesome! Looking forward to seeing what you get up to with the new servers! Loved the pun in the intro as well!
I'm brand new to the home lab game. Because of space constraints in my home, my desk is right next to my 25U rack. This means that EVERYTHING in my rack has to be QUIET. My server(s) and my desktop will all go in my rack. While I'm figuring out what meets my needs and what doesn't, the need for quiet causes me to shy away from 1U chassis.
1u servers are unfortunately naturally loud, due to fan size constraints.
I'd really like to see some short form factor 2u servers, but I've never found any in my many years of looking
For best performance you should fill all of the blue or black slots first. The 2011 socket has 4 memory channels; putting all the RAM on one side only uses 2 of the channels and increases the number of ranks loading that side. Depending on the DIMM and CPU/board combo, you can have slower memory access when you have more ranks loading the bus. The fastest memory speeds come from having 1 DIMM per channel and no more than 2 ranks per DIMM. If those are 2Rx8 32GB DIMMs, then 1DPC will be the fastest you can get. 4Rx8 is the same as 2DPC and slows down memory access.
Thanks! I did make that switch after the video, then shortly after added another 128GB for 256GB using all slots!
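(If you want to sanity-check DIMM population on a Linux box without pulling the lid, a minimal sketch like this works; it assumes dmidecode is installed and you run as root. Populated slots report a size, empty ones report "No Module Installed".)

```python
import re
import subprocess

# List DIMM slot population from SMBIOS data (type 17 = Memory Device).
# Assumes a Linux host with dmidecode installed; must run as root.
out = subprocess.run(
    ["dmidecode", "-t", "17"], capture_output=True, text=True, check=True
).stdout

for block in out.split("\n\n"):
    locator = re.search(r"^\s*Locator:\s*(.+)$", block, re.MULTILINE)
    size = re.search(r"^\s*Size:\s*(.+)$", block, re.MULTILINE)
    if locator and size:
        # Empty slots show "No Module Installed" as the size.
        print(f"{locator.group(1):12} {size.group(1)}")
```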
The Xeon and the motherboard you have support quad channel memory, but unless the motherboard manual says differently, you installed the RAM using dual channel as opposed to quad channel. I would recommend checking the motherboard manual. Without quad channel you're losing out on half of the possible performance and bandwidth that your motherboard and CPU are capable of supporting with your memory.
"Do you have any 1U servers"
nervously looks left at the R610 that i bought and haven't touched. Yeah i need to do something with that
haha!
My 1U servers are... a Dell PowerEdge 860, and an HP ProLiant DL360 G5.
Everything else is either 2U or a tower. (DL185 G5 + 2U disk tray, PowerEdge T610, and a PowerEdge T410.)
I am currently running an old 1U server for my virtualization and home automation/home lab purposes. I'm running an old HP ProLiant DL360 G7. I was first running it with VMware but decided to go to UnRAID due to the zero overhead (runs off a USB flash drive) so that all my drives were used for storage. After nearly 2 years on UnRAID, I've just moved to Proxmox. Currently, I'm running an on-premises PBX system in VMs, Home Assistant (Supervised), and several containers.
I'd _LOVE_ to move to a more modern server, but money is tight, so for now, I'll make do with what I've got. Even if it is messy looking LOL. Great video!
Sweet man! I'm jelly, these look very nice! Sounds like they're well balanced!
Thanks! Yeah, nice little virtualization servers!
If you want to HA cluster you gotta buy a third one :) High availability only (really) works with an uneven number of servers.
I’ve been into PCs for years, homelab’ing for 2 and working in IT for 1 year and that’s the first time I’ve actually heard or seen a SATA dom. Might pick one up, it’s pretty cool
You had me at won u over. 😂
you 1 me over
U 1 me over @ 1 u over
Thanks for your video. I first built a Proxmox cluster and then ditched it for a Rancher Kubernetes cluster with your help. It is awesome. Thanks. Good luck with your new servers.
Just subscribed!
I just bought 2 half-depth 1u SuperMicro Servers and changed out the E5-2660v3 Processor with E5-2696v4
Swapped all the fans with Noctua A4x20's & a Dynatron T318 heatsink for much quieter operation.
I have the half-rack in my office and didn't want a full sized server or the sound that would go with a 1u.
They've been great ESXi hosts and doing Plex Transcoding and several other functions, Virtualized PfSense, OpenMPTCProuter, and a few thin clients I was using.
I’ll have to check out those fans! Thank you!
@8:53 SM has optional parts that should allow mounting two SSDs in the slots above the two leftmost 3.5" bays if you are looking to stuff large drives in the 3.5" bays.
Interesting! Do you know the part number?
Also, Supermicro Rocks!
4:32 Hey, unfortunately GVT-g is not a feature that the E5 line of CPUs supports, because they don't have integrated graphics. The graphics in this server are provided by the BMC on the motherboard, which is most likely an ASPEED AST2400 or AST2500. But yeah, GVT-g is very cool and I use it in my Xeon E-2200 system. Great video
Thank you! I found out the hard way! Good catch!
Just bought a decommissioned 1U 10x 2.5in dual Xeon Supermicro beast to consolidate all my servers I had lying around. It's awesome! Great vid btw
Congrats! Thank you!
1u server about to be the loudest component in your rack.
I try not to use my 1u server because it is louder than my 2u servers. How are those for sound?
My R230 server is quiet after setting the fan speed
Very true, but given the location in the basement here, probably not a worry in this case I guess
Not too bad! Migrating things now and have most things at full tilt. I will give it a few days to settle down.
@@wuhaoecho R310 are very quiet as well...
Sad to see an R710 coming out. I run 2 of them myself, one for my virtualization and one as my NAS. But this is a really sweet build to replace it with. Lots of good things here, and I don't know why i never considered trying to build a server from a bare bones. Thanks for this video!
After watching your great videos, I wished I had room in my rented apartment for a server rack... Great work as usual very informative, inspiring and of course entertaining... Thanks...
I might be wrong, but in order to run the memory channels properly you have to install your 4 sticks in the blue slots, not all of them on the same side
Could you do me a favor and double check the motherboard manual about the ram slots? I feel like you should be using just the blue or just black slots to get the full multi-channel memory bandwidth, but I also know Supermicro does silly stuff (like making the slot colors meaningless).
Went a similar route as you; I had an HP DL380 G7 & a Dell R810 that I started out with as I got them cheap. But of course, waiting for my new house to be built, I haven't bothered to set up my current house to accommodate a server rack, and these two servers have been a little too noisy. I happened to pick up two identical HP DL360 G7 servers from an auction that I was going to on-sell, but after powering them up for testing I noticed how much less noise and heat come from these. I have now decided to keep these and decom my other two.
Looking forward to seeing some tutorials about HA rancher setups, I have been following along with your rancher/docker guides and have been bumbling my way through adding a couple of new (virtualized) nodes to the rancher cluster. But it will be great to see how to do it properly!
You got it!
I have a 1U server. I love it. I use the Noctua fans tho. Really good and are very quiet. I'll be building a second 1U server next year.
Tim as always you're of great help thanks dude!
Any time!
I have a Dell R610 which is old, but a great 1U server. It is dual socket, has redundant power supplies, 6 hot swap drive bays, and 2 external and 2 internal (proprietary) PCIe slots. I have one of the internal ones used for a Dell RAID card. The other two are used for 10Gb networking and a GT 710 GPU. It also has a useful display out on the front of the server.
Currently spec'd with 96 GB RAM and 2x Xeon X5690s.
Awesome, thanks for the demo and info, have a great day and this video has 1U me over lol
First time I've come across your channel! Good content so far, and I really enjoy the way you edit the videos. I'm a Network Admin by trade, and love physical hardware. There's a few things I wouldn't have done the same way you did, but that's mostly down to personal preference (e.g. Proxmox over ESXi, consumer SSDs over Enterprise). Keep up the great work, and +1 subscriber!
Awesome! Thank you!
I really like your videos. Thanks for making them! :)
Thank you so much!
Hey Techno Tim, the RAM isn't installed correctly. There is a quick start guide in the extras box that has a chart showing where to install them. If I recall correctly, fill the blue slots first.
Additionally, the tan things in the drive trays are just dummy spacers to keep the trays in position, not drive adapters. The adapters you have are exactly what's needed.
Thanks! I've since added more ram that takes up all slots ;)
Hearing “reliable” when referring to an SM SataDOM hurt, ngl. I used to work in support for a vendor that used SM boxes, and the amount of those we replaced on a daily basis was insane.
Weird question, if you don't mind. How much is your electric bill per month?
Probably not all that much, since servers have a much higher bar for efficiency in their design than a normal consumer PC
Just ran some quick tests, at full tilt with all my workloads I am using ~150 Watts less than my previous 2 servers. More to come.
Great question! I'm buying my first server soon and I wanna run a lot of VMs, so this has been a main concern. Thank you for the info!!
I run a 1U server and I'm at 110-130 Watts (in France, it costs me like 15€ or 18$ per month for the electric bill)
@@fangedhex2042 that's not bad at all especially if it runs 24/7
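(For anyone wanting to estimate their own bill, the math is just watts to kWh times the local tariff. A tiny sketch; the 120 W draw and 0.18/kWh rate are only example numbers taken from the thread above, so substitute your own wall-meter reading and rate.)

```python
# Monthly electricity cost for a steady 24/7 load.
watts = 120              # average draw at the wall (example from above)
price_per_kwh = 0.18     # local tariff (example, roughly a French rate)
hours_per_month = 24 * 30

kwh = watts / 1000 * hours_per_month
print(f"{kwh:.1f} kWh/month -> {kwh * price_per_kwh:.2f}/month")
# 86.4 kWh/month -> 15.55/month, which matches the ~15 EUR figure above
```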
Great video, but your RAM is only running in dual channel mode!
You're missing out on half of the memory bandwidth :(
Thank you so much! I was basing this off the quick reference guide www.supermicro.com/QuickRefs/superserver/1U/QRG-1605.pdf
@@TechnoTim For the amount of RAM and the CPU you are running you really should be running it in quad channel mode. Virtualization will benefit greatly.
@@TechnoTim Dimm layout on that motherboard is by channel from back to front: B1,B2,D1,D2 CPU C2,C1,A2,A1
For the best performance with 128 GB and quad channel with your Xeon, use only B1, C1, D1 and A1. Don't use the #2 slots unless you are fully populating the RAM slots.
Came looking for this comment...
Thought that ram install couldn't be right.. 😵
The memory is not plugged into the correct slots; check the lid for the optimal population. They have to be in same-colored slots
I actually use two 1U servers but they aren't spec'd the same at all. Both have dual Xeons (don't know the models). Both run Proxmox, and one of them (primary) runs Kubernetes. The second runs FreeNAS. Maybe some more RAM down the road and I'll do HA. Can't wait to see your videos!
Server 1:
HP Proliant DL360 G6
72 GB ECC RAM
1 TB Storage
Server 2:
Dell R410
16 GB RAM
3 TB (for now) Storage
Storage 1:
Synology DS218j
512 MB (maybe) RAM
4 TB (redundant) Storage
Soon to be Switched to server two
Storage (sorta) 2:
Sun StorEdge C4
20ish LTO2 and LTO3 tapes
Not installed
Another great Vid Tim, congrats on the new servers man!
Thank you Tim, it is nice to see a creator include the bad with the good in their content. I am interested to see how you are going to connect these to your external disk pack.
Thank you!
Those Sabrent converters are pretty nifty. I had two, but the only downside is that they add extra failure points with the way they relocate the connectors.
For my Dell server I was able to find 3.5 to 2.5 adapters that simply moved the 2.5 to the side so that they would directly connect rather than having to have a PCB and additional connectors. It's just a dumb bit of metal that pads out the extra space that a 3.5 would use. I'm colocating the server so I feel more comfortable with less possible failure points.
I have not thought about a 1U but I have thought about going to an all rackmount setup. I currently watercool and use my server as my own "cloud gaming" setup. I run UnRaid currently and have a Windows 10 VM for my gaming machine, and I have plans to add another GPU for my wife at some point. I also have it all watercooled in a Caselabs case. If there was a rack that allowed for radiator and pump mounts and had a distro plate or something, then I could easily see myself going to a rack system. P.S. Watercooling is only done for noise on my end. I don't overclock much, I do neat and tidy builds, and I use clear coolant for long term stability.
My Server is used for the following:
NextCloud
Plex
Multiple Shares for Work
NAS backup
OneDrive Backup
"Cloud" Gaming VM
VPN
Radarr
Lidarr
Im sure there is so much more that I can do but I'm learning as I go.
We have a bunch of HPE DL360s at work. They're 1U but can accommodate three PCIe cards each (1 full height and 2 half-height). They also have two CPU sockets each. I know Dell also offers 1U servers with dual sockets and three PCIe cards each.
Damn man! Nice set up, looking forward to the next updates!
Thank you!
Hi,
I really like your content. Thanks!
Two tips: the RAM sticks are populated incorrectly, you are using 2 channels instead of 4. And you don't have to remove the riser to plug in an expansion card. Just install the card while the riser is screwed in - this is the proper order.
Have a good day!
PS I'm a little jealous because I have an earlier generation - 2x DL360p (1x E5-2680v2, 128GB RAM and 7x256GB Samsung 850 PRO each).
Thank you so much! I was basing this off the quick reference guide www.supermicro.com/QuickRefs/superserver/1U/QRG-1605.pdf
Supermicro servers are a great and highly flexible option. They are sturdy, boot fast, and reliable. However, a 2 node 2U is my recommended option. They are the most cost efficient.
Love it, Tim. Very inspiring :) I hope one day I can talk as confidently as you.
Cool video! I have the R410 and use it for virtualization. It is powerful and reliable. Love it!
Great to hear!
Great choice going with Supermicro. I have a Supermicro 24-bay, 36-bay, 16-bay, a 4-bay 1U like yours (with 6 10GbE ports), and a 1U half length I use for my pfSense server. If you mix X9/X10 and X11 boards, the IPMIView app is not super great compared to Dell iDRAC. But overall I like my Supermicros. Set the fan setting to optimal in IPMI and they are even more quiet.
Thanks for the tips!
COOL! thxn for the inspiration Tim ! :)
I'm pretty sure Proxmox recommends a 3 node cluster at a minimum for N+1 redundancy. Just something to consider.
Hey can you do a video on how you will connect the disk array to these servers?
I didn't understand though. Why two 1U vs one 2U with dual CPU and easier PCIe hardware placement, cooling, etc.? To play with clusters? What was the overall budget, if I may ask?
For HA purposes. Yes, you could use 2U and it works just fine, plus it has enough space for PCIe.
However, what if the motherboard fails? Or the NIC fails? Or the SATA DOM fails? That's why having two 1U servers might be helpful in this case.
@@nguyenthanhdat93 2x hardware = 2x chance of failure =) Like if you have 2 servers with a failure rate of 2% each, then the chance of at least one of them failing is roughly 2+2=4%, and there's only a 2%*2%=0.04% chance of them failing simultaneously. But I doubt it is for HA, imho.
Clustering and the ability to take one server down for maintenance without taking down all of my services! There are links in the description with the entire kit!
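(The failure-rate arithmetic above is a close approximation; here are the exact numbers, with the 2% per-server rate taken from the comment as an example figure, not a real-world spec.)

```python
# Exact versions of the failure-rate figures above, assuming two
# independent servers that each fail with probability 2%.
p = 0.02

at_least_one = 1 - (1 - p) ** 2   # complement of "both survive"
both_at_once = p * p

print(f"at least one fails:  {at_least_one:.4%}")   # 3.9600% (~ the 4% estimate)
print(f"both fail together:  {both_at_once:.4%}")   # 0.0400%
```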
Tim, could you get into some of the mundane issues, like what is your monthly electric usage and bill? Would be great for you and your cohorts to do videos on things that screw up and how you fixed them... Right now I am having an occasional but recurring problem burning a Proxmox stick and getting it to load without errors... like the could not insert dell laptop .../dev/sro for ISO /dev/sdb for ISO. And how do you folks calculate things like network design and setup, or do most of you just try and test and try again...
Enjoy your vids and thanks for sharing....
As the trend continues, smaller and more efficient is better... nice job
agreed!
Wondering how the satadom is holding up. Have a few to use in my supermicro motherboard but heard they aren't good for write endurance. Thought proxmox might wear them out due to logs, etc.
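(A rough way to reason about the endurance worry: divide the rated total bytes written by your daily write volume. Both numbers in this sketch are hypothetical placeholders, not real SATA DOM specs; check the TBW rating on your exact model's datasheet and measure your host's actual writes before trusting it.)

```python
# Back-of-the-envelope flash lifetime: rated endurance / daily writes.
tbw_rating_tb = 17.0       # HYPOTHETICAL endurance rating, terabytes written
writes_gb_per_day = 5.0    # HYPOTHETICAL hypervisor log/journal churn

days = tbw_rating_tb * 1000 / writes_gb_per_day
print(f"~{days / 365:.1f} years at {writes_gb_per_day} GB/day")  # ~9.3 years
```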
Nice hardware for a Proxmox homelab cluster. But I do hope that you reseated the RAM into the correct slots, or that you were just trying to troll us. 🤪
Just this! Wanted to type that too lol ..
In your case I would go with a refurbished Dell R630. It offers more SAS/SATA 2.5" slots, 6-8 depending on the config, with a hardware RAID controller. Dual CPU and PSU, and 24 DIMM slots for DDR4 RAM. Also you have 2-3 PCI Express slots, 1 full height and 1-2 half height depending on the config.
And it is 1U!!
In the USA I believe you can find it cheaper than what you bought.
You guys have good deals from datacenter decommissioned servers, and to import to Greece is way too much money
I just left a refurb 710. Old machines use so much power compared to this newer platform.
I love my Dell R230 1U. It runs all my VMs, and I have the Dell R710 II (2U) below it as a TrueNAS server. I am planning on moving that over to Proxmox and then virtualizing the TrueNAS server in the future, but for now it does its job well.
Love the vids and keep up the good work
I musta been lucky with my server then. Mine came with two of those 'adapters' but they were Supermicro branded and all sheet metal. And the trays themselves had different mounting options anyway, so for the other two that did not have adapters, I could just screw the SSDs into the 'lower' holes, and the ports all lined up perfectly. Sure, they're only supported by the one side, but the backplane is also supporting them through the connector, and since there are no moving parts, it's not like they'll just wiggle free.
Small (1U) vs. big (2U/3U/4U) is a hugely personal decision. Smaller means less heat, less power draw, but less compute horsepower. I believe that the choices you made will work for you, but they will not be universally "true" - which you made very clear. I prefer to have more threads and more disk, so a 1U system won't be sufficient for me. Are the decisions you made in November still working for you today, just 3 short months later? All too often, my "needs" are often just "wants", so I would be interested in learning how you determine if they are. I'd be interested in knowing what metrics are you using. Awesome content.
Thank you! Still using them today. Yes, I agree that it is a personal decision based on needs. These two machines run all of my VM, which are mostly kubernetes nodes so I don't need a lot of local storage outside of just VM disks. Having a striped mirror that's all SSD is really fast. Also, regarding the cores, these CPUs are 14 core 28T which is more cores and threads than I had on my Dell R710 (12C/24T). Also, all of my slow storage is run on here but running as a TrueNAS VM with an HBA passed through and connected to my disk shelf. Doing all this has made some really fast and powerful virtualization servers, which is my primary use case. Thank you!
@@TechnoTim I missed the core/thread count in the original - thanks. I've got your HBA pass-through video bookmarked, so I can see how you did that. My NAS almost died a few months ago, so I am looking into ways to mitigate the single server approach (probably Ceph, maybe replicated ZFS at my kids house across town, maybe both/neither/dunno).
With the exponential explosion in storage, I want to play around and find a way to organize the hundred terabytes or so of essential data (read: random crap) I have on my spindles.
Just subbed to your channel yesterday; awesome work man, keep it up like this
Xo from France
Thank you!
Man, I think you did great, from choosing non-enterprise SSDs to installing the hypervisors on flash. 👍🏻👍🏻
Thank you! Not worth the cost considering they have a 5 year warranty and I have a RAID
Watched the whole video. The magic sauce is the deadpan delivery. Good content too.
Glad you enjoyed it! Sorry, I am still figuring this all out! Hard to talk to a camera!
@@TechnoTim It was meant as a compliment!
What's the purpose of those "half" rails if you can't even access the top plate correctly for hardware maintenance? Too bad they didn't come with a 10 Gbit NIC; your Quadro occupies the PCIe slot...
Great video, thank you very much for sharing all of this.
Thank you!
For what purposes do you use this server in your lab? The video mentioned the management of virtualization servers. What exactly did you mean?
1 u over. Priceless
7:35 I'm pretty sure they will find Supermicro 3.5" drive caddies in the remains of the Earth one day and wonder if Earth's culture just pinnacled in the 90s and style ceased to evolve.
Thin flexi fan shrouds work the same as more solid ones - as long as they are windproof they are fine
You always look fried and you always teach me new shit, I love your videos!
Lol thanks?
Doesn't proxmox require three servers for clustering? To stop the two other servers from arguing over who's master?
I have 2U servers, OEM. 1U is nice, but the upgradeability is zero. You have to make too many compromises. I have 3 virtualization hosts (but I run VMware vSphere). They are similar to what you built (Xeon 2678v3 that support DDR3, 'cause it's cheaper), 256GB RAM, Dual 10GbE and Dual 4Gb Fibre Channel. Just bought a new 16 port 10GbE switch from Mikrotik. I'm considering taking down the SAN (8 port Fibre Channel switch, connected to my TrueNAS Core, gives ~800MB/s 100K IOPS) now that I have dual 10GbE, but I'm unsure... FC works really well.
Proxmox allows for less than 3 node clusters now?
Also I'm interested in idle power consumption (from the wall) of those 2011 CPUs.
Not trying to be a prick but please refer to page 2-14 of the manual of the X10SRi-F motherboard. You installed your RAM incorrectly. You've occupied slots A1/A2/B1/B2. You're effectively running dual-channel on a quad-channel capable motherboard. You need to occupy the slots A1/B1/C1/D1. In other words the four blue slots, two on each side of the socket. With the current configuration you're only getting 1/2 the memory bandwidth.
Thank you so much! I was basing this off the quick reference guide www.supermicro.com/QuickRefs/superserver/1U/QRG-1605.pdf Not being a prick at all, this is super helpful!
The 1U over and you hemming & hawing about buying a new server had me laughing out loud.
0:22 I wouldn't switch jobs at this late stage if I were you. How did you come up with that 1( U)
Haha! That’s the last time I try to slip in some humor!
Hi Tim, I studied the TH-cam algo a bunch. One suggestion for optimizing your titles to get found more easily would be to put the topic word first, so for example starting with Server. When you go to the search and put in a word, autocompletion comes up; the words and sentences are ranked by popularity, e.g. volume of searches, and so this is a great way to pick titles that will get searched for more easily. I really like your channel so it is just advice I would give you. Great video! Nice server!
Very nice choice, should be an awesome setup. I have a 1u super micro running pfsense, it just works! The other 1u I have, on order, will run proxmox.
Very nice!
If you're going to make a Proxmox cluster, do your future self a favor and add a third node, even if it's way underpowered and won't ever host a VM. With just two they won't be able to establish a quorum, so if one goes out somewhere down the road and has to be rebuilt, the other becomes a pain to manage. I'm in a situation like that right now: unable to build new VMs on the remaining clustered server because it can't talk to the dead server in the cluster. With a third node the remaining two can vote on the state of the cluster, and I would have been able to add the newly rebuilt Proxmox server back to the cluster.
Thanks! I am not using shared storage, so each server has its own storage for VMs.
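(The quorum point above comes down to simple majority math: Corosync needs more than half the votes present, which is why two nodes tolerate zero failures while three tolerate one. A quick sketch of that formula:)

```python
# Corosync-style quorum is a strict majority of votes: floor(n/2) + 1.
# With 2 nodes the quorum is 2, so losing either node stalls the cluster;
# with 3 nodes the survivors can still vote.
def quorum(nodes: int) -> int:
    return nodes // 2 + 1

for n in (2, 3, 4, 5):
    q = quorum(n)
    print(f"{n} nodes: quorum = {q}, tolerates {n - q} failure(s)")
# 2 nodes: quorum = 2, tolerates 0 failure(s)
# 3 nodes: quorum = 2, tolerates 1 failure(s)
```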
I'm really enjoying your channel. Great work.
Thank you!
I'm not sure if you noticed, but you have your cap on backwards
Well so much for Christmas present ideas!!
Question: Do they come with just the 2 NIC ports, or can you get 2 more?
2nd question....
Dell has iDRAC, and HP has iLO. Does this come with anything like that, for remote access to the server?
Yes, that's the IPMI port.
@@KrisLowet87 oh ok.
Hello Tim. Is that SATA DOM in any way HA? What do you do if it fails? How do you recover the hypervisor OS? How do you cluster these servers without some kind of shared storage? Thanks.
Hi Tim. I got the same SM SATA DOM but I have no yellow SATA ports on my ASUS motherboard to power it up. Also I have no special connectors for using the included power cable. Any idea on how I can power the SATA DOM directly through my PSU?
Bit old now.
There are supermicro "twins" effectively two servers in a 1u. They get two drives and one pciex.
That works well for powering compute as you've got the drives off system then it can all work well for VM with processing is more important.
You might want to keep the 2nd NIC for vMotion (or whatever it's called on yours) - LAGG does not double speeds for a single transfer. Keep this traffic separate from your other traffic. You could likely run the cable directly from server to server and set up static IPs, and you won't need to use a switch port
Thank you! I did end up lagging them and the traffic is mixed. Hopefully that doesn’t come back to bite me!
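(The reason a LAG doesn't help a single transfer is that LACP hashes each flow onto exactly one member link. A toy illustration of the idea; this is not the actual kernel bonding code, and real hash policies use MACs/IPs/ports depending on the mode:)

```python
# Toy model of LACP link selection: each flow's address pair is hashed
# to pick ONE member link, so a single flow never exceeds one link's
# speed -- only many concurrent flows spread across the LAG.
from zlib import crc32

def pick_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    return crc32(f"{src_mac}-{dst_mac}".encode()) % num_links

# One big VM migration between two hosts is a single flow -> one link:
print(pick_link("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"))
# Every packet of that flow hashes to the same index, every time.
```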
I might be wrong cuz I am really new to this, but is the RAM in good positions? I remember from desktop builds that you have to put RAM sticks in same-colored slots, which is every second slot, but I guess it's different in servers
You are exactly right! Shortly after I made this video I ended up filling up all slots. Good eye!
@@TechnoTim oh great, ty for quick reply on such an old vid :p
Question for you. Since you already have a NAS, why not use iSCSI or NFS to provide the storage you need for your hypervisor?
Also, I am sure many have posted this, but your memory is installed on 2 of 4 channels. You will need to install in the same color slots for quad channel.
Thanks! All 4 dimms are populated now! RE storage, I don’t have 10gb networking and I want my kubernetes nodes to be independent of my NAS. Most VMs are running kubernetes.
I'm late to the party on this November of 2020 upload but I thought you should know that if you want quad channel memory to work, you need to install the DIMMs in the blue slots first.
Thanks! I did that after the video!
PVE needs three nodes for an HA cluster (Corosync quorum).
You should also do this network setup for PVE clusters:
2x NICs with LACP -> IP traffic (VM traffic)
2x NICs with LACP -> HA traffic (Corosync)
2x NICs with LACP -> storage traffic (preferably NFS or iSCSI)
So every PVE server needs to have 6 network ports for a very stable cluster.
I've built many PVE clusters and I have seen enough problems when a big cluster doesn't get dedicated networks for its load.
1U servers are nice. If you don't need more space for hard drives or GPUs, there is no reason not to go for 1U.
I gave up on Supermicro years ago and I just buy retired Dell servers. I currently have a couple of Dell R410s that I plan to switch out the fans in. I have one server in the living room and it gets way too loud.
You can get "low profile" pci things which fit inside servers
Low profile doesn't always mean single-slot height; it usually means half the length. The card in here only takes one slot of height but is full length. Height is more important in a 1U than length.
I would always use NetApp disk arrays and Lenovo servers. Both are way better built than the other brands.
I absolutely love the content on the channel!
I've watched most of your videos but I still wonder how the TrueNAS server interacts with everything else. It seems like magic spaghetti how the server rack is able to use and combine all the hardware while still being separate machines 😅
I have 3 1U servers, 2 HP and 1 Quanta. I was expecting the Quanta with the most power-hungry CPUs to be the loudest, and it was, until I got an HP DL180 G6. My god was I surprised when it turned on, it was so loud haha
Nice server upgrade! I’m running two Dell R620 1U servers and I think my biggest complaint would be fan noise level. Other than that I’m happy with the 1U form factor
Great content, TT, and super timely as I'm considering a similar replacement for a few of my aging systems.
Best of luck! Let me know how it works out!
I'm looking at a motherboard that might need to be flashed with a V3 CPU installed before upgrading to a V4. Did you need to do that? If so, did the DDR4-2400 RAM work? Some motherboards don't support DDR4-2400 with a V3. I want to know if I need one stick of DDR4-2133. Thanks
I actually have the exact same ones (x2 as well), but they are super noisy, I don't know how you can say they are quiet 😓
I do have one question: how the heck do you power all this? Do you have a more powerful dedicated wall socket installed? I know my apartment would trigger the breakers if I used 2 PSUs