I hate dust. It's coarse and irritating, and it gets everywhere.
Yes, Anakin!
@@darrenoleary5952 LIAR!
could try getting an air purifier to reduce dust
Like somebody said: Another one bites the dust!
This comment made me think about electrostatic dust collectors. I googled it and really it seems to be a commercial thing. Would be a cool project to figure out, that is, if it's at all viable for a domestic space. I don't need it so i'm not looking into it but i'll share the idea for You to read and say "wait... what? you're nuts, you idiot" in reply :D
Not dusting server out and commentary made me laugh more than it should have. Great stuff!
I homelab for living, but also for fun. Cleaning up hardware actually is relaxing for me, so I do clean every nook and cranny every time I pull out my hardware. If this means backup storage server is offline for 3 days, so be it. It's not necessary... not at this level, at least. :D Servers are really designed to live in horrible environments.
Well, I did IT support for gravel pit mine, with rock crushers running 24/7, and there and only there, I saw literal solid blocks of dirt and mud inside some machines. With experience, I do prepare systems there, by inserting plastic faux RAM sticks into every empty slot as they are impossible to clean thoroughly and easily scratch, I cover internal USB slot since these can get literally blocked so hard you can't push in USB plug and cover makes it easy to find it by touch. But that's all extreme - none of it should be required anywhere aside heavy duty industrial areas. Garage, or even a shop is not an industrial area.
@@Vatharian it is satisfying, agreed!
fully agree! the more commentary he had on not dusting, the louder I laughed. same thing over here my rack is a little dusty and I DONT CARE
"Coffee. Because it's 8:30 in the morning you monsters." 😂😂
Who drinks coffee right before going to bed?
Monster coffee you say?
Behind every confused system admin is an application engineer going this makes perfect sense
Probably something that does make total sense except the user has no way of knowing, like the order the traces come from the socket or something.
Check the Epyc socket for pins that are folded the wrong direction. I had an issue with a memory channel on one of my systems until I flipped the pin back. Tada!
Bifurcation was a mess. lol. Glad it all worked out. I enjoyed
1:45 If it's Optane, it's using 3D XPoint non-volatile memory, not NAND flash.
One of these days I'll build myself an Epyc Milan machine with PMEM and Optane disk for HANA. Maybe NewEgg will have a firesale on the PCIE 4 ones soon...
Be aware PMEM is only supported on SOME Xeon CPUs, and EPYC does not support it. Optane disks are the ones that are cross-platform.
@@declanmcardleEpyc does not do Optane for PMEM
@@gamerstudio5668 Yes, thanks, I'll just get a Xeon with 64GB and a 256GB PMEM (1:4 is a supported ratio) and leave the Epyc with the 905Ps for some other homelab server...
@@FlaxTheSeedOne Yes, thanks, I'll just get a Xeon with 64GB and a 256GB PMEM (1:4 is a supported ratio) and leave the Epyc with the 905Ps for some other homelab server...
I had a similar problem with a memory socket that had "gone bad" on my 2009 Mac Pro 4,1 upgraded to 5,1, on which I've also upgraded CPUs a few times as prices have come down. The problem turned out to be one of the CPU socket pins that was misaligned and either shorting to an adjacent pin or perhaps not connecting with the CPU at all. I broke out a magnifying glass to inspect the socket more closely and then saw the misalignment more clearly. I realigned it with a small screwdriver and everything is working fine now.
If your rack in the garage has front-to-back airflow, you might fill the front door with furnace filters to help with the dust, but you are right, not all that much dust.
21:58 ...tsk tsk. Drinking coffee out of a Raktajino mug. To quote the waiter in the movie The Blues Brothers, "Wrong glass, sir!"
Orange Whip?
Orange Whip?
Three Orange Whips!
I wouldn't say you have a lot of dust, but it's an unusually fine brown dust, probably from your woodworking. Living room dust is more fluffy and grey.
1:51 Not quite. Optane uses 3D XPoint, a type of ReRAM, and not NAND flash.
RIP Optane… it was sooo good
Nope.
3D Xpoint is Phase-change alloy memory, not ReRAM (Which uses memristors).
Both are resistive memory, but 3D Xpoint varies the resistance of a metal alloy, while ReRAM varies a dielectric.
@@Prophes0r That point (that a PCM tech is its own thing vs being a ReRam subset) is contested pretty heavily. The fact that is not up for debate is that Optane is not NAND flash.
@@TwistedMe13 It's definitely not flash memory (NAND or NOR). I agree with that.
But contested? Why?
They are totally different technologies that produce completely different effects.
Heating a blob of metal so it melts, then recrystallizes, causing its resistance to change from 0.01 to 0.15.
Applying an electrostatic charge to a semiconductor sandwich, so more electrons can flow, changing the resistance from 5.0 to 3.5.
Both derive logic by changing resistance.
One with a Yes/No.
The other with a How Much.
Saying they are the same kind of thing is like saying a boat is a type of airplane, because both use fuel to run an engine, and both can take passengers from one side of a lake to the other.
I too have (most of) my servers in the garage. The only 'dusting' I've given them is occasionally cleaning out their filters. Honestly, they actually stay CLEANER than the 3 servers & 2 workstations I have in a carpeted home office.
Great job; I was looking for EPYC troubleshooting. This style of video is welcome by me.
Not sure if you're aware; laser engraving/cutting will definitely put metal dust into the air. I worked in a metal fab shop with a couple of laser cutting machines, and they put metal dust in the air. It definitely shorted out a few machines in the shop, prior to me starting, performing more maintenance on the machines, and changing the enclosures.
I have forced exhaust above my lasers to avoid that.
Love these types of videos. I started my own homelab after watching your content. Picked up a Dell R720XD for a decent price and decided to install unraid on it and it has been working very well! Plex is the main usage but also file storage for now.
Have the same motherboard. And yes, Proxmox and a couple of VMs on it. Planned to add more add-in cards to it. Want to thank you for this video. It saved me a lot of time figuring out how bifurcation works, particularly on this motherboard. As always, interesting info. Keep it up, and have the courage to make more videos like this.
Ch 8 is probably the dust! ROFL! Happy to be a troll for you. LOL I loved that term.
The one place i've had issues with dust, is in the livingroom with pets and kids, and at work next to a plasma cutter table and welding stations. Ended up getting a fanless computer for the plasma cutter table.
100% agree with the dust levels.
Counterstrike called, it wants its de_dust back! :P
I recently redid my homelab. Consolidated my cluster back to one host: EPYC 7D12 and 256GB RAM on a Supermicro H11SSL-i. Low-power EPYC gets the job done for what I need. Only limiting factor is 4-channel RAM instead of 8-channel.
10:05 I had that exact issue yesterday with the same Asus card. Some less sticky thermal pads would definitely have been appreciated.
Eeeh, you say it's not a lot of dust, but that's quite a bit more than I have in a PC in my office that's been on 24x7 since I put a GTX 970 in it. At least hit your server with a leaf blower once in a while. All that dust raises the chance of some contacts getting bridged, or of signals being tweaked just enough to throw off anything from RAM timings to voltages. You'd be surprised how many RAM BSODs I've solved by just cleaning memory contacts with a pencil eraser. You could try cleaning the copper pads on most LGA CPUs too. Fingerprints on copper oxidize over time and can interrupt the signals, especially at DDR4/DDR5 speeds.
Another upvote for project videos! I love those!
That whole segment about your server dust only convinced me that you're in denial and secretly terrified! 😂
We all know he went back and cleaned it out after the cameras were off.
Projects yay! Thank you for playing with parts I couldn’t get away with owning..
Project style videos are the best!
Channel 8 sounds like a dust issue. :P
Congrats on the system migration! I got my ceph cluster into an unhealthy state a few months ago and am still struggling with it before upgrading the cluster to proxmox 8...
And here I was thinking you were selling heavy duty coffee mugs like the blue one in this video. My daily driver coffee mug is a gold-plated and blue ceramic mug made by American Decorators in Trenton NJ in either late 1970s or early 1980s if I had to guess. Found it at an estate sale for $3.
Supermicro has CPU port-level bifurcation config. To understand what you are doing, you need to open the Supermicro manual, which states all 3 channels per CPU and how they're connected to the PCIe slots (or other cards). If you split it wrong, you could split up something embedded on the motherboard, like the dual 10GbE controller (you could have one x8 for onboard Ethernet and the remaining x8 for a PCIe slot, so to get your own x4x4 setup on that x8 port, it should be set as x8x4x4, etc.). Funny enough, they worked like two separate cards when I set my port to x4x4x4x4. Could be useful for PCIe passthrough.
Love the project videos, thanks Geoff
Love these kinds of videos. It's above what I can buy or want/need, power- and energy-wise, but I learn from it for things I do at home. I also enjoy that you play with that kind of gear, so in a way I get to play with it too 😅. Keep the problems, hurdles, and frustrations in your videos; it's good for people like me with less expertise to see that you also have problems. I'm Dutch, so disclaimer: I mean all this positively.
Yes! Finally some more server/project videos! More of those please (:
i had the exact same lockup issue on one of my proxmox systems running just truenas. luckily my lockup issue magically went away after i installed more ram, nic, and an old low power video card. i spent hours digging through my logs to find… nothing.
You do need a HEPA positive-pressure vent - no more dust. Maybe consider a box fan with some cheap HEPA furnace filters taped to it as well. RAM over-provisioning for a NAS is perfectly OK. Also like the all-flash and cache tiers, plus you have faster networking. If you were a small biz this would be ideal, except you would want 2 - they are cheap, and downtime is extremely expensive.
What a mug!
(sorry, I just started watching and this was the first thing that jumped out at me)
I'm glad you are back to the original opener for your videos.
Good trouble shooting, Jeff. Looking forward to more project style videos.
Good to see that we have the same taste in PCI-E nvme cards (the old quad card) 😄
I will live vicariously through you, because my current home lab is good but you might give me bad ideas 😅
A dirty case is a sign of a dirty mind. *Looks over at the dust circles in front of the intake fans...*
Always enjoy these types of videos from you.
The dust commentary was awesome 😂
Good GOD man!! my head is spinning, what did I see??!! LOL Nice when a plan comes together! And how did you know I was a monster?? HA HA!!
Love the project stuff Jeff
For the memory channel, just clean the pads of the CPU with some special cleaner and it will work. I had the same issue and fixed it just like that.
Once thermals are impacted please make a cleaning video. Love a good cleaning video.
I really enjoy your videos; they are very informative. My question for you is: for somebody that has a home lab, would it be better for them to buy a used Dell R730XD server at four hundred bucks? They come with dual 16-core CPUs, ddr3 memory, and 24 SAS slots in front, 2 in back. That would make more sense for someone on a budget.
I'd be more than happy to clean out all that horrible awful way too much dust if it means I get to check out all the engravers in your garage as well :D
6:07 That's a severe design oversight I'm surprised Supermicro hasn't patched yet.
I only have one question... what is a Litany? I looked up the definition and still don't see how it applies to this scenario. 3:15 for reference.
"a tedious recital or repetitive series"
"an excessive amount"
how hot does the garage get? how are you keeping it cool enough? summer temps?
Hello @CraftComputing, I'm currently in this hell too, asking myself whether, for my storage, I keep going with an LGA1151 system (an Asrock X1213 mATX and i3-9100) or go back to a Chinese X99 system to get more PCIe lanes and more cores, use PCIe bifurcation, and use NVMe drives with caching instead of SATA, to let me saturate my 10Gig network card.
Why not boot off 2x SATA DOMs instead of those NVMes for the TrueNAS OS? Could save a slot for more storage that way. Just food for thought.
What is the device you have sat above/on your keyboard at 14:22? Is it a USB drive that lets you choose an ISO for it to appear as? That is something I'm interested in.
I think I've found it, the IODD ST300
Yes! IODD ST400 actually. I did a video on it a while back. th-cam.com/video/ZSywLblIYa0/w-d-xo.html
the obligatory, have you heard about Ventoy /s
Did you ever use double-sided 22110 NVMe? What about the temperature on the underside of the NVMe?
With 1TB of RAM only 1.6G was allocated to ZFS cache by default? I would have expected Scale to allocate roughly 50%.
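For what it's worth, on Linux-based ZFS (SCALE included) the usual knob is the `zfs_arc_max` module parameter; a minimal sketch, assuming you want roughly half of a 1 TB box (TrueNAS may manage this value itself, so check `/sys/module/zfs/parameters/zfs_arc_max` first):

```shell
# /etc/modprobe.d/zfs.conf -- example values, size the cap for your own box.
# The value is in bytes: 549755813888 bytes = 512 GiB, i.e. half of 1 TB.
options zfs zfs_arc_max=549755813888
```

It can also be changed at runtime by writing the byte count to that sysfs parameter, which takes effect without a reboot.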
I mean, who isn't regularly rotating their servers in the server racks? This just makes your servers like you way more when you regularly care for them. 😇🤪🤯
This is why I like my Asrock rack boards. Bifurcation for each slot and each slot is labelled in order. Simple.
The way Supermicro have done it with this board is truly trauma-inducing.
@CraftComputing Jeff, did you check MB manual? Usually, there's block diagram with MB slots and corresponding CPU ports.
Macvlan or IP? The issue I worked out on my system was that the macvlans I was using for my docker containers (instead of IP) caused the system to lock up after eight hours; no idea why eight hours is a significant thing.
EPYC losing a memory channel seems to be common. I’ve seen a lot of “7 channel” used EPYCs on eBay. A friend of mine has an EPYC system that just lost a memory channel for no reason (remounts didn’t help) after years of service. And even one of my EPYC 7642 chips lost a channel recently also. Just one day poof it’s gone. Remounting didn’t help. And when I moved that CPU to another motherboard it was still the same channel that was gone. It happens.
So the random mobo issues are not a dust created problem… can dust actually be a problem? I’m thinking not as I’ve seen PCs pulled out of mines caked in dirt and still working.
Oh the dust! 😂😂 you are a monster
Can we all just appreciate "bass ackwards" at 5:27 😂
Good stuff Jeff!
Always here for the server content
This is a dream build for me
What's the model of the testbench you are using?
Why not distribute the boot drives across two carrier cards. Would provide a little more redundancy...
Congratulations on the BIOS job...
What about non-bifurcation NVMe cards? I have been running one with a built-in PLX chip and the speeds have been great.
EPYC servers are notoriously picky about memory. I had a similar problem with some DDR4-2133 SK Hynix DIMMs until I sourced some Micron DDR4-3200 DIMMs.
you call that dusty?? You should see inside some of the network closets 50ft in the air in most warehouses lol. There was so much dust you thought there was another device on top of the switch but it was just solid dust.
I love me a good rack day, well not me, they usually end up as hellish nightmares....but I love seeing someone else on a good rack day! x)
what aluminum eatx testing platform are you using?
How many lanes does a single 100Gbps need? 🤔
Sure it's coffee. Where is it from? What's the roast? Brew method?
Reminds me I need to rebuild my whole rack. 🙄 RIP esxi. At least I'm getting more 10Gb support soon. Also got a r730xd that is collecting dust. 🤣
YES. more projects please. Long time follower. Any chance you will play around with machine learning or AI??
Are you still going to use the H11 + 7601 for a different project?
I saw the connector on your blade. Do you have a chassis for it (for more blades) or is it like a harness to an rpdu?
It's part of a 2-node chassis. More info this week 😉
@@CraftComputing I get to play with some blades at work. I really enjoy them. I need the setup in my home lab... But alas. I'll live through you :p
Laser dust is carbon? Carbon is conductive?
Anyway, this much dust in my apartment rack corner I get in a couple of years, not 6 months, and it's not carbon.
Even ordinary dust is an ESD source.
Optane is definitely not using NAND flash, it uses 3D X-point.
Sorry haha, this video is awesome either way
Had a good chuckle at the comment "feed those engagement trolls". Love this video; it's like I am working on this with ya, while actually just sitting in my chair ignoring my own rack's lack of wire tidiness that I have been putting off for a long time... I am helping, lol.
Dust!!! You call that dust? Every time I clean out my server I find a new cat. I'm not sure how they get on there.. all that fluff!
Seriously though, you had me laughing my ass off (it's now really difficult to sit) as you were mocking the people with cleaning issues.
Supermicro should do a BIOS update.
and add: "if not card, keep conf" or something like that.
Jeff, explain what bifurcation means, why it's necessary, and when it's implemented, maybe on your show.
Bifurcation is the splitting of a PCIe slot's lanes to support multiple devices. In this case, the PCIe slots connect to devices using 8 lanes. If you want to connect two 4-lane devices (like NVMe drives) to one of those slots, you need to enable bifurcation.
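To make the lane arithmetic concrete, here's a tiny sketch; `split_lanes` is a hypothetical helper invented for illustration, not any vendor's BIOS API:

```python
def split_lanes(total_lanes: int, setting: str) -> list[int]:
    """Split a PCIe slot's lanes according to a bifurcation setting string.

    Each 'xN' chunk becomes an independent link: an x8 slot set to 'x4x4'
    presents two x4 links, one per NVMe drive on a passive carrier card.
    """
    chunks = [int(part) for part in setting.lower().lstrip("x").split("x")]
    if sum(chunks) != total_lanes:
        raise ValueError(f"{setting} does not add up to {total_lanes} lanes")
    return chunks

# An x8 slot bifurcated for a dual-NVMe carrier card:
print(split_lanes(8, "x4x4"))  # [4, 4]
```

The same logic is why a setting like x8x4x4 on a x16 port leaves one x8 link for an onboard device and two x4 links for drives.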
Wonder if the board has a bad CPU pin that connects to the RAM slot?
@CraftComputing What ever happened to the studio build you were discussing what feels like over a year ago now? You used to be selling bricks in the store that were going to be placed in one of the walls, or something?
It was delayed multiple months (city + bureaucracy). Plan right now is to get a permit in April, and once plans are finalized, I'll post an update video.
@@CraftComputing Good luck :D
Homelab-ers 🤝 Hobbyist music producers
Getting things perfect only to fuck with them moments later
"It's all finally working exactly the way I want"
"BUT NOW I WANT COLORED PATCH CABLES!!!"
Anyone know what test bench case he puts the Epyc7601 board into at the end?
Open Benchtable, and it's amazing!
openbenchtable.com/
Heating the heatsink with a hairdryer would have helped to soften the thermal pads. If you can get to the bracket screws with the heatsink on, wiggling from that end can help too, as you can get even pressure on both ends at the same time.
Nope, never. Not me that has double-bagged an expansion card and placed it in a bowl of hot water for 20 minutes. No sir! I did hear the heatsink came off easily once warmed well.
Had a Dell R440 just lose a PCIe slot's ability to bifurcate after a bios update recently. Was I going to use that slot for dual NVMe VM storage? Yes. Can I now? No. Thanks Dell.
Is it just me, or did it feel like an episode of Eureka during that one scene with the music?
Those silicon power drives? I too like to live dangerously :)
I've been using SP drives for years. Have yet to experience a failure.
@@CraftComputing Glad for you! Hope that continues. It's just that while researching ssds I've come across a lot of bad stories about b-brands and this one in particular. In any case, you do have your backups so it's a non-issue
on the m.2 carrier board there was some serious Rick energy
Hey @CraftComputing, were you able to get your new server, the dual-node one, up and going?
Not yet :-(
I probably would have dusted only because I was already in there. Otherwise that was very superficial and akin to surface rust.
21:37 Drink like a pro.... When it's 5 am instead of 5 pm.
I'm sure the PCIe bifurcation process makes sense to at least one Chinese person who designed that BIOS. Had the unfortunate experience of having to work with SuperMicro for many years and cannot understand their thought process no matter how hard I try. Sad to see that nothing has changed in 7 years.
I think the complexity comes from the fact that whoever designed that BIOS was simply following the nomenclature given by the PCIe controller itself.
The controller called the slots 1 and 2 -> he called the slots 1 and 2, regardless of the fact that those slots could have been physically implemented as slots 5 and 6 on the motherboard.
It is a very lazy but accurate approach that would 100% work if documented. So as long as this was all explained in a manual, it would be a very valid approach IMO, especially if you tell the board-layout guys to label the slots on the motherboard accordingly (so that, e.g., the physical slot 1 would be labelled on the silkscreen as slot 5).
But yeah, if you're just labeling them as slots 1 to 7 on the motherboard and then in the BIOS you magically refer to them as slots 7-6, 4-5, 2-3, and 1, it's just stupid and confusing.
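The missing documentation being described is essentially a lookup table; a toy sketch, where the port-to-slot mapping is invented for illustration and not taken from any Supermicro manual:

```python
# Hypothetical mapping from the BIOS/controller's logical port names to the
# silkscreen labels on the motherboard. If the manual shipped this table,
# the controller-oriented numbering would merely be inconvenient.
LOGICAL_TO_SILKSCREEN = {
    "CPU1 Port 1": "Slot 7",
    "CPU1 Port 2": "Slot 6",
    "CPU2 Port 1": "Slot 4",
    "CPU2 Port 2": "Slot 5",
}

def silkscreen_label(logical_port: str) -> str:
    """Translate a BIOS port name into the physical slot label a user sees."""
    return LOGICAL_TO_SILKSCREEN[logical_port]

print(silkscreen_label("CPU1 Port 2"))  # Slot 6
```

Either the BIOS or the silkscreen could apply this translation; the confusion only exists because neither side does.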