I lost it when the server fan started flying around lmao
He wasn't kidding about the earmuffs. That fan was way over OSHA limits.
I really didn't expect it to be that funny. Maybe I don't need server fans after all, they're lowkey dangerous
Right? I knew they were dangerous, not dancers 😂😂😂😂
ran up on that meter and, CHOMP!
@@laboulesdebleu8335 Just a love bite.
"NICGIGA, skip this brand if your dyslexic." 😂😂💀
Drake needs to skip it too
By far the most hilarious airflow comparison between PC and server fans I've ever seen xD
3:15 That server fan: " I AM POWER! I AM TERROR!"
Server fan is angry, run!!!
🤣🤣🤣Fan monster! Next DC comics villain!
THE WAY IT ATE A CHUNK AT 3:13 HAD ME ON THE FLOOR LMAO
So, ROFLMAO? ;D
It'll happily eat your fingers as well!
FOAM FOR THE FOAM GOD
it will eat your flesh too with the same ease. Server fans crave flesh
THE FAN HUNGERS
Most fans just need powered wires. *_A SERVER FAN NEEDS A LEASH!!!_* :rofl:
As soon as the second lead at 3:03 was connected, I started laughing because I immediately knew what was gonna happen lol!
I just hot-glued an old laptop fan on there and soldered it to the 3.3V rail on the PCI connector, but this looks like a very neat solution!
Yeah, that's how I'm doing it too, but with cable ties, not hot glue. This is a great solution, but a bit pricey.
This is honestly such a simple but neat solution. You can always use Molex or a header splitter, or use existing USB fan power boards and just run a USB cable from the inside of the case to the outside, but none of those solutions are as nice as this one. Price-wise I'm sure it's not the best, but damn it looks good
This is some seriously high production value. I love the dolly shots - great work my friend :)
While I'm not in the market for a PCIe fan at the moment, you're a direct buy once the need appears :)
Delta produces 120mm fans which spin at 3000rpm max and push a lot of air (and are loud at 3k rpm). I've seen them commonly used in older Fujitsu Siemens workstations and they are great at cooling
So does Noctua.
I have a RAID card in my computer. My solution for cooling was to buy a 4-pin PWM fan that fit the dimensions of the heatsink on the card, then screw the fan down so that the screws go in between the fins of the heatsink. It doesn't matter if the fan is pushing air down onto the heatsink or pulling the air away; the point is to move the air. Once that was done, I used Fan Control to set a profile for that fan and the side case fan (4-pin fan splitter here, though I do have access to a water pump fan connector on the motherboard) that uses the NVMe drive temperature to control the fan speed (30C is 50% speed, 60C is full speed; that drive is usually at 45C, which is yellow, read: caution, according to Hard Disk Sentinel). The NVMe is mounted correctly, but the heat spreader is passive only and right under the GPU. This does two things for me:
1. It helps keep air coming into the case (older no rests case, 8 5.25" bays up front. I needed this for my two three-bay five hard drive slot adapters).
2. It keeps the RAID card cool.
Before Fan Control kicks in, the little fan runs pretty fast, but once the program is running, that fan is a lot quieter. I checked the temperature report in the card's own web GUI and have seen no issues. The temperature settings are based on what the card can take and should be kept within, according to the manual for the card. For those interested, it's an Adaptec RAID 8805. I am using HWiNFO64 to get certain temperature readings to Fan Control.
For those a little further interested in the computer, here are some useful details:
1. CPU: AMD Ryzen 7 5800X
2. CPU Cooler: BeQuiet! Dark Rock Pro 4 (added a fan to the rear heatsink stack, same as the front fan)
3. Motherboard: ASUS ROG Crosshair VIII Dark Hero
4. 64GB of RAM from G.Skill (6400MHz, two 32GB dual-channel kits, same specs for both kits, and yes, the DOCP setting is on along with BAR and SAM)
5. NVMe: 4TB from MSI
6. SSD: 2TB from Crucial in the MX series (I may upgrade that to 4TB when prices come down)
7. GPU: ASUS Dual RX 6600 (8GB card, planning to upgrade later on)
8. PSU: ThermalTake ToughPower Grand 1200W (over six years old, bought before they had quality issues, still going strong)
9. As to the RAID setup and the 10 bays: two drives are controlled by the motherboard, as SAS-to-SATA adapter cables only handle four drives each and there are only two SAS connectors on the card (I forget if those are mini-SAS ports or not, but they were different from the RAID 6805 I was using previously. Do note that you will need command-line tools to set what mode the 8805 card operates in, and even then you will want some extra drives for the migration process; it was not a drop-in replacement for me, unlike what others have said their experience was). I am using RAID 1, which gives me four arrays. I use these for various storage, though some older games can make use of the improved read times of RAID 1, like Fallout 4 (and yes, I've heard how unoptimized that engine is). Aside from some games going on the RAID 1 arrays (not many), I'm keeping computer backups, my digitized records (there is digitized music here too), and a copy of a number of program installers I've collected over the years, along with CD and DVD ISOs for other installable software (less wear and tear on said discs, plus a faster installation process, and that's not even counting media preservation).
10. Rear fan: Noctua 3000RPM PWM fan (black) - that fan is there so that if things really start getting too hot, air can be moved out of the case more quickly. My Fan Control profile has the CPU Tdie temperature as the controlling factor, with 50% fan speed for any temperature 60C and lower. Once the temperature hits 80C, the fans go to full speed (the CPU can go up to 90C before throttling and shutting down), and the Fan Control graph is a linear curve. Typically my temps run mid-40s to mid-50s Celsius in a room that hits 80F in summer. Gaming so far has not gone past the low 70s Celsius with this setup. The games vary in demand on the hardware (the various HoYoverse games, various Star Wars titles, Batman Arkham Origins, Borderlands 2 and 3, Gundam Breaker 4, various games in the Mortal Kombat and Street Fighter series, various Sonic games, to name a few).
Hope you've enjoyed reading, and maybe it gives you some good ideas.
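For anyone curious what that linear curve actually works out to, here's a minimal Python sketch of the interpolation described in the comment above; the function name and defaults are just an illustration of the idea, not Fan Control's actual code:

def fan_duty(temp_c, low_temp=60.0, high_temp=80.0, min_duty=50.0, max_duty=100.0):
    """Linear fan curve: min_duty at or below low_temp, max_duty at or
    above high_temp, straight-line interpolation in between."""
    if temp_c <= low_temp:
        return min_duty
    if temp_c >= high_temp:
        return max_duty
    frac = (temp_c - low_temp) / (high_temp - low_temp)
    return min_duty + frac * (max_duty - min_duty)

# The NVMe-driven curve from the comment (30C -> 50%, 60C -> 100%):
# fan_duty(45, low_temp=30, high_temp=60) == 75.0, i.e. 75% duty at the usual 45C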
Server fan is hungry, ohm nom.
Server fan has eaten, now it goes zoom.
Oh wait, still tethered. Server fan is now like happy doggo on leash.
How a technical video about networking brought together a bunch of people with equally wicked senses of humour 😅
I like the editing and love the product! Thank you man
Very nice plug and play solution.
Years ago I had a PCI slot fan, literally a piece of plastic shaped like a PCI slot and bracket, with two 80mm fans and a Molex connector.
A cool feature for a v2 of your fan card would be PWM control from software. Seeing as you're using a PCIe slot, you should take advantage of its ability to communicate with software.
I'm using an older style fan card in a Dell Optiplex 270: it have a "laptop style" fan with a Molex power connector and it pushing air trough the PCI slot cover out from the system ‒ I wanted to help this computer (which has notoriously bad airflow) to breathe a bit better.
It would be nice if it could control fan speed from software, using a PWM controller and SMBus over PCIe.
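For reference, on Linux the usual way software sets a PWM fan speed today is the hwmon sysfs interface. A rough Python sketch follows; the hwmon index is an assumption (it depends entirely on your board and driver), and a PCIe fan card like this one would only show up there if someone wrote a driver exposing it:

from pathlib import Path

# Hypothetical example: drive a header-connected PWM fan to ~60% duty via
# the Linux hwmon sysfs interface (needs root).
# Check /sys/class/hwmon/*/name first - hwmon2 here is only an assumption.
hwmon = Path("/sys/class/hwmon/hwmon2")            # assumed index
(hwmon / "pwm1_enable").write_text("1")            # 1 = manual PWM control
(hwmon / "pwm1").write_text(str(int(0.60 * 255)))  # PWM duty is 0-255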
You can always just plug the fan headers into a fan controller on the motherboard, or a USB-header one. But with a device like this, like he said, the lowest setting is more than enough for the fan, so you can safely leave it on the lowest setting and forget about it.
Lol! That fan sound shot was menacing!
All the ladies love my 10Gb
This makes a ton of sense, perfect use case.
The quality of your video shots and the editing is top notch 👍
which thermal camera do you use?
I've got a HT-301.
Or you can just mount a fan between the card (or cards) and the front intake fan, in the same orientation as the front intake fan.
Yes, you need to configure the fan in the BIOS, and yes, I did need to 3D print brackets to support the fan, but I can use all PCIe slots and all of them are getting "good enough" cooling.
I was thinking something similar. It doesn't matter much in this specific use case, but I was thinking of an angled bracket at around 30 degrees (relative to horizontal) that could be 3D printed, or made with basic hand tools and a thin sheet of metal you can grab off Amazon. You can make it even more efficient with a DIY duct (it can literally just be cardboard) that encloses the back of the HDDs. Double bonus: the HDDs get push-pull airflow for their own enclosure cooling, while both the network card and the other card above it get directed airflow from the front fans, all while retaining full PCI slot availability.
@dragon411320 30 degrees is bad for a fan; 90 or 180 only.
@@barneybarney3982 I mean that comes down to bearing design and yadda-yadda, and some cases install fans at such an angle from the factory (H5 Flow), but at the end of the day, in practical terms, a standard 120mm fan will last for years at such an angle and is dirt cheap to replace.
I really like your PCIe fan card. Well done, I just might get two once I install some network cards for my home setup!
I've had great success mounting Noctua A4x10 fans directly to NICs and HBAs by screwing a pair of M3 or M4 screws into their heatsinks.
Nice!
I've used those cards for many years now and I always just zip tie a fan to the side. But this is much cleaner.
I've been eyeballing a 40G NIC on AliExpress, but it is definitely going to be warm. My NAS case actually does push air to the PCIe slots, but a 40G NIC might need a hand.
One thing the BTX form factor got right was switching the side of the case the motherboard was mounted to. This had the effect of flipping the PCI/PCIe cards so that their heatsinks were facing up. Unfortunately a lot of the other things about BTX were designed to address specific shortcomings of Intel's NetBurst architecture.
Would removing the side panel not remove what little airflow there would be from the fans in the front of the case?
Obviously. This video is shit
It absolutely does, so this video isn't really worth much for information. I won't talk about entertainment value, that's personal opinion.
I initially measured the temperature using a thermocouple with the side panel attached. It didn't look great on video, so I used a thermal camera instead. Removing the side panel definitely affects the airflow; however, it also reduces the temperature inside the case, which seems to counteract the change in airflow. I couldn't measure a noticeable difference in temperature with and without the side panel.
I've worked with server grade fans before, so your demonstration didn't come as that big a surprise, & it did not disappoint. lol
Also, I need a solution just like this for my HBA controller cards, & I thought I was going to have to design one myself. Thank you!
I've been using a dual 92mm PCI-slot GPU cooler for my HBA and SAS expander cards; it's $14 on Amazon.
Edit: They have singles and triples too.
You can just screw a 40mm fan onto the heatsink; usually M3 screws squeeze fine between the fins and hold it solid ;)
I had a passively cooled GT 1030 back in the day; this kind of fan would have been perfect
There is a lot of space below my NIC, so I ziptied a 90mm fan parallel to the mainboard to it. That keeps it below 70C, which is good enough for me and most importantly: quiet
I just used some L-brackets attached to the PCI slot mounts that point down towards the motherboard, over the HBAs and NICs that require the extra cooling. The pros are that it doesn't waste a slot like these options, and you can use a thicker fan for higher static pressure and less noise.
Powering the fan from the pcie slot itself is pretty slick.
How much for the only fans box?
I wonder: how much thrust does that server fan produce?
They are nice and square; four might look realistic under a Concorde model airplane
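Back-of-the-envelope, a simple momentum-theory estimate answers that; every number below is an assumption for illustration, not a spec of the fan in the video:

# Rough static-thrust estimate for a small high-RPM 40 mm server fan.
# Airflow and swept area are assumed values only.
rho = 1.2                  # air density, kg/m^3
cfm = 20.0                 # assumed airflow, cubic feet per minute
q = cfm * 0.000472         # volumetric flow, m^3/s
area = 0.0012              # assumed swept area of a 40x40 mm frame, m^2
v = q / area               # average exit velocity, m/s
thrust = rho * q * v       # thrust = mass flow rate * exit velocity
print(f"{thrust:.2f} N, about {thrust / 9.81 * 1000:.0f} grams-force")

With those assumed numbers it comes out to roughly a tenth of a newton, on the order of ten grams-force: enough to skitter around on its leash, nowhere near enough for the Concorde.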
I had both a 10GbE card and a host bus adapter in IT mode (for TrueNAS) to cool. I found the twin fan bracket online, but had to adjust it with a file to make it fit. Since it is a server, I did not worry about the extra noise.
Would it be possible to daisy-chain the fans together at the cost of a slightly larger mounting bracket?
If possible, can you make a video of your server and network setup? That fan took flight.
Your alternative to this is ghetto cooling (off a fan header or Molex, mounting the fan to the case in some way, or something else depending on placement).
This video is great, even though I'm not in the market for such a device right now. Do you have another channel on YouTube? If not, I hope you keep uploading videos here, because I love your style and presentation.
3:04 It's angry!
AND HUNGRY!
I have a very similar dual SFP+ slot card in my SFF computer/router, and since it only has room for half-height cards I ended up designing and 3D printing an overly complex fan shroud used in conjunction with a 40x10mm Noctua. It blows air across the card's heatsink, down the slot cages, and ultimately out of the adjacent PCI slot, with a 3D printed duct angled to direct the exhaust air down towards the transceivers. The SFP transceiver I use to connect to my ISP's fiber gets crazy hot without airflow (like 70 degrees on a cool day after running for a few minutes), so this was a good way to kill two birds with one stone. With the fan setup, the transceiver runs at a pretty stable 35-40 degrees. I'm not sure exactly how much cooler the card itself is running, but before the active cooling the heatsink was very hot to the touch, and with it you can barely tell that it's warm.
😬 If you've got that much storage, please, please tell me you're using ECC memory, because even error-correcting filesystems like ZFS won't save you from bit flips that happen _before_ the data gets written or _after_ it gets read from disk. The risk of bit flips scales fairly logarithmically with RAM die memory density, and linearly with the number of bytes read and written from RAM. I got bitten by that with a NAS that corrupted a good 20% of my data before I noticed, learning the hard way what ECC memory and good backups were for!
That server fan was trying to eat the sound meter
Can't take it on a walk without a muzzle and a leash.
Those transceivers can be power hungry as well.
You could experiment with using a blower fan for this bracket.
I never understood why blower fans on desktops are such a niche idea to this day, especially when systems such as yours and Mini-ITX builds would benefit a lot from them, imo.
Neat! It would also work wonders with my PCIe-adapter-mounted U.2 data center drive, which can get very hot without active cooling. I'm cooling it with a homebrewed hack for now, but your solution is so much nicer!
So I've been monitoring temps on my X520, and the temps never got concerning without a fan. With a fan zip-tied to the fins I did see a slight reduction, but past about 50% fan speed there was no further temperature reduction.
Very COoL. I haven't seen anything that exciting since a girl scout showed up at my door last summer, inebriated. Just saying Thank you for the video. Very sub-Worthy :)
The chips in Mellanox ConnectX-3 and 4 cards are rated to run at up to 105C. I believe Intel's NICs are rated for 103C (at least the X550 is). I've just let them run hot for years on end 24/7 without any issues. If they do ever fail, they're ancient used enterprise cards, they're dirt cheap to replace.
I have like 10 of those cards spread around, and a 40mm Noctua fan attached to the heatsink with some zip ties is more than enough to cool those cards. You don't even need more than 50% fan speed
A high-pressure radial fan would likely be better, and an open-source backplate with a plastic top cover, used with tape to better direct the flow path without having to account for every card model… _ever_, would probably be the nicest way of handling this. The reason an axial fan _could_ be better is that with a thermocouple and a board to figure out all the PWM magic, the fan could be baseboard-controlled and less copper would be required for that.
That's a very clean solution, I like it! I wonder if you could fit one or two 50mm fans into a half-height version. Would be useful for 2U chassis. You can fit a decent CPU cooler in those but the PCIe cards always suffer without server grade airflow.
I improvised a different solution (20 years ago) for my boiling hot passive GeForce3 Ti200: I let a cheap vertical 80mm case fan (its lower third) blow from the side onto the card (as close to its PCB as possible), with the fan's remaining two thirds aimed at the also-hot motherboard northbridge, via a self-built holder (using my kid's old "Metallbaukasten" and nearly zero tools) that I mounted with longer screws (kept from something I took apart) to the regular PCI slot holes (intended for the cards only) ...
Result: no more crashes (the GPU and northbridge were too hot in the default PC config from HP) ...
In your case I would have mounted the horizontal fan lower, to cool the network card and graphics card
1:08... Uh, is that really the name?!
Found the dyslexic one 😄
Funny. 😄
Ok, I admit, I am using 10Gb due to switch networking constraints. That said, my dual QSFP card is bigger and, huh, faster than yours! Dual 40Gbps :P (Pity I can't use it as is)
I was literally thinking "what about a version that you could control with your motherboard", and then you came out with the "dumb" rear bracket.
Bravo sir.
Also a heads up that you can run programs like "FanControl" that could see the temps of your card and change accordingly, or smart controllers with thermal sensors are also a thing.
Your card still looks excellent though, and honestly, if I didn't know this was a single guy I would've thought it was a large company like SilverStone or something. Very impressive.
Isn't FanControl Windows only?
Has the rubber that you clip ever gotten bigger when it sees the only fans bin?
Dang thing took a chunk out of your microphone wind screen.
So I went to AliExpress and found a couple of options. What would be the advantages of using a PCIe fan versus a standard 3-pin/4-pin fan that fits in exactly the same position?
It feels like a waste of a perfectly usable PCIe slot when there are dedicated fan headers already.
Why is a PCIe slot required for a fan? Could a motherboard fan header work?
At the end you show the fan in the BIOS. Does the BIOS automatically pick it up as a system fan?
So why not run some bytes through the NIC and observe the temperature gain??
Ya, I just zip-tied an 80mm fan in between both my expansion cards and have it hard-wired to Molex so it's at 100% all the time. Why buy extra parts when two zip ties work? Really cool bracket though.
But wouldn't you also need to put a 10Gb card in your workstation? I guess unless multiple people are hitting the server (which isn't the case for me).
That's probably why he has two of them
The one and only thing I don't like about my new AMD motherboard is that it only has 2 PCIe slots. I use one for video and the other for a 7TB NVMe drive. The 7TB drive could use a cooling fan.
You... do know the Arctic S4028 are 40x40x28mm "server fans" at 6K and 15K RPM, using normal 4-pin fan connectors, right? Not quite a Delta screamer, and you don't want to stack them on the motherboard (get yourself an Aqua Computer Quadro or at least a SATA-powered fan hub instead). But they're good for using server cards in consumer/workstation cases.
I had random corruptions on my ZFS pool. I thought I got a bad batch of drives and replaced some of them. It was the HBA overheating all along. Oops.
I just strap a 5cm fan to my 10Gb SFP+ card's heatsink with some cable ties; that keeps it well below 65C even in summer. I don't like the idea of using a PCIe slot just for a fan.
Great vid, I'm leaving a like and a subscription.
Can you please share the specs of your server?
.... I almost checked out on my order of used server fans, thinking that those would be neat to use on amp heatsinks and UPSes, then I watched this vid haha
Everything old is new again. I used slot fans back when they mounted in ISA slots, back in the '90s.
You know, a blower-style intake fan would probably be better for closed case solutions that have a shortage of fan locations.
I wonder why the airflow rating on those is not volume per unit of time, like liters or cubic centimeters (or ugh, cubic feet) per minute, but instead linear feet per minute. Not a mechanical engineer myself, I'm just curious.
They don't depend on any particular volume of air being moved by the fan, only on the air inside their heatsink being replaced at a certain rate - which is really a statement about air velocity across the fins, not total volume.
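To connect the two specs: volumetric flow is just that linear velocity times the cross-section the air passes through. A quick Python sketch with made-up numbers (both the LFM figure and the fin cross-section here are assumptions, not from any datasheet):

# Convert a required air velocity (LFM, linear feet per minute) into the
# volumetric flow (CFM) needed through a given heatsink cross-section.
lfm = 400.0                        # assumed required velocity, ft/min
duct_w_mm, duct_h_mm = 60.0, 15.0  # assumed open area between the fins
area_ft2 = (duct_w_mm / 304.8) * (duct_h_mm / 304.8)
cfm = lfm * area_ft2
print(f"{cfm:.1f} CFM through a {duct_w_mm:.0f} x {duct_h_mm:.0f} mm cross-section")

Which is also why velocity is the more useful rating: the same heatsink needs the same LFM across its fins whether it sits in a tiny case or a huge one.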
Just wanted to mention a few more things:
1. The old Intel X520/X550 are not the most power-efficient NICs available. If you go for the X710, its power consumption is lower, and therefore the airflow requirement is lower. You should also have a look at the Mellanox ConnectX-4 Lx and maybe the Chelsio T6225 (though be careful with drivers and firmware; Chelsio is weird, while Mellanox, Intel and Broadcom are usually fine).
2. If you go for an SFP+ NIC, you should avoid SFP+-to-RJ45 modules - they alone consume several watts (2-5W, depending on how new they are), while a DAC, AOC, or even an SFP+ fiber module is under 1W. You might have good cooling for the chip on the card, but your SFP module can still overheat. I've seen an SFP+-to-RJ45 module heat up to 95C (~203F) in a router with airflow, because it was never designed for such power consumption. Just try to avoid mixing copper and fiber, or if you do, use one of those new 10G switches with SFP+ ports as a media converter, as they have more room for a proper heatsink.
What's the GPU you're using in the server there?
I like how the guy doesn't mention that this is just an ad for his own product until 6 minutes in...
It is open source though, which kinda makes it more like advice than an ad, plus he mentioned another solution to the card's overheating problem.
I use gigabit because... I'm not schlepping 4k raw footage around?
First time I saw a server with what looked like a vacuum cleaner hose attached I was dumbfounded. I thought someone was jury-rigging the thing.
3:00 I could have mistaken it for F1 engine sound if I wasn't watching 🤣😄
Dang, I thought you were going to have a PCIe bracket that could mount that server fan and provide the appropriate power for it, since normal fan headers don't work.
Still a fun video though, and now I know that if I ever have an overheated PCIe component I can get a fun little kit for the fan
Do you have a subscription for that only fans box? 😂
And what will you do with this 10 gig card now?
Before this I thought the worst cards by far were HP/QLogic. They absolutely need cooling though.
When these cards overheat in a system, it takes out the entire Windows networking stack.
That is, no 10GbE, no 1GbE, no Wi-Fi. Reboot to restore. Learned that when adding one to the eMachines.
I'll be more interested when a smart/managed multi-gig switch drops below $150 (transceivers included). Unless I'm willing to bite the bullet and get a multi-gig switch with 24 ports to upgrade my network backhaul/spine a smaller managed (LACP is a required feature) multi-gig is my only alternative. Otherwise it's pearls before swine.
My motherboard has a 2.5 gigabit connection. I have no idea if I would get all of that speed from it, since that is faster than most of my devices typically send or receive. I do make use of the 2.5Gb port on the router, but really a lot of that speed goes unnoticed since my internet is less than 1Gb. Still, for most people that is fast enough. Even when downloading gigabytes of software and such, it's really not bad. I know that some people have a much faster connection, but they probably had to give up their first born to afford it.
If you own a 3D printer, I would advise simply looking around for fan adapters for your heatsink size. I tried sharing here, but YouTube deletes my comments if I actually give some useful information.
6 fans using more power than my whole gaming PC? Nah, my GPU alone can use 430 watts with a 15% increased power limit, and my CPU can draw about 230 watts stock, but both components are underclocked for way better efficiency, yet still pretty damn powerful. R9 7950X and RX 7900 XTX
Reminds me of the old Vantec Tornado, which would cut your finger off.
You can take any old fan and screw it into the heatsink with one screw; that's what everyone does. Two screws can work better, but they're a luxury.
“but it hasn’t seen much adoption by home users, mostly due to the high cost.” Isn't it more down to the fact that very few home users have internet speeds in the hundreds of megabits (with 1Gbps connections being the more common need), never mind 10Gbps? Cost might be a factor, but supply won't be helping that.
Random idea: put some old GPU above the network card and increase its fan speed manually so it cools the network card underneath it.
1:16 😂😂 "you might want to skip this brand if you're dyslexic"
3:08 You're measuring my loudness? _Don't you know who I am? I'm a server-fan, snitch!_
3:22 This is your "big" fan? _Ping Pong ain't got_ nothin' _on me!_
10 seconds in and he won me over, that made me blush.
I use the Evercool FOX 1 (SB-F1), it costs ~5 euro and it drops my 100gig NIC temp from 90°C to 55°C ;)
The Intel X550 uses more power and gets a lot hotter than the X710, and most of those cards use the Intel chip. Nice demo of the destructive qualities of Delta fans. Use DAC cables where possible and runs are short; SFP+ adapters get hot. Great job with the fan, and it looks good.
optical modules run very cool as well, just avoid RJ45 ones whenever possible ;)
@ledoynier3694 In my experience in data centres, with fibre network and storage switches, I've only known them to run hot, that is mainly with Cisco switches. For home switches I have a Qnap which runs quite warm with a DAC cable, connected to a Mikrotik (other end of the DAC cable) which was lukewarm. So yes it is true that some combos are ok, but in my experience most run hot, and the problem is a data center has forced cool air, home labs do not.
In the world of computer fans, be a server fan.
3:14 FTW!
Nice to see how the young guy tries to fly.