You're my bro crush for real. Everything from your technical opinions to your sponsors is just great. Thanks for always providing this valuable info
We always use Fibre and not DAC in our Data Centre. The reason being, when connecting between devices you can mix and match the SFPs for compatibility. For example, if you have Cisco and Fortinet devices connected together, you have to get a DAC with the right compatibility coding on each end. If you then change to a Checkpoint firewall, for example, you have to get a new Cisco-to-Checkpoint DAC. With SFPs, you just change the module at one end. Much more scalable.
Plus all the racks are interconnected via fibre patch panels. You can't just keep running DACs everywhere; it would be a big headache.
DACs also have thick cables that are harder to tuck away in racks when using high-density stuff. For fixed installs, AOC is a good option inside a rack, and the ends can be coded for different vendors (e.g. Mellanox on one end and Cisco on the other). I only do SM/MM when I am sure the person installing it will clean the whole path (both the SFPs and the fibre) to ensure it does not burn out in the coming months.
Look into FlexOptix; we use them and they are pretty good so far. Their modules and DACs can be slightly more expensive than the FS ones, but they are self-programmable to pretty much any vendor. So if you swap out a switch for a different brand, just reprogram each module (or do it ahead of time if they are fibre and you have spares).
I always try to use fiber for uplinks on POE switches for lightning protection for the rest of the rack.
That's.... actually a great idea!
If it is lightning, remember that it just passed miles through the best insulator that exists... air. So lightning protection is a fable. But... general brownouts or other power utility surges, sure, but those tend to only apply to the power side, not the Ethernet side.
@@EagleMitch it may have just gone miles through air, but that's in search of the ground's negative potential. Given the choice between a path through a glass fibre (an insulating cable) and a different direction towards the much bigger ground, it's going to head further towards ground. Lightning protection is not a myth, it's a risk mitigator just like anything else; if you decide you don't want to mitigate risk because it's only 95% effective…
I like the idea but I wonder if it’s actually effective given that the equipment is still connected to copper wiring for power.
And I thought DAC stood for Digital to Analog Converter. Or alternatively in my current work world: Distinct Active Cardholders... 😀
GREAT Video. Thanks!
I use fiber in my rack to isolate the copper-based ISP connections and device access switches from the significantly more expensive servers & PoE switches.
I work for an ISP and I have seen MANY destroyed routers, switches and desktops. They were all destroyed because of a power surge over coax or phone lines.
All valid advice if you have a couple runs.
In the datacenter it's fiber all the way. Even in the rack.
You simply cannot run 100 DACs in the same rack; you just do not have the physical space for it, nor a place to put the rest of the cables.
You can, but it sucks ass to troubleshoot and manage. So yes, it can be done, but don't do it. lol
On the other hand, fiber over really short distances can burn out receiver optics if you don't run long cables on a roll or place an attenuator. (like sunglasses for optic fiber)
@ That's an absolute myth. Unless you work with very long range optics, like 40 km+, there's no chance it will ever burn out, as the transmit power is slightly below the maximum receive power on almost all transceivers except a very few ultra-long-range ones.
And inside the rack you will most likely use multimode anyway, which uses LEDs, not even lasers, so there's no way whatsoever that you burn out your optic.
@@FlaxTheSeedOne Well, on a system delivered as a complete set we had Aruba switches with 10G optics and 30 (??) cm optical cables between the switches, set up as fail-over. So 2 of these per 2 switches.
After the 3rd 10G optic was replaced I read about this possible cause and then looked into the historical data in LibreNMS, where the RX signal strength was also logged. And sure enough, the RX gradually degraded over time.
It still could have been the fibre cable itself or the TX of the optic on the other switch, but then it wouldn't explain why replacing just the faulty optic let those RX signal strength logs return to the same starting levels as the first ones.
After replacing these really short cables with some longer ones (10 m), rolled up and tucked away above the switches, it ran for years without issues.
@@TD-er What kind of optics? And it is partly normal for the RX to drop over time, as it's a photo sensor, like a solar cell, which will always degrade over time.
But it should never get to the point where it dies, unless you use high-power optics.
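For anyone weighing the attenuator question in this thread, here is a rough link-budget sketch using typical datasheet figures (assumed values, not taken from the thread): a 10GBASE-LR module launches at roughly -8 to +0.5 dBm and its receiver overloads around +0.5 dBm, so even a near-zero-length patch has no margin left to damage anything, while a 40 km ER-class module can launch at a few dBm against an overload point near -1 dBm, which is exactly the case where a roughly 5 dB pad on a short run is advised.
tx_dbm       = 4.0    # assumed worst-case launch power of a long-reach (ER-class) optic
overload_dbm = -1.0   # assumed receiver overload threshold
link_loss_db = 0.5    # a short in-rack patch lead loses almost nothing
margin_db = overload_dbm - (tx_dbm - link_loss_db)
print(margin_db)      # about -4.5 dB, i.e. roughly a 5 dB attenuator would be needed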
Who would have thought twin-ax would have this long a run :)
Hahha, true!
I've been reading up on DACs, Fibers, and SFP+ but it was super confusing. Your video totally cleared everything up!
For higher density installs the DAC cables can become quite unwieldy. As a rule of thumb, we try to use SFP modules with fiber cables on any rack that is either not going to be permanently static, or is high density. It makes it much easier to route and make changes with the thinner, mixed-length fiber cables.
Compatibility can bite you in the behind. We bought some DACs to go between Intel cards and a Mikrotik switch. Worked great. Accidentally re-ordered the same for Unifi. Intel card to Unifi switch worked fine, but Unifi switch to Unifi Switch was a no-go. Had to order a Unifi-compatible DAC.
Extralink will work with Unifi and Cisco, Zyxel and more sane companies
FS stuff is programmable
When ordering tell them to code them UBNT
I had a similar problem with my HPE Aruba switch, it is hard to find an affordable DAC for these switches. CONBIC DACs work well 😊
we use flexoptics for their programmable modules/dacs etc..
I have had several DAC failures over the years... never had a single fiber failure... the DACs always ran like furnaces, where the fiber was not nearly as toasty. Cannot agree that DAC is the better solution for my clients.
Very thorough, Tom! Tons of great info here.
I used to go for DAC, but have since moved towards fiber. I'm able to use optics based on the end devices and reuse the patch cables as upgrades go on.
Going into things like 40 or 100G? Those DACs take up a lot of real estate compared to fiber
I think it's entirely dependent on how many cables you are running. We switched from 10G on all my servers (8 servers x 6 10G DACs each); it was starting to get unwieldy. However, switching to dual 100G QSFP28 DACs is now two per server, so it's totally manageable. We also moved our switches to be closer to the middle instead of ToR, and that allowed for much shorter cable runs. But, because DACs don't easily come in custom lengths, they tend to take up more space. However, when I previously wired a datacenter, we had to order a whole slew of custom lengths of every kind (and color, oddly), and it made for really clean cabling. Also, fiber is easier to label when you use Labelcores (Panduit).
Great video! From the cost perspective fiber has the advantage that you buy the modules once and you can have multiple lengths of cheap fiber cable and reuse the same modules. This is the single reason I use fiber over DAC in my homelab, but that is just my preference.
I also think FS.com should get some credit for their "cheap" SFP programming solution. Just buy some generic modules and program them for the equipment you need.
A big advantage of running fiber in a rack is that it's easy to terminate them on a keystone patch panel using couplers. Looks much better and cleaner than a bunch of hanging wires.
Not to mention, in high density/capacity racks, having 100 DAC/Twinax cables is horrible for cable management; they can't bend nearly as tightly as AOC/fiber, are much heavier, and can actually impede airflow on switches using front-to-rear airflow. I am definitely an advocate for AOC and fiber. It is a bit more money, but there are plenty of good companies for 3rd-party transceivers that work amazingly. I now only use fiber when I can, even in my home lab.
10GBASE-T over RJ45 needs to encode pairs of consecutive PAM16 symbols across 4 twisted pairs through magnetic transformers and filters, transmit them at 800 MBd, receive them through magnetic transformers and filters, then decode those symbol pairs along with error correction before leaving the network card through the SERDES PHY and going to your system. That computation is not cheap in power nor in latency. And the magnetic transformer part is quite lossy compared to doing it without a transformer.
10GbE DACs use NRZ, which is a lot less computationally expensive for a module to translate to the SERDES PHY, and requires no magnetics that I know of, which decreases the power usage.
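Rough arithmetic on those figures (assuming the usual 10GBASE-T numbers: 800 MBd per pair, with DSQ128 carrying 7 bits per pair of PAM16 symbols):
pairs       = 4
baud        = 800e6      # PAM16 symbols per second, per pair
bits_per_2d = 7          # one DSQ128 symbol = two PAM16 symbols = 7 bits
raw_bps = pairs * (baud / 2) * bits_per_2d
print(raw_bps / 1e9)     # ~11.2 Gb/s on the wire, leaving roughly 1.2 Gb/s
                         # above the 10 Gb/s payload for LDPC parity and framing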
First of all, love the idea of using a blank patch panel to manage DAC and fibre cables. I normally use cable management trays but that would be neater.
Two other things, however, on the fibre. Between racks I find it is better to use fibre, not due to the isolation, but because I can buy trunk fibres and easily run 24, 48, or 96 lines between racks in one physical cable and break it out at each end to LC connectors, thus keeping the overhead runs fuller and cleaner.
I also try to use fibre with media converters on external devices such as Starlink, cellular modems with external antennas, etc., just to try to reduce electrical paths for lightning strikes.
In the rack? DAC. For longer distances, Fibre. When needed, RJ45. And I have an Emulex based card in my main PC, and Connectx-3 in my servers.
Mellanox, Intel X550, and Intel X710 are where it's at. Updating the card firmware can be kind of a pain, but with enough editing and research it is possible.
Love DACs when they're compatible! I've had some weirdness with Unifi to Brocade that was solved moving to Optical.
Did u have the right licences 😂
@@TomR459 To 10GB the SFP+ ports? Yes
They would work, I would just get line errors. Using the same ports, moving to Optical and no errors. It was weird.
Absolutely, when you are in a mixed stack environment I have seen communication issues. When you are in a matched environment it works well.
Where I work we use fiber to link switches together in IDFs since it's an industrial environment with very high EMI, as well as shielded cat6 for runs leaving the IDF.
Great info thanks!
There's also a design on Printables to 3D print keystones for the DAC cables with a patch panel since you can't use a coupler like with RJ45 or fiber.
I am using fiber to connect my 2 racks together. I have one in the garage and my office closet. The rest of my house is all CAT6 and the only 10g machine is my personal gaming machine.
I know you can’t go wrong with Intel NICs but the Mellanox cards are also high quality and have great industry driver support. They are also price competitive on the second hand market.
Also, I purchased a Mellanox SFP+ card for my QNAP NAS specifically to lower the heat/power requirements for my UniFi Aggregation switch. DAC cables now make this a lot cooler.
Mellanox are good as well.
10:04: Maybe a brush panel would look nicer and work as well?
Like the idea of the blank keystone faceplate. Always had a need for the RJ45 modules but damn they do get hot. I've always needed them for some switches, as there's just never enough SFP+ ports on the switches in SMB.
I just noticed that iXsystems discourages the use of DAC.
Great explanation!
I could have used this information about 6 months ago when first setting up 5gbit wan D: Nice videjya! :)
I'm running DAC for everything inside my rack, I started with DAC when I first started doing 10g networking. But I've recently started doing optical or AOC to switches outside my rack to protect from lightning strikes.
AOC is horrible, you have these heavy metal things on the end of your fibre, complete nightmare
@@jonathanbuzzard1376 Huh, you mean the transceivers? They are needed with fibre as well, the only difference is that they are permanently attached.
I've had bad luck with DAC, so I've moved everything to fiber.
I was running DAC with everything, I changed everything over to fiber just because I thought it looked better and would be better. I guess after this video I am going back to my DAC, thanks for the great info.
Fiber does look cooler
lol yes it does that was the main reason I changed.
When you go to QSFP28 DAC for 100G, the DAC cable is much heavier and stiffer. I found them very cumbersome vs fiber even for same rack use.
A DAC link can be swapped for fiber (both ends at once) & back again but if it works (already purchased), don't bother changing. What was missed was link latency which is slower for RJ45 vs fiber/passive-DAC over short distances. I'm not sure if active DAC is between but closer to fiber in latency. Depends on your usage & other factors are more significant. Copper is typically easier to bend sharper & pull through than fiber. Small differences not deal breakers 1 way or another.
Everything you said, I agree 100% 💯
I prefer fiber over DAC for cable density reasons. DACs are hard to route and are cumbersome when connecting to other network equipment such as switches.
DACs are also great because they tend to be less expensive than transceivers/optical when running shorter distances.
I use DAC for my 40G network as it's within my rack and is awesome; 10G is RJ45. The switches suck a bit of power as they are older. I use a mix of Intel chipset and Mellanox (now NVIDIA) cards; 40G is just Mellanox ConnectX-3/Pro cards.
Mellanox connect-x 4 is also solid though.
I do have some of those in my Linux systems, they work well.
I recently upgraded my pfSense with dual 10G to take advantage of AT&T over-provisioning on fiber internet here. I am using DAC cables for everything 10G, with two exceptions. My main PC has an old Asus ROG 10G NIC with RJ45, so I have a 10Gtek SFP+ to RJ45 module in my Unifi switch for that connection. In my pfSense box, DAC from the Intel X710 to the Unifi switch. Then a Unifi SFP+ to RJ45 in the other port to my modem/router thing. I used this on a recommendation from somebody else online who did the same setup. It will negotiate down to 5G on the modem.
Why all the trouble? Well, I pay for 1G internet, but AT&T delivers 1.2-1.5G to the house. So, if you connect to the modem via the 5G port, you get that little extra sent to the home that the normal 1G ports cannot pass on. On average I get about 1.2-1.3G through it.
I like dac but there is always issue between connect different vendor together.
11:00 what about latency?...
DACs are usually better for latency but the differences are small.
@@SoWhichUsernameIsNot That's why I was surprised to find that fiber has about 10% lower latency. I've tested both 10Gb/s and 25Gb/s with Mellanox ConnectX-3 and 4 respectively. With a regular LC fiber cable, not AOC. Adapters connected directly without a switch. Cable length 2-3m. I think I used *qperf* for the measurement, rather than ping; I don't know if that makes a difference. The 10% as such is also not something I'd lose sleep over. The moral here might be: do your own measurement! Also, I'd rather have a somewhat higher RTT latency with a protocol running over RDMA than a lower latency with regular TCP/UDP. Then the entire stack will (presumably) have lower latency, and that's what you are ultimately interested in. There's not much money to be made by using *ping* alone.
BTW, both should have much better latency than 10GBASE-T. The encoding scheme is quite a bit more complicated than just the stream of bits in the DAC/AOC/fibre case. But I haven't tested it myself for exact numbers. And it wouldn't be directly comparable if I cannot use the 10GBASE-T transceivers on the same network cards directly.
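For anyone wanting to repeat that kind of test, a minimal qperf run looks roughly like this (the address is a placeholder and qperf has to be installed on both hosts):
qperf                                  # on the receiving host, start the daemon with no arguments
qperf 192.168.10.20 tcp_lat tcp_bw     # on the sending host; udp_lat and udp_bw also exist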
Tom, i love the really short DAC you had at the end.... any link ?
amzn.to/3PX8NSI
@@LAWRENCESYSTEMS Wow! They even come in different colors!
@@LAWRENCESYSTEMS Thanks
I use 0.25m DACs (from 10GTek) to link my two Unifi switches and their associated aggregation switch. This length will neatly connect between the units with enough space for a 1U patch panel between them. It has a nice gentle curve with absolutely no wasted slack (rather like the short etherlight patch cables Unifi shows in all its switch adverts).
10BASE5 with vampire tap for the win 🦇🧛
This man networks.
Especially nice since I think they allow using that coax as a structural member in the building itself! 😁
When I got my Unifi Aggregation Switch (8 port SFP+) and watched one of Network Chuck's videos (yes, it is all Network Chuck's fault), I connected all three of my switches to the UAS with DACs. That part of the plan worked as I had wanted. However, I could not use DACs to connect my SonicWALL TZ470 (no wireless) to the UAS; there was no connection. The DACs were recognized on the UAS end, but not the SonicWALL end. The UAS ports were SFP+, but the SonicWALL ports were 2.5 Gb/s ports and would not recognize the SFP+ DACs. Now, I don't think that DACs come with two different SFPs on each end. Eventually, 2.5 Gb/s optical fiber SFPs became available, so I bought two of them and two optical fiber SFP+. With the right fiber patch cable, I had all of the connectivity I needed. It did work with four SFPs, but only at 1 Gb/s, and that was not enough for bragging rights. You may wonder why I did not go for RJ45 pluggables. Four RJ45s cost at least $60 each. The optical fiber pluggables were less than $20 each, and I have not found 2.5 Gb/s RJ45 ones.
I work in data centers. DAC never goes from rack to rack; it always stays in the same rack. AOC is used if you want to go to a different rack, or use fiber.
Thanks for covering the cable management downside of DACs, it's a definite drawback. Also, you can pipe fiber through a keystone. I do wonder if putting a patch panel behind a brush panel would look nice. I'm just using a brush panel right now for the DACs (my NAS isn't rack mounted, and my desktop is "next door"), but maybe a keystone panel behind it would keep the wires in one spot. Hmm.
fwiw, my goto for sfp28 is mellanox, but I haven't tackled the qsfp28 ports yet, so, maybe I'll try the intel choice there, thanks for the advice.
Preach!
Can someone recommend good SFP28 long range transceivers to buy for single mode fiber? Don't want to overpay. I need to connect to a ConnectX card or an Nvidia switch. I can borrow a device to reprogram the transceiver if needed. The prices seem to range wildly.
I have used FiberStore for many, many 10km+ SM links. Absolutely love their products.
i only use dac in the rack because of the fact it rhymes
My office and both colo facilities are full of DAC cables… but since my switches are also SFP+ and Synology is usually Ethernet, I have a handful of 10Gtek SFP+ to 10GbE modules.
👍Thanks!
Did you personally test the energy consumption? Or are you saying it based on information you read on the Internet?
Verified by seeing the power draw details from the switch port status.
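For anyone whose switch doesn't report per-port power, a rough cross-check from a Linux host is to dump the module's own EEPROM/DOM page (a sketch, assuming the NIC driver exposes it; this shows what the module advertises, such as power class and optical levels, rather than a measured electrical draw):
ethtool -m eth0     # eth0 is a placeholder for the SFP+/QSFP interface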
I love DACs. I'll use them as much as I can. However I don't have much experience with AOC. I've found that they are less supported on cards than DAC and Fibre modules.
My very simple home network uses SFP+ for 10gbe and RJ45 for 1/2.5. None of the 10gbe runs are long so mostly DAC's with 1 fiber run to my workstation and 1 to my bench. At the datacenter it's all fiber though (if it's not RJ45) because it's a lot easier to swap modules if compatibility is an issue rather than have to order a new DAC/AOC and it's cheaper when buying lots of premade fiber in bulk, never mind a bundle of fiber is a lot easier to manage than a bundle of DAC's. I'll leave my past issues with Intel aside and just say I've been happy with Mellanox cx3/4 cards.
It's interesting that this video was released today. I have a UDM-SE and have been using an SFP+ DAC cable that I ordered from Amazon. Today, my new USW-Pro-HD-24-PoE switch arrived. When I tried to connect that DAC, which had been working with the UDM-SE, to the new switch, it didn't work. I tried one of the other SFP+ ports, and a link was established. That was just a dry run before putting the switch into the rack. After moving the switch into the rack, the DAC cable wouldn't work in any of the 4 SFP+ ports! I put it back on the UDM-SE, and it worked. After multiple attempts with the SFP+ ports on the switch, it finally connected.
Why would a DAC function with the UDM-SE but not with the USW-Pro-HD-24-PoE switch consistently?
Didn't see any transceivers inside that fiber module
I am still a newb, just breaking into the enterprise types of hardware, and feel like SFP+ to RJ45 is a prime example of "just because you can do something doesn't mean you should". All my attempts at it only allowed an SFP+ adapter on one side. Maybe I'm doing it wrong, but DAC just worked. Maybe I'll touch fiber one day?
Using DAC and Ethernet inside the rack and then fiber optic at key points in my house/lab, including some computers. I just love the low latency.
Apparently there is a linux patch going into the mainline kernel that saves power used by the CPU/software w.r.t. networking.
DAC means digital to analogue converter
Yup, or Direct air capture (DAC), but in the context of this video DAC stands for Direct Attach Copper
For some reason I have nothing but trouble with UniFi DAC between UniFi NVR and UniFi link aggregation switch
Trust me when I say, avoid the DAC and just go Single Mode Fiber and Optics long term. DAC is not keeping up with the higher speeds. Sure there is AOC cabling (I actually prefer it over DAC) but when you get to 100Gbps over an SFP56-DD port, you start thinking long and hard about that Single Mode! I heavily invested in 25G DAC but now I'm having to replace it all.
@8:25 - I have that one with one DAC and two 10gbit RJ45 adapters and hoooooolllyyy shit do those get hot. Touch it for more than a few seconds and it would probably burn you.
I don't understand why TP-Link gives me an Omada ER-7206 router for 190 euro with an SFP module, which is a 24-year-old standard and will be off the market in the future?!?
I'm a "recycling" guy, with 10G I use fiber everywhere I can. Refurbished Intel card are cheap, refurbished intel sfp (OM3/4) are cheap, new optical fiber are cheap (any type). If it's too short you just have to replace the fiber or use a coupler.
The only inconvenient is that some card lock the firmware with specific sfp transceiver brand (could also be true with the switch). Like intel card accept only intel optical transceiver while accepting any dac and RJ45 transceiver, but workaround exists.
I'm just pissed off that my ISP reouter only has a 10G RJ45 connector instead of SFP+. I had to buy an expensive RJ45 transceiver which is really warm
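On the Intel transceiver lock alluded to above: the commonly cited workaround for ixgbe-based cards (X520/82599 class, an assumption here since the exact card isn't named) is the driver's allow_unsupported_sfp module parameter, roughly:
modprobe ixgbe allow_unsupported_sfp=1                                      # one-off, as root
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf   # persist across reboots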
I've had an interesting issue where using a DAC to uplink between Ubiquiti EdgeSwitch 16XG's and EdgeSwitch 48-port switches would cause issues with Dante traffic (late packets, dropping from Dante Controller, audio dropouts). This remained constant between DAC brands (10GTek, Cable Matters, and a few other I can't think of), and between a whole campus of 16XG's and 48-ports.
Issues were persistent after DSCP and IGMP Snooping were enabled, and Wireshark showed those DSCP values being correctly assigned. Weirdly enough, all other traffic was fine. As soon as we changed to SFP+ Fiber modules, all of the issues went away. I'm not sure if the issue could be some sort of switch firmware issue, or how DACs do their signaling.
Would love to see if anyone else has any other weird issues.
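If anyone wants to spot-check marking on a similar Dante setup, a quick tshark filter works (a sketch assuming Dante's usual defaults of DSCP 46/EF for audio and 56/CS7 for PTP clocking, with eth0 as a placeholder capture interface):
tshark -i eth0 -Y "ip.dsfield.dscp == 46"    # audio packets should show up marked EF
tshark -i eth0 -Y "ip.dsfield.dscp == 56"    # PTP / clock traffic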
I have had bad experiences with DACs, mainly those FS ones; failing is not fun at 2am. I rarely have failures with straight SFP/SFP+, so I've replaced most DACs with fiber SFPs these days.
I own a netgear 12 port 10gbe switch that runs at 40w even with nothing going on.
What will a sfp+ or qsfp switch use as a base?
I’d love to run fibre to my desktop 20m away, and use dac to my servers but my supermicro board has the 10gbe onboard.
I will look for a sfp+ switch … 1x 10gbe to supermicro and then sfp dac to main server and fibre to my pc
One more reason for DAC is the lower latency. With fiber, the conversion from electrical to optical and back to electrical takes just a little bit longer.
Below 10m lengths Twinax/DAC offer lower latency not because of this conversion but because of the optical path length. In standard fibre cables light does not travel straight through the center of the glass but instead “zig zags” off the walls (the total internal reflection process).
@@mwahlert That might be true, but the speed of electricity in the wire is slightly less than the speed of light in the fiber.
The latency DOES come from the electricity > light > electricity steps.
That is why short runs with DAC are lower latency, because at those short lengths, the conversion losses of fiber add more latency than the slower speed of electricity.
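A rough back-of-envelope on the medium itself (assumed typical values, not taken from this thread: glass refractive index about 1.47, twinax velocity factor about 0.7): the per-metre propagation delay is nearly identical either way, so over a 2-3 m run any measurable difference comes from the electronics at the ends rather than the cable.
c = 299_792_458                # m/s, speed of light in vacuum
v_fiber  = c / 1.47            # ~2.04e8 m/s, about 4.9 ns per metre
v_twinax = 0.7 * c             # ~2.10e8 m/s, about 4.8 ns per metre
for name, v in (("fiber", v_fiber), ("twinax", v_twinax)):
    print(name, 3 / v * 1e9, "ns over 3 m")   # both land around 14-15 ns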
Tried a 1.5m 10Gb DAC cable from 10Gtek and local transfer speeds went down to about 350Mbps. Went back to 2 x Ubiquiti SM 10Gb transceivers and a 2m SM fiber patch cable (yellow) and my transfer speed went back up to 800-960Mbps. Not sure if it was a bad DAC or EMI, because the cable goes right by a power strip. 6GB single file between two servers with an Intel X710 and NVMe drives on each side. Just an interesting issue.
Not sure you should have read the ad saying "Microcenter will have 5090s"... Microcenter had all of 80-something total across the country, so they were gone instantly.
And then there are latency differences 😊
DAC is the way. then rj45. then Fiber
Lots of DAC in the rack, and then fiber and CAT6 to other rooms. Why oh why can't a Mac Mini M4 come with an SFP+ cage 😂
Thank you for another excellent video. So I am using SFP+ and SFP28 transceivers and OM3 cables all over my house. Am I being really inefficient in terms of power usage?
Be aware of vendor lock-down on "compatible" modules and DACs in both cards and switches. Tom should have addressed this fact.
Probably should have mentioned it, but it's mostly an issue for people who use Cisco.
@@LAWRENCESYSTEMS in my experience, I've had to hack the EEPROM of some Intel cards to make them work with non-Intel SFPs. Arista also plays these games but luckily I acquired a former Facebook switch which they didn't clean the configuration from and provided me with a secret "unlock" key for non-Arista SFP/DACs.
For me, whatever is cheapest.
Back DAT NAS UP?
I advise use of twinax/DAC from cube pod leaf switches to workstations because it's so robust and durable. It can survive angry end users kicking or knocking it around. In rack though, it's kind of thick to route...
DACs in the rack, fiber outside of it.
It is also lower latency but that only matters for longer distance.
DAC > fiber > rj45 adapter.
Once I discovered DACs, it was the end of Ethernet in the rack.
I'm not an Nvidia fan, but Mellanox seems to be a lot better than the Intel stuff.
Jeezus, no, just no. Intel 10G SFP cards have some nasty firmware issues on both BSD and Linux.
On BSD it's a bit more hidden and can result in some weird behaviour that would take too long to explain; the TL;DR is that it's a firmware and driver issue related to LLDP.
On Linux there's a similar issue that caps the card at 6gbit when bonding; you need to disable LLDP on every single link.
Mellanox is the answer; a ConnectX-3 or similar is equally compatible without issues.
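For anyone hitting the X710/i40e LLDP behaviour described above, the usual per-interface mitigation is turning off the firmware LLDP agent (a sketch; the interface name is a placeholder and the flag availability depends on driver/firmware version):
ethtool --set-priv-flags eth0 disable-fw-lldp on
ethtool --show-priv-flags eth0     # confirm the flag took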
If it's a short distance within the rack, I'll just stick to RJ45. The cables are cheap and readily available everywhere. Works with patch panels. You can even make your own quick replacement cable in an emergency. And you can even step down to a Cat5e cable at 1 gig as a last resort. You can't do any of that with DAC, and it's limited to approx. 10m max. Then, finding a switch with 24 or 48 SFP ports is also very difficult, and they can be very, very expensive compared to an all-RJ45 switch.
For less than 10G things such as IPMI, RJ45 is fine; it's for those faster 10G and beyond connections that DAC is a better choice.
Agg switches tend to be all sfp. A lot of rack switches have a few sfp for up-link.
Video sounded like a "use dac INSTEAD OF fiber IN YOUR RACK" message. The only way to "just stick with rj45" in the context of the video would require using an SFP to RJ45 module, which isn't a "just" situation. Sounds like you missed the purpose of the video.
bruh, I'm sticking with RJ-45
DAC is great for a short run. But if you don't need 40G or higher, I'd just stick with Cat6a and some SFP+ RJ45 adapters.
2:00 mellanox >> intel
Yes, technically, but RJ45 10gig has ALWAYS had heat issues; I had some errors from my personal NAS and saw the card had reached 80C!!!! Now I'm using an Intel(?) 10 gig SFP with a passive DAC.
I have a silly question: can you hook a USB-C male inspection camera to an RJ45?
I work on long vehicles and occasionally need to monitor things that I can't reach or see from up to 60 feet away, like suspension parts. Obviously I can't have someone hang on while I drive, and the sewer cameras are too bulky to set up and attend to while driving. But my USB-C inspection camera is perfect. Can I make it work?
"Why Your Rack Needs DAC in 2025" - conclusion: you dont need DAC
Nice Rack 😂😂
Fiber. You're welcome, saved you 11:40m
Except you're wrong