Saving this video for after I finally manage to upgrade from 1 gig to 10.
I took notes on this as if I was in a lecture.
Man. I spent hours today researching 10/15/40gb and you just presented more information in 15min than I could piece together from multiple dozens of resources and googling all day. Amazing!
This guy’s so nice for sitting down and sharing his knowledge with us
Let's GO! Love the content Jeff, keep it coming. I already went through the 50 page or so forum thread of your house build and it was amazing! Hopefully one day we'll have a video about it ;)
Where is that?
I found this to be a very informative video, one of the best on fiber I have seen recently!
Am I getting old? I still remember going from 10M to 100M and it being a game-changer. 😭
Try dealing with Token Ring and vampire taps... 🥴
@@chrisnelson414 When I was a kid, my school had Token Ring. I never dealt with that other than plugging those crazy proprietary plugs in. Eventually the first network I implemented myself was 10Base2 coax with the T connectors and terminators on the ends. We were playing Doom over LAN at a local college for a LAN party. So yeah, collisions were by design in those days. Amazing how far we've come. :)
@@chrisnelson414 Yeah, weren't those fun? I don't miss the token ring or coax setups.
I remember going from LocalTalk (for Macs over phone cabling) at 230 kb/s to 10Base-T. Woo!
I studied token ring for the Net+ even though it was already obsolete.
10g has been perfect for my home network. 40g will eventually be fun to mess around with if I have storage that's quick enough to take advantage of the new found speed.
My wallet watching this video: don't you dare 🗿
my wallet just said girl, that’s not possible 🤣
I’m just waiting for Jeff to release a masterclass in data centre architecture. I would gladly pay this man for a masterclass
Looking forward to seeing more of your homelab insights!
Love your channel Jeff. Such great stuff!
i'm taking a basic network class because i think even the average person should understand the concepts of how things work. said all of that to say i was able to keep up with some of what was talked about, like switches, multimode, sfp, transceiver, etc. so at least some of what i’m learning is sticking! thanks for these videos jeff!
We run 40G at work, and have a Dell S6100-ON - it's a chunky, loud, power-hungry switch, but it has replaceable modules that can go from 16x40GbE up to 8x100GbE - which is nice!
Yea, that is nice to have the upgrade path! The Arista 40gs are also super power hungry and loud. Sounds like an aircraft carrier.
Always learn something new from your videos.
Awesome video keep up the great content!
I recently got ConnectX-3 cards for my NAS and for my Proxmox. The NAS provides iSCSI to the Proxmox so it was actually a very good change, I just need to plan my way to get more SSDs into my NAS. I'm planning to start a small channel to document the progress of my homelab :)
Great video as always! Cheers from Brazil.
Thanks for such an informative video. Unfortunately a lot of those enterprise switches sound like a jet engine. In my current home I really don't have anywhere to locate them; I'm hoping to buy a new home over the next year that will have a basement. However, IMO compared to 10 gig, 40 is the way to go due to the low cost of the gear.
Love the video and thank you so much for sharing your knowledge. You have me thinking how I can justify 40gig in my home. Still can't imagine the need, at least for my use cases. The 10gig I have now is barely used. I imagine I'd need a few U.2 SSDs in a RAID to actually even reach half of the bandwidth. Also, imagine the amount of electricity and heat both the adapters and switch would create. Just can't think of any good reason to make this happen. Again, for my use cases.
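Quick back-of-envelope on that bandwidth point, as a minimal Python sketch. The per-drive throughput and overhead numbers are assumptions for illustration (not measurements), so adjust them for your own drives:

```python
# Rough check: how many U.2 NVMe drives does it take to fill half of a 40GbE link?
# Assumptions: ~5% protocol/framing overhead, ~3 GB/s sequential read per drive.

LINE_RATE_GBPS = 40        # 40GbE line rate in gigabits per second
USABLE_FRACTION = 0.95     # assumed fraction left after framing/protocol overhead
DRIVE_READ_GBYTES = 3.0    # assumed sequential read of one U.2 NVMe drive, GB/s

usable_gbytes_per_s = LINE_RATE_GBPS / 8 * USABLE_FRACTION      # ~4.75 GB/s of payload
drives_for_half_link = (usable_gbytes_per_s / 2) / DRIVE_READ_GBYTES

print(f"Usable 40GbE payload: {usable_gbytes_per_s:.2f} GB/s")
print(f"Drives needed to hit half the link: {drives_for_half_link:.1f}")
```

With those assumed numbers, a single fast U.2 drive already lands near half of 40GbE on sequential reads; in practice it's random I/O, RAID overhead, and the network stack that eat into it.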
Jeff we need to know your backstory ❤❤
Great video quality! Can I ask what lens you're using?
The separation & falloff is incredible - not too much, not too little.
It is a Sony A7 camera with the Sony F1.4 50mm fixed lens.
@jeffsponaugle6339 great lens!
I recently upgraded my rack to a new OPNSense router with 40gb and 100gb dual headed ports (Intel Converged). The 40gb port feeds my 48 port switch which goes through the house, and the 100gb goes between the router and the main server.
Welcome to the 40G club! I have a Cisco 9500, half 40G half 100G, but still can only pass 16Gbps 🤷♀️
Awesome stuff, really like this kind of content - a lot of info in relatively short format :)
Jeff, will you at some point explain proper network storage topology? For example, do you route your storage via L3, and how do you segment MTU 9000 traffic vs MTU 1500 traffic?
Very knowledgeable, great content. I am planning to upgrade my entire network and encrypt all my connections, do you have any recommendations?
Great video. Subscribed!
ConnectX-3 actually often goes below $20 now.
About $30-40 is a common price range for ConnectX-4 Lx nowadays, and for about $50 you can find ConnectX-4 1x50 or 1x100.
That is great..... and optics on ebay are surprisingly inexpensive.
@ yup, but for 100G your only cheap option is CWDM4 modules, those I've seen for about $5 per module ($3, but only about 60% working, therefore the final cost was about $5-5.5 per module)
@@jeffsponaugle6339 so far in my experience the most problematic part is finding good and cheap switches. There are a lot of 40/100G switches on the used market for a reasonable price (Celestica DX010 for example), but they are loud and power-hungry, not very home-friendly.
There are quiet modern ones, like from FS or Mikrotik, but price-wise they are triple the cost of the Celestica, if not more...
And there is almost no solution for 25G networking; it is easier to get a 100G switch and breakout cables.
Love these videos
Audio volume is a bit low on this vid, good thing I have local boost to work around it.
Ah yea, I forgot to equalize it! will fix.
Amazon is beyond 100G switches. They're at like 200G or 400G at the rack layer. I think the other hyperscalers are the same or similar. Likely 200G hardware will be rotating out soon; 100G stuff is already being EOL'd.
Yea, I have started to see 100G switches that are not crazy expensive. 40G is still the cheapest, but I would expect 100G to start to show up more.
@@jeffsponaugle6339 don't be surprised if you start to see 200G gear soon. Amazon has already been using switches capable of 400G/port+ for a while. In fact, they're doing 1600Gbps on their instances using Elastic Fabric Adapter for Trn1 (Trainium) and 3.2Tbps on Trn2. They have other non-EFA stuff supporting 400Gbps, which means they're likely going to EOL products that only run at 100G and 200G per port as the 100G equipment ages...especially higher up the stack as we get to the core and agg switches all the way out to the edge switches. They've all got to be able to support higher throughput at the edge to support everything beneath.
TL;DR: you're going to start seeing 200G and 400G switches before too long as Amazon is likely cycling out of those switches higher up in the core and agg layers of the EC2 Network.
Oh...also, the only problem I have with going with 40Gbps (and mind you, I have some 40Gbps, so it's not a knock on this video) is that you pigeonhole your solution into the 10Gbps standard, instead of the 25Gbps optic standard that everyone is shifting to via SFP28. The 50G and 100G switches are not so much more expensive that they're not worth it. Granted, the SFP28 optics will also run in a 10G optic slot (i.e. SFP+) and QSFP28 will work in QSFP, so you could go with SFP28 optics in 40Gbps switches and then just swap out switches when the price of them comes down more.
That actually might be the BEST way to go about migrating or planning for fiber. Buy SFP28 and/or QSFP28 transceivers/DAC cables and then the 50G and 100G upgrade paths aren't as expensive down the road.
100GbE switches are easy to find refurbished at reasonable prices, but power consumption is high and the PCIe cards are quite pricey. At work in our racks that's not a real problem because the capability is needed and pays for itself, but I would not go 100 at home.
@ yeah, probably 99.9% of people don’t need 100G at home. I wouldn’t go to 100 either, but 25Gbps or even 50Gbps would be something I’d consider if I wanted to do some serious network file operations (like running an HPC cluster)
That SFP module vendor lock-in is indeed annoying. I recently upgraded the core link from my server to my Cisco switch to 10G but only had HP SFP modules. Took me a while to figure out the IOS command to allow non-Cisco SFP modules to work. Unfortunately the switches I am using (Catalyst 3850s) only have two 10G SFP ports in their network modules, so they are pretty limiting in that way. I have been looking out for cheap 4x10G network modules for the 3850 but they don't seem very common.
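For anyone fighting the same lock-in, the commands usually quoted for Catalyst IOS / IOS-XE are below. This is only a sketch of common community advice, not something from the video; these hidden commands may or may not exist on your particular platform and software version, so verify before relying on them.

```
! Commonly cited (hidden) commands to allow third-party optics on Catalyst switches
configure terminal
 service unsupported-transceiver
 no errdisable detect cause gbic-invalid   ! keep the port from going err-disabled on unknown optics
end
write memory
```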
The key required for unlocking unsupported transceivers on the Arista switches can be found on the internet very easily. I have multiple switches with the newest EOS version and all have unsupported transceivers enabled.
I assume you mean the 'service unsupported-transceiver wiprolabs' one?
@jeffsponaugle6339 There are multiple codes you can find online, as Arista uses a different code for every customer. I always use the EMC code, but the wiprolabs one should work also.
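For reference, the EOS syntax that gets passed around is a single config-mode command taking a company name and a matching key. This is only a sketch of the form; <company-name> and <key> are placeholders, since the actual key values are per-customer as noted above and aren't reproduced here.

```
! Arista EOS, config mode - unlock third-party optics (placeholders, key intentionally omitted)
configure terminal
   service unsupported-transceiver <company-name> <key>
end
```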
amazing video. thanks
They also make QSFP+ 40GbE to 4x SFP+ breakout direct attach cables, which can be bought on eBay too.
Excellent video, Jeff. What is the model # of the Arista switch?
That is a 7050QX-32, although there are a couple of those series that are pretty inexpensive.
I was looking at the 7150S series and Agile Ports, but power draw is over 150 watts at 50% load. I think I will go with the 7250SX-64. Four 40Gb ports will be enough for me.
@@philarmishaw3730 Yea those Aristas use a lot of power at idle!
I have a mini (or micro) version of your network. All UniFi for APs, UDM Pro, and 1Gb PoE switches. I took the 10GbE LAN port from the UDM Pro and connected it to a Ruckus ICX7250 that has 8 10Gb ports that feed my servers. I have a 1Gb connection between the UDM Pro and my UniFi switches. This way I can do anything to the Ruckus switch without affecting the home side. Works great, but short of buying a UniFi Aggregation Pro switch and going 25Gb NICs, I'm thinking an enterprise 10Gb switch with some 40Gb ports. It would be nice to go Ruckus ICX, but they're still too expensive. Lots of research to do, but that's half the fun.
This video made me buy a 40 gigabit switch 3 minutes in. Found a good-looking C3164Q for $100 from an electronics recycler and couldn't help myself.
Nice!
This guy is so nice, willing to sit down and share his knowledge with us
Kewl! Just jumped today from 1GbE to 10GbE... Let's see when the prices for the QSFP modules drop 🙂
Nice hairstyles
What's the astro map you have up on your wall?
If my home internet service provider only gives 100Mbps for the internet connection, can this 40Gbps gear turn that 100Mbps into 40Gbps?
I was planning on a direct 10gb connection from my pve to the nas box and this made me realize I can simply go 40gb.
There are usually 2 "physical" form factors in most switches today (except in datacenter or long-haul transmission):
SFP (1 lane) and QSFP (4 lanes).
40G for a cheap lab thing is OK, but for a deployment I suggest going straight to 100GbE (QSFP28), not 40GbE; the cost is not much different now (only the optics modules will be quite a bit more expensive).
Usually a QSFP28 port is also compatible with QSFP, so you can plug in 40G optics.
Absolutely, for a real datacenter no one would do 40G now, 100G/200G/400G is the path... but for homelabs looking at cheap retired hardware it can be a great option. All of the 40G you see on eBay is there because the DCs upgraded to 100G/200G/400G.
@@jeffsponaugle6339 Retired hardware at 10G is cheap, yes, but it may be very old and very inefficient.
So for retired data center switches, I suggest only buying 25/100GbE and up (they usually accept 10/40GbE pluggables anyway). For anything slower than 25/100GbE, just grab new, cheap, efficient switches instead; 10/40GbE in the retired hardware market is very old, power hungry, and loud as heck.
100/200/400GbE is for hyperscale/AI. Unless you're running storage over Ethernet, 25GbE is pretty much enough for typical servers, except for high-density VM hosts.
Do you have 40G around your entire house, or just in your lab? Building a new server room as I’ve run out of space, curious what you would recommend for future-proofing.
40G is pretty much dead for the future, which means you'll find a lot of cheap 2nd hand gear for at least the next several years.
most datacenter gear is moving to 25G and 100G, with 400G showing up on the high end stuff. so if you're really thinking long term, get 25G since it'll normally do 10/25, and gear that does 100G will usually support 4x25G breakouts.
Yea, 100G is starting to trickle down, and those prices should come down. Optics are also getting cheaper. I'm running a 100g right now and using that 4x25 breakout to connect to my Ubiquiti Aggregation switch.
@@jeffsponaugle6339 Yeah, I think I’ll probably bite the bullet and plan for 100G throughout the house. I’ve gotten about 20 years out of my Cat5e runs - which were originally intended for fax machines, phones, and printers lol - so probably about time. Need to do some repainting after a lighting overhaul, so it’s an opportunity to punch a few holes in the walls and run OS2 SMF to all the critical areas. Hoping that will future proof me for another 2 decades. Kicking myself for not running conduit back when I built the place, but everything’s clear in hindsight.
I would go to 25gig and 100gig upgrade instead of 10gig - 40gig. Makes more sense to me.
25G/100G is certainly better, but a bit more expensive. 40G is a sweet spot if you are looking for retired enterprise gear and need lots of interconnects. There will be cheaper retired 100G gear coming to the used market in the next few years.
Hell yeah
I've gone from 10gig to 100gig; it is actually not so expensive anymore and fully supported in VMware. The Mikrotik switch was the most expensive part.
Yea, prices are dropping on that, and optics especially are getting much cheaper!
i'm still on 1 gig with plans to upgrade to 10 gig soon lmao
should i just skip 10 gig altogether and move straight to 40 gig?
Depends on your setup and budget. Personally I'd look at 10/25G and eventually 100G, because 40G is going away. But that also means you're gonna find a lot of good deals on 40G stuff.
Exactly - If you are buying new gear you can start with 25G, and that makes easy interconnects to 100G later. However, if you are looking for bang for the buck, the 40G stuff is in a sweet spot. Over time the 100G stuff will drop in price, then 200G/400G will be the place to go!
Watching this thru a 100Mbps D-Link switch :)
not even reaching 500Mbit in my home network, forever slow :(
Still rockin' the Covid-19 hair!
Yea, I should really get a haircut!
This guy got equity in something early and that thing is now very very large. I assume. lol
(84) gigs were said. Giggety!!
Ha!
No NordVPN? No "Please, please subscribe"? No "please leave me a comment"? No bullshit? What a breath of fresh air on youtooob.
Homedatacenter
Meanwhile, racks in datacenters are using 32-port or 48-port switches with 400Gb ports.
Jeff's homelab is so quaint, and so cute, compared to real enterprise hardware.
Upgrade to 400G
Bro is so rich he thinks everyone else is rich too.
🙄I'd love to have even 2.5 gig...
Stop calling that a homelab when you have more power than most small businesses out there...
It is a homelab; it might be on the high end, but it's still a home lab. I have seen even more insane home labs than this. But it's still a home lab.