Inside the FASTEST New 800GbE 64-port Switch

  • Published Dec 27, 2024

Comments • 340

  • @mitchmckenna1532
    @mitchmckenna1532 5 months ago +199

    Can't wait to install this in my home for $700 in 2034

    • @headlibrarian1996
      @headlibrarian1996 5 months ago +30

      Nah, we’ll still be stuck at 1 gigabit.

    • @rkan2
      @rkan2 5 months ago +13

      I feel like the switch will probably be around that price point... But the cables won't be much cheaper. 😅

    • @Ciddyism
      @Ciddyism 5 months ago

      @@headlibrarian1996 I guess you are both wrong - unless mitch has a really huge network with devices that permanently need their fast connections to random devices OR librarian refers to the standard hardware on the motherboard (that might be enough for most "casual users" for quite a while).
      But I like the idea of getting a "direct" fast connection to friends.
      (As long as I ignore the permits required, construction/... destroying cables quite often, ... and the possibility that a friend might move - especially if that friend helped with the connection to another friend who is out of range.)
      No, I'm not planning to do that. *minimizes table with distances in between family and friends*

    • @mvadu
      @mvadu 5 months ago +14

      And pay $700 a month to run this 2kW switch? 😢

    • @blacksnow7106
      @blacksnow7106 5 months ago

      @@headlibrarian1996 I don't think we're gonna stay stuck at 1; it's gonna be at least 10 imo, mostly because of RJ45 and Cat6.
      2.5G has already dropped to a manageable price at this point. Anyone looking to wire a new ethernet network should at least take a look at 2.5G prices, or at least run Cat6. That way, when you upgrade your speed, you just need to swap out the switch.

  • @CraftComputing
    @CraftComputing 5 months ago +391

    Suddenly my 100Gb switch feels inadequate.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +67

      Let me know if you ever need some footage

    • @JeckNoTree
      @JeckNoTree 5 months ago +9

      "always has been". (moon meme)

    • @nicholascopsey4807
      @nicholascopsey4807 5 months ago +17

      Hey, if you want to send your old inadequate switch to me, I'll take it, no questions 😂

    • @JeffGeerling
      @JeffGeerling 5 months ago +36

      And here I was thinking 25 Gbps was pretty hot.

    • @oguzhan001
      @oguzhan001 5 months ago +4

      ​@@JeffGeerling I'm happy with 1Gbit.

  • @matthiaslange392
    @matthiaslange392 5 months ago +215

    800Gbit/s is pretty fast. But nothing is faster than Patrick when he speaks about crazy fast technology 😂

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +57

      You can ask my wife, I slow down for videos.

    • @matthiaslange392
      @matthiaslange392 5 months ago +4

      @@ServeTheHomeVideo If your son takes after you, he'll finish growing to 21 in half the time 😁

    • @JeffGeerling
      @JeffGeerling 5 months ago +12

      Listening to STH videos at 1.5x is a great way to sharpen your ears!

    • @galician-tolo8149
      @galician-tolo8149 5 months ago

      Fact: I actually went to check the player's settings, because I thought that at some point I must have inadvertently set the playback speed to 1.25x. I had not...

    • @matthiaslange392
      @matthiaslange392 5 months ago

      @@galician-tolo8149 only Chuck Norris is faster ... perhaps...

  • @james1234168
    @james1234168 5 months ago +127

    This thing is a monster! Being able to break out into 512x 100GbE ports is mind-blowing!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +18

      Yea, or to have 256x 100GbE and 128x 200GbE for uplinks as an example.

    • @retarepo
      @retarepo 5 months ago +2

      @@ServeTheHomeVideo How about making a 51,200-port gigabit switch for your home network or LAN party?

    • @monad_tcp
      @monad_tcp 5 months ago +1

      @@retarepo It's so insanely fast that you could plug 50 GeForce 4090s into it... no, wait, their PCIe can't transfer 1 TB/s, only the memory stack can. You could probably switch hundreds of GPUs.

    • @uiopuiop3472
      @uiopuiop3472 5 months ago

      yes and 8000x 100MbE ports, the best type of port
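
  The breakout arithmetic in this thread checks out. A minimal Python sketch (illustrative only; the mixed split is the one @ServeTheHomeVideo describes above):

      # Breakout math for a 64-port 800GbE switch (51.2 Tbps total).
      TOTAL_PORTS = 64

      def breakout(ports: int, lanes: int, lane_speed_g: int) -> tuple[int, int]:
          """Return (logical ports, total Gbps) for a given breakout mode."""
          return ports * lanes, ports * lanes * lane_speed_g

      print(breakout(TOTAL_PORTS, 8, 100))   # (512, 51200) -> 512x 100GbE
      print(breakout(TOTAL_PORTS, 4, 200))   # (256, 51200) -> 256x 200GbE
      print(breakout(TOTAL_PORTS, 2, 400))   # (128, 51200) -> 128x 400GbE

      # Mixed split from the reply: half the cages as 8x 100G, half as 4x 200G.
      print(32 * 8, 32 * 4)                  # 256x 100GbE + 128x 200GbE, still 51.2T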

  • @ClockDev
    @ClockDev 5 months ago +45

    Looking at the thickness of the board at 09:03, and at the vast number of signals the single chip needs just for the front ports, this has to be one of the most complex PCBs ever made (if not the most). That's an incredible number of high-speed signal traces that need to be finely adjusted, one by one, just to meet timing constraints and get good impedance matching between the IC and the transceivers.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +7

      Yea! It is super cool routing that much high speed signaling

    • @LiLBitsDK
      @LiLBitsDK 5 months ago +1

      @@ServeTheHomeVideo Kinda interested to see the UNDERSIDE of that board. I saw they blocked it off with metal on the backside, so no airflow down there, which is kinda odd since all the optical ports had fins on them. If only the top row gets air, how hot do the other ones get? And what is on the backside? Back with a screwdriver!

    • @N4CR
      @N4CR 4 months ago

      @@LiLBitsDK It's so thick, with so many metal vias, that I bet it heatsinks itself through its connections to cooled devices.
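
  On the impedance-matching point: designers target a characteristic impedance per trace, and the classic IPC-2141 microstrip approximation shows how the stackup geometry sets it. A sketch with hypothetical FR4-like numbers (real high-speed PAM4 boards use low-loss laminates and field solvers, not this formula):

      import math

      def microstrip_z0(er: float, h_mm: float, w_mm: float, t_mm: float) -> float:
          """IPC-2141 surface microstrip approximation (ohms).
          er: dielectric constant, h: dielectric height, w: trace width, t: copper thickness."""
          return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

      # Hypothetical stackup: er 4.3, 0.10 mm prepreg, 0.15 mm trace, 35 um copper.
      print(round(microstrip_z0(4.3, 0.10, 0.15, 0.035), 1))  # ~49 ohms, near a 50-ohm target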

  • @Lustanda
    @Lustanda 5 months ago +127

    9:25
    HOW MANY LAYERS IS THAT PCB!? That's like HALF AN RJ45 port height in PCB depth. I've never seen one that thick before.

    • @thewolf7119
      @thewolf7119 5 months ago +17

      That's what she said...

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +53

      In 2019 we started talking about PCIe Gen4 signaling in servers being a challenge leading to cables instead of PCB risers. We see that more now. PCIe Gen6 and Gen7 will get worse. The switch guys have to deal with crazy signaling much earlier.

    • @movax20h
      @movax20h 5 months ago +24

      My guess is easily 36 layers, but possibly 10 more. Very thick PCB, but also special materials and processes to control impedance and thin trace tolerances, and for sure also things like buried vias and blind vias. Pretty high tech.

    • @edc1569
      @edc1569 5 months ago +10

      @@movax20h Maybe you need the z-height to get a bunch of controlled-impedance traces on top of each other, rather than a crazy number of layers as such; I'm not sure why you'd need that many if all the buses are fanned out intelligently.

    • @Mr.Leeroy
      @Mr.Leeroy 5 months ago +21

      1.8kW of power planes is no joke either

  • @computersales
    @computersales 5 months ago +31

    It's crazy that this 2U switch can replace 16U of 100Gb switches. Hate to imagine what 8 cables coming out of each port looks like though.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +12

      More than that if you want those 16x 32-port switches to be connected at a reasonable speed

    • @jonathanbuzzard1376
      @jonathanbuzzard1376 5 months ago +4

      I would imagine that anyone sane will be using breakout MTP cables into patch panels.
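
  The 2U-replaces-16U claim is straight arithmetic, and the consolidation is bigger once leaf uplinks are counted. A quick sketch (the 3:1 oversubscription ratio is a common design point, not a figure from the video):

      # One 51.2T box vs a stack of 32-port 100G leaf switches.
      big_switch_gbps = 64 * 800                # 51,200 Gbps in 2U
      leaf_gbps = 32 * 100                      # 1U leaf: 32x 100GbE
      print(big_switch_gbps // leaf_gbps)       # 16 leaves -> 16U of rack space

      # At 3:1 oversubscription each leaf burns 8 of its 32 ports on uplinks,
      # leaving 24 for hosts:
      print(16 * 24 * 100)                      # 38,400 Gbps of usable host-facing capacity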

  • @sativagirl1885
    @sativagirl1885 5 months ago +50

    This sounds like a good upgrade for a dialup BBS

    • @johnmijo
      @johnmijo 5 months ago +2

      Still remember my first connection on my 8-bit C64 using an old 300 baud dialup modem. I was able to connect to a color BBS, but it was a painfully slow load :p
      When I trained in the USAF we used the original acoustic modems; I think those were like 50-60 baud?
      You placed the phone's handset onto the couplers and it communicated that way. Still a dialup-type connection, but there you go.

    • @edc1569
      @edc1569 5 months ago +3

      do yourself a favour and wait for the next generation

    • @headlibrarian1996
      @headlibrarian1996 5 months ago

      I thought I was living the life when I got a 9600 baud modem.

    • @AndyGraceMedia
      @AndyGraceMedia 5 months ago

      @@headlibrarian1996 You were - Trellis coding - magic back then.

  • @mr_jarble
    @mr_jarble 5 months ago +16

    I used to brag about my home lab with 100G switching... I am so glad that you are able to bring us amazing content like this; it greatly pleases the hardware nerd in me.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +1

      Our pleasure!

    • @LiLBitsDK
      @LiLBitsDK 5 months ago +1

      next week a 4-port 800Gbit switch from AliExpress :D LOL

  • @mactep1
    @mactep1 5 months ago +33

    9:07 Damn, that is one thicc PCB!

  • @k3rsh0k
    @k3rsh0k 5 months ago +11

    I need this for my homelab. It’s non-negotiable. Instagram won’t work without it.

  • @edc1569
    @edc1569 5 months ago +9

    Perfect timing, been looking for a new switch for the pool house.

  • @marcogenovesi8570
    @marcogenovesi8570 5 months ago +9

    Marvell is coming back with a vengeance with this one. The density in this thing is insane

  • @itsnebulous8507
    @itsnebulous8507 5 months ago +10

    I appreciate your detailed look at high-end networking equipment - it's something we network engineers don't often see. For future content, could we delve into the more technical aspects that are critical for those of us in the industry? Some things I wanted to see mentioned, specifically as a networking professional in the HPC space:
    - What type of ASICs and RPs are in a piece of equipment, along with their L2/L3 forwarding rates for determining oversubscription. PPS capability would be helpful too.
    - You went into radix, which is very helpful for considerations about network topology like fat tree, and I do appreciate that you briefly went over spine/leaf in some diagrams.
    - A bit about the CAM/TCAM would be helpful for understanding RIB and FIB sizing and ACL capability.
    - Covering the different redundancy modes the switches support would be helpful as well. Whether they handle active/active (MLAG) or active/passive (stacking) scenarios would tell us a lot about their control-plane function from a redundancy perspective.
    - In line with that, I'd like to see some exploration of how L2/L3 multipathing is implemented.
    - I'd also like to see a breakdown of what is meant by 'large buffers' and the specific quality-of-service methodologies implemented (inbound shaping, colored policing, DSCP tagging support, WRED or ECN).
    - I'd also like to see some mention of RDMA/RoCE support for HPC-centric switching platforms, because it's very important.
    Maybe too detail-oriented... But network engineers don't typically get to read anything but whitepapers, datasheets, and slide decks, so the format is very captivating.

    • @BigBenAdv
      @BigBenAdv 5 months ago +1

      Yeah.. I was really wondering whether the ASIC is capable of line-rate routing for the underlays when implementing overlay fabrics in more complex setups.
      I'd assume something in this class would do RDMA, MLAG, and VXLAN at a minimum.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +3

      Good feedback. We are still learning and trying to get the right level of information

    • @concinnus
      @concinnus 5 months ago +3

      @@ServeTheHomeVideo Packing all that into the video would probably hurt viewership. I'd put it on your site and mention that in the video.

    • @moritz1075
      @moritz1075 5 months ago

      I do not know about Marvell, but Juniper sometimes releases technical details about their new products to the public. For example, I highly recommend reading the MX304 deep-dive (their newest "small" high-performance router):
      community.juniper.net/blogs/reema-ray/2023/03/28/mx304-deepdive
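
  For one item on that wish list, the line-rate packets-per-second figure falls out of the port speeds directly. A minimal sketch using standard Ethernet overheads (vendors may quote the number differently):

      # Minimum-size frame on the wire: 64 B frame + 8 B preamble + 12 B inter-frame gap.
      wire_bits = (64 + 8 + 12) * 8      # 672 bits per minimum-size frame

      print(51.2e12 / wire_bits / 1e9)   # ~76.2 Gpps across the whole 51.2T ASIC
      print(800e9 / wire_bits / 1e6)     # ~1190 Mpps per 800GbE port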

  • @nadtz
    @nadtz 5 months ago +2

    And here's me trying to find a 25/40Gb switch that I can actually use at home without my apartment sounding like a wind tunnel. Seriously though, it's crazy how quickly we went from 40GbE to 800. Installing a 40GbE switch for the first time feels like a recent memory; videos like this make me realize that was 6-7 years ago.

  • @dbattleaxe
    @dbattleaxe 5 months ago +6

    I work in high performance computing and yeah, high end switches are insane. I don't work with the hardware itself, but I do work on the software side of it and make use of these sorts of speeds in parallel with many terabits of network bisection bandwidth. What's crazy is these networks can even be a big bottleneck when you put in too many GPUs per node.
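
  The per-node arithmetic behind that GPU bottleneck is easy to sketch. All figures below are hypothetical, chosen only to show the shape of the math:

      gpus_per_node = 8
      nic_gbps_per_gpu = 400                       # e.g. one 400G NIC per GPU
      hbm_gbps_per_gpu = 8000                      # ~1 TB/s of local GPU memory bandwidth

      print(gpus_per_node * nic_gbps_per_gpu)      # 3,200 Gbps of injection bandwidth per node
      print(nic_gbps_per_gpu / hbm_gbps_per_gpu)   # 0.05: the fabric sees ~5% of HBM speed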

  • @mx338
    @mx338 5 months ago +9

    The RJ45 cables connected for management look very cute.

  • @littlenewton6
    @littlenewton6 5 months ago +12

    My homelab needs this switch! I need it to watch videos on my Jellyfin server.

    • @scruffles87
      @scruffles87 5 months ago +3

      480i never felt so quick

  • @thetux0815
    @thetux0815 5 months ago +24

    Where's the affiliate link to buy one? 😂

  • @ewenchan1239
    @ewenchan1239 5 months ago +11

    And here I am at home with my piddly 36-port 100 Gbps InfiniBand switch sitting in my basement.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +9

      There is a war brewing between IB and Ethernet! Just be thankful you do not have an OPA switch

    • @ewenchan1239
      @ewenchan1239 5 months ago

      @@ServeTheHomeVideo
      Agreed.
      Oh, I know (about the war between Ethernet and IB).
      My Mellanox ConnectX-4 dual-port cards can do either PHY LINK_TYPE.
      Running my 100 Gbps IB cards as ETH ports, I can use them as a Linux Ethernet bridge in Proxmox, but VM-to-VM transfers between nodes max out at around 23 Gbps, which is SIGNIFICANTLY lower than what the port can do (about 96.9 Gbps out of a possible 100 Gbps).
      On the IB side, my applications have topped out around 12 GB/s (96 Gbps), and ib_send_bw tests show a possible 97-ish Gbps out of a possible 100 Gbps.
      The IB side doesn't support bridged networking like the ETH side, so I tried to enable SR-IOV on the NICs/ports, but the OpenSM subnet manager for IB from the Debian package doesn't support virtual functions/virtualisation at the SM level (requires RHEL and/or its derivatives for that), which then causes configuration issues for my Proxmox server, because I'd have to do this very funky setup where I pass one of the ports to a CentOS VM and start the RHEL opensm, only for it to then try and enable virtualisation on my switch so that the Proxmox host's VFs *MIGHT* work.
      It gets REALLY complicated, REALLY fast.
      There are a whole host of issues that arise with a config like that, so I stopped trying/testing it for now, and am keeping it as just single point-to-point connections via the switch (for now).
      (I don't really *need* the VFs to work because I'm using virtiofs for VM-host communications, but it might be useful if I want to spin up a cluster later, so that I could start the VM/container with the HPC app I want to run rather than physically swapping out hard drives or multi-booting via the node's BIOS.)

  • @traceybernath305
    @traceybernath305 19 days ago +1

    Patrick, I am curious if the switch handles mirror/sniff ports at full 800Gbps, or faster (multiple ports, LACP, etc)? We have been working on 100/400/800 capture to disk solutions, and would be curious if this switch would be able to provide 800+ Gbps of traffic to feed our server. Thanks!

  • @yodal_
    @yodal_ 5 months ago +8

    OSFP is still kicking? I thought it got killed by QSFP-DD. It is definitely the better-designed standard, especially for module power delivery and cooling, but the backwards compatibility seemed to draw too many to QSFP-DD.

    • @myname7021
      @myname7021 4 months ago

      I agree, though choosing OSFP as an acronym was not the greatest decision...

  • @hquest
    @hquest 5 months ago +4

    I mean, under 3000W of power consumption is not absurd. All the Cisco 6500-E chassis we ever purchased came with dual 4500W PSUs at minimum. But bottom line, it's great to see competition against Broadcom and their near-monopoly in this segment.

  • @FlyingCIRCU175
    @FlyingCIRCU175 5 months ago +18

    Linus and Jake: "Upgrade time!"

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +17

      Not kidding, they get switches from me. I sent a big one to Jake last year.

  • @AWildLeon
    @AWildLeon 5 months ago +10

    My 10Gbit network seems like peanuts against this 😂

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +5

      But your 10G network is training for working on this. 2025 will be the big 800GbE year and then 2027 will be the 1.6T generation

  • @marsrocket
    @marsrocket 5 months ago +3

    Thinking of getting one of these for my pi cluster.

  • @jhonbus
    @jhonbus 4 months ago +1

    13:08 Nostalgia! That's what my bedroom sounded like back when I was overclocking in the AthlonXP days!

  • @pablopoo
    @pablopoo 5 months ago +4

    Insane switch. Time for a ServeTheEnterprise channel?

  • @jamess1787
    @jamess1787 5 months ago +5

    The non-obvious elephant in the room.
    The CFO: "What if this switch has some crazy CPU bug like we saw with M silicon or Meltdown?"
    Me: "Hold my beer."

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +2

      Check our coverage of the AVR54 bug on the main site

  • @JasonsLabVideos
    @JasonsLabVideos 5 months ago +1

    DAMN!! That's a beast. Patrick, you are rocking this stuff!!

  • @DrivingWithJake
    @DrivingWithJake 5 months ago +1

    Pretty nice, not going to lie. Two of our cores right now only give us 288x 100G ports. Waiting on upgrading some of our routers to 800G shortly.

  • @dracoix
    @dracoix 5 months ago +6

    I reported the bots, holy hell. Love the uber SFP monster of a switch.

  • @movax20h
    @movax20h 5 months ago +2

    Nice. I've seen their products and they don't get enough exposure compared to Broadcom. I have a small 100Gb switch using a Marvell chip and it works wonderfully: stable, low power, really nice.
    In fact I am surprised the ASIC die is so small; I was expecting a way bigger die.

  • @mx338
    @mx338 5 months ago +9

    I imagine one of those OSFP modules already costs 10K.

    • @BigBenAdv
      @BigBenAdv 5 months ago +3

      Probably more than that for a brand name manufacturer. Back in 2017, one of my customers building a high-speed campus network (100GbE switching & routing) was already looking at an easy US$13k per 100GbE transceiver. For a project at US$2 mil total, the transceivers alone came in at around US$1.1 mil.

    • @rkan2
      @rkan2 5 months ago

      Ye... You'll probably be able to get the switch for less than a thousand bucks in 2034, but the cables and transceivers though... 😅

    • @dota2tournamentss
      @dota2tournamentss 5 months ago +1

      @@BigBenAdv And why would you need 1000km OSFPs for a campus? lol. If you are talking about normal 800G OSFPs with like a 500m-10km range, the cheapest is around $1k and the more expensive ones $3k on the FS website.

    • @moritz1075
      @moritz1075 5 months ago

      Brand optics are more expensive, like by a factor of 10. But you can already get third-party modules between 2000 and 4000 euros (e.g. Flexoptix), depending on SM/MM and power budget.

  • @bwest6275
    @bwest6275 4 months ago +2

    Marvell ain’t messing around 😳

  • @Awesomes007
    @Awesomes007 4 months ago +2

    This is astounding.

  • @pcfan1986
    @pcfan1986 5 months ago +2

    I am still on 1 Gbit ;)
    Maybe I should consider upgrading to 2.5?

  • @kaya.emilia
    @kaya.emilia 5 months ago +2

    Would love to know how much one would cost.

  • @TeflonBilly426
    @TeflonBilly426 5 months ago +2

    With the advances in manufacturing process nodes, power efficiency, and the increase in switch chip IPC that we're seeing at this extreme top end, do you see anything coming down the pike with regards to a 1U *fanless* all-SFP+ or all-10Gb-RJ45 switch (24-52 ports), or are the heat dissipation requirements still too high for that switch segment?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +3

      Here is a different way to think of it. If you use a newer and more expensive process node then costs go up. That is tough in a market 1/80th the speed of the current top-end where people are paying for that kind of advancement. A big challenge today is that 10GbE is so slow that there are not the investment justifications behind spinning those lower end chips. That can seem strange but it is not just about what should be possible, but also what people are willing to spend R&D on. 25GbE will be the next challenge. With PCIe Gen6 x1 you get 50Gbps so we will see investments shift even in 25GbE

  • @MateuszO.-qe5ii
    @MateuszO.-qe5ii 5 months ago +2

    Yeah, and in some companies people must fight with accounting to be able to buy a 10Gb NIC.
    Nice video.
    Nice video.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago

      I hope this helps. 10GbE in an era of 800GbE is getting close to 100M in the era of 10GbE.

  • @alphafort
    @alphafort 5 months ago +2

    You're looking a little fitter than usual, bro. Good on you!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +2

      Not really. Have to get back to pre-COVID shape. Hoping the arm heals and the baby sleeps.

  • @BloodyIron
    @BloodyIron 5 months ago +1

    How are you supposed to get those internal ports cabled and reachable externally, exactly???

  • @WoobsBallJesse
    @WoobsBallJesse 5 months ago +1

    This is exactly what my apartment homelab needs

  • @LerrySanders
    @LerrySanders 5 months ago +1

    But can I link-aggregate that into one connection?

    • @LiLBitsDK
      @LiLBitsDK 5 months ago +1

      Would be smarter to aggregate only HALF the ports into 1 connection :D else it is just a dead end

  • @RonLaws
    @RonLaws 5 months ago +2

    You mention AI a lot, but IMO the best use case for these is storage and compute backbones in a hypervisor cluster. Low-latency disk access to a large SSD JBOD or ZFS pool rack 🤔 yes plz.

  • @mofoq
    @mofoq 5 months ago +2

    1MW saved is more than a megawatt, considering you also stop paying to remove that much heat 🤔
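
  That cooling multiplier is usually expressed as PUE (power usage effectiveness: total facility power divided by IT power). A one-liner with a hypothetical facility figure:

      pue = 1.4                 # hypothetical facility PUE
      it_saved_mw = 1.0
      print(it_saved_mw * pue)  # 1.4 MW saved at the utility meter per 1 MW of IT savings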

  • @amessman
    @amessman 5 months ago +1

    Those fans, that heatsink! Just for the one chip!
    What certifications/degrees do these manufacturers/designers look for?

  • @RobinCernyMitSuffix
    @RobinCernyMitSuffix 5 months ago +2

    Ah, yes. Just perfect for home -defense- labbing.

  • @MarkParkTech
    @MarkParkTech 5 months ago +2

    How many first borns and right arms does this thing cost?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago

      Feeding my first born right now. I would not trade him for a switch

  • @NickDoddTV
    @NickDoddTV 5 months ago

    Audio is all over the place in this video 🤯

  • @Galileocrafter
    @Galileocrafter 5 months ago +2

    Holy sh*t how thicc is that PCB?? This looks like a cutting board 🔥.

  • @monad_tcp
    @monad_tcp 5 months ago +1

    5:31 This thing is amazing; it's like 50 GeForce 4090s transferring things to memory.

  • @mx338
    @mx338 5 months ago +2

    So there's just no way to directly connect a server to this running 800GbE?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +5

      In a few months, when we get PCIe Gen6 CPUs, you can do an 800GbE port off of a Gen6 x16 slot. These 51.2T switches are really going to get deployed en masse in 2025 as well. Just showing this off before it becomes more common

    • @movax20h
      @movax20h 5 months ago

      Maybe next year. But for now people will be using these with 128x 400G and 256x 200G connections to servers, or directly to accelerators (bypassing CPU, memory and PCIe), mostly AI stuff, and maybe some FPGAs sometimes. There is no info on any product or system supporting PCIe 6, but I expect Nvidia might be the first to deliver something in their systems capable of 800GbE. After that maybe IBM, and then Intel and AMD. As far as we are aware, AMD Zen 5 on desktop and server will still be PCIe 5. But IBM Power 11 is most likely to support PCIe 6.
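
  The PCIe slot math explains the Gen6 dependency. A rough cut ignoring encoding and protocol overhead (not datasheet numbers):

      # Raw x16 slot bandwidth by PCIe generation (GT/s per lane x 16 lanes).
      lane_gts = {3: 8, 4: 16, 5: 32, 6: 64}
      for gen, gts in lane_gts.items():
          x16_gbps = gts * 16
          print(f"PCIe Gen{gen} x16: ~{x16_gbps} Gbps", "fits 800GbE" if x16_gbps >= 800 else "too slow")
      # Gen5 x16 tops out near 512 Gbps (hence today's 2x 400G NICs);
      # Gen6 x16 (~1024 Gbps) is the first slot with room for a single 800GbE port.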

  • @ivanmaglica264
    @ivanmaglica264 5 months ago

    Hey Patrick, is this switch able to log packet metadata if it were deployed in an ISP scenario where it's required to log metadata for a year?

  • @cdspiele
    @cdspiele 5 months ago +2

    came here for 800GbE, stayed for a 2U switch.

  • @kristopherleslie8343
    @kristopherleslie8343 5 months ago

    The cable management will be nuts!

  • @Ardren
    @Ardren 5 months ago +1

    9:12 How thick is that PCB!?

  • @roysigurdkarlsbakk3842
    @roysigurdkarlsbakk3842 5 months ago +1

    Thanks for this! Just a couple of questions:
    - What are those PSUs rated at?
    - That central processor, is it an FPGA or just an ASIC? I mean, you'll have to produce quite a few of these to make up the cost of an ASIC, and if you need an upgrade, it'll be a PITA

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +2

      Well over 2kW. You can see ratings and voltages on the main site where there is a label pic. It is a 5nm ASIC. Projections are for tens of millions of 800G ports installed by 2028

  • @gwnlars_8306
    @gwnlars_8306 5 months ago +5

    But can it run Crysis?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +3

      Sure. COM Express modules and PCIe

    • @capability-snob
      @capability-snob 5 months ago

      This is a switch. You might have more luck with TOTK.

  • @david300m
    @david300m 5 months ago +1

    Great info. Thanks for sharing.

  • @SB-qm5wg
    @SB-qm5wg 5 months ago +1

    That switch uses more power than my whole house lol

  • @Meiestrix
    @Meiestrix 5 months ago

    Finally a new switch to replace my Netgear GS305.

  • @gabest4
    @gabest4 5 months ago +2

    On the next LTT: "Installing 800Gb between my room and the shed"

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +2

      I had 64 ports of 400GbE in my Austin house :-)

  • @KendallSeabury
    @KendallSeabury 5 months ago +4

    Would this be good for home use with gigabit fiber?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +3

      Not kidding, I had a 64-port 400GbE switch running in my 1596 fiber home in Texas.

  • @NexusTrimean
    @NexusTrimean 5 months ago +1

    Heh, I got to see Cisco's plans for 1600GbE a few months back. So crazy.

  • @APie357
    @APie357 5 months ago +1

    this is servethehome, this is a channel meant for homes
    this is a *little* overkill i think

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +1

      Not sure where you got that impression. We have been doing big switches for many years and 8 GPU servers since 2016-2017

    • @LiLBitsDK
      @LiLBitsDK 5 months ago

      Nothing wrong with looking at technology... if you haven't noticed, such stuff comes in at the big corps first and then finds its way down to us "simpletons" and homelabbers etc.

  • @YonatanAvhar
    @YonatanAvhar 5 months ago

    So... where do I get one for my homelab?

  • @NicholasJNZL
    @NicholasJNZL 5 months ago +1

    This opens up so many possibilities. Should be fast enough to hash model loads…… unlimited vram?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +3

      That is a big reason the industry is moving to 1.6TbE in 2027

    • @NicholasJNZL
      @NicholasJNZL 5 months ago

      @@ServeTheHomeVideo yeah. I can’t wait. I guess then that frees up the controls in place with vram limits / pci lanes

  • @CoreyPL
    @CoreyPL 5 months ago +1

    Imagine dropping a screwdriver inside while disassembling this switch 😂
    And look at that vapor chamber on the heatsink - it's a massive one; even an RTX 4090 would be proud.

  • @whyomgwhywtf
    @whyomgwhywtf 4 months ago

    Sooo, are these TAA compliant? Because I know a few places that may be in the market for something like this...

  • @bacphan7582
    @bacphan7582 5 months ago

    This switch is perfect for my 1gbps home network

  • @axlrose357
    @axlrose357 5 months ago +1

    Do you have a chance to compare it to other 800G network switches, like the Mellanox/NVIDIA Quantum X800 InfiniBand or Spectrum-4 Ethernet, or Broadcom Tomahawk 5 Ethernet-based chipsets?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago

      Hopefully we will get to do the Broadcom and NVIDIA ones too

  • @shutenchan
    @shutenchan 5 months ago +1

    Cooling this would be a challenge

  • @rafalkaluzinski9369
    @rafalkaluzinski9369 21 days ago

    Hi, can you tell me if the switch can do multicast, and if so, is multicast latency the same as unicast latency? Also, when you send a multicast flow into one port and replicate it to 60 ports, is the latency the same for each egress port? And what are the latencies between different speeds, like 800G to 100G or 800G to 400G, in both directions?
    Do they ship a QSFP version? Can FEC be disabled?

  • @SCSITerminator
    @SCSITerminator 4 months ago +1

    Used to work at the factory that built the Cisco Catalyst 6500 and CRS-1 back in the day. You can tell this is a testing/qual/development build since it has a lot of 10-pin headers and clock signaling test ports on it that would likely get depopulated for the final release.
    Everyone is talking about how thick that PCB is, but honestly, my question is how many balls does that ASIC have? I'm guessing at least 5K, and at least half of them are just for power. 🤣
    While I'm sure a lot of that PCB thickness is for signals, another reason is that with that much power and heat on that ASIC, a thin PCB can warp under the strain and cause one of the solder balls to pop open. (It's what was causing the red ring of death on the Xbox 360; I know first-hand because my company built those too, and I spoke directly with the engineer who was working on the problem.) Anyway, sometimes you make PCBs extra thick just to make them stiff and more heat tolerant, with large power and ground planes that not only provide shielding for the signal traces but also help spread that heat through the board.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 months ago

      My first “corporate” job was at the old Stratacom Cisco factory off of Blossom Hill in San Jose

  • @Brians256
    @Brians256 5 months ago

    This is how you disaggregate memory and aggregate compute clusters. Wow.

  • @KareemAbawi
    @KareemAbawi 4 months ago

    I want a 48-port 10G switch; any recommendations from Cisco or Extreme Networks?

  • @shephusted2714
    @shephusted2714 5 months ago +1

    They should sleeve and structure the fan wires for a neater, more manageable switch - it is like staging a house - first impressions count #spaghetti

  • @JohnScherer
    @JohnScherer 5 months ago

    Awesome video!

  • @BillLambert
    @BillLambert 3 months ago +1

    I thought I was pretty good at grasping scale, but my brain is PSODing at this one.

  • @thatLion01
    @thatLion01 5 months ago +1

    I couldn't even afford to keep this thing powered on

  • @Ciddyism
    @Ciddyism 5 months ago +1

    Last week: "Yeah, 2.5 GBit/s for all smaller devices [those that can't be upgraded easily] and 10 GBit/s for connections to the big servers, our PCs and other switches is enough."
    Yesterday: "This will be such a big upgrade. Let's put in the fibre and replace the switches."
    Today: "OK, let's watch a video while installing the new cards into the systems so I can finally test the fastest connections. This will be so cool..."
    Big mistake. :D

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago

      These are not really edge switches :) Still fun to see how things are changing at the high-end

    • @Ciddyism
      @Ciddyism 5 months ago

      @@ServeTheHomeVideo I know...
      - my hardware maxes out at ~40 GBit/s (best value of a single device)
      - the network would max out at ~100 GBit/s (in between switches in case I use multiple switches based on the old connections, ...)
      - many smaller connections won't be that fast (only 2.5 Gbit/s - upgrade impossible or could result in more upgrades)
      - daily use per user often is in the range below 1 GBit/s (or even less thanks to SSH, ...)
      - ...
      I didn't randomly pick the parts for the upgrade... :)
      But it feels slow* anyway... so thank you for that... ;)
      (* no, it's actually quite nice with 2.5 - 10x the speed I could get before [per device] while the connections don't slow down each other as much as before)
      ---
      And I'll ask if we can upgrade the cluster at work... I guess there are ~200 nodes (if not upgraded already) on their own network that uses quite a few switches.
      So thanks for the information! (Really useful!!)

  • @DCTekkie
    @DCTekkie 5 months ago +1

    I'm surprised that with this power draw in a 2U form factor it's not water-cooled in any way

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago

      They can be, but most of the power is being used by the up to 28W OSFP modules. The 500W TDP switch chip is not too bad to cool. We will see 500W server CPUs air cooled this year

    • @DCTekkie
      @DCTekkie 5 months ago

      @@ServeTheHomeVideo It's just that the two aspects I've come across are:
      1. Spinning fans can take up to 30% of power usage
      2. It's easier to collect heat with a water block from high-power chipsets
      But then again, I can imagine airflow is more important for the QSFPs in this case
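
  Putting the numbers from this thread together gives a rough chassis power budget (the 30% fan overhead is the commenter's upper bound, not a measured figure):

      optics_w = 64 * 28      # 64 OSFP cages at up to 28 W each = 1792 W
      asic_w = 500            # switch ASIC TDP mentioned above

      print(optics_w + asic_w)                  # 2292 W before fans
      print(round((optics_w + asic_w) * 1.30))  # ~2980 W with fans: why the PSUs are "well over 2kW"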

  • @SmalltimR
    @SmalltimR 5 months ago

    This is the switch I need for my home :p

  • @awarepenguin3376
    @awarepenguin3376 5 months ago +5

    how much is this? $20,000 ?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +8

      Switch pricing is very strange. For example, when we saw early 32-port 100GbE switches, Arista might sell one for $20K while Facebook was buying Wedge100s for like $2500 each. Do not forget the optics though. 800G optics are not cheap.

  • @cameramaker
    @cameramaker 2 months ago

    If there is 500W needed for 512x 100G ports... that's insane efficiency. Please integrate the management CPU and make some passive home switches based on this technology - 16x 100G, splittable to 64x 25G, in the 16W range! On another note... nobody mentioned what such a TL10 switch costs? :D
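
  That efficiency claim checks out if you divide the ASIC TDP across its capacity (per-port optics excluded):

      asic_w = 500
      capacity_gbps = 64 * 800                  # 51,200 Gbps
      print(asic_w / capacity_gbps)             # ~0.0098 W per Gbps of switching
      print(asic_w / capacity_gbps * 16 * 100)  # ~15.6 W for a 16x 100G slice, near the hoped-for 16W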

  • @kristopherleslie8343
    @kristopherleslie8343 5 months ago

    Why is it so difficult to make faster switches? Is it more silicon bound?

  • @fpskywalker
    @fpskywalker 5 months ago

    Interesting that you have to deal with wires when swapping fans.

  • @licson0729
    @licson0729 4 months ago +1

    Marvell chips aren't known for their features, but if it's marketed towards AI then they will probably trade features for raw speed.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 months ago

      Just to be clear, there is the lower-end line with chips like the Prestera, and then the Teralynx line from the Innovium acquisition that we are showing here.

  • @meco
    @meco 5 months ago +1

    Whenever I see switches like this, I do not understand how some ISPs cheap out on peering when solutions like this exist.

  • @wololo10
    @wololo10 4 months ago +1

    Why aren't our desktop CPUs packaged like that? Direct die, with a metal frame around it to support the cooler

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 months ago

      Because it greatly tightens the torque tolerances. In the V100 era, HPE cracked many GPUs because their pre-screened thermal paste was too thick. On consumer devices, you need to account for people installing their first CPU without following a guide, or else it will cause problems

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 5 months ago

    Next step: Just use PCIe to connect systems, as it was initially intended to.

  • @ws_stelzi79
    @ws_stelzi79 5 months ago +1

    Teardown of a 402 Tbps switch when?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago +1

      Not sure. Hoping a 102.4T in two years

  • @sadnesskant7604
    @sadnesskant7604 5 months ago

    I'm puzzled why you would need a 10GBase-T port inside a 64-port 800GbE switch

  • @poppipo1222
    @poppipo1222 5 months ago

    Finally something for my 1 user (me) jellyfin server!

  • @TAPS147
    @TAPS147 5 months ago

    How do we start a petition for Nvidia to let Patrick have a look at what they’re working on??

  • @olhoTron
    @olhoTron 5 months ago +1

    Just one of those OSFP module uses more power than my mini PC...

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 5 months ago

      Imagine if your mini PC had two PCIe Gen5 x16 slots of I/O and could pump them both hundreds of kilometers.

  • @QuantumBraced
    @QuantumBraced several months ago

    1000km distance without amplification?? For real? This blows my mind more than the 800G. I thought there was some physical law that limited it to about 100km, apparently not.

  • @userbosco
    @userbosco 5 months ago

    Quite the beastie!! No mention of cost?

    • @kingneutron1
      @kingneutron1 5 months ago

      " If you have to ask, you can't afford it "

  • @itsnebulous8507
    @itsnebulous8507 5 months ago

    OSFP is going to go the way of XFP. QSFP-DD is the future.