Talking About Mellanox 100g

  • Published Dec 24, 2024

Comments • 149

  • @LethalBB
    @LethalBB 3 years ago +45

    ONIE is love, ONIE is life. Commodity switches are the future.

    • @gorgonbert
      @gorgonbert 3 years ago

      Word!

    • @wiziek
      @wiziek 3 years ago

      Not really. They're used mostly by hyperscalers (AWS, Azure, Google Cloud), but there still isn't a replacement for the typical edge/core router (as opposed to layer-3 switches). AT&T is trying that open-computing model with DriveNets, but it isn't really that popular, and it may end up using more power and space than chassis devices.

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 3 years ago

      No

  • @jgfjfgjfhjf
    @jgfjfgjfhjf 3 years ago +147

    People are afraid of 5g, and here you are handling 100g without any protective head gear!

    • @osrr6422
      @osrr6422 3 years ago +12

      But I like the warm buzzy feeling it gives me!

    • @CMDRSweeper
      @CMDRSweeper 3 years ago +41

      Well, 5g is survivable for some time for some people; not great, but not terrible either.
      100G is another matter, though. Once you cross 9g it starts to get rather dicey; it all depends on how long you decide to be subjected to all those g's...
      Just stay away from the bathroom scales while you're subjected to them, because no diet can fix the number you see at that point.

    • @FarrellMcGovern
      @FarrellMcGovern 3 years ago +5

      @@CMDRSweeper LOL! You beat me to it!!! Good one!

    • @cytomatrix
      @cytomatrix 3 years ago +2

      It's only active once injected intravenously.

    • @reed-young
      @reed-young 2 years ago

      Hahaha, "protective head gear"!
      Astronauts don't even pull 100G at liftoff! Wendell ftw!

  • @mianderson86
    @mianderson86 3 years ago +44

    I love how cheap network hardware can get. You can pick up 40Gb Mellanox switches for under $200 and the NICs for under $100. Great fun!

    • @kenzieduckmoo
      @kenzieduckmoo 3 years ago +4

      That would be awesome, and less than the cost of 10 gig hardware

    • @GrandpasPlace
      @GrandpasPlace 3 years ago +3

      I picked up a Supermicro blade system with 4 blades, each with 2x 40G Mellanox boards, for $600, and then added the switch to it. Instant home cloud. Though at 1400W for the power supply it costs quite a bit to keep running. lol

    • @zacharytaylor8523
      @zacharytaylor8523 2 years ago +2

      @@kenzieduckmoo It's kind of an odd situation. Everyone and their dog is looking for a faster-than-gigabit network and is laser-focused on 10G. This has made the price of 10G stuff kinda high, as used cards and equipment are drying up. But since nobody is really looking at 40G and used gear is getting more available, it's almost at the point where 40G is just as cheap or cheaper.
      I almost want to take a stab at the InfiniBand stuff. 16-port 56Gb switches for $100 shipped. I just have no idea how to interface an InfiniBand network with Ethernet without spending stupid amounts on a "gateway".

  • @StupidFoxInSpace
    @StupidFoxInSpace 2 years ago +29

    Heads up: 400G is not 4x 100G, it's 8x 50G. The switch to 400G changed the physical port from QSFP28 (rated for 4x 25G) to QSFP-DD (rated for 8x 50G). Along with this physical change, the signal modulation was also changed from NRZ to PAM4. You can buy a 400G optic that will combine two channels per lane for 4x 100G transport/breakout, but the originating channels are still 8x 50G.

    • @ewenchan1239
      @ewenchan1239 1 year ago

      I thought that 400 Gbps was QSFP56, no?

    • @StupidFoxInSpace
      @StupidFoxInSpace 1 year ago +3

      @@ewenchan1239 QSFP56 is 200G (4x 50G lanes). You can run 400G with QSFP-DD (8x50G), OSFP (8x 50G) or QSFP112 (4x 100G).
      Basically it all boils down to the number of physical lanes available to the hardware, and how fast the electronics can drive each lane.
      FYI these “lanes” are electrical and directly connect to the ASICs. Pluggable optics only convert the electrical signaling into lasers and then back into electrical signaling.
      If you want some fun nighttime reading, pull up the MSA documents for each type; they describe the physical standard of each optic type.
      For 400G: SR8, DR4, FR4, LR4 and ZR4 are all defined!

    • @ewenchan1239
      @ewenchan1239 1 year ago

      @@StupidFoxInSpace
      Ahhh...okay.
      My bad.
      I meant QSFP112. (I hadn't read the Mellanox ConnectX-7 documentation until now.)
      But to Wendell's point, you CAN get 400G using 4x 100G if it is QSFP112, which means that what Wendell said isn't necessarily incorrect, with respect to your comments above:
      "Heads up, 400G is not 4x 100G, its 8x 50G."
      and
      "You can run 400G...QSFP112 (4x 100G)."

    • @StupidFoxInSpace
      @StupidFoxInSpace 1 year ago +1

      @@ewenchan1239 At the time of my original comment, QSFP112 had not been ratified and products using QSFP112 did not exist yet :) Apologies I didn’t clarify that in my response last night!

    • @ewenchan1239
      @ewenchan1239 1 year ago

      @@StupidFoxInSpace
      Gotcha.
      I wasn't sure about the timeline, but it looks like the first public release of QSFP-DD/QSFP-DD800/QSFP112 came out on May 20th, 2021, in Rev. 6.0 of the multi-source agreement, so your comment wasn't too far off from when it was published, and I'd imagine it was being worked on behind the scenes.
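
The form-factor arithmetic in this thread reduces to electrical lanes times per-lane SerDes rate. A small Python sketch of the figures quoted above (simplified for illustration; not a full MSA listing):

```python
# Aggregate port speed = electrical lanes x per-lane SerDes rate (Gbps).
# Figures follow the thread above; simplified, not an exhaustive MSA table.
FORM_FACTORS = {
    "QSFP28":  (4, 25),   # 4 x 25G NRZ   -> 100G
    "QSFP56":  (4, 50),   # 4 x 50G PAM4  -> 200G
    "QSFP-DD": (8, 50),   # 8 x 50G PAM4  -> 400G
    "OSFP":    (8, 50),   # 8 x 50G PAM4  -> 400G
    "QSFP112": (4, 100),  # 4 x 100G PAM4 -> 400G (ratified later)
}

def aggregate_gbps(form_factor: str) -> int:
    lanes, per_lane = FORM_FACTORS[form_factor]
    return lanes * per_lane

for ff in FORM_FACTORS:
    print(f"{ff}: {aggregate_gbps(ff)}G")
```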

  • @johncnorris
    @johncnorris 3 years ago +10

    Did I just watch a kid in a candy store video?

  • @LanceThumping
    @LanceThumping 3 years ago +12

    Good news: it looks like connection speed is slightly beating Moore's Law.
    Based on the speed difference between the two, you'd expect the old modem to have been made in 1964.

    • @sniglom
      @sniglom 3 years ago +4

      Bad news(?): 10Mbit Ethernet was available in the early 80s. The comparison with the modem is unfair.
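
The back-of-the-envelope behind the "1964" figure above: count the doublings between 300 bit/s and 100 Gbit/s, then assume one doubling every two years (a rough Moore's-law cadence; the 2021 baseline is the video's upload era):

```python
import math

modem_bps = 300        # acoustic-coupler modem from the video
nic_bps = 100e9        # 100GbE NIC

doublings = math.log2(nic_bps / modem_bps)  # ~28.3 doublings
years = doublings * 2                       # one doubling per 2 years
print(f"{doublings:.1f} doublings ~= {years:.0f} years")
print(f"implied modem vintage: ~{2021 - round(years)}")  # ~1964
```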

  • @MikaelKKarlsson
    @MikaelKKarlsson 3 years ago +9

    They don't make 'em beep-boop like they used to, though.

  • @KeithCarmichaelInFL
    @KeithCarmichaelInFL 3 years ago +36

    I am starting to understand why I watch all these videos. They are just outside of my capacity to understand, but I just feel smarter listening to Wendell (I really hope I spelled your name correctly) talk about things!

  • @agostinhogoncalves2736
    @agostinhogoncalves2736 3 years ago +5

    From a network engineer's point of view, I would change the 100G port distribution: 2x 100G for the inter-switch link and 1x 100G for storage per switch. Why? MLAG is usually used in this type of deployment, and if that single 100G inter-switch link goes down it will wreak havoc; the best-case scenario is that a whole switch shuts down all of its ports.

  • @txtrader512
    @txtrader512 3 years ago +5

    400G is only available on QSFP-DD ports (which are actually 8x 50G lanes, not 4x 100G). AFAIK, 100Gb single-lane SerDes aren't commonly available yet.

    • @agostinhogoncalves2736
      @agostinhogoncalves2736 3 years ago

      You also have OSFP ports, which have much better cooling and more flexibility. The number of lanes is getting a bit more complicated: as of today we see 50G lanes in the ASIC, as you said, but at the actual physical layer on the optics there's a gearbox that transforms the 8x 50G into 4x 100G, and that's the most common implementation. So, as always, it depends :)

  • @movax20h
    @movax20h 3 years ago +2

    The problem with Mellanox 40 and 100Gbps NICs is actually availability. I got some quotes recently: a two-month waiting list, plus the need to sign a no-resale agreement.

  • @drassx615
    @drassx615 3 years ago +9

    Do I have to pay royalties for reusing the Mt. Stupid quote? That one deserves a T-shirt for sure.

  • @jeremymcguire7069
    @jeremymcguire7069 1 year ago +1

    I'm old enough to remember my dad being elated to get 2400 baud running on his hobby system. Sometimes. It was hard to tell on the amber monitor.

  • @SadatayWadatah
    @SadatayWadatah 3 years ago +20

    Wendell, long-time viewer since the Tek Syndicate days and still a noob here. I'm trying to dabble in making a clustered SAN lab, but my head is spinning trying to understand Mellanox through iSCSI, iSER, RDMA, InfiniBand, iWARP. Would it be too much to create a video explaining the lay of the land?

    • @vylbird8014
      @vylbird8014 3 years ago

      If you're doing a SAN, you're going to be dealing with either fibre channel or iSCSI.

  • @sparkyenergia
    @sparkyenergia 3 years ago +7

    I'm curious whether the 100G/40G cards behave like a single fast link or like a link aggregation of 4x 25G/10G.
    I'm looking for the fastest access from a single desktop to a single server, for single-file copies and access. So it's really 25GbE or 40GbE that are the two specs that are interesting and in budget.

    • @0bsmith0
      @0bsmith0 3 years ago

      That's not how LAG works.

    • @agostinhogoncalves2736
      @agostinhogoncalves2736 3 years ago +2

      No, a 100G link formed by 4x 25G channels doesn't behave like a link aggregation of 4 links of 25G, so in your use case the 40G link will be better. But good luck reaching those speeds on a single TCP session; for large, fast data transfers it's much better to use multiple streams.
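
A common way to check the single-stream vs. multi-stream point above is iperf3; the server name below is a placeholder, and actual throughput depends heavily on tuning:

```shell
# Single TCP stream: often limited by window size and per-core packet processing.
iperf3 -c storage-server        # "storage-server" is a hypothetical host name

# Parallel streams (-P): usually gets much closer to line rate on 40G/100G links.
iperf3 -c storage-server -P 8
```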

  • @Ownermode
    @Ownermode 3 years ago +3

    I love this content, keep it coming!

  • @Tarulia
    @Tarulia 3 years ago +1

    9:00 Link is in fact not in the description ;)

  • @Teledabby
    @Teledabby 3 years ago +1

    So cool... I play around with some 10Gbit X3 cards; it's still not easy to use all the bandwidth on a standard NAS, even with SSDs.

  • @vonkruel
    @vonkruel 3 years ago +2

    I'm looking forward to replacing my 1GbE switch in the next year or so, but for now I'm fine with a single 10Gb link in my home network between my server & desktop. The two machines are only a couple meters apart so I got a pair of X520-DA1 cards from China (knockoffs, not genuine Intel) and connected those with a cheap DAC cable. A proper switch will be a nicer setup, although considering my main storage is spinning rust in raid6, I may not be in a rush to go beyond 10G. The switch I want is something like: 4 x SFP+, 8 x 2.5GbE (4 w/ PoE). I've been leaning toward Mikrotik but I'll consider Mellanox too.

  • @Tofflus
    @Tofflus 3 years ago +16

    I don't even have 10Gb cards to play with, and Wendell has 100Gb cards to play with :O

    • @LanceThumping
      @LanceThumping 3 years ago +2

      You can get 10/40gbe cards for under $30 atm.
      I'd say the bigger cost depends on your setup.
      If you have the devices close enough you can get cheap DAC cables but if you are doing distance you've got to deal with either Fiber or the high price of 10Gbase-T.

    • @SilenceGProd
      @SilenceGProd 3 years ago +1

      @@LanceThumping I think the problem is getting a 10G switch that you can use with your Ethernet-wired house.

    • @SilenceGProd
      @SilenceGProd 3 years ago +2

      @@LanceThumping Can you link a few of those $30 cards? I'm looking to buy some.

    • @wiziek
      @wiziek 3 years ago +1

      @@SilenceGProd What's the problem? Switches are available, price may be problem.

    • @AdrianuX1985
      @AdrianuX1985 3 years ago +2

      @@SilenceGProd
      MikroTik switches with several 10G ports are fairly inexpensive.

  • @Smithdude_
    @Smithdude_ 3 years ago +2

    This channel is so cool.

  • @hacked2123
    @hacked2123 3 years ago +3

    So I have 3 servers, 2 are data storage servers exceeding 1250MB/s read/write, and the last server is for running VMs and pfSense. The VM server has a dual 100GbE NIC, with one connection to each server...any reason a switch would be better?

    • @TV4ELP
      @TV4ELP 3 years ago +2

      With JUST that setup, a switch would give you some configuration options and outside access to the resources as well, but if they only need to communicate with each other, that's fine.
      One storage server could in theory go through the pfSense box to the other storage server. But if you think about scaling it up more and more, you should opt for a switch, since just one PC dying could mean everything behind it is no longer accessible. Ring-bus topologies, for example. Or just having way too much overhead because someone is talking to one server through a third one, etc.

    • @hacked2123
      @hacked2123 3 years ago +1

      @@TV4ELP Makes sense. It's a homelab; it'll never grow more than this. I've moved my entire environment (my gaming/workstation computers) into the VM server, so there are no additional servers I could ever need.

  • @Andy-ee9ft
    @Andy-ee9ft 3 years ago

    Just a small question... @ 6:07 th-cam.com/video/lAk89Id-5RU/w-d-xo.html : wasn't it supposed to be 12x SFP28, which is 12x 25GbE, and it got mixed up?

  • @pcb7377
    @pcb7377 1 year ago

    I understand that at each end of a DAC cable there is an EEPROM that says "who I am". Question: does the NIC port, when reading the EEPROM, compare it to the EEPROM on the other end of the cable?
    I connected 2 DAC cables through a docking board (QSFP28 to QSFP28) and can't get a link between the NIC cards.
    I don't understand what's wrong.
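
To the question above: each port generally reads only the EEPROM in its own cage; there's no comparison with the far end. On Linux, `ethtool -m <iface>` dumps that EEPROM, and byte 0 is the SFF-8024 identifier telling you what the module claims to be. A minimal decoding sketch, using a few common identifier codes (illustrative, not the full table):

```python
# Decode byte 0 (identifier) of a transceiver EEPROM dump, per SFF-8024.
# Only a handful of common codes are listed here.
SFF8024_IDENTIFIERS = {
    0x03: "SFP/SFP+/SFP28",
    0x0D: "QSFP+",
    0x11: "QSFP28",
    0x18: "QSFP-DD",
}

def identify_module(eeprom: bytes) -> str:
    """Return the module type claimed by the EEPROM's identifier byte."""
    return SFF8024_IDENTIFIERS.get(eeprom[0], f"unknown (0x{eeprom[0]:02x})")

# e.g. first byte of a QSFP28 DAC's EEPROM dump:
print(identify_module(bytes([0x11])))  # QSFP28
```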

  • @jonnypeace2810
    @jonnypeace2810 3 years ago +1

    These numbers all sound incredible for a homelabber like me, but I can imagine busy servers needing it. Love the videos :)

    • @mndlessdrwer
      @mndlessdrwer 2 years ago +1

      With ESOS, you could create your own DIY storage array and use it to present block-level storage to your devices to hold things like Steam libraries so you don't need to migrate an internal drive with every PC upgrade, you just change the path in the array config. 40Gb cards are becoming very inexpensive, though you do need to make sure that you get one that isn't Infiniband or is configurable as a NIC or an Infiniband HBA. I saw some for as little as $70 with AoC. Pretty cool how cheap high-speed networking equipment is getting these days. Hopefully 100Gb stuff drops this fast.

    • @avarise5607
      @avarise5607 1 year ago +1

      Servers are one thing, but the true consumers of the newest networking gear are telco companies :P

  • @klam77
    @klam77 1 year ago

    What are those beautiful panel screens in the background, open to Firefox?

  • @R-C.
    @R-C. 1 year ago

    I came across this random piece of hardware while browsing the web and decided to YouTube it, asking myself: who is this relevant to? I am now in a whole new rabbit hole. So cool, even though I don't understand much yet.

  • @amirpourghoureiyan1637
    @amirpourghoureiyan1637 3 years ago +7

    LAN party over acoustic modem anyone?

    • @NOOBNUT08
      @NOOBNUT08 3 years ago

      Modem Wars anyone?

    • @TheExard3k
      @TheExard3k 3 years ago +2

      Back in my LAN party days I had a dedicated server running with two channels of 28.8k glory. But we didn't need to patch games or stream videos over the WAN, so the bandwidth was OK for most things.

  • @-tineidae
    @-tineidae 3 years ago +1

    Looking around, that switch costs around €3000-7000 apiece, depending on whether it's new or refurbished. All these ONIE switches are expensive as hell.

  • @Ziggurat1
    @Ziggurat1 3 years ago

    Hey Wendell, I take offense at that tomato-fruit comment in that image. Tomatoes are totally fruit, and why isn't that useful as a classification? They ripen even after being detached from the plant, you can use them to grow new tomato plants, and it's awesome to learn about biology (you wouldn't mix up the genitalia and leg of a person, so why do it for a plant?)

  • @DangoNetwork
    @DangoNetwork 3 years ago +2

    More TrueNAS SCALE videos please. I am banging my head on a 40Gb ConnectX-3 without RDMA.

  • @codingblues3181
    @codingblues3181 3 years ago +1

    Used these cards for some time; Mellanox cards are mostly good for their custom SoC stack and TCP offload with DPDK. Not sure why you would use them for a run-of-the-mill setup!

  • @mnamnam6061
    @mnamnam6061 3 years ago

    Ahem, about the acoustic coupler: is it possible you were talking about 300 baud, not bit/s?
    Happy New Year.

  • @MiniArts159
    @MiniArts159 2 years ago

    OK, completely unrelated, but that BGM is basically what you'd get if the Mother series had to do "The Lick" in band class.

  • @pieterrossouw8596
    @pieterrossouw8596 3 years ago +1

    Cries in 1Gbps networking with 10Mbps internet (on good days)

  • @wiziek
    @wiziek 3 years ago +5

    100G fiber isn't Ethernet? It is, unless you're talking about Fibre Channel; just like 10G, 40G, or 25G SFP ports, it's just Ethernet.

    • @popcorny007
      @popcorny007 3 years ago +1

      I believe Mellanox ConnectX cards use their own protocol, not Ethernet.
      Or can they switch between protocols? Not sure.

    • @wiziek
      @wiziek 3 years ago +2

      @@popcorny007 What kind of protocol?
      They have ConnectX cards listed as Ethernet; what else would you connect them with? InfiniBand is a separate category, along with Fibre Channel; you don't plug those into the Ethernet switches that were shown here.

    • @bourbonwarrior1618
      @bourbonwarrior1618 3 years ago +2

      Not sure about the newer-gen cards, or if the naming is still the same after the Nvidia buyout, but ConnectX-3 cards could do InfiniBand, Ethernet, or both.
      Base ConnectX-3 cards were InfiniBand only.
      ConnectX-3 EN were Ethernet only.
      ConnectX-3 VPI could do both.
      Not sure what Wendell meant when he said that, or if he just misspoke, because the switch he was talking about is an Ethernet switch.

    • @agostinhogoncalves2736
      @agostinhogoncalves2736 3 years ago +3

      I think he just misspoke. It's typical to call Cat5/6/7 cable with RJ45 connectors "Ethernet cable", so I think what he meant is that it's not RJ45. Ethernet doesn't really care about the actual cable, as it is a Layer 2 protocol.
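
For the VPI cards discussed above, the port protocol is selectable with Mellanox's mlxconfig tool (part of the MFT package). A sketch, assuming an MST device path for a ConnectX-3 (the path below is an example; LINK_TYPE values follow Mellanox's convention of 1 = InfiniBand, 2 = Ethernet, 3 = VPI/auto):

```shell
# Query the current configuration (device path is an example).
mlxconfig -d /dev/mst/mt4099_pci_cr0 query

# Force both ports of a ConnectX-3 VPI card to Ethernet (2 = ETH).
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
# A reboot (or driver reload) is needed for the change to take effect.
```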

  • @Metalcastr
    @Metalcastr 3 years ago +12

    I'm glad you mentioned how slow the cloud can be, comparatively, even if it's on great hardware. Also cloud equipment is shared, with bandwidth split between customers.
    One thing I have not tried is the "dedicated" cloud hardware options, when you pay extra for bare metal equipment. I wonder if you can also get dedicated links between all the hardware, since different machines host CPU, drives, etc. Back when I did cloud things, it just chugged (compared to a high-speed desktop) due to the shared resource model. I remember several blog posts about "cpu steal" where another tenant could steal cpu time and your VM would be slow. There were also several blog posts comparing speed between cloud providers, some with better Iops, etc.
    Maybe this is largely solved now; I've been out of this space for a few years.

    • @FenixJr
      @FenixJr 3 years ago +4

      I think your proposal has been an option for quite a while. If you look at a host like OVH, they have vRack: you rent multiple servers from them, and they can be placed in a virtual private network. I don't know the specifics of the backbones between racks/floors etc. in their datacenters, but they push some heavy data... so I don't think those network connections would ever feel like a bottleneck beyond actual enterprise-grade technology limitations anyway.

    • @blueguitar4419
      @blueguitar4419 1 year ago +1

      Firstly, your host needs to be in a rack where the top-of-rack switch is not oversubscribed. But if you get bare metal on an appropriately provisioned ToR, it seems you would have full-fat access to the network via SFP/QSFP, depending on the host type.

  • @carlostrudo
    @carlostrudo 3 years ago +1

    Linus is drooling right now.

    • @motoryzen
      @motoryzen 3 years ago

      Yeah and harassing the shit out of his product manager to get this in their hands.

  • @mndlessdrwer
    @mndlessdrwer 2 years ago

    Dell was pushing 25Gb minimum connectivity four years ago. Now they're working toward having 100Gb as their new minimum standard. Given how their storage arrays are designed to function, with clusters of storage nodes aggregating high-performance NVMe flash RAID arrays, the array itself should be more than capable of providing multiple 100+Gbps data streams, sustained. I mean, the nodes in the clusters themselves are interconnected using no fewer than a pair of 200Gb infiniband connectors each, if not more. With the introduction of NVMe packetization for network transport protocols, the floodgates have really opened for high-speed networking well in excess of 10Gb.

  • @debugin1227
    @debugin1227 1 year ago

    We called them acoustic couplers

  • @todddembsky8321
    @todddembsky8321 3 years ago +3

    I have no idea what you're saying. I'm a desktop PC builder, and this enterprise kit is so far beyond me. I used to connect to the MF via a 300-baud acoustic modem; those were the days. I would love a series on the components of an enterprise solution, but keep it at a level Krista's dog can understand.
    Cheers

  • @falconeagle3655
    @falconeagle3655 3 years ago +1

    Looking forward to that OpenShift video

  • @Jibs-HappyDesigns-990
    @Jibs-HappyDesigns-990 3 years ago

    Great Scott! I have found my plateau! So, a great world for software processing!

  • @g.4279
    @g.4279 2 years ago

    Can someone please educate me? 25Gb cards are hundreds of dollars online, but eBay has a bunch of 40Gb cards for under $100. I see a bunch of postings for the Mellanox MCX314A-BCCT for cheap. Is there some sort of catch?

    • @nathanwhite704
      @nathanwhite704 2 years ago

      It's a scam; I'd avoid eBay like the plague.

  • @GorditoCrunchTime
    @GorditoCrunchTime 1 year ago

    Did that OpenShift video ever come out? (Mentioned toward the end of the video.)

  • @hblaub
    @hblaub 3 years ago +3

    I thought "100g" was 100 grams, to warn users how heavy the card is ;-)

  • @MacGyver0
    @MacGyver0 2 years ago

    Which server chassis model is shown in this video?

  • @QuantumBraced
    @QuantumBraced 2 years ago

    I think I'm halfway between Mount Stupid and the dip that is after Mount Stupid.

  • @DoozyBytes
    @DoozyBytes 1 year ago

    The ConnectX-4 is obsolete with ESXi 8.

  • @andreas1989
    @andreas1989 3 months ago

    Can somebody write the address of the Mellanox website, please..? 😮

  • @gumanow
    @gumanow 1 year ago

    Heads up: Mellanox RDMA is not standards-compliant. It doesn't work with any other vendor's RoCE. It breaks the key tenets of Ethernet: standards, interoperability, and openness.

  • @jonfe
    @jonfe 1 year ago

    Is there a way to get a good bus connection between 4x NVIDIA 3080 Ti and 8x AMD 6700 XT using this technology, for use in AI training?

  • @DJ-Daz
    @DJ-Daz 3 years ago

    Wasn't the modem rated in CPS and not bits?

  • @minerzcollective6755
    @minerzcollective6755 3 years ago

    I would love to get that Dell switch... but not for $3000 off eBay... oh man, scalpers are even in enterprise stuff...

    • @0bsmith0
      @0bsmith0 3 years ago

      That's bloody cheap.

  • @jrherita
    @jrherita 3 years ago

    Is that a 300 baud modem from the 80s or from the 70s? :)

  • @johncarter2383
    @johncarter2383 11 months ago

    Nvidia bought Mellanox on March 11, 2019.

  • @Valerius7777
    @Valerius7777 3 years ago

    But what about CX-6 cards?

  • @programorprogrammed
    @programorprogrammed 3 years ago +1

    I thought he was talking about some drug, 100g doses

  • @dpscribe
    @dpscribe 3 years ago

    All my Linux and BSD ISOs distributed to my close friends and family quickly in my network of fun?

  • @Somethingaboutthat
    @Somethingaboutthat 2 years ago

    This looks awesome! Anyone got a recommendation for a 24-port 1Gb switch with at least 4x 10Gb ports that won't break the bank?

  • @keyserxx
    @keyserxx 3 years ago

    small g, we don't wanna look like we're showing off

  • @javierthewish
    @javierthewish 3 years ago +1

    I wish I had found this video earlier. I was fighting with my Dell switches to get some of the ports from 25Gbit to 10Gbit, and I couldn't until a friend told me about the 4-port groups...

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 3 years ago

      Nope 👎
      Nobody gives a shit racist. Go away.

    • @javierthewish
      @javierthewish 3 years ago

      @@Stopinvadingmyhardware Can you elaborate on your comment? Are you calling me racist?

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 3 years ago

      @@javierthewish Made sure you were paying attention invaders

    • @javierthewish
      @javierthewish 3 years ago

      ​@@Stopinvadingmyhardware I live in Germany is it getting more invader/racist under your very confuse understanding of reality? Don't worry mister or miss fake account. There is still hope for you, there are professionals of Psychology out there that can be a very good help for your problem. Take care.

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 3 years ago

      @@javierthewish
      That’s why

  • @ThePoot_tf2
    @ThePoot_tf2 2 years ago

    7:26

  • @MazeFrame
    @MazeFrame 3 years ago

    And here I sit. Thought those two fibers dangling under my ceiling running 10Gig were fast...

  • @kungfujesus06
    @kungfujesus06 3 years ago +1

    TIL: 2.5x higher bandwidth is "orders of magnitude". :-p

  • @ewitte12
    @ewitte12 1 year ago

    It blows me away sometimes how persistently work has stuck with 1Gbit for 95% of things.

  • @power-max
    @power-max 3 years ago

    This makes my 5 port 10/100 BaseT unmanaged switch feel inadequate. :(

  • @narobii9815
    @narobii9815 3 years ago +1

    We only need 333,333,333 or so of those phone shoes to be equal. Parallel is the future.

  • @theosky7162
    @theosky7162 1 year ago

    QSFP112, please.

  • @kenzieduckmoo
    @kenzieduckmoo 3 years ago

    I'm so glad Wendell stopped correcting himself: "Mellanox, I mean Nvidia."

  • @pkt1213
    @pkt1213 3 years ago +1

    Thank God I am an idiot and I can just think I am right all the time.
    Pretty sure the DBA I have been working with thinks I am.

  • @NarekAvetisyan
    @NarekAvetisyan 2 years ago

    Wendell, review the NVIDIA ConnectX-7; that's 400Gb/s!!! That should be fun xD

  • @kelownatechkid
    @kelownatechkid 2 years ago

    Lol I can barely make use of 10gig as-is

  • @cliffordcrawford5723
    @cliffordcrawford5723 1 month ago

    ❤❤❤

  • @Kknewkles
    @Kknewkles 3 years ago

    One hundred billion vs 300. Needless to say, these aren't the spartans you're looking for...

  • @Gryfang451
    @Gryfang451 3 years ago

    Who stole my first modem? Mom, hang up the phone, you kicked me off CompuServe... Little did I know then how far we would come. Now I understand why my dad was so amazed by personal computers like the C64; it was miles beyond a typewriter. Of course, very few places have a 100Gb connection to the world, but like everything else, eventually someone will ask why you're still stuck on slow-af 100Gb to your house when it's barely fast enough to surf the metaverse, and certainly not in full five-sense mode. Please bury me before we have metaverse fart jokes that you can smell!

    • @AlpineTheHusky
      @AlpineTheHusky 3 years ago

      5mbit connection via wire and maybe 15mbit with mobile. Hybrid router gud stuff

  • @nagyandras8857
    @nagyandras8857 3 years ago +2

    Dammit, stop calling it LinIx; it's LinUx. There's a God-damned U at the end, not an I.
    Happy New Year btw.

  • @JimParshall
    @JimParshall 2 years ago

    My first modem was a 300-bit-per-second one... I swear I could type faster. Hahaha

  • @Kurukx
    @Kurukx 3 years ago

    Wendell, you're fired!!! Joking

  • @GooogleGoglee
    @GooogleGoglee 3 years ago

    About the chip shortage... 😅😂😂😂😂

  • @tolpacourt
    @tolpacourt 2 years ago +1

    Nobody wants to write an operating system? Then why do RouterOS, HP Comware, etc. exist? Microsoft has an allegedly-FOSS switch OS named SONiC, but MSFT software is always a trap. These Dell switches are cool, but Nvidia now owns Cumulus and will not be supporting any Broadcom-based chipsets, and these Dell switches are Broadcom-chip devices.

  • @mzamroni
    @mzamroni 2 years ago

    It's still almost impossible for a single server to deliver more than 10 Gbps of application-layer throughput.
    Over 10 Gbps is overkill for an application server such as a web server, database, etc.

  • @attilavidacs24
    @attilavidacs24 2 years ago

    In 40 years' time, someone will be making a video holding up a 100Gbit card and laughing at it while they show off their 1-petabit NIC. In 400 years' time there will be no concept of data speeds as we know them; they'll have NICs hardwired into their brains.

  • @wmopp9100
    @wmopp9100 3 years ago

    This video would have saved me a day if I'd had it 5 months ago ;)

  • @0ctatr0n
    @0ctatr0n 3 years ago

    You watch: with all the r/selfhosting going on everywhere, pretty soon you're going to see server racks and blade servers sold to end users for their IoT and microservices running at home. People will start buying up solar and house batteries to power these beasts (and Texans for other reasons), sysadmin skills will become part of the school curriculum, and the humble home desktop PC will disappear as every house becomes its own VPS, perhaps with redundant backup to family in other cities...

    • @wiziek
      @wiziek 3 years ago +2

      Were you drinking something? Home desktop PCs have already disappeared, leaving laptops, tablets, and smartphones; most people won't run additional hardware, except maybe a USB HDD/SSD for extra storage.

    • @kelownatechkid
      @kelownatechkid 2 years ago

      Lol, what are you smoking? Most people barely know how to log in to their ISP modem's admin web GUI.

  • @PupShepardRubberized
    @PupShepardRubberized 3 years ago

    Drools ~~~~~~~~~~~ wags wags wags wags wags wags wags wags wags

  • @konnorj6442
    @konnorj6442 11 months ago

    Heh, I remember before 300 baud... and when 300 came out, it was blistering!