ULTIMATE 40Gb Homelab Networking - UNRAID NETWORK SETUP

  • Published Nov 11, 2024

Comments • 58

  • @DigitalSpaceport
    @DigitalSpaceport  2 years ago

    Join this channel to get access to perks:
    th-cam.com/channels/iaQzXI5528Il6r2NNkrkJA.htmljoin
    Shop our Store (receive 3% or 5% off unlimited items w/channel membership) shop.digitalspaceport.com/

  • @ewenchan1239
    @ewenchan1239 8 months ago +4

    I run 100 Gbps Infiniband in the basement of my home.
    It was mostly used for HPC applications (CFD/FEA -- that sort of thing). Within those IB/RDMA/MPI aware applications, I was getting somewhere between 80-90 Gbps throughput (depends on the problem and the application, and also the size of the problem that it was solving).
    Using the IB bandwidth benchmarking tool, I can get up to around 96-97 Gbps.
    For me, using said 100 Gbps IB network for storage is really just a fringe benefit as it wasn't deployed with that in mind.
    Having said that, I don't have a pool or an array of NVMe SSDs (in fact, I try to avoid using SSDs because SSDs are the brake pads of the computer world: the faster they are, the more you're going to use them, which just leads to wear-out).
    So instead, I use 36 HDDs, and as a result of that, my throughput is limited anyways.
    But it is nice that I CAN max out at roughly 24 Gbps with eight HDDs, with the nominal average closer to around the 4 Gbps range. My LTO-8 tape drive can really only do about 200 MB/s (~1.6 Gbps sustained), so the fact that I have so much networking bandwidth/headroom is, again, just a fringe benefit of the HPC micro cluster deployment that I had.
    On a $/Gbps basis, 100 Gbps IB is cheaper than even 10 Gbps, even if the NICs, cables, and switches have a higher absolute cost.
    (I run it through a Mellanox MSB7890 externally managed 36-port 100 Gbps IB switch.)
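
The throughput figures in the comment above are easy to sanity-check by converting between MB/s and Gbps. A minimal Python sketch using only the numbers quoted in the comment (the per-drive split of the 24 Gbps figure is just arithmetic, not a measurement):

```python
def mb_s_to_gbps(mb_per_s: float) -> float:
    """Decimal MB/s -> decimal Gbps."""
    return mb_per_s * 8 / 1000

def gbps_to_mb_s(gbps: float) -> float:
    """Decimal Gbps -> decimal MB/s."""
    return gbps * 1000 / 8

# ~24 Gbps aggregate across eight HDDs works out to ~375 MB/s per drive.
aggregate_mb_s = gbps_to_mb_s(24)
print(f"24 Gbps = {aggregate_mb_s:.0f} MB/s total, {aggregate_mb_s / 8:.0f} MB/s per drive")

# The LTO-8 drive's ~200 MB/s sustained is only ~1.6 Gbps of a 100 Gbps link.
print(f"200 MB/s = {mb_s_to_gbps(200):.1f} Gbps")
```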

  • @DeLoreansgarage
    @DeLoreansgarage 1 year ago +2

    I just ran a 10gig Fiber line between my Mikrotik router and Switch and it's awesome!

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +1

      The speed is addicting. Folks tell me 2.5 and I'm like....nahhh go 10!

    • @eazolan
      @eazolan 1 year ago +1

      @@DigitalSpaceport I just set up my first NAS with a 1Gb connection. It's very annoying!

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +1

      @eazolan yeah, a 1Gb link isn't fast enough. I'm not sure why some folks say that's fine; people expect a folder preview to load pretty fast in modern times.

    • @DeLoreansgarage
      @DeLoreansgarage 1 year ago

      @@DigitalSpaceport Not fast enough when I have a 1Gbps fiber connection with multiple people streaming outside of my network and inside at the same time. I am going to be adding a Dell EMC SC420 to my system shortly.

  • @WoTpro
    @WoTpro 2 years ago +7

    Really sweet tutorial, I didn't realize CPU speed had such a big impact on 10/40 Gbps Ethernet. Kinda odd that it doesn't utilize multithreading on the CPU.

    • @DigitalSpaceport
      @DigitalSpaceport  2 years ago +2

      With 40Gb core frequency is critical. 10Gb less so. When we get to the shared storage video I'll go over some bottleneck points again.

    • @CDAWWGG43
      @CDAWWGG43 2 years ago +2

      Something to remember is that these NICs all have ASICs onboard to offload some of the network processing from the CPU to help free up resources.

    • @DigitalSpaceport
      @DigitalSpaceport  2 years ago

      Yeah, I need to dig into tunables, but for ETH traffic it still blows the hell out of the processor. Do you have any specific things I should check?

    • @CDAWWGG43
      @CDAWWGG43 2 years ago +5

      @@DigitalSpaceport Yeah, 40G is no joke. It's for sure worthwhile to run a server CPU and board; gotta love those lanes. In regards to the settings, it's called something a little different between vendors, but what you're looking for is TCP Offload, and then depending on the vendor they have configuration guides for things like RDMA, iSCSI, VXLAN offload, etc. to be offloaded to the super fast silicon on the NIC. I know for sure Chelsio's cards have native support in BSD-based operating systems for a lot of these features. Check the doc from your vendor. As the cards get newer, like the 100G and up ones, the onboard hardware is faster with each generation. Older 40G NICs and switches have a bit more latency and fewer features according to Wendell from L1, and I believe Pat from STH did a comparison of generations of connectivity fairly recently.
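
On the Linux side, the usual way to see which of these offloads a NIC and driver actually expose is ethtool. A minimal sketch wrapped in Python; the interface name eth0 is an assumption (substitute your 40GbE port), ethtool must be installed, and changing settings needs root:

```python
import subprocess

IFACE = "eth0"  # assumption: replace with your 40GbE interface name

# `ethtool -k <iface>` (lowercase k) lists offload features and their on/off state.
features = subprocess.run(
    ["ethtool", "-k", IFACE], capture_output=True, text=True, check=True
).stdout

# Print only the offloads most relevant to high-throughput TCP.
interesting = ("tcp-segmentation-offload", "generic-segmentation-offload",
               "generic-receive-offload", "rx-checksumming", "tx-checksumming")
for line in features.splitlines():
    if line.strip().startswith(interesting):
        print(line.strip())

# Enabling them uses `ethtool -K` (uppercase K), e.g.:
# subprocess.run(["ethtool", "-K", IFACE, "tso", "on", "gso", "on", "gro", "on"], check=True)
```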

    • @LampJustin
      @LampJustin 2 years ago +2

      This is a limitation of iperf3; it is just single-threaded. That's why you can achieve 40 Gbit/s with 2 instances! Increasing the MTU size also helps with CPU cycles on larger transfers.
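
Since a single iperf3 client tops out well before 40Gb on most CPUs, the two-instance workaround described above is the common fix: run two clients against two server ports and sum the results. A minimal sketch, assuming iperf3 is installed on both ends, two servers are already listening (iperf3 -s -p 5201 and -p 5202), and 10.0.0.2 is a placeholder address:

```python
import json
import subprocess

SERVER = "10.0.0.2"        # assumption: iperf3 server address
PORTS = (5201, 5202)       # one port per iperf3 server instance

# Start both clients at the same time; -J gives JSON output we can parse.
procs = [
    subprocess.Popen(
        ["iperf3", "-c", SERVER, "-p", str(port), "-t", "10", "-J"],
        stdout=subprocess.PIPE, text=True,
    )
    for port in PORTS
]

total_gbps = 0.0
for proc in procs:
    out, _ = proc.communicate()
    result = json.loads(out)
    total_gbps += result["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"Aggregate throughput: {total_gbps:.1f} Gbit/s")
```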

  • @electronicparadiseonline2103
    @electronicparadiseonline2103 2 years ago +3

    mmmm.... 40Gb. Spicy!!! Yes please, I'll take some of that!!

  • @elefantopia
    @elefantopia 1 year ago +2

    this is so technical - subscribed

  • @gustavocadena5089
    @gustavocadena5089 2 years ago +2

    My network is junk; I need an L3 router that can handle traffic and VLANs. My plan was to join the 10Gb club, but now after watching this video, I need to step it up to 40Gb =) waiting for the DIY video/buy sheet

    • @gustavocadena5089
      @gustavocadena5089 1 year ago +1

      Any luck on making a DIY 40G network for noobs and a buy sheet? =)

  • @ryanmalone2681
    @ryanmalone2681 5 months ago

    Such a cool video. I adore this shit. I'm gonna build something similar with TrueNAS, gaming PCs local, a remote bare-metal gaming PC, and 3 gaming VMs using a shared Tesla and a dedicated RTX 3060, just for the fun of it and to learn. iPerf is fine for testing, but good luck getting anything remotely close from an Unraid array unless you carefully lay out the data across all the disks and then query them consistently. Unraid is slow, even with NVMe, for numerous reasons.

    • @DigitalSpaceport
      @DigitalSpaceport  4 months ago

      40GbE being this cheap is pretty crazy and fun. If you hit FUSE, yeah, it's going to cap out very early. Direct to mounts can be faster tho.

  • @ConfidentGrips
    @ConfidentGrips 2 years ago +1

    Damn, this is a serious build; you're making even Dave's Garage look slow lol

  • @CDAWWGG43
    @CDAWWGG43 2 years ago +6

    Great video! Love to see more people getting into "Big boy" networking 40G and up! 2 things, have you considered the jump to 100 with Mikrotik's new $999 4 port 100G switch and do these brocades need any special licensing or anything for L2.5/L3 features? Looking at the POE version of what you got. Thanks.

    • @DigitalSpaceport
      @DigitalSpaceport  2 years ago +3

      I do love the power behind that ICX6610 for sure. I have been looking at that Mikrotik 100g powerhouse but I thought it was only 100Gb total switching? mikrotik.com/product/crs504_4xq_in Is this the one you are talking about?
      I need to get PCIe4 NVME storage arrays to really utilize 100Gb and I don't have those servers in the racks.....yet
      For the Brocades, you should check out the STH thread that has massive info on the ICX6610. If you get one that has not been R2'd it may well have all the original license stuff installed. Mine had the PoE, 80 and 160Gb module (which is what they call all that power in the backside) already. I disabled the PoE on mine but I have a 5548p that already is setup for all that.

    • @CDAWWGG43
      @CDAWWGG43 2 years ago +1

      @@DigitalSpaceport Their test data seems way off. I remember seeing it and thought it looked neat for labbers. I wonder if they missed a capital B somewhere. Used 100G iron is starting to come down in price too thankfully.

  • @chiefleaf3132
    @chiefleaf3132 2 years ago

    New camera looking amazing 🤩

  • @derekleclair8787
    @derekleclair8787 1 year ago

    Looking forward to RDMA info with NFS or Samba, if it's now available on Linux vs just Windows.

  • @nsanerydah
    @nsanerydah 1 year ago +2

    Great info and an idea to think about before going 10Gb only. I know that my Synology can only do 10Gb at the moment, but having the ability to upscale as components increase will allow for up to 40Gb in future upgrades. Thanks for the information!!

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +1

      You should check out the Petabyte for content storage follow-up video I have in editing right now. You are likely very good with 10Gbit....but 40Gbit does bring another level of performance if you have all-flash arrays on your tiered storage.

  • @chionyenkwu2253
    @chionyenkwu2253 5 months ago

    Some motherboards (MSI TRX40 Creator, for example) allow you to fix the 2nd PCIe slot at x8 and only drop it down if you explicitly set it to bifurcate....

    • @DigitalSpaceport
      @DigitalSpaceport  4 months ago +1

      Yeah that video was before I discovered the Threadripper life.

  • @simonemastellonephotography
    @simonemastellonephotography 10 months ago

    Nice video. I bought two MCX354A-FCBT cards but my Unraid is not able to discover them in network settings. Can you suggest how to make them work?

    • @DigitalSpaceport
      @DigitalSpaceport  10 months ago +1

      Yeah, you are going to need to install the Mellanox plugin from the app store, and you may need to flash the cards and/or set them to mode 2 (ETH) operation if they are for some reason in IB mode. The plugin gives you the steps to do that.

    • @simonemastellonephotography
      @simonemastellonephotography 10 months ago

      @@DigitalSpaceport Thank you for the answer. I already installed the plugin, but the MCX354-FCBT is not in the list for ETH mode. I flashed the latest update for my card, but I do not understand how to switch it to Ethernet mode.
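
For anyone stuck at the same step, the IB-to-ETH switch on a ConnectX-3 VPI card is normally done with Mellanox's mlxconfig tool (the Unraid plugin automates the same idea). A hedged sketch, assuming the Mellanox Firmware Tools (mst/mlxconfig) are installed and that the device path below matches what `mst status` reports on your system:

```python
import subprocess

DEVICE = "/dev/mst/mt4099_pci_cr0"  # assumption: confirm the real name with `mst status`

subprocess.run(["mst", "start"], check=True)    # load the MST access modules
subprocess.run(["mst", "status"], check=True)   # list detected Mellanox devices

# Show the current configuration, then set both ports to Ethernet (2 = ETH, 1 = IB).
subprocess.run(["mlxconfig", "-d", DEVICE, "query"], check=True)
subprocess.run(
    ["mlxconfig", "-d", DEVICE, "set", "LINK_TYPE_P1=2", "LINK_TYPE_P2=2"],
    input="y\n", text=True, check=True,         # answer the confirmation prompt
)
print("Link type set to ETH on both ports; reboot for the change to take effect.")
```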

  • @billygoatbilbs
    @billygoatbilbs 9 months ago

    @DigitalSpaceport I recently bought an SX6036; could you help me configure the switch for Ethernet license generation? I have been researching the ServeTheHome guide but I am short on time and would like a jump start.

    • @DigitalSpaceport
      @DigitalSpaceport  9 months ago

      Some switch models require the license; others have it already enabled by default. It depends on the manufacturer. If you can't enable ETH mode/VPI then you're going to have to use the ServeTheHome forums thread, and there is also an eBay seller who sells the licenses; I'll leave it to you on the eBay one if you trust that. I've had 3 of these switches now and I think it's the Mellanox-branded ones that need licenses applied. The HP ones seem to just work outta the box.

  • @smalltimer4370
    @smalltimer4370 2 months ago

    Would it be correct to say that the Brocade 6610 can only provide 40GbE to one client when connected to Unraid?

    • @DigitalSpaceport
      @DigitalSpaceport  1 month ago +1

      If your connection is Desktop > 6610 > Unraid host, then yes. If you are looking for more hosts, check out the Mellanox SX6036 or its smaller-port version.

  • @sfsfsdfsdification
    @sfsfsdfsdification 1 year ago

    Great video! Were you able to connect this through the Brocade ICX6610? I'm getting only 10Gb (I have dual cards) and not sure if there is a 40Gb license.

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago

      Check the licenses section in the GUI to see what you have. Go to System > General, and in the displayed screen click Config Module on the left-hand side. If slot 2 reads "ICX6610-QSFP 10-port 160G Module" with status ACTIVE then you have your rear QSFP slots functional. The 2 on the right-hand side are 10Gb capable (1 to 4 breakout) or single 10Gb, and the 2 on the left side are the actual 40Gb ports.
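
The same module check can be done from the switch CLI instead of the GUI; `show module` on FastIron lists each slot and its status, so you are looking for the "ICX6610-QSFP 10-port 160G Module" line. A rough sketch over SSH with paramiko; the host, credentials, and SSH being enabled on the switch are all assumptions:

```python
import time
import paramiko

HOST, USER, PASSWORD = "192.168.1.2", "admin", "password"  # placeholders

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD, look_for_keys=False)

# FastIron switches behave better over an interactive shell than exec_command.
shell = client.invoke_shell()
shell.send(b"skip-page-display\n")   # turn off --More-- paging for this session
shell.send(b"show module\n")
time.sleep(2)                        # crude wait for the output to arrive
print(shell.recv(65535).decode(errors="replace"))

client.close()
```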

  • @madhupjoshua2319
    @madhupjoshua2319 1 year ago

    Can we connect the QSFP orange cable directly between two servers, bypassing the Brocade switch? It should work, I think.

    • @DigitalSpaceport
      @DigitalSpaceport  1 year ago +2

      You can direct connect qsfp and hit 40Gbit speeds for sure
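
The only extra step in a direct host-to-host QSFP link versus going through a switch is that nothing hands out addresses, so each end needs a static IP on the same small subnet. A minimal sketch of one side, assuming Linux with iproute2, an interface named eth2, and a /30 picked just for this link (the other host would get 10.10.10.2/30); run as root:

```python
import subprocess

IFACE = "eth2"              # assumption: the ConnectX/QSFP port on this host
ADDRESS = "10.10.10.1/30"   # other end gets 10.10.10.2/30

subprocess.run(["ip", "link", "set", IFACE, "up"], check=True)
subprocess.run(["ip", "addr", "add", ADDRESS, "dev", IFACE], check=True)
# Optional: jumbo frames help at 40Gb if both ends agree on the MTU.
subprocess.run(["ip", "link", "set", IFACE, "mtu", "9000"], check=True)

# Confirm the link and address came up.
print(subprocess.run(["ip", "-brief", "addr", "show", IFACE],
                     capture_output=True, text=True, check=True).stdout)
```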

    • @madhupjoshua2319
      @madhupjoshua2319 1 year ago

      @@DigitalSpaceport Thanks for reply

    • @Jpondi
      @Jpondi 1 year ago

      @@shephusted2714 thank you

  • @Ochadd
    @Ochadd 2 years ago

    Great info. Thanks

  • @thespencerowen
    @thespencerowen 11 months ago

    So cool.

  • @denichuchii7425
    @denichuchii7425 2 years ago

    great device! good server rack

    • @DigitalSpaceport
      @DigitalSpaceport  2 years ago +1

      ConnectX-3 + ICX6610, solid 1/10/40 combo.

  • @KillaBitz
    @KillaBitz 2 years ago +1

    👍

  • @activate__motivation
    @activate__motivation 1 month ago

    What about Thunderbolt, Unraid to PC... no video about this. People aren't gonna buy that; they will use what the motherboard came with, and the highest is Thunderbolt....

    • @DigitalSpaceport
      @DigitalSpaceport  1 month ago

      This mobo didn't have Thunderbolt, but I don't think it's a neglected topic really with the new Macs, is it? I have read a lot about it recently and it seems rather decently performant.

  • @ryanmalone2681
    @ryanmalone2681 5 months ago

    What’s the point with Unraid? I have 2 servers running 2 x 10GbE each. Even when I was first copying data where all the disks were being used, I saw a max just over 1.5Gbps. Pointless. Unraid is SO SLOW.

    • @DigitalSpaceport
      @DigitalSpaceport  4 months ago

      Unraid hits FUSE by default, so yeah, it's slow unless you write to a mount directly... which defeats the purpose in no small measure. On performance, TrueNAS wins hands down.

    • @ryanmalone2681
      @ryanmalone2681 4 months ago

      @@DigitalSpaceport Skipping FUSE is fine for the cache running containers, but for the sort of files you'd leverage 40G for, I think you'd want it on the array using FUSE.

    • @DigitalSpaceport
      @DigitalSpaceport  4 months ago

      Yeah, my current Unraid is running off a 2.5Gb linked mini NAS... so at least it's not a problem for me? I'm not hitting above 240MB/s ever hehe!
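
The FUSE point in this thread is easy to see for yourself: the same file written through /mnt/user (Unraid's shfs/FUSE layer) versus straight to the pool mount usually shows a large gap. A rough sketch, with the share name, pool name, and file size all assumptions; both paths should land on the same SSD for the comparison to mean anything:

```python
import os
import time

PATHS = ["/mnt/user/test/bench.bin", "/mnt/cache/test/bench.bin"]  # assumed share/pool names
SIZE = 1 * 1024**3            # 1 GiB test file
CHUNK = b"\0" * (4 * 1024**2) # write in 4 MiB chunks

for path in PATHS:
    start = time.monotonic()
    with open(path, "wb") as f:
        written = 0
        while written < SIZE:
            f.write(CHUNK)
            written += len(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits disk
    elapsed = time.monotonic() - start
    print(f"{path}: {SIZE / elapsed / 1e6:.0f} MB/s")
    os.remove(path)
```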
