Using dRAID in ZFS for Faster Rebuilds on Large Arrays

  • Published Nov 27, 2024

Comments • 25

  • @Mikesco3
    @Mikesco3 11 months ago +13

    I really like your deep dives into these topics. You're one of the few YouTubers I've seen that actually knows what is being presented...

    • @dominick253
      @dominick253 10 months ago +2

      Apalrd's Adventures is really knowledgeable as well.

    • @andrewjohnston359
      @andrewjohnston359 8 months ago +1

      @@dominick253 True, and Wendell from Level1Techs

  • @ewenchan1239
    @ewenchan1239 months ago +2

    Great video!
    A few things:
    1) You CAN create multiple dRAID vdevs as part of a larger ZFS pool, just like you would create multiple raidz(#) vdevs to make up a larger ZFS pool.
    There is no requirement that you put all of your drives into a single vdev to make up your ZFS pool.
    I want to be clear and explicit about that because a LOT of the documentation, and even the example that you provided here, might make people who don't know (or haven't tested it out yet) think that they have to put all of their drives into a single vdev.
    2) That being said, if you DO have multiple dRAID vdevs making up your pool (e.g. let's say that, in traditional ZFS parlance, you have three 8-wide raidz2 vdevs (no spares, no special, no SLOG, just 8 drives with 2 drives for redundancy), i.e. raidz2-0, raidz2-1, and raidz2-2), then in dRAID parlance you will end up with draid2:6d(:0s:8c)-0, draid2:6d-1, and draid2:6d-2 (something like that -- I forget the exact syntax for how the name shows up from when I was testing it). When you replace a drive, ONLY the drives IN THAT VDEV will participate in the resilvering.
    So, here is what this means from a practical perspective:
    If you watch Mark Maybee's video about dRAID (th-cam.com/video/jdXOtEF6Fh0/w-d-xo.html), both he and ElectronicsWizardry here talk about how it will use all of the drives in the ZFS pool for the resilvering. This is true IF and ONLY IF you have ONE vdev making up your entire pool.
    If you split your pool so that multiple dRAID vdevs make it up, then ONLY THE DRIVES IN THE AFFECTED VDEV WILL PARTICIPATE IN THE RESILVERING PROCESS.
    So, in the example above where you have three 8-wide draid2 vdevs, only ONE of the 8-wide draid2 vdevs will participate in the resilvering process whenever you go to swap out or replace a drive.
    The other 16 drives in the other draid2 vdevs will NOT participate in the resilvering process.
    I think that this is important information for people to know.
    (I should make a video about this.)
    That way, people's expectations about their resilvering times will be clear (vs. the information that's available on YouTube about dRAID).
    Thanks.
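    A minimal sketch of that three-vdev layout, assuming hypothetical device names (sda through sdy) and the OpenZFS draid<parity>:<data>d:<children>c:<spares>s notation:

      # Three separate 8-wide dRAID vdevs in one pool: 2 parity + 6 data, 8 children, 0 spares each
      zpool create tank \
        draid2:6d:8c:0s /dev/sd{a..h} \
        draid2:6d:8c:0s /dev/sd{i..p} \
        draid2:6d:8c:0s /dev/sd{q..x}

      # Replacing a drive only rebuilds within the vdev that contains it
      # (here draid2:6d:8c:0s-0); the other 16 drives are not involved.
      zpool replace tank sda sdy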

    • @ElectronicsWizardry
      @ElectronicsWizardry  months ago

      I didn't talk about multiple dRAID vdevs as I didn't really have the hardware where it would make sense. I agree with your example of having multiple dRAID vdevs and it only doing rebuild IO on the vdev being rebuilt, but I think dRAID is made for fewer, larger vdevs to take better advantage of its features. I'm guessing there are some situations (maybe a system with many SAS JBODs) where having a few dRAID vdevs makes sense, but for small-ish systems I think sticking with one is best.
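      For contrast, a single wide dRAID vdev with distributed spares (a hypothetical 24-drive layout, not one from the video) would look something like:

        # One 24-child dRAID vdev: 2 parity + 8 data per group, 2 distributed spares
        zpool create tank draid2:8d:24c:2s /dev/sd{a..x}
        # All 24 drives share the sequential rebuild IO, and the spare capacity
        # is spread across every drive instead of sitting idle.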

    • @ewenchan1239
      @ewenchan1239 months ago +1

      @@ElectronicsWizardry
      I mention this because if I were to migrate to dRAID, I wouldn't be able to migrate all 216 TB (raw capacity) over at once.
      I'd have to break up the migration, which, as a consequence of doing it piecemeal, will result in multiple dRAID vdevs.
      I tested how the rebuild would work using files as the vdevs and saw that it will just rebuild *within* its own vdev, and therefore won't use all of the drives in the pool for said rebuild.
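      A rough sketch of that kind of file-backed test (paths and sizes are made up):

        # Sparse files standing in for disks
        mkdir -p /tmp/draidtest
        truncate -s 2G /tmp/draidtest/disk{01..16}.img /tmp/draidtest/spare.img

        # Two 8-wide draid2 vdevs in one throwaway pool
        zpool create testpool \
          draid2:6d:8c:0s /tmp/draidtest/disk{01..08}.img \
          draid2:6d:8c:0s /tmp/draidtest/disk{09..16}.img

        # Replace a "disk" in the first vdev and watch zpool status:
        # only the members of draid2:6d:8c:0s-0 do rebuild IO.
        zpool replace testpool /tmp/draidtest/disk01.img /tmp/draidtest/spare.img
        zpool status -v testpool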

  • @wecharg
    @wecharg 11 months ago +4

    Thanks for taking my request, that was really cool to see! I ended up going with Ceph, but this is interesting and I might use it in the future! -Josef K

  • @makouille495
    @makouille495 11 months ago +3

    How the hell do you manage to make everything so crystal clear for noobs like me haha. As always, quality content and quality explanations! Thanks a lot for sharing your knowledge with us! Keep it up! 👍

  • @TheExard3k
    @TheExard3k 1 year ago +3

    If I had like 24 drives, I'd certainly use dRAID. Sequential resilver....just great, especially with today's drive capacities.

  • @FredFredTheBurger
    @FredFredTheBurger 1 year ago +2

    Fantastic video. I really appreciate the RAIDZ3 9-disk + spare rebuild times, and the mirror rebuild times. Right now I have data striped across mirrors (two mirrors, 8TB disks) that is starting to fill up, and I've been trying to figure out the next progression. Maybe a 15-bay server: 10 bays for a new Z3 + 1 array, which leaves enough space to migrate my current data to the new array.

  • @zyghom
    @zyghom 1 year ago +5

    imagine: I only use mirrors and stripes but I am still watching it ;-)

  • @boneappletee6416
    @boneappletee6416 10 months ago +1

    This was a very interesting video, thank you for the explanation! :)
    Unfortunately I haven't had the chance to really play around with ZFS yet; most of the hardware at work uses hardware RAID controllers. But I'll definitely keep dRAID in mind when looking into ZFS in the future 😊

  • @Linuxcangamenow-w4j
    @Linuxcangamenow-w4j 3 hours ago

    4:38 Did you actually let it finish rebuilding and time it, or just look at the estimated time? Because I've never known ZFS to do an accurate estimation. Also, metadata offloaded to an SSD massively improves RAIDZ rebuild performance.

    • @ElectronicsWizardry
      @ElectronicsWizardry  3 hours ago

      All the times were taken after it finished. I may have shown screenshots of it in progress though.

    • @Linuxcangamenow-w4j
      @Linuxcangamenow-w4j 3 hours ago

      Nice, good job collecting the data :)

  • @awesomearizona-dino
    @awesomearizona-dino 1 year ago +4

    Upside down construction picture?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +4

      I didn't realize the picture looked odd in the video. The part of the picture that is visible in the video is a reflection, and the right-side-up part of the picture is hidden.

  • @Mikesco3
    @Mikesco3 11 months ago +2

    I'm curious if you've looked into ceph

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago +1

      I did a video on a 3-node cluster a bit ago and used Ceph for it. I want to do more Ceph videos in the future when I have hardware to show Ceph and other distributed filesystems in a proper environment.

    • @andrewjohnston359
      @andrewjohnston359 8 months ago +2

      @@ElectronicsWizardry I would love to see that. There are zero videos I can find showing a Proxmox + Ceph cluster that aren't homelabbers in either nested VMs or using very underpowered hardware as a 'proof of concept', and once it's set up the video finishes! I have in the past built a reasonably specced 3-node Proxmox cluster with 10Gb NICs and a mix of SSDs and spinners to run VMs at work. It was really cool, but the VMs' performance was all over the place. A proper benchmark, a deep dive into optimal Ceph settings, and emulating a production environment with a decent handful of VMs running would be amazing to see!

  • @Spoolingturbo6
    @Spoolingturbo6 8 months ago

    @2:15 Can you explain how to set that up, or give a search term to look it up?
    When I installed Proxmox, I split my 256GB NVMe drive up into the following sizes in GB (120/40/40/16/16/1/0.5) (main, cache, unused, metadata, unused, EFI, BIOS).
    I knew about this, but I'm only now at the stage where I need to use metadata and small files.
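    Assuming the timestamp refers to ZFS's special (metadata) allocation class, a useful search term is "ZFS special allocation class vdev". A minimal sketch of setting one up (pool name and device paths are hypothetical):

      # Mirrored special vdev to hold pool metadata on fast storage
      # (mirrored because losing the special vdev loses the pool)
      zpool add tank special mirror /dev/nvme0n1p3 /dev/nvme1n1p3

      # Optionally also steer small records to the special vdev
      zfs set special_small_blocks=32K tank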

  • @marconwps
    @marconwps 2 months ago

    12 HDDs in my pool, I'll try dRAID as soon as I can. TrueNAS support confirmed?

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 months ago

      I'm pretty sure TrueNAS has dRAID support, as I've seen it as an option when making pools. dRAID makes a good amount of sense with 12 drives.

  • @inlandchris1
    @inlandchris1 2 months ago

    Why not use a good-quality RAID card with…16 ports? Use 8 hard drives in a RAID array, with 8 SSDs wrapped around the spinning drives? That solves the latency problem and really speeds things up.

  • @severgun
    @severgun 10 months ago +1

    Why are the data sizes so weird? 7, 5, 9? None of them is divisible by 2.
    Why not 8d20c2s?
    Because of the fixed width, I thought it would be better to comply with the 2^n rule. Or am I missing something?
    How does compression work here?
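    For reference on the notation in that question, an OpenZFS dRAID vdev is specified as draid<parity>:<data>d:<children>c:<spares>s, so the layout asked about (assuming double parity; device names hypothetical) would be written as:

      # 2 parity + 8 data per redundancy group, 20 children, 2 distributed spares
      zpool create tank draid2:8d:20c:2s /dev/sd{a..t}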