Using dRAID in ZFS for Faster Rebuilds on Large Arrays

  • Premiered 5 Aug 2024
  • In this video I take a look at dRAID in ZFS. dRAID is a variant of RAIDZ that allows for much faster rebuilds and better use of a hot spare drive. I compare rebuild times against a RAIDZ array and look at performance differences. I also cover how to create a dRAID array in ZFS and the different parameters that need to be set.
    00:00 Intro
    00:50 How dRAID is different from RAIDZ
    02:33 Pros and Cons of dRAID
    04:11 Rebuild time comparison
    06:29 Performance comparison
    07:59 How to create a dRAID Zpool
    10:52 Calculating usable space using dRAID
    12:39 When dRAID makes sense
    13:54 Conclusion
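As a rough sketch of what the 07:59 chapter covers, creating a dRAID pool looks something like the following. The pool name, device names, and the draid2:8d:24c:2s layout are assumed examples, not the exact commands from the video.

```shell
# Assumed example layout, not the video's exact command.
# draid2 : double parity per redundancy group
# 8d     : 8 data disks per redundancy group
# 24c    : 24 children (total drives in the vdev)
# 2s     : 2 distributed spares
zpool create tank draid2:8d:24c:2s /dev/sd[a-x]

# Inspect the resulting layout (and later, rebuild progress):
zpool status tank
```

Unlike a traditional hot spare sitting idle, the 2s spare capacity is distributed across all 24 drives, which is what enables the sequential rebuild speedup discussed in the video.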
  • วิทยาศาสตร์และเทคโนโลยี

Comments • 16

  • @Mikesco3
    @Mikesco3 8 months ago +12

    I really like your deep dives into these topics. You're one of the few YouTubers I've seen that actually knows what is being presented...

    • @dominick253
      @dominick253 7 months ago +2

      Apalards adventures is really knowledgeable as well.

    • @andrewjohnston359
      @andrewjohnston359 4 months ago +1

      @@dominick253 true, and Wendell from level one techs

  • @zyghom
    @zyghom 8 months ago +4

    imagine: I only use mirrors and stripes but I am still watching it ;-)

  • @wecharg
    @wecharg 7 months ago +3

    Thanks for taking my request, that was really cool to see! I ended up going with CEPH but this is interesting and might use it in the future! -Josef K

  • @makouille495
    @makouille495 8 months ago +3

    how the hell do you manage to make everything so crystal clear for noobs like me haha. As always, quality content and quality explanations! Thanks a lot for sharing your knowledge with us! Keep it up! 👍

  • @TheExard3k
    @TheExard3k 8 months ago +3

    If I had like 24 drives, I'd certainly use dRAID. Sequential resilver....just great, especially with today's drive capacities.

  • @FredFredTheBurger
    @FredFredTheBurger 8 months ago +2

    Fantastic video. I really appreciate the RaidZ3 9 disk + spare rebuild times - and the mirror rebuild times. Right now I have data striped across mirrors (Two mirrors, 8TB disks) that is starting to fill up and I've been trying to figure out the next progression. Maybe a 15 bay server - 10 bays for a new Z3 + 1 array, leaves enough space to migrate my current data to the new array.

  • @boneappletee6416
    @boneappletee6416 7 months ago +1

    This was a very interesting video, thank you for the explanation! :)
    Unfortunately I haven't had the chance to really play around with ZFS yet; most of the hardware at work uses hardware RAID controllers. But I'll definitely keep dRAID in mind when looking into ZFS in the future 😊

  • @awesomearizona-dino
    @awesomearizona-dino 8 months ago +4

    Upside down construction picture?

    • @ElectronicsWizardry
      @ElectronicsWizardry  8 months ago +4

      I didn't realize the picture looked odd on video. The part of the picture that is visible in the video is a reflection, and the right-side-up part of the picture is hidden.

  • @Mikesco3
    @Mikesco3 8 months ago +2

    I'm curious if you've looked into Ceph

    • @ElectronicsWizardry
      @ElectronicsWizardry  8 months ago +1

      I did a video on a 3-node cluster a while ago and used Ceph for it. I want to do more Ceph videos in the future when I have the hardware to show Ceph and other distributed filesystems in a proper environment.

    • @andrewjohnston359
      @andrewjohnston359 4 months ago +2

      @@ElectronicsWizardry I would love to see that. There are zero videos I can find showing a Proxmox+Ceph cluster that aren't homelabbers in either nested VMs or using very underpowered hardware as a 'proof of concept' - and once it's set up, the video finishes! I have in the past built a reasonably specced 3-node Proxmox cluster with 10GB NICs and a mix of SSDs and spinners to run VMs at work. It was really cool - but the VMs' performance was all over the place. A proper benchmark, a deep dive into optimal Ceph settings, and emulating a production environment with a decent handful of VMs running would be amazing to see!

  • @Spoolingturbo6
    @Spoolingturbo6 4 months ago

    @2:15 can you explain how to set that up, or give a search term to look it up?
    When I installed Proxmox, I split my 256GB NVMe drive up into the following GB sizes (120/40/40/16/16/1/0.5) (main, cache, unused, metadata, unused, EFI, BIOS).
    I knew about this, but am just now at the stage where I need to use metadata and small files.
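For context on a metadata/small-file setup like the one described above, a ZFS special allocation class vdev is added roughly as follows. The pool name and partition paths here are assumptions for illustration, not the commenter's actual layout.

```shell
# Assumed example: dedicate NVMe partitions to metadata.
# A mirror is used because losing a non-redundant special vdev
# loses the whole pool.
zpool add tank special mirror /dev/nvme0n1p4 /dev/nvme1n1p4

# Optionally also route small data blocks (up to 64K) to the special vdev:
zfs set special_small_blocks=64K tank
```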

  • @severgun
    @severgun 6 months ago

    Why are the data sizes so weird? 7, 5, 9? None of them divisible by 2.
    Why not 8d20c2s?
    Because of the fixed stripe width, I thought it would be better to follow the 2^n rule. Or am I missing something?
    How does compression work here?
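For reference, the usable-space arithmetic from the video's 10:52 chapter can be sketched like this. The draid2:8d:24c:2s layout and the 10 TB drive size are assumed example numbers, not figures from the video.

```shell
#!/bin/sh
# Back-of-the-envelope usable capacity for a dRAID vdev.
data=8        # data disks per redundancy group (the "8d")
parity=2      # parity disks per group (the "2" in draid2)
children=24   # total drives in the vdev (the "24c")
spares=2      # distributed spares (the "2s")
drive_tb=10   # per-drive capacity in TB (assumed)

# Spare capacity is carved out of all children; the remainder holds
# data+parity stripes, of which data/(data+parity) is usable.
usable_tb=$(( (children - spares) * drive_tb * data / (data + parity) ))
echo "${usable_tb} TB usable"
```

Because dRAID uses a fixed stripe width, a data-group width that isn't a power of two can waste space with compression enabled (compressed records may not fill a full stripe), which is likely what this comment's 2^n concern is about.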