TrueNAS: How To Expand A ZFS Pool (Update RAIDz Expansion Video added to Description)

  • Published Nov 25, 2024

Comments • 201

  • @LAWRENCESYSTEMS
    @LAWRENCESYSTEMS  2 years ago +11

    TrueNAS Tutorial: Expanding Your ZFS RAIDz VDEV with a Single Drive
    th-cam.com/video/uPCrDmjWV_I/w-d-xo.html
    Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?
    th-cam.com/video/M4DLChRXJog/w-d-xo.html
    ZFS COW Explained
    th-cam.com/video/nlBXXdz0JKA/w-d-xo.html
    TrueNAS ZFS VDEV Pool Design Explained: RAIDZ RAIDZ2 RAIDZ3 Capacity, Integrity, and Performance.
    th-cam.com/video/-AnkHc7N0zM/w-d-xo.html
    ⏱ Timestamps ⏱
    00:00 ▶ How to Expand ZFS
    01:23 ▶ How To Expand Data VDEV
    02:11 ▶ Symmetrical VDEV Explained
    03:05 ▶ Mixed Drive Sizes
    04:45 ▶ Mirrored Drives
    06:00 ▶ What Happens if you lose a VDEV?
    07:37 ▶ Creating Pools In TrueNAS
    10:30 ▶ Expanding Pool In TrueNAS
    16:00 ▶ Expanding By Replacing Drives

    • @tailsorange2872
      @tailsorange2872 2 years ago

      Can we just give you a nickname "Lawrence Pooling Systems" instead :)

    • @zeusde86
      @zeusde86 2 years ago

      I'd really wish that you could point out the importance of "ashift" in ZFS. I just recently learned that most SSDs report 512b instead of 4k sectors, and that using the wrong "ashift" on them (e.g. 12 instead of 9) is what hits performance so badly that many SSDs will fall behind spinning-rust performance levels. In general I'd really like to see best practices for SSD pools (which cache type to use, ashift as described above, and which disk types to avoid). While it may sound luxurious to have SSD zpools in a homelab, this is especially important on e.g. Proxmox instances with ZFS-on-root (on SSDs).
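
      For readers who want to check this on their own hardware: the sector sizes a drive reports and the ashift a pool was created with can both be inspected from the shell. A minimal sketch (pool and device names are placeholders; Linux assumed):

      ```sh
      # Logical vs. physical sector size each drive reports
      lsblk -o NAME,LOG-SEC,PHY-SEC

      # The ashift a pool's vdevs were created with (2^9 = 512b, 2^12 = 4k)
      zdb -C tank | grep ashift

      # ashift is fixed per vdev at creation time and cannot be changed later
      zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd
      ```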

    • @garygrose9188
      @garygrose9188 1 year ago

      Brand new and as green as it gets: when you say "let's jump over here" and land in a command page, exactly how did you get there?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      @@garygrose9188 you can SSH into the system

  • @chromerims
    @chromerims 1 year ago +9

    5:51 -- I like this. Two pools: first one with faster flash, and the second with HDDs. Thank you, Tom! 👍

  • @davidbanner9001
    @davidbanner9001 1 year ago +3

    I'm just moving from Open Media Vault to TrueNAS SCALE and your uploads have really helped me understand ZFS. Thanks.

    • @gorillaau
      @gorillaau 1 year ago

      What was the deal breaker that made you leave Open Media Vault? I'm pondering a shared storage device as a data store for Proxmox.

    • @davidbanner9001
      @davidbanner9001 1 year ago

      @@gorillaau Overall flexibility and general support. A large number of almost-preconfigured apps/Docker containers and the ability to run VMs. If you are running Proxmox these are probably less of a concern? Switching to ZFS is also very interesting and something I have not used before.

    • @nitrofx80
      @nitrofx80 1 year ago +1

      I don't think it's a good idea. I just migrated from OMV to TrueNAS and I'm not very happy about the change. I think there is a lot more value for the home user in OMV than TrueNAS.

    • @nitrofx80
      @nitrofx80 1 year ago

      As far as I know there is only support for one filesystem in TrueNAS. OMV supports all filesystems and it's really up to you what you want to use.

  • @eggman9713
    @eggman9713 1 year ago +6

    Thank you for the detailed explanation on this topic. I'm just starting to get really into homelab and large data storage. I've been a user of Drobo products (now bankrupt, obsolete, and unsupported) for many years and their "BeyondRAID" system allowing mixed-size drives was a game-changer in 2007 and few other products could do that then or now. I also use Unraid but since it is a dedicated parity disk array and each disk is semi-independent it has limitations (mainly on write speed), but is nice in a data recovery situation where each individual data drive can function outside the machine. I know that OpenZFS developers have announced that "expansion" is coming, and users have been patiently awaiting it, which would make zfs more like how a Drobo works. Better than buying whole VDEVs worth of disks at a time and finding a place for them.

  • @GoosewithTwoOs
    @GoosewithTwoOs 2 years ago +2

    That last piece of info is really good to know. Got a Proxmox server running and I want to replace the old drives that came with it with some newer, larger drives. Now I know.

  • @marklowe7431
    @marklowe7431 1 year ago +1

    Super well explained. Cheers. Enterprise grade integrity, performance & home user flexibility. Pick two.

  • @Dr.Hoppity
    @Dr.Hoppity 5 months ago +1

    Thanks for the excellent practical demonstrations of how ZFS distributes I/O!

  • @ewenchan1239
    @ewenchan1239 2 years ago +19

    Two things:
    1) Replacing the disks one at a time for onesie-twosie TB capacities isn't a terrible issue.
    But if you're replacing 10 TB drives with 20 TB drives, then the resilver process (for each drive) takes an INORDINATE amount of time, such that you might actually be better off building a new system with said 20 TB drives and migrating the data over your network vs. resilvering drive by drive.
    2) My biggest issue with ZFS is the lack of off-the-shelf data recovery tools that are relatively simple and easy to use. The video that Wendell made with Allan Jude talks about this in great detail.

  • @David_Quinn_Photography
    @David_Quinn_Photography 2 years ago +1

    16:05 answered the question I had, but I learned some interesting things, thank you for sharing. I have 500GB, 2TB, and 3TB drives and wanted to at least replace my 500GB with an 8TB that I got on sale.

  • @Mike-01234
    @Mike-01234 1 year ago +1

    After reviewing everything, I wanted drive redundancy and pool size efficiency, so I built a RAIDZ2. That was 5 years ago and I've never looked back. My drive failure rate has been 1-2 drives a year; those were used WD Red drives I bought on eBay. I now only buy brand-new WD Reds and haven't had a failure in the last few years. I'm looking at moving the TrueNAS up to 14TB drives from 6TB, and for critical files, backing up to mirrored drives on a Windows box. I don't like all the security issues around Windows: if you blue-screen or something happens to the OS, it's sometimes difficult to recover data. My new build will be a 5-drive 14TB RAIDZ2 plus a 2nd mirror VDEV as a backup set for critical data, moving that off the Windows box onto the TrueNAS.

  • @johngermain5146
    @johngermain5146 2 years ago +1

    You saved the best for last (adding larger capacity drives.) As my enclosure has the max # of drives installed and 2 vdevs with no room for more, replacing the drives with larger ones is "almost" my only solution without expanding.

    • @theangelofspace155
      @theangelofspace155 2 years ago +2

      You can add a 12-15 disk DAS for around $200-$250

    • @theangelofspace155
      @theangelofspace155 2 years ago +1

      Well, my last comment was deleted. Check serverbuilds if you need a guide.

    • @johngermain5146
      @johngermain5146 2 years ago

      @@theangelofspace155 Your last comment is still here!

  • @deadlymarsupial1236
    @deadlymarsupial1236 2 years ago +1

    I just went with TrueNAS SCALE ZFS using an Intel E-series 6-core / 12-thread Xeon, 32GB RAM, and 4 x 20TB WD Red Pros.
    I like the idea that I can move the whole pool/array of drives to another mainboard and not have to worry about differing proprietary RAID controllers or such controllers failing.
    I also like using a server mainboard with remote admin built onto the board and a dedicated network interface, so I can power up the machine via VPN remote access if need be.
    Although it is very early days in setup/testing, I am so far very impressed, and it was worth the extra $ for a server hardware platform. People may however be surprised how much storage is allocated for redundancy - at least 1 drive's worth to survive 1 drive failing.
    What is a bit tricky is configuring a Windows VM hosted on the NAS that can access the NAS shares.
    Haven't quite figured out how to set up a container to host the Ubiquiti controller either.
    One of the things this NAS will do is host StorageCraft SPX backup sets, and the Windows VM hosts the incremental backup image manager that routinely verifies, consolidates, and purges redundant data as per retention policies.
    I haven't decided on an FTP server for receiving backups of remote hosts yet.
    Could go with FileZilla I suppose.
    Another nice solution would be a PXE boot service providing a range of system boot images for setting up and troubleshooting systems in a workshop environment.
    There have been some implementations where TrueNAS is hosted within a hypervisor such as Proxmox, so TrueNAS can focus exclusively on NAS duties while other VMs run a Windows server, a firewall, and perhaps containers for the Ubiquiti controller. That may need more cores, but when I have the time and get another 32GB of RAM to put in the machine, I plan to see if I can migrate the existing bare-metal install of TrueNAS SCALE to a Proxmox-hypervised VM just to see how that goes.

    • @theangelofspace155
      @theangelofspace155 2 years ago

      There are some videos on setting up TrueNAS SCALE as a Proxmox VM; I went that route. I use SCALE just as a file manager, Proxmox as the VM hypervisor, and Unraid as the container (Docker) manager.

    • @deadlymarsupial1236
      @deadlymarsupial1236 2 years ago

      @@theangelofspace155 Thanks, it will be interesting to see how easily (or not) migrating TrueNAS from bare metal to a VM within Proxmox will go. I suspect it will involve backing up the TrueNAS configuration, mapping the drives and network interfaces to the VM, and setting up auto-boot on restored mains power, but I need to put together a more thoroughly researched plan first.

  • @Anonymousee
    @Anonymousee 1 year ago

    16:02 This is what I really wanted to hear, thank you!
    Too bad it was a side-note at the end, but I did learn some other things that may come in handy later.

  • @SirLothian
    @SirLothian 1 year ago

    I have a boot pool that was originally a single 32GB thumb drive that I mirrored with a 100GB SSD. I wanted to get rid of the thumb drive so I replaced the thumb drive on the boot pool with a second 100 GB SSD. I had expected the capacity to go from 32 GB to 100 GB but it did not. This surprises me since the video said that replacing the last drive on a pool would increase the pool size to the smallest disk in the pool. Looks like I will have to destroy the boot pool and recreate it with full capacity and then reinstall TrueNAS on it.
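
    A plausible explanation (an assumption, not verified against this poster's system) is the pool's autoexpand property: ZFS won't grow into larger replacement disks unless autoexpand is on, or each device is manually expanded. A sketch, using the default SCALE boot pool name; device/partition names are placeholders:

    ```sh
    # Check whether the pool may grow into larger replacement disks
    zpool get autoexpand boot-pool

    # Enable it, then ask ZFS to claim the extra space on each device
    zpool set autoexpand=on boot-pool
    zpool online -e boot-pool sdb2
    ```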

  • @Darkk6969
    @Darkk6969 2 years ago +10

    One thing I love about ZFS is how incredibly easy it is to manipulate the storage pools. I was able to replace 4 3TB drives with 4 4TB drives without any data loss. It took a while to resilver each time I swapped out a drive. Once all the drives had been swapped out, ZFS automatically expanded the pool.
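
    The swap-and-resilver workflow described above maps to one command per disk. A sketch; pool and device names are placeholders:

    ```sh
    # Repeat for each disk, waiting for the resilver to finish in between
    zpool replace tank sda sde     # old 3TB disk sda, new 4TB disk sde
    zpool status tank              # watch the resilver progress

    # With autoexpand=on, the pool grows after the last disk is swapped
    zpool set autoexpand=on tank
    ```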

    • @tubes9181
      @tubes9181 2 years ago +6

      This is available on a lot more than just zfs.

    • @MHM4V3R1CK
      @MHM4V3R1CK 2 years ago

      How long did that take btw?

  • @alex.prodigy
    @alex.prodigy 2 years ago +1

    Excellent video, makes understanding the basics of ZFS very easy.

  • @MobiusCoin
    @MobiusCoin 4 months ago

    This sounds like a lot of work for not that much benefit. Watching this has really changed my approach to my build. I'm going to save up and get as many drives as my build will fit and just not go through this hassle. I actually planned on getting 3 drives, RAIDZ1, and expanding. But nah, I'd rather not create this extra work for myself and just be more patient. Although I don't mind the last method. Again, just have to be patient.

  • @madeyeQ
    @madeyeQ 2 years ago

    Great video and very informative. I may have to take another look at TrueNAS. At the moment I am using a Debian-based system with just ZFS pools managed from the CLI (yes, I am a control freak).
    One thing to note about ZFS raid (or any other raid) is that it's not the same as a backup. If you are worried about losing a drive, make sure you have backups! (Learned that one the hard way about 20 years ago.)

  • @perriko
    @perriko 2 years ago +1

    Great instruction as usual... fact with reason! Thank you!

  • @zeusde86
    @zeusde86 2 years ago +4

    Actually you CAN remove data vdevs, you just cannot do it with raidz vdevs. With mirrored vdevs this works; see also "man zpool-remove(8)":
    "Top-level vdevs can only be removed if the primary pool storage does not contain a top-level raidz vdev".
    ...on very full vdevs it just takes some time to move the stuff around...
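
    For reference, the evacuation looks like this; a sketch assuming a pool built from mirror vdevs, with placeholder names:

    ```sh
    # Only works when the pool contains no top-level raidz vdev
    zpool remove tank mirror-1   # copies mirror-1's data onto the remaining vdevs
    zpool status tank            # shows evacuation progress under "remove:"
    ```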

  • @romanhegglin
    @romanhegglin 2 years ago +2

    Thanks!

  • @knomad666
    @knomad666 1 year ago

    Great explanation.

  • @alecwoon6325
    @alecwoon6325 2 years ago

    Thanks for sharing. Great content! 👍

  • @HelloHelloXD
    @HelloHelloXD 2 years ago

    Great video as usual. Thanks

  • @bartgrefte
    @bartgrefte 2 years ago +4

    Can you make a video about which aspects of ZFS are very RAM-demanding? A whole bunch of websites say that with ZFS, you need 1GB of RAM for each TB of storage, but there are also a whole bunch of people out there who are able to use ZFS without problems on systems with far from enough RAM to obey that requirement.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +5

      Right here: Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?
      th-cam.com/video/M4DLChRXJog/w-d-xo.html

    • @Mr.Leeroy
      @Mr.Leeroy 2 years ago +1

      The only demanding thing is deduplication; the rest is caching.
      You can control on a per-dataset basis what gets cached (and whether only metadata or the data itself is cached), as well as where it gets cached: into RAM or into L2ARC.
      Dataset parameters like `primarycache` are what you need.
      Still, be very cautious going below minimum requirements, e.g. 8GB RAM for FreeNAS; that is not dictated by ZFS but by the particular appliance as a whole OS. Something like ZFS on vanilla FreeBSD may very well go a lot lower than 8GB, all depending on the services you run.
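
      As a sketch of that per-dataset tuning (dataset name is a placeholder):

      ```sh
      # Keep only metadata in ARC for this dataset, and skip L2ARC entirely
      zfs set primarycache=metadata tank/media
      zfs set secondarycache=none tank/media

      # Verify the current values
      zfs get primarycache,secondarycache tank/media
      ```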

    • @bartgrefte
      @bartgrefte 2 years ago

      @@Mr.Leeroy I wasn't thinking as low as 8GB, more like 32GB, but with so much storage that the "1GB RAM per TB storage" requirement still isn't met.

    • @Mr.Leeroy
      @Mr.Leeroy 2 years ago

      @@bartgrefte 32GB is perfectly adequate. I don't suppose you're approaching a triple-digit-TB pool just yet.

    • @bartgrefte
      @bartgrefte 2 years ago

      @@Mr.Leeroy No pool yet, waiting for a good deal on HDDs. Now if only ZFS had the option to start a RAIDZ2 (or 3) with a small number of drives and add drives later...
      Everything else is ready to go. Built a system with used parts only, and it has 16 3.5" and 6 2.5" hot swap bays in a Stacker STC-T01 that I managed to get my hands on :)

  • @andymok7945
    @andymok7945 11 months ago

    Thanks. Waiting for the feature to add a drive to expand. I used much larger drives when I created my pools. For me, data integrity is way more important. It is for my own use, but important stuff, and I have nightly rsync happening to copy to another TrueNAS setup. Then I also have a 3rd system that is my offline archive copy. It gets powered up and connected to the network and rsyncs away. When done, the network is disconnected and power removed.

  • @deacbeugene
    @deacbeugene 9 months ago

    Questions about dealing with pools: can one move a dataset to another pool? Can one delete a vdev from a pool if there is enough space to move the data?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  9 months ago +1

      You can use ZFS replication to copy them over to another pool.
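
      A minimal command-line sketch of that (snapshot, pool, and dataset names are placeholders; TrueNAS exposes the same thing through its Replication Tasks UI):

      ```sh
      # Snapshot the dataset, then send it to the other pool
      zfs snapshot tank/photos@migrate
      zfs send tank/photos@migrate | zfs recv newpool/photos
      ```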

  • @Savagetechie
    @Savagetechie 2 years ago +1

    extendable vdevs can't be too far away. the openzfs developer summit is next week, maybe they'll even be discussed there?

  • @johnpaulsen1849
    @johnpaulsen1849 2 years ago +3

    Great video. I know that Wendell from Levelonetechs has mentioned that expanding vdevs is coming?
    What do you think about that?
    Also do you have any content on adding hot spares or SSD cache to an existing pool?

    • @Pythonzzz
      @Pythonzzz 1 year ago +1

      I keep checking around every few months for updates on this. I’m hoping this will be an option by the time I need to add more storage.

  • @Im_Ninooo
    @Im_Ninooo 2 years ago

    That's basically why I went with BTRFS, so I could expand slowly; drives are quite expensive where I live, so I can't just buy a lot of them at once.

    • @Im_Ninooo
      @Im_Ninooo 2 years ago

      @@wojtek-33 I've been using it for years now, but admittedly only with a single disk on all of my servers, so can't speak from experience on the resiliency of it.

    • @LesNewell
      @LesNewell 2 years ago +1

      @@wojtek-33 I've been using BTRFS for 10+ years (mostly raid5) and in that time have had two data loss incidents, neither of which could be blamed on BTRFS. One was raid0 on top of LUKS with 2 drives on USB. Basically I was begging for something to go wrong and eventually it did. One USB adapter failed so I lost some data. This was only a secondary backup so no big deal.
      The other time was when I was creating a new Raid5 array of 5x 2TB SSDs and had one brand new SSD with an intermittent fault. I mistakenly replaced the wrong drive. Raid5 can't handle 2 drive failures at the same time (technically one failure and one replacement) so I lost some data. Some of the FS was still readable but it was easier to just wipe and start again after locating the correct faulty drive and replacing it.
      As an aside, I find BTRFS raid5 to be considerably faster than ZFS RaidZ. ZFS also generates roughly twice as many write commands for the same amount of data. That's a big issue for SSDs.
      BTRFS raid5 may have a slightly higher risk of data loss but for SSDs I think that risk is offset by the reduced drive wear and risk of wearing drives out.

    • @Mr.Leeroy
      @Mr.Leeroy 2 years ago

      Each added drive is also ~52kWh per year, so expanding vertically still makes more sense.

  • @ManVersusWilderness
    @ManVersusWilderness 2 years ago +1

    What is the difference between "add vdevs" and "expand pool" in truenas?

  • @RobFisherUK
    @RobFisherUK 9 months ago

    I only have two drives and only space for two, so 16:00 is the answer for me!

  • @prpunk787
    @prpunk787 2 months ago +1

    From a noob: if you keep adding 4-HDD RAIDZ1 VDEVs, you can expand the pool one VDEV at a time, but the pool still only tolerates 1 drive failure per VDEV, and the capacity is lower because there are 2 VDEVs' worth of RAIDZ parity. If you had all 8 HDDs in 1 RAIDZ1 VDEV in the pool, you would have more storage and it would still tolerate one drive failure. Am I correct on that?

  • @simonsonjh
    @simonsonjh 2 years ago

    I think I would use the disk replacement method. But waiting for new ZFS features.

  • @StylinEffect
    @StylinEffect 3 months ago

    I currently have 5x 4TB drives and am looking at using TrueNAS. What would be the best configuration that would allow me to expand to max capacity which is 8 drives for my case?

  • @glitch0156
    @glitch0156 8 months ago

    I think for Raid0, you can add drives to the pool without rebuilding the pool.

  • @hpsfresh
    @hpsfresh 1 year ago

    Doesn't ZFS support the attach command even for non-mirrors?

  • @Kannulff
    @Kannulff 2 years ago

    Thank you for the great explanation and video. As always :) Is it possible to post the fio command line here? Thank you. :)

  • @TonyHerson
    @TonyHerson 3 months ago

    If you're running stripe you can add one drive

  • @frederichardy1990
    @frederichardy1990 1 year ago

    With the "Expanding by replacing", assuming that you can shutdown the TrueNAS server for a few hours, copying all the existing drives of a vdev (with dd or even standalone duplicator) to higher capacity drive could work??? It would be much faster than replacing one drive at a time for vdev with a lot of drives.

  • @kommentator1157
    @kommentator1157 2 years ago +1

    Would it be possible (though not advisable) to have vdevs with different widths?
    Edit: Just got to the part where you show it. Yep it's possible, not recommended.

  • @hojo1233
    @hojo1233 1 year ago

    What about truenas and 2 drives in basic mirror? Is there any way to expand it using bigger drives? Unfortunately I don't have any more free ports in server.
    In my configuration I have 4 ports total - 2 of them are for data drives (2x4TB). Another one is for SSD cache, and last one is for boot. I had no issues with that configuration whatsoever, but now I need to increase storage capacity.
    Is there any way to expand it without rebuilding everything from scratch? For example by replacing the 4TB disks with 8TB ones and resizing the pool?

  • @__SKYNET__
    @__SKYNET__ 10 months ago

    Tom, can you talk about the new pool expansion features coming in OpenZFS 2.3? Thanks, appreciate it.

  • @philippemiller4740
    @philippemiller4740 1 year ago

    Hey Tom, I thought you could remove vdevs from a pool, but only mirrors, not raidz vdevs?

  • @tank1demon
    @tank1demon 1 year ago

    So there functionally isn't a solution for having a system where you will end up with 5 drives in a pool but have to start with 4? As in adding anything to an existing vdev? I'm on xubuntu 20.04 and I'm trying to work out how to go about that, if possible. Can I just build a pool with drives without a vdev and add to that pool?

  • @Thomate1375
    @Thomate1375 2 years ago

    Heey, I have a problem with pool creation...
    I have a fresh install of TrueNAS SCALE with 2x 500GB HDDs,
    but every time I try to create a pool with them I get a "...partition not found" error.
    Everything that I could find online says that I would have to wipe the disks and eventually reboot the system. I have done this multiple times now but nothing changes.
    I have also run a SMART test, but according to the results the drives seem to be OK.

  • @KB3M
    @KB3M 2 months ago

    Hi Lawrence, have you generally upgraded all your TrueNAS zpools to the feature flag version that prevents moving pools to an older TrueNAS version? I'm just a home user; any reason not to?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 months ago +1

      I update the feature flag once I know I am not going back to a previous version.

  • @LukeHartley
    @LukeHartley 1 year ago +3

    What's the most common cause of a VDEV failing? I like the idea of creating several VDEVs, but the thought of 1 failing and losing EVERYTHING scares me.

    • @BenVanTreese
      @BenVanTreese 1 year ago +3

      VDEVs would fail due to normal drive failures.
      The issue with a lower raid level is that while you can lose 1 drive and keep all data, when you put in a new drive to replace the failed one, it must do a lot of reads/writes to recalculate the parity onto the drive you put in.
      This process can cause any other drives that are close to failing to fail as well.
      Usually people buy drives in bulk, so if you buy 16x drives at once, all made at the same time by the same manufacturer, the chance of another drive failing at the same time as the first is higher as well.
      The chance of two drives failing in the same vdev when you're running RAIDZ2 and have a hot spare or two assigned to the pool is lower and lower, but that risk is never 0, which is why you keep backups of raid (raid is not a backup).
      Anyway, hopefully that is helpful info.

    • @lukehartleyfilms
      @lukehartleyfilms 1 year ago +1

      @@BenVanTreese very helpful! Thanks for the info!

  • @DiStickStoffMono0xid
    @DiStickStoffMono0xid 2 years ago

    I did read somewhere that it’s possible to “evacuate” data from a vdev to remove it from a pool, is that maybe a new feature?

  • @AdamEverythingRC
    @AdamEverythingRC 2 years ago

    Question: for a TrueNAS server, which would be better, an ASUS X58 Sabertooth with a Xeon X5690, or an ASUS Sabertooth 990FX R2.0 with an AMD FX-8350, using a SAS card to the drives? I have both I could use; 24GB of memory on the Intel and 16GB on the AMD. Just not sure which would be better. I will also be using an M.2 card with a 256GB M.2 drive as a LOG cache - or would it be better used as just extra cache? This will be a file server to hold all my photos (photographer). Thanks for your time and thoughts on this.

  • @AdamEverythingRC
    @AdamEverythingRC 2 years ago

    Are you able to add a drive to increase your fault tolerance? For instance, I started with 5 drives in Z1; I would like to add another drive and change from Z1 to Z2. Is that possible?

  • @GW2_Live
    @GW2_Live 1 year ago

    This does drive me a little nuts tbh, as a home user. I have an MD1000 disk shelf with 4/15 bays empty; it would be nice to add 4 more 8TB drives to my VDEV without restoring all the data from my backup.

    • @emka2347
      @emka2347 10 months ago

      yeah... this is why i'm thinking about unraid

  • @arturbedowski1148
    @arturbedowski1148 11 months ago

    Hi, I copied my HDD onto an SSD and tried expanding the ZFS pool via GParted, but it didn't work (the SSD has waaaay bigger storage). Is it possible to expand my rpool ZFS partition or not?

  • @AnuragTulasi
    @AnuragTulasi 2 years ago

    Do a video on dRAID too

  • @KerbalCraft
    @KerbalCraft 4 months ago

    I added a data vdev to my pool, I don't see an increase in storage
    I originally had a pool with 1 vdev containing 3 4TB SSDs (RAIDz1). I just added another data vdev with 3 4TB SSDs in RAIDz1, to increase the pool storage.
    However, after I added the vdev, the storage did not increase, but the second vdev shows up (pool shows 6 drives, 2 vdevs).
    Why is this? Am I missing something?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  4 months ago

      Not an issue I have run into from the UI, but from the command line you can run "zpool list" and it will show the space available.
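
      For example (pool name is a placeholder; -v adds the per-vdev breakdown):

      ```sh
      zpool list          # pool-level SIZE / ALLOC / FREE
      zpool list -v tank  # same, broken down per vdev and per disk
      ```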

  • @IntenseGrid
    @IntenseGrid 6 months ago

    Several RAIDs have a hot spare (or a cool one, by powering down the drive). I would like to have a cold spare for my zpool that gets automatically used so resilvering can kick off without me knowing a thing. I realize that this is sometimes dangerous because we don't know what killed the drive, and it may kill another one while resilvering, but most of the time the drives themselves are the problem. Does ZFS support the hot or cold spare concept?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  6 months ago

      Yes, you can have a hot spare.
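
      A sketch of setting one up (pool and device names are placeholders):

      ```sh
      # Attach a hot spare to the pool
      zpool add tank spare sdf

      # Spare activation on a fault is handled by the ZFS event daemon (zed);
      # autoreplace additionally lets a new disk inserted into the same slot
      # be resilvered in automatically
      zpool set autoreplace=on tank
      ```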

  • @jms019
    @jms019 1 year ago

    Isn't RAIDZ1 expansion properly in yet?

  • @Linrox
    @Linrox 6 months ago

    Is it possible to upgrade a mirrored (2-drive) raid to a raidz with an extra 2 drives, without data loss?

  • @maddmethod5880
    @maddmethod5880 2 years ago +1

    man I wish proxmox had a nice UI like that for zfs. gotta do a lot of this in command line like a scrub

    • @theangelofspace155
      @theangelofspace155 2 years ago +2

      You can move to the dark side and run TrueNAS as a VM under Proxmox 😬

    • @skullpoly1967
      @skullpoly1967 1 year ago

      Yeah, I do that

  • @Djmaxofficial
    @Djmaxofficial 8 months ago

    But what if I wanna use different size drives?

  • @BrentLeVasseur
    @BrentLeVasseur 29 days ago

    Since it’s almost 2025/late 2024, has this changed where you can add one drive at a time? Maybe an update video is in order? Thanks!

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  29 days ago +1

      You mean like this one from 2024? th-cam.com/video/uPCrDmjWV_I/w-d-xo.html

    • @BrentLeVasseur
      @BrentLeVasseur 29 days ago +1

      @@LAWRENCESYSTEMS I watched it, thanks! I just set up my very first Proxmox server and TrueNAS VM on Proxmox, and I feel like I have given birth to a borg baby. And I was wondering how I can later increase the pool size, and this video popped up, so thanks!

  • @LA-MJ
    @LA-MJ 2 years ago

    would you recommend raidz1 for ssds?

    • @LesNewell
      @LesNewell 2 years ago

      RaidZ1 generates quite a lot of extra disk writes, which is bad for SSD life. I did some testing a while back between ZFS raidZ and BTRFS Raid5. BTRFS generated roughly half as many disk writes for the same amount of data written to the file system.
      How do you intend to use the system? If it's mostly for backups you'll probably never wear the drives out. If it's for an application with regular heavy disk writes you may have a problem.

  • @ovizinho
    @ovizinho 1 year ago

    Hello!...
    I have a question that I think is so simple that everywhere I research it, it goes unnoticed...
    I built a NAS with an old PC and everything is ready for the installation of TrueNAS.
    My question: where do I connect the LAN cable? Directly to the internet router, or to the main PC's LAN?
    NAS-router or NAS-main computer?
    Both the NAS and the main computer have 10Gb LAN.
    If it is NAS-router, after installing TrueNAS do I disconnect it from the router and connect it to the main computer?
    Thanks in advance!...
    Processor: i7 6700 3.40 GHz
    Motherboard: ASUS EX-B250-V7
    Video card: GTX 1060 6GB (PG410)
    Memory: DDR4 16GB 3000MHz
    SSD: 500GB NVMe
    HDD: 1TB

  • @SandWraith0
    @SandWraith0 2 years ago

    Just one question: how is any of this better than how Unraid does it (or OpenMediaVault with a combination of UnionFS and Snapraid or Windows with Stablebit)?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +3

      ZFS has much better performance and better scalability

  • @Saturn2888
    @Saturn2888 2 years ago

    So I have 4x1TB. Replace 1TB with 8TB, resilver, no change. Replace another 1TB, resilver, now it's 8TB larger from the first one? Or is it that you replace all drives first, then it shows the new size?

    • @gloth
      @gloth 1 year ago +1

      no changes until you replace that last drive and you have 4x8tb on your vdev

    • @Saturn2888
      @Saturn2888 1 year ago

      @@gloth thanks! I eventually figured it out and switched to all mirrors

  • @z400racer37
    @z400racer37 2 years ago

    Doesn’t Unraid allow adding 1 drive at a time @Lawrence Systems ?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +1

      Not sure, I don't use Unraid.

    • @z400racer37
      @z400racer37 2 years ago

      @@LAWRENCESYSTEMS Pretty sure I remember them working some magic there somehow. Could be interesting to check out. But I'm a TrueNAS guy also.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +1

      No, Unraid does not natively use ZFS.

    • @z400racer37
      @z400racer37 2 years ago

      @@LAWRENCESYSTEMS @superWhisk ohh I see, I must have misunderstood when researching it ~a year ago. Thanks for the clarification guys 👍🏼

  • @Reminder5261
    @Reminder5261 2 years ago

    Is it possible for you to do a video on creating a ZFS share? There is nothing on youtube to assist me with this. For some reason, I am unable to get my ZFS shares up and running.

    • @wiktorsz1967
      @wiktorsz1967 2 years ago +1

      Check if your user group has SMB authentication enabled. At first I assumed that if my user settings were set up then it would work, or that the primary group would automatically be allowed to authenticate.
      Also make sure to set the share type to "SMB share" at the bottom when creating your dataset, and add your user and group to the ACL in the dataset permissions.
      I don't know if you have done all that already, but for me it works with all the things I wrote above.
      Edit: if you're using Core (like me) and your share doesn't work on iPhone, then enable AFP in services.
      On Scale you need to enable "AFP compatibility" or something like that somewhere in the dataset or ACL settings.

    • @0Mugle0
      @0Mugle0 2 years ago

      Check there are no spaces in the pool or share names; that fixed it for me.

  • @yc3X
    @yc3X 1 year ago

    Is it possible to just drag and drop files onto the NAS? Secondly, is it possible to run games off the NAS? I have some super old games I wanted to store on it and just play them off it. I wasn't sure if the files are compressed or not when placing them on the NAS.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      Yes, you can put them on a share, and as long as a game can run from a share it should work.

    • @yc3X
      @yc3X 1 year ago

      @@LAWRENCESYSTEMS Awesome thanks! Yeah, I'm using a Drobo currently but who knows when it might die so I figured I would start looking into something newer. I figured it must be something similar to a drobo.

  • @praecorloth
    @praecorloth 2 years ago +2

    I'm going to be one of those mirror guys. When it comes to systems that are going to have more than 4 drives, mirrors are pretty much the only way to go. The flexibility in how you can set them up means that if you need space and performance, you can have 3x 2-way mirrors, or if you need better data redundancy (better than RAIDZ2), you can set up 2x 3-way mirrors. The more space for physical drives you have, the less sense parity RAID makes.
    Also, for home labbers using RAIDZ*, watch out for mixing and matching disks with different sector sizes. Like 512 byte vs 4096 byte sector size drives. That will completely fuck ANY storage efficiency you think you're going to get with RAIDZ* over mirrors.
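
    One way to audit sector sizes before building the pool, as a sketch (Linux; the device name is a placeholder):

    ```sh
    # Logical vs. physical sector size, per drive
    smartctl -i /dev/sda | grep -i 'sector size'
    cat /sys/block/sda/queue/physical_block_size
    ```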

    • @Mike-01234
      @Mike-01234 1 year ago +3

      Mirrors are only good if performance is your top priority. RAIDZ2 gives you more space, and survives up to 2 drive failures, compared to a mirror. If you step up to a 3-way mirror you can now lose up to 2 drives, but you still lose more space than RAIDZ2. The only gain is performance.

    • @praecorloth
      @praecorloth 1 year ago

      @@Mike-01234 storage is cheap, and performance is what people want. Parity RAID just doesn't make sense anymore.

  • @mikew642
    @mikew642 1 year ago

    So on a mirrored pool, if I add a vdev to that pool, my dataset won't know the difference, and just give me the extra storage?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      Yes, datasets don't care how the VDEVs they are attached to are expanded.

    • @mikew642
      @mikew642 1 year ago +1

      @LAWRENCESYSTEMS Thank you sir! You're one of the main reasons I started playing with ZFS / TrueNAS! THANK YOU for your content!

  • @jeff-w
    @jeff-w 3 months ago

    You can make a single drive vdev and put it in a pool if you wish.

  • @rcdenis1
    @rcdenis1 2 years ago

    How do you reduce the size of a ZFS pool? I have more room than I need and need that extra space for another server.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +6

      As I said in the video, you don't.

    • @rcdenis1
      @rcdenis1 2 years ago +2

      @@LAWRENCESYSTEMS ok, guess I'll have to backup everything, tear it down, start over and restore. And I wanted to go fishing next weekend! Thanks for the video

  • @tylercgarrison
    @tylercgarrison 2 years ago +1

    Is that background blue hex image from GamersNexus? lol

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago

      I never really watch that channel, they were part of an old template I had.

  • @tupui
    @tupui 11 months ago

    Did you see OpenZFS added Raidz expansion!?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  11 months ago

      Added, but not in all production systems yet.

  • @donaldwilliams6821
    @donaldwilliams6821 2 years ago +2

    Re expanding VDEVs by replacing drives with larger ones: one note, if you are doing that with RAIDZ1 you are intentionally putting the VDEV into degraded mode. If another drive should fail during the rebuild, that vdev and zpool will go offline. This is especially risky with spinning drives over 2TB since they have longer rebuild times. A verified backup should be done before attempting that process. Some storage arrays have a feature that mirrors out a drive vs. forcing a complete rebuild, i.e. as SMART errors increase, the drive is mirrored out before it actually fails. I don't believe ZFS has a command like that? You mirror the data to the new drive in the background, then "fail" the smaller drive; the mirrored copy becomes active and a small rebuild is typically needed to get it 100% in sync, depending on the IO activity at the time.

    • @zeusde86
      @zeusde86 2 years ago +7

      You can do this without degrading the pool: just leave the disk to be replaced attached, and perform a "replace" action instead of pulling it out. You will notice that the pool reads from all available drives to prefill the new one, including the disk designated for removal. If you have spare disk slots, this method is definitely preferred; I've done this multiple times.
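
      A sketch of that in-place replace (placeholder names); the old disk stays online the whole time:

      ```sh
      # ZFS forms a temporary "replacing" vdev and copies from all members,
      # including the disk that is on its way out
      zpool replace tank sdc sdg
      zpool status tank
      ```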

    • @donaldwilliams6821
      @donaldwilliams6821 2 years ago

      @@zeusde86 Excellent! Thank you. I am still learning ZFS. I use it on my TrueNAS server, many VMs, Linux laptop and Proxmox.

    • @ericfielding668
      @ericfielding668 2 years ago

      ​@@zeusde86 The "replace" action is a great idea. I wonder if the addition of a "hot spare" (i.e. yet another drive) would help if things went sour during the change.

  • @Shpongle64
    @Shpongle64 5 months ago

    I don't understand when he combines multiple RaidZ1's into a large ZFSpool that one disk in the vdev causes such a major problem. Isn't RaidZ1 supposed to have a one disk failsafe?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  5 months ago

      I don't understand the question

    • @Shpongle64
      @Shpongle64 5 months ago

      @@LAWRENCESYSTEMS I rewatched and I misunderstood. When you put the multiple raidz1 vdevs into a pool it sounded like if one disk in the vdev goes down it can corrupt the pool. As long as you quickly replace the failed disk in the Raidz1 vdev then the whole pool is fine.

  • @84Actionjack
    @84Actionjack 2 years ago

    Must admit the expansion limitation is a reason I'll stick to "Stablebit" on my Windows Server as my main storage but I fully intend to adopt ZFS on TrueNAS as a backup server. Thanks

    • @Im_Ninooo
      @Im_Ninooo 2 years ago

      with BTRFS you can add a drive of any size, at any time and run a balance operation to spread the data (and/or convert the replication method)

    • @84Actionjack
      @84Actionjack 2 years ago +1

      @@Im_Ninooo Stablebit works the same way in windows. Thanks

  • @whyme2500
    @whyme2500 1 year ago

    Not all heroes wear capes....

  • @kevinghadyani263
    @kevinghadyani263 2 years ago

    Watching all these ZFS videos on your channel and others, I'm basically stuck saying "I don't know what to do". I was gonna make a RAIDZ2 with my eight 16TB drives, but now I'm thinking it's better to have more vdevs so I can upgrade more easily in the future. It just makes sense, although I lose a ton of storage capacity doing it.
    I thought about RAIDZ1 with 4 drives like you showed striped together, but I don't think that's very safe; definitely not as safe as a single RAIDZ2, especially with 16TB drives. I wanna put my photos and videos on there, although I also need a ton of storage capacity for my YouTube videos. Each project is 0.5-1TB. And I don't know if I should use any of my older 2TB drives as part of this zpool or put them in a separate one.
    I feel completely stuck and unable to move. My 16TB drives have been sitting there for some days now, and I need the space asap :(. I don't want to make a wrong decision and not be able to fix it.

  • @blackrockcity
    @blackrockcity 1 year ago

    Watching this at 2x was the closest thing I've seen to 'The Matrix' that wasn't actually art or sci-fi. 🤣

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      I use 2X as well; YouTube should offer up to 3X.

  • @donaldwilliams6821
    @donaldwilliams6821 2 years ago +1

    Re: VDEV loss. In the case of RAIDZ1 you would need two failures for the VDEV to go offline. Your illustration shows one failure bringing the entire VDEV offline, which isn't correct. That VDEV would be degraded but still online. I do agree that Z2 is a better option. Re: Mirrors. Ah yes, the old EMC way of doing things, haha. I have seen plenty of mirror failures too.

    • @Mr.Leeroy
      @Mr.Leeroy 2 years ago

      @SuperWhisk triple mirror is far from a terrible idea when you are designing cost-effective tiered storage.
      E.g. as a homelab admin you consider how low the ratio of your non-recoverable data to recoverable trash like Plex storage gets, and suddenly triple mirror + single-drive pools make sense.

    • @Mr.Leeroy
      @Mr.Leeroy 2 years ago

      @SuperWhisk look up tiered storage concept, or re-read, idk..

  • @WillFuI
    @WillFuI 6 months ago

    So there is no way to make a 4-drive Z1 into an 8-drive Z2 without losing all the data currently on the drives. Dang, would have loved that.

  • @bridgetrobertson7134
    @bridgetrobertson7134 1 year ago

    Yup, I hate ZFS. Looking to offload from Open Media Vault, which has run flawlessly for 6 years with 3 10TB drives on SnapRAID. I wanted less of a do-it-all server and more of a long-term storage box this time around. Problem is, I can't afford to buy enough drives at clown-world prices to satisfy ZFS if I can't just add a drive or two later. What's worse, 20TB drives are within $10 of my same old 10TB drives. Will look for something else.

  • @Mice-stro
    @Mice-stro 2 years ago

    Something interesting is that while you can't expand a pool by 1 drive, you can add it as a hot spare, and then add it into a full pool later

    • @MHM4V3R1CK
      @MHM4V3R1CK 2 years ago

      I have one hot spare on my 8 disk raidz2. So 9 disks. Are you saying I can expand the storage into that hot spare so it adds storage space and removes the hot spare?

    • @ericfalsken5188
      @ericfalsken5188 2 years ago

      @@MHM4V3R1CK No, but if you expand the raidz later, you can use the hot spare as one of those drives... Not sure if that's quite as awesome... but the drive is still giving you usefulness in redundancy.

    • @MHM4V3R1CK
      @MHM4V3R1CK 2 years ago

      @@ericfalsken5188 Not sure I follow. Could you explain in a little more detail please?

    • @ericfalsken5188
      @ericfalsken5188 2 years ago

      @@MHM4V3R1CK You're confusing 2 different things. The "hot spare" isn't part of any pool. But it's swapped into a pool to replace a dead or dying drive when necessary. So it can still be useful to help provide resiliency in the case of a failure.... but isn't going to help you expand your pools. On the other hand, because it isn't being used.... when you DO get around to making a new pool with the drive (or if TrueNas adds ZFS expansion in the meantime) then you can still use the drive. If you do add the drive to a pool, then it's not a hot spare anymore.

    • @MHM4V3R1CK
      @MHM4V3R1CK 2 years ago

      @@ericfalsken5188 Oh yes. I understand the hot spare's functionality. I thought for some reason based on your comment that having the hot spare configured in the pool meant I got some free pass to use it to expand the storage. I misunderstood. Thanks for your extra explanation!

  • @phillee2814
    @phillee2814 1 year ago

    Thankfully, the future has arrived and you can now add one drive to a RAIDZ to expand it.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      Not yet

    • @phillee2814
      @phillee2814 1 year ago

      @@LAWRENCESYSTEMS So they were misleading us all at the OpenZFS conference then?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      @@phillee2814 My point is that it's still a future feature, not in production code yet.
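
      For the record: the raidz expansion feature that later shipped in OpenZFS 2.3 (and is demonstrated in the update video linked in the pinned comment) grows a raidz vdev with zpool attach. A sketch with placeholder names:

      ```sh
      # RAIDZ expansion: attach a single new disk to an existing raidz vdev
      zpool attach tank raidz1-0 sde
      zpool status tank   # shows expansion progress; existing data keeps its
                          # old data-to-parity ratio until rewritten
      ```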

  • @jlficken
    @jlficken 1 year ago

    How would you set up an all SSD 24-bay NAS with ZFS? I'm thinking either 3 x 8-disk RAIDZ2 VDEV's, 2 x 12-disk RAIDZ2 VDEV's, or maybe 1 x 24-disk RAIDZ3 VDEV? The data will be backed up elsewhere too. It's not necessary to have the best performance ever but it will be used as shared storage for my Proxmox HA Cluster.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      2X12

    • @jlficken
      @jlficken 1 year ago

      @@LAWRENCESYSTEMS Thanks for the reply! I'll try to grab 4 more SSD's over the next couple of months to make the first pool and go from there.

  • @lyth1um
    @lyth1um 1 year ago +1

    The worst part about ZFS so far is shrinking; LVM and dumb filesystems can do it. But like in real life, we can't get everything.

  • @LudovicCarceles
    @LudovicCarceles 1 year ago +1

    Thanks!

  • @june5646
    @june5646 2 years ago

    How to expand a pool? You don't unless you're rich lmao

  • @nid274
    @nid274 1 year ago

    Wish it was easier

  • @christopherwilliams1878
    @christopherwilliams1878 1 year ago

    Did you know that this video is uploaded to another channel?

  • @emka2347
    @emka2347 10 months ago

    I guess Unraid is the way to go...

  • @enkrypt3d
    @enkrypt3d 1 year ago

    So what's the advantage of using several vdevs?? If you lose one you lose everything?! EEEK!

  • @ashuggtube
    @ashuggtube 2 years ago

    Boo to the naysayers 😊

  • @ff34jmr
    @ff34jmr 2 years ago

    This is why Synology still wins... easy to expand volumes.

    • @bangjago283
      @bangjago283 1 year ago

      Yes. We use Synology for 32TB. But do you have recommendations for 1PB of storage?

    • @TheBlur81
      @TheBlur81 1 year ago

      All other things aside, would a Z2 pool of 2 vdevs (4 drives per vdev) have the same sequential read/write as a single 6-drive vdev? I know the IOPS will double, but strictly R/W speeds...

  • @bluegizmo1983
    @bluegizmo1983 1 year ago

    How to expand ZFS: Switch to UnRAID and quit using ZFS if you want easy expansion 😂

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      But then you lose all the performance and integrity features of ZFS.

  • @LesNewell
    @LesNewell 2 years ago

    ZFS doesn't make it very clear but basically a pool is a bunch of vdevs in raid0.

    • @piotrcalus
      @piotrcalus 2 years ago

      Not exactly. In ZFS writes are balanced to fill all free space (all vdevs) at the same time. It is not RAID0.

  • @namerandom2000
    @namerandom2000 1 year ago

    This is so confusing....there must be a simpler way to explain this.

  • @icmann4296
    @icmann4296 8 months ago

    Please remake this video. Starting point, viewer knows raid and mdadm, and knows nothing about zfs, and believes that zfs is useless if it can't do the MOST BASIC multi-disk array function of easily expanding storage. I shouldn't have to watch 75 other videos to understand zfs well enough to get one unbelievably, hilariously basic question answered.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  8 months ago

      ZFS is complex, and if you are looking for a raid system that can be easily expanded then ZFS is not for you.

  • @dariokinoshita8964
    @dariokinoshita8964 8 months ago

    This is very bad!!! Windows Storage Spaces allows adding 1, 2, 3, or any number of disks with the same redundancy.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  8 months ago

      Windows Storage Spaces is not nearly as robust as ZFS and is a very poorly performing product that I never recommend anyone use.