Getting the Most Performance out of TrueNAS and ZFS

  • Published Sep 26, 2024

Comments • 207

  • @LAWRENCESYSTEMS
    @LAWRENCESYSTEMS 7 months ago +200

    Great video. Of note, TrueNAS Scale will have automatic ARC cache sizing to more than 50% later this year! :)

    • @TechnoTim
      @TechnoTim  7 months ago +28

      Thanks Tom! I saw that's coming in 24.04 after creating this! Thank you for all of your wonderful TrueNAS content!!

    • @justinknash
      @justinknash 7 months ago

      Is it going to be a configuration setting you can define or just larger than 50%?

    • @Tarkhein
      @Tarkhein 7 months ago +5

      @@justinknash The documentation says it will match CORE, which uses up to 90% of available RAM.

    • @brandonchappell1535
      @brandonchappell1535 7 months ago +1

      Well, it's meant to be in the Dragonfly (or is it Dragonfish?) beta, which I think is out now. I've not tested it, just what I heard; I think I'll wait till it's in stable myself. Tom's vid works for me for now, but I'm definitely looking forward to it.

    • @metalunleashed
      @metalunleashed 7 months ago

      I learnt everything about truenas from you. Thanks a ton

  • @HardwareHaven
    @HardwareHaven 7 months ago +54

    This was fantastic and will be a great resource to reference down the road. Great job Tim!

    • @TechnoTim
      @TechnoTim  7 months ago

      Thanks Colten!

  • @gcs8
    @gcs8 7 months ago +52

    Overall, not a bad ZFS video, but I took some hasty notes while watching. Here they are for you; happy to expand on or clarify anything I did a poor job on while not paying 100% attention.
    You say faster resilver times on mirrors, but not if you are using mechanical media, where the act of resilvering is very hard on the disks; if they were deployed at the same time they are more likely to fail together during a heavy operation like that. That is why you typically only go RAIDZ2 or RAIDZ3 on mechanical.
    Also, as far as mirroring goes, you get 1 disk of write speed per vdev, but 2x the read, or more if you have a 3+ wide mirror.
    I would only use encryption if you have a proc that supports AES-NI.
    Compression: LZ4 is safe to leave on and use everywhere; I prefer ZSTD-3 in most cases.
    ARC: The 1G of RAM per 1T of disk/data is more to do with metadata management, the goal being to keep all your block lookups in RAM. You would want to stack extra if you have a heavy workload. You also need to know that block size affects this requirement: 1M blocks need less than 16K blocks.
    iX is working on fixing ARC on Linux, give it a bit.
    L2ARC: Yes, your disk layout needs to be faster than the pool or it is the new bottleneck. You should also be careful about L2ARC; it requires RAM to function as well, since the block map in ARC has to reference the L2ARC. IIRC it's something in the neighborhood of 1:5 RAM:L2ARC.
    Write speed: You are mixing your units a bit funky, I think. 150MB/s is ~143MiB/s, and 1G eth is ~125MB/s (119.2MiB/s), but real-world you are looking closer to 1G being ~960-980Mbps, or around ~120-122.5MB/s. 10Gbps would be ~1.25GB/s (~1.164GiB/s), but real-world maybe closer to ~8.6Gbps at 1500 MTU and maybe ~9.6Gbps at 9K MTU (see the conversion sketch below).
    SLOG: There used to be devices like the ZeusRAM drive with a SAS interface, but nowadays we have things like 3D XPoint or PMEM. The goal here is the lowest commit latency, hence the preference for NVRAM/3D XPoint. If you are using cheap or low-quality SSDs that do not have PLP and do not report a write as good until it's in NAND, not just in the DRAM on the SSD, you have a huge latency problem; and if you get a crappy consumer SSD that says it has PLP, or just reports the data as committed even if it's only in DRAM and not protected, you can have data loss or corruption. You want something that can ack writes as fast as it can to really be of use here.
    Network > disk: You also need enough RAM to hold ~5 sec (the default transaction group timer, IIRC) of inbound data in RAM before it goes to SLOG/pool.
    You can also add VLAN interfaces on the ports and restrict both client IPs and services to each as well. This also means you don't have to make static routes to avoid going out of the default gateway if you do something like have your UI/mgmt on 1G and all data services on a 10+Gbps link.
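
    A quick sanity check of the unit conversions in the notes above; a minimal Python sketch with illustrative line rates only (real-world throughput varies with protocol overhead and MTU, as the comment notes):

    ```python
    # Sanity-check the bandwidth unit conversions mentioned above.
    # Illustrative figures only; real-world throughput varies with
    # protocol overhead, MTU, and hardware.

    def mbps_to_mbytes(mbps: float) -> float:
        """Megabits/s -> megabytes/s (decimal units)."""
        return mbps / 8

    def mb_to_mib(mb: float) -> float:
        """Decimal megabytes -> binary mebibytes."""
        return mb * 1_000_000 / (1024 * 1024)

    for link_mbps in (1000, 10000):        # 1GbE and 10GbE line rates
        mb = mbps_to_mbytes(link_mbps)
        print(f"{link_mbps} Mbps = {mb:.1f} MB/s = {mb_to_mib(mb):.1f} MiB/s")

    # Output:
    # 1000 Mbps = 125.0 MB/s = 119.2 MiB/s
    # 10000 Mbps = 1250.0 MB/s = 1192.1 MiB/s
    ```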

    • @whatwhat-777
      @whatwhat-777 7 months ago +3

      Boy, your comment NEEDS to get pinned

  • @NickyNiclas
    @NickyNiclas 7 months ago +22

    The best thing I did for performance personally was moving the ZFS metadata off the rust to a special metadata device (SMD) on mirrored Optane NVMe drives. Having the metadata on SSDs makes browsing files, moving files, batch renaming files, etc., so much faster. It's incredible. PLEASE remember that if you lose the metadata the whole pool is dead, so definitely use reliable SSDs in at least a 2x mirror if you want to use an SMD in production.

    • @davidkamaunu7887
      @davidkamaunu7887 6 months ago

      Sounds cool! How do you set an SMD to store ZFS metadata?

    • @spiralout112
      @spiralout112 5 months ago +1

      Agreed, it really shines when you are chugging through lots of tiny files. No metadata lookups on the rust means you get at least a ~2x speedup with small files, and I set any files under 32kb to be stored on the Optane SSDs. Honestly, I was not expecting such an improvement; kinda odd this video doesn't really mention it.

    • @rokiesato
      @rokiesato 3 months ago

      what size were your optane drives? i have 2x 16gb ones

    • @NickyNiclas
      @NickyNiclas 3 months ago

      @@rokiesato Those are a bit small depending on the size of your pool. The rule of thumb is 0.3% of the pool size for a typical use case, but it varies depending on the type and size of files you store (see the sizing sketch below).
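
      As a rough illustration of that 0.3% rule of thumb; the ratio is an assumption that varies with record size and file mix, so treat the output as a starting point:

      ```python
      # Rough special-vdev sizing from the ~0.3% rule of thumb above.
      # The ratio is workload-dependent (record size, number of files),
      # so this is a starting point, not a guarantee.

      def special_vdev_size_gb(pool_tb: float, metadata_ratio: float = 0.003) -> float:
          """Estimate metadata-vdev capacity in GB for a pool of pool_tb terabytes."""
          return pool_tb * 1000 * metadata_ratio  # TB -> GB, then apply ratio

      for pool in (10, 50, 100):  # example pool sizes in TB
          print(f"{pool} TB pool -> ~{special_vdev_size_gb(pool):.0f} GB of metadata")

      # 10 TB pool -> ~30 GB; a pair of 16 GB Optane drives would only
      # cover a ~5 TB pool by this rule.
      ```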

    • @rokiesato
      @rokiesato 2 months ago

      With a special metadata device, is having L2ARC still necessary?

  • @jonathanzj620
    @jonathanzj620 7 months ago +5

    I've read/watched a ton explaining zfs architecture concepts in my journey to setup a Scale build, but this (especially with your illustrations) has been by far the most clear/helpful. I will say, for the average homelabber there seems to be a lot of "general consensuses" about slog, l2arc, and some of the other stuff not being impactful or helpful enough to worry about doing. Would love some commentary on when/where each is useful or not. Thanks!

  • @lowellalleman
    @lowellalleman 3 months ago +3

    It's probably helpful to recommend backing up the TrueNAS config itself. You don't want to lose things like encryption keys if just the OS disk is lost.

  • @Ddraig62SPD
    @Ddraig62SPD 6 months ago +5

    Hey Tim, that was hands-down the best overview I've watched on TrueNAS to date. Ticks all the boxes for a self-confessed geek taking his first steps into building his first home server/NAS on a £80 HP EliteDesk 800 G3 SFF i7-Gen7 256GB SSD 16GB. Just setting up TrueNAS Scale as a VM on Proxmox with a 128GB boot SSD, 2x4TB Seagate IronWolf (mirrored), plus a 1TB Gen 4 NVMe L2ARC. Your presentation style is truly engaging, simple to understand for tech-savvy newcomers, and clearly illustrates the core concepts while framing them with real-world examples. Thx again ..sub'd👋

    • @TechnoTim
      @TechnoTim  6 months ago

      Thank you so much!

  • @dolemite888
    @dolemite888 7 months ago +5

    The combination of @TechnoTim and @LawrenceSystems in providing all of us with such an incredibly detailed yet easily digestible explanation of these systems is invaluable. Hats off to you both!

  • @BrianSez
    @BrianSez 7 months ago +5

    Excellent video. I use TrueNAS at its most basic functionality because I'm not savvy, but this guide really helped explain a lot of the questions that I've had.

  • @MAP3D1234
    @MAP3D1234 7 months ago +2

    As someone who has been using TrueNAS for a while now, at least a good few years, this was STILL helpful for understanding things better. Thank you so much for the highly detailed explainers here. Even having watched other videos explaining things, you helped me better understand some things I thought I knew well enough and did not. Thank you.

  • @cloudmover
    @cloudmover 26 days ago

    Thank you for the video. I wanted a quick and easy introduction to TrueNAS and this fit the bill!
    I am building my own NAS to back up my Synology and house my animations for work.
    I now understand what to look for in a motherboard and what to expect.
    Subscribed.

  • @nikunjkaria
    @nikunjkaria months ago

    Great video. I have been exploring TrueNAS, but there were a lot of insights from this video.

  • @aaronclark145
    @aaronclark145 7 months ago +2

    One of the best, most useful videos I have watched in a long while. Thank you! TrueNAS Scale is something I am really interested in, but there have been very few videos recently. It is changing fast and I can understand how hard it is to keep up. Would love some future app setup, and setting up apps with a commercial VPN like PIA for a reverse proxy.

  • @nzehavi
    @nzehavi 4 months ago +2

    I went through ten other videos and didn't understand anything till I got to this one. Thank you for explaining everything so simply and so clearly!!! How do you handle security?

  • @JB-xj9jj
    @JB-xj9jj 7 months ago +2

    Perfect timing Tim. I am in the process of transferring 25TB of data off my TrueNAS Core server and installing TrueNAS Scale. This video answered many questions I had. Thank you for the quality content.

    • @Makaveli6103
      @Makaveli6103 7 months ago

      Can't you just import your pool in Scale? I am switching this weekend also.

    • @JB-xj9jj
      @JB-xj9jj 7 months ago

      @@Makaveli6103 Yes, you can. I just want to do a clean install.

  • @computersales
    @computersales 4 months ago +1

    I'll have to watch through this again and take notes. Hopefully it will apply to Scale as well, since I'm switching to Scale here soon.

  • @sygad1
    @sygad1 6 months ago

    Really enjoyed the presentation style AND the content: the right speed of delivery and technical detail, with the all-important explanation of each part of the UI and whether it's needed. I'm going to explore your other videos for LAGG; I'm having terrible trouble setting it up on Scale 23.10 and a UniFi XG16.

  • @stephenreaves3205
    @stephenreaves3205 7 months ago +9

    I love this video. I would have mentioned special vdevs though. They seem to be all the rage.
    EDIT: I also think it should be noted that the loss of a SLOG will only result in the loss of IN-FLIGHT data. So if you run a single drive (or a stripe) and it dies, you'll only lose about 5 seconds of data. But at 10+ Gbps, 5 seconds might be a lot. Also, SLOGs only need to be about 2/8 of your ARC size max (a tx group is 1/8 of ARC by default, and the SLOG can store two transaction groups before syncing, by default); see the worked example below.
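
    A worked version of that sizing claim; a sketch under the comment's stated defaults (txg = 1/8 of ARC, two in-flight txgs), which are the commenter's assumptions rather than universal tunables:

    ```python
    # Worked example of the SLOG-sizing rule in the comment above:
    # max SLOG usage ~= 2 transaction groups, each ~1/8 of ARC size.
    # These ratios are the commenter's stated defaults, not fixed tunables.

    def slog_size_gb(arc_gb: float, txg_fraction: float = 1 / 8,
                     txgs_in_flight: int = 2) -> float:
        return arc_gb * txg_fraction * txgs_in_flight

    arc_gb = 64  # hypothetical ARC size
    print(f"ARC {arc_gb} GB -> SLOG needs ~{slog_size_gb(arc_gb):.0f} GB")
    # ARC 64 GB -> SLOG needs ~16 GB
    ```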

    • @TechnoTim
      @TechnoTim  7 months ago +3

      Thank you for mentioning special vdevs. I thought about it, but the safest place for this is on my main pool, since if you lose this you lose your pool. I would have to create a pretty redundant special vdev for this and I don't have the space! If I could have a do-over, I would have at least mentioned it. Thank you!

    • @stephenreaves3205
      @stephenreaves3205 7 months ago

      @@TechnoTim it's all good. You could spend several hours going over all the ZFS features. This was a fantastic overview

    • @darkpixel1128
      @darkpixel1128 7 months ago

      @@TechnoTim The recommendation I've seen is for a minimum of a three-wide mirror (three drives all storing the same data), so it definitely takes up space. I'm also not sure how much it really speeds up reads. I imagine it's not worth it for most use cases.

    • @ajhieb
      @ajhieb 7 months ago

      @@darkpixel1128 The performance boost will vary depending on what type of data you're storing and how much of it you have. When you get into tens or hundreds of terabytes of data, directory operations can start to slow significantly, which can get very annoying. I'm going to be consolidating some servers soon and I'll be adding drives for storing the metadata, but I'll be going against "standard practices" as my tolerance for data loss is high, and the possibility has already been mitigated. I've got two spinning-rust pools that are ~100TB each. I'll be using a single Intel 2TB NVMe drive on each. (Mainly because I'm out of expansion slots and PCIe lanes... unless Wendell over at Level1Techs reveals the specifics of his NVMe carrier boards.) I'm comfortable with that for two reasons. 1) Those Intel drives are pretty darned reliable, orders of magnitude above the other drives in the array, to the point where it's still more probable that I lose 3 drives in a RAIDZ2 vdev before the Intel drive dies, and 2) I have a near-live onsite backup online, so all the data is still accessible, and restoring the original pool and accessing the data in the meantime is pretty trivial even if the Intel drive does die.

    • @Prophes0r
      @Prophes0r 7 months ago

      @@TechnoTim As I mentioned in my (wall of text) post, non-enterprise admins should absolutely NOT be using L2ARC. It is a trap.
      The only way to get real performance gains is to understand a little more about what each pool will be used for and the types of files/access on that pool...
      ...Then set up Metadata and Special Metadata VDEVs using that understanding.
      NOTE: You can add Metadata and Special Metadata drives to your pool in-place. It will start using them immediately. It just won't move existing Metadata/Files to them. You will need to fiddle with stuff to get the stuff moved, similar to how we will need to jump through some hoops when RaidZ expansion finally rolls out.

  • @CraftComputing
    @CraftComputing 7 months ago +4

    SCREW SAFETY! MORE SPEED 😁
    Great job on this one Tim!

    • @TechnoTim
      @TechnoTim  7 months ago

      🚀 Thanks Jeff!

  • @brunohao
    @brunohao 2 months ago

    Congrats from Brazil. Great video! Very clear! I understood everything in just 10 minutes! Thank you!

  • @saskog8455
    @saskog8455 7 months ago +6

    ZFS special device (SSD mirror) can also help you increase speed and improve on IO … definitely a use case for those who have tons of small files on spinners.

    • @TechnoTim
      @TechnoTim  7 months ago +3

      Thank you! I mentioned this in another comment but I was a little scared to move this off the pool since I don't have a lot of additional space for all the additional drives I need to make this redundant enough not to lose my pool. If I could have a do over I would have at least mentioned it and why I chose not to do this. Thank you!

  • @whatwhat-777
    @whatwhat-777 7 months ago

    Great video Tim. I am new to ZFS and didn't know much, but you cleared up so much for me, like L2ARC, SLOG, ZIL. Thanks, and always sending good vibes.✌🏼

  • @BenState
    @BenState 2 months ago

    Excellent communicator, well done

  • @DPCTechnology
    @DPCTechnology 7 months ago

    This is SUUUUPER helpful! Perfect timing for me on my HL15 tuning, appreciate it!

  • @nadtz
    @nadtz 7 months ago +5

    Pretty solid overview. Just one nit to pick: the 1GB per TB thing is kind of a myth based on memory wanted for dedup, and even then it wasn't meant as a rule. Depending on hardware, network speed, number of users, and exactly how TrueNAS is being used, a lot of people who use it purely for storage can get away with 16GB of memory, and people who are running jails/VMs with 32. That said, 'more memory, more better', so it doesn't hurt to give ZFS more memory.
    I guess 2 nits: consumer NVMe drives don't make the best ZFS L2ARC/SLOG/special devices. Get a couple of Optane drives to compare with and it's a night-and-day difference (it also helps to have drives with capacitors in case of worst-case power loss, if we're talking best practice). P1600Xs are relatively cheap and plentiful right now, and either size should be more than enough for a SLOG up to a 40Gb network. That said, you also want to be very careful with special devices: lose that vdev and you lose the pool. And after having played around with ZFS for years now, honestly, most home/homelab users probably won't see much improvement regardless, but as someone who likes to tinker I can understand wanting to try something out if for no other reason than to learn.

    • @TechnoTim
      @TechnoTim  7 months ago +1

      Thank you! Not nitpicking at all! It's a deeply technical topic and I felt I was in over my head in some parts. I appreciate comments because they help viewers too! As for SLOG, mine is on Optane! (But my L2ARC is not.) I didn't mention it though; I should have!

    • @Prophes0r
      @Prophes0r 7 months ago +1

      @@TechnoTim Yeah, the RAM requirements being a myth is really something we need to keep shouting out loud to fight against the ingrained false information. That's the problem with "common knowledge" that is completely false.
      More memory is great, but ZFS doesn't actually NEED any memory. Like, at all.
      This is a really important thing to realize when we start using ZFS for system drives.
      You probably don't want your Proxmox host using 64GB of your 128GB of RAM for ARC just because you installed it on a ZFS mirror.
      Your VMs will probably be trying to cache stuff anyway. There really isn't much need for the hypervisor to ALSO be using RAM to cache the same stuff.

  • @xanderman55
    @xanderman55 7 months ago +2

    Perfect timing! I am about to rebuild my TrueNAS server in a new JBOD, and this video helps clear up many questions I had. Thanks!

  • @CoreyTyhurst
    @CoreyTyhurst 7 months ago +3

    I don't see it mentioned in the comments, so I thought I'd mention it: the backup capability in TrueNAS is VERY easy to tie to popular cloud providers for offsite backup. I use Backblaze for the data I consider critical, so I have a backup if the house burns down. It's quite affordable if you think about it as 'insurance'.

  • @stey2590
    @stey2590 7 months ago

    Thanks Tim, very informative and just at the right time, as I'm in the process of switching over to TrueNAS Scale!

  • @RockTheCage55
    @RockTheCage55 7 months ago

    Probably the best video I've seen discussing the basics of ZFS (TrueNAS).

  • @justinknash
    @justinknash 7 months ago

    Awesome video Tim. I learned a lot about ZFS and TrueNAS advanced features.

  • @luisliz
    @luisliz 7 months ago +1

    Wow, this video is so good. I already knew most of these concepts but they were still confusing. Great stuff!! Beautiful balance between deeply technical and easy to understand.

  • @tupui
    @tupui 6 months ago

    Wooo this is gold! Thanks for that very detailed and clear video.

  • @MarcosCastro-v5n
    @MarcosCastro-v5n 2 months ago

    Man, best TrueNAS video hands down. Thank you!

  • @bertrandgillet9819
    @bertrandgillet9819 5 months ago

    Thanks a lot for this video. I am planning to build a TrueNAS server to replace my 10+ yo Synology. I will use a refurb HP DL380 with 768GB of RAM, SSDs for boot, L2ARC and SLOG, and 12*12TB 5400rpm WD NAS drives, and 2*10GbE + 4*1GbE eth ports. Looking forward to see it in action :)

  • @parl-88
    @parl-88 7 months ago

    Outstanding video! Thanks a LOT! Learned so many new things from this video. Really, thanks for putting in the time and effort.

  • @Mahesh-j8y
    @Mahesh-j8y 6 months ago

    Dear Tim, Thanks for making this video, outstanding!!

  • @lechegaray
    @lechegaray 7 months ago

    Great synopsis for any NAS setup; it really cut through various actionable topics. Good stuff.

  • @Suddenlystan728
    @Suddenlystan728 6 months ago

    I super appreciate this video. Thank you so much for making it!

  • @Locationary
    @Locationary 5 months ago

    This guide was great, nice work

  • @ewenchan1239
    @ewenchan1239 7 months ago +1

    re: backup
    Depending on how much data you are trying to back up and how fast you want it backed up, LTO tape can be an option for some people.
    The initial cost for the faster tape drives can be quite steep (I think my LTO-8 tape drive was somewhere just shy of $5000 USD when I bought it 4 years ago), but I think a 12 TB raw / 30 TB compressed LTO-8 tape is now around $75 (I think) per tape, which is cheaper than a 12 TB HDD (new), and 30 TB hard drives only exist if you know and ask the right people.
    Thus, as a backup solution, it works well, if you can stomach the initial cost.
    The more data that you want or need to back up, the more you cost-average down, and it becomes more cost-efficient to do tape than really any other storage medium.
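
    The cost-averaging effect is easy to see with rough numbers; the figures below are the ones quoted in the comment, so treat them as ballpark:

    ```python
    # Ballpark tape cost per raw TB, using the figures quoted above
    # ($5000 drive, $75 per 12 TB LTO-8 tape). Prices are illustrative.

    DRIVE_COST = 5000      # one-time LTO-8 drive cost, USD
    TAPE_COST = 75         # per-tape cost, USD
    TAPE_TB = 12           # raw capacity per tape

    def tape_cost_per_tb(n_tapes: int) -> float:
        return (DRIVE_COST + TAPE_COST * n_tapes) / (TAPE_TB * n_tapes)

    for n in (10, 50, 200):
        print(f"{n} tapes ({n * TAPE_TB} TB): ${tape_cost_per_tb(n):.2f}/TB")

    # 10 tapes (120 TB): $47.92/TB
    # 50 tapes (600 TB): $14.58/TB
    # 200 tapes (2400 TB): $8.33/TB  -> approaches $75/12 = $6.25/TB
    ```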

  • @DigitalMirrorComputing
    @DigitalMirrorComputing 6 months ago

    Brilliant video mate! Loved it!

  • @apolloeosphoros4345
    @apolloeosphoros4345 7 months ago

    what a fantastic video! I really needed this about a year ago rofl

  • @danilfun
    @danilfun 7 months ago +2

    3:36 For me, a reason to turn encryption off is the write NOP optimization.
    ZFS can just skip writing a file if the current content is the same as what you are writing. Very handy in some cases.
    ZFS does this by comparing block hashes, but it isn't possible with encryption (see the sketch below).
    Another annoying thing is that TrueNAS doesn't allow you to have an unencrypted dataset inside an encrypted one. So the only way to have any unencrypted datasets is to disable encryption at the root level.
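
    The NOP-write idea in user-space terms; a minimal sketch of the general technique (compare a content hash before rewriting). This illustrates the concept only, not ZFS's actual block-level code path:

    ```python
    # Concept sketch of a "NOP write": skip the write when the new content
    # hashes the same as what's already stored. ZFS does this per block
    # with checksums, and it can't compare blocks this way once they are
    # encrypted, which is the point made in the comment above.
    import hashlib
    from pathlib import Path

    def write_if_changed(path: Path, data: bytes) -> bool:
        """Write data only if it differs from the current file content."""
        if path.exists():
            current = hashlib.sha256(path.read_bytes()).digest()
            new = hashlib.sha256(data).digest()
            if current == new:
                return False          # NOP: content identical, skip the write
        path.write_bytes(data)
        return True

    p = Path("example.txt")
    print(write_if_changed(p, b"hello"))  # True  (first write)
    print(write_if_changed(p, b"hello"))  # False (skipped)
    ```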

  • @alphenit
    @alphenit 7 months ago

    Low-budget/power-saving option:
    After running TrueNAS systems for many years and paying for the disks and the power consumption, I decided to switch things up.
    I now have a low-power dedicated TrueNAS system that runs 24x7 with a single SSD that serves my important files and a single large drive to serve my media.
    For backup I have an older system running Unraid that powers up every Sunday, and using rsync I sync everything from my TrueNAS system to Unraid. (I also rsync a TrueNAS box I have with my DVD movie collection that is only powered on when needed.)
    Using Unraid I can mix and match drives to create one big pool, and as my backup size increases I don't have to keep matching 100% of the drives that I have running on my TrueNAS boxes. I used ZFS snapshotting for years, but the older snapshots take a toll on your usable disk space.

  • @truckerallikatuk
    @truckerallikatuk 7 months ago +2

    Also, when setting up your pools, it's worth considering your drives. I use a LOT of used enterprise drives, and would never, ever choose less than Z2, because they're old drives and they will die sometime. Edit: ZFS pool expansion is now available in TrueNAS Scale, at least in Cobia. You can now add individual drives.

    • @ajhieb
      @ajhieb 7 months ago +1

      Yeah, most of my drives are used enterprise drives. I keep a stack of cold spares on hand too; you don't want to wait for a drive failure to order your replacement.
      My failures have been pretty rare. Of the 48 6TB drives I have running, I've had about 3 drive failures over the last 5 years.

    • @Prophes0r
      @Prophes0r 7 months ago

      @@ajhieb Luck can vary though.
      I've had no failures with the 12x 10TB drives I've been running for the last 18ish months.
      I've had 16 failures on the 12x 4TB drives over the 4 years I've been using them.
      Yes...16 drive failures on a 12 drive array. The 5 year warranty is coming up soon and that pool is going away...

    • @ajhieb
      @ajhieb 7 months ago

      @@Prophes0r That's not really luck (random occurrence) so much as it is variation in drive manufacturing. Quality/reliability can and does vary between manufacturers, models and even batches. All the more reason to keep cold spares around (or hot spares if you have the bays available)

  • @peterruppert7856
    @peterruppert7856 3 months ago

    Bro, your video is fkin amazing. You explain pretty complicated stuff in a very simple way. You a teacher or something? lol. Seriously, thank you. I've been a FreeNAS, then TrueNAS Core, then TrueNAS Scale users/fan for a long time and didn't know MOST OF THIS STUFF lol. Great video, thank you so much. :)

    • @TechnoTim
      @TechnoTim  3 months ago

      Thank you! Not a teacher, just a software engineer!

  • @KS-wr8ub
    @KS-wr8ub 7 months ago

    Great video, thank you for explaining TrueNAS in a deeper way. I've only used it a bit, as I've been on unRAID for more than a decade and haven't been unhappy enough to make a move. Now I'd like to build a second server with only SSDs, and this will probably be TrueNAS.
    One point on backups, from a backup nerd. 😜 It's worth noting that the backup server doesn't really need any drive redundancy at all. Sure, it's convenient to have in case of a drive failure, but since the data SHOULD already be on 2 other instances (3-2-1 rule) it's not necessary. Maybe do RAIDZ1 just for good measure. 😅
    That should at least soften people's thoughts on backups, as you really don't need a complete replica of your source system's hardware and drive setup. That means you "only" need to buy 3 new drives to expand your pool with another 2-drive mirrored vdev, and the third drive goes into the backup system.
    AND, thank you for mentioning that snapshots aren't backups! 👍

  • @ac93uk
    @ac93uk 7 months ago

    Hi Tim, great video as always.
    It would be really interesting to see a video on how other external appliances/VMs/containers on a network use this sort of storage. Currently I have a RAID array which I share through a VM in Proxmox and create files in containers within a VM; however, I often encounter file-permission issues or missing/corrupt data. I find it quite difficult to find resources outlining this full lifecycle, explained in a simple manner. I have used SMB previously, but maybe NFS is better; I am not sure. I find it quite easy to find resources on setting up TrueNAS, but not so much when it comes to other areas using it.
    Thanks for all your work, I have learned so much from your channel.

  • @mspencerl87
    @mspencerl87 7 months ago +1

    I believe the recursive option for snapshots is for any datasets under the dataset you're taking a snapshot of. I don't believe it is for folders inside of a dataset.

  • @IEnjoyCreatingVideos
    @IEnjoyCreatingVideos 7 months ago

    Great video Tim! Thanks for sharing it with us!💖👍😎JP

  • @TheMongolPrime
    @TheMongolPrime 7 months ago +8

    Great job. Next maybe a video on how to utilize said UPS with something like NUT. Can't wait to see the colocation part of your adventure.

    • @chilexican
      @chilexican 7 months ago +3

      Agreed, explaining NUT would have been a nice expansion when it comes to UPSes. Being able to have TrueNAS automatically power itself down after passing a certain battery percentage, or after a certain time on battery, would have been good to talk about.

  • @scottyz
    @scottyz 7 months ago

    Nice video Tim

  • @ajhieb
    @ajhieb 7 months ago +5

    2:55 _"But the more mirrored VDEVs you have, the less likely this is to happen"_ Uhhhh no. When dealing with a striped mirror (which is how ZFS handles multiple mirrored vdevs in a single zpool), adding more mirrored vdevs adds more failure points without increasing your redundancy, so the likelihood of total data loss goes _up._ If you increase your mirror "depth" and go with 3-drive mirrored vdevs, you're adding failure points, but you're also adding redundancy, so in that case your likelihood of data loss goes _down._
    I think I get what you were trying to say: with a single mirrored vdev, if you lose two drives, then you've lost it all, but as you increase the number of vdevs and assume the loss of two drives, the likelihood of having the two failures occur on the same vdev goes _down._ (But again, this is offset by the greater likelihood of having multiple drive failures because of the added drives.) The sketch below puts numbers on both effects.
    In short, all other things being equal, the more vdevs you have, the greater your chance of data loss.
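
    To make both halves of that argument concrete, a small sketch with a made-up per-drive failure probability (the number is illustrative only, and failures are assumed independent):

    ```python
    # Numeric version of the argument above, with an illustrative per-drive
    # failure probability p. Two effects pull in opposite directions:
    #  1) given exactly two failures, they're less likely to share a vdev
    #     as vdev count n grows;
    #  2) overall pool-loss probability still rises with n.
    from math import comb

    p = 0.03  # hypothetical chance a given drive fails in some window

    for n in (1, 2, 4, 8):               # number of 2-way mirror vdevs
        drives = 2 * n
        # (1) P(two specific failures land in the same mirror)
        same_vdev = n / comb(drives, 2)  # = 1 / (2n - 1)
        # (2) P(pool loss): at least one mirror loses both drives
        pool_loss = 1 - (1 - p**2) ** n
        print(f"{n} vdevs: same-vdev given 2 failures = {same_vdev:.3f}, "
              f"pool loss = {pool_loss:.5f}")
    ```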

    • @TechnoTim
      @TechnoTim  7 months ago +3

      Sorry, I thought that's exactly what I said, "the more mirrored VDEVs you have, the less likely that (2 drives in a VDEV dying) is to happen." Along with the illustration I drew I thought that was clear, I guess not. Thank you!

    • @ajhieb
      @ajhieb 7 months ago +2

      @@TechnoTim Yeah, sorry to be nitpicky, but I get that way about data loss. :) What you said was technically correct, but I think it conveyed the wrong message because you isolated a very specific scenario. In the very specific scenario you described (losing exactly two drives), the probability of losing two drives in the same vdev does indeed go down. But the likelihood of having 2 drives fail also goes _up_ in that scenario, in fact more so than the corresponding drop in having them be in the same vdev, so the overall likelihood of data loss goes _up_ with the addition of more vdevs. The way you described it was a little ambiguous and could have been interpreted the opposite way.
      Again, not trying to be overly critical, I just like to be very clear on matters of data integrity. As always, I appreciate your content; thanks for all of the work you put into your videos.

    • @TechnoTim
      @TechnoTim  7 months ago +2

      @@ajhieb Hey! No offense taken! I want be sure that the information is right, even if that means I am wrong, so I really appreciate the feedback! I don't think you're being picky at all, it means you are detail oriented, something that's appreciated from the tech community!

    • @Prophes0r
      @Prophes0r 7 months ago +1

      @@TechnoTim Tim, your math makes incorrect assumptions.
      Your math only works when considering larger/smaller arrays with exactly the same number of failures.
      Sure, it is less likely that 2 drives failing will be from the same mirror when you have more mirrors.
      But any given drive has the same likelihood of failure as the others.
      More drives = more failures = greater likelihood that both drives in a mirror will die.

  • @ash-cn2oh
    @ash-cn2oh 5 days ago

    For backup to another TrueNAS system you will want to use ZFS replication, not rsync or copy. If you have a SLOG you still have a ZIL, but it is stored on the SLOG special device(s) instead of the main pool devices.

  • @bertnijhof5413
    @bertnijhof5413 6 months ago

    This video gives you a good, but somewhat luxury, NAS example of using ZFS. Since 2019 I have run ZFS on my Ubuntu desktop using a Ryzen 3 2200G and 16GB DDR4; I limit the memory cache (L1ARC) to 4GB. I have a 512GB NVMe SSD (3400/2300MB/s) and a 2TB HDD (2 partitions) with a 128GB SATA SSD cache with 4 partitions as L2ARC (90GB + 30GB) and ZIL (5GB + 3GB). I use a lot of VirtualBox VMs; loading Xubuntu takes

  • @llortaton2834
    @llortaton2834 7 months ago

    More of this type of intro! Thank you.

  • @wva5089
    @wva5089 7 months ago +1

    Wouldn't you want to stagger the age of your mirrored drives, so that they don't fail at the same time when they do fail? Even just different batches.

  • @xtonousou
    @xtonousou 7 months ago

    Another optimization on the networking side is to increase the MTU from 1500 to 9000, i.e. enable jumbo frames (must be enabled at the switch level as well); the sketch below shows the per-frame efficiency gain.
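
    For a rough sense of why jumbo frames help, a simplified per-frame payload-efficiency calculation (assumes plain Ethernet + IPv4 + TCP headers with no TCP options; real stacks also add preamble and inter-frame gap):

    ```python
    # Simplified per-frame payload efficiency at MTU 1500 vs 9000.
    # Assumes plain Ethernet (18 B header+FCS) + IPv4 (20 B) + TCP (20 B);
    # real stacks add preamble, inter-frame gap, and TCP options.

    ETH_OVERHEAD = 18   # Ethernet header + FCS
    IP_TCP = 40         # IPv4 + TCP headers

    for mtu in (1500, 9000):
        payload = mtu - IP_TCP
        on_wire = mtu + ETH_OVERHEAD
        print(f"MTU {mtu}: {payload}/{on_wire} = {payload / on_wire:.1%} efficient")

    # MTU 1500: 1460/1518 = 96.2% efficient
    # MTU 9000: 8960/9018 = 99.4% efficient
    ```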

  • @blahx9
    @blahx9 6 months ago

    For UniFi devices with L3 routing: if you have the L3 switch do the routing for the VLANs in question, not the UDM, you no longer take a speed penalty. There are probably downsides I am not aware of.

  • @LarsBerntropBos
    @LarsBerntropBos 7 months ago +1

    ZFS dedup is great when you have a couple of VMs on the same OS. Those VM disks use a lot less space.

    • @Mr.Leeroy
      @Mr.Leeroy 7 months ago +2

      Storage space is a lot cheaper than RAM and not that limited in max amounts per socket.

  • @mjmeans7983
    @mjmeans7983 4 months ago

    At 7:30 in the video you indicated that you would even have a link in the documentation below the video to the Tom Lawrence video where you learned this. I don't see that link.

  • @Marcasecas
    @Marcasecas 6 months ago

    Nothing better to train your brain than watching videos like this one. 😆

  • @frederichardy8844
    @frederichardy8844 7 months ago +1

    My understanding of the SLOG is not the same. Am I wrong?
    The SLOG/ZIL is only read when ZFS starts. That's when ZFS checks that all the sync writes were written to the zpool; if some are missing, they can be read from the ZIL/SLOG and written to the zpool so that there's no data loss. It's a log, not a cache. When a client writes data to a zpool there's no read from the ZIL/SLOG; the data is in RAM, so why read from a slower disk?
    So mirroring a SLOG is good, of course, but the risk of data loss exists only if the SLOG drive fails and the server crashes SIMULTANEOUSLY. If the SLOG drive fails during normal usage of the server, ZFS just puts it offline and uses the ZIL instead, and there's no data loss, only a drop in performance.
    If you look at the drive usage of a SLOG you will see only writes, no reads.

  • @Froggie92
    @Froggie92 7 months ago +1

    OpenZFS 2.2 added the ability to add a single drive to RAIDZ1/2/3, etc.

  • @Felix-ve9hs
    @Felix-ve9hs 4 months ago

    Only the physical device (or vdev) that contains the ZIL is called a SLOG; the ZIL is always called the same, no matter whether it lives on the pool or on the SLOG device.

  • @blyatspinat
    @blyatspinat 6 months ago

    It might be easier to replace or add 2 new disks, but if you used RAIDZ1 with 6 drives you would have much more space, and therefore there might be no need to expand for far longer than with mirrors. It always depends a lot on what you want to do and what data you have; there is no rule of thumb in many cases :P

  • @Andy15792
    @Andy15792 20 days ago

    Hey, I have a quick question: do you recommend running a separate server for the NAS and another for all your home lab needs? Also, with two servers, would running no GPU in the NAS and adding a GPU to the other for all transcoding needs [media] work?

  • @rickyc5860
    @rickyc5860 3 months ago

    When you say data loss for the SLOG, are you saying the only data lost is what was being written to it at that moment, or is pool data lost as well?

  • @andred.2335
    @andred.2335 2 months ago

    The first snapshot on a dataset doesn't consume any space. It's just a marker for the system to store any changes starting from that moment.

  • @vimaximus1360
    @vimaximus1360 7 months ago

    Please make a follow-up on this, for the pitfalls, e.g. SLOG, mirrored vdevs, etc.

  • @cursedslayerCZ
    @cursedslayerCZ 5 months ago

    If I remember correctly, the SLOG is a write cache, but not as you described. The SLOG is not unlimited: by default, ZFS FORCES it to clear itself (write everything in the SLOG to the zpool) every 5 sec. In real life, it will speed up the first ~5 s of writing to the max of your network, but then you bump into the speed difference between the zvol (slower) and the SLOG (faster), with the SLOG still half full and new data from the network still incoming. If your network is 10Gb(bit)/s, the SLOG is an NVMe drive with R/W speeds over 2GB(byte)/s (20+Gb/s), and your zpool write speed is 500MB/s (ca. 5Gb/s), you still hit the write wall of the zpool after a few seconds (see the back-of-the-envelope model below).
    The SLOG is mainly for synchronous writes like iSCSI, DBs, etc. The SLOG quickly receives data, confirms the writes, fills for the default 5 s, then optimizes all the data in the SLOG for a write with minimal IOPS weight on the (usually mechanical) drives and writes it to the zpool.
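
    A back-of-the-envelope version of that "write wall" effect; the ingest rate, pool throughput, and buffer budget below are made-up illustrative numbers:

    ```python
    # Back-of-the-envelope model of the "write wall" described above:
    # a fast SLOG/RAM buffer absorbs a burst, but once the dirty-data
    # budget fills, ingest throttles to pool flush speed.
    # All numbers are illustrative assumptions.

    ingest = 1.25      # GB/s arriving over a 10GbE link
    pool = 0.50        # GB/s the pool can actually flush
    buffer_gb = 4.0    # hypothetical dirty-data / SLOG budget

    fill_rate = ingest - pool                # net buffer growth while bursting
    burst_seconds = buffer_gb / fill_rate    # time until writes throttle
    print(f"Burst absorbed for ~{burst_seconds:.1f} s, "
          f"then writes throttle to ~{pool} GB/s")
    # Burst absorbed for ~5.3 s, then writes throttle to ~0.5 GB/s
    ```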

  • @hunterw9451
    @hunterw9451 7 months ago

    Are you virtualizing your TrueNAS on Proxmox? I’m building a server soon and that seems like the best option for virtual machines and GPUs, and was wondering what your experience was. I saw you had a video from 2020 but wondered if there was anything more recent.

  • @RetiredRhetoricalWarhorse
    @RetiredRhetoricalWarhorse 7 months ago

    I'm having write-performance issues with a RAIDZ1 pool of three NVMe drives... which I find very confusing :D.
    I'm wondering whether moving two of those as a mirrored SLOG to the spindle pool and then putting the VM NFS datastores there would actually improve things...

  • @chestergregg8668
    @chestergregg8668 7 months ago

    You can enable encryption when you create a new dataset, though I'm not sure if TrueNAS will give you that option. You can also replicate an existing dataset to a parent dataset that is encrypted, keeping snapshot history, etc., while adding encryption. Definitely easier to do in advance.

    • @TechnoTim
      @TechnoTim  7 months ago

      Great point!

  • @cheebadigga4092
    @cheebadigga4092 7 months ago

    So in theory, the best of both worlds for ARC/ZIL sync vs. async is having a dedicated "VM" dataset with sync enabled, a dedicated "sensitive / has to always exist" dataset with sync enabled, and a dedicated "arbitrary data that I don't care if it always exists or not" dataset with sync disabled? I'm asking because I think that if you have sync disabled for VMs, VMs could misbehave (I guess). Otherwise I'd make the "VM" dataset async as well.

  • @KristianKirilov
    @KristianKirilov 7 months ago

    Guys, correct me if I'm wrong, but the ZIL or SLOG will increase your speed ONLY when you do sequential writes. In any other case, just use fast enough disks.

    • @frederichardy8844
      @frederichardy8844 7 months ago

      A SLOG allows you to NOT wait until the data is written to the zpool. A SLOG disk needs very little capacity (16GB recommended, or more with over-provisioning) and low latency (like an Optane SSD). Even for non-sequential writes, an Optane SSD will be faster than a mechanical drive... Of course, if you can afford high-capacity Optane SSDs in your zpool you don't need the SLOG; better, you should not use a SLOG because it will slow you down. But that's a lot of money...

  • @jsclayton
    @jsclayton 7 months ago +3

    SCALE 24.04 will no longer have the 50% memory restriction by default!

    • @TechnoTim
      @TechnoTim  7 months ago +1

      I just saw this and updated my documentation site!

  • @andibiront2316
    @andibiront2316 7 months ago

    I'm currently running a pool of 8x8TB HDDs with 256GB RAM and 2x1.6TB NVMe for L2ARC and SLOG. It's running at its limits, with 400 IOPS constantly, IIRC, and 57 VMs. The other day I had 3 VMs uncompressing and copying files, and the performance tanked; it was impossible to work, and other VMs were complaining. I'll be upgrading next week to 2x(8x7.6TB RAIDZ2) all-flash SAS3 enterprise SSDs. The only issue I have is that I don't know much about performance tuning with all-flash storage on TrueNAS. I'll probably disable ARC and L2ARC, 'cause reads should be almost as fast. So 256GB of RAM makes no sense anymore. I always use sync on iSCSI, but I don't think I need a SLOG anymore either. I guess I'll see how it works once I finish migrating all the data.

  • @ierosgr
    @ierosgr 7 months ago

    The comment that the ZIL is inside your pool, and when it's outside the pool it's called a SLOG, was a nice touch. I always confuse those two.
    At 12:53, how come you use an MTU of 1500 and not 9000, since you're dealing with large files?

  • @baont5878
    @baont5878 6 months ago

    Thanks for the great videos. Would you recommend TrueNAS in a Proxmox VM at all? My scenario is editing video off the NAS, just as a hobby and up to 4K. I will have a 10Gb direct data connection.

  • @practicallyalive
    @practicallyalive 3 months ago

    With RAIDZ2, can't you just add 1 additional drive? If it's anything like RAID 6 you should be able to just add more drives as needed.

  • @ryanmalone2681
    @ryanmalone2681 3 months ago

    Is that 50% ARC RAM allocation still a thing? I just did some testing of TrueNAS in a VM on Proxmox in preparation for a migration, and I had 500GB of RAM in Proxmox and it ate everything without me doing anything. It just kept taking more and more, and by morning it was all used.

  • @jamesdwi
    @jamesdwi 3 months ago

    One correction: the ZIL and SLOG make writes to the pool a multi-step process. Step 1: data is stored in ARC, in RAM. Then it's written to the ZIL or SLOG from ARC, and then the data is written to the pool from RAM, keeping writes as fast as possible. The ZIL and SLOG are 99.999% write-only; many early ZFS users still used disks for the SLOG because early SSDs died quickly under heavy write loads. ZFS can verify the RAM content is correct, and RAM is always faster than disks or even SSD or NVMe. The time the ZIL or SLOG is read is on ZFS pool import: ZFS will check if there is any data in the ZIL or SLOG that hasn't been written correctly to the pool, and then it writes any data not in the pool. I believe on multi-core systems ZFS can write to the pool and the ZIL or SLOG at the same time, further improving performance.

  • @yerrysherry135
    @yerrysherry135 7 months ago +1

    The ARC also compresses your data in memory. If you have 16 GB RAM, 50% is used for the ARC, so you would cache more than 8 GB (see the example below). The ZIL or SLOG is only used when your server has crashed: when the system reboots, it checks whether every commit was written to disk; if not, it writes the commits from the ZIL or SLOG to disk. This only happens when the filesystem was configured in "sync" mode. Always activate this when using NFS.
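
    A one-liner to illustrate the compressed-ARC point; the compression ratio is a made-up example, since actual ratios depend entirely on your data:

    ```python
    # Compressed ARC: 8 GB of ARC can hold more than 8 GB of logical data.
    # The 1.8x ratio is a hypothetical LZ4 figure, not a measured one.
    arc_gb = 8
    compress_ratio = 1.8
    print(f"{arc_gb} GB ARC ~ {arc_gb * compress_ratio:.1f} GB of cached data")
    # 8 GB ARC ~ 14.4 GB of cached data
    ```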

    • @nadtz
      @nadtz 7 months ago +1

      The 50% limit is a Linux issue that they are working on fixing, but it can be manually tuned; on FreeBSD and OpenSolaris, ZFS will use more.

    • @yerrysherry135
      @yerrysherry135 7 months ago

      @@nadtz I wonder why Linux only uses 50% of the memory for ZFS. Is it because Linux uses ext4 or xfs for its / filesystem and also needs cache memory for that? Or does Linux need more memory than the BSD family?

    • @nadtz
      @nadtz 7 months ago

      @@yerrysherry135 I forget the specifics, but it was a workaround to prevent another issue that has to do with memory allocation in Linux. I also don't currently use Scale; I believe that was supposed to have been fixed in Cobia, which I believe is now out, so it might be fixed.

    • @TechnoTim
      @TechnoTim  7 months ago +2

      It's coming in Dragonfish later this year!

    • @frederichardy8844
      @frederichardy8844 7 months ago

      I'm looking for confirmation: if the SLOG drive fails, does ZFS put it offline and use the ZIL instead, so there's no data loss???

  • @mrfailure93
    @mrfailure93 7 months ago

    I use Unraid but watched this because it seemed interesting. Thanks for explaining it all so thoroughly.

  • @apoorv9492
    @apoorv9492 22 days ago

    How much capacity do you recommend for ZIL and SLOG drives?

  • @ckckck12
    @ckckck12 3 months ago

    On Synology, the same SSD cache pair can be used for both read (L2ARC) and write (SLOG), and you can even add metadata to it.
    Can this be done with TrueNAS?

  • @demanuDJ
    @demanuDJ 7 months ago

    Have you ever tried using Optane drives as a SLOG? They have TONS of IO, and I think they're great if you need that (for example, VMs or databases); combined with a pretty large ARC it can be a great solution for storage.

    • @frederichardy8844
      @frederichardy8844 7 months ago

      I'm using one with a RAIDZ2 vdev of 8x18TB and another with a RAIDZ1 vdev of 3x18TB, and 256GB of RAM, in a DL380 Gen9 with 2 Xeon E5-2640 v4. Works fine for me.

  • @unijabnx2000
    @unijabnx2000 7 months ago

    I hope the special vdev is mentioned in this video..

    • @unijabnx2000
      @unijabnx2000 7 months ago

      I guess a follow-up video for the "special vdev" then. Wendell (L1T) had some neat thoughts on using it to accelerate a pool of rust.

  • @Jifflan
    @Jifflan 6 months ago

    For sync in the dataset settings, do you have it on standard or always? 😊

  • @actng
    @actng 7 months ago

    Does TrueNAS support dynamic RAID expansion yet? I take it we're still waiting, since you talked about adding 2 drives in a mirror at a time. I was really disappointed to realize they didn't support this after doing RAIDZ2 with my 4x 4TB....

    • @frederichardy8844
      @frederichardy8844 7 months ago

      Planned before the end of the year... crossing fingers...

  • @no_classs
    @no_classs 7 months ago

    Thanks, I used your Proxmox things-to-do-after-installation post ❤. At 01:45, what happens to the pool if the cache drive fails?

    • @no_classs
      @no_classs 7 months ago

      A simple Google would have worked.... 😅 It just writes to the vdev.

  • @Gaamaa-oz5ef2lf3n
    @Gaamaa-oz5ef2lf3n 5 months ago

    Hi Tim,
    I need your help.
    I own:
    TrueNAS Core 13.0-U6.1
    16GB RAM
    2 pools, each with 6x 4TB HDDs (WD Red Pro)
    Since I built it 5 years ago, it has been dead slow.
    I can't even create a new folder and hit enter;
    it will say no such path, so I have to hit enter after some delay.
    I have lived with this for 5 years, assuming FreeNAS/TrueNAS is just like this.
    My other Synology NAS works like a charm with ~100Mbps read/write speed,
    whereas this TrueNAS is ~10Mbps approx.
    If I start copying a big folder from the NAS to a local drive for backup,
    then I can't even work with the NAS at the same time;
    it will lose the network.
    But my Intel 1Gig network card is very good.
    Same issue with the onboard LAN too.
    I use a good Cat-6 LAN cable too.
    I have done several updates, from FreeNAS up to the latest TrueNAS Core.
    (Though all the latest TrueNAS Core 13.0-XXX builds show an incorrect drive size on Windows 10.)
    Where could I be wrong?
    Can you or someone help me online or over email/chat?

  • @jimsvideos7201
    @jimsvideos7201 7 months ago

    Why aggregation instead of SMB multi-channel?

  • @no_classs
    @no_classs 7 months ago

    Can you get the Proxmox-helper-type scripts for TrueNAS? That would be a game changer for me.

  • @monish05m
    @monish05m 7 months ago

    @13:13, yes those 10 gig pipes😂

    • @TechnoTim
      @TechnoTim  7 months ago

      hand gestures don't always turn out the way you think they will....

  • @Bill_the_Red_Lichtie
    @Bill_the_Red_Lichtie 7 months ago

    Quality content.

    • @TechnoTim
      @TechnoTim  7 months ago +1

      Thanks Bill!

  • @edd7
    @edd7 7 months ago

    Hey Tim,
    What size and characteristics would you recommend for the SLOG mirror SSDs? I have 200TB in spinning rust and 256GB of ECC RAM. The use case is mostly reading/writing large files (20GB-100GB) and backups, with 20+ sources reading and writing to it at any given point.

    • @TechnoTim
      @TechnoTim  7 months ago +1

      I use 2 Intel Optane SSD P1600X Series 118GB drives in a mirror. Works great. Decently priced; links in the description!

    • @frederichardy8844
      @frederichardy8844 7 months ago

      @@TechnoTim The SLOG needs max 32GB, so you can use over-provisioning on these drives; it can make their life even longer!

  • @newstandardaccount
    @newstandardaccount 2 months ago

    Wait, the more mirrored vdevs you have, the *less* likely it is that two drives in the same vdev will fail? It's more likely, not less. The more of something you have, the more likely it is that any one of them will fail. As you add more vdevs to your pool (of any type), the more likely it is that your pool will fail, since only one vdev has to fail, and the probability of that happening is the probability of losing any one vdev, multiplied by the number of vdevs you have.
    To be clear, this should be unlikely, but it becomes more likely as you add vdevs, not less.

    • @newstandardaccount
      @newstandardaccount 2 months ago

      Point of clarity - "multiplied" isn't quite right, because as you multiply probabilities you get smaller numbers. It's the sum of the probabilities. So if any one vdev had a probability of 1/100 of failing in a year (I'm sure it's way less than that, but to make the point clear I'll say it's 1/100), and you had 10 vdevs, then the probability of your pool failing is the sum of those, which would be 10/100 (strictly, the sum is an upper bound; see the formula below).
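
      In symbols (assuming independent vdev failures), the sum is a first-order approximation of the exact figure; with the numbers above:

      ```latex
      P(\text{pool loss}) = 1 - (1 - p)^n \approx np \quad (\text{small } p),
      \qquad
      1 - (0.99)^{10} \approx 0.0956 \;<\; 10 \times 0.01 = 0.10
      ```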

  • @cdrbvgewvplxsghjuytunurqwfgxvc
    @cdrbvgewvplxsghjuytunurqwfgxvc 7 months ago +1

    Striping L2ARC on NVMe seems unnecessary, at least from a perf perspective.

    • @ajhieb
      @ajhieb 7 months ago

      It's not likely to be a bottleneck in most setups, but if you're repurposing some old drives (as so many TrueNAS users seem to do, myself included), you're better off striping them than mirroring (or RAIDZn), as the extra capacity can make a difference and the redundancy is pretty much pointless.