How Much Memory Does ZFS Need and Does It Have To Be ECC?

  • Published 22 Jul 2024
  • lawrence.video/truenas
    ZFS is a COW
    • Why The ZFS Copy On Wr...
    Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?
    • Explaining ZFS LOG and...
    Set ZFS Arc Size on TrueNAS Scale
    www.truenas.com/community/thr...
    Connecting With Us
    ---------------------------------------------------
    + Hire Us For A Project: lawrencesystems.com/hire-us/
    + Tom Twitter 🐦 / tomlawrencetech
    + Our Web Site www.lawrencesystems.com/
    + Our Forums forums.lawrencesystems.com/
    + Instagram / lawrencesystems
    + Facebook / lawrencesystems
    + GitHub github.com/lawrencesystems/
    + Discord / discord
    Lawrence Systems Shirts and Swag
    ---------------------------------------------------
    ►👕 lawrence.video/swag/
    AFFILIATES & REFERRAL LINKS
    ---------------------------------------------------
    Amazon Affiliate Store
    🛒 www.amazon.com/shop/lawrences...
    UniFi Affiliate Link
    🛒 store.ui.com?a_aid=LTS
    All Of Our Affiliates that help us out and can get you discounts!
    🛒 lawrencesystems.com/partners-...
    Gear we use on Kit
    🛒 kit.co/lawrencesystems
    Use OfferCode LTSERVICES to get 10% off your order at
    🛒 lawrence.video/techsupplydirect
    Digital Ocean Offer Code
    🛒 m.do.co/c/85de8d181725
    HostiFi UniFi Cloud Hosting Service
    🛒 hostifi.net/?via=lawrencesystems
    Protect your privacy with a VPN from Private Internet Access
    🛒 www.privateinternetaccess.com...
    Patreon
    💰 / lawrencesystems
    ⏱️ Time Stamps ⏱️
    00:00 ZFS Memory Requirement
    01:32 Minimum Memory ZFS System
    03:04 TrueNAS Scale Linux ZFS Memory Usage
    04:03 ZFS Memory For Performance
    #TrueNAS #ZFS
  • Science & Technology

Comments • 125

  • @Jimmy_Jones
    @Jimmy_Jones 1 year ago +29

    This will be a common video for all newbies to look up.

  • @marshalleq
    @marshalleq 1 year ago +7

    Finally good advice without fearmongering. There is so much fear mongering with ZFS for some reason.

  • @bdhaliwal24
    @bdhaliwal24 11 months ago +4

    Easily the most informative video/content I’ve seen yet on Truenas. Thanks for sharing this!

  • @edwardallenthree
    @edwardallenthree 1 year ago +6

    Thanks for the comment about the Linux 50% rule with ZFS. zfs_arc_max is a critical setting to adjust.
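
    For anyone who wants to see what that 50% default looks like on their own box, here is a minimal sketch (my own illustration, not from the video) that reads the standard OpenZFS files on Linux; it assumes /proc/spl/kstat/zfs/arcstats and /sys/module/zfs/parameters/zfs_arc_max exist, as they do on a stock OpenZFS install:

    from pathlib import Path

    def read_arcstats() -> dict:
        """Parse /proc/spl/kstat/zfs/arcstats into a {name: value} dict."""
        stats = {}
        for line in Path("/proc/spl/kstat/zfs/arcstats").read_text().splitlines()[2:]:
            name, _kind, value = line.split()
            stats[name] = int(value)
        return stats

    if __name__ == "__main__":
        gib = 1024 ** 3
        arc = read_arcstats()
        arc_max = int(Path("/sys/module/zfs/parameters/zfs_arc_max").read_text())
        print(f"ARC size now    : {arc['size'] / gib:.1f} GiB")
        print(f"ARC limit (c_max): {arc['c_max'] / gib:.1f} GiB")
        # 0 means "use the built-in default", which on Linux is roughly 50% of RAM
        desc = "0 (default, ~50% of RAM)" if arc_max == 0 else f"{arc_max / gib:.1f} GiB"
        print(f"zfs_arc_max param: {desc}")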

  • @healthy5659
    @healthy5659 1 year ago +32

    Nicely explained; however, I am still not clear: if ECC is not strictly required and data integrity is still there without it, what precisely is the benefit of ECC? Or should I ask, in what situations would a non-ECC system fail where an ECC system would not?
    Thanks for the video, please keep uploading more great content!

    • @Prophes0r
      @Prophes0r 1 year ago +37

      Everything is just layers.
      ZFS can provide some reliability. ECC provides reliability at a different step in the chain.
      Example: ZFS loads data into memory to perform a checksum. A bit is flipped in memory. The checksum is calculated. The checksum no longer matches.
      So it tries again. Now the checksum matches. In the end it decides the data was fine and moves on.
      ECC would have fixed the single bit flip, and ZFS wouldn't have had to do the extra work to make sure.
      Failing that, ECC would at least have flagged the problem sooner, so the read could be redone before continuing.
      ZFS assumes the disks are not trustworthy, but in reality nothing is. There are extra checks to hopefully recover from problems, but eliminating them before they can mess with a process is better.
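
      A toy way to see why one flipped bit matters (illustrative Python only, not anything ZFS actually runs; the sample data and the SHA-256 choice are arbitrary):

      import hashlib

      block = bytearray(b"data ZFS is about to checksum and write out")
      good = hashlib.sha256(block).hexdigest()

      block[0] ^= 0x01                       # simulate a one-bit flip in RAM
      flipped = hashlib.sha256(block).hexdigest()

      print("before flip:", good)
      print("after flip :", flipped)
      print("checksums match:", good == flipped)   # False: a single bit is enough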

    • @charleshughes7007
      @charleshughes7007 1 year ago +30

      ZFS helps detect and correct errors which are written to the media, but ECC prevents a potential source of errors before they can ever reach the media.
      It's nice for data integrity but I think ECC's main virtue is that it lets you know very promptly when your memory is failing or otherwise having issues. If those issues are not too severe, it can mitigate them enough to keep your system functional while you resolve the root cause.
      A system without ECC which has memory corruption will crash randomly, corrupt files, and/or just generally act unpredictably. All of these are awful in a NAS.

    • @baumstamp5989
      @baumstamp5989 1 year ago +2

      If data is in RAM waiting to be written to disk and a bit flip occurs prior to the write, then it is a problem. So I cannot agree with the statement that you do not need ECC if you want a proper ZFS NAS.

    • @mikerollin4073
      @mikerollin4073 5 months ago +2

      @@baumstamp5989 "ZFS without ECC RAM is safer than other filesystems with ECC RAM"
      It took WAY too much reading to finally learn that all of the fearmongering about ECC is just a myth.

  • @Ecker00
    @Ecker00 1 year ago +1

    Took me days of research to come to these same conclusions a few months ago, thanks for putting the record straight!

  • @ashuggtube
    @ashuggtube 5 months ago +1

    Great work Tom. Good onya. Just watching this now because it showed up again in my YT timeline. 😊

  • @nixxblikka
    @nixxblikka 1 year ago

    Thank you so much for bringing light to this, and I also love the new frequency of high-quality content!

  • @Tntdruid
    @Tntdruid 1 year ago +9

    Thanks for the easy-to-understand ZFS guide 👍

  • @paulhenderson1462
    @paulhenderson1462 4 months ago

    A nice calm discussion. Thanks for a well reasoned argument about memory use in ZFS. In my shop, we have a general rule of thumb of 128GB of memory per 100TB of zpools served. IOW, if I have a 200TB zpool, the server managing it will have 256GB of memory. We get very good performance this way, with most of the memory mapped to ZFS, which is what you want.
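
    Turning that rule of thumb into numbers is straightforward; this small sketch just restates the 128 GB per 100 TB ratio from the comment, and the rounding up to a power-of-two total is my own added assumption, not part of the original rule:

    import math

    def ram_for_pool_tb(pool_tb: float, gb_per_100tb: float = 128.0) -> int:
        """RAM suggested by the 128 GB per 100 TB rule, rounded up to a power of two."""
        raw_gb = pool_tb * gb_per_100tb / 100.0
        return 2 ** math.ceil(math.log2(raw_gb))

    print(ram_for_pool_tb(200))   # 256, matching the 200 TB -> 256 GB example above
    print(ram_for_pool_tb(100))   # 128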

  • @Okeur75
    @Okeur75 1 year ago +6

    Well, to be honest I'm a bit disappointed by the video. I would have expected some benchmarks to show when TrueNAS becomes unstable/unusable under a certain amount of memory. Or you could have an ECC system and a non-ECC system, overclock the RAM on both of them until it's unstable, and see what it does to your data.
    This video does not show a lot, and I'm sure it did not require a lot of work.
    What happens if you run TrueNAS with 2GB of RAM? Or even 1GB?
    What happens if you run TrueNAS with 8GB (the bare recommended minimum) but with 100TB+ of storage and some load? How does it affect write and read performance?
    How is resilvering affected by the lack of memory?
    All these tests would be useful and interesting to watch, and would also offer a definitive answer to the question we see so many times on the forum: "how much memory do I need for my system?"

    • @lucky64444
      @lucky64444 1 year ago +1

      There are too many variables to make benchmarks like those worth anything. It completely depends on your workload and your equipment. Everyone's performance will be fairly unique. Not having enough RAM is the difference between saturating your 10GbE network connection and barely reading at 200MB/s.

  • @STS
    @STS 1 year ago +1

    Great video topic and timely for me! I am in the process of deciding how much to expand my TrueNAS Core usage. I currently only use it for iSCSI (ESXi). I'd like to move to editing videos directly off TrueNAS instead of copying all assets to my local machine, so I was curious about the RAM usage; I'm currently running 4x 8GB DDR3 ECC Reg. I could probably stand to search for some 16GB or 32GB DIMMs.

  • @jonathanchevallier7046
    @jonathanchevallier7046 1 year ago

    Thank you for this explanation of ZFS.

  • @henderstech
    @henderstech 1 year ago +5

    I appreciate your videos so much. Thank you for your hard work. You are my hero.

  • @bertnijhof5413
    @bertnijhof5413 1 year ago +3

    My ZFS memory usage is occasionally measured in MB, not GB. My use case is running VMs on an Ubuntu desktop, and I have only one pair of hands to keep the VMs occupied. My hardware is cheap: Ryzen 3 2200G; 16GB RAM; 512GB NVMe SSD; 2TB HDD supported by a 128GB SATA SSD as cache. My 3 data pools are: the NVMe SSD (3400/2300MB/s) for the most-used VMs; a 1TB partition at the start of the HDD with 100GB L2ARC and 5GB LOG for VMs; and a 1TB partition at the end of the HDD with 20GB L2ARC and 3GB LOG for my data. The L2ARC and LOG partitions together add up to the 128GB SSD again :) I capped the memory cache (L1ARC) at 3GB.
    My NVMe SSD pool runs with primarycache=metadata, so I don't use the L1ARC for caching data records. My NVMe SSD access does not gain much performance from the L1ARC; the boot time of e.g. Xubuntu improves from ~8 seconds to ~6.5 seconds. My metadata L1ARC size is 200MB, saving space to load another VM :)
    I have a backup server with FreeBSD 13.1 and OpenZFS; it runs on a 2003 Pentium 4 HT (3.0GHz) with 1.5GB of DDR, of which ~1GB is used :) So OpenZFS can run in 1GB :)
    The VMs on the HDD run from L1ARC and L2ARC: basically they boot assisted by the L2ARC and afterwards run from the L1ARC. After a couple of seconds it is like running the VMs from a RAM disk or a very fast NVMe SSD :) :) Here the VMs fully use the 3GB (lz4 compressed), say 5.8GB uncompressed, and my disk IO hit rates for the L1ARC are ~93%. With a 4GB L1ARC I can get that to ~98%.
    For all the measurements I use conky in the VMs and in the host. Conky also displays data from /proc/spl/kstat/zfs/arcstats and from the zfs commands.
    PERFORMANCE:
    The relatively small difference between using the NVMe SSD alone and the NVMe SSD + L1ARC is probably caused by the second-slowest Ryzen CPU available. I expect most boot time goes to CPU overhead and decompression, so reading from NVMe instead of memory does not add much more delay. That would change in favor of the L1ARC with a faster CPU such as a Ryzen 5 5600G.
    More memory would make tuning the L1ARC easy: just make it, say, 6GB. It would not make the system much faster, since the L1ARC hit rates for disk IO are already very high in my use case, but I could load more VMs at the same time.
    The 2TB HDD is new. In the past I used 2 smaller HDDs in RAID-0. They were older, slower HDDs, but the responsiveness felt better; I expect that while one HDD moved its head, the other could read. Those HDDs had 9 and 10 power-on years, so one of them died of old age, and I don't trust the remaining one anymore for serious work. Another advantage was that my private dataset was stored with copies=2, creating a kind of mirror for that data; once it corrected an error in my data automatically :) I am considering buying a second HDD again.
    My Pentium backup server has one advantage: it reuses two 3.5" IDE HDDs (320+250GB) and two 2.5" SATA HDDs (320+320GB). It has one disadvantage: throughput is limited to ~22 MB/s due to a 95% load on one CPU thread. That good old PC gets overworked for about 1 hour/week.
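
    For anyone curious how hit rates like the ~93% above can be computed, here is a minimal sketch that reads the same /proc/spl/kstat/zfs/arcstats file mentioned in the comment (it assumes a Linux/OpenZFS system; on FreeBSD the kstats live under sysctl instead):

    from pathlib import Path

    stats = {}
    for line in Path("/proc/spl/kstat/zfs/arcstats").read_text().splitlines()[2:]:
        name, _kind, value = line.split()
        stats[name] = int(value)

    hits, misses = stats["hits"], stats["misses"]
    print(f"ARC hit rate since boot: {100 * hits / (hits + misses):.1f}%")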

  • @jms019
    @jms019 1 year ago

    I've got a slightly nasty 32GB USB stick that writes incredibly slowly, so it would take hours to fill, but it works well as a cache device, though it has taken weeks to fill. Now that it's full (ZFS-stats -L), it has improved things beyond what some smaller, faster SSD cache partitions did on their own. So if you have "spare" USB memory sticks and ports, there is no risk in stuffing them in as cache devices. As I only run the machine for a few hours per week, persistent cache is good for me.

  • @zparihar
    @zparihar 1 year ago +1

    Once again great video! Question for you. You mentioned S3 Target. Are you using Minio? And if so, how is the performance when it's running on top of ZFS?

  • @chromerims
    @chromerims 1 year ago +2

    Great vid 👍
    My brain read the title as: *"How much money does ZFS need?"*
    Kindest regards, friends.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +1

      How much money does ZFS need seems somewhat accurate as well.

  • @milhousevh
    @milhousevh 1 year ago +3

    Timely video as I've just upgraded an old FreeNAS 8 server to TrueNAS. The performance I'm seeing definitely aligns with this video.
    HP Gen 7 Microserver N54L (2x 2.2GHz AMD Turion 64-bit cores), 16GB ECC RAM, LSI 9211-8i SAS controller (PCI-E 16x slot), Intel NIC (PCI-E 1x slot).
    TrueNAS Core 13.0-U5, booting off 250GB Crucial MX500 SSD (internal SATA port).
    * RAID-Z1 Pool: 4x 8TB IronWolf Pro 7200 RPM HDD (connected to 1st port on LSI Controller)
    * RAID-Z1 Pool: 4x 1TB Crucial MX500 SSD (connected to 2nd port on LSI Controller, via 2.5" 4-bay dock in optical drive bay)
    * Mirrored Pool (encrypted dataset): 2x 4TB IronWolf Pro 7200 RPM HDD (connected over eSATA to external 2-bay enclosure).
    This is an ancient system, massively underpowered these days, but for home use (ie. SMB/NFS file sharing - mostly media/movies/TV shows on HDD, plus the occasional git repo or document on SSD) it's still perfect as it saturates the 1Gbps NIC for pretty much everything (reads AND writes, even from the 4xHDD pool which has a sequential read rate of 640MB/s).
    At idle the system pulls about 55W, and maxes out at about 105W during a 4x HDD scrub. It's nearly silent but stick it in a closet (as mine is) and you absolutely won't hear it.
    Even the older and slower N36L can saturate the 1Gbps network with a similar controller/disk setup (I recently swapped out the N36L motherboard for the N54L as a final upgrade!)
    The only possible improvement now would be to upgrade the network side of things as that's definitely become the limiting factor, but to be honest for home use there's really no need...

  • @thisiswaytoocomplicated
    @thisiswaytoocomplicated 1 year ago +5

    I'm running ZFS on my desktop. It has 8 NVMe drives, all mirrored in pairs, which results in about 5TB of storage in total (not evenly sized, but I run it for reliability, not optimal speed, and 14/10 GB/s R/W is plainly good enough for me).
    It doesn't really matter much, since that desktop is a bit beyond most normal builds (5975WX, 512GB ECC RAM, etc.), so it is only of anecdotal value. And yes, that is too much RAM even for ZFS: it only uses about 50-150 GB out of the box for those 5TB of storage. So I will need to look into how to tune it to do better caching. ;-)
    My file server on the other hand is just an old trusty workhorse (an i7 from 2015), until recently running Linux md-raid with 16 GB of non-ECC RAM. It is a very normal home file server: normal (recycled) PC hardware, running about 8 years 24/7/365 without issue. Only the PSU needed replacement once so far.
    It was always running RAID 6 with 8 drives; the last incarnation was 8x 9TB. Of course, after a few years that again became too small.
    So a few days ago I replaced the 9TB drives with 18TB drives, and this time I also switched from md-raid to ZFS (raidz2).
    What can I say? It works at least as well as before, just a bit faster since the drives are a bit faster than before. The hardware is old but not super slow, and the memory is not much, but with a 10GbE connection it is still good enough for me.
    md-raid certainly stood the test of time in my home, so I can still fully recommend it. With ext4 it is very robust.
    But running ZFS now of course has its added value. And when the hardware finally dies, I will switch this to ECC RAM too. Of course.

  • @ofacesig
    @ofacesig 1 year ago +1

    Could you speak more to how you set up your s3 buckets?

  • @JoePosillico
    @JoePosillico 1 year ago +1

    Good timing for me on this video. I currently have a TrueNAS server I built using an old Intel i7 system with 32GB of RAM and 5 spinning rust drives. I've been running it for a year, and it runs well for backups. I've been thinking about building one specific to VM storage that is more performant, using 4x 2.5" SSDs instead of HDDs. Is 128GB of RAM just overkill for 15 VMs? Based on this video, maybe 64GB would be good enough? If there are some go-to guides on this, please let me know; otherwise I may just ask this question on your forums.

  • @be-kind00
    @be-kind00 11 months ago +1

    Another issue for us home lab folks is that if we want to build a low-power small NAS, there are very few mATX or ITX motherboards that have ECC support, and the ones that do are expensive. That's why we want to use a NAS that uses ZFS RAID.

  • @artlessknave
    @artlessknave 1 year ago

    Note that there are, or at least used to be, a few usually very rare conditions where ZFS can need loads and loads of RAM to recover a pool, and if it can't get it, it fails to import the pool.
    Similar to how a dedup pool can reach a point where it cannot be loaded due to insufficient RAM.
    One of the reasons TrueNAS puts swap on every disk is so that if RAM becomes urgently insufficient, it can at least swap. It will be slow as hell, but might have a chance of finishing.
    Of course, having backups mitigates much of that risk.

  • @drescherjm
    @drescherjm 1 year ago +1

    0:15 I have had ZFS at work and at home for around 8 years. I usually don't come even close to 1GB per TB on any system; it's usually closer to 1/3 GB of memory per TB. The main reason is budget and the number of slots. Some of my servers are 10+ years old and only have 4 DIMM slots, but at the same time have 20 or more hard disks.

    • @Prophes0r
      @Prophes0r 1 year ago +1

      The only time it is ACTUALLY needed is for deduplication.
      You can get away with turning off ARC if you want. But deduplication just uses [X]bytes of memory / [Y]bytes of storage to function.
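
      As a rough back-of-the-envelope for that bytes-per-byte relationship, a commonly cited estimate is on the order of a few hundred bytes of dedup table per unique block; the 320-byte per-entry figure and the 64 KiB average block size in this sketch are assumptions for illustration, not measured values:

      def ddt_ram_gib(unique_data_tib: float,
                      avg_block_kib: float = 64.0,
                      bytes_per_entry: int = 320) -> float:
          """Rough dedup-table RAM estimate; block size and per-entry cost are assumptions."""
          blocks = unique_data_tib * 1024 ** 4 / (avg_block_kib * 1024)
          return blocks * bytes_per_entry / 1024 ** 3

      # ~5 GiB per TiB of unique data with these assumed numbers
      print(f"{ddt_ram_gib(1.0):.1f} GiB of DDT per TiB of unique data")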

  • @MattiaMigliorati
    @MattiaMigliorati 1 year ago

    thank you for this useful video!

  • @dinkidink5912
    @dinkidink5912 1 year ago

    Just checked my home NAS, it's just a basic media/file server with no need for cache, a touch over 6TB of capacity, current used RAM according to htop is 500MB.

  • @DiStickStoffMono0xid
    @DiStickStoffMono0xid 1 year ago

    Thank you for mentioning video productions using TrueNAS / ZFS, as this helps me make a decision on a future server upgrade for video / VFX production. The machine is probably going to be NVMe based with 100G at the server side and 10x 10G connections to the clients, but it really helps to know that there already are productions running on TrueNAS or ZFS, because there is not a lot of information to be found for this particular use case.
    BTW, with the setup mentioned above, would you recommend setting the RAM to only hold metadata and having all file transfers go directly to disk?

  • @youtubegaveawaymychannelname
    @youtubegaveawaymychannelname 1 year ago +1

    Hey Tom, any chance you can do a video on cores vs. clocks for ZFS? Specifically, I'd love to see if there is any update to that wisdom when it comes to TrueNAS SCALE and TrueNAS Core.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +3

      Maybe. I have a troubleshooting video I'm working on that would help you make a decision on your own based on the parameters of the test.

  • @eugenevdm
    @eugenevdm 1 year ago

    Hi there,
    Thanks so much for the video! It's an eye opener, as I thought there would be a "maximum" when running VMs, but clearly not. Unrelated question: which of your videos can I watch to determine if ZFS over iSCSI would be a good way to connect a Proxmox server to a NAS? I'm stuck trying to figure out this architecture. I get building the Proxmox server and I get building the NAS, but I don't know what file system to use, or what kind of switches for maximum performance.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      I prefer NFS over iSCSI as storage for VMs, and I don't use Proxmox; I use XCP-ng. I have a video here th-cam.com/video/xTo1F3LUhbE/w-d-xo.html on using storage for VMs.

  • @gjkrisa
    @gjkrisa 1 year ago

    With ZFS, is there a way to switch the OS, or if you broke your OS and have to do a clean install, is there a way to not lose the data?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +1

      ZFS pools can be imported into another system that is at least running the same or newer version of ZFS
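
      In practice that move is done with the standard zpool CLI; a hedged sketch of the two usual steps (run as root, and "tank" is a hypothetical pool name):

      import subprocess

      # List pools that are visible on the attached disks but not yet imported.
      subprocess.run(["zpool", "import"], check=True)
      # Import one by name; add -f only if the pool was not cleanly exported first.
      subprocess.run(["zpool", "import", "tank"], check=True)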

  • @jsclayton
    @jsclayton 1 year ago +1

    Have you had any stability issues on Scale tweaking that memory usage switch to allow more than 50%? Seems someone from iX Systems very persuasively advised against going higher on Linux.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      Only if you are using other things that need the memory such as virtualization.

  • @udirt
    @udirt 1 year ago

    Two things to keep in mind:
    Commercial appliances & memory: Oracle SFS boxes had 512GB/node almost a decade ago. Tegile ZFS-based systems started at 48GB/node, then ~220GB (so ~480GB per system *plus NVRAM*); Tegile in 2020 was at ~980GB/system.
    There have been highly important patches to optimize L2ARC and dedup overhead that those guys missed, but if you want to see low latency on ZFS you can either just pretend and diss people who ask about poor performance, or admit how high the requirements actually are...

  • @5654Martin
    @5654Martin 1 year ago

    Is there an easy way to back up my TrueNAS storage to a third-party location with SFTP etc. in an encrypted and compressed manner?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      Yes, SFTP can be set up under Cloud Credentials as a backup option.

  • @MisterPhysics511
    @MisterPhysics511 1 year ago

    Just making sure I got this right: your purple NAS is only used as a secondary backup server and barely uses 3GB of RAM for 4x 8TB drives? Is it able to saturate a regular gigabit connection on read/write? Thanks

  • @Speccy48k
    @Speccy48k 1 year ago

    Thanks for this video. I have plenty of ECC memory: would it be beneficial to use L2ARC, or is it not required if there is enough RAM available for ZFS?
    My understanding is that L2ARC is the equivalent of swap, so it may impact performance.
    Also, what is the benefit of using a ZIL/SLOG device like an Optane drive?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      RAM is better than L2ARC and Optane would be good for ZIL/SLOG. I have more details on how ZIL/SLOG here th-cam.com/video/M4DLChRXJog/w-d-xo.html

  • @loucipher7782
    @loucipher7782 1 year ago +1

    Can't you just use a 2TB NVMe drive for the ZFS cache?
    They're so much cheaper compared to that bulk of RAM, and I don't mind if it's slightly slower as long as it's faster than HDDs.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      That is a more complicated answer th-cam.com/video/M4DLChRXJog/w-d-xo.html

  • @yourdad9293
    @yourdad9293 1 year ago +1

    Very interesting.

  • @RocketLR
    @RocketLR 1 year ago

    I've been running the jankiest setup for 3 years now.
    One old gaming computer converted to an ESXi host. I'm talking DDR3 and an i7 4770K...
    Then I'm running a TrueNAS VM where I've hooked up 3 separate disks as datastores, each holding a single VM disk.
    Then that TrueNAS VM basically RAIDs those 3 disks together.

  • @UntouchedWagons
    @UntouchedWagons 1 year ago +2

    I've read that the 1GB of RAM for every 1TB of storage is for deduplication but I have no idea. I have 32GB of RAM in my SCALE box, how do I tell ZFS to use more than half of it?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +1

      Set ZFS Arc Size on TrueNAS Scale
      www.truenas.com/community/threads/zfs-tune-zfs_arc_min-zfs_arc_max.99361/

    • @Prophes0r
      @Prophes0r 1 year ago

      @@LAWRENCESYSTEMS Don't forget to tune zfs_arc_sys_free as well. It is often left out, but is a good safety setting that can let you push WAY closer to the limit with your ARC Max without having to worry about emergency evictions from ARC if something else on the system suddenly wants more memory. zfs_arc_sys_free will start calmly evicting ARC as you get to the limit, instead of waiting until the system is about to OOM.
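
      For reference, on a generic Linux/OpenZFS box both of these parameters live under /sys/module/zfs/parameters and can be changed at runtime as root; the sizes in this sketch are made-up examples, and on TrueNAS SCALE the supported route is the init-script method from the forum thread linked above rather than writing the files directly:

      from pathlib import Path

      PARAMS = Path("/sys/module/zfs/parameters")
      GIB = 1024 ** 3

      def set_zfs_param(name: str, value: int) -> None:
          """Write a ZFS module parameter (takes effect immediately, lost on reboot)."""
          (PARAMS / name).write_text(str(value))

      set_zfs_param("zfs_arc_max", 24 * GIB)      # example: allow ARC up to 24 GiB instead of ~50% of RAM
      set_zfs_param("zfs_arc_sys_free", 4 * GIB)  # example: start evicting ARC when free RAM nears 4 GiB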

  • @charleshughes7007
    @charleshughes7007 1 year ago +2

    I'm running TrueNAS SCALE on a Ryzen 2600 + X570 Taichi + 32GB ECC system with a 6x16TB RAIDZ2 and a 2x4TB mirror and it's been doing great. I'm sure it would work with less memory, but this gives me some space to play around with local VM hosting too.

  • @deathcometh61
    @deathcometh61 8 months ago +1

    Short answer is all of it. Can only hold 32GB? Get 2TB ECC RAM sticks and force it to your will.

  • @KarlMeyer
    @KarlMeyer 1 year ago +1

    I wonder how this will apply to Unraid when it gets its ZFS support update soon.

    • @blyatspinat
      @blyatspinat 1 year ago

      gtfo with unraid :D

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +1

      Depends on how they implement it, but it should work.

  • @lordgarth1
    @lordgarth1 1 year ago

    I have a TB of ECC memory on my TrueNAS server; is that enough?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      Depends on your workload, might want to consider more. 😜

  • @CoryAlbrecht
    @CoryAlbrecht 1 year ago

    Does TrueNAS Scale mean TrueNAS Core on FreeBSD is going to be abandoned?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      Not at this time, they recently released an update to Core.

  • @blablabla8297
    @blablabla8297 1 year ago +1

    Does ZFS benefit from DDR5, or is it better to just buy a larger capacity of DDR4 for the same price?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      Faster memory is better, but that will come down to what your next bottleneck is, such as NIC interfaces or workload type.

    • @blablabla8297
      @blablabla8297 1 year ago

      @@LAWRENCESYSTEMS Thanks. Yeah, I have a gigabit interface with spinning disks on my home NAS, so I thought I may as well go with more RAM, as the bottlenecks would probably come from other places anyway.

  • @MikelManitius
    @MikelManitius 4 months ago

    LOL. love the t-shirt.

  • @stalbaum
    @stalbaum 1 year ago

    I always thought: as many DIMMs as are lying around, and capacity over speed.

  • @luckyz0r
    @luckyz0r 1 year ago

    Love your videos, they are amazing.
    But..... where the f*** do you buy your t-shirts? :D I really love them.
    Keep up the good work ;)

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      I have links in the video descriptions that take you to the shirt store lawrence.video/swag/

  • @cristianr9168
    @cristianr9168 1 year ago +1

    Is 128GB overkill? I want to turn my 5950X and 128GB into a NAS.

    • @mistercohaagen
      @mistercohaagen 1 year ago

      What is the purpose of the NAS? I ran that proc with 64GB of ECC 3200MHz dual rank dimms as a NAS for a while. I found it to be overkill, even with a bunch of VM's and IOMMU passing through a GPU and capture card for an OBS system. 10G ethernet is easier to saturate than you think. Chipset matters too, x570 is probably best for server use with a desktop AM4 chip. I now run a Ryzen 3 3100 & 32GB, and it still saturates the 10G all day, even with a quad NVMe card, and 8x SATA SSD's.

    • @Prophes0r
      @Prophes0r 1 year ago

      Users? Type of data being stored? How much storage?
      It all matters.
      Memory for ARC is just a bonus for zfs unless you are doing deduplication.
      Give it however much you want to. But there will be a point where it doesn't actually do anything for you.

    • @LackofFaithify
      @LackofFaithify 1 year ago

      Not overkill, depending on what you want to do. If you want to use ECC, go check out the ASRock Rack motherboards for AM4; they are all server-oriented with ECC support and 10G connections. Just be mindful of the limits and weirdness they can have regarding PCIe lane usage.

  • @seeingblind2
    @seeingblind2 1 year ago +2

    How much memory do you need?
    *YES*

  • @Mr.Leeroy
    @Mr.Leeroy 1 year ago +1

    A good way to get an idea of how much RAM your pool actually wants is to check during a scrub.
    It will allocate a lot more in the process and free a lot upon completion.
    P.S. Looking a lot better with that monitoring dashboard in the background. At least it makes sense.

  • @cmoullasnet
    @cmoullasnet 1 year ago

    You look good with glasses 😎

  • @sharedknowledge6640
    @sharedknowledge6640 1 year ago +1

    Nice video and thanks for helping debunk the myths. The level of performance you can get from even a low end TrueNas server completely shames even a high end Unraid server because of ZFS intelligent use of RAM. It’s just Apples and Oranges with TrueNas being a Ferrari and Unraid being an ox cart while Synology and Qnap are somewhere in between. Further even without ECC memory TrueNas is way less likely to have data integrity issues. Unraid loves to kick perfectly good drives out of the array kicking off a series of unwelcome time consuming tasks that just further puts your data needlessly at risk.

    • @dfgdfg_
      @dfgdfg_ 1 year ago

      you alright hun?

  • @RocketLR
    @RocketLR 1 year ago +1

    Lawrence what? Lawrence of Arabia? You sound like royalty to me! Are you royalty?!
    - FMJ Drill Sergeant "Earl something something"
    I just had to get that out of MY system..

  • @Mr_Meowingtons
    @Mr_Meowingtons 1 year ago

    All of it..

  • @shephusted2714
    @shephusted2714 1 year ago

    Why stop at 128GB? 512GB does better, and large memory is getting cheaper; it is the best upgrade. But large arrays of SSDs can easily saturate all but the fastest network links. More RAM, NVMe, and fast network links are the best priorities to focus on for infrastructure upgrades and optimal performance; they are all important.

    • @Prophes0r
      @Prophes0r 1 year ago +3

      The type of data being stored matters too. If blocks aren't accessed frequently, no amount of ram for more ARC is going to matter.
      The point is that there is a persistent myth that ZFS uses a ton of memory, and it is clearly false.
      Only deduplication NEEDS memory. Everything else is just a luxury to speed up bursty workloads, or blocks that are constantly accessed.

  • @johnroz
    @johnroz 1 year ago

    1GB per TB right?

  • @tazerpie
    @tazerpie 1 year ago

    Why wouldn't you use a caching NVMe SSD?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +3

      Because memory is faster and more effective.

  • @LedufInfraLeDufiNFrA
    @LedufInfraLeDufiNFrA 1 year ago

    Hey guys, you're testing for a lab; in a production environment with heavy I/O, memory is really used, so if you have a lot of memory errors (and you can be sure to have many with 128GB in use), you will be pleased to have ECC doing the job against data corruption.
    This is my experience.
    Atom CPU... okay, you killed me.
    With a Xeon... you use ECC; $500 for the CPU... and it only works with ECC.
    😅😅😅😅

  • @Prophes0r
    @Prophes0r 1 year ago +1

    This is something that needs to be spread, because I STILL hear it.
    The only thing ZFS NEEDS RAM for is deduplication.
    Everything else is just nice to have for ARC. That's it.
    If you need to, you can even disable ARC and have ZFS use ZERO extra memory.
    I'm not sure what your use case would be, but it is doable.
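
    The closest per-dataset knob to "turning ARC off" is the primarycache property; a hedged sketch using the standard zfs CLI, where "tank/scratch" is a hypothetical dataset name:

    import subprocess

    # Stop caching both data and metadata for this dataset in the ARC.
    subprocess.run(["zfs", "set", "primarycache=none", "tank/scratch"], check=True)
    # primarycache=metadata is the middle ground: keep metadata in ARC, skip file data.
    subprocess.run(["zfs", "get", "primarycache", "tank/scratch"], check=True)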

  • @thegorn
    @thegorn 1 year ago

    I have 512GB of ECC RAM; is that enough?

  • @shotbyschwank
    @shotbyschwank 6 months ago

    LTT logo?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 6 months ago +1

      Nope, LTS logo and it pre-dates the LTT logo.

  • @WillFuI
    @WillFuI 3 months ago

    Me, who got a great deal on 192GB of RAM...

  • @TechySpeaking
    @TechySpeaking 1 year ago +1

    First

  • @Itay1787
    @Itay1787 1 year ago +7

    ZFS needs ECC RAM to avoid pool and file corruption; I know this from experience…

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +16

      Nope, does not NEED it, but it's a nice to have.

    • @NickyNiclas
      @NickyNiclas 1 year ago +6

      ECC is more important for system stability, say if you have mission critical services running, it can help avoid crashes. Still, memory corruption is pretty rare anyway.

    • @drescherjm
      @drescherjm 1 year ago

      Although I do now have ECC on every zfs system (8 to 10) I have between home and work, I did run zfs systems for several years in production at work without any corruption. The key is to make sure your system is stable before using it. For me that meant 0 errors on memtest86 for 72+ hours of testing. No overclocking of CPU or ram / only JEDEC standard speeds and timings.

    • @sopota6469
      @sopota6469 1 year ago +2

      @@LAWRENCESYSTEMS I don't think he said that meaning it's a mandatory requirement, but as something that can avoid corruption, so better to be sure to have it.
      That said, I don't have any confidence in a system doing very complex tasks like deduplication, full-volume snapshots, caching, iSCSI, etc. on volumes of 40TB+ without ECC memory. There are very good reasons servers use ECC RAM. Saving a few bucks in a multi-thousand-dollar project isn't worth it.

    • @LackofFaithify
      @LackofFaithify 1 year ago +1

      @@sopota6469 You really think the type of person that isn't interested in ECC RAM is also going to be the type that sets up dedupe and all the other bells and whistles on a 40TB system? Or is it just an average home user, and you just have to show off how smart you are?

  • @raghavmahajan3341
    @raghavmahajan3341 1 year ago

    Is it just me, or do the color scheme and the thumbnail look like LTT?

  • @davebing11
    @davebing11 1 year ago +3

    If you DON'T use ECC memory on a storage server, you are a fool.

    • @LackofFaithify
      @LackofFaithify 1 year ago +6

      If you don't use ECC memory on a storage server you were probably just an average person called a fool on a Truenas forum and went and bought a synology.

    • @ISBayHudson
      @ISBayHudson 1 year ago +2

      Comments like this really helped me choose Unraid.

    • @f.d.castel2821
      @f.d.castel2821 1 year ago +3

      Yeah. My rubber duck died last year because I didn't use ECC RAM. You have been warned.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +4

      Or you are someone without a budget for it.

    • @be-kind00
      @be-kind00 11 months ago

      Disagree. There are thousands of people using Synology, QNAP, and many other appliances without ECC. How many incidents have we heard of where the root cause of a ZFS system failure was not having ECC RAM? None in my 40 years of IT, and none in the last year of reading hundreds of posts on forums or NAS-vendor-specific user group sites.

  • @msofronidis
    @msofronidis 1 year ago

    Is the ZFS cache the memory swap file?