Storage Server Update: Hardware, Optane, ZFS, and More!

  • Published Mar 28, 2018
  • **********************************
    Thanks for watching our videos! If you want more, check us out online at the following places:
    + Website: level1techs.com/
    + Forums: forum.level1techs.com/
    + Store: store.level1techs.com/
    + Patreon: / level1
    + L1 Twitter: / level1techs
    + L1 Facebook: / level1techs
    + L1/PGP Streaming: / teampgp
    + Wendell Twitter: / tekwendell
    + Ryan Twitter: / pgpryan
    + Krista Twitter: / kreestuh
    + Business Inquiries/Brand Integrations: Queries@level1techs.com
    IMPORTANT Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Intro and Outro Music By: Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 3.0 License
    creativecommons.org/licenses/b...
  • Science & Technology

Comments • 192

  • @m-copyright 6 years ago +63

    When Wendell says it's garbage, you know it is.

  • @TheWilldrick 6 years ago +34

    A month ago Linus was asking what the Optane was for; today Wendell teaches!

    • @fredio54 3 years ago

      ZIL, L2ARC, and don't forget: swap partitions! It'd be silly to use it for much else, except maybe as a cheaper, bigger RAM-disk alternative, but that's hardly required since a good SSD matches or exceeds these in bandwidth; it's just the latency that is way better. To get good speed out of these you need to lay down big coin for oversized drives, too.
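A minimal sketch of what the comment above describes: attaching fast flash as a separate intent log (SLOG) and an L2ARC read cache on an existing pool. The `zpool add ... log` and `zpool add ... cache` subcommands are standard OpenZFS; the pool name and device paths here are placeholders, and in practice a mirrored log device is preferable to a single one.

```python
import subprocess

POOL = "tank"                    # hypothetical pool name
SLOG_DEV = "/dev/nvme0n1"        # placeholder: Optane (or similar) for the SLOG
L2ARC_DEV = "/dev/nvme1n1"       # placeholder: device for the read cache

def zpool(*args):
    """Run a zpool subcommand and raise if it fails."""
    subprocess.run(["zpool", *args], check=True)

if __name__ == "__main__":
    # Attach the devices as a separate intent log (SLOG) and an L2ARC.
    zpool("add", POOL, "log", SLOG_DEV)
    zpool("add", POOL, "cache", L2ARC_DEV)
    # Show the resulting vdev layout.
    zpool("status", POOL)
```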

  • @TechPillsNet 6 years ago +43

    One of the best videos you've ever made, got me to understand zfs. 10/10 will definitely watch again.

  • @online_now6834 6 years ago +74

    Wendell, the Mr. Rogers of the tech world...

    • @StalewindFarto 6 years ago +5

      To me Tech Deals is the Mr. Rogers and Wendell is the Captain Kangaroo.

  • @joseroman6484 6 years ago +1

    This is awesome! I just posted requesting this sort of video and update, and here's one. Too quick for it to have been from my post; I guess you guys already had it in the works. Keep the awesome content coming!

  • @NariKims 6 years ago +7

    Wendell is so relaxing to watch and listen to.

  • @stranger7968 6 years ago +87

    9:06 "tidepod3" "tidepod2" :P

  • @Nameback 6 years ago +2

    You mention "jumbo frames" and other tricks used to get good throughput. I'd love to see a video on that in the future!

  • @kevinhaas5261 6 years ago +67

    Would love to see a Steam Cache tutorial. That setup would be super helpful for anyone with multiple gaming computers in the same house.

    • @barrade5 6 years ago +6

      Also with slower internet & "Capped" internet plans.

    • @MohdAkmalZakiIO 4 years ago

      Linus has a video where he set up multiple PCs connected to one tower, if I'm not mistaken. He said something about a Steam cache as well.

  • @pollomoolokki 6 years ago +1

    2 minutes in and you already blew my mind about the hard drive speeds! :) Great job guys! Also I love the self-reflection, "lessons learned". Lifelong learning and all that jazz :p

  • @jantestowy123 6 years ago +6

    This video is done very nicely; way better production quality altogether than before...

  • @leviathanpriim3951 6 years ago

    great vid Wendell, thanks for the info

  • @lowfrequency400xp 6 years ago

    Great work in front of the camera sir!!! Also, engagement

  • @par5eagles975 6 years ago

    great video, thanks wendell!

  • @Najvalsa 6 years ago +3

    You know you're in for a good video when it starts with "ugh..."

  • @barrade5 6 years ago +1

    +1 for the Cache setup. Been using pfSense & some DNS Caching, though I'd love to scale it up with a better geared performance NAS. Also, I hope to see you in "The Verse" sometime StarCitizen ;)

  • @mychemicaljojo 5 years ago

    really nice video I learned a lot! thanks for sharing

  • @chadhelou 6 years ago +21

    Can we get a more detailed step-by-step of setting up disk shelves? I find it super interesting.

    • @trumanhw 4 years ago

      *No. Not even if you pay them over 20/mo x 12mo with the expectation they'd honor the pledged offer of answering a single question. They WRITE that (to get you to sign up) -- but they have no interest in doing it*

  • @Denstoradiskmaskinen 6 years ago +1

    7:30 That htop is just beautiful, a few more threads than my celeron dual core fanless server ^v^

  • @frosty9392 6 years ago +7

    some deeper dives into zfs stuff would be sweet
    also that steam caching dns hackery.. yes please

  • @positivemelon7578 6 years ago +7

    It would be great to get a zfs installation guide (hardware & software)

  • @vskye1 6 years ago

    Perfect timing for this video. I just ordered parts for a new build, and the old box with a Xeon E3-1230 v3 will be my new FreeNAS box. Now, I just need some new hard drives. (my two old Enterprise hd's are getting rather old .. like 40,000 hours)

    • @vskye1 6 years ago

      1230 v2 .. dang typos.

  • @ahslan 6 years ago

    Awesome video. Crazy that you can saturate 10g connections. I just rebuilt my home storage server with Windows Server 2016 Essentials (coming from 2012 Essentials) using Storage Spaces to get dual parity virtual disks (comparable to raidz2). The write speeds are not good at all but at least I now have 2 disk redundancy on my virtual disk. I initially really wanted to give FreeNAS a try but I simply didn't have the hardware for it (running on an hp microserver n54l and 8gb of ram). My next project is to give tiered storage a try by inserting some SSDs into the mix.

  • @nosirrahx 6 years ago +2

    The 900P has an undocumented feature where the Optane caching software will recognize it as completely compatible, due to the 3D XPoint memory inside the 900P. Intel claims it won't work, but it does, and it allows you to create a huge disk with fast cache that is also way bigger than those crappy 16/32GB drives.

  • @johntotten4872 6 years ago

    Great video Wendell. I am totally new to all this server stuff but would watch a server-for-noobs setup video to try and better understand.

  • @PalladianPD 6 years ago

    Cool video guys.

  • @justinhowarth960 6 years ago +1

    This guy is a true tech Jedi

  • @cmj20002 5 years ago

    I have a Dell PowerEdge T620 sitting around that I was going to use for this purpose. It has two E5-2667 v2 8-core CPUs in it, and right now I have 10TB of SATA HDDs. I plan on getting an SSD cache set up and 4 more HDDs added, and those will be 15K SAS drives. I will use hot spares if I can. These videos have got me motivated to get it set up.

  • @nadpro16 6 years ago +9

    ZFS for the win; been using it for years and love it. Talked to some FreeNAS devs and got a bit of info that anything over 32GB of RAM on any RAID size is overkill. ZFS will still use it, but the 1GB per TB rule does have a soft cap in terms of performance.

    • @stolidifiedtoast 6 years ago +3

      I've been using it for several years now too (on Linux), and I love it. Pretty sure the 1 gig of RAM per terabyte recommendation was meant for things like many-user workloads or the deduplication feature (a bad idea to use that feature for most people, by the way). It's really not necessary for typical media / sequential workloads.

    • @ewenchan1239 5 years ago +2

      I started using it back in 2006 and to this day, I STILL hate it.
      Yes, I know that ZFS has come a long way since Sun Microsystems (now Oracle) originally developed it, but the fundamental issue still remains: ZFS has ZERO bit-read data recovery tools. Period.
      In other words, if your zpool dies (let's say you changed the position of the drives without exporting the pool first so that you can remove/eject the disks from the drive port/channel), when you fire the system back up, the UUID on the drive will no longer match the UUID of the array pool and the zpool will refuse to mount, even if you add the -f flag.
      So unless you remember exactly the order that you installed the drives in (which, when you have as many drives as they have - haha! good freakin' luck!) - you'll never be able to remount that ZFS pool.
      So the next option SHOULD have been to perform a bit-read of the platters themselves in order to try and extract whatever you can from them. Except you CAN'T do that, because there are literally ZERO tools that can do that with ZFS. (Plus the way that the entire filesystem is architected almost prevents you from being able to recover data using the bit-read method.) Which also means that for the amount of storage that they have, their only solution (which even Wendell says) is backup. And in their case, I would recommend using something like an LTO-6 or LTO-7 tape library to automate the backup process as THE ONLY mitigation strategy/solution against a ZFS zpool failure like this.
      I know this because this has literally happened to me TWICE.
      Since then, ZFS is banned from my house on production servers and production environments.
      If it weren't for this, ZFS would have been great. But because of this, ZFS sleeps with the fishes next to Btrfs in my books.

    • @peterpain6625 5 years ago

      @@ewenchan1239 This is just wrong. Neither on Solaris nor on FreeBSD does the order in which you put in the drives matter. I've built and upgraded multiple (as in >50) storage servers with ZFS, and the problems you're describing aren't true. Maybe you used an old version of ZFS on Linux or something, I don't know. Also, Btrfs may have its merits, no doubt, but for reliable storage you may want to reconsider trusting it. There is a reason RH is dropping it from the supported filesystems from RHEL 8 onwards.

    • @ewenchan1239 5 years ago

      @@peterpain6625
      @Peter Pain
      Really?
      I STILL have the $720 bill for the Premium support from Sun Microsystems and the emails with their developers and engineers along with some of the error messages and logs (whatever I had sent to them) still in my email that PROVES this, included below:
      Gwen Nicodemus
      Fri 2007-05-04 2:51 PM
      I called and talked to the escalation engineer for a while. This is what I learned:
      We aren't going to be able to put the array back online. The zfs pool is corrupted. The only way to get the array back would be if you could remember which disk goes in which spot, and then we might be able to bring it back.
      The escalation engineer repeated your scenario in the lab and it worked fine for him-but he used drives that support device IDs.
      We can file an Request for Future Enhancement if you want asking that zfs be changed to deal with drives that don't have device IDs; however, zfs was designed around device IDs, this is unlikely to be implemented anytime soon, and this won't help your current problem.
      At this point, the solution is to restore from backup.
      --
      Gwen Nicodemus - Kernel - 303.464.4588 or X50588 - gwen@sun.com
      On-Line Service Center --------> www.Sun.COM/service/online
      To reach my manager --------> Dial my number at hit *T 77472
      Next available kernel engr --------> Dial my number and hit *T 79273
      Next available OS engr --------> Dial my number at hit *T 21287
      "...and the Problems you're describing aren't true."
      So... then how do you explain the response that I got from Ms. Nicodemus from Sun Microsystems, who talked to the escalation engineer in regards to the issue and LITERALLY wrote "We aren't going to be able to put the array back online. The zfs pool is corrupted. The only way to get the array back would be if you could remember which disk goes in which spot, and then we might be able to bring it back."
      How do you explain THAT then?
      Really???
      You want to try that one again, @Peter Pain?

    • @ewenchan1239 5 years ago

      ​@@peterpain6625
      "Also btrfs may have it's merits no doubt but for reliable storage you may reconsider trusting it."
      So long as there aren't bit level readers as far as data recovery tools are concerned, those file systems are out.
      The key to those systems (even Wendell says it) is that ZFS and RAID of any kind is NOT backup. Therefore, you have to rely on backups to be able to recover/restore the data should the zpool end up getting corrupted to the point that it is causing persistent kernel panic reboot loops.
      Literally, Sun's answer to my support case was: restore from backup (except that my server running ZFS WAS the backup copy and the backup copy FAILED).
      LTO-6 plus all of the tapes for my total, stored capacity now, and to be able to execute a grandfather-father-son strategy, will cost just shy of $5200. The last time I looked, the LTO-6 SAS (or SATA) drive alone was like $3000 and to be able to execute a grandfather-father-son backup strategy, it would mean having enough LTO-6 tapes to TRIPLE my actual stored capacity on the server, which adds about another $2200 in tapes for my capacity (in an automatic tape loader/library). And this is so that the backup won't take an entire week each time I run the weekly update (because I KNOW that I won't be able to update it daily because then I would have to go to something along the lines of LTO-7 or the ultra expensive and up-and-coming LTO-8, whenever it gets launched, if it hasn't launched already).
      Ergo; ZFS has no data recovery tools. There are no bit readers that can scan the platter and try and reconstruct the data bit-by-bit (literally).
      Btrfs has EXACTLY the same ultimate failure mode - no data recovery tools/mechanisms. Your ONLY data recovery tool/mechanism is to restore from an ACTUAL backup solution (e.g. tape), and like I said, that's EXTREMELY expensive (because commodity hard drive sizes and total storage capacity have increased substantially, but the technology to back all of that up onto tape hasn't kept up with the pace of development the way the areal density of mechanical hard drives has).
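For context on the dispute in the thread above: on current OpenZFS, a cleanly exported pool can normally be re-imported regardless of which bays or ports the disks end up on, because `zpool import` scans each disk's on-disk labels rather than relying on slot order. A minimal sketch, assuming a pool named `tank` and Linux-style /dev/disk/by-id paths (both assumptions):

```python
import subprocess

POOL = "tank"  # hypothetical pool name

def run(*cmd):
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Cleanly export the pool before pulling or reshuffling drives.
    run("zpool", "export", POOL)
    # ...drives can now be moved to different bays/ports...
    # Import by scanning the persistent by-id links; zpool reads each disk's
    # label to reassemble the pool, so physical order does not matter here.
    run("zpool", "import", "-d", "/dev/disk/by-id", POOL)
```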

  • @Mutation666 6 years ago +1

    Definitely would like the other video you mentioned.

  • @jeremyfoor7590 6 years ago

    More content like this please!

  • @ironconquest87 6 years ago +1

    +1 Internet point for using mirrored vdevs. Storage space is cheap - speed and reliability are less so. Resilvering drives is also so much faster.

  • @blauerhunger 6 years ago +8

    I'd love to see a video about the local steam cache

  • @MrBobbybrady 3 years ago

    @ 3:30 I thought you were going to start speaking Italian :) Love your vids. Even the older ones.

  • @stephenreaves3205 6 years ago

    I love this video! I am running FreeNas at home and a Fedora Workstation. I would be interested to see if there is any performance difference with ZFS on FreeNas vs Fedora.

  • @ChuckNorris-lf6vo 2 years ago +2

    Do a new video on this server: how it's holding up today and how it can be upgraded.

  • @Witnaaay 6 years ago +1

    I'm considering using an older X79 mobo I bought second hand for my storage/ZFS server. It is old, but it has almost as many cores as the Ryzen platform and allows me more I/O in terms of SATA and PCIe. Also, it gives me quad-channel DDR3 (DDR3 is cheaper than DDR4 right now).

  • @masskilla469 6 years ago +3

    Wendell is my Hardware Huckleberry!

  • @rudde7251 6 years ago +1

    The 1 GB of RAM per 1 TB of storage rule of thumb is meant for when you have de-duplication activated for all your data; it should be deactivated in most use cases, and even when enabled, ZFS isn't so dumb that it uses it for all your data.
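Rough arithmetic behind that rule of thumb, using the commonly cited figure of roughly 320 bytes of dedup-table entry per block and an assumed 128 KiB average record size on an assumed 10 TB pool (all three numbers are approximations, not exact ZFS internals):

```python
# Back-of-the-envelope dedup table (DDT) RAM estimate.
POOL_TB = 10                     # assumed pool size with dedup enabled
RECORDSIZE = 128 * 1024          # assumed average block size (128 KiB)
BYTES_PER_DDT_ENTRY = 320        # commonly cited approximation

pool_bytes = POOL_TB * 10**12
blocks = pool_bytes / RECORDSIZE
ddt_ram_gib = blocks * BYTES_PER_DDT_ENTRY / 2**30

print(f"~{blocks:,.0f} blocks -> ~{ddt_ram_gib:.1f} GiB of RAM just for the DDT")
# Smaller records (e.g. 8 KiB zvol blocks) multiply this by 16x, which is why
# the 1 GB per TB guidance mostly matters when dedup is actually turned on.
```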

  • @jeff86ing 6 years ago +17

    Will you guys do some sort of network security/privacy guide?

    • @EmilePolka 6 years ago +5

      Check the Level1Linux channel, I guess.
      They tackled some stuff there (e.g. network-wide VPN using pfSense).

  • @0M9H4X_Neckbeard 6 years ago

    Very very cool

  • @kilquik88 6 years ago +3

    Steam cache sounds amazing.

  • @npm2415hui 5 years ago

    Would love to see a tutorial on auto transcoding video and proxy stuff!

  • @los9694 5 years ago

    It would be cool to have some brand/model info on these various bits in the info.

  • @bertnijhof5413 3 years ago

    Like Wendell said, I did run into the network overloading my CPU for ZFS backups in 2020!
    The network is 1 Gbps and my ZFS incremental backup only reaches 200 Mbps, and that is caused by the network process creating a 95% load on one of the CPU threads of my 2003 Pentium 4 HT (3.0 GHz); the other thread reaches ~80% :) The system has been in use since mid 2019 and it has 1.21 TB of leftover disks (2 x IDE 3.5" and 2 x SATA-1 2.5").
    I hope that the improvements in receive for OpenZFS 2.0 will improve the speed to 300 - 400 Mbps. They bypass the L1ARC and write the received record directly to the dataset.
    It would also help (10-20%) if I swapped the 1024 MB and 256 MB DDR sticks (400 MHz) for 2 x 1024 MB :)

    • @wayland7150 1 year ago

      Are you a steam engine enthusiast? It's as if you're doing modern farming using a steam engine rather than diesel. Why not put together a system based on the Xeon E3-1240 V2? It's still old but will do ZFS justice.

  • @ChadWilliamson 6 years ago

    Wendell needs to make his own Level1Distro.

  • @ImAManMann 5 years ago

    I have built several similar setups using 12-bay Dell R510s: a single enclosure with about 45TB of usable space, and over 10GbE it runs at up to 400+ MB/s read/write to disk. I am using a 500GB SSD as the cache instead of Optane... at about a year the SSD shows very little wear. It ran probably about 5k all in... If I did it now I would probably try it with a 12-bay R720.

  • @jgould30 5 years ago

    So I have to comment on the whole ZIL and L2ARC part. As you mentioned using the same disk for both with partitions isn't recommended. You mentioned ideally using 2 in a mirror. Either you misspoke or you are confused. You want dedicated device(s) for both. L2ARC being an extension of your ARC means that you first would want to max out your RAM, which will be quicker. ZFS is tuned and designed in such a way that you want to increase your ARC first or you can actually drastically hurt performance by throwing large amounts of L2ARC in the system, as L2ARC actually utilizes your RAM to store the L2ARC headers. It takes ~128 bytes of ARC per object in the L2ARC (adds up quickly). If you have a large L2ARC you can eat up lots of your ARC, effectively trading faster RAM for NAND Flash (or Optane in this case). I know OpenZFS is working on compressed ARC/L2ARC to make better use but don't think it's released. But the fundamental behavior will always exist that L2ARC headers use your RAM and reduce available ARC. Which is why you should only add it after maxing ARC and monitoring your stats to know you will benefit from it.
    Second, mirroring the ZIL or L2ARC isn't really necessary and hasn't been for some time. Yes, it can be used to provide protection from a disk failure while retaining performance. However if you only use 1 disk and it fails ZFS will fall back to using the pool as your ZIL (with L2ARC it's just a read cache so you're just losing your pre-cache). The system will just keep running, no reboot required or anything. Many years ago a single ZIL failure would cause a crash and potential data loss if a txg hadn't been flushed to the pool. They fixed it.
    Another point at 6:10. Indeed, CIFS/SMB is not multithreaded in any implementation. You can read why if you look up how it's designed in Samba. So, a single transfer is "limited" to a single core's speed. On a modern piece of hardware that's not really an issue. However, when serving multiple clients or many individual requests, the core count helps. Each request gets its own core.

    • @bmxriderforlife1234 5 years ago

      Or he meant you use mirrored devices for each: 2 devices for ZIL, 2 devices for L2ARC.
      That was my plan, basically redundant drives.
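A quick worked version of the L2ARC header overhead described in the thread above, using the ~128 bytes-per-cached-record figure from the comment (the exact per-header size varies by OpenZFS version) and an assumed 280 GB cache device filled with 128 KiB records:

```python
# How much ARC (RAM) the L2ARC headers themselves consume.
L2ARC_BYTES = 280 * 10**9        # assumed 280 GB cache device
RECORDSIZE = 128 * 1024          # assumed average cached record size
HEADER_BYTES = 128               # per-record header figure quoted above

records = L2ARC_BYTES / RECORDSIZE
header_ram_mib = records * HEADER_BYTES / 2**20

print(f"~{records:,.0f} cached records -> ~{header_ram_mib:.0f} MiB of ARC spent on headers")
# With large records this is modest, but an L2ARC full of small (e.g. 8 KiB)
# blocks costs 16x more RAM, which is why maxing out ARC comes first.
```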

  • @richardallankellogg 4 years ago

    Regarding RAID 5: I don't doubt that some bits on a drive could change over time (rot). But I believe the bad sector would then fail the CRC checks done on the drive. Then the RAID controller would repair the file. So the condition you tested for isn't likely to happen in practice.

  • @laughingcheeze8566 6 years ago

    Would love to see a steam cache video.

  • @sparkyenergia 6 years ago +3

    Unless something has changed in the ZFS code, your 60GB ZIL is pointless. The ZIL device has a maximum used size of half the size of your RAM, so 16GB in this case.

    • @fredio54 3 years ago

      Is this true? Quote from iXsystems: "ZFS will take data written to the ZIL and write it to your pool every 5 seconds. Here is some simple throughput math using a 1Gb connection. The maximum throughput, ignoring overheads and assuming one direction, would be .125 gigabytes per second. With 5 seconds between SLOG flushes and using a 1Gbit link with 100% synchronous writes, the most you will see written to your SLOG is 5 x .125 GB = .625 GB.
      This shows that you don't need that much space for a SLOG and can use a smaller SSD. If you have a write-intensive application that requires multiple 1Gb Ethernet connections or a 10Gb one, you can increase the size proportionally."
      So it's speed dependent, and what's the bet that the 5 seconds can be increased? And that doesn't say whether it continues to accept and acknowledge writes based only on the ZIL regardless of how backlogged it is writing down to the pool beneath. I think rather than taking anyone's word for that I'd want to examine the source code.
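The iXsystems rule quoted above, worked through for a few link speeds; the 5-second transaction group flush interval is the traditional default (and is tunable), so treat these as rough upper bounds rather than hard limits:

```python
# Upper bound on SLOG usage: sync-write data that can arrive between txg flushes.
TXG_FLUSH_SECONDS = 5                    # traditional default, tunable
LINKS_GBPS = {"1 GbE": 1, "10 GbE": 10, "40 GbE": 40}

for name, gbps in LINKS_GBPS.items():
    bytes_per_sec = gbps * 1e9 / 8       # line rate, ignoring protocol overhead
    slog_gb = bytes_per_sec * TXG_FLUSH_SECONDS / 1e9
    print(f"{name}: at most ~{slog_gb:.2f} GB of sync writes per flush window")
# Even a 10 GbE link only fills a few GB per window, which is why a 60 GB
# SLOG partition is far larger than the pool can ever use at once.
```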

  • @ericwright8592 6 years ago

    Steam cache guide?!? Yes please!

  • @envidjunkie 5 years ago

    It’s funny that sneaker net is still faster in a lot of cases.

  • @Malfunction142a 6 years ago

    My brain just popped.

  • @UrpleEeple 6 years ago

    What mirror configuration did you guys go with exactly? 2-disk mirror vdevs? 3-disk? 4-disk?

  • @alexlu9101 3 years ago

    Is there any doc explaining the details of ZFS tunables, e.g. how to change the commit interval within FreeNAS?

  • @eldizo_ 6 years ago +3

    Yes infodump me more!

  • @christopherworthen3260 6 years ago +1

    That looks like a repurposed NetApp disk shelf; is that what you're using? If so, is there anything special that you had to do in order to use it?

  • @AayVy 6 years ago

    Please do a tutorial for the Steam cache, as it's something I have wanted to do for a long time.

  • @VirendraBG 4 years ago

    Which rack case with 12 hot-swap 3.5" HDD bays would you recommend for a DIY NAS?

  • @StephenMcGregor1986 5 years ago +1

    What are people's opinions on ZFS vs Btrfs / CephFS (with BlueStore) / XFS?

  • @brendinemslie8226 5 years ago

    I am building out something similar. I'm looking for SAS-to-SATA interposers and I noticed you are using those to get multipathing. Can you share the model number of the interposers in the tray?

  • @TheDukeOfZill 5 years ago

    Question!! A bit unrelated, but maybe you can help :) Got Win10 installed, BIOS is set to RapidStorageTechnology Premium with Optane. Optane is enabled on the Win10 boot drive.
    Now I want to install FreeBSD on a 2nd drive (obviously not using Optane, however I don't wanna keep going into the BIOS to keep turning AHCI on/off while I boot between OSes.)
    If I have RST enabled in the BIOS, FreeBSD keeps "retrying/timing out" while trying to find a storage device. Turning it to AHCI lets it find the device, and ultimately a drive to install on.
    Is there a nightly or some other version of FreeBSD (or maybe in an upcoming Release) where this will be natively supported?

  • @online_now6834 6 years ago +1

    you know too much to feel safe from the robots...you need to hide Wendell..........

  • @trumanhw 2 years ago

    @7:12 ... Wendell said:
    If you lose a disk, ZFS sends an email so you can get a replacement disk.
    It might not be a bad idea to add some spare disks also, because if one of the
    HDs dies it's nice to add a replacement while keeping the failed HD plugged in.
    But we're unable to do that if all 24 slots are occupied, and these are the
    kinds of trade-offs you have to think about when building a storage server.
    _Why do we want to keep the failed drive in...? What does that do...?_

    • @Level1Techs 2 years ago

      Failed drive kept in until it's fully replaced. Failure modes are not always absolute

    • @wayland7150 1 year ago

      I did this recently. I'd built a 3-drive RAIDZ1 array and one of the drives was giving problems: a lot of errors, but because it's ZFS no real data was corrupted. Because I had a spare drive connector, I was able to connect a new drive and use the TrueNAS GUI to tell it which drive to replace and which drive was the replacement. A few hours later the new drive had taken over. I could have done the same thing by removing the dodgy drive first, but the resilver process would have taken longer because every byte would have had to be calculated from the other two drives.
      In fact I had three dodgy drives in my 3-drive system; I had bought a batch of dodgy drives cheap off eBay. So I repeated the process twice more, replacing the remaining dodgy drives with good ones.
      The thing about ZFS is it expects to have to work through errors and still give a 100% correct answer. Even if data is lost, it simply goes back to earlier data. However, it's much better if you give it good hardware in the first place. ECC memory is really not essential but is in keeping with making ZFS the best it can be. It will work OK on complete trash hardware if that's what you've got. 16GB of RAM and it will be OK.
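What the comments above describe maps to a single OpenZFS command: replacing a failing disk while it is still attached, so the resilver can copy from it where possible instead of reconstructing everything from parity. A minimal sketch, with the pool name and device paths as placeholders:

```python
import subprocess

POOL = "tank"                           # hypothetical pool name
FAILING = "/dev/disk/by-id/ata-OLD"     # placeholder: the dodgy drive, still connected
NEW = "/dev/disk/by-id/ata-NEW"         # placeholder: the replacement drive

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

if __name__ == "__main__":
    # Resilver onto NEW while FAILING stays online; ZFS reads whatever it can
    # from the old drive and detaches it automatically once the replace finishes.
    zpool("replace", POOL, FAILING, NEW)
    # Watch progress and any checksum errors encountered along the way.
    zpool("status", "-v", POOL)
```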

  • @steven44799 6 years ago

    Looks suspiciously like a certain Kyle's soon-to-be setup.

  • @joshhardin666 5 years ago

    You say you moved to RHEL for media production reasons; however, may I ask what prevented you from running RHEL in a FreeNAS VM? (Is this about video hardware acceleration? NVENC maybe?) I've got about 500GB free on my current production storage server and I'm currently testing various approaches to setting up a file server; the top 2 contenders are Ubuntu Server (just because I know it reasonably well) and FreeNAS. I'm currently testing FreeNAS and like it quite a bit; however, bhyve is a little weak when it comes to VMs (in particular I don't understand how to pass through PCIe cards to the virtual machines so I can get NVENC working in a Linux VM), and I also find the lack of native Docker support a bit disheartening (it all happens in a VM from what I understand). I would also like to do a Steam cache server (would be phenomenal for LAN parties) but that's a backburnered project. So I like FreeNAS mostly, and I've never used ZFS on Linux. I understand how to go about creating and destroying pools, but if I were to go with Linux, what would I use for a dashboard? I'd like to be able to pull up some kind of GUI with system status information similar to FreeNAS. Also ZFS scrubbing and snapshotting... is that something that I'd just set up a crontab for, similar to SMART testing? And also, how can I set up similar e-mail status messages? I like ZFS sending me reports following scrubs and upon disk problems or failures, and also SMART error messages and whatnot... is there a package for Linux that incorporates those kinds of features and works well?
    Another prospect I was maybe thinking about is Proxmox VE for the base OS, which uses KVM, supports Docker natively, etc. It seems to me more or less like an OSS VMware ESXi server, and it supports ZFS in its GUI (I'm not sure what extent the reporting happens at and what the GUI looks like), but for my purposes I could pretty easily pass a GPU through to an Ubuntu VM for media work, have my various servers like Plex and transmission-daemon in Docker containers, and I could even pass through a multiport NIC and turn it into my home router as well by way of a pfSense VM... Any thoughts?
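One common answer to the scrub/snapshot/reporting questions above is a small script run from cron; a minimal sketch, assuming ZFS on Linux and a pool named `tank` (the actual emailing would be handled by zed or whatever mail hook you already have, not by this script):

```python
import subprocess
from datetime import datetime

POOL = "tank"  # hypothetical pool name

def run(*cmd, capture=False):
    """Thin wrapper around the zpool/zfs CLI."""
    result = subprocess.run(cmd, check=True, capture_output=capture, text=True)
    return result.stdout if capture else None

if __name__ == "__main__":
    # Kick off a scrub (returns immediately; schedule this weekly or monthly).
    run("zpool", "scrub", POOL)
    # Take a dated recursive snapshot of every dataset in the pool.
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    run("zfs", "snapshot", "-r", f"{POOL}@auto-{stamp}")
    # 'zpool status -x' prints "all pools are healthy" unless something is wrong;
    # this is the output you would pipe into an email or monitoring hook.
    print(run("zpool", "status", "-x", capture=True))
```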

  • @SelfSufficient08 2 years ago

    I have a very similar setup but have never figured out how to get the IOM6 controllers to use two cables. I can easily saturate the 6Gbps but would love to know how you make both connections live. I have two cables connected but it's clear only one is in use. Any info would be helpful.

  • @abukh86 6 years ago

    I have zero interest in what you are saying but your dark magic is quite powerful and I watched the whole thing.

  • @danstone_0001 5 years ago

    Just use an HP RAID card with BBWC and flash it to support JBOD.

  • @marekkovac7058 4 years ago

    14:05 -> I hope I got it wrong, but does using Optane as an L2ARC only write data to it and never read it back from the Optane?

  • @banefsej 6 years ago

    genius

  • @charliebrownau 6 years ago

    When did Ryzen get ECC working past 1 error? Didn't you report about this ages ago?

  • @goldbrick2751 6 years ago

    Nice haircut man.

  • @mr_jarble 5 years ago

    Wait, what magic are you talking about with Origin and Blizzard games? Steam made it drop-dead easy in my push to move my Rust off my desktop and onto my quasi-server, but Origin and Blizzard refused to install onto network drives. I ended up having to mount a virtual drive to get them to install, but in doing so I lost most of the performance that I had from RAID 0 and cache.
    10Gb networking opened so many options, and blended cache got it to the point I could play games without any major lag despite them being run from a different PC. If I could get that performance back it would be awesome.

  • @RyanDurbin10 6 years ago

    I would love your dns hack video for steam!!!

  • @abvmoose87 4 years ago

    I had a RAID card which stored the array partition table on onboard flash memory. Does ZFS provide some similar kind of ability, like storing the filesystem partition table on flash memory, and maybe even a second memory module for redundancy? I'm new to ZFS; I heard you talking about write and read cache on flash memory but didn't hear anything about flash for storing the partition table. Sorry if you did and I missed it.

    • @wayland7150 1 year ago

      Yes, you can store metadata on an SSD. This speeds up directory listings and file access.
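For reference on the reply above: recent OpenZFS versions support a "special" allocation class that keeps metadata (and optionally small blocks) on SSDs. A minimal sketch, with the pool and device names as placeholders; the special vdev should be mirrored, because losing it loses the pool:

```python
import subprocess

POOL = "tank"                           # hypothetical pool name
SSD_A = "/dev/disk/by-id/nvme-SSD-A"    # placeholder SSDs for the special vdev
SSD_B = "/dev/disk/by-id/nvme-SSD-B"

if __name__ == "__main__":
    # Add a mirrored special vdev; new metadata is then allocated to the SSDs.
    subprocess.run(["zpool", "add", POOL, "special", "mirror", SSD_A, SSD_B],
                   check=True)
    # Optionally also send small file blocks (here <= 32K) to the special vdev.
    subprocess.run(["zfs", "set", "special_small_blocks=32K", POOL], check=True)
```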

  • @clarkkentglasses6443 6 years ago +2

    Can someone explain @17:02? What is the advantage of being able to keep a failed disk connected?

    • @Level1Techs 6 years ago +7

      sometimes a disk doesn't fail completely -- just has bad sectors. So you can migrate to a new unfailed disk while the old failed disk is "online" then once the system is not in a degraded state, the failing/failed disk can be pulled. Not a big deal to not do it that way though.

  • @sdaviscpcs 6 years ago

    Please, please do a video on the Docker cache.

  • @Narwaro 6 years ago

    Today I learned the hard way that NetXen 2-port 10GBE cards are not hot-pluggable. The server did not care, it really did not care, did not want to talk to it.

  • @gettingair 4 years ago

    Question: did you switch off from FreeNAS? I know this post is old but wanted to know. :)

  • @jpullen581 6 years ago +1

    Do you guys have a parts list in case someone wants to build a similar system? Specifically, what motherboard has ECC support for Ryzen; what type of ECC RAM is it (Registered or Unbuffered); what speed is it; etc.?

    • @toymachine4253 6 years ago +1

      jpullen581 I Googled the ASRock Taichi motherboard mentioned, and came up with a Reddit post stating that ASRock confirmed their motherboard will operate ECC RAM in ECC mode, if the OS supports it.

    • @jpullen581 6 years ago +1

      toy machine thanks, now I feel lazy for not googling it. I just didn't know if Ryzen supported ECC or if it had to be an Epyc processor. Thanks again.

    • @toymachine4253 6 years ago

      jpullen581 No problem, I was curious anyway. I'd like to see a video on your questions, or basically optimizing a Ryzen set up, maybe spell it out a little better, step by step like he said, for Luddites like me.

    • @davidporowski9512 5 years ago

      Just Perfect for running a porn server too, I imagine.
      thanks for the info to share.👍

  • @Narwaro 6 years ago

    ZFS is starting to look a bit long in the tooth. I think the patent/licensing things killed it, but that was the only way for Sun to protect it from being eaten up. The real bummer for me is the missing flexibility: I add and remove drives all the time (btrfs is still alive!). I actually experimented with GFS2 and other shared filesystems with my SAN, but that is only meh and really a pain to set up.

    • @mdd1963 4 years ago

      I don't think a few commands to install it, start the service, and create the array are any huge burden against ZFS...

  • @smeuse 6 years ago

    What is that disk chassis make/model?

  • @kristeinsalmath1959 4 years ago

    So, the recommendation is to use Optane as cache and ZFS on a RAID controller with battery, right?

    • @williamp6800 4 years ago +2

      Kristein Salmath no, you don't use a hardware RAID controller with ZFS. It wants/needs direct control of the disks. Some RAID controllers, like some from LSI, can be flashed from RAID mode to HBA mode (Host Bus Adapter) so the controller just passes the disks through like a SATA port on a motherboard. Also, the write cache he's using the Optane for is optional and often not necessary, depending upon your hardware and usage. There is a very good write-up on appropriate hardware for ZFS on the FreeNAS forums.

  • @pyrocro 6 years ago

    Kindly please do the video for the game server cache.

  • @BikingWIthPanda 6 years ago

    what do you do for VRM cooling? I've been paying a lot of attention to it lately.

  • @DAVIDGREGORYKERR 6 years ago

    What about a 128-drive Optane RAID array? I have been thinking, and I have read somewhere that FreeBSD will run any Linux package, but Debian or any other Linux cannot, except NetBSD and GhostBSD.

    • @samzx81 6 years ago

      I think FreeBSD implements the Linux system calls (the FreeBSD system calls have a large offset, I believe; my assessment is that this would make implementing the Linux system calls a lot easier as there wouldn't be any conflicts). But not the very latest stuff; I think it might do up to kernel 2.6. Also, this is just from memory so I may very well be wrong.

  • @R2053 6 years ago

    12:16 Am I the only one that heard like a meow or something?

  • @ChuckNorris-lf6vo 2 years ago

    I think you should have used raidz2 and more cache.

  • @samury2041 6 years ago

    Hey, any chance you know how to fix the BSOD error KMODE EXCEPTION NOT HANDLED? I just set up a new PC the other day, top of the line, set up drivers and all the necessary hardware. But randomly while playing a game I get that BSOD error code. I've looked and looked online but there are no concrete answers. Also, I have the new Intel Optane SSD 900P set up as my boot drive and sometimes run into rebooting and startup issues. I'll get the message "The ProfSvc failed the sign-in. User profile cannot be loaded." I feel like the system is a little buggy for a new PC. I'm quite new to all this so I could use some help. Doesn't help that my shitty wireless is all over the place.

  • @johncnorris 6 years ago

    Does anyone monitor the percentage of spares used on a HDD to determine if a drive is approaching a failure point?

    • @mdd1963 4 years ago

      Nope... when it fails, it fails.
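On the question above: reallocated/spare-sector counts are exposed through SMART, so they can be polled rather than waiting for an outright failure. A minimal sketch using smartmontools' `smartctl` (assumed to be installed), with the device path and alert threshold as placeholders:

```python
import subprocess

DEVICE = "/dev/sda"        # placeholder drive to check
ATTRIBUTE = "Reallocated_Sector_Ct"
THRESHOLD = 0              # alert on any reallocation; tune to taste

if __name__ == "__main__":
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if ATTRIBUTE in line:
            raw_value = int(line.split()[-1])   # RAW_VALUE is the last column
            print(f"{DEVICE}: {ATTRIBUTE} = {raw_value}")
            if raw_value > THRESHOLD:
                print("Drive is remapping sectors; consider replacing it.")
```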

  • @patrickbentley4038 6 years ago

    What was the model of the disk shelf?

  • @maddogfarg0 6 years ago

    Have you tried Nexenta?

  • @MsJinkerson 5 years ago

    nice computer sitting next to you

  • @TheMarkFerron6 6 years ago +2

    So you said LAN party...

  • @mworld 6 years ago

    I built an array with 3TB drives... guess what size they don't sell anymore?

    • @mdd1963 4 years ago

      Who is 'they'? (Find some on Amazon.) You can use 4 TB drives... then when all the drives are 4 TB, the size will be adjusted upward.

  • @jpullen581 6 years ago

    How does the Ryzen 5 2400G fare in Fedora? Would it be a good low cost alternative?

    • @mdd1963 4 years ago

      File serving is not all that CPU intensive... I'd think a quad core would be fine.

  • @IzzyIkigai 2 years ago

    So wait.. Just to make it clear... You put your SLOG on a single device? o.o

  • @fbifido2 4 years ago

    Can you cluster ZFS with 4x Servers?