Intro To Software Defined Storage! Hardware vs. Software RAID & ZFS!

  • Published 30 Sep 2024

Comments • 142

  • @Bunjamin27
    @Bunjamin27 4 years ago +163

    Ryan should start doing Level099Techs, where he has a couple drinks and tries to explain things to tech enthusiasts who are less-knowledgeable..

    • @CassioT989Studios666
      @CassioT989Studios666 4 years ago +1

      only if he drinks vodka or cachaça, a sip every x minutes

    • @Bunjamin27
      @Bunjamin27 4 years ago +2

      @@CassioT989Studios666 - Yeah, no White Claw.

    • @madkvideo
      @madkvideo 4 years ago +4

      RyanTechTips

    • @chukah9484
      @chukah9484 4 years ago +1

      bump - I was about to post something like this, but glad to see it's the #1 comment. I want to get into starting my own homelab server for growing my knowledge, but this is nowhere near my professional field or level.

    • @Bunjamin27
      @Bunjamin27 4 years ago +5

      "Listen here, stupids.." I can almost hear it..

  • @winthropliquorsreviews7984
    @winthropliquorsreviews7984 4 years ago +1

    Please do the next Linus event!

  • @randydowdy4064
    @randydowdy4064 4 years ago

    Love your videos, but I just had to say that Microsoft Windows Server has offered a software-defined storage product called Storage Spaces since 2012. It's not bad; it does not have ZFS, but it does have Microsoft ReFS. I had to say this because Microsoft offers it in their server products and not only in System Center. docs.microsoft.com/en-us/windows-server/storage/storage-spaces/deploy-standalone-storage-spaces

  • @IdunRedstone
    @IdunRedstone 4 years ago +122

    Love how cause of AMD I see a dual 28 core cpu system and go "is that it?"

    • @chrisbaker8533
      @chrisbaker8533 4 years ago +5

      Oh yeah.
      I would love a pair of those 64c/128t beasties.

    • @Phynellius
      @Phynellius 4 years ago +5

      There are still benefits to the more mature platform intel has with certain builds, mostly due to growing pains from AMD’s very promising platform. For a hardware enthusiast though I hear you loud and clear

  • @ewilliams28
    @ewilliams28 4 years ago +40

    As an admin for an EMC Compellent SAN that's about to double in size, I really need a "UHH let's not" video.

    • @linuxinstalled
      @linuxinstalled 4 years ago +2

      Forgive me for not knowing, but what happened to EMC after Dell purchased it?

    • @JSLEnterprises
      @JSLEnterprises 8 months ago

      @@linuxinstalled EMC purchased Dell btw.

  • @erisdiscordia5547
    @erisdiscordia5547 4 years ago +30

    Would have been cool to also include a quick look at Ceph or Gluster. Might also be an interesting video in and of itself, looking at different software-defined storage solutions, comparing them in regards to features and performance.

  • @johnbrooks7350
    @johnbrooks7350 4 years ago +65

    I love the podcast but this is the cool stuff that comes around every once in a while. Very cool.

  • @starchild2167
    @starchild2167 4 years ago +15

    Does the Yellow Server sit on Sacred Ground ? I noticed you removed your shoes.....

    • @DaemosDaen
      @DaemosDaen 4 years ago

      Looks like new carpet.

  • @bertnijhof5413
    @bertnijhof5413 4 years ago +21

    Modern Software Defined Storage on Truly Ancient Hardware.
    I back up my Ryzen desktop with Ubuntu on ZFS to a Pentium 4 with FreeBSD on ZFS. Only FreeBSD supports 32-bit for ZFS :) My backup server is a build based on the remains of a 2003 HP D530 SFF with a Pentium 4 HT (3.0 GHz), 1.25 GB DDR (400 MHz) and 4 leftover HDDs totalling 1.2 TB (2 x 2.5" SATA-1 and 2 x 3.5" IDE). Both systems are protected against the frequent power failures by an Avtek 1200W surge protector.
    Total server cost: DOP 1000 ($20) for a third-hand Compaq Evo tower and a new locally bought 500W iTech power supply, coincidentally coming with 2 SATA and 2 Molex connectors. The case sticker says: "Intel Inside Pentium 4" and "Designed for Windows 2000 Professional / Windows 98".
    The system runs 32-bit FreeBSD 12.1 on ZFS with XFCE, XRDP and Conky, and it is powered on for…

    • @Mr.Leeroy
      @Mr.Leeroy 4 years ago

      LGA1366 is dirt cheap and is like a supercomputer compared to a P4.

    • @bertnijhof5413
      @bertnijhof5413 4 years ago +5

      @@Mr.Leeroy Well, I just paid $20 mainly for the power supply, and I reused two 320GB laptop HDDs plus a 250GB and a 320GB IDE HDD that had been stored in a cabinet for 3-10 years. I only need the PC once a week, and today I did the whole backup in ~20 minutes. Part of the fun for a Dutchman is reusing completely written-off hardware.
      Besides, the ZFS snapshots take a second, and afterwards I can continue to use the Ryzen desktop normally while the snapshots are sent to the backup server. So for me it is irrelevant whether it takes 1 minute or 60 minutes.

    • @George-664
      @George-664 4 years ago

      Maybe with rsync? It is possible to make rsync consume less CPU.

    • @bertnijhof5413
      @bertnijhof5413 4 years ago +1

      @@George-664 The main load is caused by the network process that handles many, many 1500-byte frames per second. Rsync would be a disaster for backing up ~50 VMs with 5 to 40 GB files. ZFS only sends the modified records, not whole files.
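
      A minimal sketch of the snapshot-and-incremental-send workflow described here, assuming a hypothetical source dataset tank/vms and a backup host named backupserver (names are illustrative, not from the thread):

        # take a new snapshot on the desktop (instant, copy-on-write)
        zfs snapshot tank/vms@2020-05-10
        # send only the blocks changed since the previous snapshot,
        # receiving them on the backup box over SSH
        zfs send -i tank/vms@2020-05-03 tank/vms@2020-05-10 | ssh backupserver zfs receive -F backup/vms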

  • @gracefullyinsane581
    @gracefullyinsane581 4 years ago +8

    I'd love to see your thoughts on Proxmox/Ceph/HA and where it fits in with ZFS and LVM or so.

  • @tracyrreed
    @tracyrreed 4 years ago +5

    If you are really interested in software defined storage you have to look at Ceph. It's awesome. I've been using it for a few years and I am amazed.

  • @eTwisted
    @eTwisted 4 years ago +5

    Proxmox and Ceph? HA VMs and data?

  • @jesset9585
    @jesset9585 4 years ago +1

    Isn't this exactly why JBOD became a thing? Give a lot of storage to another controller system? Also, just a personal request: can you wear shoes when working in IT environments? I know it's my hangup, but it's a cringe thing for me. Don't risk your toes.

  • @steven44799
    @steven44799 4 years ago +7

    A Ceph/Gluster video would go well with this. This covers RAID in a box; then you can have RAID across multiple boxes as you step up in complexity/availability.

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 4 years ago +6

    "If you were trying to do that on a PCIe card..." [...] "That's why we haven't seen NVMe RAID cards..."
    Someone hasn't gotten their hands on NVMe RAID cards from Broadcom yet...?
    www.broadcom.com/products/storage/raid-controllers/tab-12gb-nvme
    (The current gen is even PCIe 4.0)
    But Broadcom is also going on my blacklist: "Broadcom is going to stop validating drives, the drive manufacturers are to just send us their opinion on whether the drives work with our stuff".
    Hooray for Enterprise-level support!
    *Edit*: Not arguing against ZFS on a server - but is it too much to ask for something insanely fast and large that can be used on a Windows machine locally?

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 4 years ago +1

      @starshipeleven
      Don't like Windows Storage Spaces because, for some reason, random 4K performance sucks compared to AMD's NVMe RAID feature, for example. Also, there's no booting from these spaces, which limits the use cases.

    • @kuhluhOG
      @kuhluhOG 4 years ago +1

      @@abavariannormiepleb9470 I think the OpenZFS people are currently trying to port ZFS to Windows.
      From my experience, adding (and using) filesystems that aren't normally installed on Windows is annoying, to say the least, but let's see how that's going to work out.

    • @andrewjohnston359
      @andrewjohnston359 4 years ago

      I'm testing a HighPoint SSD7103 for use as a VM OS and datastore in Hyper-V under Windows Server with 4 NVMe M.2 SSDs. Limited to RAID 1 or 10, but so far so good, and it means no need for Windows Storage Spaces.

    • @pproba
      @pproba 4 years ago

      I have one of those. Well, an Intel branded one, to be precise. It works really well on a Windows machine and does exactly what I expect it to do. I've never actually tried to connect an NVMe drive to it, though.

  • @DeathBringer769
    @DeathBringer769 4 years ago +7

    I like how when you mouse over the thumbnail the preview is Wendell magically waving his hand over the hard drive racks, lol.

  • @pianoplayer88key
    @pianoplayer88key 4 years ago +4

    "ZFS is gonna be amazing when it will properly support having mixed speeds of disks" ... I'd also like to see it support some other things that I've heard (from Linus) that UnRAID supports - adding different drives to an array individually, *AND* if more drives than your parity / redundancy fails, you ONLY lose the data on those drives, NOT ALL your data.

  • @trumanhw
    @trumanhw 4 years ago +3

    Please please PLEASE please PLEASE Wendell ... PLEASE do a video talking about the advantages of LSI Disk Shelves ...
    And -- perhaps do an "Ask me EVERYTHING" -- A Troubleshooting video dedicated to making ZFS systems work optimally.

  • @scottleggejr
    @scottleggejr 4 years ago +2

    Our definitions of "software defined storage" differ, in that SDS should include an API for management. Create a LUN, volume, mirror, clone, etc. Manage those by growing/shrinking, and every other aspect. The vision of what EMC's ViPR used to be is a good example of an SDS platform, as is IBM's Shark, Hitachi USP+VSP, DataCore, NetApp w/ OCAPI + WFA, etc.
    Your definition isn't 'software defined storage' per se, mostly just software-defined RAID and volume management. More modern versions of this are erasure coding and Ceph.

  • @heckyes
    @heckyes 4 years ago +6

    JBOD master race. Holler!

  • @ArthurOnline
    @ArthurOnline 2 years ago +1

    Can you make a video explaining Ceph vs ZFS? Thank you!

  • @itsdeonlol
    @itsdeonlol 4 years ago +6

    Man, I love when Wendell shows us some server stuff!!!

  • @Simon74
    @Simon74 4 years ago +4

    Every time I see a cool video about servers and NAS I want one, but I don't even know what I would store on it...

    • @Simon74
      @Simon74 4 years ago +1

      NotMyName69 MonsterHunter
      Better answer than I ever expected. Thanks.

    • @freestinje
      @freestinje 4 years ago +1

      I've thought about this as well and the best I can come up with is plex/jellyfin

    • @marble_wraith
      @marble_wraith 4 years ago +1

      spam emails 😁

  • @first-thoughtgiver-of-will2456
    @first-thoughtgiver-of-will2456 4 years ago +1

    Please review Ceph (possibly with Rook over Kubernetes) and the Paxos algorithm!

  • @Stradlverius
    @Stradlverius 4 years ago +2

    I like to watch these videos and pretend like I know what you're talking about.

  • @agenericaccount3935
    @agenericaccount3935 4 years ago +8

    Every time a tech tuber mentions RAID I have no idea what they are talking about because I am not into that world. Tucking into this with zeal, thanks for publishing it.

  • @FunkyDeleriousPriest
    @FunkyDeleriousPriest 3 years ago +2

    Big fan of ZFS. Glad to see you covering it here in detail. I hope something as good comes along with a license that's better for the Linux kernel. BcacheFS sounds promising, but I think it's still got a long way to go. One wonders if HAMMER2 could compete with ZFS and if it would ever be ported to Linux.

  • @JoeGrimer
    @JoeGrimer 3 years ago +3

    Man this is one of the most densely packed 17 minute videos... thanks!

    • @Level1Techs
      @Level1Techs  3 years ago +1

      Glad you liked it! ~ Editor Amber

  • @leadiususa7394
    @leadiususa7394 4 years ago +1

    Hardware wins every time in my view! Great video, and I love the Dell XD-720 server array. I got two of them! Check my video on the 720 and more on my storage deployments.

  • @valdius85
    @valdius85 4 years ago +1

    I don't understand a single thing.
    I love these videos.
    You show complicated terms on the screen and this is enough for me to find information.
    Thank you so much.

  • @timothygibney5656
    @timothygibney5656 4 years ago +1

    Hardware RAID is everywhere thanks to VMware. Since everything is on VMware, it therefore requires hardware RAID.

  • @adrianteri
    @adrianteri 4 years ago +1

    What's the background music by Kevin MacLeod that's always playing? e.g. @ 3:00

  • @swayne1441
    @swayne1441 4 years ago +2

    Love ZFS; been using it for a few years now on my home NAS setup.

  • @riboflavin1806
    @riboflavin1806 4 years ago +3

    That google server looks so cool

  • @gglovato
    @gglovato 4 years ago +1

    I'm still on the HW RAID bandwagon. I like the dependability they have, and for the vast majority of the uses I see here it works far better than any software-defined thing.

  • @opelss2
    @opelss2 4 years ago

    LVM is not SDS; it is a device mapper. It is a very dumb thing to say; it is like saying BTRFS is SDS. NexentaOS is SDS, like TrueNAS and many other SDS solutions.

  • @3v068
    @3v068 6 months ago

    Dude, you got me to go from a Dell R710 server with hardware RAID to software RAID.
    I put 3 years into the R710 and never got it working right. I put 3 hours into TrueNAS with consumer hardware and it's been performing GREAT, even if it drops out sometimes.

  • @moyam01
    @moyam01 4 years ago +1

    BTRFS some love?

  • @RhandomNewb
    @RhandomNewb 4 years ago +1

    ZFS still can't increase a pool one (or a couple of) drives at a time though, right? Being able to add another drive and recalculate all the parity is still a work in progress?
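
    For context, that was true for raidz at the time: you generally grow a pool by adding whole vdevs, or by attaching disks to mirrors, rather than by adding single disks to an existing raidz vdev. A hedged sketch against a hypothetical pool named tank:

        # grow the pool by adding a whole new mirror vdev (two disks at a time)
        zpool add tank mirror /dev/sdc /dev/sdd
        # or turn a single-disk vdev into a mirror by attaching a second disk
        zpool attach tank /dev/sda /dev/sdb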

  • @joealtona2532
    @joealtona2532 4 years ago +3

    lvcreate -n notazpool 😂

  • @frankwalder3608
    @frankwalder3608 2 years ago

    Interesting video, though I doubt I have the need or skill for a NAS running ZFS. I do know “RAID” is not “Backup”. I think you should wear shoes next time.

  • @rogernevez5187
    @rogernevez5187 2 years ago

    3:44 *Why SOCKS but NO SHOES?*
    Some kind of dust control or just a new fashion trend ????

  • @freddobrowski2974
    @freddobrowski2974 2 years ago

    I have a Dell R720 hooked to a 12-bay JBOD unit, looks like the same kind. The JBOD is working, but it has yellow and green lights on the drive trays and it beeps once every couple of minutes.

  • @DeusSavage
    @DeusSavage 4 years ago +1

    I recognize this music from something else I used to watch, but I cannot figure it out.

  • @OfBronzeandBlaze
    @OfBronzeandBlaze 4 years ago +2

    I’ve been waiting for this!

  • @aikiwolfie
    @aikiwolfie 4 years ago

    Any excuse to talk about ZFS. You may have a problem sir :p

  • @b2bb
    @b2bb 4 years ago

    Why doesn't it surprise me that Wendell wears jorts..?

  • @citizensteve6713
    @citizensteve6713 2 years ago

    This raid was brought to you by raid…..shadow legends

  • @Sarielal77
    @Sarielal77 4 years ago +2

    What about IPFS?

  • @DesertCookie
    @DesertCookie 4 years ago

    Does AMD have some sort of hardware-level acceleration for software raid on AM4 like Intel (15:00)? Is it only on TR4?

  • @dfitzy
    @dfitzy 4 years ago +1

    "tries to accidently delete...."

  • @tommihommi1
    @tommihommi1 4 years ago +1

    Software Defined *
    * enter appropriate term for your field of technology to create a custom buzzword

    • @AugustusBohn0
      @AugustusBohn0 4 years ago +2

      software defined...[throws dart] point of sale systems!
      they're like regular point of sale systems, but... never mind, there's no difference.

  • @andljoy
    @andljoy 4 years ago +3

    I am sorry, but I would not trust VROC as far as I can throw the chiller Intel used for its 5 GHz 28-core.

  • @juanlemod
    @juanlemod 1 year ago

    I wish I had your deep, manly voice.

  • @EmilRaeven
    @EmilRaeven 4 years ago +1

    Can you do a video about object storage?

  • @servalous
    @servalous 2 years ago

    RAID controllers were made to offload the work of managing and controlling the RAID array onto the RAID HBA, because the CPUs back then were not that powerful. Now it's different, but when using an SDS that doesn't use proper protocols the CPU load will have a bigger impact, especially when going into a cluster...

  • @youp1tralala
    @youp1tralala 4 years ago +1

    Next we need cron vs systemd timers

  • @MichaRutkowskiEngineering
    @MichaRutkowskiEngineering 4 years ago

    OK, I have an i9, 32 GB of RAM, 6x 2 TB 7200 RPM drives, and ZFS performance is bad. I would appreciate a "how to tune ZFS"; both database and Zabbix servers are on this pool and I have performance issues.
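
    A few of the knobs such a tuning pass would typically start with, sketched against a hypothetical dataset tank/db (the values are illustrative, not recommendations for this exact setup; arc_summary ships with ZFS on Linux):

        # match the record size to the database page size (e.g. 16K for InnoDB)
        zfs set recordsize=16K tank/db
        # lz4 compression is usually a net win on spinning disks
        zfs set compression=lz4 tank/db
        # skip access-time writes on every read
        zfs set atime=off tank/db
        # then watch what the pool and the ARC are actually doing
        zpool iostat -v tank 5
        arc_summary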

  • @AndrewMerts
    @AndrewMerts 4 years ago

    What gives? Software Defined Storage video without a mention of Ceph, Gluster, et al. I'm sorry, are you from the past? Not knocking on ZFS but I feel like clustered storage should have been a large chunk of this video.

  • @GizmoFromPizmo
    @GizmoFromPizmo 3 years ago

    "Such as compression... and some of the other stuff." Successfully avoided mentioning data deduplication - because it is such a nightmarish kluge.

  • @KC-rd3gw
    @KC-rd3gw 1 year ago

    ZFS send/receive is one of my favourite features, personally. It's extremely fast since it's a block-level transfer. I can clone 5 TB of datasets from my desktop to my server rack in about 8 hours, compared to twice that for rsync.
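
    A rough sketch of that kind of whole-tree replication, assuming a hypothetical source pool tank and a destination pool backup on a host called rackserver, with pv only there to watch throughput:

        # snapshot every dataset in the tree atomically
        zfs snapshot -r tank@migrate
        # send the full tree (datasets, properties, snapshots) and receive it unmounted on the server
        zfs send -R tank@migrate | pv | ssh rackserver zfs receive -Fdu backup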

  • @peterconnell2496
    @peterconnell2496 4 years ago

    Even the humble X570 chipset alone offers an amazing array of I/O on an 8 GB/s bandwidth pool:
    6+2 SATA, 2x 4 GB/s ports (either a PCIe 4.0 x4 slot or an M.2 PCIe NVMe port), and 3x PCIe 4.0 x1 (1 GB/s each) for further NVMe adapters.

  • @bob71014
    @bob71014 3 years ago

    Funny to me that "software defined storage" is new.
    This was normal BAU on Solaris and AIX in the 90s.

  • @LucasHartmann
    @LucasHartmann 4 years ago

    I have used LVM cache on the desktop to get some extra mileage out of an old SSD+HDD pair. It works great for incremental rsync and unison. Sadly, Fedora is buggy if you use it on the root filesystem... It was supposed to be faster, but instead it marks the entire cache as dirty and takes 30 minutes flushing on every boot.
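
    For reference, a minimal sketch of the kind of LVM cache setup being described, assuming a hypothetical volume group vg0 with the slow LV vg0/data on the HDD and the SSD at /dev/sdb:

        # add the SSD to the same volume group as the slow logical volume
        vgextend vg0 /dev/sdb
        # carve a cache pool out of the SSD and attach it to the slow LV
        lvcreate --type cache-pool -L 100G -n cache0 vg0 /dev/sdb
        lvconvert --type cache --cachepool vg0/cache0 vg0/data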

  • @hgbugalou
    @hgbugalou 4 years ago

    I finally jumped ship and bailed on hardware RAID on my home file/Plex server and went over to Windows Storage Spaces. There was a bit of a learning curve, but now that I have it set up it performs very well and, more importantly, I can scale it horizontally extremely easily.

  • @bionicgeekgrrl
    @bionicgeekgrrl 4 years ago

    At some point I really need to look at ZFS. I've stuck with XFS + mdadm RAID 5 on Debian for years now with good reliability and performance.
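
    For anyone curious what that stack looks like, a rough sketch (device names and mount point are placeholders):

        # build a 4-disk RAID 5 array with mdadm
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
        # put XFS on top and mount it
        mkfs.xfs /dev/md0
        mount /dev/md0 /srv/storage
        # keep an eye on rebuilds and checks
        cat /proc/mdstat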

  • @joshhardin666
    @joshhardin666 4 years ago

    I've been using FreeNAS in my homelab for the past year with ZFS and it's been a fantastic experience - significantly better than my previous LVM RAID 6 with lvmcache configuration, and it technically provides a much higher level of data integrity checking and protection. As Wendell was saying, however, I REALLY wish that ZFS could support seamless multi-tiered storage. I really wish that ZFS would implement a better system for defragmentation (because it's CoW, once you start getting into 80+% utilization disk fragmentation becomes an issue, and there's no way to defrag other than moving your data to another pool, which is a serious pain in the neck). And I also wish that ZFS would allow for vdev expansion, so you don't end up wasting so much additional space on redundancy or have to strategically buy up and roll out 6-8 disks at a time for a new vdev to add to the pool, which for many users (home users in particular) means a new shelf or a whole new server. Otherwise ZFS is really fantastic. It's certainly ready for prime time, and yeah, hardware RAID controllers are deader than dead now. ZFS is way more functional and provides way more advantages.

  • @vTuberConnoisseur
    @vTuberConnoisseur 4 years ago

    As a systems engineer, I really quite like working with native NetApps because, generally speaking, they just work, and the support is great too.

  • @darkphotographer
    @darkphotographer 4 years ago

    I'm using a Windows Server storage pool: 3 sets of RAID 1, striped in Disk Management. It works well so far for my everyday server. For my permanent storage server I'm thinking of using the parity option; the Linux side seems too complicated for me.

  • @Poodlehere
    @Poodlehere 4 years ago

    Open Media Vault or OMV is a great open source file management OS that can handle/create any file system you throw at it. Plus the OS is packed with so many great features. Since it is free and open source you never have to worry about proprietary software fees

  • @peterconnell2496
    @peterconnell2496 4 years ago

    In this context I can see a quad NVMe array being hard to fit in, but it's news to me that quad NVMe on a PCIe card on TR, for example, is not a doable resource generally - as he seems to say?

  • @anonymous-pr2sy
    @anonymous-pr2sy 4 years ago

    why do you have a bandaid server

  • @Aman4672
    @Aman4672 4 years ago +1

    SOCKS ON CARPET!!!

  • @leocomerford
    @leocomerford 4 years ago

    LVM volumes are to disk partitions roughly as memory paging is to memory segmentation?
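
    The analogy is loose but points in the right direction: logical volumes decouple the block ranges a filesystem sees from where the extents physically live, so volumes can be grown and moved without repartitioning. A minimal sketch with hypothetical device names:

        pvcreate /dev/sdb /dev/sdc          # physical volumes supply the extents
        vgcreate vg0 /dev/sdb /dev/sdc      # one pool of extents spanning both disks
        lvcreate -L 500G -n media vg0       # a logical volume carved from that pool
        lvextend -L +200G vg0/media         # grow it later without touching partitions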

  • @guss77
    @guss77 4 years ago

    I never understood the use case for ZFS. I'm not saying there isn't one, but no one has ever presented me with a compelling one where ZFS is clearly the better option: if you are in a home/small-business setting and need directly attached storage, using BTRFS is simpler and more economical (less $ per usable GB, with all the ARCs and the RAM requirements, and less manpower to set up and maintain). If you are setting up a NAS for a small/medium business, BTRFS with NFS or Samba will do if you need to saturate a 1GBps link, and if you need more, you'll be better served by an SDS like Ceph or Gluster where multiple servers provide aggregate bandwidth. If you are building a dedicated application for a large company and need massive performance, you can use the same SDS in a SAN or a dedicated solution such as HDFS or GridFS.
    ZFS seems to apply to a very limited use case: you have but a single server, but it can be (and kind of needs to be) massive like in this video (what we used to call "vertically scaled"); you worry about disk failure but not about other failures (power distribution, CPU, RAM, software, networking); and you have the manpower to engineer a ZFS setup and everything going on around it (NAS? VM storage?) but not to engineer a dedicated software solution. I've been in the IT business for almost 30 years by now and haven't seen that set of requirements in a long, long time. Frankly, it seems like a 1990s approach to storage needs and maintenance. These days we expect to do less, pay less and get more than ZFS can offer.

    • @wayland7150
      @wayland7150 2 years ago

      The requirements of ZFS are not that difficult to meet these days. In the past, 64GB of RAM would have been too expensive. The point of ZFS is to allow you to use massive hard drives reliably. If you need only small reliable storage, then I'd RAID two SSDs.

    • @guss77
      @guss77 2 years ago +1

      @@wayland7150 My version of small is 15TB - an SSD still won't cut it (at a reasonable price point), and 64GB is still a lot of money for a home server of a middle-class family. I run my array on a 2nd-gen i5 (an upgrade from the E6680 it used to run until very recently) with 2GB RAM, and that same server is also running ~10 other software services, from media streaming to messaging. You can't do that (and get reasonable performance) with ZFS, but BTRFS can live there, in a corner behind all the other stuff, and still serve multiple video streams over NFS and SMB.

  • @GameCyborgCh
    @GameCyborgCh 4 years ago

    I would like to see Wendell building a hyper-converged cluster with Proxmox.

  • @CPLBSS88
    @CPLBSS88 4 years ago

    There are not many things I despise… but hardware RAID controllers are definitely up there.

  • @ShaneMcGrath.
    @ShaneMcGrath. 4 years ago +2

    I'm too lazy for all that though, even if it's cheaper. Bought a Synology DS1019+ last year and have never been happier.
    The only thing I stuffed up on was underestimating my future storage needs. Always buy the largest drives.

  • @Jeremy-su3xy
    @Jeremy-su3xy 4 years ago

    What do you do with all that storage? Why do you need so many cores for the storage?

  • @frankwu9659
    @frankwu9659 4 years ago

    Shrinking or enlarging a filesystem on LVM is not easy, as far as I know.
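
    Growing is actually close to a one-liner on common setups; shrinking is the genuinely awkward part (XFS cannot shrink at all). A hedged sketch, assuming a hypothetical LV vg0/data carrying ext4:

        # grow the logical volume and the filesystem in one step
        lvextend --resizefs -L +100G /dev/vg0/data
        # shrinking ext4 means unmounting and doing it the long way round
        # (the filesystem must be shrunk to no larger than the new LV size first)
        umount /srv/data
        e2fsck -f /dev/vg0/data
        resize2fs /dev/vg0/data 400G
        lvreduce -L 400G /dev/vg0/data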

  • @danhadley7404
    @danhadley7404 3 years ago

    thks

  • @garryclark197
    @garryclark197 4 years ago

    why am I seeing ltt ads on Wendell's video?!?

  • @Stephen-wh7vl
    @Stephen-wh7vl 4 years ago

    These are my favorite kind of videos on this channel.

  • @niallflynn1833
    @niallflynn1833 4 years ago

    Happy to use mdadm RAID 1 on an old 4-core i5 file server.

  • @davidpeterson6147
    @davidpeterson6147 4 years ago

    The Hardware Jesus sent me to sub to your channel

  • @diulaylomochohai
    @diulaylomochohai 3 years ago

    Great talk

  • @PhuketMyMac
    @PhuketMyMac 4 years ago

    ZFS is awesome! Level1Tech is awesome!

  • @GizmoFromPizmo
    @GizmoFromPizmo 3 years ago

    Microsoft handles Tiered Storage - not at the filesystem level - but at the Storage Pool level. You tell the Storage Pool manager that drives are either SSD or HDD and it manages it all from there. It's all independent of whatever filesystem you use. The filesystem could be either NTFS or ReFS and you could have all that in the same pool. It sounds to me like ZFS doesn't have it broken up that way. I know a little about ReFS but I don't know that much about ZFS.

    • @wayland7150
      @wayland7150 2 years ago

      Microsoft are doing some very interesting and flexible things with storage. ZFS is not that flexible but is tremendous for data integrity.
      PS, ZFS is the file system and the disk managing system. Contrast with a RAID card. The RAID card munges the drives into one drive. The file system then partitions and formats without knowing it's a RAID system.
      ZFS, by contrast, munges the drives into pools and volumes, with the file system as part of it.

    • @GizmoFromPizmo
      @GizmoFromPizmo 2 years ago

      @@wayland7150 - Right. ZFS is more monolithic and less modular. With the MS system, you can manage your storage pool in a number of different ways. A storage pool might consist of 4 SSDs. You could just aggregate all that storage into one volume and the result will be a RAID 0 configuration. Or you could choose to configure these 4 drives using parity (RAID 5). You could even configure them in a mirror (RAID 1). And now that the volume type is configured, you can format it using either NTFS or ReFS.
      ZFS is a monolith. Your storage pool is your drive is your file system. It's hard for a Microsoft guy to get his head around.
      P.S. Microsoft's RAID 5 is DOG SLOW and should never be used. ReFS may help with data integrity but you don't even want it after you see how slow everything is. It's a mess.

    • @wayland7150
      @wayland7150 2 years ago

      @@GizmoFromPizmo Actually, with Microsoft you can do both: aggregate it into one large RAID 0 drive at the same time as having it as a smaller RAID 1 drive. You can have one drive that appears to be 1TB and another that appears to be 2TB. Obviously they both fill up as you fill one up.
      ZFS is very easy. Each 'partition' is actually not a fixed size but shares the same storage. Nothing stops you from building multiple sets if you have enough drives.

    • @GizmoFromPizmo
      @GizmoFromPizmo 2 years ago

      @@wayland7150 - Yeah, I didn't want to get too deep into the weeds on that. I considered it but decided against mentioning it.

  • @jessebow1375
    @jessebow1375 4 years ago

    How does one deep dive into the powers of ZFS

    • @jessebow1375
      @jessebow1375 4 years ago

      I know there's Google, but I'd like a more structured approach.

  • @karlisozolins4218
    @karlisozolins4218 4 years ago

    What is the name of the intro song?

  • @b2bb
    @b2bb 4 years ago

    digging the new transitions too

  • @karencarter964
    @karencarter964 4 years ago +5

    ZFS is still a work in progress, and software RAID works well for high-end systems. For "spinning rust", as you call it, though, hardware RAID is far less costly (enterprise equipment is inexpensive on the used market because of the typical 3-year turnover rate) and can be implemented on computers without CPUs costing thousands of dollars. Software RAID still has a bit to go before becoming as reliable and easy to implement as hardware RAID; it will get there, but it isn't there yet.

    • @Level1Techs
      @Level1Techs  4 years ago +8

      I would put a single-chassis ZFS "RAID" of spinning rust against the best LSI hardware controller any day of the week, especially with non-SAS but still higher-end spinning rust. ZFS has way more overhead but also way more internal checking. Is the hardware RAID good enough? Yes. Is it better than ZFS? Not in terms of data integrity.
      It is possible for me to shut down the LSI-based system and introduce errors that won't be detected and that will return bad data once it's booted back up. I am unable to do the same thing with a ZFS-based system; it will correct the error automatically, assuming I don't introduce too much corruption.
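
      The mechanism behind that claim is ZFS's end-to-end checksumming: a scrub walks every block, verifies it against its checksum, and repairs it from the redundant copy or parity. A quick sketch against a hypothetical pool named tank:

        # read and verify every block in the pool, repairing from mirror/parity copies
        zpool scrub tank
        # the CKSUM column shows silent corruption that was detected (and fixed)
        zpool status -v tank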

    • @karencarter964
      @karencarter964 4 years ago +3

      @@Level1Techs Dual-ported SAS gives you more hardware redundancy and protection; I wouldn't even consider doing SATA considering the price of enterprise SAS drives on the used market (the prices are insanely low). I understand you're a big fan of ZFS (many are), but the cost and management are FAR lower for hardware RAID right now. For example, for around $300 I can put in a RAID card with 8 900GB 2.5" 10K drives in a RAID 6 setup and have 2 GB/s (that's bytes, not bits) read and write on a Windows workstation. For larger server requirements, Dell MD1200/1220s are dirt cheap and, combined with an R720/710, I can have mass storage for far less than you paid for your 1U chassis. I know it's old tech, but for many people this setup works great at a low cost. I would also mention you get enterprise software to manage it all, which simplifies everything.
      I like ZFS on higher-end builds, and I agree with your points on NVMe; I just think making use of the mass of enterprise hardware on the used market can work for many people, far simpler and cheaper.

    • @Level1Techs
      @Level1Techs  4 years ago +2

      I agree it can be simpler and cheaper, and often faster. But the Level1 storage server is dual-ported to each drive and it works fine with ZFS, HA failover and all. With enterprise-ish shelves and backplanes (SATA), that mostly runs on the backplane anymore anyway.

    • @karencarter964
      @karencarter964 4 years ago +3

      @@Level1Techs All you need is a good understanding of Linux and a VM to run it; I do think we're in agreement here. For higher-end systems with a competent tech, ZFS is attractive; for the rest, hardware RAID is, as you say, "simpler, cheaper, faster" to implement. I do think software RAID is the future, but for the here and now hardware RAID still makes sense for many needs. :-)

    • @Mr.Leeroy
      @Mr.Leeroy 4 years ago +3

      @@karencarter964 ZFS is built for scalability to petabytes in real scenarios while maintaining rock-solid reliability, not for speed. You are comparing apples to oranges.

  • @skaltura
    @skaltura 4 years ago

    ZFS is also the slowest and most unreliable thing you could do for your storage. The only thing it's really good at is single-user, single-thread use, and L2ARC is quite good - shame it tends to never fully warm :(

    • @Level1Techs
      @Level1Techs  4 years ago +1

      Unreliable? Lol

    • @skaltura
      @skaltura 4 years ago

      @@Level1Techs Having had quite a few arrays nuked specifically by ZFS (well, ZoL) software issues - yes, unreliable.
      When you lose 500+ customers' data over and over again, you tend to choose something else instead - otherwise you lose those customers.
      So Wendell, while you know a lot, you don't know everything and every use case. Please don't be arrogant, but try to learn.
      ZFS is way, way overhyped.

    • @Level1Techs
      @Level1Techs  4 years ago +2

      You are not using it right, or you were bitten by some bleeding-edge bug from the days when FUSE was in use, or something. You could possibly have been bitten by hardware limitations you don't realize, like the fact that most motherboards are not super reliable if you use all SATA ports at once.

    • @skaltura
      @skaltura 4 years ago

      @@Level1Techs Well, I consulted some other DC owners at the time; they had exactly the same issues, regardless of implementation.
      There were at the time some very important failsafes missing completely.
      At the end of the day, the ridiculously bad random I/O performance was enough to drop ZFS.
      I know many who are ZFS religious, and I understand it's hard to see past a religion. Ultimately, however, it's data that matters for businesses like ours that do not have infinite budgets.

  • @madkvideo
    @madkvideo 4 years ago

    I love ZFS

  • @Felix-ve9hs
    @Felix-ve9hs 4 years ago

    R.I.P RAID

  • @FredsTech1
    @FredsTech1 4 years ago

    I want to have about 3-4 3TB disks with redundancy. What's the best way to go? RAID 5 via my motherboard (ASUS Z390-H)? Windows storage? Run Unraid bare metal and run Windows in a VM? Use an old computer via the network?

  • @Eli_Kennemer
    @Eli_Kennemer 4 years ago

    I want to see you guys min/max a home server setup.
    Without doing too too much;
    Wishlist would be:
    Bulk storage- pics,docs - wifi sync
    Steam Cache
    Plex - Music - Books
    VM to connect over network and access everything + internet/netflix & emulators.
    Ideally all fitting in a consumer chassis (small server cab or desktop case).
    Home Kit™

  • @icey_u12
    @icey_u12 4 years ago

    I wish my uni would have taught me the difference like this :/ Would have been helpful.

  • @awesomearizona-dino
    @awesomearizona-dino 4 years ago

    Unraid.