Adventures in Motherboard Raid (it's bad)

  • Published Dec 24, 2024

Comments • 415

  • @ShiroKage009
    @ShiroKage009 3 years ago +457

    RAID works by having multiple legends and heroes living in the shadows.

    • @nurnabilah1921
      @nurnabilah1921 3 years ago +3

      Hahaha

    • @Aegor1998
      @Aegor1998 3 years ago +9

      RAID SHADOW LEGENDS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    • @Knee-Lew
      @Knee-Lew 3 years ago +3

      GET OFF OF MY HEAD!!1!1!!1

    • @CreativityNull
      @CreativityNull 3 years ago +2

      Boooooo

    • @CyFr
      @CyFr 3 years ago +8

      The one time I would have accepted a raid shadow legends sponsor spot

  • @keyboard_g
    @keyboard_g 3 years ago +57

    Never again, since a security BIOS patch update ate my entire array.

    • @profosist
      @profosist 3 years ago +5

      A BIOS update for my Sabertooth X99 would cause drives to just drop, reporting them as bad even though they weren't. I caught this before I lost too many. Not all were as lucky, sadly.

  • @josephletts1093
    @josephletts1093 3 years ago +41

    This man goes through so much pain so you don't have to. Only total respect.

  • @dciking
    @dciking 3 years ago +16

    I am taking classes for my IT A+ certification tests, and we just started talking about RAID this week!!! Thanks for the info!!!

    • @sagejpc1175
      @sagejpc1175 3 years ago +2

      Good luck on your exam!

  • @daemonfox69
    @daemonfox69 3 years ago +44

    "Hard won experience" - This... I felt this when you said it. So many nights working through the AORUS RAID tools both SATA and Nvme. So many more nights making server 2019 Core work with an AORUS board to begin with.

    • @daemonfox69
      @daemonfox69 3 years ago +2

      Oh man, even better, it seems we were doing this at roughly the same time... ~4 weeks ago I built 3 AMD RAID systems to store CHIA plots long term, and one of them had weird performance that turned out to be ONE BAD CABLE. The cables were new from a box of 6, and one just wasn't up to the job.

    • @timothygibney159
      @timothygibney159 3 years ago +2

      Aorus boards are junk. I will not buy one again even if they have great caps

    • @daemonfox69
      @daemonfox69 3 years ago +4

      @@timothygibney159 to each their own. Gigabyte has always gone above and beyond for me with the couple of RMAs I've had in the last 15 years, and the only complaints I have are about RAID (which is more on AMD) and Gigabyte's choice of certain Intel LAN and WAN modules. Some aren't compatible with certain OSes due to driver silliness by Intel.
      Of all the board vendors, Gigabyte has earned the most credibility with me, and 4/5 systems in my home run on their boards. The 1 other is an ASRock build.

    • @timothygibney159
      @timothygibney159 3 years ago +1

      @@daemonfox69 I have to keep taking the CMOS battery out once a week to keep it booting. My 2nd NVMe drive keeps disappearing, and this is the 2nd Gigabyte board with the same problem. They won't RMA their boards either.

    • @Fay7666
      @Fay7666 3 years ago

      I've also had good experiences with Gigabyte, not enough to say it's my go-to, but if it's an option I'll definitely consider it.
      I wouldn't use an Aorus as a server, though.

  • @Marc_Wolfe
    @Marc_Wolfe 3 years ago +11

    "Intermittent" is the worst thing to hear when talking tech.

  • @hightech-lowlife
    @hightech-lowlife 3 years ago +53

    The only raid I would EVER do is raiding the pantry for orange soda and snacks.

    • @raven4k998
      @raven4k998 2 years ago

      NVMe for the win; it beats RAID tenfold for speed, at least.

    • @MrBearyMcBearface
      @MrBearyMcBearface 2 years ago

      @@raven4k998 but what if you RAID 0 two NVMe drives?

    • @raven4k998
      @raven4k998 2 years ago

      @@MrBearyMcBearface I was kidding, child. RAID is pointless at this point, because it no longer does anything other than use the word RAID and compromise your data, since people no longer care about RAID for data integrity.

    • @raven4k998
      @raven4k998 2 years ago

      @@MrBearyMcBearface then you RAIDed two NVMe drives for what reason????

  • @Sams911
    @Sams911 1 year ago +2

    There was a time when hardware RAID was the real "pro" deal... and software RAID was bad... have the tables turned?

    • @knietiefimdispo2458
      @knietiefimdispo2458 5 days ago

      Sometimes it flips. Then it flops. The times they are a-changin' ...

  • @nukedathlonman
    @nukedathlonman 3 years ago +18

    Well, I agree, RAID on motherboards hasn't been overly hot. But doesn't using RAID on SSDs pose problems with TRIM and also increase (exponentially) write amplification?

    • @abrahamgrams109
      @abrahamgrams109 3 years ago +1

      Depends on the kind of RAID and the parity scheme you decide to go with. RAID 0, 1 and 10 would probably pose little to no issue for SSDs; it's the levels that keep parity on each device for rebuilding, and the rebuild itself, that could potentially cause other issues. (Just going by what I was told in school.)
      Edit: I wouldn't know much about TRIM, though.

    • @amirpourghoureiyan1637
      @amirpourghoureiyan1637 3 years ago

      I imagine non-NAND flash drives would fare better.

    • @RobBCactive
      @RobBCactive 3 years ago +4

      RAID5 has natural write amplification because changing even 1 byte requires reading blocks on another disk, recalculating parity, and writing that too (see the sketch after this thread).
      RAID10 and increasing the disk budget was always a better option, as the simplicity saved more than the doubled disk count cost.

    • @creed5248
      @creed5248 2 years ago

      TRIM and optimization work with RAID as long as the array isn't dynamic.
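
      To make the read-modify-write cost described a couple of replies up concrete, here is a minimal sketch (plain shell arithmetic with made-up byte values) of what a partial-stripe RAID5 update has to do:

      ```bash
      # Updating one data block in a RAID5 stripe without rewriting the whole stripe:
      #   new_parity = old_parity XOR old_data XOR new_data
      old_data=0x3C; new_data=0xA5; old_parity=0x5A   # hypothetical block contents
      new_parity=$(( old_parity ^ old_data ^ new_data ))
      printf 'new parity: 0x%02X\n' "$new_parity"
      # I/O needed: read old data + read old parity + write new data + write new parity
      # = 4 device I/Os for one logical write, the classic RAID5 write penalty.
      ```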

  • @galdutro
    @galdutro 3 years ago +30

    Can you do a comparison between file systems that support raid (ZFS/Btrfs) and also solutions like intel rapid storage?

  • @ProcessedDigitally
    @ProcessedDigitally 3 years ago +5

    15:35 Makes sense. I once had a mobo SSD RAID0 (using two earlier SanDisks) on an MSI 990FX board for boot and OS (Win10). Performance was good when the RAID was new, but over time the writes slowed from over 600MB/s to just 150MB/s. The reads were not affected much. I even thought the disks were getting 'worn out', but I removed them from the RAID and formatted them, and it turned out the disks were pretty much as good as before. Maybe TRIM issues were the problem.

  • @pietdelaney
    @pietdelaney 8 months ago +1

    I read that the Intel board RAID works with non-Intel SSDs if you get the more expensive key.

  • @NorySS
    @NorySS 3 years ago +6

    Intel's marketing also blocked non-Intel NVMe drives from working on the Z590 platform.

    • @tron121
      @tron121 3 years ago +3

      That was fun. I tried two 480GB Optane drives on a Gen 1 Threadripper using AMD RAID... Intel forever lost points on that move. All my servers are Epyc now.

  • @Tystros
    @Tystros 2 years ago +6

    So do you recommend Windows software RAID? One big issue with Windows software RAID is that whenever the PC is shut down uncleanly (like a crash), Windows wants to do a full sync again. And when using something like a 12 TB HDD, such a full sync takes 50 hours or so. And it restarts whenever you restart your PC. Now, when your PC is never running for 50 hours straight, that full sync can never actually finish, and you hear the HDDs working the whole time while using the PC. It's not great. I haven't found a solution or better way for that yet.

    • @GamingWithUncleJon
      @GamingWithUncleJon 2 months ago +1

      You probably shouldn't be using RAID on any system that doesn't have enough expected uptime to rebuild that volume.

  • @linuxgeex
    @linuxgeex 3 years ago +2

    The writeback caching inconsistency isn't so much about whether the drives are in sync; it's about whether the writes happen in the correct order. I.e. when doing an atomic mv of one file over another, writing the metadata for the mv before writing the data of the new file to disk results in an atomic obliteration when the software stack expects this to be impossible and applies no other mitigations. Writeback allows things to be written out of order, i.e. not synchronously.
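
    The ordering the comment above relies on is the classic write-then-rename pattern; a rough sketch of it (GNU coreutils assumed, paths made up) shows exactly which steps a reordering write-back cache can betray:

    ```bash
    # Atomic replace: the new file's data must hit the disk *before* the rename
    # becomes visible, otherwise reordered write-back can leave the name pointing
    # at unwritten data after a power loss.
    tmp=$(mktemp /data/config.json.XXXXXX)   # hypothetical target directory /data
    printf '%s\n' 'new contents' > "$tmp"
    sync --data "$tmp"                        # flush the file data first
    mv -f "$tmp" /data/config.json            # atomic rename within one filesystem
    sync /data                                # then flush the directory entry
    ```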

  • @johnpaulsen1849
    @johnpaulsen1849 3 years ago +5

    Question: isn't the Disk Management RAID in Windows single-threaded, and don't they recommend Storage Spaces to take advantage of additional cores/threads?

    • @llynellyn
      @llynellyn 3 years ago +6

      Correct, the dynamic-disk software RAID used in Disk Management is considered obsolete/deprecated/legacy by Microsoft at this point (as you would expect, since it was introduced in Windows 2000/Server 2000!) and was replaced with Storage Spaces.

  • @Gogargoat
    @Gogargoat 3 years ago +39

    I'm a big fan of Linux raid10 with the f2 layout, even with just 2 drives. Read performance is identical to RAID 0, writes identical to RAID 1. Not sure if it still matters with fast SSDs (compared to the near-2 layout), but I don't really see any downsides (see the mdadm sketch after this thread).

    • @amonmetalhead7034
      @amonmetalhead7034 3 years ago +3

      I run a RAID 5 with a RAID 10 cache in front, it's excellent.

    • @Physics072
      @Physics072 1 year ago +1

      @soyel94 RAID 10 requires 4 disks, not 3. RAID 5: just don't do it, drives fail on rebuild. RAID 10 is superior to using parity when it comes time to rebuild.
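
      For the two-drive md raid10 far layout mentioned at the top of this thread, a minimal sketch (device names are placeholders):

      ```bash
      # Linux md "raid10" with the far-2 layout on just two drives:
      # reads stripe across both disks like RAID0, yet every block still has a mirror.
      mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda /dev/sdb
      mkfs.ext4 /dev/md0
      mdadm --detail /dev/md0    # confirm level, layout and sync state
      cat /proc/mdstat           # watch the initial resync
      ```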

  • @nosirrahx
    @nosirrahx 3 years ago +3

    My workstation runs 4 905P drives mounted on a Hyper 16X in VROC RAID 0. The performance was pretty untouchable when considering both sequential and 4KQ1T1 until the P5800X came out. After some tuning and OCing I get 200-220MB/S 4KQ1T1 read which is nuts for a drive that also has insane sequential read.

    • @Intelwinsbigly
      @Intelwinsbigly 19 hours ago

      905s are lovely second hand, have a bunch of them and not a one is below 90% worn.

  • @FrenziedManbeast
    @FrenziedManbeast 1 year ago

    Growing up I got into so many re-install scenarios with my PC builds due to my own ignorance about RAID. In one particular build in the early 2000s I was doing a RAID 0 setup with two WD Raptor 74GB drives using a PCI (not PCIe) RAID card. I reinstalled Windows and games so many times troubleshooting corrupted drives that to this day I remember the majority of the CD keys for my big games from that era.
    Since those days I've generally stayed away from using RAID, although recently I started messing with ZFS and Windows Storage Pool stuff. Thanks for yet another fun video, L1T!

  • @pixels_per_inch
    @pixels_per_inch 2 years ago +1

    I used RAID 0 on HDDs and at that time HDDs were offering much higher capacity per dollar. Performance is as expected; double the read and write for sequential and a slight increase for random. Been using it for about 3 years with no issues and I'm overall happy with it.
    Would I go for RAID in the future?
    Definitely not as SSDs have become so cheap. RAID on SSDs seems a little dumb because NVMe is already so fast and when PCIE 5 becomes the norm, it just wouldn't make sense.

  • @hockeylad2727
    @hockeylad2727 3 years ago +2

    Great vid as always. Also loving the background music. Sounds like bopping through cyberspace. Anyone know what it is?

    • @Level1Techs
      @Level1Techs  3 years ago +5

      Hello! This song is called Vital Whales by Unicorn Heads. I found it through the YouTube audio library. ~ Editor Autumn

  • @ericthedesigner
    @ericthedesigner 25 days ago

    The problem with M.2 RAID is that usually the first M.2 slot is direct to the CPU and the other 3 are on the chipset. So you need to look at the block diagram to see what is getting switched.

  • @pdamasco
    @pdamasco 3 years ago +2

    Thank you for bringing this up. I spent a lot of time struggling with RAID on my x370 gigabyte board and ultimately I just had to ditch the idea and bought a larger SSD with a huge gaming/backup HDD.

  • @LokiCDK
    @LokiCDK 3 years ago +15

    I'm sorry but the phrase "avoid it like the plague" has been cancelled.
    Recent evidence suggests the average person does not, in fact, make any attempt to avoid the plague.

    • @timramich
      @timramich 3 years ago +1

      Stay hiding away if you're afraid of germs

    • @thelegalsystem
      @thelegalsystem 3 years ago +3

      @@timramich I hope someone you love is taken from you

    • @Astfgl
      @Astfgl 3 years ago +2

      "Avoid it like responsibilities" is the new phrase.

    • @vgamesx1
      @vgamesx1 3 years ago +2

      @@thelegalsystem I've heard of some people knowing a friend/family member who died from "the thing" and they still don't care or say it isn't real, apparently even death isn't good enough to take something seriously these days.

    • @timramich
      @timramich 3 years ago +2

      @@thelegalsystem Thank you. You must be ultra left. You can go around threatening a president you don't like, but the minute a person says there are only two genders, they should be locked up for hate speech.

  • @KunalVaidya
    @KunalVaidya 3 years ago +2

    I wanted to set up a non-booting RAID with a B550 Aorus Master board, dug up and installed 2 unused spinny 1TB drives, etc., but stopped when I learned that even a BIOS update can damage the array. RAID plan dropped.
    Please advise on what could be a good (and safe) solution for a machine that dual boots between Windows 10 and Ubuntu 21.04. I want redundancy and speed so that I can use it as a data location alongside my main M.2 980 Pro 1TB drive. I have an old PCIe SATA expansion card; maybe that will free me from the threat of a BIOS update eating the array.

  • @chrcoluk
    @chrcoluk 3 years ago +11

    The problem with Windows software RAID is that if you have an unclean shutdown, it assumes it needs to resync data, so you get a slow rebuild forced on you; and I found out from Macrium documentation a while back that dynamic disks in Windows are deprecated. So I stopped using it.
    However, software RAID in Linux and BSD is awesome, and I stay away from hardware RAID and onboard RAID systems.

    • @ovistech
      @ovistech 2 years ago

      True, but only on mirrored drives. The striped drives are not affected.

  • @skaardd
    @skaardd 2 years ago

    I have an ASRock X570 Creator running in a film-scanning host machine with 4x 4TB 870 QVOs running in RAID 0. My RAID is meant to take a raw 4K 12-bit DPX stream and write each frame file at about 14-16 frames per second. It runs quite well; I've only had one fault with it in over a year, and it was just a drive error that corrupted 5 DPX files out of over 300,000. I only lost a day or two of work rebuilding the RAID because I didn't trust it with client film. It's been working great since the rebuild and I have a spare ready if the problem drive finally breaks. I've probably passed 400-500 TB of scanning data through these drives by now with no issues other than stated above.

  • @jcugnoni
    @jcugnoni 3 years ago +2

    For me, the only RAID that works at an affordable cost is Linux md RAID; you can set up a RAID 1 root/system drive as long as you have a separate non-RAID boot partition to store the kernel, bootloader and initramfs. For data, it just works as expected and is rock solid, as long as you periodically check the drives' status or have drive errors reported by mail, for example.
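
    A minimal sketch of that kind of setup, i.e. an md RAID1 for the root filesystem with /boot kept outside the array, plus the periodic monitoring the comment recommends (partition names, config path and mail address are placeholders):

    ```bash
    # Mirror two partitions for /; the kernel, bootloader and initramfs live on a
    # separate non-RAID boot partition as described above.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0                                 # becomes the root filesystem
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # so the initramfs can assemble it

    # Health checks / mail reports:
    mdadm --monitor --scan --daemonise --mail=admin@example.com
    smartctl -H /dev/sda
    ```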

  • @finarfin9939
    @finarfin9939 3 years ago +2

    Wendell: "There's something wrong with the reads"
    Me: LITERACY!!

  • @AFistfulOf4K
    @AFistfulOf4K 2 years ago +2

    I used Intel motherboard RAID for 6 years with no issues and it saved me from a hard drive death. I used AMD motherboard RAID for less than 4 months and it nuked Windows twice and cost me thousands of dollars in lost work. AMD fans are broken in the brain.

  • @jenesuispasbavard
    @jenesuispasbavard 2 years ago +3

    Literally my only use case for RAID is that I only see a single C: drive in Windows. There's no way other than motherboard NVMe RAID to combine two 2TB drives into a *bootable* combined 4TB volume; I'd even take lower performance in RAID than single drives just so I see a single drive.

  • @misiekt.1859
    @misiekt.1859 3 years ago +1

    @Leve1Techs Which driver did you use for AMD RAID on Linux? Is there a new one? Or just the 17.2.1 that is over 4 years old ?

  • @MrMalchore
    @MrMalchore 3 years ago +3

    Ya, a ramble of a video indeed. I know the topic was motherboard RAID (specifically AMD firmware), but all focus was lost after you returned from your sponsor message.
    ...so anyways, I have three 1TB WD spinning hard drives I'll put together in RAID 0 as my backup and Steam game library volume. It won't be my OS volume - that'll go on a single NVMe drive (with no RAID to speak of.)

  • @LaDiables
    @LaDiables 3 years ago +4

    I have had FuzeDrive completely blow out a partition of mine, necessitating a complete system reload (without FuzeDrive).

    • @profosist
      @profosist 3 years ago

      Was it a boot drive? Caching a boot drive is, from my experience, even riskier than RAID; so many issues with Optane as well. I ended up reverting many people to just straight NVMe boot drives.

  • @AwSomeNESSS
    @AwSomeNESSS 3 years ago +2

    Could you do a video explaining the scaling issues with Optane for the consumer? It's very fascinating how much that segment has stalled. As we move more and more to the cloud at a consumer level, you would think a 128gb-256gb optane-only computer system would be the end-goal for consumer performance.

  • @shadowmist1246
    @shadowmist1246 2 years ago

    I revived an old server using a single RAID 10 array of 6 enterprise-grade SAS HDDs (3 TB usable) for everything - boot and storage. I tested with simulated HDD failures and it's very smooth and stable. When replacing a drive, it was seamless, with no noticeable effect on performance during the restoration process. I used a PERC RAID card, but I'm sure it would not have been as smooth with motherboard SATA RAID.

  • @Noobish588
    @Noobish588 3 years ago +1

    Could you make a video, or point me to one, on ZFS for / ?
    We have SM and Dell PE servers in our environment that primarily use ZFS for their data stores; however, for root we then do an md raid1 for that bit of reliability, and if I could have a one-size-fits-all that would be wonderful :P

  • @JonathanSwiftUK
    @JonathanSwiftUK 3 years ago +2

    When discussing Windows RAID it would be helpful to clarify the two types: that through Disk Management, and that via Storage Spaces. Worth mentioning that some features are deprecated - for example spanned disks. Storage Spaces is the preferred method for Windows software RAID. For me you can't beat hardware RAID with a decent memory cache and battery-backed write caching. Motherboard RST using RAID 0 is a fast option, but with no data resilience - good for test/lab systems only.

  • @gustavgurke9665
    @gustavgurke9665 3 years ago +1

    KDiskMark? I've never gotten reliable numbers from that.

  • @JMetz
    @JMetz 2 years ago

    Excellent video as always. One minor update: NVMe is not relegated to solely SSDs. As of NVMe 2.0 (released before this video was published) NVMe could be used to access HDDs. HOWEVER...
    ... this does nothing to negate what is said here. PCIe HDDs are not typically marketed to or offered to consumer systems. The TP (Technical Proposal, TP4088 for those who care; integrated into NVMe v2.0) was designed to allow hyperscalers to use the same NVMe driver for both SSDs and HDDs, which simplified management and upgrades.

    • @andreewert6576
      @andreewert6576 1 year ago

      I'd like to see a PCIe lane suffer from the boredom of transferring the ~200mb/s a spinny disk can output.

  • @MisterWallopy
    @MisterWallopy 1 year ago

    Hi, professional 30-second commenter here.
    I quit using mobo RAID in favor of Windows pooled storage. Performance isn't much, if at all, better, but if the mobo dies or you want to swap from one computer to another Windows machine, it just works. Saved me when I went from Intel to AMD.
    Next up: buying a PCIe RAID controller for the speed.

  • @sheldonirving9529
    @sheldonirving9529 2 years ago +16

    I would never do RAID where "I" stands for Inexpensive ;). That's the mistake: if you use crappy drives, RAID will fail. "I" stands for Independent, not Inexpensive.

    • @bloeckmoep
      @bloeckmoep 2 years ago +1

      When he said what the acronym RAID stands for and claimed that the I stands for Inexpensive, I was laughing hard. 😂🤣
      No, you're right, the I stands for Independent.

  • @mcflygarcia
    @mcflygarcia 3 months ago

    I'm from the future, the year is 2024. Sadly, SATA RAID with AMD (B550) is still hit or miss; even with HDDs it will sometimes lose the connection when doing intense reading or writing, and you can even get a bluescreen with a RAID driver error. It's totally random.

  • @JS-wl3gi
    @JS-wl3gi 3 years ago

    RAID is another word for headache, and if you are a person who never backs up files it's even worse. The only reason I use it is the availability of SATA drives at a low price, and I get to keep running on the remaining drive if something happens. I keep stuff on the RAID, plus 2 other backups. When it does fail I rebuild onto a new drive, then after a while rebuild onto another new one. Problem is, drives are becoming less easy to find with all the store closings. I usually build systems that I can upgrade later on over a 6 to 10 year period. M.2 and SSDs are getting down in price, so just making backup images takes minutes instead of hours.

  • @MauDib
    @MauDib 2 years ago +1

    Wish I'd seen this video a few months ago. Built my first PC in 15 years back in January: 5900X on the X570 platform. Last time I built a PC, RAID was the standard for storage, so if 1x Gen4 NVMe can hit 5k MB/s reads, 2x in RAID0 would be even better, right? Wrong. After getting everything set up, I had 2x Corsair Gen4 NVMe drives as a RAID0 boot drive, with all benchmarks showing a ton of performance left on the table. Nowhere near 5k MB/s read. I was finally able to narrow the cause down to AMD's RAID driver. Such a headache to switch back to AHCI-managed drives.

    • @nicholash8021
      @nicholash8021 1 year ago

      I did the same and saw improvement only in large block transfers (2x as expected in pure sequential reads), but everything else was slower with NVMe RAID via RAIDXpert2 and the on-board config. Anyway, I can't see why anyone would boot off a RAID, especially with how fast M.2 sticks are these days. You're just asking for trouble.

  • @St0RM33
    @St0RM33 2 years ago +1

    Is there a follow-up to this? Also, is there a way to enable logging of these errors on the AMD system?!

  • @ericthedesigner
    @ericthedesigner 25 days ago

    I've been building and fixing computers since 1999, and I've probably spent 3 years of my computing life dealing with raid0. I love it!

  • @billstarr5395
    @billstarr5395 1 year ago

    I used two 250 GB SSDs on an Asus motherboard in RAID 0 for my OS and I kept getting errors. Found that I had to use two different SATA ports to get rid of the errors. Had to use ports 1&3 vs. ports 1&2.

  • @ajinmathew99
    @ajinmathew99 6 months ago

    The very famous fio test is what runs in the background of CrystalDiskMark. If you try to benchmark the device using fio directly, you should get the same benchmarking experience, i.e. the actual read and write speeds.
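
    (Strictly speaking it's KDiskMark that wraps fio; CrystalDiskMark uses Microsoft's DiskSpd. The advice still holds, though.) A rough fio invocation approximating the usual sequential-read preset; the file name, size and runtime are arbitrary:

    ```bash
    # Sequential read, 1 MiB blocks, queue depth 8, bypassing the page cache.
    fio --name=seqread --filename=/tmp/fio.test --size=4G \
        --rw=read --bs=1M --iodepth=8 --ioengine=libaio \
        --direct=1 --runtime=30 --time_based
    ```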

  • @DJMeku
    @DJMeku 2 years ago +1

    Quick question: would having 2 mechanical disks instead increase the read speeds in RAID 0 & 1? These issues seem to occur when using NVMe/SSDs.

    • @NavinF
      @NavinF 2 years ago

      Yes, but you'd still need ZFS for integrity so motherboard RAID is still pointless

  • @cdurkinz
    @cdurkinz 3 years ago +1

    Have had two 2TB Intel 660p M.2s in RAID0, across Z370 and now X570, for years now. X570 was a little bit of a pain to set up but Intel was effortless. Have had no issues /shrug
    (Just wanted one 4TB drive, was sick of multiple drives; everything of value is stored on the NAS, so if the volume dies, whatevs.)

  • @edwarddejong8025
    @edwarddejong8025 1 year ago

    We use RAID 60 in our key NAS units. We have employed the classic PCI board from LSI (now owned by Broadcom IIRC), the MegaRAID. It has worked flawlessly for 7 years. We of course bought the supercapacitor backup add-on, which is crucial so that a power loss doesn't corrupt the directories.
    RAID 5 has a known weakness (the odds of hitting an unrecoverable read error during a rebuild grow with array size) and should be avoided. Use RAID 6 instead.
    Our only design flaw in our system is that we only have 1 Gbit Ethernet to the switch, and that slows things down. When reading big chunks the RAID engine is actually pretty fast. With mechanicals inside, however, it does take 3 or 4 hours (!!!) to reboot our 2000 virtual machines. So next time we will use SSDs.

  • @NicolaiSyvertsen
    @NicolaiSyvertsen 3 years ago +1

    Intel Matrix Storage is nice in that the metadata format is supported by Linux (via mdadm) so it is a shame it cannot be relied upon. It greatly simplifies setup when you need to boot from the array.
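
    A minimal sketch of what that mdadm support looks like from the Linux side; an existing Intel RST/Matrix array shows up as an IMSM container plus the actual RAID volume (device names are whatever your system assigns):

    ```bash
    mdadm --detail-platform     # what the platform's Intel option ROM / firmware supports
    mdadm --examine --scan      # lists arrays found in on-disk metadata, IMSM included
    mdadm --assemble --scan     # assembles them, container and member volume
    cat /proc/mdstat            # the bootable volume appears here once assembled
    ```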

  • @DAVIDGREGORYKERR
    @DAVIDGREGORYKERR 3 years ago

    Then a purpose-designed RAID controller board is the way to go; I'm just wondering whether the RAID controller software could be rewritten to get rid of the bugs.

  • @brenlyd
    @brenlyd 11 months ago

    Thank you for this video! Your wording is straight and to the point without being too meandering. Even when you have little asides you're keeping each one to the point. You rock!

  • @JohnOtt
    @JohnOtt 2 years ago

    I'm not one to comment on videos much but I have to say that this one saved my bacon. Been using RAIDXpert2 for a while using RAID 10 on (4) 8TB SATA drives and always felt that the performance was not where it should have been. Over the past couple of weeks, I've been having some bad performance issues so early this AM I decided to blow away the RAID array and dig deeper as to what was causing the problem. Come to find out that one of the drives was transferring well below what it should have been and it ended up being a faulty SATA cable. At that point, I ended up creating a new RAID 10 setup in windows using disk management/storage spaces and the performance is much better.

  • @AdmV0rl0n
    @AdmV0rl0n 3 years ago +1

    Kinda think that in most 'ordinary' cases the SSD has killed some of the reasons for RAID, and provides good speed out of the box. The protection, while valid, is equally well done by backup, which you still have to do if you choose RAID anyway.
    Note: I'm saying the above for ordinary use. For servers, or special soup, RAID still has magic sauce you might chase down, but anyways...

  • @chengbaal
    @chengbaal 3 years ago

    I love how YouTube's compression had so much trouble dealing with your shirt.

  • @dabombinablemi6188
    @dabombinablemi6188 3 years ago +1

    The HighPoint 370 controller and VIA RAID found on my old motherboards really do look as if they were done well by comparison. Though their main problem was the PCI bottleneck.

  • @IceBlue2012
    @IceBlue2012 3 years ago +2

    You just saved me a ton of time! Thank you so much! I have a question though, if anyone could help.
    I recently built a new Ryzen 5900X + RTX 3070 + MSI X570 Tomahawk multipurpose system. Crucial P1 NVME as a boot drive (good enough for me) and a Seagate 2TB HDD for storage. I will add a NAS to my setup some time in the future. But I wanted a large-ish drive for games (non critical data) that would be at the same time relatively fast compared to a regular HDD and cheap (also environmentally friendly; I'll explain).
    So, I have a few 500GB HDDs lying around that I got for free, were not in use, and could be considered e-waste. I decided to populate all remaining SATA ports on my MOBO with them and make a 5 drives RAID-0 array as my games drive using Windows Disk Manager's RAID giving me a fast-ish 2.5-ish TB games drive. It's working fine. So much that I delayed testing the MOBO RAID indefinitely. My question is: does this setup make sense to you? Is there anything that I could do better?
    Thanks again! Great content

  • @marcin_karwinski
    @marcin_karwinski 3 years ago +8

    RAID stands for Redundant Array of Independent Disks... not Inexpensive Disks :) even though so many people think of it this way ;)

    • @SethReee
      @SethReee 3 years ago +4

      Supposedly it can be independent or inexpensive, I've heard both and seen both on documentation.

    • @wolf2965
      @wolf2965 3 years ago +6

      The original paper from 1988 that coined the name was "A Case for Redundant Arrays of Inexpensive Disks (RAID)" - and it should not be forgotten, even though there are some hardware vendors that would very much like to put the "Inexpensive" part to rest. You know who you are, EMC and NetApp.

    • @marcin_karwinski
      @marcin_karwinski 3 years ago

      @@wolf2965 Yeah, that was the case early on, but AFAIK an advisory board in the pro-RAID, pro-SAN standardisation council/body/conglomerate decided to switch that "supposedly pejorative or diminishing or unrealistic" term to 'Independent' back in the 90s...

    • @werewolfmoney6602
      @werewolfmoney6602 9 months ago

      Also, because most raid configs use striping, the disks aren't even independent anyway

    • @marcin_karwinski
      @marcin_karwinski 8 months ago

      @@werewolfmoney6602 mirroring may be just as often used, especially in enterprise settings, though for the added security rather than performance... but neither it nor even a JBOD configuration could be treated as truly independent if data and metadata can be located on different devices, regardless of file system or hardware choices. The 'independent' factor rather stems from using separate/independent devices to form a storage pool instead of using bigger and potentially more performant/powerful devices...

  • @igordasunddas3377
    @igordasunddas3377 2 years ago

    I had 3x 3TB WD Red drives, with hardware RAID and with mdadm, and I prefer the latter by a lot, but I have now decided to move to unRAID and its simple array. I am unsure about the performance. Can't remember the hardware RAID much, but mdadm pushed 75 MB/s I believe, and unRAID seems slower.

  • @Razear
    @Razear 3 years ago

    Have an onboard RAID0 array with two WD Caviar Blacks from over a decade ago that are still running strong. These drives are really built to last.

  • @VorpalGun
    @VorpalGun 3 years ago +1

    How about filesystem-level RAID, like with ZFS or Btrfs?
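
    That is the route several people in this thread ended up taking; a minimal ZFS example for a two-disk mirror (pool and device names are placeholders):

    ```bash
    # Two-way mirror with per-block checksums and self-healing reads.
    zpool create tank mirror /dev/sda /dev/sdb
    zfs create tank/data
    zpool scrub tank       # periodic scrub verifies every block against its checksum
    zpool status -v tank
    ```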

  • @Gersberms
    @Gersberms 3 years ago

    That made me think of the time I built a Windows server with motherboard RAID, and Windows refused to enable disk cache because it didn't see a battery backup. It was the slowest new install I've ever done and there was no fix at the time.

  • @TzOk
    @TzOk 1 year ago

    I needed RAID-1 for my storage HDDs (nothing special, 2x 3TB). I'd read multiple comments saying that Windows soft-RAID is great and recommended over Intel RAID. So I listened to them and set up a Windows RAID the old way: converting to a dynamic volume and setting up the redundancy drive via Disk Management. It crashed 2 times in 3 weeks, and I can't even count how many times it was rebuilt. Finally, I gave up and switched to Intel RAID (Intel Rapid Storage); no problems since then, and not a single RAID rebuild.

  • @PendelSteven
    @PendelSteven 3 years ago

    What will help:
    DMI 4.0, released on November 4 2021 with 600 series chipsets, has 8 lanes each providing 16 GT/s,
    two times faster compared to DMI 3.0 x8

  • @mafuyu9063
    @mafuyu9063 2 years ago

    TRIM appears to be working on my system - but Optimize Drives (defrag) shows my NVME RAID 1 array as a "Hard disk drive". Is that problematic?

  • @ianlehman8342
    @ianlehman8342 1 year ago

    I've been trying for days to set up my Asus B450M (Prime-A II) with a 1TB NVMe drive, a 500GB SATA SSD for the OS[s], and 3 HDDs in RAID 0. I liked the idea of motherboard RAID because I don't trust Win10 to not be awful in reliability and function.
    Problem is, when I enabled RAID mode the NVMe drive didn't show up in the BIOS, and it only showed up in Windows setup if I installed SATA RAID drivers during setup (this would make both SSDs and the RAID array show up, until I tried to set up a Storage Space in Windows, which would make the RAID array disappear when creating the storage pool).
    It seems the ONLY way to actually use all the drives is to do it in AHCI and use Windows Storage Spaces.
    While I did spend a ton of hours persisting when I probably shouldn't have, it wasn't time wasted. I learned a lot about storage, BIOS functions, and how Windows drivers behave, and got familiar with installing drivers at OS setup.

  • @OwenWagoner
    @OwenWagoner 3 years ago +1

    I tried to set up 2 x M.2 drives in a RAID 0 on my X570 board in January. It sucked so bad that I just ended up using the software RAID in Windows. Works great, was easy to do, and I haven't had a single problem out of it.

    • @nonaurbizniz7440
      @nonaurbizniz7440 3 years ago

      Mobo RAID comes down to what chipset they use. Unless it's an Intel RAID setup I would steer clear. I've been using mobo RAID 0 for years on MSI boards with no issues. Granted, this is purely for gaming, on games that benefit from fast loads like open-world types and other games that stream data as you play. However, with the newest NVMe sticks pushing close to 4000 MB/s sequential reads, RAID 0 is looking less and less shiny.

  • @josephdtarango
    @josephdtarango 3 years ago +2

    @Level1Techs Hi Wendell, Can you point me to the forum threads? I wrote the internal Intel performance manuals and developer automation.
    Perhaps I can provide recommendations and when I have some spare time I can write up some simple AI/ML automation scripts.
    Personally, I use the Designate x299 10G with 10 NVMe SSDs + VROC + TPM 2.0 + 10980XE + 256 GB 3600 MHz DRAM; which requires special firmware from Gigabyte Engineering in Ubuntu 20.04 LTS x64 and Windows 10 x64.
    P.S. If you look up my patents, we have something much better coming to a theater near you 😉

    • @Level1Techs
      @Level1Techs  3 years ago +4

      forum.level1techs.com/t/critiquing-really-shitty-amd-x570-also-b550-sata-ssd-raid1-10-performance-sequential-write-speed-merely-a-fraction-of-what-it-could-be/172541/27
      Nice to meet you! Sure, docs and whatever is needed to put the awesome stuff to use, including the BIOS, would be good. I get the impression some at Intel didn't think there were enough enthusiasts to bother documenting the awesome.

  • @jackt6112
    @jackt6112 1 year ago

    That's what I needed to know about the Intel motherboard RAID. I was hoping, from looking at a few other videos and seeing the config in the BIOS, that this would be a hardware RAID. My experience with a hybrid has not been good. I relied on the free Windows Server backup that I also had scheduled once a week as SOP, in case there were ever an issue with the primary backup technology, which was EMC's StorageCraft ShadowProtect. "Ever" happened, and after many hours on the phone with StorageCraft we both realized that they weren't actually getting an operating-system-restorable backup with the new hybrid controller that Dell had switched to as their standard server controller, which they didn't document as a hybrid; the OS came pre-installed on the server. We got a hardware controller from them for that server, but immediately verified with them that none of the other systems had one of their hybrids.

  • @felicytatomaszewska
    @felicytatomaszewska 3 years ago +1

    Absolutely no one
    Level 1 Techs: Let's talk about raid which has gone almost obsolete in PC world

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 3 years ago +1

      Why is wanting to have uninterrupted operation in the event of a (boot) drive failure something that should ever become obsolete?

  • @MarcMBX
    @MarcMBX 1 year ago

    Hello, I'm having some problems here. I tried to do RAID 0 using 3 NVMe Gen4 drives (7,200 MB/s each) and the average result was 14,000 MB/s in CrystalDiskMark, which I found very strange because it should be at least 18,000 MB/s. So I did a new test using only 2 of these NVMe drives, and the result was also 14,000 MB/s on average. I am using an Asus ROG Strix Z790-H where all NVMe slots are PCIe 4.0; in the BIOS I set them all to Gen4 and even so it continues to give 14,000 MB/s. I did RAID0 both through Windows and through the BIOS, and both methods give the same result, as if the motherboard had limited the bandwidth to 14,000 MB/s. In the main slot, which is PCIe 5.0, I'm using an RTX 4080; the motherboard description says it's a SafeSlot, so I imagine it shouldn't be interfering with the RAID result. I don't know what else to do to break this limitation of the motherboard. The 3 NVMe drives that I'm using for the RAID are in the 3 lower slots below the video card, and in the NVMe slot next to the processor I'm using a 2TB NVMe for files in general. What could I be doing wrong? Please, can anyone help me with this? Am I missing any important options in the BIOS? Thank you very much.

  • @cdoublejj
    @cdoublejj 3 years ago

    So on my Epyc system with Unraid, is AMD suppressing SATA errors? Most of my drives are on the PCIe RAID/IT-mode controllers. I tend to RAID large spinning-rust drives, and I tend to use real RAID controllers, but I get the feeling a Windows RAID would be fine for 4x HDDs.

  • @hamtsammich
    @hamtsammich 3 years ago

    I've been looking into linux raid because I'm getting poor *constant* write speeds while making deep dream videos.
    A friend of mine told me just to go ZFS, and another told me to go hardware. So, I'm currently trying to deduce what comes next

    • @TheBackyardChemist
      @TheBackyardChemist 3 years ago +1

      If you just need a speedy volume to put /tmp on, just use an mdraid RAID 0 and disable journaling in ext4 (see the sketch after this thread).

    • @hamtsammich
      @hamtsammich 3 years ago

      I'll confess, I'm a little intimidated by mdadm raid, but it's certainly on the table.

    • @TheBackyardChemist
      @TheBackyardChemist 3 years ago +2

      @@hamtsammich if you don't want to boot from it, md RAID 0 is very straightforward to set up.

    • @grenin1010
      @grenin1010 3 years ago +1

      @@TheBackyardChemist this is the truth

    • @hamtsammich
      @hamtsammich 3 years ago

      @asdrubale bisanzio I had also been told about speeding up writes with the available memory on ZFS.
      But y'all got very valid points for me to consider
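
      A minimal sketch of the scratch-volume suggestion earlier in this thread (md RAID 0 plus ext4 without a journal; device names are placeholders):

      ```bash
      # Stripe two disks and skip the ext4 journal: fast, zero redundancy,
      # only suitable for /tmp or other re-creatable scratch data.
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
      mkfs.ext4 -O ^has_journal /dev/md0
      mount -o noatime /dev/md0 /tmp
      ```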

  • @Wrathlon
    @Wrathlon 3 years ago +1

    I'm curious how this translates to Threadripper systems. My Zenith II has 5x NVMe slots, all of them on the CPU. I'm using 4x 500GB NVMe drives in a quad RAID0, and I was able to get the expected throughput using custom testing in IOmeter, but CrystalDiskMark was... let's go with "random" at best for the numbers it spat out.
    Would it be worth using an Optane drive on its own as my bootable drive in that 5th slot, and then using Windows RAID for the 4x RAID0 drives to bypass AMD's driver altogether?

  • @max-mr5xf
    @max-mr5xf 2 years ago +1

    It's actually not that hard to boot Linux from software RAID, at least from Btrfs or ZFS.
    You just have to have multiple EFI system partitions in the case of RAID1. I see that quite often.
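
    A rough sketch of the multiple-ESP approach for a two-disk RAID1 root (Debian/Ubuntu-style GRUB commands; device names, mount point and bootloader IDs are assumptions):

    ```bash
    # Each disk carries its own EFI System Partition; the root filesystem is the
    # mirrored/raided part. Install the bootloader onto both ESPs so either disk
    # can boot the machine on its own.
    mount /dev/sda1 /boot/efi
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=linux-disk1
    umount /boot/efi

    mount /dev/sdb1 /boot/efi
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=linux-disk2
    umount /boot/efi
    ```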

  • @metaleggman18
    @metaleggman18 2 years ago

    Yeah I only use x570's hardware raid just to create a small JBOD to mirror my NAS so I can upload it to B1, and have a second physical copy of my data. Not quite the 3-2-1 I want, but I'm getting there. Otherwise, since I don't have a real need for RAID, I just don't use it at all. Simple as that. Hell, even in TrueNAS, I just use mirrored VDEVs lol.

  • @shadyss96
    @shadyss96 1 year ago

    Well, this leaves me stuck lol... I have a HW RAID card that I was thinking of moving away from, over to the onboard Z68 chipset... Now I'm not so sure.

  • @vh9network
    @vh9network 3 years ago +1

    About F'n time you guys talked about this. I had to find this out the hard way myself.
    Motherboard BIOS fake RAID was a huge waste of time, and buggy AF on my X399 MEG Creation. Windows (also fake) RAID is the way to go with a PCIe expander card.
    My intention was just to get max read/write speeds with RAID0. I wasn't bold enough, or willing to deal with the hell of making it a bootable RAID, so I didn't go that route.

  • @St0RM33
    @St0RM33 2 years ago +2

    Stupid AMD suppressing the error; it does it on NVMe M.2 drives too.

  • @geofrancis2001
    @geofrancis2001 3 years ago

    I used onboard raid 0 for a pair of raptor drives back in the day for my games and OS but now with ssds things like that just aren't necessary.

  • @johncnorris
    @johncnorris 3 years ago

    I'm wondering if it's just better to disable all the built-in soft-RAID features on the motherboard and just get an inexpensive RAID Controller off of E-Bay that can be put into IT Mode? (ie JBOD and ZFS)

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 3 years ago +1

      In my experience even the high-end ones (I tested a Broadcom HBA 9400, so not the current top-of-the-line but pretty good) are still a bit slower when used with SATA SSDs compared to the native motherboard SATA ports in AHCI (!) non-RAID mode. But if you know that you are going to max out the interface between the motherboard chipset and the CPU with other stuff, like Thunderbolt or multiple 5 or 10 Gb/s USB devices, installing HBAs in PCIe slots that get their lanes directly from the CPU does indeed help.

    • @morosis82
      @morosis82 3 years ago

      Just use the SATA ports on the motherboard with the drives in standard AHCI mode and ZFS.
      The only reason to use an HBA is for large numbers of disks, where they can tie in to some sort of backplane, or for enough throughput that you'd saturate the chipset link to the CPU, where the dedicated PCIe lanes from a card would be faster.

    • @johncnorris
      @johncnorris 3 years ago

      @@morosis82 - using between 6 and 8 drives would seem to be the best storage to cost ratio. I just think the bugs will be well known or resolved with an HBA.

    • @morosis82
      @morosis82 3 years ago

      @@johncnorris when you use an IT mode HBA, there's no real secret sauce, it's effectively just a secondary SATA (SAS) controller replacing the one that's already on your motherboard.
      For single drives in standard mode tied into a software raid system, which neither of those controllers know anything about, it's basically the same thing.
      My comment about drives was if you need lots (more than the number of motherboard headers), or require a certain amount of speed, then a HBA attached to a PCIe x8 slot can be faster than the ports on your motherboard that are attached through effectively a PCIe x4 interface via the chipset. But at least with PCIe3 and above, you need a lot of fast SATA drives to saturate even x4 lanes, like 8 fast SSDs for example.
      Of course, those x4 lanes also service your other hardware like network, USB, etc.

  • @raymondobouvie
    @raymondobouvie 3 years ago

    I wonder: if I plan to make a really fast cache disk for my After Effects, would it still benefit if the system is not on RAID?

  • @WinZard
    @WinZard 2 years ago +1

    I wonder if they have fixed these issues yet? I am considering doing SATA RAID0 over an NVMe RAID0 on X570, or maybe even waiting for X690/X790 if they make that... Yes, I put 90 for a reason: I think TR40 should go away and bring back max PCIe to the desktop. :)

  • @gretathunderer5596
    @gretathunderer5596 3 years ago +3

    AMD's RAID is awful. If you want a bootable RAID array it only works on specific versions of Windows and even then barely.

    • @timothygibney159
      @timothygibney159 3 years ago

      I stuck with Intel for a 9900k for this reason because I have several nvme and sata drives

  • @tonybove2468
    @tonybove2468 1 year ago

    I currently have my OS (Win10) installed on a RAID0 array with 2x500GB SSD's, using the Gigabyte motherboard onboard RAID. Thinking about adding another 2x500GB SSD's and switching to RAID10. How much will performance change? In case it matters, this is a 12-year old custom build with an Intel 3770K, and it's a bulletproof workhorse. Never failed once in 12 years.

  • @terrabyteonetb1628
    @terrabyteonetb1628 3 years ago

    Chipset bandwidth limitations (I've been testing this): the Intel X99 chipset limits you to about 1500 MB/sec approx. (3x SSD), same with Z170 etc. Not sure about AMD.

  • @USDAselect
    @USDAselect 3 years ago

    When you said "computing in the future", did you mean the P5800X or the older Optane memory sticks?

  • @BansheeBunny
    @BansheeBunny 1 year ago

    I found this video looking for help with VROC on a x299 motherboard. After I filled in some of the pieces I came back here to share.
    2:52 The EVGA SR-3 DARK has a C622 chipset. Third party SSDs on the approved list will work on this board with a hardware key.
    3:37 I too have spent a lot of time and money looking to get VROC to work on my x299. You have to use Intel drives for VROC RAID to work on a x299 system. Third party drives will show up and work as a single drive only, if you have a key. The VROC application in Windows will notify you of a RAID error; it's the third party non-RAID drives in a VROC PCIe slot.
    3:57 My OS did not see the volume until I installed the drivers.
    4:24 You can only use RAID0 without a key. A standard key ($120.00) will allow RAID 0/1/10; Pro key ($250.00+) adds RAID5.
    4:50 Intel 670p will work on x299.
    12:00 if you use write back caching, get a UPS.
    Side note: the Intel VROC (VMD NVMe RAID) ports on his EVGA SR-3 DARK should be hot-swappable.
    My storage goals
    OS: VROC RAID1 (2x2TB NVMe)
    Data in use: VROC RAID0 (4x2TB NVMe)
    Long term data: Intel RST RAID5 (5x8TB spinning rust with hot spare)
    RAID is not a backup, get a RAID for your RAID.

  • @gedavids84
    @gedavids84 3 years ago

    Onboard RAID is one of those things I abandoned once SSDs really took off. It used to be the only way to make your computer actually faster because HDDs were so god damn slow.

  • @20quid
    @20quid 2 years ago

    Would another use-case for motherboard raid be a shared storage array in a dual booting system?

    • @andreewert6576
      @andreewert6576 1 year ago

      If you need RAID *and* don't want a hardware controller *and* want to dual boot then yes. Most setups don't tick all three boxes though.

  • @LanceThumping
    @LanceThumping 3 years ago +6

    What kind of performance can you get out of ZFS? Because it feels like that is the real master of stability right now.

  • @electrobott352
    @electrobott352 1 year ago

    I tried RAID with two 500GB M.2 drives. One of them was my boot drive, which might have changed something, but I thought it would just erase the two (RAID 1 mirror) and I would be able to boot off a USB and have it work. Instead, my system refused to boot and would just bluescreen, and this is after I configured the boot order. So definitely not worth the headache.

  • @WinZard
    @WinZard 1 year ago

    Revisit on X670E? AMD also says there are different drivers for different CPUs. Did and cc? It's been a year; they have to have fixed the bugs by now, and it's a new platform.

  • @willis936
    @willis936 3 years ago +1

    I used PrimoCache for a few years but I burned through my Intel 750's write lifetime. These days I'd rather avoid all types of SSD cache, because they unduly shift read-heavy workloads into write-heavy ones.
    All SSD all the way.

  • @hololightful
    @hololightful 1 year ago

    Am I the only one bugged by not being able to read that notification on his Ubuntu box behind him on the TV...

  • @boncharusorn6173
    @boncharusorn6173 3 years ago

    One of the two SSDs in soft LSI RAID0 died on me a few weeks ago on an S2600 workstation... yes, I have backups, but it won't restore... gloomy.

  • @henriquer8453
    @henriquer8453 3 years ago

    How does that translate if you dual boot? Will software raid work on both systems for non boot drives?