8TB RAID 0 Guide: 4x PCIe NVMe Adapters Tested - Which Adapter Reigns Supreme? HP Z840 Workstation

  • Published Jan 12, 2025

Comments •

  • @raceteker
    @raceteker 1 year ago +3

    Another fun yet informative video! A couple of notes.
    1. I've seen others suggest that 1TB NVMes tend to be a bit quicker than 2TB NVMes when used in RAID arrays.
    2. A few other videos I've seen recently testing speed and thermals for NVMes suggest that PCIe 4.0 NVMes (the drives themselves, not the adapters) actually run a bit cooler while achieving the same if not better read/write speeds. I wonder if this is like 4- and 8-cylinder engines travelling at the same speed: both can hold it, but only one does so with less effort (and temperature).
    Thanks again, and keep 'em coming for us Z-enthusiasts.

    • @racerrrz
      @racerrrz 1 year ago +1

      Thank you, I am glad you enjoyed it. I had not picked up on that detail, nor have I tested the 1TB drives in RAID 0 just yet. I just figured it was the lack of DRAM that slowed them down, but maybe there is more to it! (I don't have a set of 4x DRAM NVMes to test.) I'll see if I can clear a couple of 970 Evo Plus NVMes to do a quick check relative to the 980s.
      The max RAID 0 speed of ~2000MB/s on each NVMe doesn't seem all that random (it also matches the old PCIe 2.0 x4 limit), and I presume there is a bottleneck somewhere that restricts these NVMes. Being DRAMless forces them to lean more heavily on the CPU and system RAM, and that may be the restriction. (A quick back-of-envelope bandwidth calculation is sketched below.)
      The Adata Legend 800 NVMes are a bit of a hybrid given that they are PCIe 4.0 drives but only reach PCIe 3.0 speeds. They may not be the fastest drives around, but what I liked about them was the endurance rating (2TB; 1200TBW) and the fact that they run at PCIe 3.0 speeds, well matched to the Z840 (Samsung 980 Pros bottlenecked by the Z840's PCIe 3.0 would be a bit of a shame).
      Likely yes, the efficiency of the PCIe 4.0 NVMes must have improved to net higher speeds at lower temperatures / more torque / more area under the curve. The question is, are the newer Gen 4 NVMes the V8 or the tuned 4-cylinder with variable valve timing and boost? haha (Older and running hotter suggests V8 for the Gen 3 NVMes lol.)
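      For that back-of-envelope check (my arithmetic; only the ~2000MB/s per-drive figure comes from the video): PCIe 3.0 moves 8 GT/s per lane with 128b/130b encoding, so each x4 NVMe link tops out near 3.9GB/s and the x16 slot near 15.7GB/s.

      ```python
      # Back-of-envelope PCIe 3.0 ceilings for a 4x NVMe RAID 0 pool.
      # Only the ~2000 MB/s per-drive figure is an observation from the video;
      # the rest is standard PCIe arithmetic.

      GT_PER_LANE = 8e9                         # PCIe 3.0: 8 GT/s per lane
      ENCODING = 128 / 130                      # 128b/130b line encoding
      LANE_BPS = GT_PER_LANE * ENCODING / 8     # usable bytes/s per lane

      per_drive_link = 4 * LANE_BPS             # each M.2 slot gets a x4 link
      pool_link = 4 * per_drive_link            # four drives behind a x16 slot

      observed_per_drive = 2000e6               # ~2000 MB/s seen per NVMe
      observed_pool = 4 * observed_per_drive    # ~8000 MB/s for the pool

      print(f"x4 link ceiling per NVMe : {per_drive_link / 1e6:7.0f} MB/s")
      print(f"x16 ceiling for the pool : {pool_link / 1e6:7.0f} MB/s")
      print(f"observed pool throughput : {observed_pool / 1e6:7.0f} MB/s "
            f"({observed_pool / pool_link:.0%} of the link ceiling)")
      ```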

  • @CheapSushi
    @CheapSushi 1 year ago +2

    It doesn't matter much for newer hardware since it tends to have bifurcation, but some older motherboards from X99 and especially X79 don't. So it's something to be careful with. My ASUS X99 workstation boards don't have it. There are adapter boards with PLX-like switches on them that let you run multiple drives per card, but they are of course more expensive.

    • @racerrrz
      @racerrrz 1 year ago +1

      Hi, yes that is correct. NVMe booting and PCIe bifurcation support varies between motherboards and manufacturers, and needs to be checked per board.
      The HP workstations (from ~2014 onward) gained support through BIOS updates (as an example, the HP Z440, which supports bifurcation, uses the same CPU socket as the X99 boards).
      I have read about NVMe adapters with a PLX chip, but they are not all that common and do come at a steep price. Those prices are coming down though, and right now something like the IO CREST quad M.2 NVMe to PCIe 3.0 x16 adapter (which has a PLX chip) costs nearly the same as the HP Z Turbo Drive Quad Pro. For ~$300 USD you could likely get your Asus X99 workstation board to support four NVMe drives on one slot. Hopefully you did get a BIOS update at some point to allow booting from NVMe drives via PCIe?

  • @milanhosek343
    @milanhosek343 5 months ago +2

    Hello, thank you for the great video. Any chance any of these would fit into my HP Z600 v2?

    • @racerrrz
      @racerrrz 5 months ago +1

      That's a good question. Physically they will fit, but there is a limitation that will prevent the system from detecting more than one NVMe in these adapters: bifurcation. The Z400, Z600, Z800, Z420, Z620 and Z820 lack bifurcation support in the BIOS. There are some technical workarounds, but they come at a cost. One worth considering is an NVMe PCIe adapter with a PLX chip, which handles the lane splitting independently of the motherboard. I have not had the privilege of testing them (mostly due to cost), but that would be the only way to get multiple NVMes on a single PCIe slot in the older workstations. (A quick way to check how many of the drives the OS actually sees is sketched below.)
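      A minimal sketch of that check on Windows, shelling out to the stock Get-PhysicalDisk cmdlet from Python (the interpretation is mine: on a passive quad adapter without bifurcation, typically only the first M.2 slot enumerates):

      ```python
      # List the physical disks the OS can see, to verify how many NVMes
      # behind a quad adapter actually show up.
      import subprocess

      result = subprocess.run(
          ["powershell", "-NoProfile", "-Command",
           "Get-PhysicalDisk | Select-Object FriendlyName, BusType, Size"],
          capture_output=True, text=True, check=True,
      )
      print(result.stdout)  # count the NVMe entries you expected to see
      ```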

  • @georgeburtner1174
    @georgeburtner1174 1 year ago +2

    Regarding the unexpected speed results: in which order were the adapters tested, and was the first adapter retested at the end? Some wear or wear management internal to the drives might have had a large effect, especially on brand-new drives. Retesting the slowest and fastest adapters would be pretty telling, even if the drives have been used extensively since the first test.

    • @racerrrz
      @racerrrz 1 year ago +1

      Hi there. Thank you for the suggestions. I agree, it would be worth retesting the AORUS and likely the HP Z Turbo Drive Quad Pro with the same drives for comparison (the Jeyi U.2 adapter is way too much work to assemble! haha). I have not had a chance to fully implement the new NVMes into my workflow, so they remain in the same state they were in when I finished the Jeyi U.2 test.
      All the adapters were tested back-to-back (same day/night, in order: #1 AORUS, #2 Asus, #3 HP and #4 Jeyi U.2). The transferred files were deleted between adapter tests, but the drives were not wiped and the RAID 0 was not rebuilt. Given the relatively small data load I would not expect NVMe performance to change drastically between tests: I estimate less than 200GB was written per adapter test, so roughly 50GB per drive once the RAID 0 striping is factored in, which makes it only ~200GB of "wear" per NVMe between adapter #1 and adapter #4. These drives do slow dramatically once they fill with data, but that is unlikely to be an issue here. The ideal would be to purchase sixteen 2TB NVMes so each adapter gets a fresh set, but that's a bit excessive on the budget!
      No, I didn't retest the AORUS adapter at the end, mostly because I needed it for work after testing / it is still in daily use with a set of 4x 1TB Samsung 980s (not in RAID; I have various software on them and it would be a pain to remap it all).
      I have a new video nearly ready that looks at the AORUS data in more detail, with results from three different speed tests (Blackmagic Disk Speed Test, ATTO Disk Benchmark and CrystalDiskMark), and the speeds were all slower than expected for the AORUS (max ~3500MB/s read and write while in RAID 0). (A bare-bones sequential test in that spirit is sketched after this reply.)
      I would want to test a set of four DRAM NVMes and ideally a set of four PCIe 4.0 NVMes in a PCIe 4.0 slot (I don't have a PCIe 4.0 motherboard on hand). If I get some spare time I'll go back for another run on the AORUS: same ADATA Legend 800 NVMes, new test. My hypothesis is that the AORUS doesn't handle more than ~3500MB/s read/write over a PCIe 3.0 interface, but I can't quite test this and I haven't found any videos online that do (note it performs as expected on PCIe 4.0 from what I have seen online). The only other thing I noticed was that drive capacity might also play a role in speeds, but most of the AORUS videos I found used 1TB NVMes.
      Side note: all four adapters were slower than the theoretical maximum for RAID 0 on these NVMes in a x16 PCIe 3.0 slot, which I put closer to 14000MB/s (ignoring overhead, data distribution efficiency, PCIe slot limitations etc.). I suspect the slower speeds (i.e. a read speed of 8000MB/s in RAID 0 = ~2000MB/s per NVMe) were at least in part a consequence of the NVMes lacking DRAM.
      For comparison, check out BuildOrBuy's channel. He has a very methodical approach to his testing. He saw decent speeds on the AORUS, but he used a different system (Gigabyte TRX40 Designare) with Gen 4 hardware, and not 4x NVMes in RAID 0: th-cam.com/video/CR-1beSxNqE/w-d-xo.html
      Some of his AORUS CrystalDiskMark data (peak sequential; SEQ1M Q8T1 @ 1GiB):
      980 Pro 1TB: Read: 6612MB/s, Write: 4957MB/s (@ 61°C max)
      SN850 1TB: Read: 4012MB/s, Write: 4340MB/s (pre-firmware update, @ 58°C max)
      SN850 1TB: Read: 7071MB/s, Write: 5215MB/s (post-firmware update, @ 66°C max)
      Edit:
      Another video showcasing the AORUS's true potential (4x 2TB Sabrent Rockets in RAID 0 on a TRX40 AMD system with PCIe 4.0): th-cam.com/video/U-KBgEXrWLA/w-d-xo.html
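      For anyone who wants a quick-and-dirty number without installing a benchmark, here is a minimal sequential timing sketch (my own, not from the video; the drive letter is an assumption, point it at the RAID 0 volume, and note that OS caching will flatter the read figure where real benchmarks bypass the cache):

      ```python
      # Rough sequential write/read timing on the RAID 0 volume.
      import os
      import time

      TARGET = r"R:\bench.tmp"         # hypothetical path on the RAID 0 pool
      CHUNK = 64 * 1024 * 1024         # 64 MiB per write
      TOTAL = 4 * 1024 * 1024 * 1024   # 4 GiB test file
      buf = os.urandom(CHUNK)

      t0 = time.perf_counter()
      with open(TARGET, "wb", buffering=0) as f:
          for _ in range(TOTAL // CHUNK):
              f.write(buf)
          os.fsync(f.fileno())         # flush to the drives before stopping the clock
      write_mb_s = TOTAL / (time.perf_counter() - t0) / 1e6

      t0 = time.perf_counter()
      with open(TARGET, "rb", buffering=0) as f:
          while f.read(CHUNK):
              pass
      read_mb_s = TOTAL / (time.perf_counter() - t0) / 1e6

      os.remove(TARGET)
      print(f"sequential write ~{write_mb_s:.0f} MB/s, read ~{read_mb_s:.0f} MB/s")
      ```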

    • @PoeLemic
      @PoeLemic 26 days ago +1

      @@racerrrz Yes, the tests on BoB's channel are more favorable to my ponderings. You should have gotten much higher speeds, much higher. Don't get it. In your case it's not even worth going through the trouble of RAIDing 4 NVMes. Risk and costs, to get lower speeds than a single drive could give you. Unless something else on your machine was sucking that bandwidth as you were testing. So, I don't get it.

    • @racerrrz
      @racerrrz 26 days ago

      @@PoeLemic Hi. You may have misunderstood the graphs. The speeds were initially reported per drive (so ~2000MB/s on each NVMe), but since this is a RAID 0 pool the collective speed is 4x 2000MB/s = 8000MB/s, as per 18:36.
      So overall decent speeds, although the theoretical figure should have been closer to 14000MB/s. I obtained 12000MB/s read and 9500MB/s write with the same NVMe pool in my HP Z8 G4 (which is also PCIe 3.0 limited). I put the Z840's lower result down to its Windows 10 Pro install, which was ~8 years old and likely bloated with too much background software. Keep in mind this is a software RAID.

    • @racerrrz
      @racerrrz 26 days ago

      @@PoeLemic Further to my earlier reply: it's well worth creating a RAID 0 pool if you need transfer speeds. This very pool is netting me ~12000MB/s read speeds, which has been great for video editing. I have since created two more RAID 0 pools, one for a game library and one for a scratch disk. (For a CrystalDiskMark screenshot, check 12:24 in this related video: th-cam.com/video/NzncGJJV5qk/w-d-xo.html )
      Overall the speeds are still slower than you would get on a more modern system, but given the age of these workstations I am quite happy with ~8000-12000MB/s from a RAID 0 drive.

  • @royal-arsenal-history
    @royal-arsenal-history 2 months ago +1

    Superb video editing and information. Subscribed. I have just purchased an HP Z640 workstation with dual CPUs (E5-2699 v3, 64GB RAM). It currently has a Quadro M4000. I plan to make this into a video editing machine for use with Premiere Pro etc., starting with upgrading the drive read and write speeds. Will the Asus Hyper M.2 v2 NVMe PCIe adapter work with 4x NVMe SSDs in RAID 0, and will it boot from this RAID 0 setup with Windows 10/11? BIOS version is M60 v02.61 (03/23/2023). Thanks in advance.

    • @racerrrz
      @racerrrz 2 months ago

      Thank you, I am glad you enjoyed the video. Nice, the Z640 is a trooper, especially with the riser CPU board loaded in. The Asus Hyper M.2 V2 will let you get your RAID 0 pool up, but right now there is no way to boot Windows from a software RAID. In theory it should work with a hardware RAID configuration on a PCIe RAID controller, though there are some risks with that setup, as I am sure you'll appreciate. If you keep the OS on its own drive and back it up to a HDD, the RAID 0 pool can still give you fast speeds for your media.
      On my system (an HP Z8 G4 right now), I have a 1TB Samsung 970 Evo Plus as my OS drive (where I install DaVinci Resolve Studio) and I store the active video library on the RAID 0 pool. Those files can then be backed up to a HDD (e.g. a 16TB IronWolf Pro), and then backed up to a NAS (I use my Z440 case swap for this, with 10GbE networking). RAID 1 (e.g. two HDDs or SSDs) and RAID 5 (several HDDs) would be ideal as secondary and tertiary backups. (A minimal sketch of that first backup hop is below.)
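      A minimal sketch of that RAID 0 to HDD backup hop (paths are hypothetical; a real setup would use robocopy or proper backup software, this just shows the idea):

      ```python
      # Mirror the active project folder on the RAID 0 pool to a slower
      # HDD target, copying only new or modified files.
      import os
      import shutil

      SRC = r"R:\active-projects"          # RAID 0 working set (assumed path)
      DST = r"E:\backup\active-projects"   # HDD backup target (assumed path)

      for root, _dirs, files in os.walk(SRC):
          out_dir = os.path.join(DST, os.path.relpath(root, SRC))
          os.makedirs(out_dir, exist_ok=True)
          for name in files:
              src = os.path.join(root, name)
              dst = os.path.join(out_dir, name)
              # skip files that already exist with the same size and mtime
              if (not os.path.exists(dst)
                      or os.path.getsize(src) != os.path.getsize(dst)
                      or os.path.getmtime(src) > os.path.getmtime(dst)):
                  shutil.copy2(src, dst)   # copy2 preserves timestamps
      ```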

    • @royal-arsenal-history
      @royal-arsenal-history 2 months ago +1

      @@racerrrz Thanks for the update, that saves me some time with any NVMe boot issues! I'll have to try a workaround using a Clover bootloader on a USB stick connected to an internal motherboard USB port (keeping it hidden inside the case) to boot from an NVMe drive, like the one on the HP Z420 workstation in the Linus Tech Tips video "Can't afford a Gaming PC? This one's $169".

  • @PearlX9
    @PearlX9 1 year ago +1

    Kindly test Sabrent

    • @racerrrz
      @racerrrz 1 year ago

      I considered other NVMes but settled on the Adata Legend 800s for their price point, endurance rating, and being PCIe 4.0 drives geared for PCIe 3.0 performance.
      I don't plan on upgrading to a PCIe 4.0 system anytime soon, which made me hold off on the newer-gen NVMes (my hardware would limit their speeds to PCIe 3.0). But I have been keeping an eye out for a "cheap" modern system that I can use in videos.
      If a good price comes up for Sabrent Rockets I will grab some for testing!

  • @PoeLemic
    @PoeLemic 26 days ago +1

    Good video, but you could have done away with some of the front matter. Most people watching a video like this already know that stuff. Just get to the "meat and potatoes" (if you get what I mean).

    • @racerrrz
      @racerrrz 26 days ago

      True. I have found that including the "fluff" intro helps those who are new to the tech. More often than not, people just starting out with something wind up searching for content on YouTube, so this helps them get up to speed.
      Sorry you had to sit through that. I include time-stamps in most of my videos, which helps with quicker navigation to the action bits.

  • @PoeLemic
      @PoeLemic 26 days ago

    The results at 18:20 are MASSIVELY hard to understand. Hard to believe. You can get faster speeds from just one Gen 3.0 drive (above 3000MB/s). Why don't 4 of them go over 2000MB/s each?
    >> [You don't have to write me back a long reply, because I see you do that below, and that's good, mate. It's just hard to understand why this happened. I'm thrown for a loop.]

  • @TTURKI
    @TTURKI 25 days ago +1

    Why aren't the speeds that impressive? From the way it sounds, I expected at least 10x.

    • @racerrrz
      @racerrrz 24 days ago +1

      The speeds are what the speeds are - plus these were measured across four different quad adapters back-to-back, something no one else has done.
      The hardware limits the potential speeds, and RAID 0 only nets ~2.5x the speed of a single NVMe drive in this case. I am not sure why the max speeds were ~2000MB/s per NVMe in the RAID 0 pool (netting ~8000MB/s when pooled), but I believe it has to do with the aged Z840 workstation I was using. I theorized that NVMes with DRAM might give more speed (the Adata Legend 800s use HMB, not DRAM). The same drive pool managed ~12000MB/s read and 9500MB/s write in a newer workstation (HP Z8 G4); for a CrystalDiskMark screenshot, check 12:24 in this related video: th-cam.com/video/NzncGJJV5qk/w-d-xo.html (A quick scaling calculation is sketched below.)
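      That scaling check, using the figures quoted above (my arithmetic; the ~3500MB/s standalone number is an assumption based on the drives' rated Gen3-class sequential read):

      ```python
      # RAID 0 scaling: pooled throughput versus one standalone drive.
      def raid0_gain(pool_mb_s: float, single_mb_s: float) -> float:
          """Speed-up of the striped pool over a single drive."""
          return pool_mb_s / single_mb_s

      SINGLE = 3500  # approx. standalone sequential read, MB/s (assumed)
      for system, pool in [("Z840", 8000), ("Z8 G4", 12000)]:
          print(f"{system}: {pool} MB/s pooled = "
                f"{raid0_gain(pool, SINGLE):.1f}x one drive; ideal 4-drive "
                f"striping would be 4.0x = {4 * SINGLE} MB/s")
      ```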