Testing Storage Spaces Parity on Server 2022: Has performance improved?

  • Published Nov 19, 2024

Comments • 31

  • @Weini123
    @Weini123 2 years ago +7

    I recently "upgraded", or rather re-created, my storage pool / storage space. My old pool consisted of 3x 3TB and 2x 4TB WD Red drives with all default settings (because I didn't know better at the time), created originally in Windows Server 2012 R2. One day I decided to get rid of my dedicated NAS and merge my PC and NAS into one machine running Windows 10. This worked well for many years, but this week I upgraded to 4x 12TB IronWolf Pro drives. Reads were always pretty good at about 200MB/s, but writes plummeted after about 1GB to around 25MB/s. With my new drives I followed the guide from storagespaceswarstories and created a pool with 3 columns, an interleave of 32KB, and an allocation unit size of 64KB. With this setup I now get 400MB/s read and 300MB/s write. Note that with four drives I have a column count of 3, not 4; the reason why can also be found in the blog post. So even without an SSD cache I'm super happy with the performance. Getting to this result is way too complicated, imo. As you said, Microsoft has to improve their documentation, maybe even give samples on how to configure parity spaces to run efficiently.
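The setup this commenter describes (4 drives, 3 columns, 32KB interleave, 64KB NTFS allocation unit size) can be sketched in PowerShell roughly as below. The pool and disk names are placeholders, not from the comment, and the exact arguments may need adjusting for your hardware:

```powershell
# Pool all available disks, then create a single-parity space where one full
# stripe (2 data columns x 32KB interleave) matches one 64KB NTFS cluster.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParitySpace" `
    -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 32KB `
    -ProvisioningType Fixed -UseMaximumSize

# Format with a 64KB allocation unit size so writes align to full stripes.
Get-VirtualDisk -FriendlyName "ParitySpace" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB
```

With 3 columns and single parity there are 2 data columns per stripe, which is why the 64KB cluster size (2 x 32KB) lines up.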

  • @astral16
    @astral16 3 years ago +10

    Great video, seems like we are all trying to find a way to make parity spaces usable; which is really disappointing since Spaces has been around since the Windows 8 days.

  • @soundfire79
    @soundfire79 2 years ago +1

    Getting 1200MB/s read & 630MB/s write on a dual parity storage space virtual disk. Here is what I have: currently, 28x 2TB SAS HDDs, 16 columns, 16K cluster size, 256K interleave size, 3x PM961 NVMe "journal" drives used for the write cache, set to 1GB. This is the best I was able to get with the current hardware. The funny part of it is that all the settings are default when creating a dual parity virtual disk larger than 32TB with more than 16 drives. Results using CrystalDiskMark 8, 64-bit, 64GB, SEQ1M Q8T1.
    Best performance on Crystal: 1GB, SEQ512k Q32T8, 2300MB/s Read & 920MB/s Write
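The journal-drive arrangement described above can be sketched in PowerShell as follows; drive and pool names are placeholders, and this assumes the NVMe drives are already in the pool:

```powershell
# Mark the NVMe drives as dedicated journal disks, then create a dual parity
# space with the commenter's column/interleave values and a 1GB write cache.
Get-PhysicalDisk | Where-Object FriendlyName -like "*PM961*" |
    Set-PhysicalDisk -Usage Journal

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "DualParity" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 16 -Interleave 256KB -WriteCacheSize 1GB `
    -ProvisioningType Fixed -UseMaximumSize
```

Disks marked `-Usage Journal` are reserved for the write-back cache rather than data capacity, which is what lets slow HDD parity writes land on the NVMe drives first.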

    • @jacobjuenger4454
      @jacobjuenger4454 2 years ago +1

      I'm having a difficult time getting my setup running properly in storage spaces on Server 2022, and it feels like I'm going crazy. I'd love to chat with you about this, I'm not sure how though. I replied before with my email address and the comment was immediately removed... If you're willing and able to lend a hand I'd really appreciate it.

    • @soundfire79
      @soundfire79 2 years ago

      @@jacobjuenger4454 Where are you getting stuck? What have you done so far and what are you trying to do?

  • @skyhawk21
    @skyhawk21 6 months ago

    If you're still with us: what new things or updates, if any, did they make to Storage Spaces in the newer Windows 11 22H2 and 23H2 releases? If you know?

  • @AeROweasel
    @AeROweasel 2 years ago +3

    Use an NVMe drive as a write cache, plain and simple.

  • @Nathan-gj8ch
    @Nathan-gj8ch 3 years ago

    One time I tried to set up a Storage Spaces pool with 6 SMR drives, and Windows would just hang for like 4 minutes at a time randomly while I was filling the archive drives. I found it was the overwriting of data on the drives cascading into mass rewrites because of the way Storage Spaces was filling them. Storage Spaces is wonky AF; I ended up using DrivePool for the project. The only thing with DrivePool is that the dev is the only guy with control over licensing, and if anything happens to him your data could be hosed if you have an activation problem.

  • @JazzMac36251
    @JazzMac36251 2 years ago

    I'm having issues enabling Storage Bus Cache on Server 2022. 4x 4TB HDD, 2x 1TB SSDs, 2x 1TB NVMe. There seems to be some sort of bug in the MS script, but nobody knows what it is. Have you heard about this?

  • @gleep52
    @gleep52 3 years ago

    Did you ever try the 8 columns on 2019? Did you get roughly 250MB/s write speeds on 2019 with the same PowerShell build? I've tried every possible combination I can think of between columns, interleave values, and NTFS block sizes, and it's still pretty paltry no matter what I try.
    I have ten 6TB 7200rpm disks which in "simple" (RAID 0) mode pull about 1.6GB/s read and write. Mirror is about half that, around 900MB/s, but when I put in ANY form of parity mode, the speed plummets to the 30-50MB/s range every time. I tried fiddling with Device Manager cache settings to no avail either.
    When I used 8 columns with 2 redundancy, I got 1.16GB/s read and 45MB/s write. Just absurd.
    For clarity: I'm on Server 2019 right now and thinking about updating to 2022 so I can try it out and see if any improvements have been made. The parity creation option in the UI is still broken on 2019. I can make simple or mirror vdisks but not parity ones...

  • @soundfire79
    @soundfire79 3 years ago

    Could you post your best performing PowerShell command? I am running a 24-HDD dual parity array with 3 old, slow SSD cache drives in Windows 10 Workstation Storage Spaces. Configured with 7 columns, I am only able to get 500MB/s read and 150MB/s write.

  • @BizAutomation
    @BizAutomation 3 years ago +1

    Excellent video. I'm looking to put a 10-SSD server together using Windows Server 2019 Standard. Everything you said in your video revolved around the problems with parity. I wonder if you have any thoughts on mirrored performance. Is it true that using it in mirror mode should yield roughly the same IOPS as a hardware RAID card? All 10 drives will be Micron MAX 800GB: Read (Random): 2,400 (220), Write (Random): 850 (45). I am hoping not to use hardware RAID, and want to mirror all 10 (so I'd have roughly 4TB of usable space). I assume Windows would give this setup 5 or 10 columns, not sure which.

    • @ElectronicsWizardry
      @ElectronicsWizardry  3 years ago +1

      Yea, I was talking about parity only in this video; mirrored and simple performance is good and about on par with a RAID card for most tasks. It will work fine with 10 drives in a mirrored pool, and you should get nearly the full read performance of the 10 SSDs and the write performance of about half the drives, or 5 drives here.
      With mirrored virtual disks the optimal number of columns for peak performance is half the number of drives, so 5 is correct here.
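A minimal sketch of that mirrored layout in PowerShell, assuming a pool named "Pool1" already exists (the name is a placeholder): with a two-way mirror, each of the 5 columns is written to 2 of the 10 drives.

```powershell
# Two-way mirror across 10 SSDs: 5 columns, each column mirrored on 2 drives.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorSpace" `
    -ResiliencySettingName Mirror -NumberOfColumns 5 `
    -ProvisioningType Fixed -UseMaximumSize
```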

  • @charlesturner897
    @charlesturner897 2 years ago

    I discovered something recently whilst trying to deploy SBC: it will not work for me if I set the virtual disks to the max size of the pooled disks. I have 4x SSDs and 4x HDDs, and I can get it working with both simple and mirrored configurations if I drop about 5GB off the max size of the virtual disk tiers (360GB instead of 365GB for the SSD tier and 3.710TB instead of 3.722TB for the HDD tier).
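A tiered virtual disk sized slightly below the tier maximums, as this commenter describes, might look roughly like the following sketch; the pool and tier names are placeholders, and the sizes are the ones from the comment:

```powershell
# Define an SSD and an HDD tier in the pool, then build a tiered virtual disk
# a few GB under each tier's maximum so creation does not fail at full size.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "SSDTier" -MediaType SSD -ResiliencySettingName Mirror
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "HDDTier" -MediaType HDD -ResiliencySettingName Mirror

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 360GB, 3.71TB
```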

    • @jacobjuenger4454
      @jacobjuenger4454 2 years ago

      Did you ever get SBC working and see a performance gain? I have had nothing but issues trying to get it working. I can enable it and create the pool, but when I create a virtual disk and volume with ReFS I don't see any performance gains at all. I also can't create an SSD mirror tier for some reason, as the documentation shows, so... I don't know what's going on here. I just want a storage pool with parity that isn't terribly slow T_T

  • @HardcoreNacho
    @HardcoreNacho 2 years ago

    Have you tried ReFS formatted drives? I've heard that using that over NTFS fixes these issues.
    I'm on Windows 10 and ReFS isn't available despite hours of me trying to change my registry to allow it. Considering moving to Windows Server if speed is better.

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago

      I am trying Windows Storage Spaces with ReFS now, and it seems to be better for some things like tiering, but I haven't seen a change in parity performance. I should do some more testing though. You can also get ReFS in Windows 10 if you buy Enterprise or Pro for Workstations. Also, if you format a volume as ReFS, I think all versions of Windows 10 can use it; the limitation is on creating the file system, not mounting it.

    • @jacobjuenger4454
      @jacobjuenger4454 2 years ago

      @@ElectronicsWizardry Were you ever able to get Storage Bus Cache working? I've been trying to for a couple of days now and it's like I'm pounding my head against a wall. I can't seem to figure it out or get it working correctly.

  • @daslolo
    @daslolo 1 year ago

    Why use Windows Server for a file server? I have used WS2022 for a few weeks and love the ease of use since the workstations are Windows, but file speed seems less than ideal, so I'm trying out a flavor of Linux in a VM.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +2

      For most home/small business(10GBe and less), I have found Windows server to have similar NAS speeds to Linux. Windows also has better integration with Windows clients, with features like SMB multichannel and AD permissions working better on Windows than Linux in my experience. I still typically use Linux for file servers if I have the choice as I like using Linux more overall, but I will use Windows if it is the better solution for a situation.

    • @daslolo
      @daslolo 1 year ago

      I see, so it's down to convenience. I run on a 100GbE Mellanox. Do you still recommend WS? I was shocked to find that RDMA yields slower CrystalDiskMark results than non-RDMA...

  • @udirt
    @udirt 2 years ago

    Is this with RDMA working well?

  • @dragonmc77
    @dragonmc77 2 years ago +3

    Not sure how this guy has not found the Parity Write Cache bypass feature in Storage Spaces, which has been around for a while now. If you create the space with the right interleave value/formatted cluster size such that writes are aligned across cluster size boundaries (which is not hard to do), you will get write performance on par with what you would expect out of "traditional" raid5 or raid6 setups in hardware or from mdadm/zfs.

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago +2

      Can you show an example of how this can be done? I've done a good amount of research and have yet to see this shown.

    • @jackh125
      @jackh125 2 years ago +3

      @dragonmc77, well? Are you going to show us how this is done? We're all here from google reading your comment as "psh! Noob! You haven't found the pot of gold at the end of the rainbow?? I have!" *run* Come on homie, fill us in.

    • @JohnSawtell
      @JohnSawtell 2 years ago

      @@ElectronicsWizardry I posted a link to a writeup on how this worked and my comment was deleted. I then posted another comment with a general overview of the issue and how to properly configure it, and that comment was deleted as well. The info is out there, but it took me quite a bit of digging to find, so I tried sharing to help others get there faster... whatever. I have my storage and backup servers running parity without a cache volume and not hitting that performance drop-off you get when just setting it up via the GUI defaults, so it is possible with the right setup.

    • @jacobjuenger4454
      @jacobjuenger4454 2 years ago

      ​@@JohnSawtell I'm having a difficult time getting my setup running properly in storage spaces on Server 2022, and it feels like I'm going crazy. I'd love to chat with you about this, I'm not sure how though. I replied before with my email address and the comment was immediately removed... If you're willing and able to lend a hand I'd really appreciate it.

    • @JohnSawtell
      @JohnSawtell 2 years ago

      @@jacobjuenger4454 I'm not sure where exactly you're getting stuck, so it is hard to advise. I can tell you that if you only use the GUI you're probably going to have performance issues. The next step is to wrap your head around how columns work: most guides on the net try to liken Storage Spaces to RAID, when really your columns are like a RAID array and your pool is like logically grouped sets of RAID disks. What I mean is that with the GUI, you give it five disks and it makes a three-column pool (if memory serves, it's been a bit), so you have a mismatch that reduces performance and space (because the parity loss is tied to the column count, not the physical disk count). You have to use PowerShell to configure different settings, but I think the web covers how to do that pretty well. The biggest deal for me was finding info about setting the correct NTFS AUS and interleave sizes. I don't want to link again since it already got deleted before, but the end of the last sentence is a great search term to find the info.
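The alignment the thread keeps circling around is the commonly reported rule that the NTFS allocation unit size should equal interleave x (columns - number of parity columns), so that every cluster write is a full-stripe write and skips the slow read-modify-write path. A hedged PowerShell sketch, with placeholder names and one example combination of values:

```powershell
# Single parity, 5 columns -> 4 data columns per stripe.
# 4 x 16KB interleave = 64KB, so format with a 64KB allocation unit size.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "AlignedParity" `
    -ResiliencySettingName Parity -NumberOfColumns 5 -Interleave 16KB `
    -ProvisioningType Fixed -UseMaximumSize

Get-VirtualDisk -FriendlyName "AlignedParity" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB
```

For dual parity the same arithmetic uses (columns - 2) data columns per stripe.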

  • @brainthesizeofplanet
    @brainthesizeofplanet 3 years ago

    Lol RAID is faster than 5 in writes with storage space...MSFT tried to mindf**** us

  • @brainthesizeofplanet
    @brainthesizeofplanet 3 years ago

    Microsoft just gives zero fucks about parity RAID; it's that simple. There is no magic involved in getting speeds up to 400MB/s for a 6-disk RAID with current CPUs; MSFT just doesn't care.
    Maybe because RAID 5 and 6 are "dead" with rotating rust; I would say that's up for debate, but it certainly isn't dead when using SSDs with a URE of 10^17.
    So anyone who wants to run RAID 5 or 6 with Server 2019/2022 will need HW RAID. We run HW RAID 6 with SSDs and it's just fine, and much cheaper than running a RAID 10.