I've found that Windows Server 2022 can bypass the write cache with an interleave size of 64 KB and a cluster size of 8 KB (with 5 data columns + 1 parity column). It might be worth trying Server 2022 if you require more flexibility.
Hey, can you tell me how to do this like in your video where you had HDDs & SSDs? My RAID setup is 3x 10TB WD drives and 3x 1TB SSDs (with a 4th 10TB on the way).
Has anybody tested a drive failure or a RAID recovery in Storage Spaces? I've read some terrible things on Reddit and it has me questioning even using Storage Spaces anymore.
OH NO, I just set up my server a few months ago and have been having TERRIBLE write speed issues. I had seen videos in the past mention it's just the way Windows Server is for parity arrays. I don't 100% understand the part about the ideal number of drives or calculating the correct interleave; I'm running 15x 14TB drives. Now I need to try shuffling 60TB of data off that array onto something else so I can reformat.
But I thought ReFS was recommended for Storage Spaces? Is the cluster size adjustable on that? I have a lot of small files, so using large cluster sizes would mean a lot of wasted disk space.
ReFS does have many features like checksumming and tiering that work better with Storage Spaces than NTFS. I think I was using NTFS here as it's the only option in most editions of Windows 10/11. If you have lots of small files, try experimenting with interleave sizes, but this trick won't work well here since it wastes space, as you pointed out.
Yea, there isn't a 768K option unfortunately. The options I see here are to set the column count to 3 so that it works correctly with power-of-2 cluster sizes like 512K and 1024K. This will mean about 66% of the pool holds data, compared to the optimal 75% with a column count of 4. The other option is to try the closest values and see if the performance is sufficient for your needs.
You mentioned that, to get the correct ratios, you take the number of columns minus 1, multiplied by the interleave, to get the size of the NTFS cluster (allocation unit). Are you subtracting 1 because 1 of the drives is the parity drive? What if I have 4 disks configured in two-way mirroring and no parity? In this case I have 2 columns. If the above is true, my interleave should still be half the size of the NTFS cluster size? Or should I still subtract 1 and end up with the same size interleave and NTFS cluster size? On another note, no matter what I tried, I can't get a parity setup to write faster than 70 MB/s. I even set it up with just 3 disks, following you exactly, and my write speeds were 70 MB/s. In the above mirroring setup I can write at 380 MB/s. (Windows 11 with a storage pool of 65.4 TB)
This video was all about parity. None of these rules apply to mirrors, and they should be immune from these issues. The big issue with parity, I think, is how Storage Spaces deals with the write hole. I believe the slow write speeds come from the system confirming the whole stripe is written correctly and the parity is correct. With mirrors this isn't an issue, and I have seen very good speeds with mirrors in Storage Spaces. The one I was subtracting was the parity drive: with single parity subtract one drive, and with dual parity subtract 2 for the 2 parity drives. Still odd you can't get higher speeds with these tricks. I'd try playing with NTFS cluster sizes a bit more, as larger ones perform a bit better in my tests.
I have 12x 10TB Barracuda Pro HDDs, 7200rpm. I want to do dual parity with 2-disk failure tolerance. What is the ideal column number/AUS/interleave equation here? I need it for file transfers between 5-25GB.
With 12 drives this is what I'd try: use 10 as the column count. I think this will force dual parity, so 2 parity and 8 data. Then set the interleave size to 128K and a 1M cluster size in NTFS and see how that works.
@@ElectronicsWizardry I'm in a similar situation to @86ortega, with 12x 10TB drives. Currently trialling Windows Server 2022, as after experimenting with Unraid and others it's what I get on with best. Are you saying to just create a parity of 10 disks and leave the other two as spares? Guessing the next jump from 10 (2 parity + 8 data) would be 18 (2 parity + 16 data), albeit having that many drives in a pool with only two drives of parity may not be advisable! 😂
@@Guttersniper35 With your 12 drives, I'd set the number of columns to 10 with dual parity, giving 8 data drives for best speeds. These 10 columns are spread evenly across all 12 drives, so all the drives are used. If you want to calculate the available space, find the data-to-total ratio (data columns divided by total columns, so 8/10 = 0.8 in this case), then multiply that ratio by the total capacity (12 x 10TB = 120TB, times 0.8 = 96TB usable, before TiB conversions and overhead). A traditional RAID 6 across all 12 drives would have a data-to-total ratio of 10/12, or about 0.83, giving ~100TB usable. Hope that explains what's going on when the number of columns is less than the total number of drives.
@@ElectronicsWizardry Still a bit confused: if I do what you say and I have 8 data drives and 2 parity, what happens to the other 2? Should I set two as hot spares and therefore still be able to use your formula?
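For anyone who wants concrete commands for the 12-drive, 10-column layout discussed above, here is a rough, untested sketch (pool and disk names are hypothetical; verify afterwards that dual parity was actually selected):
# 10 columns with 2-disk redundancy = 2 parity + 8 data, rotated across all 12 drives
New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "Media" -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -NumberOfColumns 10 -Interleave 128KB -ProvisioningType Fixed -UseMaximumSize
# 8 data columns x 128KB interleave = 1MB stripe, so match it with a 1MB NTFS AUS
Get-VirtualDisk -FriendlyName "Media" | Get-Disk | Initialize-Disk -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -AllocationUnitSize 1MB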
Hi, thanks for this info. I'm in the process of figuring out which solution to use for a NAS disk using RAID 5, and I've stumbled onto a dilemma regarding file corruption/bit rot. Can Storage Spaces do error checking or checksum verification? Furthermore, can it repair data that has been corrupted in a parity storage system? Or would that require another piece of software?
Storage Spaces with ReFS as the file system can do checksumming and repair data that doesn't match the correct checksum using alternate copies. Look up the Microsoft doc about ReFS integrity streams to learn more.
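A small sketch of checking and enabling that, assuming the space is formatted ReFS and mounted as E: (the drive letter and file path are hypothetical):
# Enable integrity streams on the root; new files created under it inherit the setting
Set-FileIntegrity -FileName "E:\" -Enable $true
# Check whether a given file has integrity enabled
Get-FileIntegrity -FileName "E:\photos\example.jpg"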
What if the number of drives you are adding isn't the whole total? I'm just starting up and I will be doing two drives, but then once I turn that into a drive pool I will be adding about 6 more drives. I can't add them all at once because I need the data first, so I have to do 2 drives first, move the data from a few drives, then format them.
You can expand Storage Spaces pretty easily, but it can be tricky since virtual disks have a fixed stripe width. You might want to get a secondary space to store the data for now if possible. If you can't do that, I'd make those 2 drives into a pool with a virtual disk and then add drives later on. Then once it's all moved, make a new virtual disk with the correct parity/number of columns you want and copy the data to the new virtual disk.
I currently have two storage pools, one containing 4x 3TB drives and another 3x 6TB, set up in parity. Write speeds struggle once the cache fills up at around 1GB. If I'm understanding this correctly, I should set the cluster sizes to 768K and 512K, correct? Thankfully I recently bought a big 20TB drive that I can use as a cold backup, so having to rebuild the storage spaces isn't going to be an issue with data loss. However, I do have a question. For my first pool (4x 3TB), in parity, I should have 8.18TB of usable space. The last time I had a problem with my storage space, I had filled up the space entirely: 100% utilization on the drives, and Storage Spaces reported that it was inaccessible. I couldn't even add another drive and load balance, resulting in some data loss. Is it a known issue where if the space gets entirely full, the space becomes inaccessible? The drives were fine, but entirely inaccessible, and the pool had to be rebuilt. Since then, I've been hesitant to even approach full utilization of the drives.
I'd try setting the cluster size to 512K and the number of columns to 3 for both of those. It will waste a bit of space compared to setting the number of columns to 4, but should help performance. 768K with 4 columns should also be an optimal setup. I'd also be tempted to make one big pool if it were me, as I find it easier to manage data that way, and Storage Spaces handles mixed disk sizes pretty well. I haven't seen that issue when it's 100% full.
@@ElectronicsWizardry Much obliged. I feel that at this point, my bottleneck is my motherboard. I'm finding now that having several drives spanned across sata2 and sata3 ports isn't ideal. My first backup went quick, but the copying back after formatting to 512k took forever.
I'm trying to wrap my head around this: how would you set up a 5x 18TB array with dual parity? I want to add 3x 18TB to it after I've copied over the data from those 3 to the 5x 18TB dual parity setup. Appreciate your help! 😊
Unfortunately the dual parity information seems to be even worse in terms of documentation. I can't find a simple way to do this, so I'd probably make a thin-provisioned single parity virtual disk to do the initial copy. Then add the 3 extra drives and make a new dual parity virtual disk (I think the only way to do dual parity is to set a column count of 7 or more, but I can't find good info about it). Then you should be able to copy data from the single parity virtual disk to the dual parity virtual disk, and delete the single parity virtual disk once the data is all copied over.
@ElectronicsWizardry Thank you, I'm gonna try it. Acquired a QNAP JBOD 8-drive array for Plex and want to have 6/2 redundancy at the end, all shucked 18TB MyBook drives 😀 Appreciate the feedback, and of course a third backup of this pool on LTO.
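A command-form sketch of that migration path (names and sizes are made up; the column count for dual parity is the guess from the reply above):
# Temporary thin-provisioned single parity disk for the initial copy
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "TempSingle" -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 -ProvisioningType Thin -Size 60TB
# ...copy the data in, then add the 3 freed-up drives to the pool...
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
# Dual parity virtual disk across the now-8 drives
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "DualParity" -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -NumberOfColumns 7 -ProvisioningType Fixed -UseMaximumSize
# ...copy from TempSingle to DualParity, then delete the temporary disk:
Remove-VirtualDisk -FriendlyName "TempSingle"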
Nice nice. Just a question: did you ever manage to get an acceptable speed with 4 drives? I have 4 SATA ports on my mobo so I bought 4x 4TB drives and could never get write speed to go over 50... Everyone on the internet's been saying I should use 3 drives instead... but I have 4 lol... Is there an alternative software RAID that could be used on Windows? I know Mac has one.
With 4 drives you can use 3 columns to get good performance. I think having 4 columns will still provide decent performance with large clusters. I’ll test how well 4 drives work.
Wish I'd found this earlier! I'm suffering away at about 40Mbps backing up my Unraid server with a TrueNAS VM... going to migrate to native ZFS in Unraid now... Already DAYS into this and it's going to take at least 1 more!! So much for my 2.5Gb connection direct between the two machines =/
I did this on my Win 11 Plex server, it was driving me crazy! Not sure I'll get into the powershell bit yet but for now just creating the pool and formatting at 512k leveled everything out. Thank you for the video! Edit in case anyone has an answer; I currently have a 3 drive pool but need to add another drive to it, will the new drive get formatted by Storage Spaces at 512k?
I'm a little overwhelmed. Thanks for the video. 5x 16TB drives; I think these should be set up like this for single parity: 5 columns, 256KB interleave, 1024K AUS. Does that look right? I tried buying a hardware RAID controller, and discovered those are gone to history, because "software" and the PCIe bus are super duper fast now.
You want to set columns to 5 here, as 1 parity drive and 4 data drives = 5 total columns. The interleave and NTFS cluster size look good. As for RAID cards, they seem to be going away, but if you want hardware RAID look up cards like the LSI 9361-8i. Fairly cheap on eBay, and should be pretty performant.
@@ElectronicsWizardry Now I'm off to moving the data from the small to giant drive. The USB Drive is saturated at 168 MB/s read, the writes are going from 0 to 943 MB/s waiting on the USB. Copying data from the NVME is doing 550MB/s steady.
3 or more drives are needed for a parity setup. With 2 drives you can use the simple or mirror modes in Storage spaces. These modes are similar to raid 0 and raid 1. These modes don't have the write performance issues parity does and can achieve high speeds with proper hardware.
FANTASTIC videos! I have 2 NVMe SSDs on my Windows 10 machine that will also use two 10TB drives in RAID 1. How can I use one of these NVMe drives as a cache drive for Storage Spaces? I am not proficient in PowerShell, if that matters.
I used a program called PrimoCache for my Windows storage spaces... works great (it does cost $30 and is tied to the motherboard S/N), but it works and it's easy.
Yup that's right. That's one disadvantage with this approach. I have often seen people using large parity arrays for mostly large files so this isn't a big issue but depends on your use case.
So I tried every cluster size and got the same results. I have 3x 20TB drives in parity getting 30MB/s no matter what I change. I know I'm missing something.
I tested this trick today and it works great. Sadly it works only with 3&5 disks on single parity. 4 disks had the normal terrible performance regardless of the interleave and cluster sizes.
Incorrect - I'm using 4x 10TB WD Red 5400rpm CMR drives (WD100EFAX if I remember correctly; you will find little info on them because they were created as a dedicated model for Synology NAS or something like that, but WD101EFAX is pretty common). I don't recall the settings; I guess I have 32kB interleave, 64kB NTFS cluster, and for sure 4 columns & parity (not mirror!). With big files, something to the tune of 100GB [! so they can't fit into any cache at all !], I have sustained writes around 360-380MB/s, and that's not on empty drives where they would write on the outermost, fastest tracks. Reads around 420-440MB/s. Before aligning interleave, columns and NTFS cluster size, I had the dreaded 35-40MB/s writes on exactly the same hardware. Intel Celeron G3900, 8GB RAM, so old, uninspiring, underpowered hardware. Testing by copying big files via Far Manager, Total Commander and Windows Explorer; they vary by 20-30MB/s. The only way I can perform reads is by copying the file to "nul" in Far Manager. Didn't use the command line at all; no copy.exe, xcopy.exe nor robocopy.exe.
The same hardware with 8x HGST SAS 6TB 7200rpm drives and an Adaptec 71605 adapter in HBA mode gives me ~900MB/s read speeds and ~800MB/s writes in Storage Spaces (fun factor: CPU utilization 70-75%). Columns 8, interleave 32kB and 64kB, NTFS cluster size 64KB. Didn't try larger NTFS clusters due to time constraints (can't test every combination), but when I used a >64kB interleave I actually got less performance. I don't have the notes with me, but there were only ~250-300MB/s writes with interleave >=128kB; don't know why. Didn't try other than 8 columns for shortage of time (those tests were done like 1am to 4am). All tests on the same installation of Windows Server 2022, patched/updated to March 2023 level.
I'm not a big fan of shingled (SMR) drives in any form of RAID or storage... make sure your problems are not caused by the drives themselves. There are many small SMR models these days, which was not the case just two/three years ago.
I'm gonna read www.dell.com/support/manuals/sk-sk/storage-md1420-dsms/dsms_bpg_pub-v2/ and storagespaceswarstories.com/storage-spaces-and-slow-parity-performance/ and the Storage Spaces articles on that site.
Love your content! Right up my alley. I got a weird one for you: based on one of your videos, you convinced me to set up ZFS on Proxmox for redundancy. Basic problem: the performance is nowhere near what I expected. Setup: dual Xeon v4; in the x16 PCIe slot (gen3) I put the ASUS Hyper M.2 NVMe card with 4 Samsung 970 Evo Plus drives (stripe of mirrors). The ZFS pool hosts the OS and the VM disks; I used SCSI (scsi single) and local-zfs:vm* (iothread, cache write back, discard on). I run CrystalDiskMark (profile: real world performance) on Windows and get horrible performance, and it feels very laggy (SEQ1M Q1T1: read 2134MB/s, write 1735MB/s). This doesn't look right, right? Did I miss something? How would I debug this? (atime is OFF, sync is also OFF)
I didn't see any significant performance differences going with ReFS instead of NTFS here, and it has a limit of 64K for cluster size, so that might hurt performance a small amount. I still often use ReFS for its additional features.
Thank you, do you have a step-by-step tutorial on how to set up Windows Storage Spaces on Windows 11? Your information was very helpful. I want to keep my PC on Windows 11 and use it as media storage for a Plex server through Windows, and still have parity backup.
I have a question: can the drives in the pool be formatted as ReFS and still have the other setup adjustments made? In this video you kept saying NTFS, but I really like the extra benefits of ReFS, so can it be used? Love the vid and the information you gave out. Thanks.
ReFS can be used, but the 64K cluster size is the biggest available. Still, with a 64K cluster size you get much better performance than with the default 4K cluster size.
@@ElectronicsWizardry So I followed your suggestion and the best speed I can get is around 160MB/s, with serious swings up to 301 and down to 18MB/s. I have 5x 8TB drives in parity, formatted NTFS at 1024K. Any ideas how I can fix this to get 300MB/s consistently? I have a good PC, so CPU, RAM, PSU and so on are not underpowered. Did a Win 10 scannow and no errors were found.
Great work, thanks very much! Your videos are so helpful. I made my Windows Server 2022 storage space with 5x 14TB in RAID 5 (1 parity) as you recommend; LAN copy read and write speed is 500-600MB/s, the same speed as a 4-disk RAID 0. That's wonderful! Can I add another 5x 14TB to expand the current storage space, making these 10x 14TB into RAID 6 (2 parity), without destroying the current data?
Can anyone help me out? I have 4x 1.6TB Intel DC S3510 SSDs. Should I use all 4 in parity, or only 3? If I used 4 it would be 768kB, and I don't think that's an option when formatting drives. The drives will be used in my gaming PC as extra storage for movies, random programs and an additional backup space for my photos. Maybe I'll add games to it, but I'm unsure due to game load times while using Storage Spaces.
Since you have 4 drives I'd use all 4. You can improve write performance by setting the number of columns to 3. This will give you less usable space but still use all 4 drives. I've used Storage Spaces for my games drive many times in the past and it works fine.
@@ElectronicsWizardry Okay, I think I understand. I would have to use PowerShell to set the columns to three? And I would have 3.2TB of storage? First time using this and trying to wrap my head around it all.
@makagio Yea, you need PowerShell unfortunately to set the number of columns. For available space with 3-column parity, 66% of the drives holds data. With 6.4TB total times the 66% usable, you would get about 4.2TB of usable space with 4x 1.6TB drives.
@@ElectronicsWizardry I think by default Windows will automatically only allocate 3.2TB of storage in parity. I might not need to set the columns in PowerShell and can just reformat the drive to 512K.
@@ElectronicsWizardry I do have another 6 of these Intel 1.6TB S3510 SSDs, so I can put another one in to have 5 drives (4 data + 1 parity). My only issue is I have a 1TB M.2 on my motherboard's second slot taking up SATA ports 5 and 6, used for games only. The M.2 is an ADATA 8200 Pro. Should I remove it to put in another 1.6TB SSD? You said gaming performance isn't really affected by Windows Storage Spaces? Or should I leave the 4 drives and use PowerShell to set the columns to 3? Also, in setting the columns to 3, would I be losing the storage of the 4th drive? So in theory a 3.2TB total pool instead of 4.4TB?
All of the parity in the world doesn't do much good if you can't then replace a failed drive and recover it. I can't find any instructions on the internet about the simple process of what to do when a drive fails using windows storage spaces. I used to work in a data center. When a drive failed, you replaced it and it rebuilt. That was it. This was twenty years ago. Sure those servers had raid controller cards in them. But how has this functionality not trickled down to software raid yet?
Yea, you can replace a drive in all of these software RAID solutions, but it's typically different, and with more flexible solutions like Storage Spaces it's not as simple as with hardware RAID cards. I'll make a future video about replacing drives in Storage Spaces, as that doesn't seem to be covered well, as you pointed out.
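Until that video exists, this is the rough PowerShell sequence I'd expect for a replacement (an untested sketch; pool and disk names are hypothetical):
# Find the failed disk
Get-PhysicalDisk | ? OperationalStatus -ne OK
# Retire it, add the replacement, and rebuild the virtual disks
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Get-VirtualDisk | Repair-VirtualDisk
# Once repairs finish, remove the dead disk from the pool
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk3")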
Thanks for the trick... But now I clearly understand that the reason you blink your eyes so often is the fact mentioned at 0:02... Take good care of your health, mate...
I've clearly been out of the loop. "Working with storage spaces for about 10 years" ? I thought it was a new feature in the win 11/2022 generation of windows. I've never come across it before lol. Although to be fair, 90% of my server experience is Dell servers with PERC cards, so haven't had cause to look at alternative storage methods.
Storage spaces has been around since windows 8 and server 2012. Kinda surprising it’s over 12 years old now. I remember some articles and videos about it when it came out but never seemed to get that popular.
I can't for the life of me get parity write performance over around 60MB/s. Using 3x WD Red Plus 14TB drives that can each write at around 200MB/s. 3 columns; tried 32KB interleave with 64KB NTFS AUS, 256KB interleave with 512KB AUS, and other AUS = N*I values - always with the same result. I've confirmed that both interleave and AUS values are correct after creating the space and formatting. i9-13900KS, 64GB RAM, so system performance should not be an issue either. Win 11 Pro. I'm at a loss here.
I am so, so grateful to you for sharing all these important discoveries with us!!! If I understand correctly: if I have 3 disks and 1 is redundancy, then it's 2 disks x 256K, so I must put 512. If I have 5 disks and 1 is redundancy, then it's 4 disks x 256K, so I must put 1024, correct? But if I have 6 disks and 1 is redundancy, then what is the correct value? Because 256 x 5 is 1280, and there is no option for 1280. Please can you help me with this? Again, MUCH MUCH APPRECIATED ❤❤❤
Your math is correct. With 6 drives (so 5 data and 1 parity) there is no way to get a good layout. I'd create a storage pool with the 6 drives and the number of columns set to 5. This won't use all the space optimally, but will give better performance and still use all 6 disks: the 5-wide stripes are spread across all 6 disks.
Thank you so much. I had a similar question & was looking for the answer this whole evening. Finally, you answered my question. I have a 4-drive setup, so I will just use 3 columns. @@ElectronicsWizardry
Really liked the content; however, keep in mind that with a 1,024-byte (1K) cluster size the max NTFS volume is 4TB, so you'd never be able to expand a virtual disk (or any volume formatted on that disk) beyond 4TB. Before selecting your cluster size, make sure you check the max NTFS volume limit; once it's set it can't be changed unless you reformat.
Hi, sorry for my English, I hope you understand everything. Great video :) I'm using Win 11 and have tested Storage Spaces for the first time. The trick with 512K clusters improved my speeds a lot. Since I'm new to Storage Spaces, I have a question. For testing, I created a parity storage space with 3 USB HDDs, formatted with 512K clusters, and it works great (the HDDs are in a 5x 3.5" JBOD case connected via USB-C). In Disk Management I could see only one huge drive. Nice :)
Then I tested whether I could use the storage space on another system, so I asked my brother and he lent me his notebook with Win 10. I could see all three HDDs in Disk Management and that they are set to Storage Space, but in the Control Panel I could only create a new storage space; I couldn't see the existing one. I don't know how to access an already-created storage space on another Windows install. So, if I'm not wrong, in the case that I have to reinstall Windows on my system, I'd no longer be able to access the data in the storage space. Do you know how I could bind a created storage space to a drive letter on another/new system?
Storage Spaces should be able to move between systems. I'd guess the difference is due to one system being Windows 11 and one being Windows 10. The Windows 11 system is likely using a newer version of Storage Spaces that the Windows 10 system can't use.
@@ElectronicsWizardry Hello again. I was playing around and installed Win11ToGo on a USB SSD, and there I can see the storage pool in the GUI. So I was looking in PowerShell and could see (Get-StoragePool) that it was set to read-only. I was able to fix it with:
Get-StoragePool -IsPrimordial $False | Set-StoragePool -IsReadOnly $false
but still wasn't able to access the pool. So, more searching... Then I found a solution. I could see that it was detached by policy:
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus, DetachedReason
The fix, to automatically attach all non-clustered virtual disks after Windows restarts, was to open a PowerShell session as an Administrator and use the following command:
Get-VirtualDisk | Set-VirtualDisk -IsManualAttach $false
And finally I was able to see and access the virtual drive. Since I'm not familiar with PowerShell, I hope that everything I did was right; I just copy-pasted the commands from learn.microsoft.com and it works.
Can you tell me how to calculate the maximum usable storage in a pool with parity if I'm using drives with different sizes? For example, in my test setup I was using 1x 1TB, 1x 2TB, 1x 3TB & 2x 8TB. The GUI sets a parity pool to 10.5TB, and I don't know how that was calculated. If I calculate (1+2+3+8+8) = 22 / 3 * 2, I get 14.66TB. I can understand that, because of the huge size differences, my calculation is wrong, or only works if all drives were the same size.
Amazing video. I have had the most love-hate relationship with Storage Spaces for the last 2 years, and this resolved all of my performance issues. I do have a new issue, though. I was doing testing and created the virtual disk in Windows Server to circumvent the Windows 10 63TB limit on Storage Spaces. When in Windows Server, the performance on 10x 18TB Seagate Exos is amazing (400-900MB/s). The issue is that when I switch OSes to the latest build of Windows 10, or even Windows 11, I am able to see the storage space and use it, but I get middling performance (60-100MB/s). Switching back over to Windows Server 2022 brings back the great performance. Does anyone have any ideas of what is going on? In this video he is showing great performance on Windows 10, so I'm not sure why I am having issues, unless Microsoft is limiting the speed based on circumventing the 63TB limit?
I haven't seen this exact issue, but it seems interesting to take a look at. Can you explain what you did to get over the 63TB limit? Is this all with NTFS?
Thanks so much for getting back to me on this. Getting over the 63TB limit in Windows 10 was pretty easy, but I first made the storage space itself inside of Windows 10 so that it is recognized there (I'm not sure if storage spaces created in Windows Server use a newer version, but they do not show up in Windows 10, only showing the protected storage space partitions in Disk Management). After the storage space was created in Windows 10, I booted up Windows Server 2022 and used a version of the PowerShell script you used to create a 130TB virtual disk. Then when I reboot into Windows 10, the 130TB storage space shows up perfectly, circumventing the Windows 10 63TB limit. The only issue is that the performance is terrible compared to Windows Server (after more testing, Windows 10 is between 50-100MB/s where Windows Server is anywhere between 600-1200MB/s). I would love to dive into this with you if you were interested, just let me know. EDIT: Also, yes, all NTFS, and using your math: I have 10 drives, 2 of them parity, so 8 x 256KB; I am using a 2048KB cluster size. @@ElectronicsWizardry
@@mattgoldfein1423 Thanks for the additional information. I will take a look at large disks in Storage Spaces, see if I can make a video on this, and see if I can reproduce and solve the issue you're running into.
Hey, I just wanted to update you on some more testing I have done over the last few months. I just tried the latest version of Windows 11. In the latest version they seem to have completely overhauled the GUI for Storage Spaces. They also allow you to create volumes that are dual parity and over 63TB, even from the GUI, from my testing. The only thing that sucks is that the speed issues are still there on my 130TB dual parity storage pool. Not sure if there is something I am missing here, but like I said a few months ago, Windows Server 2022 is still doing pretty great, with high speeds of over 500MB-1GB per second.
Thanks a lot for the trick! However, does anyone know how this works with ReFS and storage tiering? I experienced much better results with ReFS instead of NTFS, especially in combination with storage tiering. The ReFS format dialog just offers me 4096 or 64K block size. For my config with 5 parity columns in the HDD tier, I guess it should be 4 data disks, and 64K x 4 should be a 256K block size.
Yea, ReFS is much more limited in cluster size. I haven't tested every config, but I think ReFS with a 64K cluster size still gets decent performance with parity. ReFS handles real-time tiering, unlike NTFS, which has a task to re-optimize the tiers every night, and it generally does much better with tiering than NTFS.
@@ElectronicsWizardry Thanks for that answer. So would you say it's fair to say that NTFS is the choice for non-tiered setups if the cluster size is set correctly, and otherwise ReFS is the way to go, with tiering?
@TheFpdragon Yea, with tiering I'd go ReFS if possible. I'd also go ReFS if you want features like checksums, or if your program can use ReFS features. Otherwise NTFS has the larger cluster size support.
@@ElectronicsWizardry I guess ReFS in theory does not need larger cluster sizes because it has 128-bit inode addressing? Not sure about that correlation, but I guess that was the thinking behind it... It seems nobody has thought of the advantages that larger clusters could give with parity, which you have found? Wild guessing while looking at you, Microsoft...
My laptop fixes slow SD cards, ones which couldn't be fixed by other computers. I just pop them into the card reader, then the card jumps to a higher speed and keeps its new speed. Even tiny bits copy fast, such as DOS or Win 3.1. Previously it didn't work.
Good video. Thank you. It took Microsoft only 10 years to go from "shit is not working at all" to "it's just garbage". Well done, Microcrap. Now we only have to wait another 12 years to get useful RAID 5, maybe RAID 6, software on Windows.
Storage Spaces is such a mixed bag... I first tried it a year or so ago with a quad NVMe AIC running a basic stripe on PCIe 3.0 drives, and was amazed I got over 12GB/s sequential speeds... Then I tried a striped mirror and was bamboozled by the performance degradation... Then I tried parity, and might as well have used HDDs.
This kind of represents the industry as a whole:
- Open source/Linux goes and does it first, does it badly for a while, then does it well, but puts it behind a stupidly complex learning curve so that only infra/sysadmins will ever care to use it at home (and even professionally will stay away from it without official certifications and/or maintenance-contract speed dials).
- Then along comes Microsoft and tries to ease it in for consumers and SoHo users with a GUI, but botches it up so badly you pretty much have to shell into everything or go read their (well-crafted, yet STUPIDLY contextual) online docs.
- Finally comes either Apple, which perfects what Microsoft and Linux did, with one or two key features visible to users, and makes sure any interop that may exist is eliminated so people only use it with iThings; or a cloud provider, which does perfect the original goal of versatility and performance, combines it with MS's or Apple's ease of use via a nice interface or an actually straightforward CLI/API... and then does pretty much the same as Apple and closes it all down to their infra. Maybe if they're a cool company they eventually FOSS it 6 years later, and 2 years after that we get a spill of brilliance for the common mortals. You know, like ZFS/TrueNAS. Or QEMU/KVM/Proxmox.
You are becoming the Storage Spaces reference on YouTube, great work!
Please NEVER EVER delete this video! :) Thanks a lot, the best tiered storage pool video on YouTube!
Definitely appreciate you taking the time to show us how to improve data speeds here! Thanks for the help!
I just struggled with this and finally found a combination that works well with my 5x 8TB drives. I saw up to 200MB/s writes copying from a single backup disk.
5 columns, 16KB interleave => 64KB data stripe size, matches 64KB NTFS cluster size (AUS).
Run in PowerShell as admin. Here is what I did:
# Grab every poolable HDD
$Disks = Get-PhysicalDisk | ? CanPool | ? MediaType -eq HDD
# Create the pool, defaulting new virtual disks to parity
New-StoragePool -FriendlyName "8TB Storage Pool" -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $Disks -ResiliencySettingNameDefault Parity
# 5 columns x 16KB interleave = 64KB data stripe (4 data + 1 parity), write cache off
New-VirtualDisk -StoragePoolFriendlyName "8TB Storage Pool" -FriendlyName "RAID Media" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 5 -Interleave 16KB -ProvisioningType Fixed -UseMaximumSize -Verbose -WriteCacheSize 0GB
Hopefully this helps someone, as I struggled for days finding this out and testing a lot of combinations.
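A quick sanity check after the commands above, using the standard Storage Spaces cmdlets, to confirm the layout took effect:
Get-VirtualDisk -FriendlyName "RAID Media" | ft FriendlyName, NumberOfColumns, Interleave, ResiliencySettingName
Then initialize and partition the disk as usual, and format NTFS with a 64KB allocation unit so the cluster size matches the 4 x 16KB data stripe.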
This video is great. I also went over and read the article at storagespaceswarstories. This statement stood out in the article: "As you can see, a lot of thought has to be given to the architecture of Storage Spaces before they are created, because once these values are set, it is hard or impossible to change them." This gives me pause when considering expansion. Here's the question... If today I have 2x4TB drives and 2x6TB drives, I assume I'm doing a 3 column solution with the calculated interleave, AUS, etc. Now, I've decided to expand and add 3x10TB drives (I'm getting big time here). Can I / Should I / Would I need to / Would I be able to adjust the column number, AUS, etc that was set in the initial build without starting from scratch? Thanks again for being a great resource.
It's unbelievable! Very useful and very cool! NEVER, do you hear me? NEVER delete this video! This is very important information. Definitely subscribe to the channel, like the video and add it to your favorites! Thank you for what you do! It's very helpful! You are a cool man!
Absolutely fantastic tutorial, thanks. I got drives expecting to be able to do a proper RAID 5; this was super helpful for figuring out the best I could actually do. Thanks!
I am so happy I found this update!!!!! I've also been using Storage Spaces for 10 years and have just lived with crappy parity write speeds. The last time I rebuilt my server, I put Server 2016 Essentials on it, so I'm curious if I can try experimenting with a new virtual disk to see if I can get better performance. I'm currently using 7 drives with dual parity, which doesn't seem to line up with what you mentioned, but I'll play around with the settings to see if I see any improvements.
Adding SSDs as "journal" disks is the best way to improve parity speeds in Storage Spaces, as they effectively act as a write cache. I've been doing this since Server 2012 R2.
Are there any good guides for this? Googling seems to bring up very little information.
@@DisTurbedSimulations Does it make a difference on 25GB files?
How do I do this?
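For those asking how: a minimal sketch of the journal disk setup described above, assuming an existing pool named "Pool" (the name is hypothetical) and SSDs that show up as poolable:
# Add the SSDs to the pool flagged for journal use (acts as a parity write cache)
$ssds = Get-PhysicalDisk -CanPool $true | ? MediaType -eq SSD
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $ssds -Usage Journal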
Your tutorial here was an amazing Life Saver. Thank you. Your explanation and instructions were great.
As a side note, I kept running into a stupid error where somehow Windows decided to flag all my drives (that before the wipe were in a storage pool) as unable to be pooled. I had to reset the "CanPool" attribute in PowerShell before I could create a new pool with them.
I ran into the same problem and was able to add the disks to a storage pool only after running the PowerShell command Reset-PhysicalDisk -FriendlyName "<disk name>" to reset the disk in question. Use Get-PhysicalDisk to determine the name.
How do you do that??
@@sithseven It's been a few months since I looked at it, but according to my logs, first I used:
Get-PhysicalDisk -CanPool $True | ft FriendlyName, OperationalStatus, Size, MediaType
And then used the Set-PhysicalDisk command to set the "false" drives back to True.
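Pieced together from this thread, the check-and-reset sequence might look like this (disk names are hypothetical; CannotPoolReason usually explains the flag):
# List every disk with its pooling status and the reason it can't pool
Get-PhysicalDisk | ft FriendlyName, CanPool, CannotPoolReason
# Wipe the stale pool metadata from the stuck disk
Reset-PhysicalDisk -FriendlyName "PhysicalDisk1"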
Thanks!!! I always avoided using Spaces because of this same problem, but this made it usable.
Thanks for pointing out my issue; after testing things, my write speed is now stable and not going up and down.
I definitely noted the abysmal write speeds in a 3-drive parity pool a year or so ago when testing with just a trio of spare 5400rpm laptop drives for experimentation; I will retest with the new allocation size of 512K as soon as possible, as a 10X improvement on write speeds is nothing to sneeze at! Nice work!
Thank you for this. Storage spaces has been a pain in the butt.
I see Windows 11 now has the option to set the Allocation Unit Size. When giving the new volume name and file system, there is now an advanced section that drops down to offer Allocation Unit Size.
1:00 I think a slide would help to present the most optimal cluster size.
5 disks - 1 parity disk = 4 data disks
256KB default interleave size x 4 data disks = 1024KB optimal cluster size
Can the interleave size be larger than the default?
@electronicswizardry
So if you did 6 disks - 2 parity = 4 data disks, 256KB x 4 = 1024KB? ... right?
@neko77025 Yup, that seems right. But I think you need 7 drives or above for dual parity, and 6 and under is single parity, though there may be an option to override these defaults.
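The rule of thumb from the video, written out as a quick calculation (a toy example with 6 disks and dual parity):
# (columns - parity disks) x interleave = optimal NTFS allocation unit size
$columns = 6; $parityDisks = 2; $interleave = 256KB
($columns - $parityDisks) * $interleave / 1KB   # 1024, i.e. a 1024KB cluster size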
Excellent research and video, Thanks.
I'd like to add that I had a parity array of 5x 16TB drives with 58TB of usable storage; I reformatted the PC and the pool just reappeared on its own right after formatting. Works perfectly for me.
Yesterday I configured Storage Spaces on my Windows 11 machine with 7 x 2TB hard drives. I created several dual-parity virtual drives and found out that as soon as I set the allocation unit size of the virtual disk to 32KB or over, write speeds DRAMATICALLY improved (from 30 MB/s to 350 MB/s). I used the new interface in Settings to create the pool and virtual disks.
Well, today I tried running CrystalDiskMark and the write speeds were not impressive. I guess copying a 30GB file doesn't accurately replicate what CrystalDiskMark tests. Now to do some more testing.
@ahslan7304 CrystalDiskMark seems to do something weird with Storage Spaces, and I'm not sure why. I tried using diskspd, which is used by CDM under the hood, with the same commands I saw in the source code but different numbers. I'd try that file copy test, as it seems to be a better representation of the speed of the virtual disk.
This is a very informative channel, I really like it. Greetings to a fellow nerd :)
Ok haven't even finished the vid yet. And I'm subbed.
What are the specs of the server, HBAs, and disks used in your server test bed?
I'm running an HPE DL325 G10, 4x HPE 2.4TB 10K HDDs, 2x Samsung PM863a SATA SSDs, and 2x Intel X550 NVMe SSDs.
Using Windows Server 2022, I get about 500 MB/sec sustained with 1.4 GB/sec burst when the 4 HDDs are in a 2-way mirror. Server memory is 128GB, and Windows manages it better than in Server 2019, or at least reports what is modified better.
When I configure a virtual disk in PowerShell as a parity drive with 3 columns and 1 redundancy disk, I get around 250-324 MB/sec sustained, with an average of 275 once the memory cache settles.
In both cases, I use ReFS as I am primarily using virtualization for the hosts, and the block size is 4KB.
I love that you are digging under the hood and performing this level of testing, but you have to keep in mind that performance is dependent on use case, and optimization has to take use case into account. I've played with larger 64KB allocation units, and they don't provide real-world performance gains unless your workload is sequential and the files are larger than 64KB. Microsoft has some detailed articles covering ReFS allocation sizing, and they firmly stand behind 4KB when using virtualization, even though the virtual file is going to be one big flat file. Every IOP to the VHDX has to read, modify, and write the allocation unit, and small random reads suffer badly when the allocation unit size is much larger than the data accessed within that allocation unit. So you could run Crystal benchmarks etc. and get fantastic performance in synthetic testing, and the real-world performance could be abysmal. Microsoft Exchange JetStress, Microsoft DiskSpd, or Intel IO/NAS testing tools would show a better performance overview.
Keep in mind, Microsoft has multiple methods to combat poor Parity disk performance in storage spaces:
1) virtual disk tiering; available from the Virtual Disk creation wizard. Using SATA SSDs for tiering offers a massive random IO performance increase but can limit raw sequential performance unless you have enough SSDs, while NVMe can offer a massive increase to both random and sequential performance. I've seen a performance aggregate of 4+ TB/sec when using NVMe SSDs in the performance tier.
2) mirror accelerated parity via PowerShell; adding mirrored HDDs/SSDs to the HDD parity disks in a given virtual disk will stage all writes to the mirrored disks first, eliminating the parity bottleneck so long as the data being written doesn't exceed the size of the mirrored disk space in the virtual disk.
3) storage spaces bus cache; install the failover clustering component on the 2022 server, and NVMe or SAS/SATA SSDs can be used as a read cache in cached mode, or a read/write cache in shared mode. This option has a lot of moving parts under the hood and less control of the configuration, but works well to absorb bursty writes in shared mode. Microsoft leans towards the bus cache in cache mode to provide read caching to virtual disks configured as mirror accelerated parity when server-level disk resiliency is required.
Would love to see more testing on all the above methods and detailed benchmarks!
Karl
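On point 2, a minimal sketch of mirror-accelerated parity on a standalone box (pool, tier names, and sizes are all hypothetical):
# Define a mirrored SSD tier and a parity HDD tier in the pool
New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "MirrorTier" -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "ParityTier" -MediaType HDD -ResiliencySettingName Parity
# Create a volume spanning both tiers; writes land on the mirror first
New-Volume -StoragePoolFriendlyName "Pool" -FriendlyName "MAP" -FileSystem ReFS -StorageTierFriendlyNames MirrorTier, ParityTier -StorageTierSizes 200GB, 8TB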
Glad I watched this video before making a storage space for my main computer.
Still feeling bad about not being able to use ReFS and having to use NTFS, because I have the Home version of Windows 11.
The storage space is 8TB, 3 SSD drives with 1 drive for parity.
Dunno if ReFS vs NTFS would have made a significant difference in my use case, e.g. serving media files to one user.
NTFS vs ReFS doesn't make a difference for most use cases, and you're likely not missing out on much. Some of the advantages of ReFS as I see them: checksumming (must be enabled, as it's disabled by default), better tiering/caching in Storage Spaces, and better handling of duplicated files (can save a lot of space with supported backup programs).
Can you do a video on NTFS vs. ReFS?
ReFS is supposed to be the successor to NTFS, yet we are all still doing performance optimisation on NTFS.
Sure I can do a video on that. Give me some time to do some testing and research.
ReFS is still missing many features that NTFS has, so it seems to work as an additional filesystem for some use cases and not yet as a replacement. I am currently using ReFS for some Hyper-V drives and as a Veeam backup drive, and it has been working well there.
I did some testing on cluster size with ReFS and Storage Spaces parity performance and found that going from the default 4K cluster size to the maximum of 64K allowed significantly better performance.
I'm looking forward to that video.. Good luck with the channel, mate. Cheers.
Very useful, thanks!
So with 4x 4TB I will have 3x 4TB + parity. With a cluster size of 1024KB, how close to a 4+1 at 1024KB will it be in performance? 95%? 90%? Less than that and I'll just buy another hard drive for a total of 5.
Nice informative video mate cheers
This video sounds very useful!
I may have to experiment with my 10 x 3 TB spinners (which would need to use 8 data and 2 parity columns) to see what speed I can get out of them with various stripe sizes and (stripe-size * 8) cluster sizes to find a sweet spot where doubling the stripe and cluster size hardly gains anything.
I currently have them as a Windows Striped Volume for speed, sustaining 2 GB/s at the outer cylinders.
*** any important data backed up to some offline drives :)
Now isn't that just typical!
I was just about to test Storage Spaces on my 10 x 3 TB spinners when one of them decided to die!
10 columns, 2 parity should have worked, so I tried 9 columns, 1 parity, but Storage Spaces said it was out of range!
It liked 8 columns, 1 parity, but that is useless!
So I have set it to 5 columns, 1 parity spread across the 9 drives, which seems to work and manages about 320 MB/s with 16K interleave and 64K clusters.
I am a first-time sysadmin with Storage Spaces, have 8x 14TB drives, and am running into cluster size problems. The usable size seems smaller than what I should have. Any suggestions?
Hello, how would I go about doing this with 12x 20TB WD Red Pro drives? I only have 7 clean and would need to add the others as I go.
This is awesome. Could you do this on Windows Pro for Workstations with ReFS as well? There don't seem to be as many options, but I'd love to see a writeup.
Ok, so here is a bit of a curveball question. I understand the process of what is going on here, but the issue I am running into is how to set up parity (RAID 5) with 4 disks. Calculating for 3 disks (2 data + 1 parity) is easy, since you just multiply your interleave value by 2: 256KB x 2 = 512KB AUS. So how would I set this up with 4 disks? I would have 3 data disks and 1 parity disk, but since I have to multiply the interleave value by 3 I get 256KB x 3 = 768KB, and 768KB isn't a value I can set my AUS to unless there is another way to do that. I don't really know how to proceed here. Any help is appreciated.
You can set the number of columns to be smaller than the number of disks. In this case, setting the number of columns to 3 with single parity gives you 2 data columns and lets you set optimal cluster and interleave sizes. The disadvantage of this method is that there is more parity than needed. For example, with 4x 1TB drives, a column count of 3 gives you about 2.66TB, where a column count of 4 would give 3TB usable. I can test the performance of this setup if you want.
Ok, so I shouldn't expect the same kind of performance hike that we see with a 3-drive, 3-column setup unless I use 3 columns with my 4 disks. Right now I get about 20-30MB/s transfer speeds when copying a single large file, but around 2-10MB/s when I copy many smaller files. What I am looking for is the best-case scenario for a 4-disk, 4-column setup with reasonable speeds, since I should only have 1 drive dedicated to parity with the other 3 available for data. A test of a 4-disk, 4-column setup would be super appreciated. Also, on a separate note, would having an interleave smaller or larger than 256KB make any difference? Say, for a 3-column, 3-disk setup, having an 8KB interleave and a 16KB AUS? @@ElectronicsWizardry
@@Wolfgang0117 I'd try the 4-drive, 3-column setup here and see what performance you get; I'd guess it's better. I also think the larger interleave and cluster size will typically improve performance, and I would only go with the small sizes if you store lots of small files.
I'll set up the performance test with 4 drives on a Storage Spaces system in a bit.
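In the meantime, a rough sketch of the 4-drive, 3-column layout described above, assuming an existing pool named "Pool" (names are placeholders; run in an elevated PowerShell):
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Parity3Col" -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 256KB -ProvisioningType Fixed -UseMaximumSize
# 2 data columns x 256KB interleave = 512KB, so format with a matching 512KB allocation unit
Get-VirtualDisk -FriendlyName "Parity3Col" | Get-Disk | Initialize-Disk -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -AllocationUnitSize 512KB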
I've found that Windows Server 2022 can bypass the write cache with an interleave size of 64 KB and a cluster size of 8 KB (with 5 data columns + 1 parity column). It might be worth trying Server 2022 if you require more flexibility.
Will six disks also work with dual parity? Wouldn't the AUS be 1024K? 256KB x 4 data columns, with two parity drives.
Thanks for sharing!
Hey, can you tell me how to do this like your video where you had HDDs & SSDs? My RAID setup is 3x 10TB WDs and 3x 1TB SSDs (with a 4th 10TB on the way).
Man, thanks. This is a huge help.
BTW, I use software called PrimoCache to add an SSD as a cache drive.
Has anybody tested a drive failure or a RAID recovery in Storage Spaces? I've read some terrible things on Reddit, and it has me questioning even using Storage Spaces anymore.
Would the larger cluster size also speed up mirrored ReFS pool containing lots of large files?
OH NO, I just set up my server a few months ago and have been having TERRIBLE write speed issues. I had seen videos in the past mention it's just the way Windows Server is for parity raids.
I don't 100% understand the part about the ideal number of drives or calculating the correct interleave; I'm running 15x 14TB drives.
Now I need to try shuffling 60TB of data back off that raid onto something else so I can reformat.
But I thought ReFS was recommended for Storage Spaces? Is the cluster size adjustable on that? I have a lot of small files, so using large cluster sizes would mean a lot of wasted disk space.
ReFS does have many features, like checksumming and tiering, that work better with Storage Spaces than NTFS does. I think I was using NTFS here as it's the only option in most editions of Windows 10/11.
If you have lots of small files, try experimenting with interleave sizes, but this trick won't work well there since it wastes space, as you pointed out.
@@ElectronicsWizardry Thanks. It seems that ReFS only has 4 KB or 64 KB cluster sizes. I'll see how it is with the default and experiment.
What to do if you have 4 drives (3 data + 1 parity)? There is no 768K option. Should I set it to 1024?
Yea, there isn't a 768K option unfortunately. The options I see here are to set the column count to 3 so that it works correctly with the power-of-2 cluster sizes like 512K and 1024K. This means there is 66% data on the pool, compared to the optimal 75% with a column count of 4. The other option is to try the closest values and see if the performance is sufficient for your needs.
You mentioned, to enter the correct ratios, you take the number of columns minus 1 multiplied by the interleave to get the size of the NTFS cluster (Allocation unit). Are you subtracting 1 because 1 of the drives is the parity drive? What if I have 4 disks configured in two way mirroring and no parity? In this case I have 2 columns. If the above is true, my interleave should still be half the size of the NTFS cluster size? Or, should I still subtract 1 and end up with the same size interleave and NTFS cluster size?
On another note, no matter what I tried, I can't get a parity setup to write faster than 70 MB/s. I even set it up with just 3 disks following you exactly and my write speeds were 70 MB/s. In the above mirroring setup I can write at 380 MB/s. (Windows 11 with a storage pool of 65.4 TB)
This video was all about parity; none of these rules apply with mirrors, and they should be immune from these issues. The big issue with parity, I think, is how Storage Spaces deals with the write hole: I believe the slow write speeds come from the system confirming the whole stripe and its parity are written correctly. With mirrors this isn't an issue, and I have seen very good speeds with mirrors in Storage Spaces.
The one I was subtracting was the parity drive. With single parity subtract one drive, and with dual parity subtract two for the two parity drives.
It's still odd you can't get higher speeds with these tricks. I'd try playing with NTFS cluster sizes a bit more, as larger ones perform a bit better in my tests.
I have 12x 10TB Barracuda Pro 7200rpm HDDs. I want to do dual parity with 2-disk failure tolerance. What is the ideal column number/AUS/interleave equation here? Needed for file transfers between 5-25GB.
With 12 drives, this is what I'd try: use 10 as the column count. I think this will force dual parity, so 2 parity and 8 data columns. Then set the interleave size to 128K and a 1M cluster size in NTFS and see how that works.
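A sketch of that reply as PowerShell, assuming a pool named "Pool" built from the 12 drives (untested on this hardware; names are placeholders):
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "DualParity" -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -NumberOfColumns 10 -Interleave 128KB -ProvisioningType Fixed -UseMaximumSize
# 8 data columns x 128KB interleave = 1MB, so format the NTFS volume with a 1MB allocation unit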
@@ElectronicsWizardry Good man, I'll give it a go. Thanks for the response.
@@ElectronicsWizardry In a similar situation as @86ortega; I also have 12 x 10TB drives. Currently trialling Windows Server 2022, as after experimenting with Unraid and others I just get on with it the best. Are you saying to just create a parity of 10 disks and leave the other two as spares? Guessing the next jump from 10 (2 parity + 8 data) would be 18 (2 parity + 16 data), albeit having that many drives in a pool with only two drives of parity may not be advisable! 😂
@@Guttersniper35 With your 12 drives, I'd set the number of columns to 10 with dual parity, giving 8 data columns for best speeds. These 10-wide stripes are spread evenly across all 12 drives, so all the drives get used. If you want to calculate the available space, find the data-to-total ratio (data columns divided by number of columns, 8/10 = 0.8 in this case). Then multiply that ratio by the total capacity (12 x 10TB = 120TB, x 0.8 = 96TB usable, before TiB conversion and overhead). A traditional RAID 6 across all 12 drives would have a data-to-total ratio of 10/12, or 0.83, giving ~100TB usable. Hope that explains what's going on with a number of columns less than the total number of drives.
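The same arithmetic as a few PowerShell lines, if you want to plug in your own numbers:
$drives = 12; $driveTB = 10; $columns = 10; $parity = 2
$ratio = ($columns - $parity) / $columns   # 0.8 data-to-total
$drives * $driveTB * $ratio                # 96 TB usable, before TiB conversion and overhead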
@@ElectronicsWizardry Still a bit confused: if I do what you say and I have 8 data drives and 2 parity, what happens to the other 2?
Should I set two as hot spares and therefore still be able to use your formula?
Hi, thanks for this info. I'm in the process of figuring out which solution to use for a NAS disk using RAID 5, and I've stumbled onto a dilemma regarding file corruption/bit rot. Can Storage Spaces do error checking or checksum verification? Furthermore, can it repair data that has been corrupted in a parity storage system, or would that require another piece of software?
Storage Spaces with ReFS as the file system can do checksumming and repair data that doesn't match the correct checksum using alternate copies. Look up the Microsoft doc about ReFS integrity streams to learn more.
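For reference, integrity streams can be checked and enabled per file or folder with the FileIntegrity cmdlets (the path is a placeholder):
Get-Item "X:\Data" | Get-FileIntegrity
Set-FileIntegrity "X:\Data" -Enable $true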
@@ElectronicsWizardry 👍👍👍Thank you
What if the number of drives you are adding isn't the whole total? I'm just starting up, and I will be doing two drives, but then once I turn that into a drive pool I will be adding about 6 more drives. I can't add them all at once because I need the data first, so I have to do 2 drives first, move the data from a few drives, then format them.
You can expand Storage Spaces pools pretty easily, but it can be tricky with virtual disks having a fixed stripe width. You might want to get a secondary space to store the data for now if possible. If you can't do that, I'd make those 2 drives into a pool with a virtual disk and then add drives later on. Then, once it's all moved, make a new virtual disk with the correct parity/number of columns you want and copy the data to the new virtual disk.
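For what it's worth, adding the later drives to the existing pool is a one-liner (the pool name is a placeholder); the catch, as noted above, is that existing virtual disks keep their original column count:
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)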
What are the valid values for interleave size? Or can it be anything?
I currently have two storage pools, one containing 4x 3TB drives and another 3x 6TB, set up in parity. Write speeds struggle once the cache fills up, after about 1GB.
If I'm understanding this correctly, I should set the cluster sizes to 768K and 512K, correct?
Thankfully I recently bought a big 20TB drive that I can use as a cold backup, so having to rebuild the storage spaces isn't going to be an issue with data loss.
However, I do have a question. For my first pool (4x 3TB) in parity, I should have 8.18TB of usable space. The last time I had a problem with my storage space, I had filled up the space entirely: 100% utilization on the drives, and Storage Spaces reported that it was inaccessible. I couldn't even add another drive and load-balance, resulting in some data loss.
Is it a known issue where if the space gets entirely full, the space becomes inaccessible? The drives were fine, but entirely inaccessible and the pool had to be rebuilt. Since then, I've been hesitant to even approach full utilization of the drives..
I'd try setting the cluster size to 512K and the number of columns to 3 for both of those. It will waste a bit of space compared to setting the number of columns to 4, but it should help performance. 768K with 4 columns would also be an optimal layout if NTFS offered that cluster size.
I'd also be tempted to make one big pool if it were me, as I find it easier to manage data that way, and Storage Spaces handles mixed disk sizes pretty well.
I haven't seen that issue when a pool is 100% full.
@@ElectronicsWizardry Much obliged. I feel that at this point my bottleneck is my motherboard. I'm finding now that having several drives spanned across SATA2 and SATA3 ports isn't ideal. My first backup went quick, but copying back after formatting to 512K took forever.
I'm trying to wrap my head around this: how would you set up a 5x 18TB array with dual parity? I want to add 3x 18TB to it after I've copied over the data from those 3 to the 5x 18TB dual parity setup. Appreciate your help! 😊
Unfortunately the dual parity documentation seems to be even worse. I can't find a simple way to do this, so I'd probably make a thin-provisioned single parity virtual disk to do the initial copy. Then add the 3 extra drives and make a new dual parity virtual disk (I think the only way to do dual parity is to set a column count of 7 or more, but I can't find good info about it). Then you should be able to copy data from the single parity virtual disk to the dual parity virtual disk, and delete the single parity virtual disk once the data is all copied over.
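A rough sketch of that migration, assuming a pool named "Pool"; the staging size, the names, and the 7-column guess for dual parity are assumptions, not tested:
# Thin-provisioned single parity virtual disk for the initial copy
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Staging" -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 -NumberOfColumns 5 -ProvisioningType Thin -Size 60TB
# After the 3 extra drives are added to the pool:
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "DualParity" -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -NumberOfColumns 7 -ProvisioningType Fixed -UseMaximumSize
# Copy everything over, then: Remove-VirtualDisk -FriendlyName "Staging"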
@ElectronicsWizardry Thank you, I'm gonna try it. Acquired a QNAP JBOD 8-drive array for Plex and want to have 6/2 redundancy at the end, all shucked 18TB MyBook drives 😀 Appreciate the feedback, and of course a third backup of this pool on LTO.
Would love to see you retest this with ReFS instead of NTFS.
Thanks for the idea. I'll do a follow up soon with more ReFS, and using the new features of Windows server 2025.
Nice, nice.
Just a question: did you ever manage to get an acceptable speed with 4 drives?
I've got 4 SATA ports on my mobo, so I bought 4x 4TB drives and could never get write speeds over 50... Everyone on the internet has been saying I should use 3 drives instead... but I have 4, lol...
Is there an alternative software RAID that could be used on Windows? I know Mac has one.
With 4 drives you can use 3 columns to get good performance. I think having 4 columns will still provide decent performance with large clusters. I’ll test how well 4 drives work.
Wish I found this earlier! I'm suffering away at about 40mbps backing up my Unraid server with a TrueNAS VM... going to migrate to native ZFS now in Unraid... Already DAYS into this, and it's going to take at least 1 more!! So much for my 2.5Gb connection direct between the two machines =/
I did this on my Win 11 Plex server; it was driving me crazy! Not sure I'll get into the PowerShell bit yet, but for now just creating the pool and formatting at 512K leveled everything out. Thank you for the video!
Edit, in case anyone has an answer: I currently have a 3-drive pool but need to add another drive to it. Will the new drive get formatted by Storage Spaces at 512K?
I'm a little overwhelmed. Thanks for the video.
5x 16TB drives. I think these should be set up like this for single parity:
5 columns, 256KB interleave, 1024K AUS.
Does that look right?
I tried buying a hardware RAID controller and discovered those are consigned to history, because "software" and the PCIe bus are super duper fast now.
You want to set columns to 5 here, as 1 parity column and 4 data columns = 5 total columns. The interleave and NTFS cluster size look good.
As for RAID cards, they do seem to be going away, but if you want hardware RAID, look up cards like the LSI 9361-8i. Fairly cheap on eBay, and should be pretty performant.
@@ElectronicsWizardry Now I'm off to moving the data from the small drives to the giant drive.
The USB Drive is saturated at 168 MB/s read, the writes are going from 0 to 943 MB/s waiting on the USB.
Copying data from the NVME is doing 550MB/s steady.
Question: if I am only using a 2-drive setup, for example 2x 4TB drives, can this be done?
3 or more drives are needed for a parity setup. With 2 drives you can use the simple or mirror modes in Storage Spaces; these are similar to RAID 0 and RAID 1. These modes don't have the write performance issues parity does and can achieve high speeds with the proper hardware.
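A minimal sketch of the 2-drive mirror route, assuming a pool named "Pool" already exists (names are placeholders):
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror2" -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 1 -ProvisioningType Fixed -UseMaximumSize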
FANTASTIC videos!
I have 2 NVMe SSDs on my Windows 10 machine, which will also use two 10TB drives in RAID 1. How can I use one of these NVMe drives as a cache drive for Storage Spaces? I am not proficient in PowerShell, if that matters.
I used a program called PrimoCache for my Windows Storage Spaces; works great. It does cost $30 and is tied to the motherboard S/N, but it works and it's easy.
So, if you have cluster size set to 1024KB and you save a file that is 25 KB in size, it will use 1024 KB of storage space to store your 25 KB file.
Yup, that's right. That's one disadvantage of this approach. I have often seen people using large parity arrays mostly for large files, so this isn't a big issue, but it depends on your use case.
So I tried every cluster size and got the same results. I have 3x 20TB drives in parity getting 30 MB/s no matter what I change. I know I'm missing something.
I tested this trick today and it works great. Sadly it only works with 3 & 5 disks on single parity; 4 disks had the normal terrible performance regardless of the interleave and cluster sizes.
Incorrect. I'm using 4x 10TB WD Red 5400rpm CMR drives (WD100EFAX if I remember correctly; you will find little info on them because they were created as a dedicated model for a Synology NAS or something like that, but WD101EFAX is pretty common). I don't recall the settings; I guess I have 32kB interleave, 64kB NTFS cluster, and for sure 4 columns & parity (not mirror!). With big files, something to the tune of 100GB (so they can't fit into any cache at all), I have sustained writes around 360-380MB/s, and that's not on empty drives where they would write on the outermost, fastest tracks. Reads around 420-440MB/s.
Before aligning interleave, columns and NTFS cluster size, I had the dreaded 35-40MB/s writes on exactly the same hardware.
Intel Celeron G3900, 8GB RAM, so old, uninspiring, underpowered hardware. Testing by copying a big file via Far Manager, Total Commander and Windows Explorer; they vary by 20-30MB/s. The only way I can perform reads is by copying the file to "nul" in Far Manager. Didn't use the command line at all; no copy.exe, xcopy.exe nor robocopy.exe.
The same hardware with 8x HGST SAS 6TB 7200rpm drives and an Adaptec 71605 adapter in HBA mode gives me ~900MB/s reads and ~800MB/s writes in Storage Spaces (fun fact: CPU utilization 70-75%). Columns 8, interleave 32kB and 64kB, NTFS cluster size 64KB. Didn't try a larger NTFS cluster due to time constraints (can't test every combination), but when I used an interleave >= 128kB I actually got less performance; I don't have my notes with me, but there were only ~250-300MB/s writes, don't know why.
Didn't try anything other than 8 columns, for shortage of time (those tests were done like 1am to 4am).
All tests were on the same installation of Windows Server 2022, patched/updated to the March 2023 level. I'm not a big fan of shingled (SMR) drives in any form of RAID or storage... make sure your problems are not caused by the drives themselves. There are many small SMR models these days, which was not the case just two/three years ago.
I'm gonna read:
www.dell.com/support/manuals/sk-sk/storage-md1420-dsms/dsms_bpg_pub-v2/
storagespaceswarstories.com/storage-spaces-and-slow-parity-performance/ and Storage Spaces articles on that site
Thks & you're scary smart ;)
Love your content! Right up my alley. I've got a weird one for you: based on one of your videos, you convinced me to set up ZFS on Proxmox for redundancy. Basic problem: the performance is nowhere near what I expected. Setup: dual Xeon v4; in the x16 PCIe slot (gen 3) I put an Asus Hyper M.2 card with 4 Samsung 970 Evo Plus NVMe drives (stripe of mirrors).
The ZFS pool hosts the OS and the VM disks; I used SCSI (VirtIO SCSI single) and local-zfs:vm* (iothread, cache writeback, discard on).
I ran CrystalDiskMark (profile: real world performance) on Windows and got horrible performance, and it feels very laggy (SEQ1M Q1T1: read 2134MB/s, write 1735MB/s). This doesn't look right, right? Did I miss something? How would I debug this? (atime is off, sync is also off)
What about using ReFS? Any differences?
I didn’t see any significant performance differences going with ReFS instead of NTFS here, and it has a limit of 64K for cluster size, so that might hurt performance a small amount. I still often use ReFS for its additional features.
@@ElectronicsWizardry So if I have 3 HDDs and want to run parity on a ReFS volume, what cluster size should I choose?
Thank you. Do you have a step-by-step tutorial on how to start with Windows Storage Spaces and implement it on Windows 11? Your information was very helpful. I want to keep my PC on Windows 11 and use it for media storage for a Plex server through Windows and still have parity backup.
Thanks for the idea. That sounds like a good video idea going over how to use windows 11 as a server and some of the pros and cons of doing so.
@@ElectronicsWizardry Hi, thanks for the videos. Hope you get that Windows-11-as-a-server video out soon. And what is the effect of using tiering and an SSD cache?
I have a question: can the drives in the pool be formatted as ReFS and still make the other adjustments in the setup? In this video you kept saying NTFS, but I really like the extra benefits of ReFS, so can it be used? Love the vid and the information you gave out. Thanks.
ReFS can be used, but the 64K cluster size is the biggest available. Even so, with a 64K cluster size you get much better performance than with the default 4K cluster size.
@@ElectronicsWizardry So it sounds like I am better off foregoing the benefits of ReFS and just using NTFS. Thanks for the response.
@@ElectronicsWizardry So I followed your suggestion, and the best speed I can get is around 160MB/s, with serious swings up to 301MB/s and down to 18MB/s. I have 5x 8TB drives in parity, formatted NTFS at 1024K. Any ideas how I can fix this to get 300MB/s consistently? I have a good PC, so CPU, RAM, PSU and so on are not underpowered. Did a Win 10 scannow and no errors were found.
Thank you.
Great work, thanks very much! Your videos are so helpful.
I made my Windows Server 2022 storage space with 5x 14TB in RAID 5 (1 parity) as you recommend; LAN copy read and write speeds are 500-600MB/s, the same speed as a 4-disk RAID 0. That's wonderful!
Can I add another 5x 14TB to expand the current storage space, making these 10x 14TB into RAID 6 (2 parity), without destroying the current data?
Can anyone help me out?
I have 4x 1.6TB Intel DC S3510 SSDs.
Should I use all 4 in parity or only 3? If I used 4, the AUS would work out to 768kB, and I don't think that's an option when formatting drives.
The drives will be used in my gaming PC as extra storage for movies, random programs, and an additional backup space for my photos.
Maybe I'll add games to it, but I'm unsure due to game load times while using Storage Spaces.
Since you have 4 drives, I'd use all 4. You can improve write performance by setting the number of columns to 3. This will give you less usable space but still use all 4 drives. I've used Storage Spaces for my games drives many times in the past and it works fine.
@@ElectronicsWizardry Okay, I think I understand. I would have to use PowerShell to set columns to three? And I would have 3.2TB of storage? First time using this and trying to wrap my head around it all.
@makagio Yea, you unfortunately need PowerShell to set the number of columns.
For available space on 3-column parity, 66% of the raw capacity is data. With 6.4TB total times the 66% usable, you get about 4.2TB of usable space from 4x 1.6TB drives.
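The column count can only be set when the virtual disk is created; a hedged sketch with placeholder names:
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "SSDParity" -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 256KB -ProvisioningType Fixed -UseMaximumSize
# 2 data columns x 256KB = 512KB, so format the volume with a 512KB allocation unit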
@@ElectronicsWizardry I think by default Windows will automatically only allocate 3.2TB of storage in parity. I might not need to set the columns in PowerShell and can just reformat the drive at 512K.
@@ElectronicsWizardry I do have another 6 of these Intel 1.6TB DC S3510 SSDs.
So I can put another one in to have 5 drives (4 data + 1 parity).
My only issue is I have a 1TB M.2 in my motherboard's second slot taking up SATA ports 5 and 6, used for games only.
The M.2 is an Adata 8200 Pro.
Should I remove it to put in another 1.6TB SSD? You said gaming performance isn't really affected by Windows Storage Spaces? Or should I leave the 4 drives and use PowerShell to set columns to 3? Also, in setting the columns to 3, would I be losing the storage of the 4th drive? So in theory a 3.2TB total pool instead of 4.4TB?
All of the parity in the world doesn't do much good if you can't then replace a failed drive and recover. I can't find any instructions on the internet about the simple process of what to do when a drive fails in Windows Storage Spaces. I used to work in a data center; when a drive failed, you replaced it and it rebuilt. That was it, and that was twenty years ago. Sure, those servers had RAID controller cards in them, but how has this functionality not trickled down to software RAID yet?
Yea, you can replace a drive in all of these software RAID solutions, but the process is typically different, and with more flexible solutions like Storage Spaces it's not as simple as with hardware RAID cards. I'll make a future video about replacing drives in Storage Spaces, as that doesn't seem to be covered well, as you have pointed out.
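Until that video exists, here is a rough sketch of the usual replacement flow in PowerShell, hedged because the exact steps vary by failure mode (the pool name is a placeholder):
$bad = Get-PhysicalDisk | Where-Object OperationalStatus -ne 'OK'
Set-PhysicalDisk -InputObject $bad -Usage Retired
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Get-VirtualDisk | Repair-VirtualDisk
# Once the repair finishes: Remove-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $bad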
Thanks for the trick... But now I clearly understand the reason you blink your eyes so often, due to the fact mentioned at 0:02... Take good care of your health, mate.
This did not work for me. Still stuck with 30MB/s write speeds.
I've clearly been out of the loop. "Working with storage spaces for about 10 years"? I thought it was a new feature in the Win 11/Server 2022 generation of Windows; I've never come across it before, lol. Although to be fair, 90% of my server experience is Dell servers with PERC cards, so I haven't had cause to look at alternative storage methods.
Storage Spaces has been around since Windows 8 and Server 2012. Kinda surprising it's over 12 years old now. I remember some articles and videos about it when it came out, but it never seemed to get that popular.
I can't for the life of me get parity write performance over around 60MB/s. Using 3x WD Red Plus 14TB that can each write at around 200MB/s. 3 columns; tried 32KB interleave with 64KB NTFS AUS, 256KB interleave with 512KB AUS, and other AUS = N*I values, always with the same result. I've confirmed that both the interleave and AUS values are correct after creating the space and formatting.
I9-13900KS, 64GB RAM, so system performance should not be an issue either.
Win 11 Pro.
I'm at a loss here.
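For anyone else wanting to double-check the same values, the quickest way I know to verify the layout and the allocation unit (the drive letter is a placeholder):
Get-VirtualDisk | Format-List FriendlyName, NumberOfColumns, Interleave, PhysicalDiskRedundancy
fsutil fsinfo ntfsinfo D:   # "Bytes Per Cluster" is the allocation unit size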
I am sooo sooo grateful to you for sharing all these important discoveries!!! If I understand correctly: if I have 3 disks and 1 is redundancy, then it's 2 disks x 256K = I must set 512K.
If I have 5 disks and 1 is redundancy, then it's 4 disks x 256K = I must set 1024K, correct?
But if I have 6 disks and 1 is redundancy, then what is the correct value? Because 256 x 5 is 1280 and there is no option for 1280. Please, can you help me with this?
Again MUCH MUCH APPRECIATED ALL OF THAT ❤❤❤
Your math is correct. With 6 drives (so 5 data and 1 parity) there is no way to get a good layout. I'd create a storage pool with the 6 drives and a number of columns set to 5. This won't use all the space optimally, but it will give better performance and still use all 6 disks; the 5-wide stripes are spread across all 6 drives.
@@ElectronicsWizardry Thank you so much. I had a similar question and I was looking for the answer this whole evening; finally, you answered it. I have a 4-drive setup, so I will just use 3 columns.
Really liked the content; however, keep in mind that NTFS caps a volume at roughly 2^32 clusters, so the cluster size determines the maximum volume size (16TB at the default 4K, larger with bigger clusters). Before selecting your cluster size, make sure you check the max NTFS limit; once it's set, it can't be changed unless you reformat.
Hi, I'm sorry for my English; I hope you understand everything.
Great video :)
I'm using Win 11 and I've tested Storage Spaces for the first time.
The trick with 512K clusters improved my speeds a lot.
Since I'm new to Storage Spaces, I have a question:
For testing I created a parity storage space with 3 USB HDDs, formatted it with 512K clusters, and it works great.
(The HDDs are in a 5x 3.5" JBOD case connected via USB-C.)
In Disk Management I could see only one huge drive. Nice :)
Then I tested whether I could use the storage space on another system, so I asked my brother and he lent me his notebook with Win 10.
I could see all three HDDs in Disk Management, and that they are set to Storage Spaces. In the Control Panel I could only create a new storage space; I couldn't see the existing one, and I don't know how to access an already-created storage space on another Windows install. So, if I'm not wrong, if I ever have to reinstall Windows on my system, I would no longer be able to access the data in the storage space. Do you know how I could bind a created storage space to a drive letter on another/new system?
Storage Spaces should be able to move between systems. I'd guess the difference is due to one system being Windows 11 and one being Windows 10; the Windows 11 system is likely using a newer version of Storage Spaces that the Windows 10 system can't use.
@@ElectronicsWizardry Hello again,
I was playing around and installed Win11ToGo on a USB SSD, and there I can see the storage pool in the GUI.
So I looked in PowerShell and could see (Get-StoragePool) that it was set to read-only.
I was able to fix that (Get-StoragePool -IsPrimordial $False | Set-StoragePool -IsReadOnly $false) but still wasn't able to access the pool.
So, more searching...
Then I found a solution:
I could see that it was detached by policy (Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus, DetachedReason).
The fix, to automatically attach all non-clustered virtual disks after Windows restarts, was to open a PowerShell session as an Administrator and then use the following command:
Get-VirtualDisk | Set-VirtualDisk -IsManualAttach $false
And finally I was able to see and access the virtual drive.
Since I'm not familiar with PowerShell, I hope everything I've done was right; I just copy-pasted the commands from learn.microsoft.com and it works.
Can you tell me how to calculate the maximum usable storage in a pool with parity if I'm using drives of different sizes? For example, in my test setup I was using 1x 1TB, 1x 2TB, 1x 3TB & 2x 8TB. The GUI sets a parity pool to 10.5TB, and I don't know how that was calculated. If I calculate it as (1+2+3+8+8) = 22 / 3 * 2, I get 14.66TB. I can understand that because of the huge size differences my calculation is wrong, or only works if all drives are the same size.
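No official answer here, but a rough model lands near the GUI number. The assumption (mine, not confirmed) is that each 3-column parity stripe must hit 3 distinct disks, so no single disk can hold more than 1/3 of the raw space consumed:
$caps = 1.0, 2.0, 3.0, 8.0, 8.0      # drive sizes in TB
$columns = 3
$raw = ($caps | Measure-Object -Sum).Sum
for ($i = 0; $i -lt 100; $i++) {
    # shrink raw usage until no disk has to hold more than 1/$columns of it
    $raw = ($caps | ForEach-Object { [Math]::Min($_, $raw / $columns) } | Measure-Object -Sum).Sum
}
$raw * ($columns - 1) / $columns               # ~12 TB of data
$raw * ($columns - 1) / $columns * 1e12 / 1TB  # ~10.9 TB in binary units
Under this model the two 8TB drives can't be fully used, giving ~12TB of data, or ~10.9 binary TB, which is in the ballpark of the 10.5TB the GUI shows once metadata overhead is subtracted.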
The only annoying thing about the storage space was that the SSD can't reach its maximum speed when transferring large files.
Amazing video. I have had the most love-hate relationship with Storage Spaces for the last 2 years, and this resolved all of my performance issues. I do have a new issue though. I was doing testing, and I created the virtual disk in Windows Server to circumvent the Windows 10 63TB limit on Storage Spaces. In Windows Server, the performance on 10x 18TB Seagate Exos is amazing (400-900MB/s). The issue is that when I switch OSes to the latest build of Windows 10 or even Windows 11, I am able to see the storage space and use it, but I get middling performance (60-100MB/s). Switching back over to Windows Server 2022 brings back the great performance. Does anyone have any ideas what is going on? In this video he is showing great performance on Windows 10, so I'm not sure why I am having issues, unless Microsoft is limiting the speed based on circumventing the 63TB limit?
I haven't seen this exact issue, but it seems interesting to take a look at.
Can you explain what you did to get over the 63TB limit? Is this all with NTFS?
Thanks so much for getting back to me on this. Getting over the 63TB limit in Windows 10 was pretty easy, but I first made the storage pool itself inside Windows 10 so that it is recognized there (I'm not sure if pools created in Windows Server use a newer version, but they do not show up in Windows 10, which only shows the protected storage space partitions in Disk Management). After the pool was created in Windows 10, I booted into Windows Server 2022 and used a version of the PowerShell script you used to create a 130TB virtual disk. When I reboot into Windows 10, the 130TB storage space shows up perfectly, circumventing the 63TB limit. The only issue is that the performance is terrible compared to Windows Server (after more testing, Windows 10 is between 50-100MB/s where Windows Server is anywhere between 600-1200MB/s). I would love to dive into this with you if you're interested, just let me know. EDIT: Also, yes, all NTFS, and using your math: I have 10 drives with 2 parity, so 8 x 256KB, and I am using a 2048KB cluster size. @@ElectronicsWizardry
@@mattgoldfein1423 Thanks for the additional information. I will take a look at large disks in Storage Spaces, see if I can make a video on this, and see if I can reproduce and solve the issue you're running into.
Hey, I just wanted to update you on some more testing I have done over the last few months. I just tried the latest version of Windows 11. In the latest version they seem to have completely overhauled the GUI for Storage Spaces; they also allow you to create volumes that are dual parity and over 63TB, even from the GUI, from my testing. The only thing that sucks is that the speed issues are still there on my 130TB dual parity storage pool. Not sure if there is something I am missing here, but like I said a few months ago, Windows Server 2022 is still doing great, with speeds over 500MB/s-1GB/s.
Thanks a lot for the trick!
However, does anyone know how this works with ReFS and storage tiering?
I experienced much better results with ReFS instead of NTFS, especially in combination with storage tiering. The ReFS format dialog only offers me 4096 or 64K block sizes.
For my config with 5 parity columns in the HDD tier, I guess it should be 4 data disks, and 64K x 4 should be a 256K block size.
Yea, ReFS is much more limited in cluster size. I haven't tested every config, but I think ReFS with a 64K cluster size still performs decently with parity.
ReFS handles tiering in real time, unlike NTFS, which has a task to re-optimize the tiers every night; ReFS generally does much better with tiering than NTFS.
@@ElectronicsWizardry Thanks for that answer. So is it fair to say that NTFS seems to be the choice for non-tiered setups if the cluster size is set correctly, and otherwise ReFS is the way to go, with tiering?
@TheFpdragon Yea, with tiering I'd go ReFS if possible. I'd also go ReFS if you want features like checksums, or if your programs can use ReFS features. Otherwise, NTFS has the larger cluster size support.
@@ElectronicsWizardry I guess ReFS in theory does not need larger cluster sizes because it has 128-bit inode addressing? Not sure about that correlation, but I guess that was the thinking behind it... Seems that nobody has thought of the advantages that larger clusters could give with parity, as you have found? Wild guessing while looking at you, Microsoft...
My laptop fixes slow SD cards, ones which couldn't be fixed by other computers. I just pop them into the card reader, the card jumps to a higher speed, and it keeps its new speed. Even tiny bits copy fast, such as DOS or Win 3.1. Previously it didn't work.
Good video. Thank you.
It took Microsoft only 10 years to go from "shit is not working at all" to "it's just garbage". Well done, Microcrap. Now we only have to wait another 12 years to get a useful RAID 5, maybe RAID 6, software on Windows.
ReFS, on the other hand, just sucks.
Storage Spaces is such a mixed bag... I first tried it a year or so ago with a quad NVMe AIC running a basic stripe on PCIe 3.0 drives, and was amazed I got over 12GB/s sequential speeds... Then I tried a striped mirror and was bamboozled by the performance degradation... Then I tried parity, and I might as well have used HDDs. This kind of represents the industry as a whole:
- open source/Linux goes and does it first, does it badly for a while, then does it well, but puts it behind a stupidly complex learning curve so that only infra/sysadmins will ever care to use it at home (and even professionals will stay away from it without official certifications and/or a maintenance contract on speed dial)
- then along comes Microsoft and tries to ease it in for consumers and SoHo users with a GUI, but botches it up so badly you pretty much have to shell into everything or go read their (well-crafted, yet STUPIDLY contextual) online docs
- finally comes either
...Apple, which perfects what Microsoft and Linux did, yet with only one or two key features visible to users, and makes sure that any interop that may exist is eliminated so people only use it with iThings
...or a cloud provider, which does perfect the original goal of versatility and performance, combines it with MS's or Apple's ease of use via a nice interface or an actually straightforward CLI/API... and then does pretty much the same as Apple and closes it all down to their infra. Maybe, if they're a cool company, they eventually FOSS it 6 years later, and 2 years after that we get a spill of brilliance for us common mortals. You know, like ZFS/TrueNAS. Or QEMU/KVM/Proxmox.
Imagine how much you'd accomplish if, instead of spending your time on this, you went to speech-language therapy.