Hello.
I'm testing some storage spaces (not S2D) related scenarios and I've encountered some strange behavior that I hope someone can explain to me.
I have a pool of 6x 10GB drives, displayed as 56.9GB in size with 1.5GB allocated and no virtual disks created (the 1.5GB amounts to 6x 256MB of pool metadata).
```
FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly    Size AllocatedSize
------------ ----------------- ------------ ------------ ---------- ------- -------------
T            OK                Healthy      False        False      56.9 GB        1.5 GB
```
I created 3 storage tiers (templates) for 2-way mirrored spaces with 1, 2 and 3 columns.
```
FriendlyName  ResiliencySettingName NumberOfColumns NumberOfDataCopies PhysicalDiskRedundancy
------------  --------------------- --------------- ------------------ ----------------------
Mirror_SSD_1C Mirror                              1                  2                      1
Mirror_SSD_2C Mirror                              2                  2                      1
Mirror_SSD_3C Mirror                              3                  2                      1
```
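For reference, the templates were created roughly like this (a sketch; I'm not assuming anything beyond the parameters visible in the table above):

```powershell
# Create 2-way mirror tier templates with 1-3 columns in pool "t";
# only NumberOfColumns differs between them.
1..3 | ForEach-Object {
    New-StorageTier -StoragePoolFriendlyName t `
        -FriendlyName "Mirror_SSD_$($_)C" `
        -ResiliencySettingName Mirror `
        -NumberOfDataCopies 2 `
        -NumberOfColumns $_
}
```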
1) I create a 2-way mirrored 5GB volume using this command:
```powershell
New-Volume -StoragePoolFriendlyName t -FriendlyName test -FileSystem ReFS -DriveLetter D `
    -ResiliencySettingName Mirror -StorageTierFriendlyNames Mirror_SSD_1C `
    -StorageTierSizes 5GB -NumberOfColumns 1
```
Here comes the confusion:
- why does Get-VirtualDisk report the footprint on pool (and hence storage efficiency) as 12.5GB (40%) instead of the expected 10GB (50%) that the corresponding storage tier instance reports?
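For clarity, these are the two numbers I'm comparing (the tier instance that New-Volume creates shows up in Get-StorageTier alongside the templates; I'm assuming both objects expose a FootprintOnPool property, which is the value I'm reading):

```powershell
# Footprint as reported by the virtual disk vs. by the storage tier instance
Get-VirtualDisk -FriendlyName test |
    Select-Object FriendlyName, Size, FootprintOnPool

Get-StorageTier |
    Select-Object FriendlyName, Size, FootprintOnPool
```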
2) In the next example, I deleted the virtual disk and created a new one, this time using 3 columns:
```powershell
New-Volume -StoragePoolFriendlyName t -FriendlyName test -FileSystem ReFS -DriveLetter D `
    -ResiliencySettingName Mirror -StorageTierFriendlyNames Mirror_SSD_3C `
    -StorageTierSizes 5GB -NumberOfColumns 3
```
Looking at the output:
- again, Get-VirtualDisk and Get-StorageTier report different footprints
- this time the size is also 5.25GB instead of the 5GB I specified when creating the volume - how/why?
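The only pattern I can see for the 5.25GB: if capacity is allocated in 256MB slabs, one per column per allocation row, the size would round up to a multiple of NumberOfColumns x 256MB (this is my guess, not something I found documented):

```powershell
# Guess: size rounds up to a whole number of rows of (columns x 256MB) slabs
$slab      = 256MB
$columns   = 3
$requested = 5GB
$rows   = [math]::Ceiling($requested / ($columns * $slab))  # 7 rows
$actual = $rows * $columns * $slab / 1GB                    # 5.25 (GB)
```

With 1 column the same rule would give a multiple of 256MB, which would explain why the 5GB size in the first example came out exact.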
3) Also, according to the documentation, a 2-way mirrored space tolerates (at minimum) 1 disk failure. Is there a way to raise this to multiple disks without resorting to a 3-way mirror? Or is there no way to guarantee it, since Storage Spaces rotates the disks to which the stripes are written?
To rephrase the last question: 1 (or 2) disk fault tolerance with a 2-way (3-way) mirror makes sense for a low disk count (say 4-8), but when I have, for example, 24 or more drives, am I still only protected from 1 drive failure? I'm comparing this to traditional RAID1, where in the best case half of the drives can fail without data loss.