Hi,
Has anyone deployed tiered Storage Spaces + JBODs and can share real-world experience with the performance this solution provides?
I've read some design articles from Microsoft:
http://social.technet.microsoft.com/wiki/contents/articles/15200.storage-spaces-designing-for-performance.aspx
http://technet.microsoft.com/en-us/library/dn554250.aspx
but they do not talk about performance under various conditions.
Their configuration per JBOD is 12 SSDs and 48 large-capacity NL-SAS disks.
NL-SAS 7200 rpm disks do not perform well under random IO, and that is exactly the type of IO virtual machines tend to generate.
I wonder what the performance will be for VMs that need to access cold blocks stored on the NL-SAS tier, or when the active dataset grows well beyond the SSD tier size.
It seems to me that while VMs on the SSD tier will be very fast, VMs on the NL-SAS tier will be very slow, and they will stay slow until the tiering job moves their blocks up to the SSD tier.
Especially scary are events like boot storms and mass updates, when the active dataset grows very large compared to normal operation.
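To put a rough number on that concern, here is a back-of-envelope sketch for the reference JBOD above. The per-disk figure is my own assumption (a typical 7200 rpm NL-SAS disk handles somewhere around 75-100 small random IOPS), not something from the Microsoft articles:

```python
# Rough random-IO ceiling of the 48-disk NL-SAS cold tier in the reference JBOD.
# The per-disk IOPS figure is an assumption (typical 7200 rpm NL-SAS), not a
# number taken from the linked Microsoft articles.
nl_sas_disks_per_jbod = 48
iops_per_nl_sas_disk = 75   # conservative small random IOPS per 7200 rpm disk

raw_cold_tier_iops = nl_sas_disks_per_jbod * iops_per_nl_sas_disk
print(f"Raw cold-tier random IOPS per JBOD: ~{raw_cold_tier_iops}")  # ~3600
```

Spread across hundreds of VMs whose working set has spilled out of the SSD tier, that is only a handful of IOPS per VM, which is exactly what worries me.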
I'm considering the following deployment:
4-node SOFS cluster + 4x 60-bay JBODs
Capacity = at least 150 TB
IOPS = at least 16,000 with acceptable latency
IO type = small random, 75% write / 25% read
Number of VMs = 1000+
SSD + NL-SAS meets the capacity requirements, but I think it will definitely not deliver the required performance (see the rough math after this list).
SSD + SAS does not meet the capacity requirements; we would need to deploy multiple 4x4 SOFS clusters, which is expensive.
SAS + NL-SAS could meet the capacity requirements, but performance will not be great without an SSD tier. SSDs are also required for the write-back cache.
SSD + SAS + NL-SAS is currently not supported (only two tiers are supported in WS 2012 R2).
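For what it's worth, here is the back-of-envelope math behind my doubt about SSD + NL-SAS. The assumptions are mine (two-way mirror, ~75 random IOPS per NL-SAS disk, working set overflowing the SSD tier), so adjust for your own resiliency settings:

```python
# Spindle count needed if the 16,000 IOPS, 75% write workload lands mostly on
# the NL-SAS tier. Write penalty and per-disk IOPS are my assumptions.
required_frontend_iops = 16_000
write_ratio = 0.75
mirror_write_penalty = 2      # two-way mirror: every write lands on two disks
iops_per_nl_sas_disk = 75     # typical 7200 rpm NL-SAS small random IOPS

backend_iops = (required_frontend_iops * (1 - write_ratio)          # reads
                + required_frontend_iops * write_ratio * mirror_write_penalty)
disks_needed = backend_iops / iops_per_nl_sas_disk

print(f"Backend IOPS: {backend_iops:.0f}")          # 28000
print(f"NL-SAS disks needed: {disks_needed:.0f}")   # ~373
```

That is more spindles than 4x 60-bay JBODs can hold even before reserving slots for SSDs, which is why I don't see SSD + NL-SAS hitting the IOPS target whenever the working set misses the SSD tier.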
So if anyone can share experience and expertise on designing Storage Spaces, please do.
Thanks,
Egils