
2012R2 Storage Spaces - Enclosure redundancy


Hi,

We are currently testing redundancy with Storage Spaces and have run into a big problem.

Here is a description of our setup (I'll try to be as precise as possible):

Two HP DL360 Gen8 servers, each with 2x 10 GbE Ethernet cards and 2 LSI SAS HBAs (4 external SAS ports), each connected to 3 DataON JBOD enclosures via dual SAS paths (2 SAS cables per server going to 2 separate controllers on each enclosure).

The two 10 GbE Ethernet cards are set up on separate networks (10.0.0.0/16 and 192.168.0.0/16).

The 10.0.0.0/16 network is part of the Windows domain and hosts the DNS servers.

The 192.168.0.0/16 network is independent and only accessible by the above servers (no DNS defined, no default gateway).
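For reference, the per-node network configuration boils down to something like this (the interface aliases and host addresses below are placeholders, not our exact values):

# Domain-facing network, with DNS
New-NetIPAddress -InterfaceAlias "10GbE-Domain" -IPAddress 10.0.1.1 -PrefixLength 16
Set-DnsClientServerAddress -InterfaceAlias "10GbE-Domain" -ServerAddresses 10.0.0.10

# Isolated cluster network: no default gateway, no DNS, no DNS registration
New-NetIPAddress -InterfaceAlias "10GbE-Cluster" -IPAddress 192.168.1.1 -PrefixLength 16
Set-DnsClient -InterfaceAlias "10GbE-Cluster" -RegisterThisConnectionsAddress $false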

I installed failover clustering and built a new cluster with those two servers, making sure to untick "add available storage" in the wizard.
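As a sketch, the PowerShell equivalent would be (cluster name, node names and address are examples):

# -NoStorage is the equivalent of unticking "add available storage" in the wizard
Test-Cluster -Node NODE1, NODE2
New-Cluster -Name CLU01 -Node NODE1, NODE2 -StaticAddress 10.0.0.50 -NoStorage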

The cluster built successfully, so I proceeded to build the storage pool.

On one of those servers, I created a storage pool using all the disks from all 3 DataON enclosures (32x dual-port SAS HDDs and 12x dual-port SAS SSDs).

On top of this storage pool, I created two virtual hard disks:

- One small 1GB virtual hard disk for the quorum (non-tiered, enclosure awareness enabled, mirrored)

- One large 15TB virtual hard disk for the data (tiered storage, enclosure awareness, write-back cache, mirrored)

For reference, here are the PowerShell commands I used to create the storage pool and the virtual disks:

$pooldisks = Get-PhysicalDisk | ? { $_.CanPool -eq $true }

New-StoragePool -StorageSubSystemFriendlyName *Spaces* -FriendlyName SP1 -PhysicalDisks $pooldisks

$tier_ssd = New-StorageTier -StoragePoolFriendlyName SP1 -FriendlyName SSD_TIER -MediaType SSD

$tier_hdd = New-StorageTier -StoragePoolFriendlyName SP1 -FriendlyName HDD_TIER -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName 'SP1' -FriendlyName 'VD1' -StorageTiers @($tier_ssd,$tier_hdd) -StorageTierSizes @(2212GB,13108GB) -ResiliencySettingName Mirror -NumberOfColumns 4 -WriteCacheSize 10GB -IsEnclosureAware $true

New-VirtualDisk -StoragePoolFriendlyName 'SP1' -FriendlyName 'Quorum' -Size 1GB -ResiliencySettingName Mirror -IsEnclosureAware $true
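(To double-check that the tiers and enclosure awareness took effect, something like this can be run afterwards; I'm only sketching the commands, not my exact output:)

Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns, IsEnclosureAware
Get-StorageEnclosure | Select-Object FriendlyName, HealthStatus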

 

So far so good. I then added the storage pool to the cluster using Failover Cluster Manager, then added the two disks created above (after first creating a volume on each).
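The disk part of that step has a simple PowerShell equivalent (the pool itself I added through the GUI):

# Pick up both virtual disks and add them as cluster disks
Get-ClusterAvailableDisk | Add-ClusterDisk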

I then added the larger disk as a Cluster Shared Volume.
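In PowerShell that would be roughly (the resource name is whatever the cluster assigned; "Cluster Virtual Disk (VD1)" is an assumption):

Add-ClusterSharedVolume -Name "Cluster Virtual Disk (VD1)"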

I added the second (smaller) disk as the disk witness for the cluster quorum.
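Equivalent sketch (again, the resource name is an assumption based on how the cluster names the disk):

Set-ClusterQuorum -DiskWitness "Cluster Virtual Disk (Quorum)"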

In Failover Cluster Manager, I added the Scale-Out File Server role (using 999SAN01P001 as the distributed server name) and created a highly available share on the Cluster Shared Volume (now appearing under C:\ClusterStorage\Volume1\Shares\Hyper-V).
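Roughly the same thing in PowerShell (the share permissions below are placeholders, not our actual ACLs):

Add-ClusterScaleOutFileServerRole -Name 999SAN01P001
New-Item -Path C:\ClusterStorage\Volume1\Shares\Hyper-V -ItemType Directory
New-SmbShare -Name Hyper-V -Path C:\ClusterStorage\Volume1\Shares\Hyper-V -FullAccess "DOMAIN\Hyper-V-Hosts"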

I can now access the share via \\999SAN01P001\Hyper-V without any problem and even run virtual machines from it.

Here is the problem:

If I eject a couple of disks from one of the enclosures, no problem: everything stays available.

If, however, I simulate an enclosure failure (by pulling the power), the Cluster Shared Volume becomes inaccessible!

The “Cluster Virtual Disk” status in the failover cluster manager shows as “NO ACCESS”.

The virtual disk in Server Manager (under File and Storage Services), although it shows as "Degraded", is still accessible (not offline).
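For anyone trying to reproduce this, here is roughly how I compare the node-local view with the cluster view after pulling the power (commands only, output omitted):

# Node-local Storage Spaces view: degraded but still attached
Get-VirtualDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus

# Cluster view: the disk resource and the CSV go to a failed / no-access state
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" } | Select-Object Name, State
Get-ClusterSharedVolume | Select-Object Name, State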

What am I doing wrong here?

With three enclosures, the system should be able to sustain the failure of a complete enclosure, and at the storage layer it does (my virtual disks in Server Manager show online, but degraded), yet my cluster can no longer access it (the Cluster Shared Volume shows "No Access").

Thank you,

Stephane

