Hi,
Basically I am looking for a few ideas on how to redesign our file servers.
We have multiple physical file servers and a few virtual servers, and what is replicated and what is not is quite confusing. Total storage is around 6TB, made up of home directories and shared resources - no particularly special file types. Using DFS with home directories does, however, mean that I essentially need a single point of reference (one folder target per user) to be supported by Microsoft, as per:
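Just to illustrate what I mean by a single point of reference - something along these lines with the DFSN PowerShell module (the namespace, server, and share names below are made-up examples, not our actual setup):

```powershell
# Domain-based namespace for home directories (example names only).
New-DfsnRoot -Path "\\contoso.com\home" -TargetPath "\\FS01\home" -Type DomainV2

# Each user folder gets exactly ONE folder target - no second target and no
# replicated referral - which is what keeps home directories in a supported state.
New-DfsnFolder -Path "\\contoso.com\home\jbloggs" -TargetPath "\\FS01\home$\jbloggs"
```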
What I am thinking about doing is consolidating everything onto 4 servers.
We have a single large site with a few remote sites. The remote sites have had their links upgraded to 1Gb, and we have been removing server infrastructure from those locations because they lack the environmental controls, physical space, and security to host it.
On our main site we have two separate buildings, each containing its own SAN (the two SANs are not linked to each other).
Microsoft's guides show concepts of using a DFS Failover Cluster in a main site with replication to a single server at a remote site.
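As I understand it, the replication half of that model is just a standard DFSR setup from the main-site file server to a single server at the remote site - roughly like this (all group, server, and path names here are hypothetical):

```powershell
# Replication group between the main-site file server and one standalone
# server at the remote site (example names only).
New-DfsReplicationGroup -GroupName "Main-to-Remote"
New-DfsReplicatedFolder -GroupName "Main-to-Remote" -FolderName "Shared"
Add-DfsrMember -GroupName "Main-to-Remote" -ComputerName "FS-MAIN","FS-REMOTE"
Add-DfsrConnection -GroupName "Main-to-Remote" -SourceComputerName "FS-MAIN" -DestinationComputerName "FS-REMOTE"
Set-DfsrMembership -GroupName "Main-to-Remote" -FolderName "Shared" -ComputerName "FS-MAIN" -ContentPath "E:\Shared" -PrimaryMember $true
Set-DfsrMembership -GroupName "Main-to-Remote" -FolderName "Shared" -ComputerName "FS-REMOTE" -ContentPath "E:\Shared"
```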
I could apply this model within just the one site, but I have two equally sized SANs on the main site and would like to spread the load across both. And if anything runs on the single (non-clustered) server as its primary, I am creating a single point of failure.
What I am thinking of doing is creating two 2-node DFS failover clusters (one in each building, connected to that building's SAN) - there's a rough sketch of this after the list below.
This means:
- I can load balance the primary DFS shares at the cluster level (across both SANs)
- Rapid failover can occur if needed between individual nodes within a cluster
- The single point of failure (storage) that comes with using just one DFS cluster is eliminated
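To make that a bit more concrete, this is roughly how I picture building it - all cluster names, node names, IPs, and shares below are placeholders:

```powershell
# Building 1: 2-node file server cluster on that building's SAN.
New-Cluster -Name "FSCLUS1" -Node "FS01","FS02" -StaticAddress 10.0.1.50
Add-ClusterFileServerRole -Cluster "FSCLUS1" -Name "FS-BLDG1" -Storage "Cluster Disk 1" -StaticAddress 10.0.1.51

# Building 2: second 2-node cluster on the other SAN.
New-Cluster -Name "FSCLUS2" -Node "FS03","FS04" -StaticAddress 10.0.2.50
Add-ClusterFileServerRole -Cluster "FSCLUS2" -Name "FS-BLDG2" -Storage "Cluster Disk 1" -StaticAddress 10.0.2.51

# DFS namespace folders split across the two clustered file server roles,
# so the primary shares (and the load) are divided between the two SANs
# while each share still has only a single referral target.
New-DfsnFolder -Path "\\contoso.com\files\dept-a" -TargetPath "\\FS-BLDG1\dept-a"
New-DfsnFolder -Path "\\contoso.com\files\dept-b" -TargetPath "\\FS-BLDG2\dept-b"
```

Home directories would be split the same way, with each user's folder keeping a single target on whichever cluster hosts it.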
However, I am not sure whether this is supported or recommended?