Hi!
Our environment consists of several Hyper-V hosts (Windows Server 2012) and a Scale-Out File Server cluster (WS2012) with shared SAS storage (HP P2000). All servers have dual dedicated 10GbE NICs for remote storage access via SMB 3.0.
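In case it is relevant: this is roughly how we check on a Hyper-V host that SMB Multichannel is actually using both 10GbE NICs (standard SmbShare cmdlets in WS2012; nothing here is specific to our setup):

# NICs the SMB client considers usable (speed, RSS/RDMA capability)
Get-SmbClientNetworkInterface

# While a copy to the share is running: the Multichannel connections
# that are currently open to the file server, per interface
Get-SmbMultichannelConnection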
When copying (writing) large files to shares on the SOFS cluster with CSV (\\[SOFS-DNN]\Share1\), we get poor throughput, ~200 MB/s, compared to ~800 MB/s when copying the same file directly to one storage node in the SOFS cluster (\\FileServer1\C$\ClusterStorage\Volume1\Shares\Share1).
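For what it is worth, the comparison can be reproduced from a Hyper-V host roughly like this (the test file and the SOFS-DNN name below are placeholders, not our actual names):

# Hypothetical large test file; both copies end up on the same CSV volume
$file = 'D:\Test\bigfile.vhdx'
$size = (Get-Item $file).Length

# Copy via the SOFS Client Access Point (~200 MB/s in our tests)
$t1 = Measure-Command { Copy-Item $file '\\SOFS-DNN\Share1\bigfile.vhdx' }
'{0:N0} MB/s via Client Access Point' -f ($size / 1MB / $t1.TotalSeconds)

# Copy directly to the owning node (~800 MB/s in our tests)
$t2 = Measure-Command { Copy-Item $file '\\FileServer1\C$\ClusterStorage\Volume1\Shares\Share1\bigfile.vhdx' }
'{0:N0} MB/s direct to node' -f ($size / 1MB / $t2.TotalSeconds)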
The traffic over the 10GbE network seems to work just fine. Copying a file from (reading) \\[SOFS-DNN]\Share1\ to another location gives good throughput, ~800 MB/s.
We also get the same result when performing the file transfer/copy test locally on the storage server (FileServer1): copying to the \\[SOFS-DNN]\Share1\ path (the SOFS Client Access Point) gives ~200 MB/s without involving the 10GbE storage network, and copying to the C:\ClusterStorage\Volume1\Shares\Share1 path gives ~800 MB/s (the same underlying file location).
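In case it matters for the diagnosis, this is roughly how we look at CSV ownership and access mode on the file server nodes (FailoverClusters module, run on one of the SOFS nodes; we have not changed anything from the defaults):

Import-Module FailoverClusters

# Which node currently owns (coordinates) the CSV
Get-ClusterSharedVolume

# Whether each node accesses the CSV directly or via redirected I/O
Get-ClusterSharedVolumeState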
Why is the transfer speed so much lower when copying large files to the \\[SOFS-DNN]\Share1\ path (SOFS Client Access Point)? Of course, SOFS with "SMB Share – Application" is meant for Hyper-V workloads rather than general file sharing, but as soon as the SOFS Client Access Point path is involved in the file transfer/copy operation, we get much lower throughput when writing data to that location.
We are a little worried about this behavior and would appreciate any tips or an explanation before we go into production with our private cloud environment.
Rgds,
Gustav