I am running Windows Hyper-V Server 2012 R2 (Server Core) on 3 machines to test Hyper-V extended replication. HV1 and HV2 both have an Intel Core i3-4130 CPU. HV1 has 7 drives in a Storage Spaces pool using Double Parity. HV2 and HV3 both have 6 drives on an LSI RAID controller set up as RAID 6.
The VM on HV1 being replicated consists of a 160 GB fixed VHDX and a 4 TB (3 TB in use) dynamic VHDX.
When replicating the VM from HV1 to HV2, read performance on HV1 seems much slower than I would expect. In Task Manager, CPU usage on both machines is very low, and on HV1 the gigabit Ethernet utilization typically sits at around 50-80 Mbps.
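If exact numbers would help, this is the kind of sampling I can run on HV1 during a replication pass (standard PhysicalDisk and Network Interface counters via Get-Counter; the instance names may need adjusting for my NIC):

    # Sample disk read and NIC send throughput every 5 seconds for one minute
    Get-Counter -Counter @(
        '\PhysicalDisk(_Total)\Disk Read Bytes/sec',
        '\Network Interface(*)\Bytes Sent/sec'
    ) -SampleInterval 5 -MaxSamples 12 |
        ForEach-Object {
            $_.CounterSamples |
                Select-Object Path, @{n='MB/s'; e={[math]::Round($_.CookedValue / 1MB, 1)}}
        }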
During replication from HV1 to HV2 I would expect the workload to be a simple sequential read of the VHDX files. Given that each HDD is rated at 100-150 MB/s (roughly 800-1,200 Mbps), even a single drive should be able to push network utilization to around 800-900 Mbps.
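To take Hyper-V Replica itself out of the equation, I was planning to time a raw sequential read of the fixed VHDX along these lines (the path is just a placeholder for my setup, and the read goes through the file cache, so it is only a rough first-pass estimate):

    # Rough sequential-read test of a VHDX on the parity space:
    # read the first 8 GB in 1 MB chunks and report MB/s.
    $path   = 'D:\VMs\TestVM\Disk0.vhdx'   # placeholder path
    $buffer = New-Object byte[] (1MB)
    $toRead = 8GB
    $stream = [System.IO.File]::OpenRead($path)
    $sw     = [System.Diagnostics.Stopwatch]::StartNew()
    $read   = 0L
    while ($read -lt $toRead) {
        $n = $stream.Read($buffer, 0, $buffer.Length)
        if ($n -eq 0) { break }
        $read += $n
    }
    $sw.Stop()
    $stream.Close()
    "{0:N0} MB read at {1:N1} MB/s" -f ($read / 1MB), ($read / 1MB / $sw.Elapsed.TotalSeconds)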
What is causing the bottleneck and such poor performance?
When extending the replication from HV2 to HV3, the same gigabit LAN runs at the expected 800-900 Mbps, so I don't believe the network itself is throttling anything. Could the bottleneck be Storage Spaces on HV1?
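In case the layout details matter, this is how I can pull the parity configuration (column count and interleave) for the pool on HV1:

    # Show how the parity virtual disk is laid out across the 7 pool disks
    Get-VirtualDisk |
        Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns,
                      Interleave, @{n='SizeGB'; e={[math]::Round($_.Size / 1GB)}}

    # And the physical disks backing the pool
    Get-StoragePool -IsPrimordial $false |
        Get-PhysicalDisk |
        Select-Object FriendlyName, MediaType, Usage,
                      @{n='SizeGB'; e={[math]::Round($_.Size / 1GB)}}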
Thanks for any insight you can provide into rectifying this performance problem.
P.S. The Storage Spaces volume on HV1 is encrypted with BitLocker, which I know adds a slight overhead. But the drives on HV2 and HV3 are also encrypted, and the extended replication still drives network utilization to near 100%, so I don't believe BitLocker is degrading the replication performance.
Theokrat