I have a test environment which originally had the following configuration:-
2 x HP DL380 Gen 10 Servers running Windows 2019
4 x 1.4TB SSD - Cache
8 x 1.6TB HDD - Capacity
The hosts are clustered with Storage Spaces Direct, and using VMFleet as my benchmarking tool (building 20 VMs on each host) I was able to achieve the following results on a 4K 100% read test (all data in cache):-
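For reference, the fleet and sweep were driven with the standard VMFleet v1 scripts from the DiskSpd repo, roughly like this (a sketch only; the paths, credentials and thread/queue-depth values are illustrative, not my exact parameters):

```powershell
# Build 20 VMs per node from a gold VHDX (path and credentials are placeholders)
.\create-vmfleet.ps1 -BaseVHD C:\ClusterStorage\Collect\gold.vhdx `
    -VMs 20 -AdminPass '<pass>' -ConnectUser '<domain\user>' -ConnectPass '<pass>'

.\start-vmfleet.ps1

# 4K blocks (-b, in KiB), 0% writes (-w 0) = 100% read; watch live IOPS
.\start-sweep.ps1 -b 4 -t 8 -o 32 -w 0 -d 300
.\watch-cluster.ps1
```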
We wanted to see if the IOPS could be pushed higher, so we upgraded the servers to the following:-
2 x 1.4TB NVMe - Cache
4 x 1.4TB SSD - Performance
8 x 1.6TB HDD - Capacity
Building the same VMFleet configuration, but this time specifying the NVMe devices as cache and setting the SSDs as the performance tier, the same stress test produces broadly the same IOPS as before.
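The rebuild itself follows the standard S2D pattern; in sketch form (pool/tier names, sizes and wildcards are examples, not my exact commands):

```powershell
# With three media types present, Enable-ClusterStorageSpacesDirect should
# claim the fastest devices (the NVMe) as cache automatically
Enable-ClusterStorageSpacesDirect

# Cache-bound devices report Usage = Journal; SSD/HDD remain Auto-Select
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, Usage, Size

# Confirm the tier definitions S2D created (SSD = Performance, HDD = Capacity)
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName

# Tiered volume for the fleet VMs
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName Collect `
    -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 1TB, 4TB
```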
I have destroyed and rebuilt the configuration several times but am still seeing the same results, which leaves me unsure whether I have a config issue or something else is the bottleneck.
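If anyone wants to suggest checks: one way to confirm the reads are actually being served from the new cache is the hybrid-disk performance counters on each node (I'm assuming the 'Cluster Storage Hybrid Disks' counter set here; names can vary by build, so list the set first):

```powershell
# List the available counters first, then sample cache hits vs. raw disk reads
Get-Counter -ListSet 'Cluster Storage Hybrid Disks'

Get-Counter -Counter @(
    '\Cluster Storage Hybrid Disks(*)\Cache Hit Reads/sec',
    '\Cluster Storage Hybrid Disks(*)\Disk Reads/sec'
) -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Format-Table Path, CookedValue -AutoSize }
```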
Firmware is as up to date as it can be on the physical servers (still waiting on a lot of Windows Server 2019 drivers).
Any pointers as to where I should look to improve this are gratefully received.