
2019 - Parity Performance is terrible - Code issue?


I have an issue with Server 2019 with both RAID 5 and Storage Spaces parity.

tl;dr version - under Hyper-V I've shown that 2019 write performance is a fraction of that of previous server versions.

I have a test server with an Intel Core i5-8500, 32 GB RAM, a 500GB M.2 drive for the OS, and 4x 4TB drives.

I pulled these drives from my old test server, which was an HP MicroServer N54L running 2012 R2. On that old server, RAID 5 performance was not great, but OK.

On my new server, I installed 2019 and set up Storage Spaces with parity. The performance was terrible. So I tried RAID 5, and the performance was still terrible, although slightly better.

After lots of troubleshooting with no luck, I then tested different scenarios under Hyper-V, with some really interesting results. I believe testing under Hyper-V rules out a number of possible hardware and driver issues as the cause.

Check out the results.

Initial tests on Server 2019, bare-metal OS install:

1. Single drive, no RAID – 110MB/s write
2. Windows RAID 0 4x drive stripe – 420MB/s write
3. Windows Storage Spaces 4x Drive Mirror – 400MB/s write

The above three results would indicate that there's no driver issue causing a bottleneck.
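For reference, the pool and the mirror space in test 3 were created along these lines (a rough sketch; the pool/disk names are placeholders, and the RAID 0 stripe in test 2 was an ordinary Disk Management striped volume):

# Pool the four data drives that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "vPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# 4-drive, two-way mirror space for test 3
New-VirtualDisk -StoragePoolFriendlyName "vPool" -FriendlyName "vMirror" -ResiliencySettingName "Mirror" -NumberOfDataCopies 2 -NumberOfColumns 2 -UseMaximumSize -ProvisioningType Fixed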

Parity Testing:

4. Storage Spaces 4x drive with single parity – 13MB/s write
5. Windows RAID 5 4x drive – 23MB/s write

Parity is rubbish, right? Well, only on Windows, it seems. I decided to run a Linux VM and map the drives to it as physical (pass-through) drives to rule out a Windows issue, and was blown away by the performance. Remember, this is running under a VM but mapped to the same physical drives (rough mapping commands below the result).

6. Running Linux as a Hyper-V guest, Linux RAID 5 – 230MB/s write
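For anyone wanting to reproduce the pass-through mapping, it can be done roughly like this from the host (the VM name and disk numbers here are just examples, not necessarily what I used):

# Physical disks must be offline on the host before they can be passed through to a VM
1..4 | ForEach-Object { Set-Disk -Number $_ -IsOffline $true }

# Attach each physical disk to the guest's SCSI controller as a pass-through disk
1..4 | ForEach-Object { Add-VMHardDiskDrive -VMName "LinuxTest" -ControllerType SCSI -DiskNumber $_ }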

At this stage, I decided to test 2012, 2016, and 2019 under Hyper-V. Each was configured as a Gen 2 machine with 8GB RAM and a 127GB OS drive on the M.2 SSD.
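Each test VM was created with something equivalent to this (a sketch; the name and path are examples):

# Gen 2 VM, 8GB RAM, 127GB OS VHDX on the M.2 SSD
New-VM -Name "SRV2019" -Generation 2 -MemoryStartupBytes 8GB -NewVHDPath "D:\VMs\SRV2019\os.vhdx" -NewVHDSizeBytes 127GB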

One VM at a time, I'd map the 4x 4TB drives to the VM as physical (pass-through) disks, set up the array, and run a quick performance test by copying a single large 50GB file. I'd measure the speed after a couple of minutes to make sure any cache was full.
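For the sustained figure I just watched the transfer rate a couple of minutes in, but the same check can be scripted roughly like this (paths are examples):

# Copy one large file and work out the average throughput in MB/s
$src = "C:\test\bigfile.bin"   # the ~50GB test file (example path)
$dst = "E:\bigfile.bin"        # destination on the parity/RAID volume (example path)

$seconds = (Measure-Command { Copy-Item $src $dst }).TotalSeconds
"{0:N0} MB/s average write" -f (((Get-Item $src).Length / 1MB) / $seconds)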

Here are the results of those tests:

RAID 5:
2012 - 100MB/s write, 150MB/s read
2016 - 100MB/s write, 150MB/s read
2019 - 25MB/s write, 150MB/s read

Storage Spaces with parity:
2016 Storage Spaces with parity - 40MB/s write, around 300MB/s read
2019 Storage Spaces with parity - 13MB/s write (I forgot to test read!)

The vDisk was configured as follows:

New-VirtualDisk -StoragePoolFriendlyName "vPool" -FriendlyName "vDISK" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName "Parity" -PhysicalDiskRedundancy 1 -NumberOfColumns 4

I tried playing with the number of columns, cluster size, interleave, etc., but nothing made much difference.
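The sort of variations I tried looked like this (a sketch; the interleave and allocation unit sizes are just examples of the combinations I cycled through):

# Example: parity space with an explicit interleave...
New-VirtualDisk -StoragePoolFriendlyName "vPool" -FriendlyName "vDISK" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName "Parity" -PhysicalDiskRedundancy 1 -NumberOfColumns 4 -Interleave 64KB

# ...then initialise and format with a matching NTFS allocation unit size (64KB here)
Get-VirtualDisk -FriendlyName "vDISK" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536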

I can't see how this can be anything other than a 2019-specific issue. Any thoughts on where to start to better understand what the cause may be?
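If anyone wants to dig in, the first things I can think of checking are the parity disk's effective settings and the per-disk write latency while a copy is running, e.g.:

# Dump the parity virtual disk's effective settings (columns, interleave, write cache size, etc.)
Get-VirtualDisk -FriendlyName "vDISK" | Format-List *

# Sample per-disk write latency every 2 seconds while a large copy is in progress
Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Write" -SampleInterval 2 -MaxSamples 30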


