perfmon reporting higher IOPS than possible?
- by BlueToast
We created a monitoring report for IOPS using the Disk Reads/sec and Disk Writes/sec performance counters on four servers (physical boxes, no virtualization), each with 4x 146 GB 15k SAS drives in RAID 10. The counters were sampled and recorded every 1 second, and the logs ran for 24 hours before the reports were stopped.
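For reference, summarizing a perfmon log like this boils down to relogging the .blg to CSV and taking the max/mean of each counter column. A minimal sketch follows; the file name and counter column names are placeholders, not our actual capture:

```python
# Minimal sketch: summarize a perfmon log that was relogged to CSV,
# e.g. with: relog server1.blg -f CSV -o server1_disk.csv
# (file name and counter column names below are placeholders)
import pandas as pd

df = pd.read_csv("server1_disk.csv")

reads = pd.to_numeric(df[r"\\SERVER1\PhysicalDisk(_Total)\Disk Reads/sec"], errors="coerce")
writes = pd.to_numeric(df[r"\\SERVER1\PhysicalDisk(_Total)\Disk Writes/sec"], errors="coerce")

print(f"reads/sec  max={reads.max():.3f}  mean={reads.mean():.3f}")
print(f"writes/sec max={writes.max():.3f}  mean={writes.mean():.3f}")
```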
These are the results we got:
Server1 - Maximum Disk Reads/sec: 4249.437, Maximum Disk Writes/sec: 4178.946
Server2 - Maximum Disk Reads/sec: 2550.140, Maximum Disk Writes/sec: 5177.821
Server3 - Maximum Disk Reads/sec: 1903.300, Maximum Disk Writes/sec: 5299.036
Server4 - Maximum Disk Reads/sec: 8453.572, Maximum Disk Writes/sec: 11584.653
The average disk reads and writes per second were generally low. For example, one server averaged about 33 writes/sec, but when monitoring in real time it would often spike into the hundreds and occasionally into the thousands.
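To show how a low 24-hour average and high 1-second maxima can coexist, here is a small sketch with purely hypothetical numbers (an otherwise idle day plus a one-minute burst):

```python
# Hypothetical 1-second samples: 24 hours idle except for a 60-second burst at 3000 writes/sec.
samples = [0] * 86_340 + [3000] * 60

average = sum(samples) / len(samples)   # ~2.1 writes/sec over the whole day
peak = max(samples)                     # 3000 writes/sec at the spike
print(f"average = {average:.1f}/sec, peak = {peak}/sec")
```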
Could someone explain why these numbers are significantly higher than the theoretical limit, assuming each drive can do 180 IOPS?
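The theoretical ceiling referenced above works out roughly as follows, as a back-of-envelope estimate assuming the usual RAID 10 behavior (any spindle can serve a read, each logical write lands on both mirror halves):

```python
# Back-of-envelope RAID 10 ceiling: 4 spindles at ~180 IOPS each.
drives = 4
iops_per_drive = 180

max_read_iops = drives * iops_per_drive        # all spindles serve reads  -> 720 IOPS
max_write_iops = drives * iops_per_drive / 2   # RAID 10 write penalty of 2 -> 360 IOPS
print(max_read_iops, max_write_iops)
```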
Additional details (RAID card):
Controller: HP Smart Array P410i
Total cache size: 1 GB
Write cache: disabled
Array accelerator cache ratio: 25% read / 75% write