Why does storage performance change at various queue depths?
- by Mxx
I'm in the market for a storage upgrade for our servers.
I'm looking at benchmarks of various PCIe SSD devices, and in the comparisons I see that IOPS change with queue depth.
How can that be, and why is it happening?
The way I understand things is: I have a device with a theoretical maximum of 100k IOPS. If my workload consistently produces 100,001 IOPS, I'll have a queue depth of 1, am I correct?
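To spell out my mental model as a toy calculation (the numbers are hypothetical, purely to illustrate my reasoning):

```python
# Toy illustration of my mental model, not a real benchmark.
DEVICE_MAX_IOPS = 100_000    # hypothetical device ceiling
WORKLOAD_IOPS = 100_001      # requests my workload issues per second

# Requests left unserviced each second:
surplus_per_second = WORKLOAD_IOPS - DEVICE_MAX_IOPS
print(f"surplus: {surplus_per_second} request/s")  # -> 1

# My assumption: that one leftover request is what shows up as a
# "queue depth of 1", i.e. roughly one request always waiting.
print(f"expected queue depth (my assumption): {surplus_per_second}")
```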
However, from what I see in benchmarks, some devices run slower at low queue depths, then speed up at depths of 4-64, and then slow down again at even larger depths.
Isn't queue depth a property of the OS (or perhaps the storage controller)? So why would it affect IOPS?
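For reference, the benchmarks I'm looking at seem to control queue depth from the application side, by keeping several requests in flight at once. Here's a minimal sketch of what I understand that to mean (the test-file path and sizes are placeholders I made up, and this ignores direct I/O, page caching, alignment, and everything else a real benchmark handles):

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "/tmp/testfile"   # placeholder: any file well over BLOCK bytes
BLOCK = 4096             # 4 KiB random reads
IO_COUNT = 10_000        # total reads per run
FILE_SIZE = os.path.getsize(PATH)

def random_read(fd):
    # pread ignores the shared file offset, so threads can share fd.
    offset = random.randrange(0, FILE_SIZE - BLOCK)
    os.pread(fd, BLOCK, offset)

fd = os.open(PATH, os.O_RDONLY)
for depth in (1, 4, 16, 64):   # "queue depth" = reads kept in flight
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=depth) as pool:
        for _ in range(IO_COUNT):
            pool.submit(random_read, fd)
    elapsed = time.perf_counter() - start
    print(f"depth {depth:3d}: ~{IO_COUNT / elapsed:,.0f} IOPS")
os.close(fd)
```

If that's roughly what the benchmark tools are doing, then the depth is driven by the workload rather than the OS, which only deepens my confusion about why the device itself responds to it.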