Why does storage's performance change at various queue depths?

Posted by Mxx on Server Fault
Published on 2012-12-19T23:16:17Z

I'm in the market for a storage upgrade for our servers. Looking at benchmarks of various PCIe SSD devices, I see that IOPS change at various queue depths. How can that be, and why is it happening?

The way I understand things: say I have a device with a theoretical maximum of 100k IOPS. If my workload consistently produces 100,001 IOPS, I'll have a queue depth of 1; am I correct? However, from what I see in benchmarks, some devices run slower at low queue depths, speed up at depths of 4-64, and then slow down again at even larger depths. Isn't queue depth a property of the OS (or perhaps the storage controller), so why would it affect IOPS?
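One common way to reason about the shape of those benchmark curves is Little's Law: throughput ≈ in-flight I/Os ÷ average latency. An SSD has internal parallelism (multiple flash channels), so it only reaches peak IOPS when enough requests are outstanding to keep every channel busy. Below is a minimal toy model of that idea; the channel count (8) and per-I/O service time (100 µs) are made-up illustrative numbers, not figures for any real device:

```python
def modeled_iops(queue_depth, channels=8, service_time=100e-6):
    """Toy estimate of IOPS for a device with internal parallelism.

    Assumption (hypothetical numbers): `channels` flash channels, each
    completing one I/O in `service_time` seconds. By Little's Law,
    throughput = concurrently serviced I/Os / average service time.
    """
    # Only up to `channels` I/Os are serviced at once; the rest just wait,
    # adding latency without adding throughput.
    in_service = min(queue_depth, channels)
    return in_service / service_time

# IOPS climb with queue depth, then plateau once the channels saturate.
for qd in (1, 4, 8, 32, 128):
    print(f"QD={qd:3d}  ~{modeled_iops(qd):,.0f} IOPS")
```

In this simplified model IOPS scale linearly until the queue depth matches the device's internal parallelism, then flatten; the real-world slowdown at very large depths (controller queue-management overhead, host-side scheduling costs) is not captured here.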

© Server Fault or respective owner
