Horribly performing RAID

Posted by Philip on Server Fault, 2012-04-13.

I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs, and 22 SATA disks for the actual data, striped together as a RAID 10 using a MegaRAID SAS 9280-4i4e controller with this configuration: http://pastebin.com/2xj4401J
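
For illustration, here is a back-of-envelope sketch (Python) of how the XFS alignment parameters mentioned below follow from this geometry. The 64 KiB strip size is an assumption for the example; the real value is in the linked pastebin:

    # Stripe alignment for a 22-disk RAID 10, assuming a 64 KiB
    # per-disk strip size (the actual value is in the pastebin above).
    STRIP_KIB = 64
    DISKS = 22
    DATA_SPINDLES = DISKS // 2   # RAID 10: 11 mirrored pairs
    full_stripe_kib = STRIP_KIB * DATA_SPINDLES
    # These map directly onto mkfs.xfs's -d su=...,sw=... options.
    print(f"mkfs.xfs -d su={STRIP_KIB}k,sw={DATA_SPINDLES}")
    print(f"full stripe width: {full_stripe_kib} KiB")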

Connected to this cluster are a few other servers that use the native client and run nginx to serve the files stored on it, which are on the order of 3-10 MB each.
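
One quick way to see whether the slowness sits in the array or in the GlusterFS layer is to time reads of one of those files both on a brick directly and through the client mount. A minimal sketch, where the mount point and file name are hypothetical placeholders:

    import time

    # Hypothetical paths; substitute a real 3-10 MB file on the volume.
    # Drop the page cache first (echo 3 > /proc/sys/vm/drop_caches)
    # so the read actually hits the disks.
    PATH = "/mnt/gluster/sample.bin"
    CHUNK = 1 << 20  # read in 1 MiB chunks

    start = time.monotonic()
    total = 0
    with open(PATH, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{total} bytes in {elapsed:.3f}s "
          f"-> {total / elapsed / 1e6:.1f} MB/s")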

Right now a storage server has an outgoing bandwidth of 300 Mbit/s, and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the I/O latency skyrockets and no access to the RAID is possible for more than 10 seconds. The file system is XFS, and it has been tuned to match the RAID stripe size.
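
For reference, the busy rate above is the same utilization figure iostat reports as %util. A minimal sketch of computing it from /proc/diskstats, with sdb as a hypothetical name for the RAID volume:

    import time

    DEV = "sdb"  # hypothetical device name for the RAID volume

    def io_ticks_ms(dev):
        # The 10th stat after the device name in /proc/diskstats is
        # the cumulative time spent doing I/O, in milliseconds.
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    return int(fields[12])
        raise ValueError(f"device {dev} not found")

    INTERVAL = 1.0
    prev = io_ticks_ms(DEV)
    while True:
        time.sleep(INTERVAL)
        cur = io_ticks_ms(DEV)
        print(f"{DEV} busy: {(cur - prev) / (INTERVAL * 10):.1f}%")
        prev = cur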

Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID 10 should deliver far more throughput.
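
To put rough numbers on that claim, a sanity check assuming ordinary 7,200 rpm SATA disks at about 130 MB/s sequential each (illustrative figures, not measurements):

    # 22-disk RAID 10 = 11 data spindles for writes; reads can
    # additionally be served from both halves of each mirror.
    DATA_SPINDLES = 11
    SEQ_MB_S_PER_DISK = 130      # assumed, typical 7.2k SATA
    theoretical = DATA_SPINDLES * SEQ_MB_S_PER_DISK
    observed = 300 / 8           # 300 Mbit/s of nginx traffic in MB/s
    print(f"theoretical sequential: ~{theoretical} MB/s")
    print(f"observed: ~{observed:.1f} MB/s "
          f"({observed / theoretical:.1%} of theoretical)")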
