File read performance degrades as the number of files increases

Posted by bfallik-bamboom on Server Fault, 2012-08-29

We're observing poor file read IO results that we'd like to understand better. We can use fio to write 100 files with a sustained aggregate throughput of ~700MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55MB/s. The drop seems related to the number of files, since read and write throughput are comparable for a single file and then diverge roughly in proportion to the number of files as we increase it.
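
For reference, a fio job along these lines reproduces the read side of the test (the directory, file size, and IO engine are illustrative, not our exact settings):

    [global]
    ; plain synchronous reads through the page cache (so readahead applies)
    ioengine=psync
    ; sequential reads; change to rw=write for the write run
    rw=read
    ; 1M request size
    bs=1M
    ; size of each test file
    size=1G
    ; hypothetical test directory on the RAID 6 volume
    directory=/data/fio
    ; 100 jobs, each reading its own file
    numjobs=100
    ; report aggregate throughput across all jobs
    group_reporting

    [multifile-read]

Dropping numjobs to 1 gives the single-file case where read and write throughput are comparable.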

The test server has 24 CPU cores, 48GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array of 12 disks behind a Dell H800 controller. The array is partitioned and formatted with ext4 using the default settings.

Increasing the readahead (using blockdev) improves read throughput significantly, but it still doesn't match the write speed. For instance, raising the readahead from 128KB to 1M improved read throughput to ~145MB/s.
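
For the record, the change amounts to the following (the device name is illustrative; blockdev expresses readahead in 512-byte sectors, so 256 sectors = 128KB and 2048 sectors = 1M):

    # show the current readahead in 512-byte sectors
    blockdev --getra /dev/sdX
    # raise it from 128KB (256 sectors) to 1M (2048 sectors)
    blockdev --setra 2048 /dev/sdX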

Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to further isolate the issue?
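
For example, we can watch per-device behavior with something like the following while fio runs, but we're not sure which numbers would pinpoint the cause:

    # extended per-device statistics in MB, refreshed every second
    iostat -xm 1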

Thanks.
