Linux - real-world hardware RAID controller tuning (scsi and cciss)
Posted by ewwhite on Server Fault, 2012-03-26.
Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS.
I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks).
I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side?
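For context, the network-side changes I'm referring to are things like the following (illustrative values only, not recommendations):

sysctl -w net.core.rmem_max=16777216                # maximum receive socket buffer
sysctl -w net.core.wmem_max=16777216                # maximum send socket buffer
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"   # TCP receive buffer min/default/max
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"   # TCP send buffer min/default/max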
Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed, which has shifted the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations, and I know that the OS defaults aren't optimal. For example, the default read-ahead buffer of 128 KB seems extremely small for server-class hardware.
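For reference, this is how I check what a device is currently using before changing anything (sda is just an example name; substitute cciss\!c0d0 for Smart Array logical drives):

cat /sys/block/sda/queue/scheduler        # active scheduler shown in brackets, e.g. [cfq]
blockdev --getra /dev/sda                 # read-ahead in 512-byte sectors (256 = 128 KB default)
cat /sys/block/sda/queue/nr_requests      # request queue depth, typically 128 by default
cat /sys/block/sda/queue/read_ahead_kb    # read-ahead in KB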
The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues.
http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html
For example, these are suggested changes for an HP Smart Array RAID controller:
echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
blockdev --setra 65536 /dev/cciss/c0d0
echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb
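One thing worth noting: these echo settings don't persist across reboots. A udev rule along these lines should reapply them at boot (the filename and the exact KERNEL match are my own assumptions; dropping the same echo commands into /etc/rc.local is the simpler alternative):

# /etc/udev/rules.d/60-block-tuning.rules (hypothetical filename)
ACTION=="add|change", KERNEL=="cciss!c0d0", ATTR{queue/scheduler}="noop", ATTR{queue/nr_requests}="512", ATTR{queue/read_ahead_kb}="2048"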
What else can be reliably tuned to improve storage performance?
I'm specifically looking for sysctl and sysfs options in production scenarios.
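For instance, I wonder whether virtual-memory writeback knobs like these are worth adjusting when a battery-backed or flash-backed write cache is in play (shown only to illustrate the kind of sysctl I mean, not as recommendations):

sysctl -w vm.dirty_background_ratio=5     # start background writeback earlier
sysctl -w vm.dirty_ratio=10               # cap dirty pages before writers block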