When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?
Posted by andrew311 on Server Fault, 2012-05-03.
In the case of multiple layers (physical drives -> md -> dm -> lvm), how do the schedulers, readahead settings, and other disk settings interact?
Imagine you have several disks (/dev/sda through /dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (the physical disks as well as /dev/md0) has its own setting for the IO scheduler (changed via /sys/block/<device>/queue/scheduler) and for readahead (changed using blockdev). When you throw in things like dm-crypt and LVM, you add even more layers, each with its own settings.
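For concreteness, the kinds of commands involved look like this (device names are just examples):

    # Show the available and current IO scheduler for a physical disk
    cat /sys/block/sda/queue/scheduler      # e.g. "noop deadline [cfq]"
    # Switch that disk to the deadline scheduler
    echo deadline > /sys/block/sda/queue/scheduler
    # Show readahead (in 512-byte sectors) for the disk and the array
    blockdev --getra /dev/sda
    blockdev --getra /dev/md0
    # Set readahead on the md device to 128 sectors
    blockdev --setra 128 /dev/md0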
For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which one is honored when I read from /dev/md0? Does the md driver attempt a 64-block read, which the physical device driver then expands into a 128-block read? Or does the RAID's readahead pass through to the underlying device, resulting in a 64-block read?
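One way I can think of to test this empirically (a rough sketch, not a definitive method): set deliberately different readahead values at each layer, start a sequential read from the top device, and watch what request sizes actually reach the physical disks.

    # Give each layer a different readahead value
    blockdev --setra 256 /dev/sda           # 256 sectors on the disk
    blockdev --setra 128 /dev/md0           # 128 sectors on the array
    # Drop the page cache so the read really hits the devices
    echo 3 > /proc/sys/vm/drop_caches
    # Sequential read from the md device in the background
    dd if=/dev/md0 of=/dev/null bs=1M count=1024 &
    # avgrq-sz (in sectors) shows the IO sizes arriving at sda
    iostat -x sda 1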
The same kind of question holds for schedulers. Do I have to worry about multiple layers of IO schedulers and how they interact, or does the scheduler on /dev/md0 effectively override the schedulers of the underlying devices?
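It's at least easy to see what each layer exposes. If I understand sysfs correctly, bio-based devices like md arrays (and dm targets such as dm-crypt and LVM volumes) typically report no elevator of their own, which a quick loop makes visible:

    # Print the scheduler for every layer; bio-based devices such as
    # md0 and dm-0 typically show "none" since they have no elevator
    for dev in sda sdb sdc sdd md0 dm-0; do
        printf '%-6s: ' "$dev"
        cat /sys/block/"$dev"/queue/scheduler 2>/dev/null || echo "no scheduler file"
    done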
In my attempts to answer this question, I've dug up some interesting data on schedulers and tools that might help figure this out.
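For example, a small script like this (my own rough sketch; it assumes the usual /sys/block/<dev>/slaves layout) walks the stack top-down and dumps each layer's readahead and scheduler:

    # Recursively descend from a top-level device through its slaves,
    # printing readahead (in sectors) and scheduler at each layer
    walk() {
        local dev=$1 indent=$2
        printf '%s%-8s ra=%-6s sched=%s\n' "$indent" "$dev" \
            "$(blockdev --getra /dev/"$dev" 2>/dev/null)" \
            "$(cat /sys/block/"$dev"/queue/scheduler 2>/dev/null)"
        local s
        for s in /sys/block/"$dev"/slaves/*; do
            [ -e "$s" ] && walk "$(basename "$s")" "  $indent"
        done
    }
    walk md0 ""        # or start from the LV, e.g. walk dm-2 ""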