RAIDs with a lot of spindles - how to safely put the "wasted" space to use
Posted by kubanczyk on Server Fault, 2013-10-23
Tags: storage | best-practices
I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to sustain peak I/O performance, yet tons of unused disk space.
I guess it's a universal issue, since the smallest drives vendors offer are now 300 GB, yet random I/O performance hasn't really grown since the days when the smallest drives were 36 GB.
One example is a database that occupies 300 GB and needs 3200 random IOPS, so it gets 16 disks (4800 GB minus 300 GB leaves 4.5 TB of wasted space).
Another common example is the redo logs of a response-time-sensitive OLTP database: the redo logs get their own 300 GB mirror but occupy only 30 GB, so 270 GB is wasted.
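For concreteness, here is the sizing arithmetic behind the first example, sketched in shell (the ~200 IOPS per spindle figure is an assumption; the numbers above only give the totals):

    # Spindle count is driven by IOPS, not capacity - hence the waste.
    iops_needed=3200; iops_per_disk=200   # ~200 IOPS assumed per 15k spindle
    disk_gb=300; data_gb=300
    spindles=$(( (iops_needed + iops_per_disk - 1) / iops_per_disk ))   # -> 16
    wasted_gb=$(( spindles * disk_gb - data_gb ))                       # -> 4500
    echo "$spindles spindles, $wasted_gb GB unused"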
What I would like to see is a systematic approach for both Linux and Windows environments. How do you set up the space so the sysadmin team is reminded of the risk of hindering the performance of the main db/app? Or, even better, is protected from that risk? The typical situation I have in mind is: "Oh, I have this very large zip file, where do I uncompress it? Hmm, let's look at df -h and we'll figure something out in no time..." The emphasis is not on strict security (sysadmins are trusted to act in good faith) but on the overall simplicity of the approach.
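To illustrate the "reminder" end of the spectrum, a minimal sketch where the naming itself carries the warning (the volume group vg_db and all other names here are hypothetical):

    # Carve the leftover space into a loudly named LV and mount point,
    # so anyone running "df -h" sees the warning. All names hypothetical.
    lvcreate -n scratch_LOWPRIO -l 100%FREE vg_db
    mkfs.ext4 /dev/vg_db/scratch_LOWPRIO
    mkdir -p /srv/scratch-SHARES-DB-SPINDLES
    mount /dev/vg_db/scratch_LOWPRIO /srv/scratch-SHARES-DB-SPINDLES
    echo "This space shares spindles with the production DB;" \
         "bulk I/O here hurts its latency." > /srv/scratch-SHARES-DB-SPINDLES/README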
For Linux, it would be great to have a filesystem (or mount) whose I/O rate is capped at a very low level - is this possible?
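The closest mechanism I know of is not a filesystem feature but the kernel's blkio cgroup controller (cgroup v1), which can cap IOPS per block device. A sketch, assuming the scratch space sits on its own device with the hypothetical major:minor numbers 8:16:

    # Cap any process placed in this cgroup at ~100 IOPS on device 8:16.
    # 8:16 is hypothetical; find the real numbers with lsblk.
    mkdir /sys/fs/cgroup/blkio/scratch
    echo "8:16 100" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_iops_device
    echo "8:16 100" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_iops_device
    echo $$ > /sys/fs/cgroup/blkio/scratch/cgroup.procs   # move the current shell in
    unzip big.zip                                         # now throttled to ~100 IOPS

The caveat is that throttling applies per device rather than per directory, so the scratch space has to live on its own LV or partition for the cap to affect only scratch I/O.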