Why didn't database partitioning work? Extract from thedailywtf.com
Posted by questzen on Server Fault
Published on 2010-03-24T09:03:15Z
Indexed on 2010/03/24 9:13 UTC
Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx
Article summary: The DBA suggests an approach involving rigorous partitioning: 10 partitions per disk (3 physical disks and 3 RAID). The stats show that performance is suboptimal. The DBA then suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk with a single partition and saves the day.
The size of the disks was not mentioned, but given today's typical disk sizes (on the order of 100 GB), the partitions would be huge; it surprises me that a single disk with a single partition outperformed the partitioned setups.
Initially I suspected that the data was segregated and hence reads were faster. But then how come performance didn't degrade over time, with all the inserts and updates happening? I saw this discussed on Reddit, but the explanation there was mostly spindle/platter-centered, and the article made no mention of that. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?
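To illustrate the guess about a bad hash distribution, here is a minimal sketch (the key names and partition count are hypothetical, not from the article): if the partitioning key is skewed, a hash-based scheme piles most rows onto a few partitions, so a handful of disks end up doing nearly all the I/O while the rest sit idle.

```python
from collections import Counter

NUM_PARTITIONS = 10  # hypothetical: 10 partitions, as in the article's first setup

def partition_for(key: str) -> int:
    # naive scheme: route a row to a partition by hashing its key
    return hash(key) % NUM_PARTITIONS

# Distinct keys spread roughly evenly across partitions...
uniform = Counter(partition_for(f"order-{i}") for i in range(100_000))

# ...but a skewed key (e.g. 90% of rows share one customer id) piles
# almost everything onto a single partition.
skewed = Counter(partition_for("customer-42") for _ in range(90_000))
skewed.update(partition_for(f"customer-{i}") for i in range(10_000))

print("uniform spread:", sorted(uniform.values()))
print("skewed spread: ", sorted(skewed.values()))
```

Running this shows the uniform case with roughly 10,000 rows per partition, while the skewed case concentrates over 90% of the rows in one partition, which is the kind of non-uniform allocation that could negate any benefit from spreading partitions across disks.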
© Server Fault or respective owner