Using Partitions for a large MySQL table
Posted by user293594 on Stack Overflow, 2010-03-27
An update on my attempts to implement a 505,000,000-row table on MySQL on my MacBook Pro: Following the advice given, I have partitioned my table, tr:
    i  INT UNSIGNED NOT NULL,
    j  INT UNSIGNED NOT NULL,
    A  FLOAT(12,8)  NOT NULL,
    nu BIGINT       NOT NULL,
    KEY (nu), KEY (A)
with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and its maximum value is 30000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu < 25,000,000,000, p1: nu < 50,000,000,000, etc.; a sketch of the full DDL is at the end of this post). I was thinking that this should speed up a typical SELECT:
    SELECT * FROM tr WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1;
to something of the order of the same query on a table consisting of only the data in the relevant partition (under 30 seconds). But it's taking 30 minutes or more to return rows for a query confined to a single partition, and double that if the query spans two (contiguous) partitions. I realise I could just have 15 separate tables and query them individually, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?
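
For reference, the setup described above amounts to roughly the following DDL. The partition names and the middle boundaries are illustrative, following the 25,000,000,000-wide ranges I mentioned, and I've shown the final partition with MAXVALUE, which may not match the actual upper bound:

    CREATE TABLE tr (
        i  INT UNSIGNED NOT NULL,
        j  INT UNSIGNED NOT NULL,
        A  FLOAT(12,8)  NOT NULL,
        nu BIGINT       NOT NULL,
        KEY (nu),
        KEY (A)
    )
    PARTITION BY RANGE (nu) (
        PARTITION p0  VALUES LESS THAN (25000000000),
        PARTITION p1  VALUES LESS THAN (50000000000),
        -- p2 .. p13 continue in steps of 25,000,000,000
        PARTITION p14 VALUES LESS THAN MAXVALUE  -- actual upper bound may differ
    );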
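In case it helps with diagnosis: my understanding is that EXPLAIN PARTITIONS (MySQL 5.1 syntax) reports which partitions the optimizer will actually read, so for the query above only p3 (the 75-100 billion range) should be listed if pruning is working:

    EXPLAIN PARTITIONS
    SELECT *
    FROM tr
    WHERE nu > 95000000000
      AND nu < 100000000000
      AND A > 1;
    -- the 'partitions' column of the output should list only p3 if pruning is working

If the partitions column lists all fifteen partitions rather than just p3, the range condition on nu isn't being pruned and each query is effectively scanning the whole table.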