MySQL NDB Cluster - node restart
- by Arafat
Hi guys!
I just set up a MySQL cluster on a fairly decent box (IBM x3650 M3) with 24 GB of memory, a 6-core Xeon, and 6 Gbps SAS HDDs, running 64-bit Debian Lenny 5.
The NDB version is 7.1.9a. Our database size on MyISAM is around 3.2 GB; ndb_size estimates 58 GB for the NDB engine.
A little info about my database is as follows.
150 common tables for global purposes.
130 tables for each client. So it goes like this:
130 x 115 (clients) = 14,950 tables.
Is it normal or usual to have 14,000+ tables in one database? The reasons we did this were easy maintenance and per-client customization.
Now, the problem is that NDB Cluster can only support 20,320 tables. But it can support 5,000,000,000 rows in one table, if I'm not wrong.
My real headache is that the cluster data node takes less than two minutes to start up without any data. But as soon as I convert my tables to NDB (only 2,000 tables so far), the data node takes at least 30 to 40 minutes to start up. Is that normal? If I convert all my tables to NDB, will it take even longer?
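From what I've read, a lot of that startup time goes into rebuilding ordered indexes, and there are a couple of data node parameters in the manual that are supposed to parallelize or speed that up. I haven't tested these on 7.1.9a yet, so please treat this as a sketch from the docs rather than something I've verified:

```ini
[ndbd default]
# Rebuild ordered indexes with multiple threads during a
# system or node start (check that your 7.1 build supports this).
BuildIndexThreads = 4

# Build ordered indexes in a separate pass during an *initial*
# node restart; only has an effect together with BuildIndexThreads.
TwoPassInitialNodeRestartCopy = 1
```

If anyone has used these with a similar table count, I'd love to hear how much they helped.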
Or, let's say I consolidate my 14,000 tables' data down to 130 tables, will that help?
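By consolidation I mean folding the 115 per-client copies of each table into one shared table keyed by a client id, roughly like this (the table and column names here are made up for illustration):

```sql
-- One shared table instead of 115 per-client copies;
-- every row carries the client it belongs to.
CREATE TABLE invoices (
    client_id  INT NOT NULL,
    invoice_id INT NOT NULL,
    amount     DECIMAL(10,2),
    PRIMARY KEY (client_id, invoice_id)
) ENGINE=NDBCLUSTER;

-- Per-client access then just filters on client_id:
SELECT * FROM invoices WHERE client_id = 42;
```

My thinking is that fewer schema objects would mean less metadata and fewer indexes for the data node to deal with at startup, but I'd like confirmation before rewriting the app for it.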
Or is there anything idiotically wrong with what I'm doing?
I'll attach my config.ini file soon. Here's a simple overview of my config:
DataMemory = 14G
IndexMemory = 3G
MaxNoOfTables = 14000
MaxNoOfAttributes = 78000
I'm just testing these values with 2000 tables first.
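Until I attach the full file, the overview above maps to a [ndbd default] section roughly like this (parameter names as spelled in the 7.1 manual):

```ini
[ndbd default]
DataMemory = 14G
IndexMemory = 3G
MaxNoOfTables = 14000
MaxNoOfAttributes = 78000
```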
Please advise on how to speed up the startup, and point out where I'm going wrong. Thanks in advance, guys!