Hadoop Map/Reduce - simple use example to do the following...

Posted by alexeypro on Stack Overflow. Published on 2010-04-22.

I have a MySQL database where I store a BLOB (which contains a JSON object) together with an ID for that JSON object. The JSON objects contain a lot of different information, say "city": "Los Angeles" and "state": "California".

There are about 500k such records for now, but the number is growing, and each JSON object is quite big.

My goal is to run searches (in real time) against the MySQL database. Say, I want to find all JSON objects whose "state" is "California" and whose "city" is "San Francisco".
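For the matching step itself, here is a minimal sketch of the kind of predicate I have in mind, assuming the org.json library and flat top-level "state"/"city" keys (the real layout depends on my actual JSON structure):

    import org.json.JSONException;
    import org.json.JSONObject;

    public class JsonMatcher {

        // True when the JSON document has the wanted state and city.
        // optString() returns "" for missing keys, so absent fields simply don't match.
        public static boolean matches(String json, String state, String city)
                throws JSONException {
            JSONObject obj = new JSONObject(json);
            return state.equals(obj.optString("state"))
                && city.equals(obj.optString("city"));
        }

        public static void main(String[] args) throws Exception {
            String doc = "{\"state\":\"California\",\"city\":\"San Francisco\"}";
            System.out.println(matches(doc, "California", "San Francisco")); // prints: true
        }
    }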

I want to utilize Hadoop for the task. My idea is to have a "job" that takes chunks of, say, 100 records (rows) from MySQL, checks them against the given search criteria, and returns the IDs of those that qualify.
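To make the idea concrete, here is a minimal sketch of what I imagine such a job looking like, using Hadoop's DBInputFormat to read the rows straight out of MySQL. Everything specific in it is an assumption on my part: the org.json library for parsing, a json_docs table with id and doc columns, the connection details, and the Hadoop 2 mapreduce API.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.json.JSONObject;

    public class JsonSearchJob {

        // One row from the (assumed) json_docs table: the id plus the JSON blob.
        public static class JsonRecord implements Writable, DBWritable {
            long id;
            String json;

            public void readFields(ResultSet rs) throws SQLException {
                id = rs.getLong("id");
                json = rs.getString("doc"); // assumed name of the BLOB column
            }
            public void write(PreparedStatement ps) throws SQLException {
                ps.setLong(1, id);
                ps.setString(2, json);
            }
            // writeUTF/readUTF cap a value at 64 KB; use Text for really big blobs.
            public void readFields(DataInput in) throws IOException {
                id = in.readLong();
                json = in.readUTF();
            }
            public void write(DataOutput out) throws IOException {
                out.writeLong(id);
                out.writeUTF(json);
            }
        }

        // Map-only job: emit the id of every record whose JSON matches the criteria.
        public static class MatchMapper
                extends Mapper<LongWritable, JsonRecord, LongWritable, NullWritable> {
            @Override
            protected void map(LongWritable key, JsonRecord rec, Context ctx)
                    throws IOException, InterruptedException {
                JSONObject obj;
                try {
                    obj = new JSONObject(rec.json); // same check as the matcher sketch above
                } catch (Exception e) {
                    return; // skip malformed JSON rather than failing the task
                }
                if ("California".equals(obj.optString("state"))
                        && "San Francisco".equals(obj.optString("city"))) {
                    ctx.write(new LongWritable(rec.id), NullWritable.get());
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Connection details are placeholders.
            DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                    "jdbc:mysql://dbhost/mydb", "user", "password");

            Job job = Job.getInstance(conf, "json-search"); // new Job(conf, ...) on older Hadoop
            job.setJarByClass(JsonSearchJob.class);
            job.setMapperClass(MatchMapper.class);
            job.setNumReduceTasks(0); // matching ids go straight to the output files
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(NullWritable.class);

            DBInputFormat.setInput(job, JsonRecord.class,
                    "json_docs",   // assumed table name
                    null,          // extra WHERE conditions (none)
                    "id",          // ORDER BY column, used when splitting the table
                    "id", "doc");  // columns to select
            FileOutputFormat.setOutputPath(job, new Path("/tmp/json_search_out"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

If I understand it right, DBInputFormat would even handle the chunking for me: it splits the table into ranges and runs one mapper per split, so the "100 records at a time" part would not have to be hand-rolled.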

Pros/cons? I understand one might think I should just use plain SQL power for this, but the JSON object structure is pretty "heavy": normalized into an SQL schema it becomes at least 3-5 joined tables, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) Even then, every SQL query has to be analyzed to make sure it actually uses the indexes; otherwise a full table scan is literally painful. And with such a structure the only way "up" is vertical scaling, which I am not sure is the best option for me, since both the JSON objects themselves (the data structure) and their number will keep growing. :-)

Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important?

Thank you.
