Dealing with a huge SQL result set

Posted by Dave McClelland on Stack Overflow
Published on 2010-03-26T00:03:29Z

Filed under: c# | .NET

I am working with a rather large MySQL database (several million rows) with a column storing BLOB images. The application grabs a subset of the images and runs some processing algorithms on them. The problem I'm running into is that, given the size of the dataset, the result set my query returns is too large to fit in memory.
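For context, this is roughly the shape of the original query. The table name, column names, connection string, and the use of Connector/NET (MySql.Data) are all assumptions for illustration only:

```csharp
using System.Data;
using MySql.Data.MySqlClient;

class LoadEverythingAtOnce
{
    // Materializes the whole result set -- every row and every BLOB -- in memory,
    // which is what fails once the subset grows into the millions of rows.
    static DataTable LoadAllImages(string connectionString)
    {
        DataTable table = new DataTable();
        using (MySqlConnection conn = new MySqlConnection(connectionString))
        using (MySqlDataAdapter adapter = new MySqlDataAdapter(
            "SELECT id, image_blob FROM images WHERE needs_processing = 1", conn))
        {
            adapter.Fill(table); // Fill() buffers the entire result set client-side
        }
        return table;
    }
}
```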

For the time being, I have changed the query to not return the images. While iterating over the result set, I run a second SELECT that fetches the image for the current record. This works, but the tens of thousands of extra queries have caused an unacceptable drop in performance.
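A sketch of this workaround, under the same invented schema and a hypothetical `Process` method standing in for the image algorithms:

```csharp
using System;
using MySql.Data.MySqlClient;

class PerRowWorkaround
{
    static void ProcessAll(string connectionString)
    {
        using (MySqlConnection conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (MySqlCommand cmd = new MySqlCommand(
                "SELECT id FROM images WHERE needs_processing = 1", conn))
            using (MySqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    long id = Convert.ToInt64(reader["id"]);

                    // One extra round trip per row -- this is where the tens of
                    // thousands of additional queries come from. A second connection
                    // is needed because Connector/NET does not allow another command
                    // while a DataReader is open on the first connection.
                    using (MySqlConnection conn2 = new MySqlConnection(connectionString))
                    {
                        conn2.Open();
                        using (MySqlCommand imgCmd = new MySqlCommand(
                            "SELECT image_blob FROM images WHERE id = @id", conn2))
                        {
                            imgCmd.Parameters.AddWithValue("@id", id);
                            byte[] image = (byte[])imgCmd.ExecuteScalar();
                            Process(image); // hypothetical processing step
                        }
                    }
                }
            }
        }
    }

    static void Process(byte[] image) { /* image-processing algorithm goes here */ }
}
```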

My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like a middle-of-the-road compromise between the two approaches, but I suspect there is a better solution I'm not aware of. Is there another way to keep only portions of a gigantic result set in memory at a time?
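A sketch of that batching idea, again with invented names, paging through the table 10,000 rows at a time with ORDER BY and LIMIT:

```csharp
using System;
using MySql.Data.MySqlClient;

class BatchedApproach
{
    const int BatchSize = 10000;

    static void ProcessInBatches(string connectionString)
    {
        using (MySqlConnection conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            int offset = 0;
            while (true)
            {
                int rowsInBatch = 0;
                string sql = string.Format(
                    "SELECT id, image_blob FROM images WHERE needs_processing = 1 " +
                    "ORDER BY id LIMIT {0}, {1}", offset, BatchSize);

                using (MySqlCommand cmd = new MySqlCommand(sql, conn))
                using (MySqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        byte[] image = (byte[])reader["image_blob"];
                        Process(image); // hypothetical processing step
                        rowsInBatch++;
                    }
                }

                if (rowsInBatch < BatchSize)
                    break; // final (partial) batch reached

                offset += BatchSize; // only one batch of BLOBs is in memory at a time
            }
        }
    }

    static void Process(byte[] image) { }
}
```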

Cheers,

Dave McClelland

© Stack Overflow or respective owner
