Dealing with a large number of text strings

Posted by Fadrian on Stack Overflow, 2010-03-15

While running, my project collects a large number of text blocks (about 20K in a typical run; the largest I have seen is about 200K) in a short span of time and stores them in a relational database. Each text block is relatively small, averaging about 15 short lines (roughly 300 characters). The current implementation is in C# (VS2008) on .NET 3.5, and the backend DBMS is Microsoft SQL Server 2005.

Performance and storage are both important concerns for this project, but performance takes priority over storage. I am looking for answers to these questions:

  • Should I compress the text before storing it in the database, or let SQL Server worry about compacting the storage?
  • Do you know the best compression algorithm/library to use in this context for the best performance? Currently I just use the standard GZip in the .NET framework (see the sketch after this list).
  • Do you know any best practices for dealing with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework. (It is a big project and this requirement is only a small part of it.)
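For reference, a minimal sketch of the standard GZipStream approach mentioned above, assuming UTF-8 encoding and storage of the compressed bytes in something like a VARBINARY(MAX) column (the Compress/Decompress method names are just illustrative):

    using System.IO;
    using System.IO.Compression;
    using System.Text;

    static byte[] Compress(string text)
    {
        byte[] raw = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            // The GZipStream must be disposed before reading the buffer
            // so that it flushes its footer bytes.
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            return output.ToArray(); // ToArray still works after the stream is closed
        }
    }

    static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }

Note that GZip adds a fixed header and trailer (roughly 18 bytes total), so for blocks averaging only ~300 characters the savings can be small or even negative; DeflateStream has less framing overhead and may be worth measuring as an alternative.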

EDIT: I will keep adding to this to clarify the points raised.

  • I don't need text indexing or searching on this text. I just need to be able to retrieve the blocks at a later stage for display, using their primary keys.
  • I have a working solution implemented as described above, and SQL Server has no issue at all handling it. The program will run quite often and needs to work with a large data context, so you can imagine the size growing very rapidly; every optimization I can make will help (one option, batching the inserts, is sketched after this list).
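Since insert throughput is likely the dominant cost when writing 20K-200K rows at once, batching with System.Data.SqlClient.SqlBulkCopy is one common .NET 3.5-compatible technique (not from the original post; the dbo.TextBlocks table and BulkInsert method below are hypothetical):

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical schema: dbo.TextBlocks (Id INT IDENTITY PRIMARY KEY, Body VARBINARY(MAX))
    static void BulkInsert(IEnumerable<byte[]> compressedBlocks, string connectionString)
    {
        // Stage the rows in a DataTable whose column matches the target table.
        var table = new DataTable();
        table.Columns.Add("Body", typeof(byte[]));
        foreach (byte[] block in compressedBlocks)
            table.Rows.Add(block);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.TextBlocks";
            bulk.ColumnMappings.Add("Body", "Body");
            bulk.BatchSize = 5000; // tune against your own workload
            bulk.WriteToServer(table);
        }
    }

Compared with row-by-row INSERT statements, bulk copy avoids a network round trip per row, which tends to matter far more at this scale than the choice of compression algorithm.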
