Search Results

Search found 33242 results on 1330 pages for 'database optimization'.


  • Database: Storing Dates as Numeric Values

    - by Chin
    I'm considering storing some date values as ints, e.g. 201003150900. Apart from the fact that I lose any timezone information, is there anything else I should be concerned about with this solution? Any queries using this column would be simple 'before or after' type lookups, e.g. where datefield is less than 201103000000 (before March next year). Currently the app is using MSSQL 2005. Any pointers to pitfalls appreciated.
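
    A minimal sketch of this kind of lookup, assuming a hypothetical orders table whose datefield column holds YYYYMMDDHHMM values:

        -- Hypothetical table: datefield holds values like 201003150900 (YYYYMMDDHHMM)
        CREATE TABLE orders (
            id        INT IDENTITY PRIMARY KEY,
            datefield BIGINT NOT NULL
        );

        -- "Before March next year" becomes a plain numeric comparison:
        SELECT id FROM orders WHERE datefield < 201103010000;

    One pitfall this makes visible: nothing stops invalid "dates" (month 13, hour 25) from being inserted, which a DATETIME column would reject, and date arithmetic (adding a day, diffing two values) no longer works directly.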

    Read the article

  • Database is normalized, but the following is a problem, please help

    - by user287745
    The problem is that there are relationships so large that, after normalizing, they have composite primary keys of around 20 columns. These are really foreign keys, but they have to be declared as part of the primary key to identify the relationship uniquely. Is this correct? Please help. (I also apologize to the expert community for not accepting answers; I was not aware that accepting was possible, the tick mark is not that visible :-))
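
    A common way out, as a sketch only (all names hypothetical): give the relationship table a surrogate key and demote the foreign keys to plain columns covered by a unique constraint, so uniqueness is still enforced without a 20-column primary key:

        -- Surrogate key replaces the wide composite primary key.
        CREATE TABLE enrollment (
            enrollment_id INT IDENTITY PRIMARY KEY,
            student_id    INT NOT NULL REFERENCES student(student_id),
            course_id     INT NOT NULL REFERENCES course(course_id),
            term_id       INT NOT NULL REFERENCES term(term_id),
            -- ...the remaining foreign key columns...
            CONSTRAINT uq_enrollment UNIQUE (student_id, course_id, term_id)
        );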

    Read the article

  • Why is it bad to use boolean flags in databases? And what should be used instead?

    - by David Chanin
    I've been reading through some of the guides on database optimization and best practices, and a lot of them suggest not using boolean flags at all in the DB schema (e.g. http://forge.mysql.com/wiki/Top10SQLPerformanceTips). However, they never provide any reason as to why this is bad. Is it a performance issue? Is it hard to index or query properly? Furthermore, if boolean flags are bad, what should you use to store boolean values in a database? Is it better to store boolean flags as an integer and use a bitmask? This seems like it would be less readable.
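
    One commonly suggested alternative, sketched here with hypothetical names (the linked tips page does not spell out its reasoning): replace a yes/no flag with a nullable timestamp, which stores the same fact plus when it became true:

        -- Low-selectivity flag: an index on is_deleted rarely helps the optimizer.
        CREATE TABLE account (
            account_id INT PRIMARY KEY,
            is_deleted TINYINT NOT NULL DEFAULT 0
        );

        -- Same information and more: NULL means active, non-NULL says when it was deleted.
        CREATE TABLE account2 (
            account_id INT PRIMARY KEY,
            deleted_at DATETIME NULL
        );

        SELECT account_id FROM account2 WHERE deleted_at IS NULL;  -- active accounts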

    Read the article

  • How to map two tables in a database (phpMyAdmin)

    - by thegrede
    I'm working on a project in which a user can save their own coupon codes on the website, and I want to know the best way to do that. Let's say I have one table with the users, like this:

        userId | firstName | lastName | codeId

    and then a table of the coupon codes, like this:

        codeId | codeNumber

    I can connect codeId to userId, so when someone saves a coupon, the codeId from the coupon table goes into the codeId column of the users table. But what do I do when a user has multiple coupons that should all be connected to them? I see two options. Option 1: save the codeIds from the coupons table into the codeId column of the users table as a list like 1,2,3,4,5. Option 2: add a userId field to the coupons table and, for each saved coupon, store the userId of the user who added it. Which of the two options is better? Thank you guys.
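
    For reference, the shape option 2 is reaching for is usually a third, separate linking table; a sketch in MySQL, reusing the question's column names:

        CREATE TABLE users (
            userId    INT PRIMARY KEY AUTO_INCREMENT,
            firstName VARCHAR(50),
            lastName  VARCHAR(50)
        );

        CREATE TABLE coupons (
            codeId     INT PRIMARY KEY AUTO_INCREMENT,
            codeNumber VARCHAR(50)
        );

        -- One row per saved coupon: a user can save many coupons,
        -- and a coupon can be saved by many users.
        CREATE TABLE user_coupons (
            userId INT NOT NULL,
            codeId INT NOT NULL,
            PRIMARY KEY (userId, codeId),
            FOREIGN KEY (userId) REFERENCES users(userId),
            FOREIGN KEY (codeId) REFERENCES coupons(codeId)
        );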

    Read the article

  • Friendship database schema

    - by Daniel Hertz
    I'm creating a db schema that involves users who can be friends, and I was wondering about the best way to model these friendships. Should it be its own table that simply has two columns, each representing a user? Thanks!
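
    A sketch of the two-column table suggested above (names hypothetical). Storing each pair once, with the smaller id first, avoids duplicate rows for the same friendship:

        CREATE TABLE friendships (
            user_a INT NOT NULL,
            user_b INT NOT NULL,
            PRIMARY KEY (user_a, user_b),
            FOREIGN KEY (user_a) REFERENCES users(id),
            FOREIGN KEY (user_b) REFERENCES users(id),
            CHECK (user_a < user_b)  -- one row per pair, where the engine enforces CHECK
        );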

    Read the article

  • Correct approach to storing this in the database

    - by John
    I'm developing an online website (using Django and MySQL). I have a Tests table and a User table. There are 50 tests in the table, and each user completes them at their own pace. How do I store the status of the tests in my DB? One idea that came to mind is to create an additional column in the User table containing test IDs separated by commas or some other delimiter:

        userid | username | testscompleted
        1      | john     | 1, 5, 34
        2      | tom      | 1, 10, 23, 25

    Another idea was to create a separate table to store userid and testid. I'd have only two columns but thousands of rows (number of tests * number of users), and they would always continue to increase:

        userid | testid
        1      | 1
        1      | 5
        2      | 1
        1      | 34
        2      | 10
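
    The second idea is the conventional one; as a sketch, reusing the question's column names (foreign key targets assumed):

        CREATE TABLE tests_completed (
            userid INT NOT NULL,
            testid INT NOT NULL,
            PRIMARY KEY (userid, testid),  -- each user completes a given test once
            FOREIGN KEY (userid) REFERENCES users(userid),
            FOREIGN KEY (testid) REFERENCES tests(testid)
        );

        -- "Which tests has user 1 completed?"
        SELECT testid FROM tests_completed WHERE userid = 1;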

    Read the article

  • Database table relationships: Always also relate to specified value (Linq to SQL in .NET Framework)

    - by sinni800
    I really cannot describe my question better in the title. If anyone has suggestions, please tell! I use the Linq to SQL framework in .NET. I ran into something that would be easy if the framework supported it, and a lot of extra coding otherwise. I have an n-to-n relation with a helper table in between. The tables are items, places, and a connection table relating the two. One item can be found in many places, and one place can hold many items. Now of course there will be many items that belong in ALL places, and there is a problem: places can always be added. So I need a place id that encompasses ALL places, always, like maybe a place id of "0". If the helper table has a row with a place id of zero, the item should be visible in all places. In SQL this would be a simple "where [...] or place-id = 0", but how do I do this with Linq relations? Also, a little side question: how could I manage "all but this place" kinds of exclusions?
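
    For reference, the plain-SQL versions of both queries (table and parameter names hypothetical); place id 0 is the "all places" sentinel row described above:

        -- Items visible in a given place, or everywhere:
        SELECT i.*
        FROM   items i
        JOIN   item_places ip ON ip.item_id = i.id
        WHERE  ip.place_id = @placeId OR ip.place_id = 0;

        -- "All but this place" via a separate exclusions table:
        SELECT i.*
        FROM   items i
        WHERE  NOT EXISTS (SELECT 1 FROM item_place_exclusions e
                           WHERE  e.item_id = i.id AND e.place_id = @placeId);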

    Read the article

  • Database Modelling - Conceptually different entities with near identical fields

    - by Andrew Shepherd
    Suppose you have two sets of conceptual entities:

        MarketPriceDataSet, which has multiple ForwardPriceEntry rows
        PoolPriceForecastDataSet, which has multiple PoolPriceForecastEntry rows

    The two child objects have near-identical fields. ForwardPriceEntry has:

        StartDate
        EndDate
        SimulationItemId
        ForwardPrice
        MarketPriceDataSetId (foreign key to parent table)

    PoolPriceForecastEntry has:

        StartDate
        EndDate
        SimulationItemId
        ForecastPoolPrice
        PoolPriceForecastDataSetId (foreign key to parent table)

    If I modelled them as separate tables, the only differences would be the foreign key and the name of the price field. There has been a debate as to whether the two near-identical tables should be merged into one. The options I've thought of are:

        1. Keep them as two independent, separate tables.
        2. Put both sets in one table with an additional "type" field and a parent_id equalling a foreign key to either parent table. This sacrifices referential integrity checks.
        3. Put both sets in one table with an additional "type" field, and create a complicated set of joining tables to maintain referential integrity.

    What do you think I should do, and why?
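
    A sketch of option 2 (hypothetical names), which makes the lost referential integrity concrete; no foreign key can be declared on parent_id, because it points at either parent table depending on entry_type:

        CREATE TABLE price_entry (
            id                 INT IDENTITY PRIMARY KEY,
            entry_type         CHAR(1) NOT NULL,  -- 'F' = forward price, 'P' = pool price forecast
            parent_id          INT NOT NULL,      -- MarketPriceDataSet or PoolPriceForecastDataSet id
            start_date         DATETIME NOT NULL,
            end_date           DATETIME NOT NULL,
            simulation_item_id INT NOT NULL,
            price              DECIMAL(18,4) NOT NULL  -- ForwardPrice or ForecastPoolPrice
        );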

    Read the article

  • Database design - table relationship question

    - by iama
    I am designing the schema for a simple quiz application. It has two tables: "Question" and "Answer Choices". The Question table has question ID, question text, and answer ID columns. The Answer Choices table has question ID, answer ID, and answer text columns. With this simple schema, it is obvious that a question can have multiple answer choices, hence the need for the Answer Choices table. However, a question can have only one correct answer, hence the answer ID in the Question table. That answer ID column, though, gives the illusion that there can be multiple questions for a single answer, which is not correct. The alternative that eliminates this illusion is another table just for the correct answer, with just two columns, question ID and answer ID, and a 1-1 relationship between the two tables. However, I think this is redundant. Any recommendation on how best to design this, enforcing the rules that a question can have multiple answer choices but only one correct answer? Many thanks.
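
    One way to enforce both rules declaratively, as a sketch: make (question ID, answer ID) the key of the choices table, and point a composite foreign key at it from the question table, so the correct answer must be one of that question's own choices:

        CREATE TABLE question (
            question_id   INT PRIMARY KEY,
            question_text VARCHAR(500) NOT NULL,
            answer_id     INT NULL  -- the single correct answer; NULL until choices exist
        );

        CREATE TABLE answer_choices (
            question_id INT NOT NULL REFERENCES question(question_id),
            answer_id   INT NOT NULL,
            answer_text VARCHAR(500) NOT NULL,
            PRIMARY KEY (question_id, answer_id)
        );

        -- The correct answer must be one of this question's own choices:
        ALTER TABLE question
            ADD FOREIGN KEY (question_id, answer_id)
            REFERENCES answer_choices(question_id, answer_id);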

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure in pseudo SQL, I hope to explain it a bit better.

        TABLE StructureName {
            Id GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id GUID PK,
            ParentId GUID (FK to Structure),
            NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (I'm not worried about the ordering of children for this problem). StructureName is a simplification of a translation system. Finally, Something simply references the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree layout of the Structure table, and previous versions should always be available. There seem to be a few possible ways to tackle this issue, like copying the entire structure, but most approaches cause one to lose referential integrity. For example, following that approach one would have to duplicate the Something record, given that the root structure would be a new record with a new ID. Other avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie
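
    One generic direction, strictly as a sketch in the same pseudo SQL (not something the post commits to): separate the stable identity of a node from the identity of each version, so existing references never break and old versions stay queryable:

        TABLE StructureVersion {
            Id           GUID PK,          -- identity of this particular version
            HeadId       GUID NOT NULL,    -- stable identity shared by all versions
            Version      INT NOT NULL,
            ParentHeadId GUID,             -- tree edges reference stable HeadIds
            NameId       GUID (FK to StructureName) NOT NULL
        }

        -- 'Something' keeps referencing the stable HeadId; resolving the current
        -- (or any historical) version is a lookup on (HeadId, Version).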

    Read the article

  • Naming of boolean column in database table

    - by Space Cracker
    I have a 'Service' table and the following column descriptions:

        Is user verification required for the service?
        Is the user's email activation required for the service?
        Is the user's mobile activation required for the service?

    I hesitate between naming these columns

        IsVerificationRequired
        IsEmailActivationRequired
        IsMobileActivationRequired

    or

        RequireVerification
        RequireEmailActivation
        RequireMobileActivation

    I can't determine which way is best. Is one of the above suggested sets the best, or is there a better alternative?

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application that uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting; the database costs alone are $150 per month (100 MB of extra MSSQL space is $40 per month). Our database size is 896.38 MB. I am looking at getting a virtual private server and upgrading the database to an MSSQL 2008 Express database. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the differences between Express and Web. Any suggestions here? What I would also like to know is: if we upgrade to an MSSQL 2008 database, are there any issues with possible data transformations in the future? I.e., is it possible to download it and mount it with SQL Server 2008 Standard edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues I might face? Thanks, Mike
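
    For the data-movement question: a SQL Server 2000 backup can be restored directly onto 2008 R2 (the database is upgraded during the restore and cannot go back to 2000 afterwards). A sketch in T-SQL, with hypothetical names and paths:

        -- On the SQL Server 2000 box:
        BACKUP DATABASE MyAppDb TO DISK = 'C:\backups\MyAppDb.bak';

        -- On the 2008 R2 Express box:
        RESTORE DATABASE MyAppDb FROM DISK = 'C:\backups\MyAppDb.bak'
            WITH MOVE 'MyAppDb'     TO 'C:\data\MyAppDb.mdf',
                 MOVE 'MyAppDb_log' TO 'C:\data\MyAppDb_log.ldf';

    The same .bak can later be restored onto Standard edition, since the on-disk format does not differ between editions (provided no edition-specific features are enabled).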

    Read the article

  • MySQL root user can't access database

    - by Ed Schofield
    Hi all, we have a MySQL database ('myhours') on a production database server that is accessible to one user ('edsf') only, but not to the root user. The command SHOW DATABASES as the root user does not list the 'myhours' database. The same command as the 'edsf' user lists it:

        mysql> SHOW DATABASES;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | myhours            |
        +--------------------+
        2 rows in set (0.01 sec)

    Only the 'edsf' user can access the 'myhours' database with USE myhours. Neither user seems to have permission to grant further permissions for this database. My questions are:

    Q1. How is it that the root user does not have permission to use the database? How could this have come about? The output of SHOW GRANTS FOR 'root'@'localhost'; looks fine to me:

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*xxx' WITH GRANT OPTION

    Q2. How can I recover this situation to make this database visible to the MySQL root user and grant further permissions on it? Thanks in advance for any help! -- Ed
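
    One place worth looking, as a sketch and not a definitive diagnosis: the database-level privilege table shows what each account actually sees, and the database can be re-granted explicitly:

        -- Inspect database-level privileges directly:
        SELECT Host, Db, User FROM mysql.db WHERE Db = 'myhours';

        -- Re-grant and reload, run as an account that can modify the mysql schema:
        GRANT ALL PRIVILEGES ON myhours.* TO 'root'@'localhost';
        FLUSH PRIVILEGES;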

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters, and each cluster will have two nodes. Our application, or our system as I should rather say, comes in two or three parts.

    Part 1: An EAR deployed to one cluster that contains 3rd-party vendor code combined with customization code. The vendor code is EJB 2.0 compliant and has a lot of remote home interfaces.

    Part 2: An EAR deployed to the same cluster as the first. This EAR contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI, also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.

    Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services deployed to the other cluster.

    Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI, but if we are going across clusters and/or other cells, the communication should be web services. That said, some of us are wondering about performance and about optimizing communication for speed in the applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call? Are there other optimization techniques? Should we consider them or not? What are the costs/benefits? Here is the question from one of our team members as sent in their email:

    Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable call by reference for EJB communication when they're in the same application server JVM. It states the following:

        Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:

            -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
            -Dcom.ibm.CORBA.iiop.noLocalCopies=true

        CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.

    Also, we will be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations? Thanks

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering:

        - Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible?
        - Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically-allocated arrays?
        - I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)?
        - How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops?
        - Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode, even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of?
        - One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving method calls like strlen() out of the termination conditions of loops. Are there any optimizations like this that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code?

    Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns, lo_shipmode and lo_ordtotalprice, to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of the minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified value 5 exists in any IMCU by comparing 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm that the IM column store was used, we need to examine the session-level statistics. You can monitor them by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after executing our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • How to document and teach others "optimized beyond recognition" computationally intensive code?

    - by rwong
    Occasionally there is the 1% of code that is computationally intensive enough to need the heaviest kind of low-level optimization. Examples are video processing, image processing, and all kinds of signal processing in general. The goals are to document and to teach the optimization techniques, so that the code does not become unmaintainable and prone to removal by newer developers. (*)

    (*) Notwithstanding the possibility that the particular optimization is completely useless on some unforeseeable future CPUs, such that the code will be deleted anyway.

    Considering that software offerings (commercial or open source) retain their competitive advantage by having the fastest code and making use of the newest CPU architectures, software writers often need to tweak their code to make it run faster while getting the same output for a certain task, whilst tolerating a small amount of rounding error. Typically, a software writer can keep many versions of a function as documentation of each optimization / algorithm rewrite that takes place. How does one make these versions available for others to study their optimization techniques?

    Read the article

  • Problems locating Redmine database

    - by zordor
    I have an active Redmine installation, but I cannot find the database it is running on right now. It should be on PostgreSQL, but the database where it should be running is empty. Does anybody have any idea how to check the current database used by Redmine? Please let me know if you need any extra information. Thank you.

    EDIT: OK, I now know which database it is using. In database.yml I have project_redmine, but it is using the database project, and I don't know why. That database is used by the developers for the actual project, so of course this is causing me problems. I am unable to run it on the right DB (project_redmine). Any ideas? :S
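
    A common cause of this symptom is Redmine running under a different RAILS_ENV than the database.yml section that was edited (production vs. development), so every section of database.yml is worth checking. From the PostgreSQL side, a quick sketch to see what is really in use:

        -- Inside psql: which database is the current session connected to?
        SELECT current_database();

        -- Which databases are growing? The one Redmine writes to will change size.
        SELECT datname, pg_size_pretty(pg_database_size(datname))
        FROM   pg_database
        WHERE  datistemplate = false;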

    Read the article

  • Issues in Convergence of Sequential minimal optimization for SVM

    - by Amol Joshi
    I have been working on Support Vector Machines for about 2 months now. I have coded the SVM myself, and for the optimization problem of the SVM I have used Sequential Minimal Optimization (SMO) by John Platt. Right now I am in the phase where I am grid searching to find the optimal C value for my dataset. (Please find details of my project application and dataset here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class) I have successfully checked my custom implemented SVM's accuracy for C values ranging from 2^0 to 2^6. But now I am having issues regarding the convergence of SMO for C = 128. I have tried to find the alpha values for C = 128, and it takes a very long time before it actually converges and successfully gives the alpha values. The time taken for SMO to converge is about 5 hours for C = 100. This is huge, I think, because SMO is supposed to be fast, even though I'm getting good accuracy. I am stuck right now because I cannot test the accuracy for higher values of C. I am displaying the number of alphas changed in every pass of SMO and getting 10, 13, 8... alphas changing continuously. The KKT conditions assure convergence, so what is the weird thing happening here? Please note that my implementation works fine for C <= 100 with good accuracy, though the execution time is long. Please give me inputs on this issue. Thank you, and cheers.
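
    For reference, the standard KKT conditions that SMO checks on each multiplier (not specific to this implementation):

        \[
        \alpha_i = 0 \;\Rightarrow\; y_i f(x_i) \ge 1, \qquad
        0 < \alpha_i < C \;\Rightarrow\; y_i f(x_i) = 1, \qquad
        \alpha_i = C \;\Rightarrow\; y_i f(x_i) \le 1
        \]

    Platt's paper checks these only to within a tolerance (on the order of 10^-3), and training time growing sharply with C is commonly observed with SMO, since more passes are needed before every multiplier satisfies its condition within that tolerance.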

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • ASP.NET Web Optimization - confusion about loading order

    - by Ciel
    Using the ASP.NET Web Optimization framework, I am attempting to load some JavaScript files. It works fine, except that I am running into a peculiar situation with either the loading order, the loading speed, or the execution; I cannot figure out which. Basically, I am using the Ace code editor for JavaScript, and I also want to include its autocompletion package. This requires two files:

        /ace.js
        /ext-language_tools.js

    This isn't an issue if I load both of these files the normal way (with <script> tags); it works fine. But when I try to use the Web Optimization bundles, it seems as if something goes wrong. Trying this out...

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js")
            .Include("~/js/ext-language_tools.js"));

    and then in the view...

        @Scripts.Render("~/bundles/js")

    I get the error "ace is not defined". This means the ace.js file hasn't run, or hasn't loaded, because if I break it apart into two bundles, it starts working:

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js"));

        bundles.Add(new ScriptBundle("~/bundles/js/language_tools")
            .Include("~/js/ext-language_tools.js"));

    Can anyone explain why this would behave in this fashion?

    Read the article

  • Memory optimization while downloading

    - by lboregard
    Hello all. I have the following piece of code that I'm looking to optimize, since I'm consuming gobs of memory and this routine is heavily used. The first optimization would be to move the StringBuilder construction out of the download routine, make it a field of the class, and clear it inside the routine. Can you please suggest any other optimizations, or point me in the direction of resources that could help me with this (web articles, books, etc.)? I'm thinking about replacing the StringBuilder with a fixed (much larger) buffer, or perhaps creating a larger-sized StringBuilder. Thanks in advance.

        StreamWriter _writer;
        StreamReader _reader;

        public string Download(string msgId)
        {
            _writer.WriteLine("BODY <" + msgId + ">");
            string response = _reader.ReadLine();
            if (!response.StartsWith("222"))
                return null;

            bool done = false;
            StringBuilder body = new StringBuilder(256 * 1024);
            do
            {
                response = _reader.ReadLine();
                if (OnProgress != null)
                    OnProgress(response.Length);

                if (response == ".")
                {
                    done = true;
                }
                else
                {
                    if (response.StartsWith(".."))
                        response = response.Remove(0, 1);
                    body.Append(response);
                    body.Append("\r\n");
                }
            } while (!done);

            return body.ToString();
        }

    Read the article
