Search Results

Search found 12802 results on 513 pages for 'memory profiler'.


  • Best Practice for Utilities Class?

    - by Sonny Boy
    Hey all, we currently have a utilities class that handles a lot of string formatting, date displays, and similar functionality, and it's a shared/static class. Is this the "correct" way of doing things, or should we be instantiating the utility class as and when we need it? Our main goal here is to reduce memory footprint, but performance of the application is also a consideration. Thanks, Matt PS. We're using .NET 2.0
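
    For reference, a minimal sketch of the pattern being asked about (type and member names are invented for illustration, not from the original project). A static class is loaded once per AppDomain and never instantiated, so it adds no per-call allocations:

        // Sketch of the shared/static utility pattern (C# 2.0 compatible).
        // A static class cannot be instantiated; the type loads once and
        // its methods carry no per-use allocation overhead.
        public static class FormatUtilities
        {
            public static string FormatDate(System.DateTime value)
            {
                return value.ToString("yyyy-MM-dd");
            }

            public static string Truncate(string value, int maxLength)
            {
                if (value == null || value.Length <= maxLength)
                    return value;
                return value.Substring(0, maxLength);
            }
        }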

    Read the article

  • Any difference in performance/compatibility of different languages in PostgreSQL?

    - by Igor
    PostgreSQL nowadays offers plenty of procedural languages: pl/pgsql, pl/perl, etc. Are there any differences in speed or memory consumption between procedures written in the different languages? Has anybody done any tests? Is it true that the native pl/pgsql is the most correct choice? And how does a procedure written in C++ and compiled into a loadable module differ, in all these respects, from a user function written in the pl/* languages?

    Read the article

  • 8GB Compact Flash Corrupted, Boot Sector Lost?

    - by robert
    I have an 8GB Kingston Compact Flash card, and when I insert it into my Mac it says the card is unreadable and asks me to initialize it. If I open Disk Utility it shows a 2.2 TB Generic Compact Flash card, and if I try to initialize that it gives me the error: "POSIX reports: impossible to allocate memory". How can I format it? Is there a way with fdisk or something to get this card working? Thanks

    Read the article

  • NHibernate - Stream large result sets?

    - by Dan Black
    Hi, I have to read in a large record set, process it, then write it out to a flat file. The large result set comes from a stored proc in SQL 2000. I currently have: var results = session.CreateSQLQuery("exec usp_SalesExtract").List(); I would like to be able to read the result set row by row, to reduce the memory footprint. Thanks
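
    One way to stream instead of materializing the whole list (a sketch, assuming the NHibernate session's underlying ADO.NET connection can be borrowed via ISession.Connection; the helper name is hypothetical):

        // Sketch: stream rows with a data reader instead of List(), so
        // only one row is held in memory at a time.
        using (var cmd = session.Connection.CreateCommand())
        {
            cmd.CommandText = "exec usp_SalesExtract";
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Process one row at a time, e.g. append it to the flat file.
                    WriteLineToFlatFile(reader); // hypothetical helper
                }
            }
        }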

    Read the article

  • How do Relational Databases Work Under the Hood?

    - by Pierreten
    I've always been interested in how you can throw some SQL at a database and it nearly instantaneously returns your results in an orderly manner, without your having to think about it as anything other than a black box. What is really going on? I'm pretty sure it has something to do with how values are laid out regularly in memory, similar to an array; but aside from that, I don't know much else. How is SQL parsed in a manner that facilitates all of this?
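
    The "laid out like an array" intuition can be made concrete with a toy sketch (purely illustrative, not how any particular engine is implemented): an index is conceptually a sorted array of (key, row location) pairs, so a point lookup is a binary search rather than a full scan:

        // Toy illustration: index entries kept sorted by key, so lookup
        // is O(log n) binary search instead of scanning every row.
        struct IndexEntry
        {
            public int Key;        // the indexed column's value
            public long RowOffset; // where the row lives on disk / in memory
        }

        static long? Lookup(IndexEntry[] index, int key)
        {
            int lo = 0, hi = index.Length - 1;
            while (lo <= hi)
            {
                int mid = lo + (hi - lo) / 2;
                if (index[mid].Key == key) return index[mid].RowOffset;
                if (index[mid].Key < key) lo = mid + 1;
                else hi = mid - 1;
            }
            return null; // key not present
        }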

    Read the article

  • What are the Ruby equivalents of Python itertools, esp. combinations/permutations/groupby?

    - by Amadeus
    Python's itertools module provides lots of goodies for processing an iterable/iterator by use of generators. For example:

        permutations(range(3)) --> 012 021 102 120 201 210
        combinations('ABCD', 2) --> AB AC AD BC BD CD
        [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D

    What are the equivalents in Ruby? By equivalent, I mean fast and memory efficient (Python's itertools module is written in C).

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit'. It is a passthrough stored procedure to another database on the same machine, MyOtherDB, which also has a stored procedure called 'CreateAudit'. In other words, in MyDb, CreateAudit does the following: EXEC dbo.MyOtherDB.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output of the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses; very interested in hearing how this can happen.
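
    One way to rule the DAL out (a sketch in plain ADO.NET; the parameter names @RecordId and @Status are guesses, not taken from the actual procedures): issue both calls synchronously on the same connection inside one transaction, so the second statement cannot start before the first has finished:

        // Sketch: force strictly sequential execution on one connection.
        // (Assumes using System.Data and System.Data.SqlClient.)
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                string[] statuses = new string[] { "Opened", "Closed" };
                foreach (string status in statuses)
                {
                    using (SqlCommand cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.CommandText = "CreateAudit";
                        cmd.Parameters.AddWithValue("@RecordId", recordId);
                        cmd.Parameters.AddWithValue("@Status", status);
                        cmd.ExecuteNonQuery(); // blocks until the proc completes
                    }
                }
                tx.Commit();
            }
        }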

    Read the article

  • Convert a large raster graphics image (bitmap, PNG, JPEG, etc.) to non-vector PostScript in C#

    - by Dennis Cheung
    How do I convert a large image and embed it into PostScript? I used to convert the bitmap into hex codes and render it with colorimage. That works for small icons, but I hit a /limitcheck error in Ghostscript when I try to embed slightly larger images. It seems there is a memory limit for bitmaps in Ghostscript. I am looking for a solution that can run without third-party tools or pre-processing other than Ghostscript itself.
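
    A common way around this, sketched under the assumption that the /limitcheck comes from building one oversized hex string in PostScript VM: stream the pixel data after the colorimage operator with currentfile/readhexstring, one scanline buffer at a time, so the interpreter never holds the whole image at once:

        // Sketch: emit a colorimage that reads its data from the file
        // itself, scanline by scanline, instead of one giant string.
        // 'writer' is a System.IO.TextWriter over the .ps output;
        // 'pixels' is assumed RGB, 3 bytes per pixel, row-major.
        static void EmitImage(System.IO.TextWriter writer,
                              byte[] pixels, int width, int height)
        {
            writer.WriteLine("gsave");
            writer.WriteLine("72 72 translate {0} {1} scale", width, height);
            writer.WriteLine("/picstr {0} string def", width * 3); // one scanline
            writer.WriteLine("{0} {1} 8 [{0} 0 0 -{1} 0 {1}]", width, height);
            writer.WriteLine("{ currentfile picstr readhexstring pop }");
            writer.WriteLine("false 3 colorimage");

            // Hex data follows the operator, one scanline per text line.
            for (int row = 0; row < height; row++)
            {
                var sb = new System.Text.StringBuilder(width * 6);
                for (int i = 0; i < width * 3; i++)
                    sb.Append(pixels[row * width * 3 + i].ToString("x2"));
                writer.WriteLine(sb.ToString());
            }
            writer.WriteLine("grestore");
        }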

    Read the article

  • Max file size for File.ReadAllLines

    - by user283897
    Hi, I need to read and process a text file. My processing would be easier if I could use the File.ReadAllLines method, but I'm not sure what the maximum size of a file is that can be read with this method without resorting to reading in chunks. I understand that it depends on the machine's memory, but are there still any recommendations for an average machine? I would greatly appreciate your fast response. Thanks, Lev
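
    For comparison, a sketch of the alternative that keeps memory flat regardless of file size: read one line at a time with a StreamReader instead of materializing every line up front (the per-line handler is hypothetical):

        // Sketch: process a file line by line; memory use stays roughly
        // constant no matter how large the file is, unlike ReadAllLines,
        // which holds every line in memory at once.
        using (var reader = new System.IO.StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                ProcessLine(line); // hypothetical per-line processing
            }
        }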

    Read the article

  • Getting identity from ADO.NET Update command

    - by rboarman
    My scenario is simple. I am trying to persist a DataSet and have the identity column filled in so I can add child records. Here's what I've got so far:

        using (SqlConnection connection = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "select * from assets where 0 = 1", connection);
            adapter.MissingMappingAction = MissingMappingAction.Passthrough;
            adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;

            SqlCommandBuilder cb = new SqlCommandBuilder(adapter);
            var insertCmd = cb.GetInsertCommand(true);
            insertCmd.Connection = connection;
            connection.Open();

            adapter.InsertCommand = insertCmd;
            adapter.InsertCommand.CommandText += "; set ? = SCOPE_IDENTITY()";
            adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.OutputParameters;

            var param = new SqlParameter("RowId", SqlDbType.Int);
            param.SourceColumn = "RowId";
            param.Direction = ParameterDirection.Output;
            adapter.InsertCommand.Parameters.Add(param);

            SqlTransaction transaction = connection.BeginTransaction();
            insertCmd.Transaction = transaction;
            try
            {
                assetsImported = adapter.Update(dataSet.Tables["Assets"]);
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction.Rollback();
                // Log an error
            }
            connection.Close();
        }

    The first thing that I noticed, besides the fact that the identity value is not making its way back into the DataSet, is that my change to add the scope_identity select statement to the insert command is not being executed. Looking at the query using Profiler, I do not see my addition to the insert command.

    Questions:

    1) Why is my addition to the insert command not making its way to the SQL being executed on the database?
    2) Is there a simpler way to have my DataSet refreshed with the identity values of the inserted rows?
    3) Should I use the OnRowUpdated callback to add my child records? My plan was to loop through the rows after the Update() call and add children as needed.

    Thank you in advance. Rick
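
    A likely explanation, offered as a guess rather than a certainty: SqlCommandBuilder can regenerate its commands while the adapter executes, discarding manual edits to CommandText. A sketch of the usual workaround is to copy the generated command into a standalone SqlCommand before modifying it (the RowId column name follows the question's code):

        // Sketch: detach the insert from the command builder so the
        // appended SCOPE_IDENTITY() select actually survives execution.
        SqlCommand insert = new SqlCommand(
            cb.GetInsertCommand(true).CommandText
                + "; SELECT SCOPE_IDENTITY() AS RowId",
            connection);
        foreach (SqlParameter p in cb.GetInsertCommand(true).Parameters)
        {
            // Parameters belong to one command at a time, hence the clone.
            insert.Parameters.Add((SqlParameter)((ICloneable)p).Clone());
        }
        // FirstReturnedRecord maps the selected row back onto the DataRow,
        // filling RowId without an explicit output parameter.
        insert.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
        adapter.InsertCommand = insert;
        // (Disposing the command builder afterwards keeps it from
        // regenerating and overriding this command.)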

    Read the article

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20-30 seconds to Commit 30K of data! This delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 megs, I see these huge delays. The database has 16 tables, averaging 10 columns each.

    One possible problem is that I'm storing a dozen or so IList members, but they are typically only 200 elements long. That is a recent addition to Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem.

    Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler - any recommendations for profilers that work well with SQLite?

    Here's the method that saves the data - it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging:

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }
                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }
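
    Before reaching for a profiler, a cheap first step (a sketch using the question's own session/transaction variables, nothing project-specific): time the flush and the commit separately, since NHibernate issues most of its SQL at flush time while SQLite pays its disk-sync cost at commit time; whichever phase dominates points at the culprit:

        // Sketch: split the timing so NHibernate work (Flush) and SQLite
        // durability work (Commit) can be blamed separately.
        var sw = System.Diagnostics.Stopwatch.StartNew();
        session.SaveOrUpdate(measurement);
        session.Flush(); // NHibernate issues its INSERT/UPDATE SQL here
        Debug.WriteLine("Flush took " + sw.ElapsedMilliseconds + " ms");

        sw = System.Diagnostics.Stopwatch.StartNew();
        transaction.Commit(); // SQLite syncs the journal to disk here
        Debug.WriteLine("Commit took " + sw.ElapsedMilliseconds + " ms");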

    Read the article

  • What's the fastest way to determine if a file adheres to a particular class's NSCoding implementation?

    - by Justin Searls
    Given: an application that accesses a directory of files: some plain text, some binary files that adhere to a particular NSCoding implementation, and perhaps other binary files it simply doesn't understand how to process. I want to be able to figure out which of the files in that directory adhere to my NSCoding class, and I'd prefer not to have to fall back on the naïve approach of loading the entirety of each file into memory and attempting to unarchive each one. Does anyone have an elegant approach or pattern for this problem?

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing, and I often arrive at a chicken-and-egg problem. I would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing asynchronously (i.e. writing to the event log), then how do I handle, or sometimes even detect, failures?

    I'm using Akka Actors, so processing is sequential for each event/message. I do not have any database at this time; instead I persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries all go against this model; you can consider it to be a cache.

    Example - creating a new user:

    1. Validate that the user does not exist in the model
    2. Persist the event to the journal
    3. Update the model (in memory)

    If step 3 breaks, I have still persisted my event, so I can replay it at a later date. If step 2 breaks, I can handle that gracefully as well. This is fine, but since step 2 is I/O-bound, I figured that I should do the I/O in a separate actor to free up the first actor for queries.

    Updating a user while allowing queries (A0 = front end/GUI actor, A1 = processor actor, A2 = I/O actor, E = event bus):

    1. (A0-E-A1) Event is published to update user 'U1'. Validate that user 'U1' exists in the model
    2. (A1-A2) Persist the event to the journal (separate actor)
    3. (A0-E-A1-A0) Query for user 'U1' profile
    4. (A2-A1) Event is now persisted; continue to update the model
    5. (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data)

    This is appealing, since queries can be processed while I/O is churning along at its own pace. But now I can cause myself all kinds of problems: two incompatible commands (delete and then update) could be persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and only then update the model.

    My aim is to keep the reasoning around my model simple (since an actor processes messages sequentially, single-threaded) but not be waiting on I/O-bound updates when querying. I get the feeling I'm modeling a database, which in itself might be the problem. If things are unclear, please write a comment.
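
    For contrast, a sketch of the strictly ordered variant the first list describes (C# for illustration only - the question's actual stack is Akka/Scala - and every name here is hypothetical): the in-memory model is mutated only after the journal append succeeds, so validation always sees fully persisted state, at the cost of queries waiting on I/O:

        using System;
        using System.Collections.Generic;

        class CreateUser { public string UserId; }            // hypothetical command
        class UserCreated { public string UserId; }           // hypothetical event
        class User { public string Id; }                      // hypothetical state
        interface IEventJournal { void Append(object evt); }  // hypothetical append-only log

        class UserCommandHandler
        {
            private readonly Dictionary<string, User> model = new Dictionary<string, User>();
            private readonly IEventJournal journal;

            public UserCommandHandler(IEventJournal journal) { this.journal = journal; }

            public void Handle(CreateUser cmd)
            {
                if (model.ContainsKey(cmd.UserId))            // 1. validate against the model
                    throw new InvalidOperationException("user already exists");
                journal.Append(new UserCreated { UserId = cmd.UserId }); // 2. persist (blocks)
                model[cmd.UserId] = new User { Id = cmd.UserId };        // 3. apply to the model
            }
        }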

    Read the article

  • Flex maps how-to examples

    - by alessandro ferrucci
    Hello, I've stumbled upon this Flash map: http://www.washingtonpost.com/wp-srv/special/nation/unemployment-by-county/ It looks like they used this map to construct the site: http://commons.wikimedia.org/wiki/File:USA_Counties_with_FIPS_and_names.svg I am curious what people have done, or any blogs that describe what can be done, with Flex and simple maps like this - not Google Maps-style maps, but simple all-in-memory maps like this one. It would be cool to see what Flex can do in terms of maps, and how. Thanks!

    Read the article

  • Explicitly disable MySQL query cache in some parts of a program

    - by jack
    In a Django project, some cron job programs are mainly used for administrative or analysis purposes, e.g. generating site usage stats, rotating the user activity log, etc. We probably do not want MySQL to cache queries in those programs, in order to save memory and improve query cache efficiency. Is it possible to turn off the MySQL query cache explicitly in those programs while keeping it enabled for other parts, including all of views.py?

    Read the article

  • How do I access a secure website within a SharePoint web part?

    - by Bill
    How do I access a secure website within a SharePoint web part? The following code works fine as a console application, but if you run it in a web part, you get an access violation:

        WebRequest request = WebRequest.Create("https://somesecuresite.com");
        WebResponse firstResponse = null;
        try
        {
            firstResponse = request.GetResponse();
        }
        catch (WebException ex)
        {
            writer.WriteLine("Error: " + ex.ToString());
            return;
        }

    If you access a non-secure site, it also works. Any ideas? The error is:

        Error: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at System.Net.UnsafeNclNativeMethods.NativePKI.CertVerifyCertificateChainPolicy(IntPtr policy, SafeFreeCertChain chainContext, ChainPolicyParameter& cpp, ChainPolicyStatus& ps)
           at System.Net.PolicyWrapper.VerifyChainPolicy(SafeFreeCertChain chainContext, ChainPolicyParameter& cpp)
           at System.Net.Security.SecureChannel.VerifyRemoteCertificate(RemoteCertValidationCallback remoteCertValidationCallback)
           at System.Net.Security.SslState.CompleteHandshake()
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()
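
    Since the crash happens inside the default certificate chain-policy check, one diagnostic worth trying (a sketch; bypassing validation is for testing only, never production) is to supply your own certificate validation callback before making the request:

        // Sketch: swap in a custom certificate-validation callback, since
        // the access violation occurs inside the default chain-policy
        // check. WARNING: returning true unconditionally disables
        // certificate validation - use only to confirm the diagnosis.
        ServicePointManager.ServerCertificateValidationCallback =
            delegate(object sender,
                     System.Security.Cryptography.X509Certificates.X509Certificate cert,
                     System.Security.Cryptography.X509Certificates.X509Chain chain,
                     System.Net.Security.SslPolicyErrors errors)
            {
                return true; // trust everything while diagnosing
            };

        WebRequest request = WebRequest.Create("https://somesecuresite.com");
        using (WebResponse response = request.GetResponse())
        {
            // If this now succeeds, the problem is the certificate chain
            // validation step running under the web part's trust level.
        }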

    Read the article

  • Efficiency questions

    - by rayman
    Hi, I have to manage XML documents and Strings in my app. In terms of efficiency and memory usage, will a collection like ArrayList be much more expensive than String[]? Also, I could store the content as a regular String or XML. Is working with XML also more expensive? (When I say expensive, I am referring to the use of system resources.) If there are differences, are they significant? Thanks, Ray.

    Read the article

  • Why is the software world full of status codes?

    - by David V McKay
    Why did programmers ever start using status codes? I mean, I guess I could imagine this might be useful back in the days when a text string was an expensive resource. WAYYY back then. But even after we had megabytes of memory to work with, we continued to use them. What possible advantage could there be to obfuscating the meaning of an error message or status message behind a status code?

    Read the article

  • How do I prove I should put a table of values in source code instead of a database table?

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, the data is read-only, updated about every 3-5 years (also requiring other source code changes), fits in memory, and can be keyed into via binary search (a tad faster than a db, but speed is not an issue).

    The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flat file, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad!

    The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter.

    However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.
    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems.

    OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE (instead of a flat file or other storage)? If you need:

    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / m-m is easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / developed & cached execution plans
    - Interleaved update/read
    - Referential integrity; avoiding redundant / missing / corrupt / out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools

    [Disadvantages:]

    - Important to get right the first time - professional design - but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, so a speed issue (best case vs. best design vs. local; even then, a separate process requires marshalling / network layers / inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flat file)

    WHY USE A FLAT FILE? If you only need:

    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only updating the record you're reading (fixed-length records only)
    - Too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / storage as a document by a novice - simple format
    - Low design learning curve but high cost later

    WHY USE IN-MEMORY TABLES (tables, arrays, etc.)? If you need:

    - Processing a single db/flat-file record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:

    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
    This will have a greater chance of eliciting the change that I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
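
    For what it's worth, a sketch of the in-source approach being argued for (all names are entirely hypothetical; in practice the table body would be auto-generated every 3-5 years along with the other source changes): a sorted static array searched with the framework's built-in binary search:

        // Sketch: auto-generated, read-only lookup table compiled into the
        // program. No ACID, no connection, no deployment drift between
        // platforms - the data ships with the code that uses it.
        public static class RateTable // hypothetical generated class
        {
            // Kept sorted by key so Array.BinarySearch works. This block
            // is what the code generator would emit.
            private static readonly int[] keys = { 101, 205, 999, 1432 };
            private static readonly decimal[] values = { 0.10m, 0.25m, 0.40m, 0.55m };

            public static bool TryLookup(int key, out decimal value)
            {
                int i = System.Array.BinarySearch(keys, key);
                if (i >= 0) { value = values[i]; return true; }
                value = 0m;
                return false;
            }
        }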

    Read the article
