Search Results

Search found 14113 results on 565 pages for 'memory stick'.

Page 100/565 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • iPhone Objective-C/Plain C Memory Management

    - by toc777
    Hi everyone, I understand Objective-C memory management, but I'm using Core Graphics functionality such as CGRect, CGPoint, CGImageRef, etc., which is written in plain C. My question is: how do I manage this memory, or is it already handled for me? According to the Apple documentation, if an Apple Objective-C method doesn't have copy, new or create in its name, the returned object is managed for you using autorelease. Is this true for the Core Graphics functions as well? (I guess they won't be using autorelease, but maybe something similar?) Thanks for taking the time to read this.
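
    A short hedged sketch of the usual rules (not from the question, and 'source' below stands for any existing UIImage): CGRect and CGPoint are plain C structs, so there is nothing to manage, while CGImageRef and the other CF-style types follow the Core Foundation Create/Copy rule rather than autorelease — anything returned by a function with Create or Copy in its name is yours to release.

        CGRect frame = CGRectMake(0.0f, 0.0f, 36.0f, 36.0f);      // CGRect is a plain C struct: nothing to release

        // A function with "Create" in its name transfers ownership to the caller (the CF Create/Copy rule).
        CGImageRef cropped = CGImageCreateWithImageInRect(source.CGImage, frame);
        if (cropped != NULL) {
            UIImage *thumb = [UIImage imageWithCGImage:cropped];  // UIImage keeps its own reference
            CGImageRelease(cropped);                              // balance the Create; no autorelease involved
            // ... use thumb ...
        }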

    Read the article

  • Out of memory error while using clusterdata in MATLAB

    - by Hossein
    Hi, I am trying to cluster a matrix (size: 20057x2): T = clusterdata(X,cutoff); but I get this error: ??? Error using ==> pdistmex Out of memory. Type HELP MEMORY for your options. Error in ==> pdist at 211 Y = pdistmex(X',dist,additionalArg); Error in ==> linkage at 139 Z = linkagemex(Y,method,pdistArg); Error in ==> clusterdata at 88 Z = linkage(X,linkageargs{1},pdistargs); Error in ==> kmeansTest at 2 T = clusterdata(X,1); Can someone help me? I have 4GB of RAM, but I think the problem is from somewhere else.
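
    One hedged way around this, with illustrative values: clusterdata goes through pdist, which materialises all pairwise distances (about 20057*20056/2 doubles here, roughly 1.6 GB), so either switch to an algorithm that never builds that vector, such as kmeans, or cluster a subsample.

        % Hedged sketch, not the original code: clusterdata calls pdist, which builds the full
        % pairwise distance vector -- for 20057 points that is 20057*20056/2 doubles, roughly
        % 1.6 GB, so the failure is expected regardless of how much RAM the machine has.
        nclusters = 10;                                  % illustrative value
        T = kmeans(X, nclusters);                        % kmeans never forms the pairwise matrix

        % Or run hierarchical clustering on a random subsample first (sizes are illustrative):
        idx  = randsample(size(X,1), 2000);
        Tsub = clusterdata(X(idx,:), cutoff);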

    Read the article

  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this plus a dozen other posts/blogs. I have an ASP.Net app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax void Application_Start(object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n"); } protected void Application_OnEnd(Object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n"); } void Application_End(object sender, EventArgs e) { HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember("_theRuntime", BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null); if (runtime == null) return; string shutDownMessage = (string)runtime.GetType().InvokeMember("_shutDownMessage", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null); string shutDownStack = (string)runtime.GetType().InvokeMember("_shutDownStack", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null); ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason; NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug(String.Format("\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n", shutDownMessage, shutDownStack, shutdownReason)); } void Application_Error(object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nApplication_Error\r\n\r\n"); } Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error is ever fired during these spontaneous restarts. I know they are working because there are entries for touching the web.config or /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException which is caught in Application_Error. We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .Net, not just our App's pool, correct? We've also tried var oPerfCounter = new PerformanceCounter(); oPerfCounter.CategoryName = "Process"; oPerfCounter.CounterName = "Virtual Bytes"; oPerfCounter.InstanceName = "iisExpress"; logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes"); but don't have permission in shared hosting. I've monitored the app on a dev server with the same requests that caused the recycles in production with ANTS Memory Profiler attached and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort. My questions are these: How can I effectively monitor memory usage in shared hosting to tell how much my application is consuming prior to an application recycle? Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called? How else can I determine what is causing these recycles? Thanks.
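
    A hedged suggestion for the first question, not from the original post: shared hosts usually block the performance counters, but reading the worker process's own figures through System.Diagnostics.Process normally still works and can be logged next to the existing NLog entries.

        // Hedged sketch, not from the post: a snapshot of the worker process's own memory,
        // which does not require the Process performance counter permissions.
        using System;
        using System.Diagnostics;

        public static class MemorySnapshot
        {
            public static string Take()
            {
                using (Process proc = Process.GetCurrentProcess())
                {
                    return String.Format(
                        "VirtualBytes={0:N0} PrivateBytes={1:N0} WorkingSet={2:N0} GCHeap={3:N0}",
                        proc.VirtualMemorySize64,     // the figure most hosts cap per worker process
                        proc.PrivateMemorySize64,
                        proc.WorkingSet64,
                        GC.GetTotalMemory(false));    // managed heap only, as noted in the question
                }
            }
        }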

    Read the article

  • Inspect in memory hsqldb while debugging

    - by Albert
    We're using hsqldb in memory to run JUnit tests which operate against a database. The db is set up before running each test via a Spring configuration. All works fine. Now when a test fails it can be convenient to be able to inspect the values in the in-memory database. Is this possible? If so, how? Our url is: jdbc.url=jdbc:hsqldb:mem:testdb;sql.enforce_strict_size=true The database is destroyed after each test. But while the debugger is paused the database should still be alive. I've tried connecting with the hsqldb DatabaseManager. That works, but I don't see any tables or data. Any help is highly appreciated!
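
    A hedged sketch of one way to do this: a mem: database is only visible inside the JVM that created it, so an externally started DatabaseManager silently opens a fresh, empty testdb. Launching the manager from within the test process (for example, evaluated at a breakpoint) connects to the database the test actually populated.

        // Hedged sketch: start the Swing manager from inside the test JVM, e.g. from a
        // breakpoint's "evaluate expression", so it sees the same in-memory testdb.
        org.hsqldb.util.DatabaseManagerSwing.main(new String[] {
            "--url",      "jdbc:hsqldb:mem:testdb",
            "--user",     "sa",
            "--password", "",
            "--noexit"    // do not call System.exit() when the manager window closes
        });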

    Read the article

  • What are pinned objects?

    - by sagie
    Hi. I am trying to find a memory leak using ANTS Memory Profiler, and I've encountered a new term: pinned objects. Can someone give me a good, simple explanation of what these objects are, how I can pin/unpin objects, and how to detect who pinned them? Thanks
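
    A hedged illustration, not from the question: pinned objects are objects the garbage collector is not allowed to move, usually because native code holds their address; the two common sources are the fixed statement and GCHandle.Alloc with GCHandleType.Pinned.

        using System;
        using System.Runtime.InteropServices;

        class PinningDemo
        {
            static void Main()
            {
                byte[] buffer = new byte[4096];

                // Explicit pinning with GCHandle: the GC cannot move 'buffer' until Free()
                // is called -- a long-lived pin like this is what a profiler flags.
                GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
                try
                {
                    IntPtr address = handle.AddrOfPinnedObject();
                    // hand 'address' to unmanaged code here
                }
                finally
                {
                    handle.Free();   // unpin; forgetting this keeps the object pinned (and alive)
                }

                // The 'fixed' statement (in an unsafe block) is the other common source of pins,
                // but those only last for the duration of the block.
            }
        }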

    Read the article

  • Branchless memory manager?

    - by Richard Fabian
    Has anyone thought about how to write a memory manager (in C++) that is completely branch-free? I've written a pool, a stack, a queue, and a linked list (allocating from the pool), but I am wondering how plausible it is to write a branch-free general memory manager. This is all to help make a really reusable framework for doing solid concurrent, in-order CPU, and cache-friendly development. Edit: by branchless I mean without doing direct or indirect function calls, and without using ifs. I've been thinking that I can probably implement something that first changes the requested size to zero for false calls, but haven't really got much more than that. I feel that it's not impossible, but the other aspect of this exercise is then profiling it on said "unfriendly" processors to see if it's worth trying as hard as this to avoid branching.
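
    A speculative sketch along the lines hinted at above (sizes and names are illustrative): map the "pool exhausted" case to a harmless dummy slot with arithmetic rather than an if; whether the comparison actually compiles to a conditional move instead of a branch still has to be verified on the target compiler and CPU.

        #include <cstddef>

        // Speculative sketch, not the poster's code: a fixed-size pool whose allocation path
        // has no if/else and no function call.
        template <std::size_t BlockSize, std::size_t BlockCount>
        class BranchlessPool {
            unsigned char storage_[BlockSize * (BlockCount + 1)]; // slot 0 is the dummy slot
            std::size_t   next_;                                  // next free slot index
        public:
            BranchlessPool() : next_(1) {}

            void* allocate() {
                // ok is 1 while slots remain, 0 afterwards.
                std::size_t ok   = static_cast<std::size_t>(next_ <= BlockCount);
                std::size_t slot = next_ * ok;        // 0 (the dummy slot) once exhausted
                next_ += ok;                          // only advances while slots remain
                return storage_ + slot * BlockSize;
            }
        };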

    Read the article

  • fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need the fastest possible performance from XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import works only with files). Alternatively, I can cache many XslCompiledTransform instances, one for each template. Is that reasonable? Are there other ways? Which is best? What other tips are there for improving the performance of the MS XSLT processor?
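
    A hedged sketch of one approach: cache one compiled transform per stylesheet (Transform on a loaded XslCompiledTransform can be shared across requests) and pass a resolver to Load so that xsl:import and xsl:include are honoured; a custom XmlResolver could serve the imported stylesheets from memory instead of disk.

        using System.Collections.Generic;
        using System.Xml;
        using System.Xml.Xsl;

        static class XsltCache
        {
            static readonly Dictionary<string, XslCompiledTransform> cache =
                new Dictionary<string, XslCompiledTransform>();
            static readonly object sync = new object();

            public static XslCompiledTransform Get(string stylesheetPath)
            {
                lock (sync)
                {
                    XslCompiledTransform xslt;
                    if (!cache.TryGetValue(stylesheetPath, out xslt))
                    {
                        xslt = new XslCompiledTransform();
                        // The resolver is what makes xsl:import / xsl:include work here.
                        xslt.Load(stylesheetPath, XsltSettings.Default, new XmlUrlResolver());
                        cache[stylesheetPath] = xslt;
                    }
                    return xslt;   // Transform() on a loaded instance is thread-safe
                }
            }
        }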

    Read the article

  • In-memory Database in Excel

    - by user329174
    Hello, I am looking for a way to import a datatable from Access into an Excel variable and then run queries through this variable to speed up the process. I am trying to migrate from C# .NET where I read a data table from an Access database into memory and then used LINQ to query this dataset. It is MUCH faster than how I have it currently coded in VBA where I must make lots of calls to the actual database, which is slow. I have seen the QueryTable mentioned, but it appears that this requires pasting the data into the Excel sheet. I would like to keep everything in memory and minimize the interaction between the Excel Sheet and the VBA code as much as possible. I wish we didn't need to use Excel+VBA to do this, but we're kind of stuck with that for now. Thanks for the help!
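
    A hedged sketch of one common approach (the path, table and field names are illustrative): pull the Access table into a client-side ADO recordset once, disconnect it, and then query the in-memory copy with Filter or Find instead of hitting the database repeatedly.

        ' Hedged sketch: disconnected ADO recordset kept entirely in memory.
        Dim cn As Object, rs As Object
        Set cn = CreateObject("ADODB.Connection")
        cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\mydb.mdb"

        Set rs = CreateObject("ADODB.Recordset")
        rs.CursorLocation = 3                       ' adUseClient: build the recordset locally
        rs.Open "SELECT * FROM Concepts", cn, 3, 4  ' adOpenStatic, adLockBatchOptimistic
        Set rs.ActiveConnection = Nothing           ' disconnect: data now lives in memory
        cn.Close

        rs.Filter = "Category = 'Software Company'" ' in-memory query, no database round trip
        Do While Not rs.EOF
            Debug.Print rs!Name
            rs.MoveNext
        Loop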

    Read the article

  • Are .dll files loaded once for every program or once for all programs?

    - by Nilbert
    I have a simple small question which someone who knows will be able to answer easily, I searched google but couldn't find the answer. There are many programs running at once on a computer, and my question is: when a program loads a DLL, does it actually load the DLL file or does it find the memory in which the DLL is already loaded? For example, is ws2_32.dll (winsock 2) loaded for every program that uses winsock, or is it loaded once and all programs that use it use the same memory addresses to call the functions?

    Read the article

  • NSData release is not reclaiming memory

    - by ctpenrose
    iPhoneOS 3.2 I use NSKeyedUnarchiver's unarchiveObjectWithFile: to load a custom object that contains a single large NSData and another much smaller object. The dealloc method in my custom object gets called, the NSData object is released, its retainCount == 1 just before. Physical memory does not decrement by any amount, let alone a fraction of the NSData size, and with repetition memory warnings are reliably generated: I have tested until I actually received level 2 warnings. =( NSString *archivePath = [[[NSBundle mainBundle] pathForResource:@"lingering" ofType:@"data"] retain]; lingeringDataContainer = [[NSKeyedUnarchiver unarchiveObjectWithFile:archivePath] retain]; [archivePath release]; [lingeringDataContainer release]; and now the dealloc.... - (void) dealloc { [releasingObject release]; [lingeringData release]; [super dealloc]; } Before release: (gdb) p (int) [(NSData *) lingeringData retainCount] $1 = 1 After: (gdb) p (int) [(NSData *) lingeringData retainCount] Target does not respond to this message selector.

    Read the article

  • Recommendations for an in memory database vs thread safe data structures

    - by yx
    TLDR: What are the pros/cons of using an in-memory database vs locks and concurrent data structures? I am currently working on an application that has many (possibly remote) displays that collect live data from multiple data sources and render them on screen in real time. One of the other developers has suggested the use of an in-memory database instead of doing it the standard way our other systems behave, which is to use concurrent hashmaps, queues, arrays, and other objects to store the graphical objects and to handle them safely with locks if necessary. His argument is that the DB will lessen the need to worry about concurrency since it will handle read/write locks automatically, and also the DB will offer an easier way to structure the data into as many tables as we need instead of having to create hashmaps of hashmaps of lists, etc., and keep track of it all. I do not have much DB experience myself so I am asking fellow SO users what experiences they have had and what are the pros & cons of inserting the DB into the system?

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an html input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32 bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation for this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
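
    A hedged sketch of the sorted-set approach described above, with illustrative names; a compressed (radix) trie or ternary search tree mainly helps by cutting the per-node object references that made the plain trie bloat.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.NavigableSet;
        import java.util.TreeSet;

        // Hedged sketch: log(n) positioning via a sorted set, then walk the tail set
        // until the prefix no longer matches.
        class PrefixIndex {
            private final NavigableSet<String> concepts = new TreeSet<String>();

            void add(String concept) {
                concepts.add(concept.toLowerCase());
            }

            List<String> complete(String prefix, int limit) {
                prefix = prefix.toLowerCase();
                List<String> out = new ArrayList<String>();
                for (String s : concepts.tailSet(prefix, true)) {   // starts at the first candidate
                    if (!s.startsWith(prefix) || out.size() >= limit) break;
                    out.add(s);
                }
                return out;
            }
        }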

    Read the article

  • How efficient is PHP's substr?

    - by zildjohn01
    I'm writing a parser in PHP which must be able to handle large in-memory strings, so this is a somewhat important issue. (i.e., please don't flame me about premature optimization.) How does the substr function work? Does it make a second copy of the string data in memory, or does it reference the original? Should I worry about calling, for example, $str = substr($str, 1); in a loop?
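
    A hedged way to answer this empirically (as far as I know, substr() in stock PHP returns a new string rather than a shared slice of the original, so trimming one character at a time in a loop is quadratic):

        // Hedged sketch: measure the allocation directly.
        $str    = str_repeat('a', 5 * 1024 * 1024);       // 5 MB test string
        $before = memory_get_usage();
        $copy   = substr($str, 1);                        // allocates a second ~5 MB buffer
        printf("substr allocated ~%d bytes\n", memory_get_usage() - $before);

        // For a parser, advancing an offset avoids the large copies entirely:
        $pos   = 0;
        $chunk = substr($str, $pos, 64);                  // small copies only
        $pos  += 64;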

    Read the article

  • [MFC] Combining 2 memory DCs?

    - by OverTheEdge
    I'm writing a control where there's a lot of custom drawing going on. Because of this I need to trim down the number of screen writes. Currently there is only one memory DC that is used to write to the screen, so as to avoid flicker when the control is redrawn. I want to know if it is possible to use 2 or more memory DCs to write updates independently and then bitblt them to the screen. This way the need to render unchanged parts of the screen is minimized. Thanks in advance, the_Saint
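
    A hedged sketch of the idea (CMyControl, m_staticDC, m_dynamicDC and m_dynRect are assumed names): keep one memory DC per layer, redraw each only when its own content changes, and have OnPaint composite the cached layers into a back buffer before a single blit to the screen.

        // Hedged sketch: composite two independently maintained memory DCs, one screen write.
        void CMyControl::OnPaint()
        {
            CPaintDC dc(this);
            CRect rc;
            GetClientRect(&rc);

            CDC backDC;                                   // compositing surface
            backDC.CreateCompatibleDC(&dc);
            CBitmap backBmp;
            backBmp.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
            CBitmap* pOldBmp = backDC.SelectObject(&backBmp);

            // background layer over the whole client area, dynamic layer only over its own rect
            backDC.BitBlt(0, 0, rc.Width(), rc.Height(), &m_staticDC, 0, 0, SRCCOPY);
            backDC.BitBlt(m_dynRect.left, m_dynRect.top, m_dynRect.Width(), m_dynRect.Height(),
                          &m_dynamicDC, 0, 0, SRCCOPY);

            // single flicker-free write to the screen
            dc.BitBlt(0, 0, rc.Width(), rc.Height(), &backDC, 0, 0, SRCCOPY);

            backDC.SelectObject(pOldBmp);
        }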

    Read the article

  • Out-Of-Memory while doing Core Data migration

    - by Kamchatka
    Hello, I'm migrating a Core Data model between two versions of an application. I was storing binary data as blobs in the previous version and I want to take them out of the blobs for performance. My issue is that during the migration it seems that Core Data loads everything into memory, which leads to Low Memory Warnings and then to my app being killed. The Apple documentation suggests the following: http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmCustomizingTheProcess.html#//apple_ref/doc/uid/TP40005510-SW9 However, it seems to rely on the large objects having a different mapping applied to them. In my case, all the objects are basically the same and the same mapping has to be applied to each of them. I don't see in this case how I could apply their technique. How should I handle a migration with very large objects?

    Read the article

  • Memory consumption of resource manager

    - by Quang Anh
    I'm writing a resource manager, which is required to be fast and to have a small memory footprint. For example, I have a resource class: class Abc { string m_name; string m_path; string handle; void SomeFunctions(); }; And so on. Now I create a List and add 5000 items to it. How much memory will it consume? One more question: Can I find items based on the handle number only, which is the int part of the Tuple?

    Read the article

  • 'Out of Memory exception' in sql server 2005 xml column

    - by Raghuraman
    Hi All, I am developing a Windows Forms application and am using a SQL Server 2005 database as my backend. I have an XML column in my database. I am using the UltraWinGrid control in my application. I obtain the XML of the dataset which is bound to my UltraWinGrid control and pass that as a parameter value to the stored procedure, where I am inserting this value into the XML column which I specified. The columns in my grid are dynamic and hence there can be any number of columns in my grid. I got an 'out of memory' exception in the dataset.GetXml() statement, since there were many columns I believe. So, what I did is that I used the dataset.WriteXml() method and stored all the XML contents into an XML file, loaded the XML file into an XmlDocument object and then passed the XmlNodeReader as the value to the stored procedure parameter. Now, while executing the stored procedure I am getting the same 'out of memory' exception. How could I resolve this issue?
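
    A hedged sketch with illustrative procedure, parameter and file names: stream the WriteXml output from disk into the xml parameter via SqlXml and XmlReader, so that neither GetXml() nor a full XmlDocument ever has to exist in memory.

        using System.Data;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using System.IO;
        using System.Xml;

        static class GridXmlSaver
        {
            // Hedged sketch: "dbo.SaveGridXml", "@gridXml" and "grid.xml" are placeholders.
            public static void Save(DataSet gridData, SqlConnection connection)
            {
                gridData.WriteXml("grid.xml");                         // spill once to disk
                using (SqlCommand cmd = new SqlCommand("dbo.SaveGridXml", connection))
                using (FileStream stream = File.OpenRead("grid.xml"))
                using (XmlReader reader = XmlReader.Create(stream))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@gridXml", SqlDbType.Xml).Value = new SqlXml(reader);
                    cmd.ExecuteNonQuery();
                }
            }
        }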

    Read the article

  • Invalid Memory Access for JavaFX ScrollBar

    - by Mike Caron
    I created the following JavaFX script, which when run, generates an Invalid memory access. What is it about javafx.scene.control.ScrollBar that is causing a memory failure? Stage { title: "Scroll View" scene: Scene { content: [ ScrollBar { min: 0 max: 100 value: 0 blockIncrement: 10 vertical: false } ] } resizable: false } I'm using whatever JavaFX (at least 1.2) that comes with NetBeans 6.8: Product Version: NetBeans IDE 6.8 (Build 200912041610) Java: 1.6.0_17; Java HotSpot(TM) 64-Bit Server VM 14.3-b01-101 System: Mac OS X version 10.6.2 running on x86_64; MacRoman; en_US (nb)

    Read the article

  • Invalid Memory Access for JavaFX ScrollBar on Snow-Leopard

    - by Mike Caron
    I created the following JavaFX script, which when run, generates an Invalid memory access on Snow-Leopard. What is it about javafx.scene.control.ScrollBar that is causing a memory failure? Stage { title: "Scroll View" scene: Scene { content: [ ScrollBar { min: 0 max: 100 value: 0 blockIncrement: 10 vertical: false } ] } resizable: false } I'm using whatever JavaFX (at least 1.2) that comes with NetBeans 6.8: Product Version: NetBeans IDE 6.8 (Build 200912041610) Java: 1.6.0_17; Java HotSpot(TM) 64-Bit Server VM 14.3-b01-101 System: Mac OS X version 10.6.2 running on x86_64; MacRoman; en_US (nb)

    Read the article

  • Memory leak while using emoticons on CRichEditCtrl

    - by Jorg B Jorge
    I'm developing a text editor class (for a chat application) based on CRichEditCtrl (MFC) with emoticon support. After I load the emoticon's bitmap, I use the function OleCreateStaticFromData to insert it into the CRichEditCtrl. After that I just delete the bitmap object allocated by myself. I can verify (using the GDIView utility) that all resources I allocate have been properly released. This works perfectly: the bitmap (emoticon) is drawn on the CRichEditCtrl window and is handled just like a character. My problem is that I don't know how to deallocate the (internal) memory allocated by OleCreateStaticFromData to manage the bitmap (emoticon). The memory allocated for any emoticon used is never released, even if I delete the CRichEditCtrl object. I'd like to know how to fix that issue. Is this an MFC issue, or am I doing something wrong? Thanks.
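
    A hedged sketch of the cleanup pattern that usually matters here (the creation of the object, storage and client site is elided): IRichEditOle::InsertObject takes its own references, so every interface pointer gathered while building the REOBJECT, including the IOleObject returned by OleCreateStaticFromData, should be released afterwards, otherwise each inserted emoticon keeps its OLE wrapper alive for the life of the process.

        #include <windows.h>
        #include <richole.h>

        // Hedged sketch: insert the emoticon, then drop our references; the control holds its own.
        void InsertAndRelease(IRichEditOle* pRichEditOle, IOleObject* pOleObject,
                              IOleClientSite* pClientSite, IStorage* pStorage)
        {
            REOBJECT reobj;
            ZeroMemory(&reobj, sizeof(reobj));
            reobj.cbStruct = sizeof(REOBJECT);
            reobj.cp       = REO_CP_SELECTION;
            reobj.poleobj  = pOleObject;              // from OleCreateStaticFromData
            reobj.polesite = pClientSite;             // from IRichEditOle::GetClientSite
            reobj.pstg     = pStorage;                // from StgCreateDocfile / CreateStorage
            reobj.dvaspect = DVASPECT_CONTENT;
            pOleObject->GetUserClassID(&reobj.clsid);

            pRichEditOle->InsertObject(&reobj);

            pOleObject->Release();
            pClientSite->Release();
            pStorage->Release();
            pRichEditOle->Release();                  // obtained earlier via GetIRichEditOle()
        }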

    Read the article

  • "Out of Memory" error in Lotus Notes automation from VBA

    - by PowerUser
    This VBA function sporadically fails with a Notes automation error "Run-Time Error '7' Out of Memory". Naturally, when I try to manually reproduce it, everything runs fine. Function ToGMT(ByVal X As Date) As Date Static NtSession As NotesSession If NtSession Is Nothing Then Set NtSession = New NotesSession NtSession.Initialize End If (do stuff) End function To put this in context, this VBA function is being called by an Access query, 3-4 times per record, with 20,000 records. For performance reasons, the NotesSession has been made static. Any ideas why it is sporadically giving an out-of-memory error? (Also, I'm initiating the NotesSession just so I can convert a datetime to GMT using Lotus's rules. If you know a better way, I'm listening).

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which is returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.Xml.BufferBuilder.AddBuffer() at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count) at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count) at System.Xml.XmlTextReaderImpl.ParseText() at System.Xml.XmlTextReaderImpl.ParseElementContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlTextReader.Read() at System.Xml.XmlReader.ReadElementString() at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse() at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check to StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself: Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this to happen. Some of the times it fails, it succeeded at that stage previously. Transfer smaller amounts Not an option, this is a third party web service over which I have no control (or at least it would take a long time to resolve, in the meantime I still have a problem) Can you do something to the LOH to make it less likely to fail? ... now this is most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures). Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory. Question is: any advice or suggestions for things to try?
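
    A hedged sketch of one direction to try, with a placeholder URL and element name: fetch the oversized responses as a raw stream and parse them with XmlReader, so the payload never needs to exist as one contiguous string on the Large Object Heap.

        using System.Net;
        using System.Xml;

        static class MarketDataStreamReader
        {
            // Hedged sketch: "quote" and the url are placeholders, not the real service contract.
            public static int CountQuotes(string url)
            {
                int count = 0;
                WebRequest request = WebRequest.Create(url);
                using (WebResponse response = request.GetResponse())
                using (XmlReader reader = XmlReader.Create(response.GetResponseStream()))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "quote")
                        {
                            count++;                  // handle one element at a time
                        }
                    }
                }
                return count;
            }
        }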

    Read the article

  • Memory allocation and release for UIImage in iPhone?

    - by rkbang
    Hello all, I am using the following code on iPhone to get a smaller cropped image: - (UIImage*) getSmallImage:(UIImage*) img { CGSize size = img.size; CGFloat ratio = 0; if (size.width < size.height) { ratio = 36 / size.width; } else { ratio = 36 / size.height; } CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height); UIGraphicsBeginImageContext(rect.size); [img drawInRect:rect]; UIImage *tempImg = [UIGraphicsGetImageFromCurrentImageContext() retain]; UIGraphicsEndImageContext(); return [tempImg autorelease]; } - (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect { //create a context to do our clipping in UIGraphicsBeginImageContext(rect.size); CGContextRef currentContext = UIGraphicsGetCurrentContext(); //create a rect with the size we want to crop the image to //the X and Y here are zero so we start at the beginning of our //newly created context CGFloat X = (imageToCrop.size.width - rect.size.width)/2; CGFloat Y = (imageToCrop.size.height - rect.size.height)/2; CGRect clippedRect = CGRectMake(X, Y, rect.size.width, rect.size.height); //CGContextClipToRect( currentContext, clippedRect); //create a rect equivalent to the full size of the image //offset the rect by the X and Y we want to start the crop //from in order to cut off anything before them CGRect drawRect = CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height); CGContextTranslateCTM(currentContext, 0.0, drawRect.size.height); CGContextScaleCTM(currentContext, 1.0, -1.0); //draw the image to our clipped context using our offset rect //CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage); CGImageRef tmp = CGImageCreateWithImageInRect(imageToCrop.CGImage, clippedRect); //pull the image from our cropped context UIImage *cropped = [UIImage imageWithCGImage:tmp];//UIGraphicsGetImageFromCurrentImageContext(); CGImageRelease(tmp); //pop the context to get back to the default UIGraphicsEndImageContext(); //Note: this is autoreleased*/ return cropped; } I am using the following line of code in cellForRowAtIndexPath to update the image of the cell: cell.img.image = [self imageByCropping:[self getSmallImage:[UIImage imageNamed:@"goal_image.png"]] toRect:CGRectMake(0, 0, 36, 36)]; Now when I add this table view and pop it from the navigation controller, I see a memory hike. I see no leaks, but memory keeps climbing. Please note that the image changes for each row, and I am creating the controller using lazy initialization, that is, I create/alloc it whenever I need it. I have seen many people on the internet facing the same issue, but very rarely good solutions. I have multiple views using the same approach, and I see memory rise by almost 4MB within 20-25 view transitions. What is a good solution to resolve this issue? Thanks.
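
    A hedged thing to check, using the file name from the question and the assumed cell.img outlet: imageNamed: caches decoded images for the lifetime of the app, so this pattern can look like unbounded growth without any leak; imageWithContentsOfFile: bypasses that cache.

        // Hedged sketch: load the source image without the UIKit image cache.
        NSString *path  = [[NSBundle mainBundle] pathForResource:@"goal_image" ofType:@"png"];
        UIImage *source = [UIImage imageWithContentsOfFile:path];     // not cached by UIKit
        cell.img.image  = [self imageByCropping:[self getSmallImage:source]
                                         toRect:CGRectMake(0, 0, 36, 36)];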

    Read the article

  • php gzip xml file (53MB) causes Out of memory error

    - by ntan
    Hi, I have a 53 MB XML file that I want to gzip. The code below gzips it: $gzFile = "my.gz"; $data = IMPLODE("", FILE($filename)); $gzdata = GZENCODE($data, 9); //open gz -- 'w9' is highest compression $fp = gzopen ($gzFile, 'w9'); //loop through array and write each line into the compressed file gzwrite ($fp, $gzdata); //close the file gzclose ($fp); This causes PHP Fatal error: Out of memory (allocated 70516736) (tried to allocate 24 bytes) Does anyone have any suggestions? I have already increased the memory limit in php.ini
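
    A hedged sketch of the usual fix: compress the file in fixed-size chunks so PHP never holds the entire 53 MB, plus its compressed copy, in variables at once.

        // Hedged sketch: stream the file into the gzip archive chunk by chunk.
        $in  = fopen($filename, 'rb');
        $out = gzopen('my.gz', 'w9');
        while (!feof($in)) {
            gzwrite($out, fread($in, 512 * 1024));   // 512 KB at a time
        }
        gzclose($out);
        fclose($in);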

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >