Search Results

Search found 13619 results on 545 pages for 'memory mapped'.

Page 40 of 545

  • Memory allocation for collections in .NET

    - by Yogendra
    This might be a dupe; I did not find enough information on this. I was discussing memory allocation for collections in .NET. Where is the memory for the elements of a collection allocated?

        List<int> myList = new List<int>();

    The variable myList is allocated on the stack and references the List object created on the heap. The question is: when int elements are added to myList, where are they created? Can anyone point me in the right direction?
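
    For reference, List<T> keeps its elements in an internal T[] array on the managed heap, so value-type elements like int live inline inside that array rather than as separate heap objects. A minimal sketch of the mechanics:

        // The myList variable (a reference) lives on the stack; the List<int>
        // object and its internal int[] buffer live on the managed heap.
        List<int> myList = new List<int>();

        // 42 is copied by value into the internal int[]; no boxing and no
        // separate per-element allocation occurs for value types.
        myList.Add(42);

        // When the buffer fills up, List<T> allocates a larger int[] on the
        // heap and copies the existing elements across.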

    Read the article

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom and move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call; the other objects on the stack get binned. The idea is to reduce the web service calls to only those appropriate, and only one at a time.

    Right, with the long story out of the way (for which I apologise), here is my memory management problem: the graph has persistent (and suitably locked) NSDate* objects for the currently displayed start and end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDate*s on the 'touches ended' event.

    If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from the destructor, no memory leak occurs for one object on the stack being released, but memory leaks occur if there is more than one object on the stack.

    I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference whether I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone who can help, or even suggest anything I may have missed.
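
    An EXC_BAD_ACCESS inside dealloc when releasing an ivar almost always means the object was released one more time than it was retained, somewhere outside dealloc rather than in it. A sketch of the ownership pattern that makes the call objects self-contained (class and ivar names are hypothetical, not taken from the question):

        // WebServiceCall owns its dates: copy on the way in, release in dealloc.
        - (id)initWithStart:(NSDate *)start end:(NSDate *)end {
            if ((self = [super init])) {
                startDate = [start copy];   // take ownership exactly once
                endDate   = [end copy];
            }
            return self;
        }

        - (void)dealloc {
            [startDate release];            // give ownership back exactly once
            [endDate release];
            [super dealloc];
        }

    With this in place, the graph can release and reallocate its own NSDate* objects on 'touches ended' without affecting in-flight call objects, and removeAllObjects only balances the array's own retain. If the crash persists, the extra release is coming from somewhere else, and running with NSZombieEnabled will usually name the over-released object.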

    Read the article

  • Memory management in iOS

    - by angrest
    Looks like I did not understand memory management in Objective-C... sigh. I have the following code (note that in my case, placemark.thoroughfare and placemark.subThoroughfare are both filled with valid data, so both if-conditions will be TRUE):

        if (placemark.thoroughfare) {
            [item.place release];
            item.place = [NSString stringWithFormat:@"%@ ", placemark.thoroughfare];
        } else {
            [item.place release];
            item.place = @"Unknown Place";
        }

        if (placemark.thoroughfare && placemark.subThoroughfare) {
            // *** problem is here ***
            [item.place release];
            item.place = [NSString stringWithFormat:@"%@ %@",
                          placemark.thoroughfare, placemark.subThoroughfare];
        }

    If I do not release item.place at the marked location in the code, Instruments finds a memory leak there. If I do, the program crashes as soon as I try to access item.place outside the offending method. Any ideas?
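
    The crash pattern here is consistent with item.place being a plain, non-retaining reference: stringWithFormat: returns an autoreleased string, so once the surrounding autorelease pool drains, item.place points at a dead object, and the manual release beforehand over-releases whatever was assigned a few lines earlier. A sketch of the usual fix, assuming item can declare place as a copy property so the synthesized setter manages the release/copy pairing:

        // In the item's @interface (hypothetical):
        @property (nonatomic, copy) NSString *place;

        // Then no manual releases are needed at the assignment sites:
        if (placemark.thoroughfare && placemark.subThoroughfare) {
            item.place = [NSString stringWithFormat:@"%@ %@",
                          placemark.thoroughfare, placemark.subThoroughfare];
        } else if (placemark.thoroughfare) {
            item.place = [NSString stringWithFormat:@"%@ ", placemark.thoroughfare];
        } else {
            item.place = @"Unknown Place";
        }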

    Read the article

  • finding out memory allocation hotspots in java

    - by Zamir
    Our GC is working hard and we have some pauses that we want to decrease. We have some memory allocation issues that we want to solve before, or while, we are tweaking the actual JVM GC arguments. I would like to know which objects are making the GC sweat: Is there a way to know which objects are evacuated every time the GC runs? Is there a way to know which objects are moved between areas every time the GC runs? Is there a way to know which objects are in the Eden area? I am working extensively with JProfiler and Memory Analyzer. I would like to get this information on a running application in my staging environment.
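
    For a running JVM of this era, the stock HotSpot tooling can answer most of these before reaching for a profiler; a sketch of commonly used options (flag spellings vary by JVM version, so treat these as a starting point):

        # GC logging: prints each collection with generation sizes, plus the
        # survivor-space age table, i.e. which object ages are being moved
        # between areas and when they get promoted to the old generation.
        java -verbose:gc -XX:+PrintGCDetails -XX:+PrintTenuringDistribution MyApp

        # Histogram of live objects by class on a running process, handy for
        # spotting which types dominate the heap in a staging environment:
        jmap -histo:live <pid>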

    Read the article

  • How is inheritance implemented at the memory level?

    - by cambr
    Suppose I have:

        class A {
        public:
            void print() { cout << "A"; }
        };

        class B : public A {
        public:
            void print() { cout << "B"; }
        };

        class C : public A { };

    How is inheritance implemented at the memory level? Does C copy the print() code to itself, or does it have a pointer to it that points somewhere into the A part of the code? How does the same thing happen when we override the previous definition, for example in B (at the memory level)?
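
    In the usual implementation model, code is never copied per class or per object. With the non-virtual print() above, the compiler resolves each call at compile time from the static type, so C simply calls the one shared A::print function; nothing inheritance-related is stored in the object at all. Only virtual functions involve a per-class table (the vtable) that each object carries a pointer to; a sketch of that model:

        #include <iostream>
        using std::cout;

        class A {
        public:
            virtual void print() { cout << "A"; }  // one shared copy of the code;
        };                                         // A's vtable slot points at it

        class B : public A {
        public:
            void print() { cout << "B"; }          // B's vtable slot is overwritten
        };                                         // to point at B::print

        class C : public A { };                    // no new code and no copy: C's
                                                   // vtable slot still points at A::print

        int main() {
            A a; B b; C c;
            A* objs[] = { &a, &b, &c };
            for (int i = 0; i < 3; ++i)
                objs[i]->print();                  // dispatch goes through each
            return 0;                              // object's vtable pointer: "ABA"
        }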

    Read the article

  • Objective-C memory leak

    - by Jakub Lédl
    Hi everyone, I'm creating a Cocoa application for myself and I found a problem. I have two NSTextFields, and they're connected to each other as nextKeyViews. When I run this app with the memory leaks detection tool and tab through those two textboxes for a while, enter some text, etc., I start to leak memory. It shows me that the AppKit library is responsible; the leaked objects are NSCFStrings, and the responsible frames are [NSEvent charactersIgnoringModifiers] and [NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]. I know this is quite a brief and incomplete description, but does anyone have any ideas what the problem could be? Also, I don't use GC, so I release my instance variables in the controller's dealloc. What about the outlets? Since IBOutlet is just a mark for Interface Builder and doesn't actually mean anything, should I release them too?

    Read the article

  • Python - Memory Leak

    - by Dave
    I'm working on solving a memory leak in my Python application. Here's the thing: it really only appears to happen on Windows Server 2008 (not R2), not on earlier versions of Windows, and it also doesn't look like it's happening on Linux (although I haven't done nearly as much testing on Linux). To troubleshoot it, I set up debugging on the garbage collector:

        gc.set_debug(gc.DEBUG_UNCOLLECTABLE | gc.DEBUG_INSTANCES | gc.DEBUG_OBJECTS)

    Then, periodically, I log the contents of gc.garbage. Thing is, gc.garbage is always empty, yet my memory usage goes up and up and up. Very puzzling.
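
    One thing worth noting: gc.garbage only ever reports uncollectable cycles (on the Python 2 collectors of this era, typically cycles involving __del__), so an empty gc.garbage just means the growth comes from objects that are still referenced. A sketch of one way to see what is accumulating, by periodically histogramming the objects the collector tracks:

        import gc
        from collections import Counter

        def snapshot():
            """Count tracked live objects by type, for diffing over time."""
            gc.collect()  # collect first so only genuinely live objects remain
            return Counter(type(o).__name__ for o in gc.get_objects())

        before = snapshot()
        # ... run the suspect workload for a while ...
        after = snapshot()
        for name, delta in (after - before).most_common(10):
            print(name, delta)   # the types whose counts grew the most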

    Read the article

  • Controlling CRT memory initialization

    - by Ofek Shilon
    Occasionally you meet bugs that are reproducible only in release builds and/or only on some machines. A common (but by no means the only) reason is uninitialized variables, which are subject to random behaviour. E.g., an uninitialized BOOL can be TRUE most of the time, on most machines, but randomly be initialized as FALSE. What I wish I had is a systematic way of flushing out such bugs by modifying the behaviour of the CRT memory initialization. I'm well aware of the MS debug CRT magic numbers; at the very least I'd like a trigger to turn 0xCDCDCDCD (the pattern that initializes freshly allocated heap memory) into zeros. I suspect one would be able to easily smoke out nasty initialization pests this way, even in debug builds. Am I missing an available CRT hook (API, registry key, whatever) that enables this? Does anyone have other ideas for getting there?
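
    One low-tech way to run the "what if it were zeros" experiment, at least for allocations that go through new, is to replace the global allocation functions in a test build; a sketch (heap only: it does nothing for stack variables, raw malloc calls, or the CRT debug-heap fill patterns):

        #include <cstdlib>
        #include <cstring>
        #include <new>

        // Test-build experiment: every new'd block starts out zeroed.
        // Pass a different byte to memset (0xCD, 0xFF, ...) to vary the
        // experiment between runs and shake out value-dependent bugs.
        void* operator new(std::size_t size) {
            void* p = std::malloc(size);
            if (!p) throw std::bad_alloc();
            std::memset(p, 0, size);
            return p;
        }

        void operator delete(void* p) noexcept {
            std::free(p);
        }

        // (A full version would also replace operator new[]/delete[].)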

    Read the article

  • x86 and Memory Addressing

    - by IM
    I've been reading up on memory models in an assembly book I picked up, and I have a question or two. Let's say that the address bus has 32 lines, the data bus has 32 lines, and the CPU is 32-bit (for simplicity). Now if the CPU makes a read request and sends the 32-bit address but only needs 8 bits, do all 32 bits come back anyway? Also, memory is still addressed per byte, correct? So would a fetch bring back the bytes at addresses 0000 0001 through 0000 0004, even though only one byte was requested? Thanks in advance.

    Read the article

  • C++ Memory allocation question involving vectors

    - by TheFuzz
        vector<int> vect;
        int *int_ptr = new int(10);
        vect.push_back(*int_ptr);

    I understand that every "new" needs to be followed by a "delete" at some point, but does the clear() method clean up this memory? What about this method of doing the same thing:

        vector<int> vect;
        int int_var = 10;
        vect.push_back(int_var);

    From what I understand, clear() calls the elements' destructors, but both push_back() calls in this example push an object onto the vector, not a pointer. So does the first example, using an int pointer, need something other than clear() to clean up memory?
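
    For reference, push_back copies the pointed-to value into the vector, so the vector never owns the new'd allocation; clear() destroys the vector's own elements (trivially, for int) and can never free the separate heap int. A sketch:

        #include <vector>

        int main() {
            std::vector<int> vect;

            int* int_ptr = new int(10);
            vect.push_back(*int_ptr);  // copies the VALUE 10 into the vector
            delete int_ptr;            // required: the vector holds a copy, so
                                       // clear() will never free this int

            vect.clear();              // destroys the elements (trivial for int);
                                       // it never touches the separate heap int

            int int_var = 10;          // the second form has no manual allocation,
            vect.push_back(int_var);   // so there is nothing extra to clean up
            return 0;
        }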

    Read the article

  • C#/XNA Giant Memory Leak

    - by user1440926
    This is my first post here. I'm making a game in Visual Studio 2010 using XNA, and I've hit a giant memory leak. My game starts out using 17k RAM, and after ten minutes it's up to 65k. I ran some memory profilers, and they all say that new instances of the String object are being created, but they aren't live; the number of live instances of String hasn't changed at all. It's also creating instances of Char[] (which I'd expect from it), Object[], and StringBuilder. My game is pretty new, but there's too much code to post here. I have no idea how to get rid of instances that aren't live. Please help!
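
    In XNA games, churn of dead String/Char[]/StringBuilder instances very often points at per-frame allocations in Update/Draw (string concatenation, ToString calls, params arrays): each frame's strings die immediately, so they are never "live", but the constant allocation drives the collector. The usual mitigation is to cache text and rebuild it only on change; a sketch with hypothetical names:

        // Hypothetical HUD snippet: rebuild the score string only when the
        // score changes, instead of allocating "Score: " + score every frame.
        private int lastScore = -1;
        private string scoreText = "";

        public override void Draw(GameTime gameTime)
        {
            if (score != lastScore)
            {
                lastScore = score;
                scoreText = "Score: " + score;   // allocates only on change
            }
            spriteBatch.DrawString(font, scoreText, position, Color.White);
        }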

    Read the article

  • C++: Question about freeing memory

    - by Martijn Courteaux
    On Learn C++, they wrote this to free memory:

        int *pnValue = new int; // dynamically allocate an integer
        *pnValue = 7;           // assign 7 to this integer
        delete pnValue;
        pnValue = 0;

    My question is: is the last statement needed to free the memory correctly and completely? I thought that the pointer pnValue itself is still on the stack, and that the new only applies to the int it points to. And if it is on the stack, it will be cleaned up when the application leaves the scope it is declared in, won't it?
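
    For what it's worth, delete alone is what frees the heap int; the pointer variable itself is an automatic (stack) variable and disappears at the end of its scope regardless. Nulling it is optional defensive style, useful because deleting a null pointer is defined to do nothing:

        delete pnValue;   // frees the heap int: this is what releases memory
        pnValue = 0;      // optional: a stray second delete is now a no-op,
                          // and accidental reuse fails fast instead of
                          // touching freed memory
        delete pnValue;   // safe: deleting a null pointer does nothing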

    Read the article

  • Memory allocation in Linux

    - by Goofy
    Hello! I have a multi-threaded application where I allocate buffers with data, which then wait in queues to be sent via sockets. All buffers are reusable because I use only buffers of fixed size in the whole program (1024, 2048, 2080 and 5248 bytes). I noticed that my program usually uses up to 10 buffers of each length type at the same moment. So far I have always manually allocated a new buffer and then freed it (using malloc() and free()) when it's not needed any more. I started wondering whether Linux is smart enough to cache this memory for me, so that the next time I allocate a new buffer the system just quickly hands back a buffer I have already used before, rather than performing the heavy operation of allocating a new memory block.
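
    glibc's malloc does cache freed chunks in per-size bins, so small fixed-size blocks are usually recycled without going back to the kernel; still, with sizes this regular, an explicit free list makes the behaviour deterministic. A minimal sketch for one size class (not thread-safe; a real version would add a lock or per-thread lists):

        #include <stdlib.h>

        /* Minimal single-size buffer pool: pop from the free list when
           possible, fall back to malloc; buf_free pushes the block back
           onto the list so the next buf_alloc reuses it immediately. */
        #define BUF_SIZE 2048

        typedef union node {
            union node *next;        /* link used while the block is free */
            char payload[BUF_SIZE];  /* data used while the block is live */
        } node;

        static node *free_list = NULL;

        void *buf_alloc(void) {
            if (free_list) {
                node *n = free_list;
                free_list = n->next;
                return n;
            }
            return malloc(sizeof(node));
        }

        void buf_free(void *p) {
            node *n = p;
            n->next = free_list;
            free_list = n;
        }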

    Read the article

  • Sharing memory between modules

    - by John Holecek
    Hi, I was wondering how to share some memory between different program modules. Let's say I have a main application (exe), and then some module (dll). They both link to the same static library. This static library will have some manager that provides various services. What I would like to achieve is to have this manager shared between all application modules, and to do this transparently during the library initialization. Between processes I could use shared memory, but I want this shared within the current process only. Can you think of a cross-platform way to do this? Possibly using the Boost libraries, if they provide facilities for it. The only solution I can think of right now is to use a shared library for the respective OS, which all other modules will link to at runtime, and have the manager stored there.
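
    The usual cross-platform shape of that last idea is to give the manager's storage to exactly one dynamic library and have the static library expose only an accessor; every module then shares the single instance exported from that library. A sketch with hypothetical names (export/import decoration trimmed for brevity):

        // manager_api.h - part of the one shared "core" library's interface
        class Manager;
        Manager& GetSharedManager();   // declared dllexport/dllimport on Windows

        // manager_api.cpp - compiled into that ONE shared library only, so
        // the process holds exactly one instance no matter how many modules
        // (exe or dll) call the accessor through the static library.
        Manager& GetSharedManager() {
            static Manager instance;
            return instance;
        }

    The key point is that the static library must not define the manager's storage itself; otherwise each module that links it gets its own private copy.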

    Read the article

  • Delphi Memory Management

    - by nomad311
    I haven't been able to find the answers to a couple of my Delphi memory management questions. I could test different scenarios (which I did to find out what breaks the FreeAndNil method), but it takes too long and it's hard! But seriously, I would also like to know how you all (Delphi developers) handle these memory management issues. My questions (feel free to pose your own; I'm sure the answers to them will help me too): Does FreeAndNil work for COM objects? My thought is that I don't need it, but if all I need to do is set them to nil, then why not stay consistent in my finally block and use FreeAndNil for everything? What's the proper way to clean up static arrays (myArr : Array[0..5] of TObject)? I can't FreeAndNil the array itself, so is it good enough to just set it to nil (and do I need to do that after I've FreeAndNil'd each object?)? Thanks, guys!
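
    A sketch of the two cases, with a hypothetical interface type: FreeAndNil takes a TObject reference, so it is wrong for COM interface variables, which are reference-counted; assigning nil (or letting them go out of scope) releases them. A static array is not itself heap-allocated; only its elements are:

        var
          Intf: IMyInterface;              // hypothetical COM interface type
          MyArr: array[0..5] of TObject;
          I: Integer;
        begin
          // ...
          Intf := nil;                     // interfaces: nil releases the
                                           // reference; never FreeAndNil an
                                           // interface variable
          for I := Low(MyArr) to High(MyArr) do
            FreeAndNil(MyArr[I]);          // free each element; the array
                                           // itself is static storage and
                                           // needs no freeing of its own
        end;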

    Read the article

  • Applying XSLT to in-memory XML and returning in-memory XML

    - by Jan Willem B
    I am looking for a static function in the .NET Framework which takes an XML snippet and an XSLT file, applies the transformation in memory, and returns the transformed XML. I would like to do this:

        string rawXml = invoiceTemplateDoc.MainDocumentPart.Document.InnerXml;
        rawXml = DoXsltTransformation(rawXml, @"c:\prepare-invoice.xslt");
        // ... do more manipulations on the rawXml

    Alternatively, instead of taking and returning strings, it could take and return XmlNodes. Is there such a function?
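
    There is no single static BCL function with exactly this shape, but XslCompiledTransform makes the string-in/string-out wrapper a few lines; a sketch:

        using System.IO;
        using System.Xml;
        using System.Xml.Xsl;

        static string DoXsltTransformation(string rawXml, string xsltPath)
        {
            var xslt = new XslCompiledTransform();
            xslt.Load(xsltPath);   // compiles the stylesheet; cache this
                                   // instance if the same XSLT is reused

            using (var input = XmlReader.Create(new StringReader(rawXml)))
            using (var output = new StringWriter())
            {
                using (var writer = XmlWriter.Create(output, xslt.OutputSettings))
                {
                    xslt.Transform(input, writer);
                }
                return output.ToString();
            }
        }

    An XmlNode-based variant works the same way, since Transform also accepts an IXPathNavigable (which XmlNode implements) as its input.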

    Read the article

  • Is this a PHP memory leak?

    - by mseifert
    I have memory_get_usage() in the footer of my page, and with each refresh of the page I watch it increase by about 100k each time. My page load creates many objects and destroys them when done. My parent objects each have a __destruct() which uses unset() on all child objects. Child objects with a reference back to the parent have a __destruct() to unset() those references. Inserting memory_get_usage() before and after processing different parts of my page only tells me how much of the total usage was added by that part of the script. How do I go about determining what memory is lost and not recycled for garbage collection after the page finishes loading? I have one global $_SESSION var containing objects storing user info, but I have verified using strlen(serialize($object)) that this object is not growing in size. I presume that what I am seeing is a memory leak and that PHP garbage collection should take effect after the script ends. Any ideas how to debug this?
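
    Two details matter on the PHP versions of this era: circular references are reclaimed only by the cycle collector (PHP 5.3+), and memory_get_usage() reports the allocator's view, which can stay high even after objects are freed. A sketch for narrowing it down:

        <?php
        // Force the cycle collector and see how many cycles it reclaims; a
        // large number points at parent/child reference cycles that your
        // __destruct()/unset() pattern is not actually breaking.
        $collected = gc_collect_cycles();
        error_log("cycles collected: $collected");

        // Compare script-level usage with what PHP grabbed from the system:
        error_log('in use: ' . memory_get_usage() .
                  ' allocated: ' . memory_get_usage(true));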

    Read the article

  • Cocos2d-xna memory management for WP8

    - by Arkiliknam
    I recently upgraded to VS2012 and tried my in-development game out on the new WP8 emulators, but was dismayed to find that the emulator now crashes and throws an out-of-memory exception during my sprite loading procedure (funnily, it still works in the WP7 emulators and on my WP7). Regardless of whether the problem is the emulator or not, I want a clear understanding of how I should be managing memory in the game.

    My game consists of a character who has 4 or more different animations. Each animation consists of 4 to 7 frames. On top of that, the character has up to 8 stackable visualization modifications (e.g. eye type, nose type, hair type, clothes type). Before the memory issue, I preloaded all textures for each animation frame and customization and created animate actions out of them. The game then plays animations using the customizations applied to the current character.

    I re-examined this implementation when I received the out-of-memory exceptions and started playing with RenderTexture instead: rather than preloading all possible textures, it only loads the textures needed for the character and renders them onto a single texture, from which the animation is built. This means the animations use 1/8th of the sprites they did before. I thought this would solve my issue, but it hasn't. Here's a snippet of my code:

        var characterTexture = CCRenderTexture.Create((int)width, (int)height);
        characterTexture.BeginWithClear(0, 0, 0, 0);

        // stamp a body onto my texture
        var bodySprite = MethodToCreateSpecificSprite();
        bodySprite.Position = centerPoint;
        bodySprite.Visit();
        bodySprite.Cleanup();
        bodySprite = null;

        // stamp eyes, nose, mouth, clothes, etc...

        characterTexture.End();

    As you can see, I'm calling Cleanup and setting the sprite to null in the hope of releasing the memory, though I don't believe this is the right way, nor does it seem to work... I also tried using SharedTextureCache to load textures before stamping my texture out, and then clearing the SharedTextureCache with:

        CCTextureCache.SharedTextureCache.RemoveAllTextures();

    But this didn't have an effect either. Any tips on what I'm not doing?

    I used VS to do a memory profile of the emulation causing the crash. Both the WP7.1 and WP8 emulators peak at about 150 MB of usage; WP8 crashes and throws an out-of-memory exception. Each customisation/frame is 15 KB at the most. Let's say there are 8 layers of customisation = 120 KB, but I render them onto one texture, which I would assume is only 15 KB again. Each animation is 8 frames at the most. That's 15 KB for 1 texture, or 960 KB for 8 textures of customisation. There are 4 animation sets. That's 60 KB for 4 sets of 1 texture, or 3.75 MB for 4 sets of 8 textures of customisation. So even if it's storing every layer, it's 3.75 MB... nowhere near the 150 MB breaking point my profiler seems to suggest :(

    WP 7.1 memory profile: max 150 MB. WP8 memory profile: max 150 MB, then crashes.

    Read the article

  • Windows XP mapped drives disconnect from Windows 7 on peer-to-peer network

    - by Kathryn Codo
    I have a 5-user peer-to-peer network: 2 XP machines, 3 Windows 7 machines, and a Windows 7 machine acting as the file server. I have NO issues with the Windows 7 machines staying connected via the mapped drives to the "server". However, once a day the 2 XP machines will not connect to the Windows 7 "server" via the mapped drives. I must restart the "server" in order to see it and access the files via the mapped drives or Explorer. I have tried the /persistent:yes switch with NET USE on the XP machines, and also setting a static IP address on the "server". I've turned the firewall off on the "server"; no difference, so I turned the firewall back on. No interruption occurs with the internet when this happens; we just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as the users of the XP machines. I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around mid-day, and I've not been able to find any activity that happens at this time (like someone plugging in a thumb drive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs for his engineering firm.
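
    For reference, the persistence switch is spelled as below; note it only re-creates the mapping at logon and does not stop the serving machine from dropping idle sessions, which is a classic cause of XP clients losing mapped drives mid-day (the autodisconnect setting is the usual fix; -1 disables the idle timeout):

        net use Z: \\SERVER\Share /persistent:yes

        rem On the Windows 7 "server": disable idle-session autodisconnect
        net config server /autodisconnect:-1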

    Read the article

  • "Path Not Found" when attempting to write to a sub folder within a mapped drive

    - by Adam
    We have an interesting issue with one of our server shares, or possibly our Win 7 desktops. When our users try to save files in a subfolder, either via copy/paste or through an application, on a mapped drive on our DC, they receive an error saying "Path not found". They can, however, browse this folder and open files from it; this is where the "Path not found" error doesn't seem to stack up, in my opinion. Users can save files fine in the root folder of the mapped drive; it appears to affect only subfolders. It seems to be random which users and machines this affects: the users can log on to a different machine and save in subfolders fine, on the same mapped drive. Event Viewer hasn't been much help either. Currently, the only solution we have found is to re-image the affected machines, which solves the issue. Our servers are Server 2008 R2 with Win 7 Pro desktops. Any help/pointers/suggestions would be greatly appreciated.

    Read the article

  • Why does IHttpAsyncHandler leak memory under load?

    - by Anton
    I have noticed that the .NET IHttpAsyncHandler (and the IHttpHandler, to a lesser degree) leaks memory when subjected to concurrent web requests. In my tests, the development web server (Cassini) jumps from 6 MB of memory to over 100 MB, and once the test is finished, none of it is reclaimed.

    The problem can be reproduced easily. Create a new solution (LeakyHandler) with two projects: an ASP.NET web application (LeakyHandler.WebApp) and a console application (LeakyHandler.ConsoleApp).

    In LeakyHandler.WebApp: create a class called TestHandler that implements IHttpAsyncHandler. In the request processing, do a brief Sleep and end the response. Add the HTTP handler to Web.config as test.ashx.

    In LeakyHandler.ConsoleApp: generate a large number of HttpWebRequests to test.ashx and execute them asynchronously. As the number of HttpWebRequests (sampleSize) is increased, the memory leak is made more and more apparent.

    LeakyHandler.WebApp TestHandler.cs:

        namespace LeakyHandler.WebApp
        {
            public class TestHandler : IHttpAsyncHandler
            {
                #region IHttpAsyncHandler Members

                private ProcessRequestDelegate Delegate { get; set; }
                public delegate void ProcessRequestDelegate(HttpContext context);

                public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
                {
                    Delegate = ProcessRequest;
                    return Delegate.BeginInvoke(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    Delegate.EndInvoke(result);
                }

                #endregion

                #region IHttpHandler Members

                public bool IsReusable
                {
                    get { return true; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    Thread.Sleep(10);
                    context.Response.End();
                }

                #endregion
            }
        }

    LeakyHandler.WebApp Web.config:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="false" />
            <httpHandlers>
              <add verb="POST" path="test.ashx" type="LeakyHandler.WebApp.TestHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    LeakyHandler.ConsoleApp Program.cs:

        namespace LeakyHandler.ConsoleApp
        {
            class Program
            {
                private static int sampleSize = 10000;
                private static int startedCount = 0;
                private static int completedCount = 0;

                static void Main(string[] args)
                {
                    Console.WriteLine("Press any key to start.");
                    Console.ReadKey();

                    string url = "http://localhost:3000/test.ashx";
                    for (int i = 0; i < sampleSize; i++)
                    {
                        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                        request.Method = "POST";
                        request.BeginGetResponse(GetResponseCallback, request);
                        Console.WriteLine("S: " + Interlocked.Increment(ref startedCount));
                    }

                    Console.ReadKey();
                }

                static void GetResponseCallback(IAsyncResult result)
                {
                    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result);
                    try
                    {
                        using (Stream stream = response.GetResponseStream())
                        {
                            using (StreamReader streamReader = new StreamReader(stream))
                            {
                                streamReader.ReadToEnd();
                                System.Console.WriteLine("C: " + Interlocked.Increment(ref completedCount));
                            }
                        }
                        response.Close();
                    }
                    catch (Exception ex)
                    {
                        System.Console.WriteLine("Error processing response: " + ex.Message);
                    }
                }
            }
        }
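
    One thing that stands out in TestHandler, which may or may not account for the whole leak: IsReusable returns true while the delegate is stored in an instance property, so concurrent requests on a reused handler can overwrite each other's Delegate, and the EndInvoke that eventually runs may not match the BeginInvoke that started the call, keeping async state alive. A sketch that keeps the delegate out of instance state, recovering it in EndProcessRequest via System.Runtime.Remoting.Messaging.AsyncResult:

        using System.Runtime.Remoting.Messaging;

        public IAsyncResult BeginProcessRequest(HttpContext context,
                                                AsyncCallback cb, object extraData)
        {
            // Local variable instead of an instance property: a reusable
            // handler is shared across requests, so per-request state must
            // not live on the handler object itself.
            ProcessRequestDelegate d = ProcessRequest;
            return d.BeginInvoke(context, cb, extraData);
        }

        public void EndProcessRequest(IAsyncResult result)
        {
            // BeginInvoke's IAsyncResult is an AsyncResult carrying the
            // originating delegate, so the matching EndInvoke always runs.
            var ar = (AsyncResult)result;
            ((ProcessRequestDelegate)ar.AsyncDelegate).EndInvoke(result);
        }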

    Read the article

  • MODI leaking memory

    - by Khragg
    I have an app where I'm using MODI 2007 to OCR several multi-page TIFF files. I have found that when I kick it off on a directory that contains several good TIFFs, but also some TIFFs that cannot be opened in Windows Picture and Fax Viewer, then MODI also fails to OCR those "bad" TIFFs. When this happens, the app is unable to reclaim any of the memory that was used by MODI to OCR those TIFFs. After the tool tries to OCR too many of these "bad" TIFFs, the machine runs out of memory and the app crashes. I have tried several code fixes from the web that supposedly fix MODI memory leaks, but so far none have worked for me. I am pasting in the part of the code that does the OCRing below:

        StringBuilder strRecText = new StringBuilder(10000);

        MODI.Document doc1 = new MODI.Document();
        doc1.Create(name);
        try
        {
            // this will OCR all pages of a multi-page tiff file
            doc1.OCR(MODI.MiLANGUAGES.miLANG_ENGLISH, true, true);
        }
        catch (Exception e)
        {
            doc1.Close(false); // clean up
            if (doc1 != null)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                System.Runtime.InteropServices.Marshal.FinalReleaseComObject(doc1);
                doc1 = null;
            }
        }

        MODI.Images images = doc1.Images;
        for (int imageCounter = 0; imageCounter < images.Count; imageCounter++)
        {
            if (imageCounter > 0)
            {
                if (!noPageBreakFlag)
                {
                    strRecText.Append((char)pageBreakChar);
                }
            }

            MODI.Image image = (MODI.Image)images[imageCounter];
            MODI.Layout layout = image.Layout;
            strRecText.Append(layout.Text);

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            GC.WaitForPendingFinalizers();

            if (layout != null)
            {
                System.Runtime.InteropServices.Marshal.FinalReleaseComObject(layout);
                layout = null;
            }
            if (image != null)
            {
                System.Runtime.InteropServices.Marshal.FinalReleaseComObject(image);
                image = null;
            }
        }

        File.AppendAllText(ocrFile, strRecText.ToString()); // write the OCR file out to disk

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        GC.WaitForPendingFinalizers();

        if (images != null)
        {
            System.Runtime.InteropServices.Marshal.FinalReleaseComObject(images);
            images = null;
        }

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        GC.WaitForPendingFinalizers();

        doc1.Close(false); // clean up
        if (doc1 != null)
        {
            System.Runtime.InteropServices.Marshal.FinalReleaseComObject(doc1);
            doc1 = null;
        }

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        GC.WaitForPendingFinalizers();

    Read the article

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage, and the system load indicator shows that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:

    BEFORE:
    - Cache was very sparingly used; pretty much none of my memory usage was cache.
    - Swappiness was 0 (this caused my memory to be used first, then swap only if needed).
    - Performance was quite good and logical: RAM was used fully first and caching was minimal. I could run enough software to utilize my full 4 GB of RAM without any performance degradation whatsoever. Swap space was then used as needed, which was obviously slower (I am on a HDD) but still usable once the current program was loaded into memory.

    AFTER:
    - Cache is used to fill the full 4 GB as soon as my virtual machine is run.
    - Swappiness is 0 (same behaviour as before, but the cache consumes the full memory straight away).
    - Performance is terrible and unusable while running Ubuntu software: basic things like changing windows take 2+ minutes, and changing screens happens frame by frame, sometimes over up to 5 minutes. I cannot run an IDE and a VM together like I could with ease before.

    So basically, any suggestions on how to take my performance back to how it was before, while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks.

    EDIT: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, the RAM will lock at about 50% used and 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), the RAM will continue to rise as needed, the cache will stay as the complete remaining memory, and there is no noticeable performance degradation.
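
    To see how much of the RAM really is page cache versus application memory, and to test whether the cache is implicated at all, the cache can be inspected and (for diagnosis only) dropped at runtime:

        # breakdown of used / free / buffers / cached memory
        free -m

        # flush filesystem buffers, then drop the clean page cache, dentries
        # and inodes (a diagnostic step, not a fix - the kernel refills it)
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches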

    Read the article

  • -[UIImage drawInRect:] / CGContextDrawImage() not releasing memory?

    - by sohocoke
    I wanted to easily blend a UIImage on top of another background image, so I wrote a category method for UIImage, adapted from http://stackoverflow.com/questions/1309757/blend-two-uiimages :

        - (UIImage *)blendedImageOn:(UIImage *)backgroundImage {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            UIGraphicsBeginImageContext(backgroundImage.size);
            CGRect rect = CGRectMake(0, 0, backgroundImage.size.width,
                                     backgroundImage.size.height);
            [backgroundImage drawInRect:rect];
            [self drawInRect:rect];
            UIImage *blendedImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [pool release];
            return blendedImage;
        }

    Unfortunately my app, which uses the above method to load around 20 images and blend them with background and gloss images (so probably around 40 calls), is being jettisoned on the device. An Instruments session revealed that calls to malloc stemming from the calls to drawInRect: are responsible for the bulk of the memory usage. I tried replacing the drawInRect: messages with equivalent calls to the function CGContextDrawImage, but it didn't help. The NSAutoreleasePool was added after I found the memory usage problem; it also didn't make a difference. I'm thinking this is probably because I'm not using graphics contexts appropriately. Would calling the above method in a loop be a bad idea because of the number of contexts I create? Or did I simply miss something?
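
    One detail worth flagging in this snippet, separate from the malloc question: under manual reference counting, UIGraphicsGetImageFromCurrentImageContext() returns an autoreleased UIImage owned by the innermost pool, so draining the local pool before returning hands the caller a dangling reference. A sketch of the usual fix, retaining across the pool drain and autoreleasing for the caller:

        - (UIImage *)blendedImageOn:(UIImage *)backgroundImage {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            UIGraphicsBeginImageContext(backgroundImage.size);
            CGRect rect = CGRectMake(0, 0, backgroundImage.size.width,
                                     backgroundImage.size.height);
            [backgroundImage drawInRect:rect];
            [self drawInRect:rect];
            // Retain before the pool drains, hand ownership back afterwards:
            UIImage *blendedImage =
                [UIGraphicsGetImageFromCurrentImageContext() retain];
            UIGraphicsEndImageContext();
            [pool release];
            return [blendedImage autorelease];
        }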

    Read the article
