Search Results

Search found 14113 results on 565 pages for 'memory stick'.


  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks could hold, it could write portions of the new information into these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward. So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - what sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2. BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them? If I was lazy, I could just make one huge static bitmap and grab values that way. But I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still boggles my mind how these programmers accomplished what they did with what they had. EDIT: The only solution I can think of would be to hold separate tables stating which tiles should be in the PPU at what time, but I think that would be a huge memory cost that the NES wouldn't be able to handle.
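
    One common answer is that games stored only a few bank indices per room rather than per-tile tables, so the memory cost the EDIT worries about never materializes. Here is a minimal, hypothetical C++ sketch of that idea (illustrative only - the names and level format are invented, not taken from any actual NES title):

        #include <cstdint>
        #include <cstring>

        constexpr int kBankSize = 1024;            // one CHR bank
        uint8_t chrRom[64 * kBankSize];            // all art in ROM, indexed by bank
        uint8_t ppuBg[4 * kBankSize];              // emulated 128x128 BG table
        uint8_t ppuSprites[4 * kBankSize];         // emulated 128x128 sprite table

        // Each room record just names which banks belong in the PPU while the
        // room is on screen - a handful of bytes per room in ROM.
        struct Room {
            uint8_t bgBanks[4];
            uint8_t spriteBanks[4];
        };

        // Run on a room transition (or on unpause, to restore what the pause
        // screen overwrote): copy the room's banks back into the PPU tables.
        void loadRoomGraphics(const Room& room) {
            for (int i = 0; i < 4; ++i) {
                std::memcpy(&ppuBg[i * kBankSize],
                            &chrRom[room.bgBanks[i] * kBankSize], kBankSize);
                std::memcpy(&ppuSprites[i * kBankSize],
                            &chrRom[room.spriteBanks[i] * kBankSize], kBankSize);
            }
        }

        int main() {
            Room room{{0, 1, 2, 3}, {8, 9, 10, 11}};
            loadRoomGraphics(room);   // PPU tables now hold this room's art
            return 0;
        }

    Under a scheme like this the pause screen doesn't need to remember what it replaced: unpausing simply re-runs the load for the current room, overwriting the pause tiles with the room's banks again.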

  • C++ operator new, object versions, and the allocation sizes

    - by mizubasho
    Hi. I have a question about different versions of an object, their sizes, and allocation. The platform is Solaris 8 (and higher). Let's say we have programs A, B, and C that all link to a shared library D. Some class is defined in library D - let's call it 'classD' - and assume its size is 100 bytes. Now, we want to add a few members to classD for the next version of program A, without affecting the existing binaries B or C. The new size will be, say, 120 bytes. We want program A to use the new definition of classD (120 bytes), while programs B and C continue to use the old definition of classD (100 bytes). A, B, and C all use operator "new" to create instances of D. The question is, when does operator "new" know the amount of memory to allocate? Compile time or run time? One thing I am afraid of is that programs B and C expect classD to be 100 bytes and allocate accordingly, whereas the new shared library D requires 120 bytes for classD, and this inconsistency may cause memory corruption in programs B and C if I link them with the new library D. In other words, the area for the extra 20 bytes that the new classD requires may be allocated to some other variables by programs B and C. Is this assumption correct? Thanks for your help.
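
    The allocation size is fixed at compile time of each calling binary: the compiler lowers `new classD` into roughly `operator new(sizeof(classD))`, using the sizeof from the headers that binary was built against, so B and C really would keep requesting 100 bytes while the new library code touches 120. A minimal sketch of the mechanism (illustrative names only, not the questioner's actual class):

        #include <cstdio>
        #include <new>

        // Stand-ins for the two header versions of classD.
        struct ClassD_old { char payload[100]; };                  // what B and C compiled against
        struct ClassD_new { char payload[100]; char extra[20]; };  // what the new library D uses

        int main() {
            // What a call site in B or C effectively becomes: the 100 is a
            // compile-time constant baked into the old binaries.
            void* p = ::operator new(sizeof(ClassD_old));   // requests 100 bytes

            // If the new library's member functions then treat p as a
            // ClassD_new*, writes to the 20 extra bytes land past the end of
            // the allocation - exactly the corruption the question fears.
            std::printf("old size: %zu, new size: %zu\n",
                        sizeof(ClassD_old), sizeof(ClassD_new));
            ::operator delete(p);
            return 0;
        }

    So yes, the assumption in the question is correct; the usual ways out are a new class name, symbol versioning, or keeping the extra state in a side structure so the original layout is preserved.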

  • How can I release this NSXMLParser without crashing my app?

    - by prendio2
    Below is the @interface for an MREntitiesConverter object I use to strip all HTML tags from a string using an NSXMLParser.

        @interface MREntitiesConverter : NSObject {
            NSMutableString* resultString;
            NSString* xmlStr;
            NSData *data;
            NSXMLParser* xmlParser;
        }
        @property (nonatomic, retain) NSMutableString* resultString;
        - (NSString*)convertEntitiesInString:(NSString*)s;
        @end

    And this is the implementation:

        @implementation MREntitiesConverter
        @synthesize resultString;

        - (id)init {
            if ([super init]) {
                self.resultString = [NSMutableString string];
            }
            return self;
        }

        - (NSString*)convertEntitiesInString:(NSString*)s {
            xmlStr = [NSString stringWithFormat:@"<data>%@</data>", s];
            data = [xmlStr dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES];
            xmlParser = [[NSXMLParser alloc] initWithData:data];
            [xmlParser setDelegate:self];
            [xmlParser parse];
            return [resultString autorelease];
        }

        - (void)dealloc {
            [data release];
            //I want to release xmlParser here but it crashes the app
            [super dealloc];
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)s {
            [self.resultString appendString:s];
        }
        @end

    If I release xmlParser in the dealloc method it crashes my app, but without releasing it I am quite obviously leaking memory. I am new to Instruments and trying to get the hang of optimising this app. Any help you can offer on this particular issue will likely help me solve other memory issues in my app. Yours in frustrated anticipation :) Oisin

  • Designing small comparable objects

    - by Thomas Ahle
    Intro: Consider you have a list of key/value pairs: (0,a) (1,b) (2,c). You have a function that inserts a new value between two current pairs, and you need to give it a key that keeps the order: (0,a) (0.5,z) (1,b) (2,c). Here the new key was chosen as the average of the keys of the bounding pairs. The problem is that your list may see millions of inserts. If these inserts all land close to each other, you may end up with keys such as 2^(-1000000), which are not easily stored in any standard or special-purpose number class.

    The problem: How can you design a system for generating keys that:

    1. Gives the correct result (larger/smaller than) when compared to all the rest of the keys.
    2. Takes up only O(log n) memory (where n is the number of items in the list).

    My tries: First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts. Then I thought about saving pointers to a number of other keys and storing the lower/greater-than relationships, but that would always require at least O(sqrt(n)) memory and time per comparison.

    Extra info: Ideally the algorithm shouldn't break when pairs are deleted from the list.
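
    For reference, this is essentially the order-maintenance problem (Dietz and Sleator, 1987), where the standard trick is to use fixed-width integer labels and relabel when two neighbours run out of gap, keeping every key at O(log n) bits. A toy C++ sketch of the relabeling idea follows; real implementations relabel only a small local range to get good amortized bounds, whereas this toy version respaces everything for brevity and assumes a non-empty list:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Item { uint64_t key; };   // value payload omitted
        std::vector<Item> items;         // kept in list order

        // Insert a new item just after position pos, choosing a key between
        // its neighbours; respace all labels evenly if no gap remains.
        void insertAfter(std::size_t pos) {
            uint64_t lo = items[pos].key;
            uint64_t hi = (pos + 1 < items.size()) ? items[pos + 1].key : UINT64_MAX;
            if (hi - lo < 2) {           // no room left between neighbours
                uint64_t step = UINT64_MAX / (items.size() + 2);
                for (std::size_t i = 0; i < items.size(); ++i)
                    items[i].key = (i + 1) * step;
                lo = items[pos].key;
                hi = (pos + 1 < items.size()) ? items[pos + 1].key : UINT64_MAX;
            }
            items.insert(items.begin() + pos + 1, Item{lo + (hi - lo) / 2});
        }

    Keys stay plain integers, so comparisons are O(1), deletions are free, and the worst case that breaks averaging (a million inserts into the same gap) just triggers relabeling instead of growing the keys.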

  • How safe and reliable are C++ String Literals?

    - by DoctorT
    So, I'm wanting to get a better grasp on how string literals in C++ work. I'm mostly concerned with situations where you're assigning the address of a string literal to a pointer and passing it around. For example:

        char* advice = "Don't stick your hands in the toaster.";

    Now let's say I just pass this string around by copying pointers for the duration of the program. Sure, it's probably not a good idea, but I'm curious what would actually be going on behind the scenes. For another example, let's say we make a function that returns a string literal:

        char* foo() {
            // function does stuff
            return "Yikes!"; // somebody's feeble attempt at an error message
        }

    Now let's say this function is called very often, and the string literal is only used about half the time it's called:

        // situation #1: it's just randomly called without heed to the return value
        foo();

        // situation #2: the returned string is kept and used for who knows how long
        char* retVal = foo();

    In the first situation, what's actually happening? Is the string just created but not used, and never deallocated? In the second situation, is the string going to be maintained as long as the user finds need for it? What happens when it isn't needed anymore... will that memory be freed up then (assuming nothing points to that space anymore)? Don't get me wrong, I'm not planning on using string literals like this. I'm planning on using a container to keep my strings in check (probably std::string). I'm mostly just wanting to know if these situations could cause problems either for memory management or corrupted data.
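
    For what it's worth, string literals have static storage duration in C++: the bytes live in a (typically read-only) data section of the executable for the program's whole lifetime, so nothing is allocated per call and nothing is ever freed, whether or not the return value is used. A small sketch demonstrating that:

        #include <cstdio>

        // Returning a pointer to a literal is safe: the literal is not created
        // in the call's stack frame, it already exists in static storage.
        const char* foo() {
            return "Yikes!";
        }

        int main() {
            const char* a = foo();
            const char* b = foo();
            // Typically prints the same address twice: every call hands back
            // a pointer to the one static copy.
            std::printf("%p\n%p\n", static_cast<const void*>(a),
                                    static_cast<const void*>(b));
            return 0;
        }

    The real hazard is the non-const `char*` in the question: writing through it is undefined behaviour, which is why `const char*` is the appropriate type for pointers to literals.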

  • Have you dealt with space hardening?

    - by Tim Post
    I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation - in fact, it was forbidden. I've also read that old-fashioned core memory may be preferable in space. I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space-hardened boards offer some sanity in such a harsh environment; however, I'm wondering (as a C programmer) how I would need to adjust my thinking and code if I was writing something that would run in space. I think the next few years might show more growth in private space companies, and I'd really like to be at least somewhat knowledgeable regarding best practices. Can anyone recommend some books, offer links to papers on the topic, or (gasp) even a simulator that shows you what happens to a program if radiation, cold, or heat bombards a board that has sustained damage to its insulation? I think the goal is keeping humans inside the spacecraft (as far as fixing or swapping stuff goes) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount.
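
    On the dynamic-allocation point: the pattern such rules push you toward (for example, Gerard Holzmann's "Power of Ten" rules at JPL forbid allocation after initialization) is compile-time-sized static pools, so allocation can never fail mid-mission and worst-case memory use is known before launch. A minimal sketch of that style, with invented names:

        #include <cstddef>

        struct Message { unsigned char payload[64]; };

        constexpr std::size_t kMaxMessages = 32;
        static Message g_pool[kMaxMessages];   // all storage reserved up front
        static std::size_t g_used = 0;

        // Hand out slots from the fixed pool; never calls malloc or new.
        // Callers must handle the nullptr case explicitly - there is no
        // out-of-memory surprise at runtime, only a checked, bounded failure.
        Message* acquireMessage() {
            return (g_used < kMaxMessages) ? &g_pool[g_used++] : nullptr;
        }

        int main() {
            Message* m = acquireMessage();
            return m ? 0 : 1;
        }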

  • Is this 2D array initialization a bad idea?

    - by Brendan Long
    I have something I need a 2D array for, but for better cache performance, I'd rather have it actually be a normal array. Here's the idea I had, but I don't know if it's a terrible one:

        const int XWIDTH = 10, YWIDTH = 10;

        int main() {
            int*  tempInts = new int[XWIDTH * YWIDTH];
            int** ints     = new int*[XWIDTH];
            for (int i = 0; i < XWIDTH; i++) {
                ints[i] = &tempInts[i * YWIDTH];
            }
            // do things with ints
            delete[] ints[0];
            delete[] ints;
            return 0;
        }

    So the idea is that instead of newing a bunch of arrays (and having them placed in different places in memory), I just point into an array I made all at once. The reason for the delete[] ints[0]; is that I'm actually doing this in a class, and it would save [trivial amounts of] memory not to keep the original tempInts pointer around. Just wondering if there are any reasons this is a horrible idea, or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].
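
    The scheme works, though a common alternative keeps a single buffer and does the indexing in an accessor, so there is one new[]/delete[] pair and no row-pointer table to keep consistent. A sketch of that shape (illustrative names, not a drop-in for the class mentioned):

        #include <cstddef>

        class Grid {
        public:
            Grid(std::size_t xdim, std::size_t ydim)
                : ydim_(ydim), data_(new int[xdim * ydim]()) {}
            ~Grid() { delete[] data_; }

            // g(x, y) instead of ints[x][y]: same row-major math as
            // x*YWIDTH+y, hidden so callers never write the multiply.
            int& operator()(std::size_t x, std::size_t y) {
                return data_[x * ydim_ + y];
            }

        private:
            Grid(const Grid&);             // raw owning pointer: forbid copies
            Grid& operator=(const Grid&);
            std::size_t ydim_;
            int* data_;
        };

        int main() {
            Grid g(10, 10);
            g(3, 4) = 42;
            return g(3, 4) == 42 ? 0 : 1;
        }

    The trade-off is g(x, y) syntax instead of g[x][y]; getting true [][] back requires a proxy row object, which is usually not worth the extra machinery.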

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read hundreds of articles about the OOM problem, most of them concerning large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(...). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        (Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];

            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }

            // clean up
            istream.close();
            connection.disconnect();
            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?

  • Why does Perl's Devel::LeakTrace::Fast point to blank files and evals?

    - by kt
    I am using Devel::LeakTrace::Fast to debug a memory leak in a Perl script designed as a daemon, which runs an infinite loop with sleeps until interrupted. I am having some trouble both reading the output and finding documentation to help me understand it. The perldoc doesn't contain much information on the output. Most of it makes sense, such as pointing to globals in DBI. Intermingled with the output, however, are several entries of the form:

        leaked SV(<LOCATION>) from (eval #) line #

    where the numbers are numbers and <LOCATION> is a location in memory. The script itself does not use eval at any point - I have not investigated each used module to see if evals are present. Mostly what I want to know is how to find these evals (if possible). I also find the following entry repeated over and over again:

        leaked SV(<LOCATION>) from line #

    where line # is always the same number. Not very helpful in tracking down what file that line is in.

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I haven't got the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();
        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
            $table_rows[] = $row->some_attribute;
        }
        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks

  • malloc:mmap(size=XX) failed (error code=12)

    - by Michel
    I have a memory problem in an iPhone app that is giving me a hard time. Here is the error message I get:

        malloc: *** mmap(size=9281536) failed (error code=12)
        *** error: can't allocate region

    I am using ARC for this app, in case that might be useful information. The code (below) just uses a file in the bundle to load a Core Data entity. The strange thing is that the crash happens only after more than 90 loops; it seems to me that since the size of "contents" is getting smaller and smaller, the memory requests should also get smaller and smaller. Here is the code; if anyone can see a flaw, please let me know.

        NSString *path, *contents, *lineBuffer;
        path = [[NSBundle mainBundle] pathForResource:@"myFileName" ofType:@"txt"];
        contents = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
        int counter = 0;
        while (counter < 10000) {
            lineBuffer = [contents substringToIndex:[contents rangeOfCharacterFromSet:[NSCharacterSet newlineCharacterSet]].location];
            contents = [contents substringFromIndex:[lineBuffer length] + 1];
            newItem = [NSEntityDescription insertNewObjectForEntityForName:@"myEntityName" inManagedObjectContext:context];
            [newItem setValue:lineBuffer forKey:@"name"];
            request = [[NSFetchRequest alloc] init];
            [request setEntity:[NSEntityDescription entityForName:@"myEntityName" inManagedObjectContext:context]];
            error = nil;
            [context save:&error];
            counter++;
        }

  • When I set a property (retain), does one "self." mean one "retain"?

    - by Walter
    Although I'm about to finish my first app, I'm still confused about the very basic memory management... I've read through many posts here and the Apple documentation, but I'm still puzzled. For example, I'm currently doing things like this to add a label programmatically:

        @property (retain, nonatomic) UILabel *showTime;

        @synthesize showTime;

        showTime = [[UILabel alloc] initWithFrame:CGRectMake(45, 4, 200, 36)];
        [self.showTime setText:[NSString stringWithFormat:@"%d", time]];
        [self.showTime setFont:[UIFont fontWithName:@"HelveticaRoundedLT-Bold" size:23]];
        [self.showTime setTextColor:numColor];
        self.showTime.backgroundColor = [UIColor clearColor];
        [self addSubview:self.showTime];
        [showTime release];

    This is my current practice for UILabel, UIButton, UIImageView, etc.: alloc/init without "self.", because I know going through the property would retain twice. But after the allocation, I put the "self." back to set the attributes. My gut feeling tells me I am doing something wrong, but it works superficially, and I found no memory leak in Analyze or Instruments. Can anyone give me advice? When I use "self." to set the text and background color, does it retain automatically? Thanks so much!

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand increases memory pressure beyond what is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swapfile on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.

  • Having trouble with a workaround for booting from a USB stick, using GRUB and a minimal Linux kernel to load USB drivers

    - by s hanley
    I'm trying to boot from a USB stick. I formatted it to FAT32, and later to ext2, and installed DSL on it using UNetbootin, and later using the USB install guide on the DSL wiki (http://www.damnsmalllinux.org/wiki/index.php/Install_to_USB_From_within_Linux). The BIOS doesn't have a setting for booting from USB. GRUB doesn't "see" the USB drive when I use the root and find commands explained in (http://www.damnsmalllinux.org/wiki/index.php/USB_Booting). This happens even when I set boot-from-floppy at the top of the boot order. However, my USB keyboard is recognised by the BIOS and by GRUB. How can it recognise the keyboard but not the USB drive? Also, the USB LED does flash even before GRUB starts up, so surely something must be happening USB-wise? I am now following an Ubuntu guide to booting from a USB stick, using an HDD-based, minimal Linux kernel to supply the USB drivers. But I'm having difficulty adapting it to other OSes (Slax/DSL/Aptosid). I believe I have to alter the initrd.gz file to include USB drivers and then copy that file along with vmlinuz to a partition on my HDD. But what is the GRUB command for the kernel line supposed to look like? From the Ubuntu example it's:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper noprompt cdrom-detect/try-usb=true persistent
        initrd /boot/usb-boot/initrd.lz
        boot

    Should mine just be:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz cdrom-detect/try-usb=true
        initrd /boot/usb-boot/initrd.lz
        boot

  • What is the right tool to detect VMT or heap corruption in Delphi?

    - by Roland Bengtsson
    I'm a member of a team that uses Delphi 2007 for a large application, and we suspect heap corruption because sometimes there are strange bugs that have no other explanation. I believe the range-checking option of the compiler only applies to arrays. I want a tool that raises an exception or writes a log entry when there is a write to a memory address that is not allocated by the application. Regards

    EDIT: The error is of this type:

        Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD

    EDIT 2: Thanks for all the suggestions. Unfortunately I think the problem is deeper than that. We use a patched version of Bold for Delphi, as we own the source, so probably some errors were introduced in the Bold framework. Yes, we have a log with callstacks handled by JCL, and also trace messages. A callstack with the exception can look like this:

        20091210 16:02:29 (2356) [EXCEPTION] Raised EBold: Failed to derive ServerSession.mayDropSession: Boolean
        OCL expression: not active and not idle and timeout and (ApplicationKernel.allinstances->first.CurrentSession <> self)
        Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD.
        At Location BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016)
        Inner Exception Raised EBold: Failed to derive ServerSession.mayDropSession: Boolean
        OCL expression: not active and not idle and timeout and (ApplicationKernel.allinstances->first.CurrentSession <> self)
        Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD.
        At Location BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016)
        Inner Exception Call Stack:
        [00] System.TObject.InheritsFrom (sys\system.pas:9237)
        Call Stack:
        [00] BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016)
        [01] BoldSystem.TBoldMember.DeriveMember (BoldSystem.pas:3846)
        [02] BoldSystem.TBoldMemberDeriver.DoDeriveAndSubscribe (BoldSystem.pas:7491)
        [03] BoldDeriver.TBoldAbstractDeriver.DeriveAndSubscribe (BoldDeriver.pas:180)
        [04] BoldDeriver.TBoldAbstractDeriver.SetDeriverState (BoldDeriver.pas:262)
        [05] BoldDeriver.TBoldAbstractDeriver.Derive (BoldDeriver.pas:117)
        [06] BoldDeriver.TBoldAbstractDeriver.EnsureCurrent (BoldDeriver.pas:196)
        [07] BoldSystem.TBoldMember.EnsureContentsCurrent (BoldSystem.pas:4245)
        [08] BoldSystem.TBoldAttribute.EnsureNotNull (BoldSystem.pas:4813)
        [09] BoldAttributes.TBABoolean.GetAsBoolean (BoldAttributes.pas:3069)
        [10] BusinessClasses.TLogonSession._GetMayDropSession (code\BusinessClasses.pas:31854)
        [11] DMAttracsTimers.TAttracsTimerDataModule.RemoveDanglingLogonSessions (code\DMAttracsTimers.pas:237)
        [12] DMAttracsTimers.TAttracsTimerDataModule.UpdateServerTimeOnTimerTrig (code\DMAttracsTimers.pas:482)
        [13] DMAttracsTimers.TAttracsTimerDataModule.TimerKernelWork (code\DMAttracsTimers.pas:551)
        [14] DMAttracsTimers.TAttracsTimerDataModule.AttracsTimerTimer (code\DMAttracsTimers.pas:600)
        [15] ExtCtrls.TTimer.Timer (ExtCtrls.pas:2281)
        [16] Classes.StdWndProc (common\Classes.pas:11583)

    The inner exception part is the callstack at the moment an exception is re-raised.

    EDIT 3: The theory right now is that the Virtual Method Table (VMT) is somehow broken. When this happens there is no indication of it; only when a method is called is an exception raised (ALWAYS on address FFFFFFDD, -35 decimal), and by then it is too late - you don't know the real cause of the error. Any hint on how to catch a bug like this is really appreciated!!! We have tried SafeMM, but the problem is that the memory consumption is too high even when the 3 GB flag is used. So now I'm trying to offer a bounty to the SO community :)

    EDIT 4: One hint: according to the log there is often (or even always) another exception before this one - it can be, for example, optimistic locking in the database. We have tried to raise exceptions by force, but in the test environment it just works fine.

    EDIT 5: The story continues... I did a search of the logs for the last 30 days. The result:

        "Read of address FFFFFFDB"    0
        "Read of address FFFFFFDC"   24
        "Read of address FFFFFFDD"  270
        "Read of address FFFFFFDE"   22
        "Read of address FFFFFFDF"    7
        "Read of address FFFFFFE0"   20
        "Read of address FFFFFFE1"    0

    So the current theory is that an enum (there are lots of them in Bold) overwrites a pointer. I got hits on 5 different addresses above, which could mean the enum holds 5 values, with the second one the most used. If there is an exception, a rollback should occur for the database and the Bold objects should be destroyed. Maybe there is a chance that not everything is destroyed and an enum can still write to an address location. If this is true, maybe it is possible to search the code with a regexp for an enum with 5 values?

    EDIT 6: To summarize: no, there is no solution to the problem yet. I realize I may have misled you a bit with the callstack. Yes, there is a timer in that one, but there are other callstacks without a timer - sorry for that. There are two common factors: an exception with "Read of address FFFFFFxx", and the top of the callstack being System.TObject.InheritsFrom (sys\system.pas:9237). This convinces me that VilleK best describes the problem, and I'm also convinced that the problem is somewhere in the Bold framework. But the BIG question is: how can problems like this be solved? It is not enough to have an Assert as VilleK suggests, because the damage has already happened and the callstack is gone by that moment. So, to describe my view of what may cause the error: somewhere a pointer is assigned a bad value (1, but it can also be 0, 2, 3, etc.); an object is assigned to that pointer; there is a method call in the object's base class. This causes the method TObject.InheritsFrom to be called, and an exception appears on address FFFFFFDD. Those 3 events can sit together in the code, but they may also happen much later - I think this is true for the last method call.

    EDIT 7: We work closely with the author of Bold, Jan Norden, and he recently found a bug in the OCL evaluator in the Bold framework. When this was fixed, these kinds of exceptions decreased a lot, but they still occasionally occur. It is a big relief that this is almost solved.

  • C++ destructor seems to be called 'early'

    - by suicideducky
    Please see the "edit" section for the updated information. Sorry for yet another C++ dtor question... However, I can't seem to find one exactly like mine, as all the others are assigning to STL containers (which will delete objects themselves), whereas mine is an array of pointers. So I have the following code fragment:

        #include <iostream>

        class Block {
        public:
            int x, y, z;
            int type;
            Block() { x = 1; y = 2; z = 3; type = -1; }
        };

        template <class T>
        class Octree {
            T* children[8];
        public:
            ~Octree() {
                for (int i = 0; i < 8; i++) {
                    std::cout << "del:" << i << std::endl;
                    delete children[i];
                }
            }
            Octree() {
                for (int i = 0; i < 8; i++)
                    children[i] = new T;
            }
            // place newchild in array at [i]
            void set_child(int i, T* newchild) {
                children[i] = newchild;
            }
            // return child at [i]
            T* get_child(int i) {
                return children[i];
            }
            // place newchild at [i] and return the old [i]
            T* swap_child(int i, T* newchild) {
                T* p = children[i];
                children[i] = newchild;
                return p;
            }
        };

        int main() {
            Octree< Octree<Block> > here;
            std::cout << "nothing seems to have broken" << std::endl;
        }

    Looking through the output, I notice that the destructor is being called many times before I think it should be (as the Octree is still in scope), and the end of the output also shows:

        del:0
        del:0
        del:1
        del:2
        del:3

        Process returned -1073741819 (0xC0000005)   execution time : 1.685 s
        Press any key to continue.

    For some reason the destructor is going through the same point in the loop twice (0) and then dying. All of this occurs before the "nothing seems to have broken" line, which I expected before any dtor was called. Thanks in advance :)

    EDIT: The code I posted had some things removed that I thought were unnecessary, but after copying and compiling the code I pasted, I no longer get the error. What I removed were other integer attributes of the class. Here is the original:

        #include <iostream>

        class Block {
        public:
            int x, y, z;
            int type;
            Block() { x = 1; y = 2; z = 3; type = -1; }
            Block(int xx, int yy, int zz, int ty) { x = xx; y = yy; z = zz; type = ty; }
            Block(int xx, int yy, int zz) { x = xx; y = yy; z = zz; type = 0; }
        };

        template <class T>
        class Octree {
            int x, y, z;
            int size;
            T* children[8];
        public:
            ~Octree() {
                for (int i = 0; i < 8; i++) {
                    std::cout << "del:" << i << std::endl;
                    delete children[i];
                }
            }
            Octree(int xx, int yy, int zz, int size) {
                x = xx; y = yy; z = zz; size = size;
                for (int i = 0; i < 8; i++)
                    children[i] = new T;
            }
            Octree() {
                Octree(0, 0, 0, 10);
            }
            // place newchild in array at [i]
            void set_child(int i, T* newchild) {
                children[i] = newchild;
            }
            // return child at [i]
            T* get_child(int i) {
                return children[i];
            }
            // place newchild at [i] and return the old [i]
            T* swap_child(int i, T* newchild) {
                T* p = children[i];
                children[i] = newchild;
                return p;
            }
        };

        int main() {
            Octree< Octree<Block> > here;
            std::cout << "nothing seems to have broken" << std::endl;
        }

    Also, as for the problems with set_child, get_child and swap_child leading to possible memory leaks: this will be solved, as a wrapper class will either use get before set, or use swap to get the old child and write it out to disk before freeing the memory itself. I am glad that it is not my memory management failing but rather another error. I have not made a copy and/or assignment operator yet, as I was just testing the block tree out; I will almost certainly make them all private very soon. This version spits out -1073741819. Thank you all for your suggestions, and I apologise for hijacking my own thread :$

  • Java Appengine APPSTATS causing java out of memory error

    - by aloo
    I have several servlets in my Java App Engine app that do in-memory sorting and take on the order of seconds to complete. These complete error-free. However, I recently enabled Appstats for App Engine and started receiving the following error:

        java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Unknown Source)
            at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
            at java.lang.AbstractStringBuilder.append(Unknown Source)
            at java.lang.StringBuilder.append(Unknown Source)
            at java.lang.StringBuilder.append(Unknown Source)
            at java.lang.StringBuilder.append(Unknown Source)
            at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:344)
            at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:332)
            at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printUnknownFields(TextFormat.java:249)
            at com.google.appengine.repackaged.com.google.protobuf.TextFormat.print(TextFormat.java:47)
            at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printToString(TextFormat.java:73)
            at com.google.appengine.tools.appstats.Recorder.makeSummary(Recorder.java:157)
            at com.google.appengine.tools.appstats.Recorder.makeSyncCall(Recorder.java:239)
            at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:98)
            at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:54)
            at com.google.appengine.api.datastore.PreparedQueryImpl.runQuery(PreparedQueryImpl.java:127)
            at com.google.appengine.api.datastore.PreparedQueryImpl.asQueryResultList(PreparedQueryImpl.java:81)
            at org.datanucleus.store.appengine.query.DatastoreQuery.fulfillEntityQuery(DatastoreQuery.java:379)
            at org.datanucleus.store.appengine.query.DatastoreQuery.executeQuery(DatastoreQuery.java:289)
            at org.datanucleus.store.appengine.query.DatastoreQuery.performExecute(DatastoreQuery.java:239)
            at org.datanucleus.store.appengine.query.JDOQLQuery.performExecute(JDOQLQuery.java:89)
            at org.datanucleus.store.query.Query.executeQuery(Query.java:1489)
            at org.datanucleus.store.query.Query.executeWithArray(Query.java:1371)
            at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
            at com.poo.pooserver.dataaccess.DataAccessHelper.getPooStream(DataAccessHelper.java:204)
            at com.poo.pooserver.GetPooStreamServlet.doPost(GetPooStreamServlet.java:58)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.google.appengine.tools.appstats.AppstatsFilter.doFilter(AppstatsFilter.java:92)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)

  • iPhone SDK: Memory leak on picker

    - by bbftsoftware
    I have created a picker for my users to pick from a list of countries. The problem is that repeated opening and closing of the picker results in an "EXC_BAD_ACCESS" error. I suspect it might be a memory leak, but I am not sure. I was hoping someone could shed some insight into why this might be happening.

        //data source for UIPicker
        NSArray *arrayCountryChoices;
        arrayCountryChoices = [[NSArray alloc] initWithObjects:@"TK=TOKELAU",
            @"TJ=TAJIKISTAN", @"TH=THAILAND", @"TG=TOGO",
            @"TF=FRENCH SOUTHERN TERRITORIES", @"GY=GUYANA", @"TD=CHAD", nil];

        //opening the picker
        CountryViewController *countryVC = [[CountryViewController alloc] initWithNibName:@"CountryView" bundle:nil];
        countryVC.delegate = self;
        [self presentModalViewController:countryVC animated:YES];
        [countryVC release];

        //here is where I grab the data
        //close country selector
        [self dismissModalViewControllerAnimated:YES];
        //parse out code
        NSString *strCode = [chosenCountry substringToIndex:2];
        //set the gui
        txtCountry.text = strCode;

    I think it may be because I am trying to release the country selector before the delegate has a chance to get its data? I am also wondering if I should hold off releasing the picker until the screen that calls it is released. Thanks in advance.

  • Lambdas within Extension methods: Possible memory leak?

    - by Oliver
    I just gave an answer to a quite simple question by using an extension method. But after writing it down, I remembered that you can't unsubscribe a lambda from an event handler. So far, no big problem. But how does all this behave within an extension method? Below is my code snippet again. Can anyone enlighten me as to whether this will lead to myriads of timers hanging around in memory if you call this extension method multiple times? I would say no, because the scope of the timer is limited to this function, so after leaving it no one else has a reference to the object. I'm just a little unsure because we're inside a static function in a static class.

        public static class LabelExtensions
        {
            public static Label BlinkText(this Label label, int duration)
            {
                Timer timer = new Timer();
                timer.Interval = duration;
                timer.Tick += (sender, e) =>
                {
                    timer.Stop();
                    label.Font = new Font(label.Font, label.Font.Style ^ FontStyle.Bold);
                };
                label.Font = new Font(label.Font, label.Font.Style | FontStyle.Bold);
                timer.Start();
                return label;
            }
        }

  • Loading a RSA private key from memory using libxmlsec

    - by ereOn
    Hello, I'm currently using libxmlsec in my C++ software, and I'm trying to load an RSA private key from memory. To do this, I searched through the API and found the function xmlSecCryptoAppKeyLoadMemory. It takes binary data, a size, a format string, and several PEM-callback-related parameters. When I call the function, it just hangs, uses 100% of the CPU, and never returns. Quite annoying, because I have no way of finding out what is wrong. Here is my code:

        d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory(
            reinterpret_cast<const xmlSecByte*>(data),
            static_cast<xmlSecSize>(datalen),
            xmlSecKeyDataFormatBinary,
            NULL,
            NULL,
            NULL
        );

    data is a const char* pointing to the raw bytes of my RSA key (obtained with i2d_RSAPrivateKey(), from OpenSSL), and datalen is the size of data. My test private key doesn't have a passphrase, so I decided not to use the callbacks for the time being. Has anyone already done something similar? Do you see anything that I could change or test to make progress on this problem? I only discovered the library yesterday, so I might be missing something obvious here; I just can't see it. Thank you very much for your help.

  • JVM segmentation faults due to "Invalid memory access of location"

    - by Dan
    I have a small project written in Scala 2.9.2, with unit tests written using ScalaTest. I use SBT for compiling and running my tests. Running sbt test on my project makes the JVM segfault regularly, but just compiling and running my project from SBT works fine. Here is the exact error message:

        Invalid memory access of location 0x8 rip=0x10959f3c9
        [1]    11925 segmentation fault  sbt

    I cannot locate a core dump anywhere, but would be happy to provide it if it can be obtained. Running java -version results in this:

        java version "1.6.0_37"
        Java(TM) SE Runtime Environment (build 1.6.0_37-b06-434-11M3909)
        Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01-434, mixed mode)

    But I've also got Java 7 installed (though I was never able to actually run a Java program with it, afaik). Another issue that may be related: some of my test cases have titles containing parentheses like ( and ). SBT or ScalaTest (not sure which) will consequently insert square brackets in the middle of the output. For example, a test case with the name (..)..(..) might suddenly look like (..[)..](..). Any help resolving these issues is much appreciated :-)

    EDIT: I installed the Java 7 JDK, so now java -version shows the right thing:

        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

    This also means that I now get a more detailed segfault error and a core dump:

        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x000000010a71a3e3, pid=16830, tid=19459
        #
        # JRE version: 7.0_07-b10
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.3-b01 mixed mode bsd-amd64 compressed oops)
        # Problematic frame:
        # V  [libjvm.dylib+0x3cd3e3]

    And the dump.

  • AS3 Memory Conservation (Loaders/BitmapDatas/Bitmaps/Sprites)

    - by rinogo
    I'm working on reducing the memory requirements of my AS3 app. I understand that once there are no remaining references to an object, it becomes a candidate for garbage collection. Is it even worth trying to remove references to Loaders that are no longer actively in use? My first thought is that it is not worth it. Here's why: my Sprites need perpetual references to the Bitmaps they display (since the Sprites are always visible in my app), so the Bitmaps cannot be garbage collected. The Bitmaps rely on BitmapData objects for their data, so we can't get rid of those either. (Up until this point it's all pretty straightforward.) Here's where I'm unsure of what's going on: does a BitmapData keep a reference to the data loaded by the Loader? In other words, is BitmapData essentially just a wrapper with a reference to loader.content, or is the data copied from loader.content into the BitmapData? If a reference is maintained, then I gain nothing by garbage collecting my Loaders... Thoughts?

  • Why am I getting memory problems?

    - by Tattat
    I got this error from Xcode:

        objc[8422]: FREED(id): message release sent to freed object=0x3b120c0

    I googled it and found that it is related to memory, but I don't know which line of code is at fault - any ideas?

        @implementation MyAppDelegate

        @synthesize window;
        @synthesize viewController;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // Override point for customization after app launch
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
            [self changeScene:[MainGameMenuScene class]];
        }

        - (void)dealloc {
            [viewController release];
            [window release];
            [super dealloc];
        }

        - (void)changeScene:(Class)scene {
            BOOL animateTransition = true;
            if (animateTransition) {
                [UIView beginAnimations:nil context:NULL];
                [UIView setAnimationDuration:0.5];
                //does nothing without this line.
                [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:window cache:YES];
            }
            if (viewController.view != nil) {
                [viewController.view removeFromSuperview]; //remove view from window's subviews.
                [viewController.view release]; //release gamestate
            }
            viewController.view = [[scene alloc] initWithFrame:CGRectMake(0, 0, IPHONE_WIDTH, IPHONE_HEIGHT) andManager:self];
            //now set our view as visible
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
            if (animateTransition) {
                [UIView commitAnimations];
            }
        }

  • Help to resolve 'Out of memory' exception when calling DrawImage

    - by Jack Juiceson
    Hi guys, about one percent of our users experience a sudden crash while using our application. The logs show the exception below; the only thing they have in common that I've seen so far is that they all run XP SP3. Thanks in advance.

        Out of memory.
            at System.Drawing.Graphics.CheckErrorStatus(Int32 status)
            at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttrs, DrawImageAbort callback, IntPtr callbackData)
            at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttr, DrawImageAbort callback)
            at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttr)
            at System.Windows.Forms.ControlPaint.DrawBackgroundImage(Graphics g, Image backgroundImage, Color backColor, ImageLayout backgroundImageLayout, Rectangle bounds, Rectangle clipRect, Point scrollOffset, RightToLeft rightToLeft)
            at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset)
            at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle)
            at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent)
            at System.Windows.Forms.ScrollableControl.OnPaintBackground(PaintEventArgs e)
            at System.Windows.Forms.Control.PaintTransparentBackground(PaintEventArgs e, Rectangle rectangle, Region transparentRegion)
            at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset)
            at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle)
            at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent)
            at System.Windows.Forms.ScrollableControl.OnPaintBackground(PaintEventArgs e)
            at System.Windows.Forms.Control.PaintTransparentBackground(PaintEventArgs e, Rectangle rectangle, Region transparentRegion)
            at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset)
            at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle)
            at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent)
            at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer, Boolean disposeEventArgs)
            at System.Windows.Forms.Control.WmPaint(Message& m)
            at System.Windows.Forms.Control.WndProc(Message& m)
            at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
            at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
            at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

        Operation System Information
        ----------------------------
        Name = Windows XP
        Edition = Home
        Service Pack = Service Pack 3
        Version = 5.1.2600.196608
        Bits = 32

  • Android maps out of memory error

    - by SamB09
    Hi, sometimes when running a Google Maps program with an overlay image, I receive a bitmap out-of-memory error. It always seems to happen at a random point in the app, and I'm not sure how to solve it. Anyone have any ideas? My overlay code is below; I'm not sure if you need to see the class it's called from, though.

        public class MyOverlay2 extends Overlay {
            private static final double MAX_TAP_DISTANCE_KM = 3;
            // Rough approximation - one degree = 50 nautical miles
            private static final double MAX_TAP_DISTANCE_DEGREES = MAX_TAP_DISTANCE_KM * 0.5399568 * 50;
            private final GeoPoint gPoint;
            private final Context cont;
            private final int draw;
            // private final int lat;

            // constructor will be called in the userLocation class to draw an overlay image
            public MyOverlay2(Context cont, GeoPoint gPoint1, int draw) {
                this.cont = cont;
                this.gPoint = gPoint1;
                this.draw = draw;
            }

            @Override
            public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
                super.draw(canvas, mapView, shadow);
                // Convert geo coordinates to screen pixels
                Point screenPoint = new Point();
                mapView.getProjection().toPixels(gPoint, screenPoint);
                // Read the image from the xml resource using a bitmap factory
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inSampleSize = 1;
                Bitmap preview_bitmap = BitmapFactory.decodeResource(cont.getResources(), R.drawable.monday12, options);
                // draw the image centred on the specified screen point
                canvas.drawBitmap(preview_bitmap,
                        screenPoint.x - preview_bitmap.getWidth() / 2,
                        screenPoint.y - preview_bitmap.getHeight() / 2, null);
                return true;
            }

            @Override
            public boolean onTap(GeoPoint s, MapView mapView) {
                // Handle tapping on the overlay here
                return true;
            }
        }
