Search Results

Search found 12267 results on 491 pages for 'out of memory'.

Page 124 of 491

  • C String literals: Where do they go?

    - by Chris Cooper
    I have read a lot of posts about "string literals" on SO, most of which have been about best-practices, or where the literal is NOT located in memory. I am interested in where the string DOES get allocated/stored, etc. I did find one intriguing answer here, saying: Defining a string inline actually embeds the data in the program itself and cannot be changed (some compilers allow this by a smart trick, don't bother). But it had to do with C++, not to mention that it says not to bother. I am bothering. =D So my question is, again, where and how is my string literal kept? Why should I not try to alter it? Does the implementation vary by platform? Does anyone care to elaborate on the "smart trick?" Thanks for any explanations.
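
    A minimal illustration of the usual arrangement (hedged: the C and C++ standards only say that modifying a string literal is undefined behaviour; a read-only data section such as .rodata is the typical implementation, not a guarantee):

        #include <stdio.h>

        int main(void) {
            const char *literal = "hello";  /* pointer to the literal, which most toolchains place in a
                                               read-only section and may share between identical literals */
            char copy[] = "hello";          /* array initialised FROM the literal; this copy is writable */

            copy[0] = 'H';                  /* fine: modifies the local copy */
            /* ((char *)literal)[0] = 'H';     undefined behaviour: typically faults on a read-only page */

            printf("%s %s\n", copy, literal);
            return 0;
        }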

    Read the article

  • .Net: Prevent an object from being paged out (VirtualLock equivalent)

    - by Gene
    How would one go about keeping an object in memory such that it won't be paged out by the OS in .NET? I.e., something similar to VirtualLock, but operating on an object, such that if compacting occurs and the object is moved it still would not be paged out, etc. (I suppose one could pin the object, determine which pages it belongs to, and then VirtualLock those pages, but that seems undesirable for many reasons.) If possible, could you point me to a reference or working sample? (C# ideally) Many thanks in advance!

    Read the article

  • IPC speed and compare

    - by Lily
    I am trying to implement a real-time application which involves IPC across different modules. The modules are doing some data-intensive processing. I am using a message queue (ActiveMQ) as the backbone for IPC in the prototype, which is easy (considering I am a total IPC newbie), but it's very very slow. Here is my situation: I have isolated the IPC part so that I can change it to other approaches in the future. I have 3 weeks to implement another, faster version. ;-( The IPC should be fast, but also comparatively easy to pick up. I have been looking into different IPC approaches: socket, pipe, shared memory. However, I have no experience in IPC, and there is definitely no way I could fail this demo in 3 weeks... Which IPC approach would be the safest to start with? Thanks. Lily
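
    For scale, the shared-memory route mentioned above can be quite small. A rough POSIX sketch (assumes a POSIX system, links with -lrt on Linux, and leaves out the synchronisation — a semaphore or ring buffer — that a real module-to-module channel needs):

        // writer.cpp: create a named shared segment and drop a message into it.
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            const char* name = "/demo_ipc";                    // hypothetical segment name
            int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
            if (fd == -1) { perror("shm_open"); return 1; }
            if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

            void* mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (mem == MAP_FAILED) { perror("mmap"); return 1; }

            std::strcpy(static_cast<char*>(mem), "hello from module A");  // any process mapping /demo_ipc sees this

            munmap(mem, 4096);
            close(fd);
            // shm_unlink(name) once every module is done with the segment.
            return 0;
        }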

    Read the article

  • Why does this program crash: passing of std::string between DLLs

    - by msiemeri
    Hello everyone. I have some trouble figuring out why the following crashes (MSVC9): //// the following compiles to A.dll with release runtime linked dynamically //A.h #include <string> class A { public: __declspec(dllexport) std::string getString(); }; //A.cpp #include "A.h" std::string A::getString() { return "I am a string."; } //// the following compiles to main.exe with debug runtime linked dynamically #include "A.h" int main() { A a; std::string s = a.getString(); return 0; } // crash on exit Obviously (?) this is due to the different memory models for the executable and DLL. Could it be that the string A::getString() returns is being allocated in A.dll and freed in main.exe? If so, why - and what would be a safe way to pass strings between DLLs (or executables, for that matter)? Without using wrappers like shared_ptr with a custom deleter.
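
    One widely used workaround, sketched here as a guess at what a safer boundary could look like: export a plain C function that copies into a caller-supplied buffer, so no heap memory ever changes ownership between the DLL's runtime and the EXE's runtime (names are illustrative):

        // A.h — hypothetical C-style interface; no std::string crosses the DLL boundary.
        #include <cstddef>
        extern "C" __declspec(dllexport) size_t GetString(char* buffer, size_t bufferSize);

        // A.cpp — built into A.dll.
        #include "A.h"
        #include <cstring>
        size_t GetString(char* buffer, size_t bufferSize) {
            const char* text = "I am a string.";
            const size_t needed = std::strlen(text) + 1;
            if (buffer && bufferSize >= needed)
                std::memcpy(buffer, text, needed);   // copy into memory the caller owns
            return needed;                           // caller can retry with a bigger buffer
        }

        // main.cpp — built into main.exe; the std::string lives entirely in the EXE's runtime.
        //   char buf[64];
        //   if (GetString(buf, sizeof buf) <= sizeof buf) { std::string s(buf); }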

    Read the article

  • A very basic auto-expanding list/array

    - by MainMa
    Hi, I have a method which returns an array of fixed-type objects (let's say MyObject). The method creates a new empty Stack<MyObject>. Then, it does some work and pushes some number of MyObjects to the end of the Stack. Finally, it returns Stack.ToArray(). It does not change already added items or their properties, nor remove them. The number of elements to add will cost performance. There is no need to sort/order the elements. Is Stack the best thing to use? Or should I switch to Collection or List to ensure better performance and/or lower memory cost?

    Read the article

  • Release objects before returning a value based on those objects?

    - by Moshe
    Consider the following method, where I build a string and return it. I would like to release the building blocks of the string, but then the string is based on values that no longer exist. Now what? Am I leaking memory and if so, how can I correct it? - (NSString *) getMiddahInEnglish:(int)day{ NSArray *middah = [[NSArray alloc] initWithObjects:@"Chesed", @"Gevurah", @"Tiferes", @"Netzach", @"Hod", @"Yesod", @"Malchus", nil]; NSString *firstPartOfMiddah = [NSString stringWithFormat: @"%@", [middah objectAtIndex: ((int)day% 7)-1]]; NSString *secondPartOfMiddah = [NSString stringWithFormat: @"%@", [middah objectAtIndex: ((int)day / 7)]]; NSString *middahStr = [NSString stringWithFormat:@"%@ She'bi%@", firstPartOfMiddah, secondPartOfMiddah]; [middah release]; [firstPartOfMiddah release]; [secondPartOfMiddah release]; return middahStr; }

    Read the article

  • Releasing NSData causes exception...

    - by badmanj
    Hi, Can someone please explain why the following code causes my app to bomb? NSData *myImage = UIImagePNGRepresentation(imageView.image); : [myImage release]; If I comment out the 'release' line, the app runs... but after calling the function containing this code a few times I get a crash - I guess caused by a memory leak. Even if I comment EVERYTHING else in the function out and just leave those two lines, when the release executes, the app crashes. I'm sure this must be a newbie "you don't know how to clean up your mess properly" kind of thing ;-) Cheers, Jamie.

    Read the article

  • iPhone: custom UITableViewCell with Interface Builder -> how to release cell objects?

    - by Stefan Klumpp
    The official documentation tells me I have to do these three things in order to manage my memory for "nib objects" correctly. @property (nonatomic, retain) IBOutlet UIUserInterfaceElementClass *anOutlet; "You should then either synthesize the corresponding accessor methods, or implement them according to the declaration, and (in iPhone OS) release the corresponding variable in dealloc." - (void)viewDidUnload { self.anOutlet = nil; [super viewDidUnload]; } That makes sense for a normal view. However, how am I gonna do that for a UITableView with custom UITableViewCells loaded through a .nib-file? There the IBOutlets are in MyCustomCell.h (inherited from UITableViewCell), but that is not the place where I load the nib and apply it to the cell instances, because that happens in MyTableView.m. So do I still release the IBOutlets in the dealloc of MyCustomCell.m or do I have to do something in MyTableView.m? Also MyCustomCell.m doesn't have a - (void)viewDidUnload {} where I can set my IBOutlets to nil, while my MyTableView.m does.

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered: "orange" it should match an entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or series of wildcard characters like say "or__ge" which would also match "orange". The key requirements are: * this should be as fast as possible. * use the smallest amount of memory to achieve it. If the size of the word list was small I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
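
    One direction that is often suggested for this shape of problem (sketched below, not tuned for memory): build a trie over the dictionary and do a recursive lookup that only branches at wildcard positions, assuming lowercase a-z input and '_' as the wildcard:

        #include <array>
        #include <memory>
        #include <string>

        // A minimal trie node: 26 children, one per lowercase letter.
        struct TrieNode {
            std::array<std::unique_ptr<TrieNode>, 26> child;
            bool isWord;
            TrieNode() : isWord(false) {}
        };

        void insert(TrieNode& root, const std::string& word) {
            TrieNode* node = &root;
            for (std::string::size_type i = 0; i < word.size(); ++i) {
                std::unique_ptr<TrieNode>& slot = node->child[word[i] - 'a'];
                if (!slot) slot.reset(new TrieNode());
                node = slot.get();
            }
            node->isWord = true;
        }

        // '_' matches any single letter; everything else must match exactly.
        bool matches(const TrieNode& node, const std::string& pattern, std::string::size_type i = 0) {
            if (i == pattern.size()) return node.isWord;
            if (pattern[i] == '_') {                             // wildcard: try every branch at this depth
                for (int c = 0; c < 26; ++c)
                    if (node.child[c] && matches(*node.child[c], pattern, i + 1)) return true;
                return false;
            }
            const std::unique_ptr<TrieNode>& next = node.child[pattern[i] - 'a'];
            return next && matches(*next, pattern, i + 1);
        }

        // Usage: TrieNode root; insert(root, "orange"); matches(root, "or__ge");  // -> true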

    Read the article

  • Class members allocation on heap/stack? C++

    - by simplebutperfect
    If a class is declared as follows: class MyClass { char * MyMember; MyClass() { MyMember = new char[250]; } ~MyClass() { delete[] MyMember; } }; And it could also be done like this: class MyClass { char MyMember[250]; }; How does a class get allocated on the heap, like if I do MyClass * Mine = new MyClass(); Does the allocation also include the 250 bytes in the second example along with the class instantiation? And will the member be valid for the whole lifetime of the MyClass object? As for the first example, is it practical to allocate class members on the heap?
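
    A small sketch of the difference (sizes are illustrative; the exact numbers depend on the platform and padding):

        #include <cstdio>

        struct PointerMember {
            char* data;                                // only the pointer lives inside the object
            PointerMember() : data(new char[250]) {}   // the 250 bytes are a second, separate heap block
            ~PointerMember() { delete[] data; }
        };

        struct ArrayMember {
            char data[250];                            // the 250 bytes are part of the object itself
        };

        int main() {
            std::printf("sizeof(PointerMember) = %zu\n", sizeof(PointerMember)); // pointer size, e.g. 8
            std::printf("sizeof(ArrayMember)   = %zu\n", sizeof(ArrayMember));   // at least 250

            // new ArrayMember allocates one heap block that already contains the 250 bytes,
            // and that array is valid for the whole lifetime of the object.
            ArrayMember* a = new ArrayMember();
            delete a;

            // new PointerMember allocates the object on the heap, and its constructor then
            // performs a second allocation for the character buffer.
            PointerMember* p = new PointerMember();
            delete p;
            return 0;
        }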

    Read the article

  • Conceptual question about NSAutoreleasePools

    - by ryyst
    In my Cocoa program, wouldn't a really simple way of dealing with autoreleased objects be to just create a timer object inside the app delegate that calls the following method e.g. every 10 seconds: if (pool) { // Release & drain the current pool to free the memory. [pool release]; } // Create a new pool. pool = [[NSAutoreleasePool alloc] init]; The only problems I can imagine are: 1) If the above code runs in a separate thread, an object might get autoreleased between the release call to the old pool and the creation of the new pool - that seems highly unlikely though. 2) It's obviously not that efficient, because the pool might get released if there's nothing in it. Likewise, in the 10 second gap, many many objects might be autoreleased, causing the pool to grow a lot. Still, the above solution seems pretty suitable to small and simple projects. Why doesn't anybody use it? What's the best practice of using NSAutoreleasePools?

    Read the article

  • Are there any modern platforms with non-IEEE C/C++ float formats?

    - by Patrick Niedzielski
    Hi all, I am writing a video game, Humm and Strumm, which requires a network component in its game engine. I can deal with differences in endianness easily, but I have hit a wall in attempting to deal with possible float memory formats. I know that modern computers all have a standard integer format, but I have heard that they may not all use the IEEE standard for floating-point numbers. Is this true? While certainly I could just output it as a character string into each packet, I would still have to convert to a "well-known format" of each client, regardless of the platform. The standard printf() and atod() would be inadequate. Please note, because this game is a Free/Open Source Software program that will run on GNU/Linux, *BSD, and Microsoft Windows, I cannot use any proprietary solutions, nor any single-platform solutions. Cheers, Patrick
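
    A small C++ check that is sometimes used here (hedged: it only tells you whether the local float type claims IEC 559 / IEEE 754 conformance; it does not by itself define a wire format or handle byte order):

        #include <cstdint>
        #include <cstdio>
        #include <cstring>
        #include <limits>

        static_assert(std::numeric_limits<float>::is_iec559,
                      "this build target does not use IEEE 754 binary32 for float");

        // If every platform you target passes the assertion above, a float can be sent
        // as its 32-bit pattern plus an agreed byte order.
        std::uint32_t to_wire(float value) {
            std::uint32_t bits;
            std::memcpy(&bits, &value, sizeof bits);   // type-pun safely via memcpy
            return bits;                               // byte-swap to network order before sending
        }

        int main() {
            std::printf("float is IEEE 754: %d\n", std::numeric_limits<float>::is_iec559);
            std::printf("bits of 1.5f: 0x%08x\n", to_wire(1.5f));  // 0x3fc00000 on IEEE platforms
            return 0;
        }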

    Read the article

  • Any sense to set obj = null(Nothing) in Dispose()?

    - by serhio
    Is there any sense in setting a custom object to null (Nothing in VB.NET) in the Dispose() method? Could this prevent memory leaks, or is it useless?! Let's consider two examples: public class Foo : IDisposable { private Bar bar; // standard custom .NET object public Foo(Bar bar) { this.bar = bar; } public void Dispose() { bar = null; // any sense? } } public class Foo : RichTextBox { // this could also be: GDI+, TCP socket, SQL Connection, other "heavy" object private Bitmap backImage; public Foo(Bitmap backImage) { this.backImage = backImage; } protected override void Dispose(bool disposing) { if (disposing) { backImage = null; // any sense? } } }

    Read the article

  • WeakReferences are not freed in embedded OS

    - by Carsten König
    I've got a strange behavior here: I get a massive memory leak in production running a WPF application that runs on a DLOG-Terminal (Windows Embedded Standard SP1) but behaves perfectly fine if I run it locally on a normal desktop (Win7 prof.). After many unsuccessful attempts to find any problem, I put one of those terminals directly beside my monitor, installed the ANTS Memory Profiler and did a one-hour test run simulating user operations on both the terminal and my development PC. The result is that, for some strange reason, the embedded system piles up a huge amount of WeakReference and EffectiveValueEntry[] objects. Here are some pictures: Development (PC): And the terminal: Just look at the class list... Has anyone seen something like this before and are there known solutions to this? Where can I get help? (PS: the terminals were installed with images prepared for .NET 4.)

    Read the article

  • NSAutoreleasePool carrying across methods?

    - by Tim
    I'm building an iPhone application where I detach some threads to do long-running work in the background so as not to hang the UI. I understand that threads need NSAutoreleasePool instances for memory management. What I'm not sure about is if the threaded method calls another method - does that method also need an NSAutoreleasePool? Example code: - (void)primaryMethod { [self performSelectorInBackground:@selector(threadedMethod) withObject:nil]; } - (void)threadedMethod { NSAutoreleasePool *aPool = [[NSAutoreleasePool alloc] init]; // Some code here [self anotherMethod]; // Maybe more code here [aPool drain]; } - (void)anotherMethod { // More code here } The reason I ask is I'm receiving errors that objects are being autoreleased with no pool in place, and are "just leaking." I've seen other questions where people didn't have autorelease pools in place at all, and I understand why an autorelease pool is needed. I'm specifically interested in finding out whether an autorelease pool created in (in this example) threadedMethod applies to objects created in anotherMethod.

    Read the article

  • Displaying/scrolling through heaps of pictures in the browser

    - by user347256
    I want to be able to browse through heaps of images in the browser, fast. The easy way (just load 2000 images and scroll) slows down the scrolling a lot, presumably because there are too many images to keep in memory. I'd love to hear thoughts on strategies to be able to quickly scroll through tens of thousands of images (as if you were on your desktop) in the browser. What would the expected bottlenecks be? How to address them? How to fake things so that the user experience is still good? Examples in the wild?

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector< #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector<, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds. 
The actual Access Violation exception occurs within the std::vector< code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH
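
    One reading of the crash, offered as a guess rather than a diagnosis: std::vector expects allocate() to return a new block while the old block stays valid, because _M_insert_overflow copies the existing elements into the new storage before releasing the old one. This allocator instead reallocates the existing block, and LocalReAlloc (with LHND requesting moveable, zero-initialised memory) is free to move it — so when the vector grows past 256 elements, __copy_trivial reads from storage that is no longer there. A minimal sketch of an allocator that keeps LocalAlloc underneath but honours that contract; the final LocalAlloc'd array for the API would then be built once, at the end, by copying file_list out:

        // Hypothetical rework: every allocate() returns a fresh LocalAlloc block and
        // deallocate() frees it, which is the contract the vector relies on.
        #include <windows.h>
        #include <new>
        #include <cstddef>

        template< class T >
        struct LocalHeapAllocator {
            typedef T value_type;
            // ... the remaining typedefs and members required by the pre-C++11 allocator
            //     interface stay the same as in the original LocalAllocator ...

            T* allocate( size_t n, const void* /*hint*/ = 0 ) {
                void* p = ::LocalAlloc( LPTR, sizeof( T ) * n );  // new block; old block left untouched
                if( !p ) throw std::bad_alloc();
                return static_cast< T* >( p );
            }
            void deallocate( T* p, size_t /*n*/ ) {
                ::LocalFree( p );   // safe: the vector copies the elements out before this call
            }
        };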

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of unicorn workers. So what is the best way to determine the number of unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can - for now - go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?

    Read the article

  • Release an object without a pointer?

    - by Kai Friis
    I’ve just started developing for iPhone and am trying to get my head around memory management. I made a small program that shows a map and an annotation on the map. For the annotation I made a simple class that implements the MKAnnotation protocol. To create and add the annotation I wrote this: [self.myMapView addAnnotation:[[MyAnnotation alloc] init]]; It worked fine until I tried to release the object. Nothing to release. This is what I would have done in C#, I guess it doesn’t work without garbage collection? So is this the only way to do it? MyAnnotation *myAnnotation = [[MyAnnotation alloc] init]; [self.myMapView addAnnotation: myAnnotation]; [myAnnotation release];

    Read the article

  • How to return a std::string from C's "getcwd" function

    - by rubenvb
    Sorry to keep hammering on this, but I'm trying to learn :). Is this any good? And yes, I care about memory leaks. I can't find a decent way of preallocating the char*, because there simply seems to be no cross-platform way. const string getcwd() { char* a_cwd = getcwd(NULL,0); string s_cwd(a_cwd); free(a_cwd); return s_cwd; } UPDATE2: without Boost or Qt, the most common stuff can get long-winded (see accepted answer)
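
    For comparison, a sketch of the loop-and-grow approach (getcwd(NULL, 0) is a glibc extension rather than something every platform guarantees; this version assumes a POSIX <unistd.h> and keeps error handling minimal):

        #include <unistd.h>
        #include <cerrno>
        #include <stdexcept>
        #include <string>
        #include <vector>

        std::string current_dir() {
            std::vector<char> buf(256);
            for (;;) {
                if (::getcwd(&buf[0], buf.size()) != 0)
                    return std::string(&buf[0]);        // copied into the string; the vector cleans itself up
                if (errno != ERANGE)
                    throw std::runtime_error("getcwd failed");
                buf.resize(buf.size() * 2);             // buffer was too small: grow and retry
            }
        }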

    Read the article

  • PostGres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query for the next batch? So, what I want to do is run a query and build an array, like this: $result = pg_query("SELECT * FROM myTable"); $i = 0; while($row = pg_fetch_array($result) ) { $myArray[$i]['id'] = $row['id']; $myArray[$i]['name'] = $row['name']; $i++; } But, I know that there will be several hundred thousand rows, so I wanted to do it in batches of 10,000... 1 - 9,999, then 10,000 - 19,999, etc... The reason is that I keep getting this error: Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes) Incidentally, I don't understand how 3 bytes could exhaust 512M... So if that's something I can just change, that'd be great, although it still might be better to do this in batches?

    Read the article

  • Garbage Collection in Java

    - by simion
    On the slides I am revising from it says the following: Live objects can be identified either by maintaining a count of the number of references to each object, or by tracing chains of references from the roots. Reference counting is expensive – it needs action every time a reference changes and it doesn’t spot cyclical structures, but it can reclaim space incrementally. Tracing involves identifying live objects only when you need to reclaim space – moving the cost from general access to the time at which the GC runs, typically only when you are out of memory. I understand the principles of why reference counting is expensive but do not understand what "doesn’t spot cyclical structures, but it can reclaim space incrementally." means. Could anyone help me out a little bit please? Thanks
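
    To make the cycle point concrete (sketched in C++, since shared_ptr is reference counting in miniature): two objects that refer to each other keep each other's count above zero forever, so a pure reference counter never reclaims them, while a tracing collector walking from the roots would see that the pair is unreachable. The "incremental" part is the flip side: each count that drops to zero frees that object immediately, without waiting for a collection pass.

        #include <memory>

        struct Node {
            std::shared_ptr<Node> other;   // strong reference; holding it keeps the target's count up
        };

        int main() {
            auto a = std::make_shared<Node>();
            auto b = std::make_shared<Node>();
            a->other = b;                  // b's count: 2
            b->other = a;                  // a's count: 2

            // When a and b leave scope, each count drops to 1, never to 0:
            // the cycle keeps itself alive and the memory is never reclaimed.
            // A tracing GC starting from the roots would reclaim both objects.
            return 0;
        }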

    Read the article

  • Measuring debug vs release of ASP.NET applications

    - by Alex Angas
    A question at work came up about building ASP.NET applications in release vs debug mode. When researching further (particularly on SO), general advice is that setting <compilation debug="true"> in web.config has a much bigger impact. Has anyone done any testing to get some actual numbers about this? Here's the sort of information I'm looking for (which may give away my experience with testing such things):

    Execution time      | Debug build   | Release build
    --------------------+---------------+---------------
    Debug web.config    | average 1     | average 2
    Retail web.config   | average 3     | average 4

    Max memory usage    | Debug build   | Release build
    --------------------+---------------+---------------
    Debug web.config    | average 1     | average 2
    Retail web.config   | average 3     | average 4

    Output file size    | Debug build   | Release build
    --------------------+---------------+---------------
                        | size 1        | size 2

    Read the article

  • Why does a freed struct in C still have data?

    - by kliketa
    When I run this code: #include <stdio.h> #include <stdlib.h> typedef struct _Food { char name [128]; } Food; int main (int argc, char **argv) { Food *food; food = (Food*) malloc (sizeof (Food)); snprintf (food->name, 128, "%s", "Corn"); free (food); printf ("%d\n", sizeof *food); printf ("%s\n", food->name); } I still get 128 Corn although I have freed food. Why is this? Is the memory really freed?
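
    For what it's worth, free() returns the block to the allocator but is not required to scrub it, so reading it afterwards is undefined behaviour that merely happens to print the old bytes here (and sizeof *food is a compile-time value, so that line "works" regardless). A small defensive sketch:

        #include <cstdio>
        #include <cstdlib>

        struct Food { char name[128]; };

        int main() {
            Food* food = static_cast<Food*>(std::malloc(sizeof(Food)));
            std::snprintf(food->name, sizeof food->name, "%s", "Corn");

            std::free(food);   // the allocator may reuse, poison, or unmap this block at any time
            food = 0;          // defensive: a later dereference now fails loudly instead of
                               // quietly showing whatever bytes were left behind
            return 0;
        }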

    Read the article
