Search Results

Search found 14113 results on 565 pages for 'memory stick'.


  • Are there any modern platforms with non-IEEE C/C++ float formats?

    - by Patrick Niedzielski
    Hi all, I am writing a video game, Humm and Strumm, which requires a network component in its game engine. I can deal with differences in endianness easily, but I have hit a wall in attempting to deal with possible float memory formats. I know that modern computers all have a standard integer format, but I have heard that they may not all use the IEEE standard for floating-point numbers. Is this true? While I could certainly just output each value as a character string into each packet, I would still have to convert it to a "well-known format" for each client, regardless of the platform. The standard printf() and atof() would be inadequate. Please note, because this game is a Free/Open Source Software program that will run on GNU/Linux, *BSD, and Microsoft Windows, I cannot use any proprietary solutions, nor any single-platform solutions. Cheers, Patrick
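
    One way around float-format differences that is sometimes worth noting: ship only integers, decomposing each value with frexp() on the sender and rebuilding it with ldexp() on the receiver. A minimal C++ sketch (pack_float/unpack_float are illustrative names, not an existing API):

        #include <cmath>
        #include <cstdint>

        // Split a float into an integer significand and a small exponent.
        // Both halves are plain integers, so only endianness handling remains.
        void pack_float(float value, std::int32_t& significand, std::int16_t& exponent)
        {
            int exp = 0;
            float frac = std::frexp(value, &exp);   // |frac| is in [0.5, 1), or 0
            significand = static_cast<std::int32_t>(frac * (1 << 30));
            exponent = static_cast<std::int16_t>(exp);
        }

        // Rebuild the float on the receiving side, whatever its native format is.
        float unpack_float(std::int32_t significand, std::int16_t exponent)
        {
            float frac = static_cast<float>(significand) / (1 << 30);
            return std::ldexp(frac, exponent);
        }

    A 30-bit significand comfortably covers a 24-bit float mantissa, so no precision is lost for IEEE-style singles; platforms with wider floats would need wider fields.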

    Read the article

  • Any sense in setting obj = null (Nothing) in Dispose()?

    - by serhio
    Does it make any sense to set a custom object to null (Nothing in VB.NET) in the Dispose() method? Could this prevent memory leaks, or is it useless? Let's consider two examples: public class Foo : IDisposable { private Bar bar; // standard custom .NET object public Foo(Bar bar) { this.bar = bar; } public void Dispose() { bar = null; // any sense? } } public class Foo : RichTextBox { // this could also be: GDI+, TCP socket, SQL connection, or another "heavy" object private Bitmap backImage; public Foo(Bitmap backImage) { this.backImage = backImage; } protected override void Dispose(bool disposing) { if (disposing) { backImage = null; // any sense? } } }

    Read the article

  • WeakReferences are not freed in embedded OS

    - by Carsten König
    I've got a strange behavior here: I get a massive memory leak in production running a WPF application on a DLOG terminal (Windows Embedded Standard SP1) that behaves perfectly fine if I run it locally on a normal desktop (Win7 prof.). After many unsuccessful attempts to find the problem, I put one of those terminals directly beside my monitor, installed the ANTS Memory Profiler, and did a one-hour test run simulating user operations on both the terminal and my development PC. The result is that, for some strange reason, the embedded system piles up a huge number of WeakReference and EffectiveValueEntry[] objects. Here are some pictures: Development (PC): And the terminal: Just look at the class list... Has anyone seen something like this before, and are there known solutions to this? Where can I get help? (PS: the terminals were installed with images prepared for .NET 4)

    Read the article

  • NSAutoreleasePool carrying across methods?

    - by Tim
    I'm building an iPhone application where I detach some threads to do long-running work in the background so as not to hang the UI. I understand that threads need NSAutoreleasePool instances for memory management. What I'm not sure about is this: if the threaded method calls another method, does that method also need an NSAutoreleasePool? Example code: - (void)primaryMethod { [self performSelectorInBackground:@selector(threadedMethod) withObject:nil]; } - (void)threadedMethod { NSAutoreleasePool *aPool = [[NSAutoreleasePool alloc] init]; // Some code here [self anotherMethod]; // Maybe more code here [aPool drain]; } - (void)anotherMethod { // More code here } The reason I ask is that I'm receiving errors saying that objects are being autoreleased with no pool in place and are "just leaking." I've seen other questions where people didn't have autorelease pools in place at all, and I understand why an autorelease pool is needed. I'm specifically interested in finding out whether an autorelease pool created in (in this example) threadedMethod applies to objects created in anotherMethod.

    Read the article

  • Displaying/scrolling through heaps of pictures in the browser

    - by user347256
    I want to be able to browse through heaps of images in the browser, fast. The easy way (just load 2000 images and scroll) slows down the scrolling a lot, presumably because there are too many images to keep in memory. I'd love to hear thoughts on strategies for quickly scrolling through tens of thousands of images in the browser, as if you were on your desktop. What would the expected bottlenecks be? How to address them? How to fake things so that the user experience is still good? Examples in the wild?

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector: #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds.
The actual Access Violation exception occurs within the std::vector code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH
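
    One detail worth noting from the trace: __copy_trivial is reading from the old buffer (0x01ae0020) while copying into the new one, and a moving LocalReAlloc() invalidates exactly that old block. A standard allocator is expected to keep a previously returned block valid until deallocate() is called on it, so a sketch of a LocalAlloc()-backed pair that allocates a fresh block each time would look roughly like this (illustrative only, not a verified fix):

        #include <windows.h>
        #include <new>        // std::bad_alloc

        // Sketch: one LocalAlloc() block per allocate() call, released in deallocate(),
        // so the container can copy from the old block into the new one safely.
        template< class T >
        struct LocalAllocPolicy
        {
            static T* allocate( size_t n )
            {
                HLOCAL block = ::LocalAlloc( LPTR, sizeof( T ) * n );   // fixed, zero-initialized
                if( NULL == block )
                    throw std::bad_alloc();
                return reinterpret_cast< T* >( block );
            }

            static void deallocate( T* p, size_t /*n*/ )
            {
                if( NULL != p )
                    ::LocalFree( reinterpret_cast< HLOCAL >( p ) );     // valid for LMEM_FIXED blocks
            }
        };

    The buffer finally handed to SubmitData() would still be LocalAlloc()'d memory, which appears to be all the downstream API requires.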

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of unicorn workers. So what is the best way to determine the number of unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can - for now - go with 4 workers, leaving room for unexpected amounts of requests? What are the metrics I should look at for determining the number of workers, and what tools can I use for that on my Ubuntu machine?

    Read the article

  • Release an object without a pointer?

    - by Kai Friis
    I’ve just started developing for iPhone and am trying to get my head around memory management. I made a small program that shows a map and an annotation on the map. For the annotation I made a simple class that implements the MKAnnotation protocol. To create and add the annotation I wrote this: [self.myMapView addAnnotation:[[MyAnnotation alloc] init]]; It worked fine until I tried to release the object - there was nothing to release. The one-liner is what I would have done in C#; I guess it doesn’t work without garbage collection? So is this the only way to do it? MyAnnotation *myAnnotation = [[MyAnnotation alloc] init]; [self.myMapView addAnnotation: myAnnotation]; [myAnnotation release];

    Read the article

  • PostGres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query for the next batch? So, what I want to do is run a query and build an array, like this: $result = pg_query("SELECT * FROM myTable"); $i = 0; while($row = pg_fetch_array($result) ) { $myArray[$i]['id'] = $row['id']; $myArray[$i]['name'] = $row['name']; $i++; } But, I know that there will be several hundred thousand rows, so I wanted to do it in batches of around 10,000 - rows 1-10,000, then the next 10,000, and so on. The reason is that I keep getting this error: Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes) Incidentally, I don't understand how allocating 3 bytes could exhaust 512M... So, if that's something that I can just change, that'd be great, although it still might be better to do this in batches?

    Read the article

  • How to return a std::string from C's "getcwd" function

    - by rubenvb
    Sorry to keep hammering on this, but I'm trying to learn :). Is this any good? And yes, I care about memory leaks. I can't find a decent way of preallocating the char*, because there simply seems to be no cross-platform way. const string getcwd() { char* a_cwd = getcwd(NULL,0); string s_cwd(a_cwd); free(a_cwd); return s_cwd; } UPDATE2: without Boost or Qt, the most common stuff can get long-winded (see accepted answer)
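
    For reference, one common pattern when getcwd(NULL, 0) (a glibc extension) is off the table is to grow a local std::vector<char> until the call succeeds; a sketch, with current_path as an illustrative name:

        #include <cerrno>
        #include <stdexcept>
        #include <string>
        #include <vector>
        #include <unistd.h>   // getcwd() on POSIX; Windows has _getcwd() in <direct.h>

        // Grow the buffer until getcwd() fits; the vector owns the memory,
        // so nothing has to be freed by hand and nothing can leak.
        std::string current_path()
        {
            std::vector<char> buf(256);
            while (getcwd(&buf[0], buf.size()) == NULL)
            {
                if (errno != ERANGE)                    // a real failure, not "buffer too small"
                    throw std::runtime_error("getcwd() failed");
                buf.resize(buf.size() * 2);             // too small: double and retry
            }
            return std::string(&buf[0]);
        }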

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-inputted words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange", it should match an entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or series of wildcard characters, like say "or__ge", which would also match "orange". The key requirements are: this should be as fast as possible, and it should use the smallest amount of memory to achieve it. If the size of the word list were small I could use a string containing all the words and use regular expressions. However, given that the word list could potentially contain hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
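
    A sketch of the 'tree' idea mentioned at the end: in a trie, a literal letter follows a single edge and a wildcard only branches at that one position, so common prefixes are stored once and lookups stay close to the pattern length (TrieNode and the '_' wildcard convention are assumptions for illustration):

        #include <map>
        #include <string>

        struct TrieNode
        {
            bool is_word = false;
            std::map<char, TrieNode> children;

            void insert(const std::string& word, size_t i = 0)
            {
                if (i == word.size()) { is_word = true; return; }
                children[word[i]].insert(word, i + 1);
            }

            bool matches(const std::string& pattern, size_t i = 0) const
            {
                if (i == pattern.size()) return is_word;
                if (pattern[i] == '_')                       // wildcard: try every child
                {
                    for (const auto& kv : children)
                        if (kv.second.matches(pattern, i + 1)) return true;
                    return false;
                }
                auto it = children.find(pattern[i]);         // literal: follow one edge
                return it != children.end() && it->second.matches(pattern, i + 1);
            }
        };

        // Usage sketch: after root.insert("orange"), root.matches("or__ge") is true.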

    Read the article

  • Measuring debug vs release of ASP.NET applications

    - by Alex Angas
    A question at work came up about building ASP.NET applications in release vs debug mode. When researching further (particularly on SO), general advice is that setting <compilation debug="true"> in web.config has a much bigger impact. Has anyone done any testing to get some actual numbers about this? Here's the sort of information I'm looking for (which may give away my experience with testing such things):

    Execution time     | Debug build   | Release build
    -------------------+---------------+---------------
    Debug web.config   | average 1     | average 2
    Retail web.config  | average 3     | average 4

    Max memory usage   | Debug build   | Release build
    -------------------+---------------+---------------
    Debug web.config   | average 1     | average 2
    Retail web.config  | average 3     | average 4

    Output file size   | Debug build   | Release build
    -------------------+---------------+---------------
                       | size 1        | size 2

    Read the article

  • Why does a freed struct in C still have data?

    - by kliketa
    When I run this code: #include <stdio.h> #include <stdlib.h> typedef struct _Food { char name [128]; } Food; int main (int argc, char **argv) { Food *food; food = (Food*) malloc (sizeof (Food)); snprintf (food->name, 128, "%s", "Corn"); free (food); printf ("%d\n", sizeof *food); printf ("%s\n", food->name); } I still get 128 and Corn, although I have freed food. Why is this? Is the memory really freed?
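
    What the question observes is the classic dangling pointer: free() returns the block to the allocator but is not required to erase it or to change the pointer variable, so reading through it is undefined behaviour that often still shows the old bytes. A small C++ sketch of the usual defensive habit (nothing here is required by the standard beyond free() itself):

        #include <cstdio>
        #include <cstdlib>

        struct Food { char name[128]; };

        int main()
        {
            Food* food = static_cast<Food*>(std::malloc(sizeof(Food)));
            if (food == NULL)
                return 1;
            std::snprintf(food->name, sizeof food->name, "%s", "Corn");

            std::free(food);   // block goes back to the allocator; its contents are indeterminate
            food = NULL;       // defensive: a later dereference now fails loudly
                               // instead of quietly showing stale bytes
            return 0;
        }

    Note also that sizeof *food is a compile-time quantity tied to the type Food, so it prints 128 whether or not the block has been freed.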

    Read the article

  • Garbage Collection in Java

    - by simion
    The slides I am revising from say the following: Live objects can be identified either by maintaining a count of the number of references to each object, or by tracing chains of references from the roots. Reference counting is expensive – it needs action every time a reference changes and it doesn’t spot cyclical structures, but it can reclaim space incrementally. Tracing involves identifying live objects only when you need to reclaim space – moving the cost from general access to the time at which the GC runs, typically only when you are out of memory. I understand why reference counting is expensive, but I do not understand what "doesn’t spot cyclical structures, but it can reclaim space incrementally" means. Could anyone help me out a little bit, please? Thanks
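
    A small C++ sketch may make the "doesn’t spot cyclical structures" part concrete: with pure reference counting (std::shared_ptr here), two objects that point at each other keep each other's count at one forever, so neither is reclaimed even once nothing else can reach them.

        #include <memory>

        struct Node
        {
            std::shared_ptr<Node> other;   // reference-counted link
        };

        int main()
        {
            auto a = std::make_shared<Node>();
            auto b = std::make_shared<Node>();
            a->other = b;                  // b's count becomes 2
            b->other = a;                  // a's count becomes 2

            // a and b go out of scope here: each count drops to 1, never to 0,
            // so both Nodes leak. A tracing collector would reclaim them, since
            // neither is reachable from any root.
            return 0;
        }

    The "reclaim space incrementally" half is the flip side: each time a count hits zero that one object is freed immediately, whereas tracing frees nothing until the collector actually runs.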

    Read the article

  • If free() knows the length of my array, why can't I ask for it in my own code?

    - by Chris Cooper
    I know that it's a common convention to pass the length of dynamically allocated arrays to functions that manipulate them: void initializeAndFree(int* anArray, int length); int main(){ int arrayLength = 0; scanf("%d", &arrayLength); int* myArray = (int*)malloc(sizeof(int)*arrayLength); initializeAndFree(myArray, arrayLength); } void initializeAndFree(int* anArray, int length){ int i = 0; for (i = 0; i < length; i++) { anArray[i] = 0; } free(anArray); } but if there's no way for me to get the length of the allocated memory from a pointer, how does free() "automagically" know what to deallocate? Why can't I get in on the magic, as a C programmer? Where does free() get its free (har-har) knowledge from?
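
    For what it's worth, the usual answer is that the allocator's size bookkeeping is private to the implementation (often a header just in front of the block, but nothing portable code is allowed to read), so callers carry the length themselves. A sketch with illustrative names:

        #include <cstddef>
        #include <cstdlib>

        // Keep the pointer and its element count together instead of asking free()
        // to reveal what only the allocator knows.
        struct IntArray
        {
            int*        data;
            std::size_t length;
        };

        IntArray make_int_array(std::size_t n)
        {
            IntArray a;
            a.data   = static_cast<int*>(std::calloc(n, sizeof(int)));  // zero-initialized
            a.length = (a.data != NULL) ? n : 0;
            return a;
        }

        void free_int_array(IntArray* a)
        {
            std::free(a->data);
            a->data   = NULL;
            a->length = 0;
        }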

    Read the article

  • In Java, which is better - three arrays of booleans or one array of bytes?

    - by joe_shmoe
    I know the question sounds silly, but consider this: I have an array of items and a labelling algorithm. At any point each item is in one of three states. The current version holds these states in a byte array, where 0, 1 and 2 represent the three states. Alternatively, I could have three arrays of booleans - one for each state. Which is better (consumes less memory) depends on how the JVM (Sun's version) stores the arrays - is a boolean represented by 1 bit? (P.S. don't start with all that "this is not the way OO/Java works" - I know, but here performance comes first. Plus the algorithm is simple and perfectly readable even in this form). Thanks a lot

    Read the article

  • Allocated Private Bytes keeps going up in one computer but not the other

    - by Jacob
    OK, this may sound weird, but here goes. There are 2 computers, A (Pentium D) and B (Quad Core), with almost the same amount of RAM. If I run the same code on both computers, the allocated private bytes on A never go down, resulting in a crash later on. On B it looks like the private bytes are constantly deallocated and everything looks fine. On both computers, the working set is deallocated and allocated similarly. Could this be an issue with manifests or system DLLs? I'm clueless. Note: I observed the utilized memory with Process Explorer.

    Read the article

  • release viewcontroller after presenting modally

    - by Jonathan
    I was watching the CS193P Stanford course on iTunes, and in one of the lectures a demo was given where it was said you could present the view controller modally and then release it. Roughly like this (I know this isn't perfect, but I'm on my PC atm): [self.view presentcontentmodally:myVC] [myVC release]; However this seems to produce problems. If I put an NSLog(@"%d", [myVC retainCount]) between those two lines then it returns 2, implying it is OK to release. However, when I dismiss myVC the app crashes. Nothing in the NSLog output, and the debugger won't show where it stopped. But I used malloc_history or something similar that a blog said would help, and found that it was myVC. So should I be releasing myVC? (Also, when the modal VC has been dismissed, should the app's memory usage go back to what it was before the modal VC was presented?)

    Read the article

  • In what circumstances can large pages produce a speedup ?

    - by timday
    Modern x86 CPUs can support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.
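
    For anyone who wants to experiment, explicit huge pages can be requested directly on Linux with mmap() and MAP_HUGETLB. A sketch (it assumes 2MB huge pages have been reserved beforehand, e.g. via /proc/sys/vm/nr_hugepages, and falls back to ordinary pages otherwise):

        #include <sys/mman.h>
        #include <cstddef>
        #include <cstdio>

        int main()
        {
            const std::size_t size = 2 * 1024 * 1024;   // one 2MB huge page

            void* p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED)
            {
                std::perror("mmap(MAP_HUGETLB)");  // typically ENOMEM if no huge pages are reserved
                p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   // fall back to 4K pages
            }
            if (p == MAP_FAILED)
                return 1;

            // ... touch the memory, run the workload, compare timings ...

            munmap(p, size);
            return 0;
        }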

    Read the article

  • How do I release an object allocated in a different AutoReleasePool ?

    - by ajcaruana
    Hi, I have a problem with memory management in Objective-C. Say I have a method that allocates an object and stores the reference to this object as a member of the class. If I run through the same function a second time, I need to release this first object before creating a new one to replace it. Suppose the first line of the function is: NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; This means that a different autorelease pool will be in place. The code to allocate the object is as follows: if (m_object != nil) [m_object release]; m_object = [[MyClass alloc] init]; [m_object retain]; The problem is that the program crashes when running the last line of the method: [pool release]; What am I doing wrong? How can I fix this? Regards, Alan

    Read the article

  • Writing to pointer out of bounds after malloc() not causing error

    - by marr
    Hi, when I try the code below it works fine. Am I missing something? main() { int *p; p=malloc(sizeof(int)); printf("size of p=%d\n",sizeof(p)); p[500]=999999; printf("p[0]=%d",p[500]); return 0; } I tried it with malloc(0*sizeof(int)) and other sizes, but it works just fine. The program only crashes when I don't use malloc at all. So even if I allocate no memory for the array p, it still stores values properly. So why am I even bothering with malloc?

    Read the article

  • Looking at the C++ new[] cookie. How portable is this code?

    - by carleeto
    I came up with this as a quick solution to a debugging problem - I have the pointer variable and its type, I know it points to an array of objects allocated on the heap, but I don't know how many. So I wrote this function to look at the cookie that stores the number of bytes when memory is allocated on the heap. template< typename T > int num_allocated_items( T *p ) { return *((int*)p-4)/sizeof(T); } //test #include <iostream> int main( int argc, char *argv[] ) { using std::cout; using std::endl; typedef long double testtype; testtype *p = new testtype[ 45 ]; //prints 45 std::cout<<"num allocated = "<<num_allocated_items<testtype>(p)<<std::endl; delete[] p; return 0; } I'd like to know just how portable this code is.
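
    For comparison, the portable route is to record the element count at the point of allocation instead of recovering it afterwards, since the cookie's location is an implementation detail and it may not exist at all for trivially destructible element types. A small C++03-style sketch with an illustrative name:

        #include <cstddef>

        // Remember the count next to the pointer; no cookie spelunking required.
        template< typename T >
        struct CountedArray
        {
            T*          data;
            std::size_t count;

            explicit CountedArray( std::size_t n )
                : data( new T[ n ] ), count( n ) {}

            ~CountedArray() { delete [] data; }

        private:
            CountedArray( const CountedArray& );            // non-copyable (C++03 style,
            CountedArray& operator=( const CountedArray& ); // matching the question's era)
        };

        // Usage sketch: CountedArray< long double > arr( 45 ); arr.count is 45.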

    Read the article

  • Code corresponding to leaks with Visual Leak Detector

    - by matt
    I am trying to use Visual Leak Detector in Visual Studio 2008, here is an example of the output I get: Detected memory leaks! Dumping objects -> {204} normal block at 0x036C1568, 1920 bytes long. Data: < > 80 08 AB 03 00 01 AB 03 80 F9 AA 03 00 F2 AA 03 {203} normal block at 0x0372CC68, 40 bytes long. Data: <( > 28 00 00 00 80 02 00 00 E0 01 00 00 01 00 18 00 {202} normal block at 0x0372CC00, 44 bytes long. Data: << E > 3C 16 45 00 80 02 00 00 E0 01 00 00 01 00 00 00 The user's guide says to click on any line to jump to the corresponding file/line of code ; I tried clicking on every line but nothing happens! What am I missing?

    Read the article

  • Force an object to be allocated on the heap

    - by Warren Seine
    A C++ class I'm writing uses shared_from_this() to return a valid boost::shared_ptr<>. Besides, I don't want to manage memory for this kind of object. At the moment, I'm not restricting the way the user allocates the object, which causes an error if shared_from_this() is called on a stack-allocated object. I'd like to force the object to be allocated with new and managed by a smart pointer, no matter how the user declares it. I thought it could be done through a proxy or an overloaded new operator, but I can't find a proper way of doing that. Is there a common design pattern for such usage? If it's not possible, how can I test it at compile time?
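
    One common pattern, sketched below with illustrative names: make the constructor private and expose a static factory that always returns a boost::shared_ptr, so a stack instance simply does not compile. This restricts construction rather than detecting stack allocation, and it is not airtight, but it gives the compile-time restriction asked about.

        #include <boost/enable_shared_from_this.hpp>
        #include <boost/shared_ptr.hpp>

        class Widget : public boost::enable_shared_from_this< Widget >
        {
        public:
            static boost::shared_ptr< Widget > create()
            {
                return boost::shared_ptr< Widget >( new Widget() );
            }

        private:
            Widget() {}   // callers cannot construct directly...
        };

        // Widget w;                                         // ...so this line does not compile
        // boost::shared_ptr< Widget > w = Widget::create(); // the factory is the only way in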

    Read the article

  • C++ deleting a pointer

    - by eSKay
    On this page, it's written that: One reason is that the operand of delete need not be an lvalue. Consider: delete p+1; delete f(x); Here, the implementation of delete does not have a pointer to which it can assign zero. Adding a number to a pointer shifts it forward in memory by that many sizeof(*p) units. So, what is the difference between delete p and delete p+1, and why would setting the pointer to 0 be a problem only with delete p+1?
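
    A short sketch may make the quoted point concrete: the operand of delete can be any pointer-valued expression, so there is not always a variable left over for the implementation to zero, and an expression like p+1 is not something new ever returned in the first place.

        #include <string>

        std::string* make_string() { return new std::string("temp"); }

        int main()
        {
            int* p = new int(42);

            // delete (p + 1);       // undefined behaviour: p+1 was never returned by new,
                                     // and it is a temporary value, not a variable to zero out
            delete p;                // correct: pass back exactly what new returned

            delete make_string();    // legal: the operand is an rvalue returned by a call,
                                     // so there is no pointer variable delete could set to 0
            return 0;
        }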

    Read the article
