Search Results

Search found 25996 results on 1040 pages for 'memory address'.


  • Objective-C Memory Management: When do I [release]?

    - by Sahat
    I am still new to this memory management stuff (the garbage collector took care of everything in Java), but as far as I understand, if you allocate memory for an object then you have to release that memory back to the system as soon as you are finished with the object: myObject = [Object alloc]; and [myObject release];. Right now I just have 3 parts in my Objective-C .m file: @interface, @implementation and main. I released my object at the end of the program, next to these guys: [pool drain]; return 0;. But what if this program were a lot more complicated - would it still be okay to release myObject at the end of the program? I guess a better question would be: when do I release an object's allocated memory? How do I know where to place [myObject release];?
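
    (In pre-ARC Objective-C the usual convention is that the code that obtains an object via alloc/new/copy owns it and sends release when that code is finished with it - at the end of the method for locals, or in dealloc for instance variables - rather than at the end of the program. The sketch below shows the same ownership shape, written in C++ rather than Objective-C purely for illustration; the Profile class is made up.)

        #include <iostream>
        #include <string>

        // Hypothetical stand-in for "myObject" from the question.
        struct Profile {
            std::string name;
        };

        void handleRequest() {
            Profile* p = new Profile{"Sahat"};   // analogous to [Object alloc]
            std::cout << p->name << "\n";        // ... use the object ...
            delete p;                            // analogous to [myObject release]:
                                                 // free it as soon as this code path
                                                 // is done with it, not at program exit
        }

        int main() {
            handleRequest();   // each call owns and releases its own object
            return 0;
        }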

    Read the article

  • Solving iPhone/iPad out of memory issues

    - by Joonas Trussmann
    I have a strange issue where I'm scrolling through a paged UIScrollView which displays the pages of a PDF document (using Quartz 2D and CATiledLayer). When I page through, memory allocation looks fine: it goes up for the first few pages and then holds steady as it obviously releases the memory kept for earlier pages. Upon hitting page x (not a certain PDF page or a certain number per se), memory usage goes from a couple of megs to 308 megs and the app crashes. So my question is: how do I best try to find what's causing this? The ObjectAlloc tool in Instruments shows the memory as simply going to malloc (in huge chunks).

    Read the article

  • Memory leak with preg_replace

    - by Silvio Donnini
    I'm using the preg_replace function to replace accents in a string; I'm working with UTF-8. I have run into what seems to be a memory leak, but I can't isolate the root cause. My code is rather simple: preg_replace( array_keys($aToNoAccents), array_values($aToNoAccents), $sText ); where $aToNoAccents is an associative array with entries like '~[A]~u' => 'A', '~[C]~u' => 'C'. My script prints this error for the above line: Fatal error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 3039 bytes). Obviously it's not a matter of increasing the allowed memory for PHP (a 1GB footprint is way off the scale of my application). Also, that line is executed thousands of times without problems but, just in some cases which are difficult to reproduce, it yields the error. Is anyone aware of memory problems with preg_replace and UTF-8 strings? Do I need to take special care when passing parameters to this function? I'm using PHP 5.2.6-3 with Suhosin-Patch. Thanks, Silvio

    Read the article

  • Virtual Memory and Paging

    - by Kenshin
    Hello, I am doing some exercises to understand how virtual memory and paging work, and my question is as follows. Suppose we use paged memory with pages of 1024 bytes; the virtual address space is 8 pages, but the physical memory can only hold 4 page frames. The replacement policy is LRU.
    1. What is the physical address in main memory that corresponds to virtual address 4096, and how do you get to that result?
    2. Same thing as question 1, but with virtual address 1024.
    3. A page fault occurs when accessing a word in page 0 - which page frame will be used to receive virtual page 0?
    (A page-table image was attached to the original question.)
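
    A worked sketch of the address arithmetic (the frame number below is a placeholder, since the actual mapping depends on the page table from the attached image):

        #include <cstdio>

        int main() {
            const unsigned PAGE_SIZE = 1024;       // page size from the exercise

            unsigned va     = 4096;
            unsigned page   = va / PAGE_SIZE;      // 4096 / 1024 = virtual page 4
            unsigned offset = va % PAGE_SIZE;      // 0

            // If the page table maps virtual page 4 to physical frame f,
            // the physical address is f * PAGE_SIZE + offset.
            unsigned f = 2;                        // placeholder frame number
            std::printf("VA %u -> page %u, offset %u, PA %u (assuming frame %u)\n",
                        va, page, offset, f * PAGE_SIZE + offset, f);

            // Virtual address 1024 works the same way: virtual page 1, offset 0.
            return 0;
        }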

    Read the article

  • Thread Local Memory: Using std::string's internal buffer for C-style Scratch Memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies - similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread-local memory. Additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive speedup it enables even with a single thread). The generation and deserialization ("my algorithms") of these cookie strings use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism, I want those characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf / stringbuf?) of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts? My question, I guess, would be: "Is the internal buffer of a string re-usable, and if so, how?" Edit: see the comments on Vlad's answer, please.
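
    On the narrower question at the end - whether a std::string's internal buffer can be reused as scratch space - one common pattern (a sketch, not tied to Protocol Buffers' own caching) is to keep a thread-local std::string, resize() it up to the size needed, and write through &buf[0]. The capacity is kept once it has grown, so later calls that need the same or less space normally do not reallocate. Contiguous, writable storage through &buf[0] is guaranteed from C++11 on:

        #include <cstring>
        #include <string>

        // Per-thread scratch buffer that keeps its capacity across calls.
        std::string& scratch() {
            static thread_local std::string buf;   // requires C++11
            return buf;
        }

        void buildToken(const void* payload, std::size_t n) {
            std::string& buf = scratch();
            if (buf.size() < n)
                buf.resize(n);                     // grows once; capacity is retained
            std::memcpy(&buf[0], payload, n);      // write into the string's own buffer
            // ... hand &buf[0] and n to OpenSSL-style C APIs here ...
        }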

    Read the article

  • renderInContext creating memory that is not promptly released

    - by iworkinprogress
    While debugging in Instruments using ObjectAlloc, I'm noticing 7 megs of memory being allocated for the renderInContext call, but it is never released. When I comment out the renderInContext call this doesn't happen, and future renderInContext calls do not continue to increase the memory allotment.
        UIGraphicsBeginImageContext(contentHolder.bounds.size);
        [contentHolder.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    Is there a way to force this memory to be released?

    Read the article

  • Loading machinecode from file into memory and executing in C -- mprotect failing

    - by chartreusekitsune
    Hi, I'm trying to load raw machine code into memory and run it from within a C program. Right now, when the program executes, it breaks when trying to run mprotect on the memory to make it executable. I'm also not entirely sure that, even if the memory does get set up right, it will execute. What I currently have is the following:

        #include <memory.h>
        #include <sys/mman.h>
        #include <stdio.h>

        int main ( int argc, char **argv )
        {
            FILE *fp;
            int sz = 0;
            char *membuf;
            int output = 0;

            fp = fopen(argv[1],"rb");
            if(fp == NULL) {
                printf("Failed to open file, aborting!\n");
                exit(1);
            }

            fseek(fp, 0L, SEEK_END);
            sz = ftell(fp);
            fseek(fp, 0L, SEEK_SET);

            membuf = (char *)malloc(sz*sizeof(char));
            if(membuf == NULL) {
                printf("Failed to allocate memory, aborting!\n");
                exit(1);
            }
            memset(membuf, 0x90, sz*sizeof(char));

            if( mprotect(membuf, sz*sizeof(char), PROT_EXEC | PROT_READ | PROT_WRITE) == -1) {
                printf("mprotect failed!!! aborting!\n");
                exit(1);
            }

            if((sz*sizeof(char)) != fread(membuf, sz*sizeof(char), 1, fp)) {
                printf("Read failed, aborting!\n");
                exit(1);
            }

            __asm__ ( "call %%eax;" : "=a" (output) : "a" (membuf) );

            printf("Output = %x\n", output);
            return 0;
        }
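
    One likely reason mprotect fails here (a guess, since no errno is printed): POSIX requires the address passed to mprotect to be page-aligned, and memory returned by malloc generally is not. A sketch of the usual alternative - getting page-aligned, executable memory directly from mmap - is below; the fixed size is a placeholder for the real file size:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            size_t sz = 4096;  /* placeholder; use the file size in practice */

            /* Anonymous, page-aligned mapping that is readable, writable and
               executable (MAP_ANONYMOUS is the Linux/BSD spelling). */
            void *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            memset(buf, 0x90, sz);   /* fill with NOPs, as in the question */
            /* ... fread() the machine code into buf and call into it ... */

            munmap(buf, sz);
            return 0;
        }

    Separately, note that fread() in the original returns the number of items read (1 on success here), not the number of bytes, so the size comparison would report a failure even when the read succeeds.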

    Read the article

  • Macbook memory upgrade question

    - by James Evans
    I've read some conflicting articles on MacBooks and memory upgrades. Some say you have to buy "special" Mac memory (bulls$%t), others say manufacturers like Patriot and OCZ will work fine. My MacBook (non-Pro) is about 6 months old with its 2 GB of memory (SO-DIMM 1066MHz DDR3). Does anyone have any definitive information on what will work? Thanks!

    Read the article

  • [C#] Not enough memory or not enough handles?

    - by Nayan
    I am working on a large-scale project where a custom (pretty good and robust) framework has been provided and we have to use it for showing forms and views. There is an abstract class StrategyEditor (derived from some class in the framework) which is instantiated whenever a new StrategyForm is opened. StrategyForm (a customized window frame) contains StrategyEditor. StrategyEditor contains StrategyTab. StrategyTab contains StrategyCanvas. This is a small portion of the big class hierarchy, to clarify that many objects are created when one StrategyForm object is allocated in memory at run-time. My component owns all of these classes except StrategyForm, whose code is not in my control. Now, at run-time, the user opens up many strategy objects (each of which triggers creation of a new StrategyForm object). After creating approx. 44 strategy objects, we see that the USER OBJECT HANDLES (I'll use UOH from here onwards) created by the application reach about 20k+, while in the registry the default amount for handles is 10k. Read more about User Objects here. Testing on different machines made it clear that the number of strategy objects opened before the message pops up differs - on one machine it is 44, on another it can be 40. When we see the message pop up, it means the application is going to respond slowly. It gets worse with a few more objects, and then creation of window frames and subsequent objects fails. We first thought that it was a not-enough-memory issue. But then reading more about new in C# helped in understanding that an exception would be thrown if the app ran out of memory. So this is not a memory issue, I feel (Task Manager also showed 1.5GB+ available memory).
    Machine specs: Core 2 Duo 2GHz+, 4GB RAM, 80GB+ free disk space for the page file, virtual memory set to 4000 - 6000.
    My questions:
    Q1. Does this look like a memory issue, and am I wrong that it is not?
    Q2. Does this point to exhaustion of free UOHs (as I'm thinking), which is resulting in failure to create window handles?
    Q3. How can we avoid loading a StrategyEditor object beyond a threshold, keeping an eye on the current usage of UOHs? (We already know how to fetch the number of UOHs in use, so don't go there.) Keep in mind that the call to new StrategyForm() is outside the control of my component.
    Q4. I am a bit confused - what are handles to user objects, exactly? Is MSDN talking about any object that we create, or only some specific objects like window handles, cursor handles and icon handles?
    Q5. What exactly causes a UOH to be used up? (Almost the same as Q4.)
    I would be really thankful to anyone who can give me some knowledgeable answers. Thanks much! :)

    Read the article

  • Unable to remove limit on memory usage for PHP script.

    - by Jess Telford
    The Situation: I am having an issue with a PHP script getting the following error message:
        Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969
    The script I'm running is /path/to/piwik/misc/cron/archive.sh. I am assuming the numbers are bytes, which means that total is approximately 360MB. For all intents and purposes, I have increased the memory limits on the server well above 360MB, yet this is the number (give or take a byte) it consistently errors out at. Please note: this question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving".
    The question: given that the script is attempting to use over 360MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to PHP on my server?
    What I've tried:
    Increasing PHP's memory_limit. Given the php.ini file:
        $ php -i | grep php.ini
        Configuration File (php.ini) Path => /usr/local/lib
        Loaded Configuration File => /usr/local/lib/php.ini
    I have edited that file so the memory_limit directive reads:
        memory_limit = -1
    Restart Apache, and check the new value has stuck:
        $ php -i | grep memory_limit
        memory_limit => -1 => -1
    Run the script, and get the same error. I've also tried 1G, 768M, etc., all with the same result (i.e. no change). Update 22nd June: based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect.
    Removing the memory limit on child processes of Apache: I have found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and confirmed by checking httpd.conf). Both of these resulted in no change to the script erroring at 360MB.
    Increasing the per-process memory limits of Linux. The current limits set on the system:
        $ ulimit -m
        524288
        $ ulimit -v
        524288
    I have attempted to set both of these to unlimited:
        $ ulimit -m unlimited
        $ ulimit -v unlimited
        $ ulimit -m unlimited
        $ ulimit -v unlimited
    Once again, this has resulted in absolutely no improvement in my problem.
    My setup:
        $ cat /etc/redhat-release
        CentOS release 5.5 (Final)
        $ uname -a
        Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
        $ php -i | grep "PHP Version"
        PHP Version => 5.2.9
        $ httpd -V
        Server version: Apache/2.0.63
        Server built:   Feb 2 2011 01:25:12
        Cpanel::Easy::Apache v3.2.0 rev5291
        Server's Module Magic Number: 20020903:13
        Server loaded:  APR 0.9.17, APR-UTIL 0.9.15
        Compiled using: APR 0.9.17, APR-UTIL 0.9.15
        Architecture:   64-bit
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D HTTPD_ROOT="/usr/local/apache"
         -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"
    Output of $ php -i: http://pastebin.com/EiRut6Nm

    Read the article

  • Maintaining a sorted list that is bigger than memory

    - by tcurdt
    I have a list of tuples:
        [ "Bob": 3, "Alice": 2, "Jane": 1, ]
    When incrementing the counts ("Alice" += 2), the order should be maintained:
        [ "Alice": 4, "Bob": 3, "Jane": 1, ]
    When everything is in memory there are rather simple ways (some more efficient, some less) to implement this (using an index, insertion sort, etc.). The question, though, is: what's the most promising approach when the list does not fit into memory? Bonus question: what if not even the index fits into memory? How would you approach this?

    Read the article

  • Running out of memory while analyzing a Java Heap Dump

    - by Abel Morelos
    Hi, I have a curious problem: I need to analyze a Java heap dump (from an IBM JRE) which is 1.5GB in size. The problem is that while analyzing the dump (I've tried HeapAnalyzer and the IBM Memory Analyzer 0.5) the tools run out of memory, so I can't really analyze the dump. I have 3GB of RAM in my machine, but it seems like that's not enough to analyze the 1.5GB dump. My question is: do you know a specific tool for heap dump analysis (supporting IBM JRE dumps) that I could run with the amount of memory I have? Thanks.

    Read the article

  • Memory leak, again!

    - by Dave
    A couple of weeks ago I was having trouble with memory leaks associated with a ContextMenuStrip. That problem was fixed. See that question here. Now I'm having similar issues with ToolStrip controls. As in the previous problem, I'm creating a large number of UserControls and adding them to a FlowLayoutPanel. Each control creates a ToolStrip for itself in its constructor. When a control is removed from the FlowLayoutPanel (the only reference to the control), it seems the memory for the ToolStrip is not being released. However, when I comment out the code that creates the ToolStrip, the memory leak doesn't happen. Is this the same sort of issue as the previous one - do I need to set the ToolStrip to null? I don't see how that could be, since this time the control is creating the strip itself, and all the button events etc. are handled inside it. So shouldn't everything be GC'd when the control is no longer referenced? EDIT: As to the comments, the thing I don't understand is that originally I was "making" my own toolstrip out of a panel and some labels. The labels were used as buttons. No memory leaks occurred this way. The only thing I've changed is using a proper ToolStrip with proper buttons in place of the panel, but all the event handlers are wired the same way. So why is it now leaking memory?

    Read the article

  • Why Doesn't UIWebView release all of its memory?

    - by Theory
    I have a graphic-intensive iPad app that features a UIWebView. Using the simulator (iOS 4.2.1), I can see Real Mem increase quite a lot as I browse. The more I browse, the more RAM it uses. When I close the UIWebView and release it, some of the memory it used is freed, but not all of it. This is annoying. Okay, so maybe it's because it isn't deallocated right away. Fine. But then I would expect the system to do some cleanup when there's a memory warning. However, if I browse around, then close the UIWebView (and release it), then trigger a memory warning in the simulator, Real Mem does not change! WTF? So why is this? Why isn't UIWebView better at releasing memory back to the system? And why doesn't it appear to respond to memory warnings? Am I missing something?

    Read the article

  • Memory leak in PHP script

    - by Jasper De Bruijn
    Hi, I have a PHP script that runs a MySQL query, then loops over the result, and in that loop also runs several queries:
        $sqlstr = "SELECT * FROM user_pred WHERE uprType != 2 AND uprTurn=$turn ORDER BY uprUserTeamIdFK";
        $utmres = mysql_query($sqlstr) or trigger_error($termerror = __FILE__." - ".__LINE__.": ".mysql_error());
        while($utmrow = mysql_fetch_array($utmres, MYSQL_ASSOC)) {
            // some stuff happens here
            // echo memory_get_usage() . " - 1241<br/>\n";
            $sqlstr = "UPDATE user_roundscores SET ursUpdDate=NOW(),ursScore=$score WHERE ursUserTeamIdFK=$userteamid";
            if(!mysql_query($sqlstr)) {
                $err_crit++;
                $cLog->WriteLogFile("Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid\n");
                echo "Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid<br>\n";
                break;
            }
            unset($sqlstr);
            // echo memory_get_usage() . " - 1253<br/>\n";
            // some stuff happens here too
        }
    The update query never fails. For some reason, between the two calls to memory_get_usage, some memory is added. Because the big loop runs about 500,000 or more times, in the end it really adds up to a lot of memory. Is there anything I'm missing here? Could it perhaps be that the memory is not actually added between the two calls, but at another point in the script? Edit: some extra info: before the loop it's at about 5MB, after the loop about 440MB, and every update query adds about 250 bytes. (The rest of the memory gets added at other places in the loop.) The reason I didn't post more of the "other stuff" is because it's about 300 lines of code. I posted this part because it looks to be where the most memory is added.

    Read the article

  • Multiple .NET processes memory footprint

    - by mr.b
    I am developing an application suite that consists of several applications which the user can run as needed (they can run all at the same time, or only several). My concern is the physical memory footprint of each process, as shown in Task Manager. I am aware that the Framework does memory management behind the curtains, in the sense that it devotes parts of memory to certain things that are not directly related to my application. The question: does the .NET Framework have some way of minimizing the memory footprint of the processes it runs when there are several processes running at the same time? (Noobish guess ahead.) Like, if System.dll is already loaded by one process, does the framework load it again for each process, or does it have some way of sharing it between processes? I am in no way striving to write the smallest (resource-wise) apps possible (if I were, I probably wouldn't be using the .NET Framework in the first place), but if there's something I can do about over-using resources, I'd like to know about it.

    Read the article

  • Memory leak appears only when multiprocessing

    - by Sandro
    I am trying to use Python's multiprocessing library to hopefully gain some performance. Specifically, I am using its map function. Now, for some reason, when I swap it out for its single-threaded counterpart I don't get any memory leaks over time. But using the multiprocessing version of map causes my memory to go through the roof. For the record, I am doing something which can easily hog up loads of memory, but what is different between the two that would cause such a stark contrast?

    Read the article

  • Google Chrome and (cache or memory leaks).

    - by Alexey Ogarkov
    Hello all, I have a big problem with Google Chrome and its memory. My app displays several image charts to the user and reloads them every 10s. In the interval I have code like this:
        var image = new Image();
        var src = 'myurl/image'+new Date().getTime();
        image.onload = function() {
            document.getElementById('myimage').src = src;
            image.onload = image.onabort = image.onerror = null;
        }
        image.src = src;
    I have no memory leaks in Firefox or IE. Here are the response headers for the images:
        Server          Apache-Coyote/1.1
        Vary            *
        Cache-Control   no-store   (// I tried no-cache, must-revalidate and so on here)
        Content-Type    image/png
        Content-Length  11131
        Date            Mon, 31 May 2010 14:00:28 GMT
        Vary            *
        taken from here
    The about:cache page does not list my cached images. Enabling the purge-memory button for Chrome (the --purge-memory-button parameter) doesn't help. The images are PNG24. So I think the problem is not the cache. Maybe Google Chrome is not releasing memory for the old images. Please help - any suggestions. Thanks.

    Read the article

  • Memory leak on MDI with blank child form

    - by bob
    Hi, I've created a blank application with an MDI parent form opening a blank child form from the menu. When the parent form of the child form is set to the MDI form, it appears the system does not release memory - thus a leak. When the parent form is not set, the child form is removed. Does anyone know how this apparent memory leak can be resolved? I've been using the ANTS memory profiler. Bob.

    Read the article

  • Java memory mapped files and swap

    - by MarkS
    I'm looking at some memory-mapped files in Java. Let's say I have a heap size set to 2GB, and I memory-map a file that is 50GB - far more than the physical memory on the machine. The OS will cache parts of that 50GB file in the OS file cache, and the Java process will have 2GB of heap space. What I'm curious about is how the OS decides how much of the 50GB file to cache. For instance, if I have another Java process, also with a 2GB heap size, will that 2GB be swapped out to allow the OS to cache parts of the memory-mapped file? Will parts of the heap space of the first process be swapped out to allow the OS to cache? Is there any way to tell the OS not to swap heap space for the sake of OS caching? If the OS doesn't swap out main processes, how does it determine how big its file cache should be?
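
    For poking at this behaviour outside Java: on typical JVMs, FileChannel.map / MappedByteBuffer sits on top of an ordinary mmap of the file, so a small native sketch makes the question concrete (POSIX calls; the file path is a placeholder, and madvise is only a hint the kernel is free to ignore):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            const char* path = "/tmp/huge.dat";      // placeholder file, e.g. 50GB
            int fd = open(path, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            fstat(fd, &st);

            // The mapping reserves address space only; file pages enter the OS
            // page cache on demand and can be dropped again under memory pressure.
            void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            madvise(p, st.st_size, MADV_SEQUENTIAL); // access-pattern hint only

            // ... reading through p here grows resident size; the kernel balances
            // this cache against other processes' anonymous (heap) pages ...

            munmap(p, st.st_size);
            close(fd);
            return 0;
        }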

    Read the article

  • Virtual and Physical Memory / OutOfMemoryException

    - by user417518
    Hi, I am working on a 64-bit .NET Windows service application that essentially loads up a bunch of data for processing. While performing data-volume testing, we were able to overwhelm the process and it threw an OutOfMemoryException (I do not have any performance statistics on the process from when it failed). I have a hard time believing that the process requested a chunk of memory that would have exceeded the allowable address space for the process, since it's running on a 64-bit machine. I do know that the process is running on a machine that is consistently in the neighborhood of 80%-90% physical memory usage. My question is: can the CLR throw an OutOfMemoryException if the machine is critically low on available physical memory, even though the process wouldn't exceed its allowable amount of virtual memory? Thanks for your help!

    Read the article

  • Copying an SQLite database from file to :memory: using C#

    - by davidb
    I have a small database with tables containing a small amount of data which I need to copy into memory. Currently I am using:
        insertcommand.CommandText = "SELECT sql FROM sqlite_master WHERE sql NOT NULL;";
    to pull all the table schemas from the file database, but I'm not really sure how to proceed with creating these tables in the new in-memory database and copying all the relevant data across. In short, how do I copy an SQLite database from file to memory using C# and System.Data.SQLite?
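
    One way to avoid replaying the schema and data by hand is SQLite's online backup API, which can copy an entire database (schema plus contents) from a file connection to a :memory: connection in one operation. A sketch against the underlying C API is below; System.Data.SQLite wraps the same mechanism (look for a BackupDatabase method on SQLiteConnection in recent versions), and the file name here is a placeholder:

        #include <cstdio>
        #include <sqlite3.h>

        int main() {
            sqlite3 *fileDb = nullptr, *memDb = nullptr;
            sqlite3_open("mydata.db", &fileDb);    // placeholder file name
            sqlite3_open(":memory:", &memDb);

            // Copy the whole "main" database from the file into memory.
            sqlite3_backup *bk = sqlite3_backup_init(memDb, "main", fileDb, "main");
            if (bk) {
                sqlite3_backup_step(bk, -1);       // -1 = copy all remaining pages
                sqlite3_backup_finish(bk);
            }
            std::printf("result: %s\n", sqlite3_errmsg(memDb));

            sqlite3_close(fileDb);
            sqlite3_close(memDb);
            return 0;
        }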

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux's /proc/meminfo shows a number of memory usage statistics:
        MemTotal:     4040732 kB
        MemFree:        23160 kB
        Buffers:       163340 kB
        Cached:       3707080 kB
        SwapCached:         0 kB
        Active:       1129324 kB
        Inactive:     2762912 kB
    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belonging to "cached" and "active") and inactive page cache ("inactive" + "cached"). What I want to do is measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first, I was inclined to use "free" + "inactive", but Linux's "free" utility uses "free" + "cached" in its "buffer-adjusted" display, so I am curious what a better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric for measuring available memory?
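
    A sketch of the "free + buffers + cached" estimate discussed above, parsed straight from /proc/meminfo (values are in kB; this overcounts slightly because some of Cached, such as tmpfs/shmem pages, cannot actually be dropped). Newer kernels, from 3.14 on, also export a MemAvailable line that performs this estimate in the kernel itself:

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream meminfo("/proc/meminfo");
            std::string line;
            long availableKb = 0;

            while (std::getline(meminfo, line)) {
                std::istringstream fields(line);
                std::string key;
                long kb = 0;
                fields >> key >> kb;               // e.g. "Cached:" 3707080
                if (key == "MemFree:" || key == "Buffers:" || key == "Cached:")
                    availableKb += kb;
            }
            std::cout << "approx. free + reclaimable: " << availableKb << " kB\n";
            return 0;
        }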

    Read the article

  • Beginner assembly programming memory usage question

    - by Daniel
    I've been getting into some assembly lately and it's fun, as it challenges everything I have learned. I was wondering if I could ask a few questions.
    1. When running an executable, does the entire executable get loaded into memory?
    2. From a bit of fiddling I've found that constants aren't really constants? Is it just a compiler thing?
        const int i = 5;
        _asm { mov i, 0 }   // i is now 0 and compiles fine
    3. So are all variables assigned with a constant value embedded into the file as well? Meaning:
        int a = 1;
        const int b = 2;
        void something() {
            const int c = 3;
            int d = 4;
        }
    Will I find all of these variables embedded in the file (in a hex editor or something)?
    4. If the executable is loaded into memory, then "constants" are technically using memory? I've read around on the net people saying that constants don't use memory - is this true?
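
    On the "do constants use memory" part, a small illustration of the usual behaviour (only a sketch, because the outcome is entirely up to the compiler and optimisation level): a const whose value is known at compile time is often folded into the instruction stream as an immediate operand, but it gets a real storage location the moment its address is needed.

        #include <cstdio>

        const int b = 2;            // file-scope const: typically placed in read-only
                                    // data, or dropped entirely if its address is
                                    // never taken

        int timesB(int x) {
            return x * b;           // usually compiles to a multiply by the immediate 2;
                                    // no load from a data section is needed here
        }

        int main() {
            const int c = 3;        // local const: normally just an immediate in code
            const int* p = &c;      // taking the address forces the compiler to give
                                    // it a real location (on the stack) after all
            std::printf("%d %d %d\n", timesB(5), *p, b);
            return 0;
        }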

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration runs several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine), but every following run adds about 80kB. My main loop looks like this (very simplified):
        $users = UsersModel::getUsers();
        foreach($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }
    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said, it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution - some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?

    Read the article
