Search Results

Search found 12267 results on 491 pages for 'out of memory'.

Page 89/491

  • What parts of a motherboard age, and how can I choose one with the longest possible life?

    - by Robert Harvey
    I have a home-built computer that's probably about four years old. I realize this probably seems ancient to some folks, but computers have no moving parts (except the fans), so theoretically they should last a long time, as long as I still have software to run on them. A few weeks ago, it began blue-screening and freezing up with various error messages, almost always about five minutes after startup. I assumed the video card was overheating, since the cheap little fan on its heatsink had died, so I replaced it. Long story short, after upgrading the video drivers a couple of times and performing some other troubleshooting, I remembered that the last time this happened, I took out the memory modules and cleaned the contacts with a gum eraser, so I did that again (noting that the SATA cables were very close to the chips on the modules). I re-routed the cables and reinstalled the modules. So far, so good; the machine has been trouble-free since. But blue screens are distressing; I never know what bits are being chewed up in my OS installation when something like this happens. So I'm wondering whether I'm choosing my components properly. If it matters, it's an Intel D915GAG motherboard with Corsair memory. Should I be looking for certain characteristics when I choose these parts for my next computer, so that I can avoid this problem in my next build?

    Read the article

  • Windows: why does 8 GB of RAM feel like a few MB?

    - by Desmond Hume
    I'm on Windows 7 x64 with a 4-core Intel i7 and 8 GB of RAM, but lately it feels like my computer's "RAM" is located solely on the hard drive. The total amount of memory used by the processes listed in Task Manager is just about 1 GB. What has been happening on my computer for a few days now is that one program (Cataloger.exe) continually processes large quantities of (rather big) files, repeatedly opening and reading them for the purposes of cataloging. It doesn't grow much in memory and stays at about 90 MB, yet the amount of data it processes in, say, 30 minutes can be measured in gigabytes. So my guess was that Windows file caching has something to do with it. After some research on the topic, I came across a program called RamMap that displays detailed info on a computer's RAM. To me it looks like Windows keeps huge amounts of no-longer-needed data in RAM, redirecting RAM allocation requests to the pagefile on the hard drive. Even when I close Cataloger.exe, RamMap reports the size of the mapped file as about the same for a long time afterwards. And it's not just this particular program; earlier I noticed a similar slowdown after massive file operations with other programs, so it's really not an exception. Whatever it is, it slows down the computer by a factor of about 50. Opening a new tab in Chrome takes 20-30 seconds, and opening a new program can take up to a minute. Due to the slowdown, some programs even crash. So what do you think: is the problem hiding in file caching or somewhere else? How do I solve it?

    Read the article

  • FastGate A20 Line And Himem.sys Issue With Updating BIOS

    - by Boris_yo
    I have been meaning to perform my first BIOS update ever from MS-DOS, but kept postponing the task until today. Despite people telling me any bootable ISO will do it, either through CD-ROM or a RAMDRIVE, I am still having problems. First there is the problem of getting the CD-ROM driver to work with its 4 driver files (cd1.SYS, cd2.SYS, cd3.SYS, cd4.SYS), and starting the RAMDISK proved to be a failure as well: CD-ROM XMS Allocation Error, RAMDISK XMS Allocation Error (the X: and R: drives are not working). The A20 line seemed to be the obstacle, and a couple of searches pointed me to an article on the Microsoft website. It seems that FastGate is the culprit: it takes over the A20 line and conflicts with himem.sys, which should be handling it, causing the driver to be unable to allocate memory resources. The article suggests two workarounds, disabling the FastGate option or adding a switch to himem.sys, but I read that the former workaround could cause problems which involve later tinkering with the BIOS, disabling shadow copy, etc., while the latter workaround can just hang the system, as stated in the link above. I assume it would only hang the boot process from the image file, though. Summing up the above, I am cautious and think it is risky to follow either workaround, because disabling FastGate, or trying the available switches from 1-14 or 16, could crash the BIOS update process by itself. I could do this without the need for himem.sys from a bootable USB thumbdrive set up to be seen as USB-HDD, but some time ago I read that it is never a good idea to update a BIOS from a hard drive, so even though it is only simulating one, who knows... Maybe it will deactivate the hard drive in the middle of the BIOS update process, or even the USB thumbdrive itself? In one forum discussion about updating a BIOS, somebody suggested not loading himem.sys for some reason, but now that I think of it, what if the BIOS update needs upper memory?

    Read the article

  • Computer does not boot after RAM upgrade

    - by Calmarius
    I have a Dell Optiplex GX520 desktop PC (it's about 5 years old) with 512 MB of DDR2 RAM. Since my computer is always swapping, I thought I should upgrade my RAM, so I bought a Kingmax 2 GB DDR2 module. But my system does not boot with it: the status LEDs show 2 and 4, and the user manual says 'video card failure', which makes no sense to me. I put back the original module and everything works. I tried many combinations. When I leave the old 512 MB module in and put the 2 GB module next to it in the other socket, my system completes the POST and I'm able to enter the BIOS menu. It says my system has 2.5 GB installed, one 0.5 GB and one 2 GB module in dual asymmetric channel mode, which seems right. Exiting the BIOS setup, GRUB loads successfully, but when I try to boot Ubuntu it crashes with a kernel panic immediately. Trying to load Windows XP does not get past the loading screen; it crashes with a 0x8E stop error. Does this mean the RAM I bought is faulty? Or does it just mean that the memory module I bought is too new for my computer to handle? In that case I might exchange the RAM with my friends, but no other computer in my house can take it (my very old box has DDR1 RAM, my sister's new box has DDR3; I can't plug my memory into either one). I'm going to return the RAM to the store tomorrow to replace it with a better one. Is there any hope of getting this new module to work?

    Read the article

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent 2 days trying to fix this issue, with no success. The server is a MySQL database server.
    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 GB RAM, 2x 146 GB SAS (software RAID 1).
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41.
    Issue: while MySQL is not used and runs with no database, everything seems all right. As soon as I install a database, something brings all 8 cores to 100% with low memory consumption, so as you can imagine the load average goes very high (I saw a load average of 212 for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the installed project.
    Additional info: the database used is not more than 24 MB and it was moved from a server with fewer resources and much larger databases, so it's not the database/project. my.cnf is not the reason either, as I have used both the default one and the one I use on the same distribution on another server. What is interesting is that MySQL doesn't close any process and runs up to the limit of max_connections. Logs are quiet; nothing there. I switched to this Ubuntu version after I suspected some problems in a newly installed Ubuntu 11.10 server; that one worked all right for an hour after I made a kernel upgrade to 3.0.1 (it was using the memory as well). I tested disk speed and it seems fine. I also looked at dstat -cndymlp -N total -D total 3 and htop output on the running server. Any ideas? Has anyone met the same problem? Any fix you can think of?

    Read the article

  • VMWare Workstation 8 Disk I/O & Hard Faults

    - by Scott
    I have VMware Workstation 8 installed on a host machine with the following specs: Intel i5 2500K CPU, 16 GB DDR3 1600 RAM, 1 TB Western Digital Caviar Black HD. I have two Windows 7 virtual machines configured (currently running one at a time, but I will be operating both at once when my 32 GB RAM kit arrives in a couple of days). Each one is configured with 8 GB of RAM and no tweaks or performance customizations; all of the VMware settings are the defaults. When I boot into these machines and run various programs (Visual Studio, Outlook, etc.), I can hear the disk thrashing quite a bit, and checking Resource Monitor I can see that I'm getting anywhere between 300-800 hard faults per second. From the host machine, it shows they're coming from the VMware image; on the virtual machine, whatever app I'm currently loading is the image that's causing the hard faults. As I understand it, hard faults are (simply put) when an address in memory has been swapped out to the page file and has to be read from the page file instead of from memory. I don't understand why this is happening, though. With 8 GB of RAM on the guest machine and 6.5 GB available, what could be causing this? I know Windows 7 supposedly improved on page file management over XP, but this kind of slowdown, disk thrashing and high hard fault count seems excessive when I have that much free RAM. Is there anything I can do to improve the performance in my guest machines? On the host machine, I can open and run any applications at all and hard faults stay around 0 with low disk I/O.

    Read the article

  • Windows/BIOS showing wrong RAM in Gateway netbook?

    - by Ael
    I've seen similar problems, but this seems slightly unique, maybe... I have a Gateway LT21 netbook running Windows 7. I upgraded the RAM from 1 GB to 2 GB. It didn't work, so I updated to the latest BIOS, 1.25, and then it worked: 2 GB was recognized in the BIOS and in Windows. Everything was fine. Today, though, the machine seemed slow and, to my surprise, both the BIOS and Windows show only 1 GB. I've run the memory diagnostic with no errors. I entered the BIOS and hit Exit and Save: still 1 GB. I took out the RAM and put it back: still 1 GB. Yet CPU-Z shows 2048 MB (2 GB) of RAM. Further testing: if I put in the old 1 GB module, turn the netbook on, then put in the new 2 GB module again, the BIOS and Windows show 2 GB of RAM. But once it is restarted at all (even from the BIOS) it seems to go back to showing the incorrect 1 GB again. (There are very few options in the BIOS, and none appear memory-related.) Any ideas?

    Read the article

  • How can I setup BluePill to Monitor a Rails App Running via Passenger (mod_rails)

    - by Jim Jeffers
    I recently launched a site running Phusion Passenger. Unfortunately, the site went down due to a frozen thread; I was able to save the server by doing a kill -9 on the specific PID. Still, I thought Passenger was able to manage this automatically. I have a server with 1 GB of memory running one Rails app, with Passenger allotted up to 7 instances. However, when I discovered the site was down, I found that Passenger had spawned 6 instances, with one of them using over 800 MB of memory and causing the server to swap. As a result I am hoping to set up something like Bluepill on the server, but I'm slightly confused about how to go about it, mainly because Bluepill expects to start/stop the processes it's monitoring. In our case, Passenger already restarts processes for us, so we only need to monitor the PIDs of Passenger's instances and kill them once they've grown too large. Has anyone here set up Bluepill to monitor a Rails app running under Phusion Passenger? Any insight would be useful.

    Read the article

  • Win7 'locking' process/files/folders?

    - by Dynde
    I've had a fair bit of trouble with files, folders and processes sometimes being 'locked' by Windows. The weird thing is, it's not like the traditional case where tools like UnlockIT and WhoLockMe would work, I think. It seems that just giving it a little time often helps, making me think it could be the HDD, the memory, or something in Windows. A scenario: I go into a folder without opening anything at all, go back up, cut or drag-move the folder to someplace else, and it says "Action can't be completed because the folder or a file in it is open in another program". Waiting sometimes 20 seconds, sometimes a little longer, and I can move it. Another scenario is deleting a bunch of files in a folder: it appears that everything is gone, but then suddenly after a few seconds an .exe file pops back up and I can't delete it. Waiting a few minutes and then pressing refresh, and it's gone. I have the strangest feeling that there's a problem with either the HDD or the memory. I already tried disabling the Windows indexing service with no luck. Does anyone have any ideas? EDIT: I should say that I have a very fast system (16 GB DDR3 RAM, i7-2600K CPU, SSD as the main drive), so I really should not be experiencing any sort of problem where one might say it's "reasonable" for the system not to respond right away. EDIT 2: I updated the SSD firmware a couple of months ago, so it shouldn't be a bad firmware release either.

    Read the article

  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped. I have 16 GB of RAM installed in an MSI 970A-G46, which is spec'd for up to 32 GB of RAM. The BIOS recognizes that I have 16 GB installed, and Resource Monitor also shows the whole 16 GB, only it shows 8 GB as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) which I'm running on my boot drive is the same one that could actually access the full 16 GB on my previous motherboard (MSI 870A-G54). I've updated my BIOS using the MSI Live Update tool and restarted the machine with no effect, and I cannot seem to locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect. These are my system's basic specifications:
      OS: Windows 7 Home Premium (64-bit)
      Motherboard: MSI 970A-G46
      CPU: AMD FX-8150
      Graphics card: XFX Radeon HD 6870
      Boot drive: OCZ Agility 3
      Storage drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
      PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3

    Read the article

  • RAM configuration: 4x4 or 2x8?

    - by Carl B
    I am looking to upgrade my RAM to 16 GB and I am wondering if there is any distinct advantage to the way I do it, that being a 4x4 GB or 2x8 GB setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, if there are 2 kits of the same speed, same voltage, same timings and same CAS latency, which would perform better or have the better overall benefit? A few examples from my search: a 4x4 set has the benefit that if one stick fails, you only lose 25% of your RAM versus 50% in a 2x8 setup; a 2x8 setup puts less strain on the memory controller and motherboard; a 2x8 setup generates less heat; a 2x8 setup is easier to overclock (not part of my need, but a lot of the comparisons circled around how easily a two-stick setup overclocks). There is one outstanding benefit that I have found, at least at the retailer I have looked at, and that is price: the 2x8 kit is nearly half the cost. My motherboard supports a max of 16 GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16 GB just 16 GB no matter how you slice it? And is there any merit to the above pros? Edit: per the mobo specs, Main Memory supports four unbuffered DIMMs of 1.5 Volt DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16 GB max.

    Read the article

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate-traffic website (1000 UAUs, 5000 pageviews on a busy day) from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on a (likely oversold) quad-core processor and 3 GB of memory (Xen). We've had problems from the beginning: the server keeps crashing. It seems PHP keeps expanding until it consumes all the memory and crashes the server. Some folks have suggested that I should abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite, but that's just not possible for me, as I am not very tech savvy and need a simple GUI like cPanel to manage the mundane management tasks (I can't afford to hire a system administrator or get fully managed hosting). I have come across several posts discussing optimization of Apache for WordPress, but all of these lead to articles that are pretty dated, such as this roughly 4-year-old one from January 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/. The article is pretty detailed and seems helpful, but I stumble even on the first step. My httpd.conf only has 2 LoadModule directives:
      LoadModule fastinclude_module modules/mod_fastinclude.so
      LoadModule bwlimited_module modules/mod_bwlimited.so
    So I go totally bust right there. Further, my httpd.conf says that direct modifications to the Apache configuration file may be lost upon subsequent regeneration of the configuration file; to have modifications retained, all modifications must be checked into the configuration system by running /usr/local/cpanel/bin/apache_conf_distiller. I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks! P.S. The WordPress installation also has WP Super Cache installed. P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware; in turn, this means a standard VM build may have 4 vCPUs and 16 GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster: the resource pool summary looks almost 4:1 overcommitted, with a high amount of ballooned RAM; the resource allocation view's Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions; and the real-time memory utilization graph of the top VM in that listing, allocated 4 vCPUs and 64 GB of RAM, averages under 9 GB of use. What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? Do customers need to be educated, for example? What specific metric should be used to meter RAM usage? Tracking the peaks of "Active" versus time?

    Read the article

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64 bit machines. Do you compile a 64bit (standard) memcached binary and run that, or do people compile it in 32bit mode and run N instances (where N = machine_RAM / 4GB)? Consider a recommended deployment of Redis (from the Redis FAQ): Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32". If your application is already able to perform application-level sharding, it is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more memory efficient. Would not the same recommendation also apply to a memcached cluster?
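    As a rough illustration of why the 32-bit build is more memory-efficient for small items (a hedged sketch, not memcached's or Redis's real item layout): every pointer and size_t in the per-item bookkeeping doubles from 4 to 8 bytes on an LP64 build, so workloads dominated by small keys and values pay proportionally more overhead per item.
      #include <cstddef>
      #include <cstdio>

      // Hypothetical cache-entry header, just to show how pointer width adds up.
      struct item_sketch {
          item_sketch* next;      // hash-chain link
          item_sketch* prev;      // LRU link
          char*        key;
          char*        value;
          std::size_t  key_len;
          std::size_t  value_len;
      };

      int main() {
          // Prints 24 on a typical 32-bit (ILP32) build and 48 on a 64-bit (LP64) build,
          // since every field here is pointer-sized.
          std::printf("per-item bookkeeping: %zu bytes\n", sizeof(item_sketch));
          return 0;
      }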

    Read the article

  • Java OutOfMemoryError due to Linux RAM disk cache not freed

    - by Markus Jevring
    The process will run fine all day, then, bam, without warning, it will throw this error, sometimes seemingly in the middle of doing nothing. It happens at seemingly random times during the day. I checked to see if anything else was running on the machine, like scheduled backups, but found nothing. The machine has enough physical memory (2 GB, with about 1 GB free for a 300-500 MB load) and has a sufficient -Xmx specified. According to our sysadmin, the problem is that the RAM the kernel uses as a disk cache (apparently all but 8 MB) is not freed when the JVM needs to allocate memory, so the JVM process throws an OutOfMemoryError. This could be because Java asks the kernel if enough memory is available before allocating and finds that it is insufficient, resulting in a crash. I would like to think, however, that Java simply tries to allocate the memory via the kernel, and when the kernel gets such a request, it makes room for the application by throwing out some of the disk cache. Has anyone else run into this issue, and if so, what was the error and how did you solve it? We are currently using jdk1.6.0_20 on SLES 10 SP2, Linux 2.6.16.60-0.42.9-smp, in VMware ESX.

    Read the article

  • Why isn’t my autoreleased object getting released?

    - by zoul
    Hello. I am debugging a weird memory management error and I can’t figure it out. I noticed that some of my objects are staying in memory longer than expected. I checked all my memory management and finally got to the very improbable conclusion that some of my autorelease operations don’t result in a release. Under what circumstances is that possible? I created a small testing Canary class that logs a message in dealloc and have the following testing code in place: NSLog(@"On the main thread: %i.", [NSThread isMainThread]); [[[Canary alloc] init] autorelease]; According to the code we’re really on the main thread, but the dealloc in Canary does not get called until much later. The delay is not deterministic and can easily take seconds or more. How is that possible? The application runs on a Mac, the garbage collection is turned off (Objective-C Garbage Collection is set to Unsupported on the target.) I am mostly used to iOS, is memory management on OS X different in some important way?

    Read the article

  • Instruments (Leaks) and NSDateFormatter

    - by Cal
    When I run my iPhone app with Instruments' Leaks tool and parse a bunch of NSDates using NSDateFormatter, my memory use goes up about 1 MB and stays there, even though these NSDates should be dealloc'd after the parsing (I just discard them if they aren't new). I thought the malloc (in my heaviest stack trace below) could become part of the NSDate, but I also thought it could be memory that is only used during some intermediate step in parsing. Does anyone know which one it is, or how to find out? Also, is there a way to put a breakpoint on NSDate dealloc to see if that memory is really being reclaimed? Here's what my date formatter looks like for parsing these dates:
      df = [[NSDateFormatter alloc] init];
      [df setDateFormat:@"EEE, d MMM yyyy H:m:s z"];
    Here's the heaviest stack trace when the memory bumps up and stays there:
      0  libSystem.B.dylib    208.80 Kb  malloc
      1  libicucore.A.dylib   868.19 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
      2  libicucore.A.dylib   868.66 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
      3  libicucore.A.dylib   868.67 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
      4  libicucore.A.dylib   868.67 Kb  icu::DateFormatSymbols::initZoneStringFormat()
      5  libicucore.A.dylib   868.67 Kb  icu::DateFormatSymbols::getZoneStringFormat() const
      6  libicucore.A.dylib   868.67 Kb  icu::SimpleDateFormat::subParse(icu::UnicodeString const&, int&, unsigned short, int, signed char, signed char, signed char*, icu::Calendar&) const
      7  libicucore.A.dylib   868.67 Kb  icu::SimpleDateFormat::parse(icu::UnicodeString const&, icu::Calendar&, icu::ParsePosition&) const
      8  libicucore.A.dylib   868.67 Kb  icu::DateFormat::parse(icu::UnicodeString const&, icu::ParsePosition&) const
      9  libicucore.A.dylib   868.67 Kb  udat_parse
      10 CoreFoundation       868.67 Kb  CFDateFormatterGetAbsoluteTimeFromString
      11 CoreFoundation       868.67 Kb  CFDateFormatterCreateDateFromString
      12 Foundation           868.67 Kb  -[NSDateFormatter getObjectValue:forString:range:error:]
      13 Foundation           868.75 Kb  -[NSDateFormatter getObjectValue:forString:errorDescription:]
      14 Foundation           868.75 Kb  -[NSDateFormatter dateFromString:]
    Thanks!

    Read the article

  • How to find the leaky faucet that loads into Malloc 32kb

    - by Rob
    I have been messing around with the Leaks instrument trying to find which allocation is not being deallocated (I am still new to this) and could really use some experienced insight. I have this bit of code that seems to be the culprit: every time I press the button that calls it, an additional 32 KB of memory is allocated, and when the button is released that memory does not get deallocated. What I found was that every time AVAudioPlayer is called to play an m4a file, the final function to parse the m4a file is MP4BoxParser::Initialize(), and this in turn allocates 32 KB of memory through Cached_DataSource::ReadBytes. My question is, how do I go about deallocating that after it is finished, so that it doesn't keep allocating 32 KB every time the button is pressed? Any help you could provide is greatly appreciated!
      - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
          // stop playing
          [theAudio stop];
          // cancel any pending handleSingleTap messages
          [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(handleSingleTap) object:nil];
          UITouch* touch = [[event allTouches] anyObject];
          NSString* filename = [g_AppsList objectAtIndex: [touch view].tag];
          NSString *path = [[NSBundle mainBundle] pathForResource: filename ofType:@"m4a"];
          theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
          theAudio.delegate = self;
          [theAudio prepareToPlay];
          [theAudio setNumberOfLoops:-1];
          [theAudio setVolume: g_Volume];
          [theAudio play];
      }

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve a potentially very high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and I've found that a typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed, but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following: 1) responsible management of UIImage and UIImageView in the name of conserving physical RAM; 2) the merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain; 3) anything dealing with memory paging, particularly as it pertains to images. I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.

    Read the article

  • Core Data - How to check if a managed object's properties have been deallocated?

    - by georryan
    I've created a program that uses Core Data and it works beautifully. I've since attempted to move all my Core Data method calls and fetch routines into a self-contained class. My main program then instantiates that class and makes some basic method calls into it, and the class does all the Core Data work behind the scenes. What I'm running into is that sometimes, when I grab a managed object from the context, I'll have a valid object but its properties have been deallocated, and I'll cause a crash. I've played with zombies and looked for memory leaks, and what I have gathered is that the run loop is probably responsible for deallocating the memory, but I'm not sure. Is there a way to determine if that memory has been deallocated, and to force Core Data to get it back if I need to access it? My managedObjectContext never gets deallocated, and the fetchedResultsController never does either. I thought maybe I needed to use the [managedObjectContext refreshObject:mergeChanges:] method, or the [managedObjectContext setRetainsRegisteredObjects:] method, although I'm under the impression that the last one may not be the best bet, since it will be more memory intensive (from what I understand). These errors only popped up when I moved the Core Data calls into another class file, and they are random when they show up. Any insight would be appreciated. -Ryan

    Read the article

  • How to pass parameters to managed_shared_memory.construct() in Boost.Interprocess

    - by recipriversexclusion
    I've stared at the Boost.Interprocess documentation for hours but still haven't been able to figure this out. In the docs, they have an example of creating a vector in shared memory like so:
      //Define an STL compatible allocator of ints that allocates from the managed_shared_memory.
      //This allocator will allow placing containers in the segment
      typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
      //Alias a vector that uses the previous STL-like allocator so that it allocates
      //its values from the segment
      typedef vector<int, ShmemAllocator> MyVector;
      int main(int argc, char *argv[]) {
          //Create a new segment with given name and size
          managed_shared_memory segment(create_only, "MySharedMemory", 65536);
          //Initialize shared memory STL-compatible allocator
          const ShmemAllocator alloc_inst (segment.get_segment_manager());
          //Construct a vector named "MyVector" in shared memory with argument alloc_inst
          MyVector *myvector = segment.construct<MyVector>("MyVector")(alloc_inst);
    Now, I understand this. What I'm stuck on is how to pass a second parameter to segment.construct() to specify the number of elements. The Interprocess documentation gives the prototype for construct() as
      MyType *ptr = managed_memory_segment.construct<MyType>("Name") (par1, par2...);
    but when I try
      MyVector *myvector = segment.construct<MyVector>("MyVector")(100, alloc_inst);
    I get compilation errors. My questions are: Who actually gets passed the parameters par1, par2 from segment.construct: the constructor of the object, e.g. vector? My understanding is that the template allocator parameter is being passed. Is that correct? How can I add another parameter, in addition to alloc_inst, that is required by the constructor of the object being created in shared memory? There's very little information other than the terse Boost docs on this.
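    For reference, a minimal standalone sketch (not from the question) of one way the extra argument could be passed: the argument list after construct<MyVector>("MyVector") is forwarded to MyVector's constructor as a group, so the element count, a fill value and the allocator can go together, assuming vector's classic (count, value, allocator) constructor overload.
      #include <boost/interprocess/managed_shared_memory.hpp>
      #include <boost/interprocess/containers/vector.hpp>
      #include <boost/interprocess/allocators/allocator.hpp>
      #include <cstddef>

      using namespace boost::interprocess;

      typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
      typedef vector<int, ShmemAllocator> MyVector;

      int main() {
          shared_memory_object::remove("MySharedMemory");   // clean up any stale segment
          managed_shared_memory segment(create_only, "MySharedMemory", 65536);
          const ShmemAllocator alloc_inst(segment.get_segment_manager());

          // All three arguments are forwarded to MyVector's constructor:
          // 100 elements, each initialized to 0, using the shared-memory allocator.
          // The explicit std::size_t keeps the call from matching the iterator-pair overload.
          MyVector *myvector =
              segment.construct<MyVector>("MyVector")(std::size_t(100), 0, alloc_inst);

          bool ok = (myvector->size() == 100);
          segment.destroy<MyVector>("MyVector");
          shared_memory_object::remove("MySharedMemory");
          return ok ? 0 : 1;
      }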

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:
      class A { int a; int b; int* v; };
    where: the memory for v will be allocated in the constructor; v will be allocated once when an object of type A is created and will never need to be resized; the size of v will vary across all instances of A. The application will potentially have a huge number of such objects and will mostly need to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables. Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory? What tools and techniques can be used to test whether this fragmentation is a performance bottleneck? If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a prefetching scheme: for example, get an object of type A and operate on the other member variables while prefetching v. If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance? The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and compiled using either GCC or MSVC compilers.
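    As an illustration of the layout trade-off described above (a hedged sketch with hypothetical names, not code from the question): the pointer member puts each v in its own heap block, while a fixed-size inline array keeps the payload inside the object, so an array of such objects is one contiguous, prefetch-friendly block at the cost of always paying for max_length.
      #include <cstddef>
      #include <vector>

      // Pointer member: v lives in a separate heap allocation, so streaming many
      // A objects can touch two distinct memory regions per object.
      struct A {
          int a;
          int b;
          int* v;                           // allocated once in the constructor, never resized
          std::size_t n;
          explicit A(std::size_t count) : a(0), b(0), v(new int[count]()), n(count) {}
          ~A() { delete[] v; }
          A(const A&) = delete;             // keep the sketch simple: non-copyable
          A& operator=(const A&) = delete;
      };

      // Inline array: payload is stored inside the object itself, so a
      // std::vector<AFixed> is a single contiguous block.
      const std::size_t max_length = 64;    // assumed compile-time upper bound
      struct AFixed {
          int a;
          int b;
          int v[max_length];
          std::size_t n;                    // how many of the max_length slots are in use
      };

      int main() {
          std::vector<AFixed> hot(10000);   // contiguous, cache/prefetch friendly
          long sum = 0;
          for (std::size_t i = 0; i < hot.size(); ++i)
              sum += hot[i].a + hot[i].b;   // streaming access stays in one block
          return static_cast<int>(sum & 0xff);
      }
    To compare the two layouts on real data, cache-miss counters from tools such as perf stat on Linux or Valgrind's cachegrind are one way to see whether the extra indirection actually costs anything for a given access pattern.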

    Read the article

  • How to simulate OutOfMemory exception

    - by Gacek
    I need to refactor my project to make it immune to OutOfMemory exceptions. There are huge collections used in my project, and by changing one parameter I can make my program either more accurate or less memory-hungry... OK, that's the background. What I would like to do is run the routines in a loop: 1. Run the subroutines with the default parameter. 2. Catch the OutOfMemory exception, change the parameter, and try to run it again. 3. Repeat step 2 until the parameters allow the subroutines to run without throwing the exception (usually only one change will be needed). Now, I would like to test it. I know that I can throw the OutOfMemory exception on my own, but I would like to simulate a real situation. So the main question is: is there a way of setting some kind of memory limit for my program, after reaching which the OutOfMemory exception will be thrown automatically? For example, I would like to set a limit of, let's say, 400 MB of memory for my whole program, to simulate the situation where that is the amount of memory available in the system. Can it be done?
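    The question is about .NET, but the catch-and-retry pattern it describes is language-agnostic. As a hedged C++/POSIX sketch (hypothetical names, setrlimit standing in for whatever memory cap the .NET host provides): capping the address space makes over-large allocations fail naturally, so the retry loop can be exercised without faking the exception.
      #include <sys/resource.h>   // setrlimit (POSIX)
      #include <cstddef>
      #include <cstdio>
      #include <new>
      #include <vector>

      // Cap the address space so allocations beyond the limit throw std::bad_alloc.
      static void limit_memory(std::size_t bytes) {
          rlimit rl;
          rl.rlim_cur = bytes;
          rl.rlim_max = bytes;
          setrlimit(RLIMIT_AS, &rl);
      }

      // Stand-in for the "subroutine" whose accuracy parameter controls memory use.
      static void run_subroutine(std::size_t elements) {
          std::vector<double> data(elements);   // throws std::bad_alloc if it cannot fit
          std::printf("ran with %zu elements\n", data.size());
      }

      int main() {
          limit_memory(400u * 1024u * 1024u);             // simulate a 400 MB machine
          std::size_t elements = 200u * 1024u * 1024u;    // deliberately too big at first
          for (;;) {
              try {
                  run_subroutine(elements);
                  break;                                   // succeeded with this parameter
              } catch (const std::bad_alloc&) {
                  elements /= 2;                           // relax the parameter and retry
                  std::printf("allocation failed, retrying with %zu elements\n", elements);
              }
          }
          return 0;
      }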

    Read the article

  • Is extending a base class with a non-virtual destructor dangerous in C++?

    - by Akusete
    Take the following code:
      class A { };
      class B : public A { };
      class C : public A { int x; };

      int main(int argc, char** argv) {
          A* b = new B();
          A* c = new C();
          //in both cases, only ~A() is called, not ~B() or ~C()
          delete b;   //is this ok?
          delete c;   //does this line leak memory?
          return 0;
      }
    When calling delete on a class with a non-virtual destructor that adds data members (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no additional members and no explicit destructor behaviour (like class B), is everything OK? I ask this because I wanted to create a class to extend std::string (which I know is not recommended, but for the sake of the discussion just bear with it) and overload the +=, + operators. -Weffc++ gives me a warning because std::string has a non-virtual destructor, but does it matter if the subclass has no members and does not need to do anything in its destructor? FYI, the += overload was to do proper file path formatting, so the path class could be used like:
      class path : public std::string {
          //... overload +=, +
          //... add last_path_component, remove_path_component, ext, etc...
      };
      path foo = "/some/file/path";
      foo = foo + "filename.txt";
      //and so on...
    I just wanted to make sure someone doing this:
      path* foo = new path();
      std::string* bar = foo;
      delete bar;
    would not cause any problems with memory allocation.
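    A hedged alternative sketch (not the asker's code): keeping std::string as a member instead of a base class avoids ever deleting a derived object through a std::string*, while still giving the += and + path-joining syntax from the example above.
      #include <string>

      // A path type that wraps std::string rather than inheriting from it, so the
      // non-virtual destructor of std::string never comes into play.
      class path {
      public:
          path() {}
          path(const char* s) : value_(s) {}
          path(const std::string& s) : value_(s) {}

          // Append a component, inserting a separator when needed.
          path& operator+=(const std::string& component) {
              if (!value_.empty() && value_[value_.size() - 1] != '/')
                  value_ += '/';
              value_ += component;
              return *this;
          }
          friend path operator+(path lhs, const std::string& rhs) {
              lhs += rhs;
              return lhs;
          }

          const std::string& str() const { return value_; }

      private:
          std::string value_;   // owned by value, destroyed by path's own destructor
      };

      // usage:
      //   path foo = "/some/file/path";
      //   foo = foo + "filename.txt";   // "/some/file/path/filename.txt"
      //   std::string s = foo.str();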

    Read the article

  • Do COM DLL References Require Manual Disposal? If so, How?

    - by Drew
    I have written some code in VB that verifies that a particular port in the Windows Firewall is open, and opens one otherwise. The code uses references to three COM DLLs. I wrote a WindowsFirewall class, which Imports the primary namespace defined by the DLLs. Within members of the WindowsFirewall class I construct some of the types defined by the DLLs referenced. The following code isn't the entire class, but demonstrates what I am doing:
      Imports NetFwTypeLib

      Public Class WindowsFirewall
          Public Shared Function IsFirewallEnabled() As Boolean
              Dim icfMgr As INetFwMgr
              icfMgr = CType(System.Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwMgr")), INetFwMgr)

              Dim profile As INetFwProfile
              profile = icfMgr.LocalPolicy.CurrentProfile

              Dim fIsFirewallEnabled As Boolean
              fIsFirewallEnabled = profile.FirewallEnabled

              Return fIsFirewallEnabled
          End Function
      End Class
    I do not reference COM DLLs very often. I have read that unmanaged code may not be cleaned up by the garbage collector, and I would like to know how to make sure that I have not introduced any memory leaks. Please tell me (a) whether I have introduced a memory leak, and (b) how I may clean it up. (My theory is that the icfMgr and profile objects do allocate memory that remains unreleased until after the application closes. I am hopeful that setting their references equal to Nothing will mark them for garbage collection, since I can find no other way to dispose of them. Neither one implements IDisposable, and neither contains a Finalize method. I suspect those may not even be relevant here, and that both of those methods of releasing memory only apply to .NET types.)

    Read the article
