Search Results

Search found 12267 results on 491 pages for 'out of memory'.

Page 96/491 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Memory leak in chrome.extension.sendRequest()

    - by jprim
    Chrome Version: 9.0.597.19 (Build 68937) beta & current stable. I have simplified my code as far as possible and ended up with the attached extension.

    content.js (content script run on every site):

        setInterval(function() {
            chrome.extension.sendRequest({ }, function(response) {
                // Do nothing
            });
        }, 1);

    background.js (background page script):

        chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
            sendResponse({ });
        });

    When you install this extension, you can watch it eat up memory extremely fast (I got to 90MB in 1 minute with 9 tabs open), and you can speed the process up by opening more tabs. Of course, the extension I am actually developing does not send requests every millisecond, but only every 3 seconds. That just slows the growth down, though. A user who ran it in the background for a long time with many tabs open has reported 100MB of memory usage, and I can reproduce it to a less extreme extent, too.

    Read the article

  • memcpy() safety on adjacent memory regions

    - by JaredC
    I recently asked a question on using volatile and was directed to some very informative articles from Intel and others discussing memory barriers and their uses. After reading those articles, though, I have become quite paranoid. I have a 64-bit machine. Is it safe to memcpy into adjacent, non-overlapping regions of memory from multiple threads? For example, say I have a buffer:

        char buff[10];

    Is it always safe for one thread to memcpy into the first 5 bytes while a second thread copies into the last 5 bytes? My gut reaction (and some simple tests) indicates that this is completely safe, but I have been unable to find documentation anywhere that completely convinces me.
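
    For readers who want to experiment, here is a minimal sketch of the scenario (the buffer size and split are from the question; everything else is illustrative). Under the C++11 memory model, concurrent writes to distinct bytes are not a data race, so the copies below are well-defined:

        #include <cassert>
        #include <cstring>
        #include <thread>

        int main() {
            char buff[10];
            const char src[10] = {'A','A','A','A','A','B','B','B','B','B'};

            // Each thread copies into its own non-overlapping half.
            std::thread t1([&] { std::memcpy(buff,     src,     5); });
            std::thread t2([&] { std::memcpy(buff + 5, src + 5, 5); });
            t1.join();
            t2.join();

            assert(std::memcmp(buff, src, 10) == 0); // both halves arrive intact
            return 0;
        }

    Correctness aside, the two halves almost certainly share a cache line, so this pattern can suffer from false sharing: safe, but potentially slower than having one thread do both copies.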

    Read the article

  • Mathematica & J/Link: Memory Constraints?

    - by D-Bug
    I am running a compute-intensive benchmark using Mathematica and its J/Link Java interface. The benchmark grinds to a halt once a memory footprint of about 320 MB is reached, since this seems to be the limit; the garbage collector needs more and more time and eventually fails. The Mathematica function ReinstallJava takes a CommandLine option, so I tried ReinstallJava[CommandLine -> "java -Xmx2000m ..."], but Mathematica seems to ignore the -Xmx option completely. How can I set the -Xmx memory option for my Java program? Where does the limit of 320 MB come from? Any help would be greatly appreciated.

    Read the article

  • zeroing out memory

    - by robUK
    Hello, gcc 4.4.4, C89 here. I am just wondering what most C programmers do when they want to zero out memory. For example, I have a buffer of 1024 bytes. Sometimes I do this:

        char buffer[1024] = {0};

    which zeroes all the bytes. But should I instead declare it plain and use memset?

        char buffer[1024];
        ...
        memset(buffer, 0, sizeof(buffer));

    Is there any real reason you have to zero the memory? What is the worst that can happen by not doing it? Many thanks for any suggestions.
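
    A compilable sketch of both idioms, plus one concrete reason the zeroing matters (the buffer size is from the question; the rest is illustrative):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            /* Idiom 1: zero at the point of definition. */
            char a[1024] = {0};

            /* Idiom 2: define now, zero later; handy when the buffer is
               reused, e.g. cleared between iterations of a loop. */
            char b[1024];
            memset(b, 0, sizeof(b));

            /* One concrete payoff: after copying 5 bytes with no '\0',
               printing 'a' as a string is only safe because its tail is
               already zero. With an unzeroed buffer this would read past
               the data, which is undefined behavior. */
            memcpy(a, "hello", 5);
            printf("%s\n", a);
            (void)b; /* silence unused-variable warnings */
            return 0;
        }

    Reading an uninitialized buffer before anything sensible is in it is undefined behavior, and that is the "worst that can happen".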

    Read the article

  • malloc unable to assign memory + doesn't warn

    - by sraddhaj
        char *str = NULL;
        strsave(s, str, n + 1);
        printf("%s", str - n);

    When I debug this code in gdb, I find that str is 0x0, i.e. still null, and also that my code is not catching the failed memory allocation: the str == NULL perror branch never executes. Any ideas?

        void strsave(char *s, char *str, int n)
        {
            str = (char *)malloc(sizeof(char) * n);
            if (str == NULL)
                perror("failed to allocate memory");
            while (*s) {
                *str++ = *s++;
            }
            *str = '\0';
        }
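
    A likely explanation, sketched for context: C passes the pointer itself by value, so the address malloc returns inside strsave never reaches the caller. The allocation probably succeeds (hence no perror output), but the caller's str stays NULL. A minimal corrected version that returns the new pointer instead (names kept from the question; the rest is illustrative):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Return the freshly allocated copy so the caller can see it. */
        char *strsave(const char *s)
        {
            size_t len = strlen(s) + 1;       /* +1 for the terminator */
            char *copy = (char *)malloc(len);
            if (copy == NULL) {
                perror("failed to allocate memory");
                return NULL;
            }
            memcpy(copy, s, len);             /* copies the '\0' too */
            return copy;
        }

        int main(void)
        {
            char *str = strsave("hello");
            if (str != NULL) {
                printf("%s\n", str);
                free(str);
            }
            return 0;
        }

    Passing char **str and writing *str = malloc(...) would work equally well.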

    Read the article

  • In-memory data structure that supports boolean querying

    - by sanity
    I need to store data in memory where I map one or more key strings to an object, as follows:

        "green", "blue"  -> object1
        "red", "yellow"  -> object2

    I need to be able to efficiently retrieve the list of objects whose strings match some boolean criterion, such as:

        ("red" OR "green") AND NOT "blue"

    I'm working in Java, so the ideal solution would be an off-the-shelf Java library. I am, however, willing to implement something from scratch if necessary. Anyone have any ideas? I'd rather avoid the overhead of an in-memory database if possible; I'm hoping for something comparable in speed to a HashMap (or at least the same order of magnitude).
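
    The question asks for Java (where a library such as Apache Lucene, queried in memory, might fit), but the underlying structure is compact enough to sketch: map each key string to the set of object ids that carry it, then translate OR and AND NOT into set algebra. A hypothetical C++ sketch of the idea, with the toy data from the question:

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <map>
        #include <set>
        #include <string>

        using IdSet = std::set<int>;

        // Inverted index: key string -> ids of the objects tagged with it.
        const std::map<std::string, IdSet> index_ = {
            {"green", {1}}, {"blue",   {1}},
            {"red",   {2}}, {"yellow", {2}},
        };

        IdSet tagged(const std::string &key) {
            auto it = index_.find(key);
            return it == index_.end() ? IdSet{} : it->second;
        }

        IdSet set_or(const IdSet &a, const IdSet &b) {          // OR
            IdSet out;
            std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                           std::inserter(out, out.begin()));
            return out;
        }

        IdSet set_and_not(const IdSet &a, const IdSet &b) {     // AND NOT
            IdSet out;
            std::set_difference(a.begin(), a.end(), b.begin(), b.end(),
                                std::inserter(out, out.begin()));
            return out;
        }

        int main() {
            // ("red" OR "green") AND NOT "blue"  ->  object2
            for (int id : set_and_not(set_or(tagged("red"), tagged("green")),
                                      tagged("blue")))
                std::cout << "object" << id << "\n";
        }

    Each operator touches only the id sets involved, not every stored object, so lookups stay close to hash-map speed.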

    Read the article

  • memory access violation error - 0xC0000005

    - by nobody
    I get a memory access violation error sometimes, but I don't know where it comes from. While reviewing the code, I found a line that looks strange to me:

        delete m_p1;
        A *a = new A();
        a->b = *c;
        m_p1 = a;    // <-- the line that worries me

    Is it OK to use m_p1 like this after deleting it? Could this cause a memory access error?
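
    For what it is worth, the sequence above is legal as written: m_p1 is only assigned, never dereferenced, after the delete. The real hazard is the window between the delete and the reassignment; if new A() or the copy throws there, m_p1 is left dangling and the next access violates memory. A minimal sketch of the safer ordering, with hypothetical types standing in for the question's A, b and c:

        struct B { int value; };
        struct A { B b; };

        struct Holder {
            A *m_p1 = nullptr;

            // The question's order is: delete first, then build the
            // replacement. If construction or the copy throws in between,
            // m_p1 dangles. Building first and deleting last closes that
            // window.
            void replace(const B &src) {
                A *fresh = new A();  // if this throws, m_p1 is still valid
                fresh->b = src;
                delete m_p1;         // release the old object last
                m_p1 = fresh;
            }

            ~Holder() { delete m_p1; }
        };

        int main() {
            Holder h;
            B first{1};
            B second{2};
            h.replace(first);
            h.replace(second);  // old object freed only after the new one exists
            return 0;
        }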

    Read the article

  • About memory and deleting objects in C++

    - by barssala
    I will give an example and explain. First, I allocate an array of objects:

        CString *param = new CString[100];

    When I do this, my memory usage increases a little, since each CString carries its own internal storage. Then I store this pointer in some list of CStrings, something like:

        List<CString> myList = new List<CString>; // new list of CString
        myList.add(param);

    This is my question: when I delete myList, param is not deleted, right? And the memory behind param still exists? Or do I misunderstand?
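
    The asker's suspicion is correct for any container of pointers: the list holds only the pointer, not the objects, so destroying the list leaves the allocation alive (and leaked, unless something else still points at it). A small C++ sketch, with std::string standing in for MFC's CString so it compiles anywhere (the smart-pointer variant needs C++14):

        #include <list>
        #include <memory>
        #include <string>

        int main()
        {
            // Raw-pointer version, mirroring the question: the list stores
            // only the pointer value, so clearing or destroying the list
            // frees nothing.
            std::string *param = new std::string[100];
            std::list<std::string *> myList;
            myList.push_back(param);
            myList.clear();   // the 100 strings still exist; param still owns them
            delete[] param;   // memory from new[] must be released with delete[]

            // Alternative sketch: let the container own the allocation.
            std::list<std::unique_ptr<std::string[]>> owning;
            owning.push_back(std::make_unique<std::string[]>(100));
            return 0;         // owning releases its memory automatically here
        }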

    Read the article

  • memory leak in Zend_Db_Table_Row?

    - by Vincenzo
    This is the code I have:

        <?php
        $start = memory_get_usage();
        $table = new Zend_Db_Table('user');
        for ($i = 0; $i < 5; $i++) {
            $row = $table->createRow();
            $row->name = 'Test ' . $i;
            $row->save();
            unset($row);
            echo (memory_get_usage() - $start) . "\n";
        }

    This is what I see:

        90664
        93384
        96056
        98728
        101400

    Isn't that a memory leak? When I have 500 objects to insert into the DB in one script, I run out of memory. Can anyone help?

    Read the article

  • dynamic memory allocation [closed]

    - by gcc
    I want to write a program that creates (allocates memory for) and manipulates (adds elements to, grows, etc.) integer arrays dynamically, according to a given input sequence. The input sequence starts with the maximum number of arrays, then contains the integers to be put into the arrays, interleaved with one-letter commands that carry out tasks (activating the next array, deleting an array, etc.). I also want to create *c_arrays, a pointer to an array whose elements are the actual capacities of the arrays (how many integer slots are already allocated for each one). How should I organize (set up) the algorithm? See the sketch below for the core building block.
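
    Since the question is mostly about bookkeeping, here is a minimal sketch of one growable array, tracking exactly the per-array capacity the asker wants to keep in c_arrays (names are hypothetical). The standard organization is to keep data, size, and capacity together and double the capacity on overflow:

        #include <stdio.h>
        #include <stdlib.h>

        /* One growable integer array: element storage plus bookkeeping. */
        typedef struct {
            int *data;
            size_t size;      /* elements in use */
            size_t capacity;  /* slots already allocated */
        } IntArray;

        /* Append, doubling the allocation when it is full. */
        int push(IntArray *a, int value)
        {
            if (a->size == a->capacity) {
                size_t grown = a->capacity ? a->capacity * 2 : 4;
                int *p = (int *)realloc(a->data, grown * sizeof(int));
                if (p == NULL)
                    return -1;        /* old block is still valid */
                a->data = p;
                a->capacity = grown;
            }
            a->data[a->size++] = value;
            return 0;
        }

        int main(void)
        {
            IntArray a = {NULL, 0, 0};
            int i;
            for (i = 0; i < 10; i++)
                push(&a, i);
            printf("size=%zu capacity=%zu\n", a.size, a.capacity);
            free(a.data);
            return 0;
        }

    The program in the question would then hold an array of IntArray structs (one per array announced in the input) and dispatch on each one-letter command.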

    Read the article

  • removing image tag from memory

    - by Chapsterj
    I have seen some code that checks whether a background image on a div has loaded. It adds an img tag in memory, stores it in a variable, and uses that to watch for the image's load event. My question is: does that $img element stay in memory, and how can I remove it once the load event has been called?

        $(function() {
            var $div = $('div'),
                bg = $div.css('background-image');
            if (bg) {
                var src = bg.replace(/(^url\()|(\)$|[\"\'])/g, ''),
                    $img = $('<img>').attr('src', src).on('load', function() {
                        // do something, maybe:
                        $div.fadeIn();
                    });
            }
        });

    I got the code above from this post.

    Read the article

  • memory management question -- releasing an object which has to be returned

    - by ulag
    Hi, I have an NSMutableArray called playlist, used in a method called getAllPlaylists. The code is something like this:

        -(NSMutableArray *)getAllPlaylists
        {
            // playlist is an instance variable
            playlist = [[NSMutableArray alloc] init]; // memory leak here
            .
            .
            // some code here which populates the playlist array
            [playlist addObject:object1];
            .
            .
            return playlist;
        }

    The allocation of the playlist array is reported as a memory leak. In a scenario like this, where can I release the array? Or can I avoid the allocation and initialization of playlist here by doing something else?

    Read the article

  • 4GB of RAM in MacOSX 10.5, only 3GB in MacOSX 10.6

    - by Albert
    Hi, I was using MacOSX 10.5 on my MacBook until today, and I had 4GB of memory there. Now I have updated to MacOSX 10.6 and it only displays 3GB. Why is that, and how can I fix it? Also, I am a bit puzzled why most people say that a 32-bit system can under no circumstances access more than 3.2GB (well, most of the Google hits explain the 3GB issue that way, leaving out the fact that it worked earlier). Don't most systems have PAE nowadays? Thanks, Albert

    Read the article

  • Hardware error messages from syslogd

    - by Farhat
    I have a 64-core AMD server running CentOS on which I was running a long job. In the midst of the output I see the lines below. It appears to be a memory error. How severe is this, and what exactly does it indicate?

        Message from syslogd@heracles at Nov 7 21:00:02 ...
        kernel:[Hardware Error]: MC4_STATUS[Over|CE|MiscV|-|AddrV|-|-|CECC]: 0xdc10410040080a13
        Message from syslogd@heracles at Nov 7 21:00:02 ...
        kernel:[Hardware Error]: Northbridge Error (node 4): DRAM ECC error detected on the NB.
        Message from syslogd@heracles at Nov 7 21:00:02 ...
        kernel:[Hardware Error]: cache level: L3/GEN, mem/io: MEM, mem-tx: RD, part-proc: RES (no timeout)

    Read the article

  • How is htop "Swp" calculated?

    - by Thomas
    When I run htop (on OS X 10.6.8), I see something like this:

        1  [|||||||                    20.0%]   Tasks: 70 total, 0 running
        2  [|||                         7.2%]   Load average: 1.11 0.79 0.64
        3  [|||||||||||||||||||||||||||81.3%]   Uptime: 00:30:42
        4  [||                          5.8%]
        Mem[|||||||||||||||||||||3872/4096MB]
        Swp[                            0/0MB]

        PID USER PRI NI  VIRT   RES SHR S CPU% MEM%   TIME+ Command
        284  501  57  0 15.3G 1064M   0 S  0.0  6.5 0:01.26 /Applications/Firefox.app/Contents/MacOS/firefox -psn_0_90134
        437  501  57  0 14.8G  785M   0 S  0.0  4.8 0:00.18 /Applications/Thunderbird.app/Contents/MacOS/thunderbird -psn_0_114716
        428  501  63  0 12.8G  351M   0 S  1.0  2.1 0:00.51 /Applications/Firefox.app/Contents/MacOS/plugin-container.app/Contents/MacOS/
        696  501  63  0 11.7G  175M   0 S  0.0  1.1 0:00.02 /System/Library/Frameworks/QuickLook.framework/Resources/quicklookd.app/Conte
         38    0  33  0 11.1G  422M   0 S  0.0  2.6 0:00.59 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framewo
        183  501  48  0 10.9G  137M   0 S  0.0  0.8 0:00.03 /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder

    How can I have processes using gigabytes of VIRT memory and still 0MB of swap used?

    Read the article

  • Bank Interleave Requested but not enabled

    - by wbeard52
    I have an ECS A770M-A motherboard with a 2.2GHz Phenom 64 quad-core processor and 4 gigs of RAM; I believe the main memory is DDR2. I also have a 1GB DDR3 Zotac GeForce 9500GT video card installed in the computer. The problem I am having is that I get a "Bank Interleave Requested but not Enabled" error on startup. The computer boots up fine (no error) with the 4 gigs of RAM if I replace the video card with the previous 256MB card. I have also tried placing a single gig of RAM in the computer with the Zotac card and still get the error. My question is: what would typically cause this error, and is DDR3 video RAM compatible with DDR2 system RAM? Thanks

    Read the article

  • ECC RAM in GA-G33M-DS2R? Or any Gigabyte/G33M motherboard?

    - by Gregory Hoerner
    I'm looking to retire a server which has 12GB of ECC DDR2 RAM. I'd like to upgrade my multi-purpose machine (firewall, file server, VM host for Windows Home Server, etc.) using the RAM from that server. I was just wondering: has anyone had experience using ECC RAM in a GA-G33M-DS2R motherboard (or any Gigabyte GA-G33M-xxxx board, for that matter), or in any motherboard with the G33 chipset? I've searched everywhere and found generally positive reports of ECC memory working in non-ECC boards, but I would like some specific positive feedback before proceeding tonight. I have to kick the entire house offline, which I don't like to do without good reason :)

    Read the article

  • Turn off the Linux OOM killer by default?

    - by Peter Eisentraut
    The OOM killer on Linux wreaks havoc with various applications every so often, and it appears that not much is really done on the kernel development side to improve this. Would it not be better, as a best practice when setting up a new server, to reverse the default on the memory overcommitting, that is, turn it off (vm.overcommit_memory=2) unless you know you want it on for your particular use? And what would those use cases be where you know you want the overcommitting on? As a bonus, since the behavior in case of vm.overcommit_memory=2 depends on vm.overcommit_ratio and swap space, what would be a good rule of thumb for sizing the latter two so that this whole setup keeps working reasonably?
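
    For concreteness, the strict-accounting setup being proposed would look something like this in /etc/sysctl.conf (the ratio value is illustrative, not a recommendation):

        # Refuse new allocations once committed address space exceeds
        # swap + overcommit_ratio% of physical RAM.
        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    applied with sysctl -p. Under mode 2 the commit limit is swap + RAM * overcommit_ratio / 100, which is why swap size and the ratio have to be sized together, as the question notes.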

    Read the article

  • Should I consolidate multiple identical VMs into BSD jails?

    - by Josh
    We run a number of Openfire XMPP/Jabber servers. Due to the way Openfire works, we cannot easily run multiple Openfire instances on one server, so I have 5 identical VMware ESXi VMs, each with CentOS, MySQL, Java, and Openfire. They are exactly the same except for their IP addresses, the actual Openfire MySQL database, and its config file. I am wondering if this is the optimal configuration, or if it would be better to move these VMs to a single FreeBSD machine and put each one inside a FreeBSD jail. Specifically, I am wondering whether the benefit of VMware's Transparent Page Sharing (TPS) would outweigh the cost of running 5 identical OSes. Would I end up using less memory with one large FreeBSD machine and Java running in BSD jails?

    Read the article

  • Suggestion to change my Laptop RAM DDR3

    - by SP Gounder
    My Vaio VPCEA1BGN has two DDR3 modules, a 2GB PC3-10600 and a 1GB PC3-8500, both non-ECC, and there is no free slot left. I am planning to replace one of them with a 4GB module; which one should I remove? I am still confused about the ECC types. Any suggestions for what to look for while buying? I am considering Corsair or Transcend. The maximum the machine can hold is 8GB (2 x 4GB). The Crucial scan tool produced this report: http://www.crucial.com/systemscanner/viewscanbyid.aspx?id=28AC5F1531D926C3. I am using Win7 Professional 64-bit.

    Read the article

  • One of my apache processes is huge - how can I find out why?

    - by Malcolm Box
    I'm running Apache 2.2.12 with mod_wsgi, hosting a Django site. Most of the Apache child processes weigh in at about 125MB RSS, but occasionally I see one balloon to 1GB RSS. At that point there is usually one huge process (1GB), a couple of large ones (500MB), and the rest still around 125MB. These are the mod_wsgi daemon processes. I've tried memory-leak tracing in Python to see if it's the Django code, and I see no leaks; looking in the logs doesn't show any particularly strange requests either. I'm stumped on how to figure out what's causing this. Any ideas? Also, is there any workaround to kill the large Apache process when it gets too big, without bringing Apache down? Some more details: not using mod_php; using the prefork MPM.
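
    On the workaround question: mod_wsgi's daemon mode can recycle its own processes, which caps how long a leaking process can grow without restarting Apache. A hypothetical configuration along these lines (the group name and numbers are illustrative):

        # Restart each daemon process after it has handled 1000 requests,
        # so a slowly ballooning process is reclaimed automatically.
        WSGIDaemonProcess django-site processes=4 threads=8 maximum-requests=1000
        WSGIProcessGroup django-site

    Since the giant processes here are the mod_wsgi daemons rather than the prefork children, recycling them leaves the rest of Apache undisturbed.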

    Read the article

  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2GB is too much, which I find ridiculous considering that cheap DRAM is about 15 bucks a DIMM in bulk and our particular software runs better with more memory. I tried to find out how much Google's servers use, and pinning down a number is hard; the best I could find, in a Google research paper, was that in 2008 their commodity servers were using 2GB and 4GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM in use. I suspect at least 16GB is going to be the norm.

    Read the article

  • Performance effects of compressing Program Files on Windows / NTFS

    - by SRobertJames
    What are the performance effects of compressing Program Files on Windows NTFS? On a fast, multicore machine, the overhead of decompression is minimal. Machines are generally disk-bound, and if you can reduce the disk load through compression, you often speed things up. (Microsoft says that the built-in compression of Windows Search indexes actually improves speed for this reason.) On the other hand, Windows' virtual memory is complicated: perhaps compressed files cannot simply be paged out, and there may be other issues. In short: on a fast, multicore machine with a relatively slow disk, what performance effects will compressing Program Files have?

    Read the article
