Search Results

Search found 12282 results on 492 pages for 'memory deallocation'.


  • Ballooning Mac OS X kernel_task and Wired memory usage. How to diagnose / fix?

    - by user28930
    I have a very strange issue, which I'm having a hard time diagnosing as to the root cause. I have a Mac Pro (2008, 8-core 2.8 GHz, 8800GT) with 14 GB of RAM (recently upgraded because of this issue!). When I boot my system and log in, vm_stat / top / Activity Monitor will show that kernel_task has about 150 MB allocated and the machine has about 800 MB of Wired memory allocated. Even initially, 800 MB seems an awful lot of wired memory to be allocated with no applications running - but it gets worse. (NB: Wired is locked, unswappable memory.)
    After a very short time, sometimes triggered by something as simple as launching a terminal, kernel_task will balloon to 800-900 MB of Real Mem (RSIZE), and Wired memory will climb to 1.6 GB (implying that all the extra memory requests are for wired RAM in the kernel). If I quit everything (i.e. no running applications, bar an Activity Monitor or terminal to view top), there is no appreciable reduction in either kernel_task RSIZE or Wired memory usage. Going the opposite way and loading the system with tasks also shows that wired memory does not get reduced - and, importantly, that it is not reduced even in preference to heavy swapping. If I log out and log back in again, it reduces a bit (450 MB kernel_task, 1.28 GB Wired), but not back to the starting values. I'm not running any wacky kexts - and furthermore, kextstat shows no huge memory allocations there; the largest is com.apple.nvidia.nv50hal at about 4 MB. The machine feels sluggish overall when this has happened - unsurprisingly, because such a huge amount of RAM has been marked as non-pageable.
    So I have a few questions:
    1) Is there a good way to diagnose what has allocated all of this wired memory? It's often over twice the kernel_task size with no applications running. The real memory total doesn't seem to add up - it seems that there is a bunch of RAM that isn't being accounted for anywhere.
    2) What is happening to cause the kernel to suddenly require 6 times as much memory?

    Read the article

  • Does it take time to deallocate memory?

    - by jm1234567890
    I have a C++ program which, during execution, will allocate about 3-8 GB of memory to store a hash table (I use tr1/unordered_map) and various other data structures. However, at the end of execution, there is a long pause before returning to the shell. For example, at the very end of my main function I have
      std::cout << "End of execution" << endl;
    But the execution of my program will go something like
      $ ./program
      do stuff...
      End of execution
      [long pause of maybe 2 min]
      $   -- returns to shell
    Is this expected behavior or am I doing something wrong? I'm guessing that the program is deallocating the memory at the end. But commercial applications which use large amounts of memory (such as Photoshop) do not exhibit this pause when you close the application. Please advise :)
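    If the pause really is teardown time, one hedged workaround - a minimal sketch, not the asker's actual code; the table type and sizes are stand-ins - is to heap-allocate the big container and deliberately never destroy it, so the OS reclaims the whole address space at exit instead of the allocator freeing millions of nodes one by one:
      #include <unordered_map>   // the question uses tr1/unordered_map; same idea
      #include <iostream>

      int main() {
          // Stand-in for the real multi-GB table: heap-allocate and never delete it,
          // so there is no node-by-node teardown when main() returns.
          std::unordered_map<long, long>* table = new std::unordered_map<long, long>();
          for (long i = 0; i < 50000000; ++i)
              (*table)[i] = i;                 // populate as the real program would

          std::cout << "End of execution" << std::endl;
          // Intentionally leaked: the OS tears down the whole process image at exit,
          // which is usually far faster than running ~50M individual frees.
          return 0;
      }
    Whether this is acceptable depends on the program; leak-checking tools such as valgrind will of course report the container as leaked.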

    Read the article

  • OS memory allocation addresses

    - by user1777914
    A quick, curious question: are memory allocation addresses chosen by the language compiler, or is it the OS that chooses the addresses for the memory requested? This comes from a doubt about virtual memory, which is often summarized as "let the process think it owns all the memory" - but what happens on 64-bit architectures, where only 48 bits are used for memory addresses, if the process wants a higher address? Let's say you do int *a = malloc(sizeof(int)); and you have no memory left from the previous system call, so you need to ask the OS for more memory. Is the compiler the one that determines the memory address at which to allocate this variable, or does it just ask the OS for memory, which is then allocated at the address the OS returns?
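    A small illustrative sketch (not tied to any particular platform): the compiler only emits the call to the allocator; the actual address comes back at run time from malloc, which in turn obtains address ranges from the kernel (via brk/mmap on Linux), and it typically changes from run to run under address-space randomization:
      #include <cstdio>
      #include <cstdlib>

      int main() {
          // The compiler emits a call to malloc; the value stored in 'a' is decided
          // at run time by the allocator/kernel, not baked into the binary.
          int* a = static_cast<int*>(std::malloc(sizeof(int)));
          int* b = static_cast<int*>(std::malloc(64 * 1024 * 1024));  // large request, often served by mmap
          std::printf("small allocation at %p\n", static_cast<void*>(a));
          std::printf("large allocation at %p\n", static_cast<void*>(b));
          std::free(b);
          std::free(a);
          return 0;
      }
    Running it twice usually prints different addresses, which is one quick way to convince yourself the compiler did not choose them.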

    Read the article

  • Bizarre Linux memory report

    - by Igor Liner
    I took the following meminfo captures. I can't figure out how the free memory went from 8 GB to almost 25 GB when only about 4 GB of slab was freed. There was no change in process memory consumption at the time the meminfo outputs were taken.
    First meminfo capture, with 8 GB free memory:
      MemTotal:       66054256 kB
      MemFree:         8344960 kB
      Buffers:            1120 kB
      Cached:         30172312 kB
      SwapCached:            0 kB
      Active:         10795428 kB
      Inactive:        1914512 kB
      Active(anon):   10193124 kB
      Inactive(anon):  1441288 kB
      Active(file):     602304 kB
      Inactive(file):   473224 kB
      Unevictable:    26348912 kB
      Mlocked:        26348960 kB
      SwapTotal:             0 kB
      SwapFree:              0 kB
      Dirty:                 0 kB
      Writeback:             0 kB
      AnonPages:       8886304 kB
      Mapped:         26383052 kB
      Shmem:          29097904 kB
      Slab:            6006384 kB
      SReclaimable:    3512404 kB
      SUnreclaim:      2493980 kB
      KernelStack:       15240 kB
      PageTables:        78724 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      WritebackTmp:          0 kB
      CommitLimit:    33027128 kB
      Committed_AS:   44446908 kB
      VmallocTotal:   34359738367 kB
      VmallocUsed:      426656 kB
      VmallocChunk:   34325375716 kB
      HardwareCorrupted:     0 kB
      AnonHugePages:   7696384 kB
      HugePages_Total:       0
      HugePages_Free:        0
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB
      DirectMap4k:        6144 kB
      DirectMap2M:     2058240 kB
      DirectMap1G:    65011712 kB
    Second capture, with almost 25 GB free memory:
      MemTotal:       66054256 kB
      MemFree:        24949116 kB
      Buffers:            1120 kB
      Cached:         29085016 kB
      SwapCached:            0 kB
      Active:         10168904 kB
      Inactive:        1461156 kB
      Active(anon):   10168216 kB
      Inactive(anon):  1441956 kB
      Active(file):        688 kB
      Inactive(file):    19200 kB
      Unevictable:    26317328 kB
      Mlocked:        26317376 kB
      SwapTotal:             0 kB
      SwapFree:              0 kB
      Dirty:                 0 kB
      Writeback:             0 kB
      AnonPages:       8861224 kB
      Mapped:         26351488 kB
      Shmem:          29066248 kB
      Slab:            1503440 kB
      SReclaimable:     232880 kB
      SUnreclaim:      1270560 kB
      KernelStack:       15256 kB
      PageTables:        79664 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      WritebackTmp:          0 kB
      CommitLimit:    33027128 kB
      Committed_AS:   44418280 kB
      VmallocTotal:   34359738367 kB
      VmallocUsed:      426656 kB
      VmallocChunk:   34325375716 kB
      HardwareCorrupted:     0 kB
      AnonHugePages:   7665664 kB
      HugePages_Total:       0
      HugePages_Free:        0
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB
      DirectMap4k:        6144 kB
      DirectMap2M:     2058240 kB
      DirectMap1G:    65011712 kB
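    For reference, the deltas between the two captures above: MemFree grew by 24949116 - 8344960 = 16604156 kB (about 15.8 GB), while Slab shrank by 6006384 - 1503440 = 4502944 kB (about 4.3 GB) and Cached by 30172312 - 29085016 = 1087296 kB (about 1 GB, matching the drop in the file LRU lists). That leaves roughly 11 GB of the newly free memory unaccounted for by slab and page cache alone, which is exactly the gap the question is about.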

    Read the article

  • Memory Usage on Linux box does not match up with `free`

    - by Chris Lieb
    I have a Linux machine that is not running too much in the way of software but is somehow using 1.7GB of the 2GB of installed memory. When I run free, I get:
                   total       used       free     shared    buffers     cached
      Mem:       2072616    1979972      92644          0     164876     129740
      -/+ buffers/cache:    1685356     387260
      Swap:       498004       1632     496372
    However, when I run ps aux, the memory usage of all processes only comes out to 295.9MB, which is a far cry from the 1.7GB of memory that free reports as used. Why is there such a discrepancy?
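    For reference, the second line of that output is derived from the first: used minus buffers minus cached, i.e. 1979972 - 164876 - 129740 = 1685356 kB. So even after discounting the page cache, the kernel still reports about 1.6 GB in use, and the real question is the gap between that figure and the ~296 MB that ps attributes to processes.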

    Read the article

  • What is Paging in memory management?

    - by Fasih Khatib
    I was just reading Operating System Principles by Silberschatz et al. when I came across paging in memory management. I'm slightly confused about it. It states that physical memory (I assume that's RAM) is divided into frames, and logical memory is divided into pages. The CPU generates logical addresses containing a page number and an offset. The page number is used to look up the frame number in a page table, which gives the base address, so the physical address is calculated as base + offset. My question is: is a page table maintained for every process? I logically think that the answer would be yes, as every process will need to map its own pages to frames. I may be wrong; please clarify. Also: paging and segmentation (where 'holes' are created in memory) are two totally different techniques that are not used in combination. Correct?
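    A tiny sketch of the translation described above, using made-up numbers (4 KB pages and an 8-entry table; real page tables are far larger and, conceptually, each process has its own):
      #include <cstdio>

      int main() {
          // Illustrative numbers only: 4 KB pages and a tiny 8-entry page table.
          const unsigned PAGE_SIZE = 4096;
          // page_table[page_number] = frame number (one such table per process).
          const unsigned page_table[8] = {3, 7, 5, 2, 6, 1, 0, 4};

          unsigned logical  = 0x2ABC;                    // CPU-generated logical address
          unsigned page     = logical / PAGE_SIZE;       // page number = 2
          unsigned offset   = logical % PAGE_SIZE;       // offset      = 0xABC
          unsigned physical = page_table[page] * PAGE_SIZE + offset;  // frame 5 -> 0x5ABC

          std::printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
                      logical, page, offset, physical);
          return 0;
      }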

    Read the article

  • DNSCache excessive memory usage on Windows Server 2008 R2

    - by MikeT
    We are having an issue with the dnscache service where its memory usage becomes excessive (~6GB) after a week or two. Restarting the service frees this memory, but performing ipconfig /flushdns does not, and ipconfig /displaydns shows approx. 15-20 entries in the cache. We have checked and there appear to be approx. 150 DNS queries per second taking place, but I would not expect this to cause such a memory issue. I have tried to search MSDN for hotfixes or bug reports, but I could only find a reference to a memory leak in Windows 2003. Can anyone suggest how to proceed?

    Read the article

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much memory we should need "after" we're done initializing) and ended up with this report. What we can gather from the report is that:
    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like it to be returned to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] directly attached to the GC and totally unrelated to me, 1 char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause, as I get the same report if I comment them out in code.
    So my question is: what can I do to investigate further? I'm not really sure what to look for in debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, and to defragment said memory. Note that a slow solution is fine; if it were possible to manually defragment the large object heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help!
    A few notes I forgot:
    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.
    2) These are results of a release build, so a debug build cannot be the issue.
    3) This is probably quite important: I do not get these issues at all in the webdev server, only in IIS; in webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).

    Read the article

  • "Cannot allocate memory" while no process seems to be using up memory

    - by omat
    I am not competent in server issues, so any help is much appreciated. When I try to start a Python/Django shell on a Linux box, I get OSError: [Errno 12] Cannot allocate memory. free -m seems to confirm I am out of memory:
                   total       used       free     shared    buffers     cached
      Mem:           590        560         29          0          3         37
      -/+ buffers/cache:        518         71
      Swap:            0          0          0
    But I cannot see what is eating up the memory with top or ps aux:
      PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        1 root  20   0 24336  908    0 S  0.0  0.2  0:00.68 init
        2 root  20   0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
        3 root  20   0     0    0    0 S  0.0  0.0  0:04.85 ksoftirqd/0
    How can I identify the leak? Thanks. BTW, I am not sure if it is relevant, but the machine I am talking about is an AWS EC2 instance running Ubuntu 12.

    Read the article

  • Programs and memory consumption [closed]

    - by cobie
    I have a 4GB RAM MacBook Pro, but I still run out of memory when I have Chrome and a few other lightweight applications open, such as multiple windows of MacVim. These programs are written in C/C++, so technically they should be memory efficient, but why do they suck up all this memory? Is it just bad engineering, or is it graphical user interfaces? I have read about incredible feats performed in software development back in the early computing days with very limited memory, but now it just feels like applications expand to fill all my memory.

    Read the article

  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256mb RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m:
                   total       used       free     shared    buffers     cached
      Mem:           245        122        122          0         19         64
      -/+ buffers/cache:         38        206
      Swap:          511          0        511
    Unless I'm reading this wrong, 122mb is being used and only 84mb of that is disk cache. Here are all processes I'm running sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r):
      %MEM %CPU   RSS    VSZ COMMAND
       5.0  0.0 12648 633140 node /home/node/main/sites.js
       1.5  0.0  3884 251736 /usr/sbin/console-kit-daemon --no-daemon
       1.3  0.0  3328  77108 sshd: apeace [priv]
       0.9  0.0  2344  19624 -bash
       0.7  0.0  1776  23620 /sbin/init
       0.6  0.0  1624  77108 sshd: apeace@pts/0
       0.6  0.0  1544   9940 redis-server /etc/redis/redis.conf
       0.6  0.0  1524  25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105
       0.5  0.0  1324 119880 rsyslogd -c4
       0.4  0.0  1084  49308 /usr/sbin/sshd
       0.4  0.0  1028  44376 /usr/sbin/exim4 -bd -q30m
       0.3  0.0   904   6876 ps -eo pmem,pcpu,rss,vsize,args
       0.3  0.0   888  21124 cron
       0.3  0.0   868  23472 dbus-daemon --system --fork
       0.2  0.0   732  19624 -bash
       0.2  0.0   628   6128 /sbin/getty -8 38400 tty1
       0.2  0.0   628  16952 upstart-udev-bridge --daemon
       0.2  0.0   564  16800 udevd --daemon
       0.2  0.0   552  16796 udevd --daemon
       0.2  0.0   548  16796 udevd --daemon
       0.0  0.0     0      0 [xenwatch]
       0.0  0.0     0      0 [xenbus]
       0.0  0.0     0      0 [sync_supers]
       0.0  0.0     0      0 [netns]
       0.0  0.0     0      0 [migration/3]
       0.0  0.0     0      0 [migration/2]
       0.0  0.0     0      0 [migration/1]
       0.0  0.0     0      0 [migration/0]
       0.0  0.0     0      0 [kthreadd]
       0.0  0.0     0      0 [kswapd0]
       0.0  0.0     0      0 [kstriped]
       0.0  0.0     0      0 [ksoftirqd/3]
       0.0  0.0     0      0 [ksoftirqd/2]
       0.0  0.0     0      0 [ksoftirqd/1]
       0.0  0.0     0      0 [ksoftirqd/0]
       0.0  0.0     0      0 [ksnapd]
       0.0  0.0     0      0 [kseriod]
       0.0  0.0     0      0 [kjournald]
       0.0  0.0     0      0 [khvcd]
       0.0  0.0     0      0 [khelper]
       0.0  0.0     0      0 [kblockd/3]
       0.0  0.0     0      0 [kblockd/2]
       0.0  0.0     0      0 [kblockd/1]
       0.0  0.0     0      0 [kblockd/0]
       0.0  0.0     0      0 [flush-202:1]
       0.0  0.0     0      0 [events/3]
       0.0  0.0     0      0 [events/2]
       0.0  0.0     0      0 [events/1]
       0.0  0.0     0      0 [events/0]
       0.0  0.0     0      0 [crypto/3]
       0.0  0.0     0      0 [crypto/2]
       0.0  0.0     0      0 [crypto/1]
       0.0  0.0     0      0 [crypto/0]
       0.0  0.0     0      0 [cpuset]
       0.0  0.0     0      0 [bdi-default]
       0.0  0.0     0      0 [async/mgr]
       0.0  0.0     0      0 [aio/3]
       0.0  0.0     0      0 [aio/2]
       0.0  0.0     0      0 [aio/1]
       0.0  0.0     0      0 [aio/0]
    Now, I know that ps is not the best for viewing process memory usage, but that's because it tends to report more memory than is actually being used... meaning no matter how you look at it, all my processes combined shouldn't be using near 122mb, even if you account for the disk cache. What's more, memory usage is growing all the time. I've had to restart my server once a week, because once my 256mb fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit?! I'm new to server admin, so please if there's something obvious I'm overlooking point it out to me.
    Just for good measure, the output of cat /proc/meminfo:
      MemTotal:         251140 kB
      MemFree:          124604 kB
      Buffers:           20536 kB
      Cached:            66136 kB
      SwapCached:            0 kB
      Active:            65004 kB
      Inactive:          37576 kB
      Active(anon):      15932 kB
      Inactive(anon):      164 kB
      Active(file):      49072 kB
      Inactive(file):    37412 kB
      Unevictable:           0 kB
      Mlocked:               0 kB
      SwapTotal:        524284 kB
      SwapFree:         524284 kB
      Dirty:                 8 kB
      Writeback:             0 kB
      AnonPages:         15916 kB
      Mapped:            10668 kB
      Shmem:               188 kB
      Slab:              18604 kB
      SReclaimable:      10088 kB
      SUnreclaim:         8516 kB
      KernelStack:         536 kB
      PageTables:         1444 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      WritebackTmp:          0 kB
      CommitLimit:      649852 kB
      Committed_AS:      64224 kB
      VmallocTotal:   34359738367 kB
      VmallocUsed:         752 kB
      VmallocChunk:   34359737600 kB
      DirectMap4k:      262144 kB
      DirectMap2M:           0 kB
    EDIT: I had misinterpreted the meaning of free -m at first. But even so: the important thing is that my OS eventually begins to use swap memory if I don't restart my server, which disk caching wouldn't do. So where do I look to see what is using all this memory?

    Read the article

  • Windows 7 issue with multitasking and memory

    - by Nitesh
    I've been seeing some problems in my Windows OS recently. Let me first give my system configuration:
    Processor - Intel(R) Core(TM)2 Quad CPU Q8400 @ 2.66 GHz
    Installed memory (RAM) - 4.00 GB (3.00 GB usable)
    System type - 32-bit operating system
    I am using two OSes on this system: the first is Windows 7 and the other is CentOS. I have been using this setup for a long time with no problem, and all of a sudden, for the last couple of weeks, I have been facing problems in my Windows 7 OS. In Windows 7 I used to run multiple tasks almost every time I logged in and there was no problem, but now I am not able to do multiple jobs at the same time. For example:
    1) I am now not able to listen to music in Windows Media Player and view photos at the same time. All of a sudden the system stops responding, then comes back after 5 minutes, and the music resumes from where it stopped.
    2) When I start browsing the internet, it hangs all of a sudden, doesn't respond for 2 or 3 minutes, and then continues loading.
    This happens for every operation I do on the system. Even typing is difficult now; the machine hangs very frequently even though I am doing a single task. I have never come across this kind of problem before. So the first thing I did was check the usage of the processor and the memory. The processor usage looked fine: for a single task it was somewhere around 3 to 5%. The memory was the weird part: in spite of no task running, it was using somewhere around 34 to 41% of memory. So I opened Task Manager and clicked on Resource Monitor in the Performance tab. In the memory section of the monitoring tool I found the usage of my RAM; it was something like this:
    Hardware reserved - 1029 MB
    In Use - 1430 MB
    Modified - 49 MB
    Standby - 1566 MB
    Free - 22 MB
    I could also see: Available 1588 MB, Cached 1615 MB, Total 3067 MB, Installed 4096 MB.
    Well, this is all I could find out, and I have no idea why my computer is acting so weird all of a sudden; the performance problem is growing day by day. I also don't know if there is a problem in the BIOS - I have left it at default settings for a long time. Please help me, and thank you in advance for reading this and helping me.

    Read the article

  • Memory upgrade to 8GB on unibody MacBook

    - by dlamblin
    I bought the 13" unibody MacBook one month prior to its upgrade to become the new, improved 13" unibody MacBook Pro (with unopenable battery compartment, an extra 3 hours of battery life, a wider color gamut, an SD slot, a FireWire 800 port, and a new 8GB memory limit). My MacBook says it is limited to 4GB of DDR3 1066MHz memory in 2 SO-DIMMs. But that was written back when you couldn't get 4GB SO-DIMMs. Now that you can get them, the very similar MacBook Pro ships with 4GB standard and can be upgraded to 8GB. I've asked (Apple reps), and I'm repeatedly told that my model cannot be upgraded to 8. When I ask for the reason they always say: "Because that's the published limit at the time your Mac was built." I find this unconvincing. If they said: "Because the memory controller in the chipset is limited to 4GB, despite being seemingly identical to the memory controller in the newest MacBook model's chipset," then I'd just take their word for it. Has anyone tried it, or found any research as to whether two 4GB DDR3 1066MHz SO-DIMMs can be installed in the unibody MacBook without FireWire?

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sys admin. With that out of the way: my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server), and it is frequently running out of memory, despite the site currently being open to only 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this:
      Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7)
    Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. 4 days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question, which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?

    Read the article

  • What causes memory fragmentation in .NET

    - by Matt
    I am using Red Gate's ANTS memory profiler to debug a memory leak. It keeps warning me that:
    "Memory fragmentation may be causing .NET to reserve too much free memory."
    or
    "Memory fragmentation is affecting the size of the largest object that can be allocated."
    Because I have OCD, this problem must be resolved. What are some standard coding practices that help avoid memory fragmentation? Can you defragment it through some .NET methods? Would it even help?

    Read the article

  • Analyzing Memory Usage: Java vs C++ Negligible?

    - by Anthony
    How does the memory usage of an integer object written in Java compare/contrast with the memory usage of an integer object written in C++? Is the difference negligible? No difference? A big difference? I'm guessing it's the same because an int is an int regardless of the language (?). The reason I asked is that I was reading about the importance of knowing when a program's memory requirements will prevent the programmer from solving a given problem. What fascinated me is the amount of memory required for creating a single Java object. Take, for example, an integer object. Correct me if I'm wrong, but a Java Integer object requires 24 bytes of memory:
    - 4 bytes for its int instance variable
    - 16 bytes of overhead (reference to the object's class, garbage collection info & synchronization info)
    - 4 bytes of padding
    As another example, a Java array (which is implemented as an object) requires 48+ bytes:
    - 24 bytes of header info
    - 16 bytes of object overhead
    - 4 bytes for length
    - 4 bytes for padding
    - plus the memory needed to store the values
    How do these memory usages compare with the same code written in C++? I used to be oblivious to the memory usage of the C++ and Java programs I wrote, but now that I'm beginning to learn about algorithms, I'm having a greater appreciation for the computer's resources.
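    For the C++ side of the comparison, a minimal sketch (sizes are implementation-defined; the numbers in the comments are typical, not guaranteed): a plain int carries no per-object header at all, and even a large array is essentially just the payload:
      #include <cstdio>
      #include <vector>

      int main() {
          std::printf("sizeof(int)  = %zu bytes\n", sizeof(int));    // typically 4
          std::printf("sizeof(int*) = %zu bytes\n", sizeof(int*));   // 4 or 8

          // 1000 ints in C++ are essentially the payload plus the container's own
          // small, per-container header - there is no per-element object header.
          std::vector<int> v(1000);
          std::printf("1000 ints    = %zu bytes of payload\n", v.size() * sizeof(int));
          return 0;
      }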

    Read the article

  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade the RAM in a Sun T5120 server, replacing 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:
      -> show faulty
      Target              | Property               | Value
      --------------------+------------------------+---------------------------------
      /SP/faultmgmt/0     | fru                    | /SYS/MB
      /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:42
       faults/0           |                        |
      /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU3 Forced fail
       faults/0           |                        | (IBIST)
      /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:28
       faults/1           |                        |
      /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU2 Forced fail
       faults/1           |                        | (IBIST)
      /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:13
       faults/2           |                        |
      /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU1 Forced fail
       faults/2           |                        | (IBIST)
      /SP/faultmgmt/0/    | timestamp              | Dec 14 15:28:59
       faults/3           |                        |
      /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU0 Forced fail
       faults/3           |                        | (IBIST)
      /SP/faultmgmt/1     | fru                    | /SYS
      /SP/faultmgmt/1/    | timestamp              | Dec 14 15:29:42
       faults/0           |                        |
      /SP/faultmgmt/1/    | sp_detected_fault      | Dec 14 15:29:42 ERROR:
       faults/0           |                        | Unsupported memory
                          |                        | configuration
    As you can see, the only error message I get is "Unsupported memory configuration". Note that I'm absolutely sure that I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot this issue? This issue seems to be similar to the one explained at "Inserted disabled" while upgrading Sun SPARC T5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a nonexistent page...

    Read the article

  • iPhone - Memory Management - Using Leaks tool and getting some bizarre readings.

    - by Robert
    Hey all, I'm putting the finishing touches on a project of mine, so I figured I would run through it and see if and where I had any memory leaks. I found and fixed most of them, but there are a couple of things regarding the memory leaks and object allocations that I am confused about.
    1) There are 2 memory leaks for which I am not shown as responsible. There are 8 leaks attributed to AudioToolbox, with the function being RegisterEmbeddedAudioCodecs(); this accounts for about 1.5 KB of leaks. The other one is detected immediately when the app begins: Core Graphics is responsible, with the extra info being open_handle_to_dylib_path. For the audio leak I have looked over my audio code and it seems OK to me:
      self.musicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:songFilePath] error:NULL];
      [musicPlayer prepareToPlay];
    [musicPlayer play] is called later on in a function.
    2) Is it normal for there to be a spike in object allocation whenever a new view or controller is presented? My total memory usage is very, very low except for whenever I present a view controller; it spikes then immediately goes back down. I am guessing that this is just the phone handling all the information for switching or something.
    Blegh. Wall of text. Thanks in advance to anyone who helps! =)

    Read the article

  • Out of memory problem

    - by Sunil
    I'm running a C++ program on Ubuntu 10.04 (32-bit system architecture). If I calculate the amount of memory that my program uses, it comes to about 800MB. I have 4GB of RAM in place. But still, before the program even finishes, it throws an out-of-memory exception. Why is that happening? Is it because of the structure of the memory, implementation problems, or something else that could possibly trigger this issue? I've seen this problem quite a number of times before but never understood the reason behind it. Have any of you handled this case before?
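    One thing worth checking (an assumption, not a diagnosis): on a 32-bit build the hard limit is the process's virtual address space - roughly 3 GB of user space on 32-bit Linux - and, for any single allocation, the largest contiguous free block in it, not the amount of physical RAM. A sketch of how that failure surfaces:
      #include <cstddef>
      #include <iostream>
      #include <new>
      #include <vector>

      int main() {
          std::vector<char*> blocks;
          try {
              // Keep grabbing 256 MB chunks; in a 32-bit process this throws
              // std::bad_alloc once the address space (not the RAM) runs out.
              for (;;) {
                  blocks.push_back(new char[256u * 1024u * 1024u]);
                  std::cout << "allocated " << blocks.size() * 256 << " MB so far\n";
              }
          } catch (const std::bad_alloc&) {
              std::cout << "std::bad_alloc after " << blocks.size() * 256 << " MB\n";
          }
          for (std::size_t i = 0; i < blocks.size(); ++i)
              delete[] blocks[i];
          return 0;
      }
    If the program needs one large contiguous block (a big vector or hash-table resize, for example), that allocation can fail even though the machine has plenty of physical RAM free.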

    Read the article

  • Munin fills server memory

    - by danilo
    In the last few weeks it has happened several times that my vserver (Debian Lenny) ran out of RAM (500M) and therefore wasn't able to run Apache anymore. When looking at the processes with top, I saw that there were many open munin-limits and munin-cron processes consuming most of the memory. My guess would be that sometimes Apache temporarily needs more memory, which prevents munin-cron from running; and if munin-cron isn't able to stop itself, it fills the memory until nothing is left. I don't know whether this guess is true, but could someone tell me what the problem is and how to prevent it? If necessary I'll remove Munin, but I'd prefer to keep it running.

    Read the article

  • How is dynamic memory allocation handled when extreme reliability is required?

    - by sharptooth
    It looks like dynamic memory allocation without garbage collection is a recipe for disaster: dangling pointers here, memory leaks there. It is very easy to plant an error that is sometimes hard to find and that has severe consequences. How are these problems addressed when mission-critical programs are written? I mean, if I write a program that controls a spaceship like Voyager 1, which has to run for years, and I leave even the smallest leak, that leak can accumulate and halt the program sooner or later - and when that happens it translates into epic fail. How is dynamic memory allocation handled when a program needs to be extremely reliable?
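    One common C++ answer - sketched below under the assumption that dynamic allocation is permitted at all, since many safety-critical coding standards simply forbid it after initialization - is to tie every allocation to an owning object (RAII), so deallocation is not a separate step that can be forgotten or doubled. The TelemetryFrame name here is purely illustrative:
      #include <cstddef>
      #include <cstdio>
      #include <memory>
      #include <vector>

      // Hypothetical telemetry buffer owned by a smart pointer: it is freed exactly
      // once, automatically, even if an exception unwinds the stack.
      struct TelemetryFrame {
          std::vector<double> samples;
          explicit TelemetryFrame(std::size_t n) : samples(n) {}
      };

      void process_frame() {
          std::unique_ptr<TelemetryFrame> frame(new TelemetryFrame(1024));
          frame->samples[0] = 42.0;
          // ... use the frame ...
      }   // <- deterministic deallocation here; no garbage collector involved

      int main() {
          for (int i = 0; i < 1000; ++i)
              process_frame();        // no growth: each frame is released on scope exit
          std::printf("done\n");
          return 0;
      }
    The other widespread approach is to allocate everything statically or from fixed pools at startup, which is touched on in the embedded Linux question further down this page.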

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #053 – Final Post in Series

    - by Pinal Dave
    It has been a fantastic journey to write the Memory Lane series for an entire year. This series gave me the opportunity to go back and see what I have contributed to this blog throughout the last 7 years. It was indeed a fantastic series, as it let me witness how technology has grown over the years and how I have progressed in my career while writing this blog. It was also a fantastic experience for readers, as many joined during the last few years and were not sure what they had missed in earlier years. Let us continue with the final episode of the Memory Lane series. Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.
    2007
    Get Current User – Get Logged In User - Here is the straight script which lists logged-in SQL Server users.
    Disable All Triggers on a Database – Disable All Triggers on All Servers - Question: How to disable all the triggers for a database? Additionally, how to disable all the triggers for all servers? For the answer, execute the script in the blog post.
    Importance of Master Database for SQL Server Startup - I have received the following questions many times. I will list all the questions here and answer them together. What is the purpose of the master database? Should we back up the master database? Which database is a must-have database for SQL Server startup? Which are the default system databases created when SQL Server 2005 is installed for the first time? What happens if the master database is corrupted? The answers to all of these questions are very much related.
    2008
    DECLARE Multiple Variables in One Statement - SQL Server is a great product and it has many features which are very unique to SQL Server. Regarding the feature of SQL Server where multiple variables can be declared in one statement, it is absolutely possible to do.
    2009
    How to Enable Index – How to Disable Index – Incorrect syntax near 'ENABLE' - Many times I have seen that the index is disabled when there is a large update operation on the table. Bulk insert of a very large file into any table using SSIS is usually preceded by disabling the index and followed by enabling the index. I have seen many developers running the following query to disable the index.
    2010
    List of all the Views from Database - I received many emails saying that people have hundreds of views and now have no clue what is going on, how many of them have indexes, and how many do not. Some even asked me if there is any way to get a list of the views with the index property along with it. Here is the quick script which does exactly that. You can also include many other columns from the same view.
    Minimum Maximum Memory – Server Memory Options - I was recently reading about SQL Server memory options. While reading, one line really caught my attention: the minimum value allowed for the maximum memory option. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB).
    2011
    Fundamentals of Columnstore Index - There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests - it stores rows of data on a page - and column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data are relevant or not, column store queries need only search a much smaller number of columns.
    How to Ignore Columnstore Index Usage in Query - In summary, the question in simple words is: "How can we ignore using the columnstore index in selective queries?" Very interesting question - I can understand there may be cases when the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index. The SQL Server engine will use whichever other index is best after ignoring the columnstore index.
    2012
    Storing Variable Values in Temporary Array or Temporary List - SQL Server does not support arrays or a dynamic-length storage mechanism like a list. There are absolutely some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution. Additionally, sometimes the requirements are so simple that extraordinary coding is not required. Here is the simple case.
    Move Database Files MDF and LDF to Another Location - It is not common to keep the database in the same location where the OS is installed. Usually database files are on a SAN, a separate disk array, or SSDs. This is done usually for performance reasons and from a manageability perspective. The challenge comes up when a database which was installed at a non-preferred default location needs to move to a different location. Here is a quick tutorial on how you can do it.
    UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL - If your requirement is such that you want the top and bottom queries of the UNION result set independently sorted but in the same result set, you can add an additional static column and order by that column. Let us re-create the same scenario.
    Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video: http://www.youtube.com/watch?v=FVWIA-ACMNo
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Memory Usage for Databases on Linux

    - by Kyle Brandt
    With free output, what we care about for application memory usage is generally the amount of free memory shown on the -/+ buffers/cache line. What about database applications such as Oracle: is it important to have a good amount of buffers and cache available for the database to run well with all its IO? If that makes any sense, how do you figure out just how much?

    Read the article

  • Embedded Linux: Memory Fragmentation

    - by waffleman
    In many embedded systems, memory fragmentation is a concern, particularly for software that runs for long periods of time (months, years, etc.). For many projects the solution is simply not to use dynamic memory allocation such as malloc/free and new/delete. Global memory is used whenever possible, and memory pools for types that are frequently allocated and deallocated are a good strategy for avoiding dynamic memory management. In embedded Linux, how is this addressed? I see many libraries use dynamic memory. Is there a mechanism that the OS uses to prevent memory fragmentation? Does it clean up the heap periodically? Or should one avoid using these libraries in an embedded environment?
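    A minimal sketch of the fixed-block pool idea mentioned above (the class name, block size, and count are illustrative): all storage is reserved once up front, so repeated allocate/release cycles cannot fragment the heap, at the cost of a fixed block size and a hard capacity:
      #include <cstddef>
      #include <cstdio>

      // Fixed-block pool: one up-front reservation, with free blocks kept on a
      // singly linked list threaded through the blocks themselves.
      template <std::size_t BlockSize, std::size_t BlockCount>
      class FixedPool {
      public:
          FixedPool() : free_list_(0) {
              for (std::size_t i = 0; i < BlockCount; ++i)
                  release(&blocks_[i]);
          }
          void* acquire() {                   // O(1); never touches the system heap
              if (free_list_ == 0) return 0;  // pool exhausted - caller must handle it
              Block* b = free_list_;
              free_list_ = b->next;
              return b;
          }
          void release(void* p) {             // O(1); block goes back on the free list
              Block* b = static_cast<Block*>(p);
              b->next = free_list_;
              free_list_ = b;
          }
      private:
          union Block {
              Block* next;                    // valid while the block is free
              unsigned char bytes[BlockSize]; // payload while the block is in use
          };
          Block* free_list_;
          Block blocks_[BlockCount];          // the one up-front reservation
      };

      int main() {
          static FixedPool<64, 1024> pool;    // 1024 fixed 64-byte blocks, reserved once
          void* msg = pool.acquire();         // e.g. a message buffer
          // ... fill and send the message ...
          std::printf("pool block at %p\n", msg);
          pool.release(msg);
          return 0;
      }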

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX / OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.
    My questions:
    1) What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2) How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?)
    3) How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)
    Some possible methods:
    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
    - Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
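    A sketch of the first method listed above - a single packed 32-bpp buffer with direct index access; the Bitmap32 name and layout are illustrative, not a standard API:
      #include <cstddef>
      #include <cstdint>
      #include <cstdio>
      #include <vector>

      // One contiguous 32-bpp buffer: width*height*4 bytes, row-major, no padding,
      // so pixel (x, y) lives at index y*width + x.
      struct Bitmap32 {
          int width, height;
          std::vector<std::uint32_t> pixels;  // one 32-bit value per pixel

          Bitmap32(int w, int h)
              : width(w), height(h), pixels(static_cast<std::size_t>(w) * h, 0) {}

          std::uint32_t& at(int x, int y) {
              return pixels[static_cast<std::size_t>(y) * width + x];
          }
      };

      int main() {
          Bitmap32 img(1920, 1080);           // about 8 MB in one contiguous chunk
          img.at(10, 20) = 0xFF00FF00u;       // write one pixel (0xAARRGGBB here)
          std::printf("pixel(10,20) = 0x%08X, buffer = %zu bytes\n",
                      static_cast<unsigned>(img.at(10, 20)),
                      img.pixels.size() * sizeof(std::uint32_t));
          return 0;
      }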

    Read the article
