Search Results

Search found 12267 results on 491 pages for 'out of memory'.

Page 50/491 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different View blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out source from SVN to a local development install to try and convert some of those blocks into more optimized code. Here's the weird thing. Devel module lists memory consumption at 50mb on the Production site (Running Nginx, PHP 5.2.17, XCache and Zend Optimizer.) but only 14mb on my development site (Running Apache2, PHP 5.2.13 and XCache). These are nearly-identical versions of the same site — frankly, the Production site should use even less memory as I've disabled some of the modules running on the Dev site. Any idea why this might be the case?

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
    Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server

    Well, 11.1.1.6 is now available for download, so I thought I would build a Windows Server environment to run it. I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain.

    Required Software
    - 64-bit JDK
    - SOA Suite. If you want 64-bit then choose “Generic” rather than “Microsoft Windows 32bit JVM” or “Linux 32bit JVM”. This has links to all the required software. If you choose “Generic” then the Repository Creation Utility link does not show; you still need this, so change the platform to “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” to get the software. Similarly, if you need a database, change the platform to get the link to XE for Windows or Linux.

    If possible I recommend installing a 64-bit JDK, as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments.

    Installation Steps
    The following flow chart outlines the steps required in installing and configuring SOA Suite. The steps in the diagram are explained below.

    64-bit? Is a 64-bit installation required? The Windows & Linux installers will install 32-bit versions of the Sun JDK and JRockit. A separate JDK must be installed for 64-bit.

    Install 64-bit JDK: The 64-bit JDK can be either Hotspot or JRockit. You can choose either JDK 1.7 or 1.6.

    Install WebLogic: If you are using 64-bit then install WebLogic using “java -jar wls1036_generic.jar”. Make sure you include Coherence in the installation; the easiest way to do this is to accept the “Typical” installation.

    SOA Suite Required? If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain.

    Install SOA Suite: Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic. Note that to run the SOA installer on Windows the user must have admin privileges. I also found that on Windows Server 2008R2 I had to start the installer from a command prompt with administrative privileges; granting it privileges when it ran caused it to ignore the jreLoc parameter.

    Database Available? Do you have access to a database into which you can install the SOA schema? SOA Suite requires access to an Oracle database (it is supported on other databases but I would always use an Oracle database).

    Install Database: I use an 11gR2 Oracle database to avoid XE limitations. Make sure that you set the database character set to be Unicode (AL32UTF8). I also disabled the new security settings because they get in the way for a developer database. Don’t forget to check that the number of processes is at least 150 and the number of sessions is not set, or is set to at least 200 (in the DB init parameters).

    Run RCU: The SOA Suite database schemas are created by running the Repository Creation Utility. Install the “SOA and BPM Infrastructure” component to support SOA Suite. If you keep the schema prefix as “DEV” then the config wizard is easier to complete.

    Run Config Wizard: The Config Wizard creates the domain which hosts the WebLogic server instances. To get a minimum footprint SOA installation, choose the “Oracle Enterprise Manager” and “Oracle SOA Suite for developers” products. All other required products will be automatically selected.

    The “for developers” installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them. This reduces the number of JVMs required to run the system and hence the amount of memory required. This is not suitable for anything other than a developer environment, as it mixes the admin and runtime functions together in a single server. It also takes a long time to load all the required modules, making start-up a slow process.

    If it exists, I would recommend running the config wizard found in the “oracle_common/common/bin” directory under the middleware home. This should have access to all the templates, including SOA.

    If you also want to run BAM in the same JVM as everything else then you need to “Select Optional Configuration” for “Managed Servers, Clusters and Machines”. To target BAM at the AdminServer, delete the “bam_server1” managed server that is created by default. This will result in BAM being targeted at the AdminServer.

    Installation Issues
    I had a few problems when I came to test everything in my mega-JVM. The following applications were not targeted, so I needed to target them at the AdminServer: b2bui, composer, Healthcare UI, FMW Welcome Page Application (11.1.0.0.0).

    How Memory Efficient is It?
    On a Windows 2008R2 Server running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G of memory. I allocated a minimum of 512M to the PermGen and a minimum of 1.5G for the heap. The settings from setSOADomainEnv are shown below:

        set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m
        set PORT_MEM_ARGS=-Xms1536m -Xmx2048m
        set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m
        set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m

    I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9G – just below the 3G I allocated to the VM. Performance is not stellar, but it runs and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!

    Read the article

  • Identify memory leak in Java app

    - by Vincent Ma
    One important advantage of Java is that the programmer doesn't have to worry about memory management; the GC handles it well. Maybe this is one reason why Java is so popular. As a Java programmer, do you really not have to care about it? After you meet an Out of Memory error you will realize that's not true. Java GC and memory is a big topic (you can get some background here); today let me just show how to identify a memory leak quickly. Let's quickly review the demo Java code. It's one common kind of memory leak: a static collection to which objects keep being added.

        import java.util.ArrayList;
        import java.util.List;

        public class MemoryTest {
            public static void main(String[] args) {
                new Thread(new MemoryLeak(), "MemoryLeak").start();
            }
        }

        class MemoryLeak implements Runnable {
            public static List<Integer> leakList = new ArrayList<Integer>();

            public void run() {
                int num = 0;
                while (true) {
                    try {
                        Thread.sleep(1);
                    } catch (InterruptedException e) {
                    }
                    num++;
                    Integer i = new Integer(num);
                    leakList.add(i);
                }
            }
        }

    Run it with:

        java -verbose:gc -XX:+PrintGCDetails -Xmx60m -XX:MaxPermSize=160m MemoryTest

    After a few minutes you will get:

        Exception in thread "MemoryLeak" java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2760)
            at java.util.Arrays.copyOf(Arrays.java:2734)
            at java.util.ArrayList.ensureCapacity(ArrayList.java:167)
            at java.util.ArrayList.add(ArrayList.java:351)
            at MemoryLeak.run(MemoryTest.java:25)
            at java.lang.Thread.run(Thread.java:619)
        Heap
         def new generation   total 18432K, used 3703K [0x045e0000, 0x059e0000, 0x059e0000)
          eden space 16384K,  22% used [0x045e0000, 0x0497dde0, 0x055e0000)
          from space 2048K,    0% used [0x055e0000, 0x055e0000, 0x057e0000)
          to   space 2048K,    0% used [0x057e0000, 0x057e0000, 0x059e0000)
         tenured generation   total 40960K, used 40959K [0x059e0000, 0x081e0000, 0x081e0000)
           the space 40960K,  99% used [0x059e0000, 0x081dfff8, 0x081e0000, 0x081e0000)
         compacting perm gen  total 12288K, used 2083K [0x081e0000, 0x08de0000, 0x10de0000)
           the space 12288K,  16% used [0x081e0000, 0x083e8c50, 0x083e8e00, 0x08de0000)
        No shared spaces configured.

    OK, let's quickly identify it using JProfiler. Download JProfiler, run it and attach to MemoryTest. Find the largest objects in the Memory View; here it is Integer. Then select Integer, go to the Heap Walker, and get the GC graph for this object. That quickly points you to the code that raises this issue. Enjoy it.
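
    A hedged follow-up, not part of the original post: you can also get the same hprof without a profiler by starting the JVM with -XX:+HeapDumpOnOutOfMemoryError, and the leak itself disappears once the static collection is bounded. The sketch below shows one illustrative way to bound it; the BoundedCache class name and the 10000-entry limit are assumptions for the example, not from the post.

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class BoundedCache {
            // Illustrative limit; pick whatever your application can actually afford to retain.
            private static final int MAX_ENTRIES = 10000;

            private static final Map<Integer, Integer> cache =
                    new LinkedHashMap<Integer, Integer>(16, 0.75f, true) {
                        @Override
                        protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                            // Evict the least-recently-used entry instead of growing without bound.
                            return size() > MAX_ENTRIES;
                        }
                    };

            public static void main(String[] args) {
                for (int i = 0; i < 1000000; i++) {
                    cache.put(i, i);
                }
                // Prints 10000: the map never retains more live entries than the limit.
                System.out.println(cache.size());
            }
        }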

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games -- something the player downloads and runs on their local computer. There are many memory editors that allow players to find and freeze values, like your player's health. How do you prevent cheating via memory modification? What strategies are effective against this kind of cheating? For reference, I know that players can:

    - Search for something by value or range
    - Search for something that changed value
    - Set memory values
    - Freeze memory values

    I'm looking for some good strategies. Two I use that are mediocre are: displaying values as a percentage instead of the number (e.g. 46/50 = 92% health), and a low-level class that holds values in an array and moves them with each change (for example, instead of an int, I have a class that's an array of ints, and whenever the value changes, I use a different, randomly-chosen array item to hold the value).
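
    A hedged sketch, not from the original post: a variant of the "don't store the raw value" idea above, using an XOR mask that is re-rolled on every write instead of a rotating array. It only raises the bar against naive value searches (a determined cheater can still find the pair of fields), and the class name and usage are purely illustrative.

        import java.util.concurrent.ThreadLocalRandom;

        // The stat is kept XOR-ed with a key that changes on every write, so a memory
        // search for "46" or "36" never finds a stable address holding the real value.
        public final class ObfuscatedInt {
            private int key;     // random mask, re-rolled on each write
            private int masked;  // value ^ key

            public ObfuscatedInt(int value) {
                set(value);
            }

            public void set(int value) {
                key = ThreadLocalRandom.current().nextInt();
                masked = value ^ key;
            }

            public int get() {
                return masked ^ key;
            }

            public static void main(String[] args) {
                ObfuscatedInt health = new ObfuscatedInt(46);
                health.set(health.get() - 10);     // take damage
                System.out.println(health.get());  // prints 36
            }
        }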

    Read the article

  • Parameters for selection of operating system, memory and processor for an embedded system?

    - by James
    I am developing embedded real-time system software (in C). I have designed the s/w architecture - we know the objects required, the interactions between them, and the IPC communication between tasks. Based on this information, I need to decide on the operating system (RTOS), microprocessor and memory size requirements. (Most likely I would be using Quadros, as it has been suggested by the client based on their prior experience in similar projects.) But I am confused about which one to begin with, since the choice of one could impact the selection of the others. Could you also guide me on the parameters to consider when estimating the memory requirements (lower and upper limits) from the s/w design? (The cost of the component(s) can be ignored for this evaluation.)

    Read the article

  • Cocos2d update leaking memory

    - by Andrey Chernukha
    I have a weird issue - my app is leaking memory on the device only, not in the simulator. It leaks if I schedule the update method anywhere, on any scene. It leaks even though the update method is empty; there's nothing inside it except an NSLog. How can that be? I have even scheduled update on the very first scene, where it seems there's nothing to leak, and scheduled another empty one, and it's leaking - or not leaking but allocating something; the result is the same: the amount of memory consumed keeps increasing and my app soon crashes. I can detect the leak using Instruments (Memory - Activity Monitor) or with the help of the following function:

        void report_memory(void) {
            struct task_basic_info info;
            mach_msg_type_number_t size = sizeof(info);
            kern_return_t kerr = task_info(mach_task_self(),
                                           TASK_BASIC_INFO,
                                           (task_info_t)&info,
                                           &size);
            if (kerr == KERN_SUCCESS) {
                NSLog(@"Memory in use (in bytes): %u", info.resident_size);
            } else {
                NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
            }
        }

    Can anyone explain to me what's going on?

    Read the article

  • Good free software to track and graph memory and CPU usage on Windows?

    - by TheLQ
    There are many questions on Linux memory tracking, but I haven't seen any for Windows. In my case, however, it's a Windows XP Pro box whose memory and CPU usage I need to track. The reason I need it is a server program I'm trying out that is eating all my processor and some of my memory, which freezes my RDP session and System Explorer and even makes it difficult to log in physically. As this is a very constrained server I'm working off of (768 MB of RAM and a Pentium 4, which this program swallows), I need a program that doesn't run or require a webserver. I can give it a MySQL database if necessary, however. Are there any suggestions for such a program?

    Read the article

  • Does lshw list the "factory" speed of a memory module or the effective speed and how to find the former?

    - by Panayiotis Karabassis
    I hope I phrased this correctly. lshw gives:

        description: DIMM Synchronous 400 MHz (2.5 ns)
        product: M378B5773CH0-CH9
        vendor: Samsung
        physical id: 0
        slot: DIMM0
        size: 2GiB
        width: 64 bits
        clock: 400MHz (2.5ns)

    And indeed the memory speed is set to 800MHz in the BIOS, which I think makes sense since it is double data rate. On the other hand, Googling strongly suggests that this product number corresponds to the PC3-10600 type, which is 1333MHz, not 800MHz. And this seems to be confirmed in the BIOS, where if I select Auto for the memory bus speed, 1333MHz is selected "based on SPD settings". However, in the latter case the computer does not boot, i.e. the kernel panics, complaining that something attempted to kill the Idle process. So I am beginning to suspect that I have been given defective memory, that the technician who installed it saw this, and lowered the bus speed. Is this a possibility?

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().

    Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11x

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:

        ISM, T4-2  before  after
        ---------  ------  -----
        create        702     64
        destroy       495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.

    Read the article

  • SQL Monitor Custom Metric: Out of memory errors

    The number of out of memory errors that have occurred within a rolling five-minute window. If you just want to keep an eye out for any memory errors, you can watch the ring buffers for the Out of memory errors alert when it gets registered there. Get alerts within 15 seconds of SQL Server issues: SQL Monitor checks performance data every 15 seconds, so you can fix issues before your users even notice them. Start monitoring with a free trial.

    Read the article

  • How to Increase Memory Allocated to IIS .NET Application?

    - by Mark Hansen
    We are using Windows 2008 R2 and IIS 7 running on Amazon EC2. IIS is running a single .NET application written in C#. We are having performance issues and I want to give the application more memory, but I cannot figure out how to do it. How do I control the amount of memory that the CLR gets? I'm a total newbie with IIS, .NET and the CLR. If I were working with Java, I would just use the -Xmx flag to increase the memory available to the JVM (e.g., -Xmx3000m for 3GB). But, I cannot seem to figure out how to do this in the Windows world.

    Read the article

  • Reading the memory of an N64

    - by toazron1
    I'm looking for a way to read the memory of an N64, while the game is running, in real time. I have a C# program which hooks into the memory of a running emulator and tracks SSB64 stats. I want to do the same thing with the physical N64. Currently it is possible to read the memory with a GameShark Pro; however, it's extremely slow, buggy, and not practical for what I am trying to accomplish. Would it be possible to tap into the GameShark, or the N64 directly, to access the memory in real time? Thanks!

    Read the article

  • How to keep memory consumption below 500 MB, or fewer than 25 background processes, on a netbook?

    - by overmann
    I bought a netbook yesterday (I'm loving it), but I will never understand why there need to be so many processes running in the background. I worry about other users who have no idea about this and keep using their computers with occasional choppiness because 70 background processes are occupying most of the memory. I'd like to keep my memory consumption below 500 MB (I have 1 GB) - is this possible? What are your ideas for making this work? I always run Microsoft Security Essentials at startup with real-time protection; how many features can I disable to reach my target memory usage?

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson used to always hold the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and be backward compatible with plugins, there were limits to how far we could go with this approach.

    Memory optimizations almost always come with a related cost; in this case it's the additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other, one for Jobs and the other for Builds.

    For the jobs cache:

    - hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (could be because of a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.
    - hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the Builds cache instead.
    - hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this; first verify whether the I/O is caused by frequent eviction (see above), rather than by the cache not being large enough.

    For the builds cache: The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the Hudson home page. The cache is shared among builds for different jobs since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    - hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent Job cache setting, applied to Builds.
    - hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds you might consider bumping this up.
    - hudson.job.builds.cache.max_entries (default=10240): The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
             -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

        num     #instances         #bytes  class name
        523:            28            896  hudson.model.RunMap$LazyRunValue$Key
        1200:            3             96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored; they are just the sizes of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds can all be from 1 job or from all 3 jobs. Over time on an idle system, these should get evicted and the memory cache should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access of a job or a build by a cron thread resets the eviction timer.

    Read the article

  • Disposing of ContentManager increases memory usage

    - by Havsmonstret
    I'm trying to wrap my head around how memory management works in XNA 4.0. I've created a screen management test, and when I close a screen, the ContentManager created by that screen is unloaded. I have used ANTS Memory Manager to look at how the memory usage changes when I do this, and it gives me some results that make me scratch my head. The game starts by loading two textures (435kB and 48,3kB), which puts the usage at about 61MB. Then, when I delete the screen and run Unload on the ContentManager, the memory usage drops to 56,5MB but immediately afterwards goes up to 64,8MB. Am I doing something wrong, or is this usual for XNA games? Do I have to dispose of everything the ContentManager loads separately, or do I need to do something more with the ContentManager? Thanks in advance!

    Read the article

  • Can an OOM be caused by not finding enough contiguous memory?

    - by raticulin
    I start some Java code with -Xmx1024m, and at some point I get an hprof due to an OOM. The hprof shows just 320mb, and gives me a stack trace:

        at java.util.Arrays.copyOfRange([CII)[C (Arrays.java:3209)
        at java.lang.String.<init>([CII)V (String.java:215)
        at java.lang.StringBuilder.toString()Ljava/lang/String; (StringBuilder.java:430)
        ...

    This comes from a large string I am copying. I remember reading somewhere (cannot find where) that what happens in these cases is: the process still has not consumed 1gb of memory, it is way below that; even though the heap is still below 1gb, it needs some amount of memory, and for copyOfRange() it has to be contiguous memory, so even if it is not over the limit yet, it cannot find a large enough piece of memory on the host, and it fails with an OOM. I have tried to look for documentation on this (copyOfRange() needing a block of contiguous memory), but could not find any. The other possible culprit would be not enough permgen memory. Can someone confirm or refute the contiguous memory hypothesis? Any pointer to some documentation would help too.
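
    A small illustration, not the poster's code and not a confirmation of the contiguous-memory hypothesis: one commonly cited explanation for the stack trace above is simply that the copy made by copyOfRange() needs a second array the same size as the source, so live data is transiently doubled while the copy exists. If -Xmx leaves no room for that second block, the copy is where the OutOfMemoryError surfaces, even though the retained data is well under the limit. The array size and the suggested -Xmx below are arbitrary illustrative values.

        import java.util.Arrays;

        public class CopyDoubles {
            private static void report(String label) {
                Runtime rt = Runtime.getRuntime();
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                long maxMb = rt.maxMemory() / (1024 * 1024);
                System.out.println(label + ": ~" + usedMb + " MB used, max " + maxMb + " MB");
            }

            public static void main(String[] args) {
                report("start");
                char[] original = new char[15000000];   // roughly 30 MB of char data
                report("after building the original");
                // The copy needs another ~30 MB block; with a small enough -Xmx this
                // allocation is the one that throws, mirroring the hprof stack trace.
                char[] copy = Arrays.copyOfRange(original, 0, original.length);
                report("after the copy");
                System.out.println(original.length + " " + copy.length);
            }
        }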

    Read the article

  • How to force the operating system to give Java more memory?

    - by Denny
    Hello, I've got a problem with Java jar files and memory. I use NetBeans 6.7 to develop an application, and this application needs more memory to run because it converts other files. Whenever this application converts a 6-10 MB file, it crashes. So I set the NetBeans VM options to -Xms32m -Xmx256m, and the application can convert 6-10 MB files with no problem. I Clean and Build the project so it makes a jar file of my application. I run the jar on my computer and use jconsole to monitor the memory; the maximum memory available to the application shows as 256 MB. But whenever I move it to some other computers, it shows 65-66 MB in jconsole and the application crashes when converting 6-10 MB files. So I need to use the command prompt, java -jar -Xmx256m myjar.jar, to run the jar with the maximum memory. Why can this happen - on my computer the maximum memory shows 256 MB but on other computers only 65-66 MB? Can I force the other computers to give extra maximum memory to my application? Thank you for your answer. I'm sorry for my inadequate English; if you find my question hard to understand, please let me know. Best regards, Denny. PS: FYI, the computer I used to develop the application has 2 GB of RAM; the other computers I tested on have 1-2 GB of RAM.
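
    A hedged sketch, not from the original question: the manifest of an executable jar cannot set -Xmx for the JVM that runs it, so on machines where a plain java -jar start falls back to a small default heap (likely what produces the 65-66 MB seen above), a common workaround is a launcher main() that checks how much heap it was given and relaunches itself with explicit memory flags. The jar name myjar.jar and the 200 MB threshold are illustrative assumptions.

        import java.io.File;
        import java.io.IOException;

        public class Launcher {
            public static void main(String[] args) throws IOException, InterruptedException {
                long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
                if (maxMb < 200) {
                    // Relaunch this jar with an explicit -Xmx using the same JVM installation.
                    String javaBin = System.getProperty("java.home")
                            + File.separator + "bin" + File.separator + "java";
                    ProcessBuilder pb = new ProcessBuilder(javaBin, "-Xmx256m", "-jar", "myjar.jar");
                    pb.inheritIO();                        // show the child's output in this console
                    System.exit(pb.start().waitFor());     // hand over to the properly sized JVM
                }
                runConverter(args);                        // enough heap: do the real work here
            }

            private static void runConverter(String[] args) {
                System.out.println("Running with "
                        + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB max heap");
            }
        }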

    Read the article

  • Can you catch exceeded allocated memory error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that gets rows from the database and assigns the returned associative array to a variable, but I never know what the size of the database result is or how much memory it will take once the database request is made. This means that my software can fail simply because the memory limit is exceeded, and I want to avoid that somehow. One way is obviously to make smaller database requests, but what if that is not possible, or what if I do not know the size of the data that is returned from the database? Is it possible to 'catch' situations where memory use is exceeded in PHP? Something like this:

        $requestOk = memory_test(function() {
            return doSomething();
        });
        if ($requestOk) {
            // Memory seems fine
            // $requestOk now has the value from the memory_test() function
        } else {
            // Function would have exceeded memory
        }

    I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because it is a fatal error. Any help would be appreciated!

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be more strict (require information about what is guarded by a monitor), or more loose and restrict potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of strict and loose was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
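
    A minimal sketch of the visibility rule the question is about, added for illustration and not part of the original post: with a plain field there is no happens-before edge between the writer and the reader, so the spin loop below is allowed (though not required) to run forever; declaring the field volatile, or guarding every access with the same monitor, restores the guarantee. Whether it actually hangs on a given machine depends on the JIT and the hardware, which is exactly the JMM's point: without the edge, it is not guaranteed either way.

        public class VisibilityDemo {
            // static boolean stop = false;        // plain field: the update may never become visible
            static volatile boolean stop = false;  // volatile: the writer's update must become visible

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(() -> {
                    while (!stop) {
                        // busy-wait; with a plain field the JIT may hoist the read out of the loop
                    }
                    System.out.println("reader saw the update");
                });
                reader.start();
                Thread.sleep(100);
                stop = true;   // without volatile/synchronized, this write may never be seen
                reader.join();
            }
        }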

    Read the article

  • Retain count problem iphone sdk

    - by neha
    Hi all, I'm facing a memory leak problem which goes like this:

    - I'm allocating an object of class A in class B. // RETAIN COUNT OF CLASS A OBJECT BECOMES 1
    - I'm placing the object in an NSMutableArray. // RETAIN COUNT OF CLASS A OBJECT BECOMES 2
    - In another class C, I'm grabbing this NSMutableArray, fetching all the elements of that array into a local NSMutableArray, and releasing the first array from class B. // RETAIN COUNT OF CLASS A OBJECTS IN LOCAL ARRAY BECOMES 1
    - Now in class C, I'm creating an object of class A and fetching the elements from the local NSMutableArray. // RETAIN COUNT OF NEW CLASS A OBJECT IN LOCAL ARRAY BECOMES 2 [ALLOCATION + FETCHED OBJECT WITH RETAIN COUNT 1]

    My question is: I want to retain this array, which I'm displaying in a table view, and release it after new elements are filled into the array. I'm doing this in class B, so before adding new elements, I'm removing all the elements and releasing this array in class B. And in class C I'm releasing the object of class A in dealloc. But Instruments - Leaks shows me a leak for this class A object in class C. Can anybody please tell me where I'm going wrong? Thanks in advance.

    Read the article

  • What is the exact difference between MEM_RESERVE and MEM_COMMIT states?

    - by pj4533
    As I understand it, MEM_RESERVE is actually 'free' memory, i.e. available to be used by my process, but just hasn't been allocated yet? Or it was previously allocated, but has since been freed? Specifically, see in my !address output below how I am nearly out of virtual address space (99900 KB free, 2307872 KB as MEM_PRIVATE), but the state summary shows that 44.75% of that is actually MEM_RESERVE. Does that mean it is actually free, in my process...but maybe fragmented?

        0:000> !address -summary
        --------- PEB a8bd8000 not found ----
        -------------------- Usage SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
           259af000 (  616124) : 22.29%    23.12%    : RegionUsageIsVAD
            618f000 (   99900) : 03.61%    00.00%    : RegionUsageFree
           13e22000 (  325768) : 11.78%    12.22%    : RegionUsageImage
           42c04000 ( 1093648) : 39.56%    41.04%    : RegionUsageStack
             42d000 (    4276) : 00.15%    00.16%    : RegionUsageTeb
           2625d000 (  625012) : 22.61%    23.45%    : RegionUsageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePeb
               1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
               1000 (       4) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
            Tot: a8bf0000 (2764736 KB) Busy: a2a61000 (2664836 KB)
        -------------------- Type SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
            618f000 (   99900) : 03.61%   : <free>
           13e22000 (  325768) : 11.78%   : MEM_IMAGE
            1e77000 (   31196) : 01.13%   : MEM_MAPPED
           8cdc8000 ( 2307872) : 83.48%   : MEM_PRIVATE
        -------------------- State SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
           57235000 ( 1427668) : 51.64%   : MEM_COMMIT
            618f000 (   99900) : 03.61%   : MEM_FREE
           4b82c000 ( 1237168) : 44.75%   : MEM_RESERVE
        Largest free region: Base 7e4a1000 - Size 000ff000 (1020 KB)

    Read the article
