Search Results

Search found 12476 results on 500 pages for 'memory leak detector'.

Page 54/500 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • SQL Monitor Custom Metric: Out of memory errors

    The number of out-of-memory errors that have occurred within a rolling five-minute window. If you just want to keep an eye out for memory errors, you can watch the ring buffers, where the out-of-memory error alert is registered. Get alerts within 15 seconds of SQL Server issues: SQL Monitor checks performance data every 15 seconds, so you can fix issues before your users even notice them. Start monitoring with a free trial.

    Read the article

  • How to Increase Memory Allocated to IIS .NET Application?

    - by Mark Hansen
    We are using Windows 2008 R2 and IIS 7 running on Amazon EC2. IIS is running a single .NET application written in C#. We are having performance issues and I want to give the application more memory, but I cannot figure out how to do it. How do I control the amount of memory that the CLR gets? I'm a total newbie with IIS, .NET and the CLR. If I were working with Java, I would just use the -Xmx flag to increase the memory available to the JVM (e.g., -Xmx3000m for 3GB). But, I cannot seem to figure out how to do this in the Windows world.

    Read the article

  • Reading the memory of an N64

    - by toazron1
    I'm looking for a way to read the memory of an N64, while the game is running, in real time. I have a C# program which hooks into the memory of a running emulator and tracks SSB64 stats. I want to do the same thing with the physical N64. Currently it is possible to read the memory with a GameShark Pro; however, it's extremely slow, buggy, and not practical for what I am trying to accomplish. Would it be possible to tap into the GameShark, or the N64 directly, to access the memory in real time? Thanks!

    Read the article

  • How to keep memory consumption below 500 MB, or fewer than 25 background processes, on a netbook?

    - by overmann
    I bought a netbook yesterday (I'm loving it), but I will never understand why there need to be so many processes running in the background. I worry about other users who have no idea about this and keep using their computers with occasional choppiness because 70 background processes are occupying most of the memory. I'd like to keep my memory consumption below 500 MB (I have 1 GB). Is this possible? What are your ideas for making this work? I always run Microsoft Security Essentials at startup with real-time protection; how many features can I disable to reach my goal memory usage?

    Read the article

  • The Appalling Reaction to the Apple iPhone Leak

    ABC News: "The contempt that Apple (and Steve Jobs in particular) holds toward the media -- and its willingness to manipulate the press for its own ends -- should have produced a media backlash. There should be inside-Apple scoops in the press every week as intrepid reporters go over, under and around every arbitrary barrier Apple puts in front of them."

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1 GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and remain backward compatible with plugins, there were limits to how far we could go with this approach.

    Memory optimizations almost always come with a related cost; in this case it's the additional I/O that has to be performed to load data on request. On a small site with frequent traffic this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for Jobs and the other for Builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default: 60)
    Seconds from last access (which could be a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.

    hudson.jobs.cache.initial_capacity (default: 1024)
    Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the Builds cache instead.

    hudson.jobs.cache.max_entries (default: 1024)
    Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this. First verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations not all jobs are accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default: 60)
    Same as the equivalent Job cache setting, applied to Builds.

    hudson.job.builds.cache.initial_capacity (default: 512)
    Same as the equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more of them, you might consider bumping this up.

    hudson.job.builds.cache.max_entries (default: 10240)
    The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

         num     #instances   #bytes  class name
         523:            28      896  hudson.model.RunMap$LazyRunValue$Key
        1200:             3       96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored; they are just the sizes of the keys, not the actual sizes of the objects they point to. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread across all 3 jobs. Over time, on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall all the way to zero. Access of a job or a build by a cron thread resets the eviction timer.
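
    Since the jmap output is easy to filter, a small watcher can poll it periodically. The sketch below is not part of Hudson; it is a minimal illustration (the class name CacheWatch is made up for this example) that assumes the JDK's jmap is on the PATH and simply keeps the histogram lines for the two lazy-key classes shown above.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Illustrative cache watcher: run jmap against the Hudson JVM and print
        // only the job-cache and build-cache key classes from the histogram.
        public class CacheWatch {
            public static void main(String[] args) throws Exception {
                String pid = args[0];  // PID of the Hudson JVM
                Process p = new ProcessBuilder("jmap", "-histo:live", pid).start();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        if (line.contains("hudson.model.LazyTopLevelItem$Key")
                                || line.contains("hudson.model.RunMap$LazyRunValue$Key")) {
                            System.out.println(line.trim());
                        }
                    }
                }
                p.waitFor();
            }
        }

    Running this from cron (or a monitoring agent) gives a rough time series of cache occupancy without attaching a profiler.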

    Read the article

  • Disposing of ContentManager increases memory usage

    - by Havsmonstret
    I'm trying to wrap my head around how memory management works in XNA 4.0. I've created a screen management test, and when I close a screen, the ContentManager created by that screen is unloaded. I have used ANTS Memory Manager to look at how the memory usage changes when I do this, and it gives me some results that make me scratch my head. The game starts by loading two textures (435 kB and 48.3 kB), which puts the usage at about 61 MB. Then, when I delete the screen and run Unload on the ContentManager, the memory usage drops to 56.5 MB but immediately afterwards goes up to 64.8 MB. Am I doing something wrong, or is this usual for XNA games? Do I have to dispose of everything the ContentManager loads separately, or do I need to do something more to the ContentManager? Thanks in advance!

    Read the article

  • Can an OOM be caused by not finding enough contiguous memory?

    - by raticulin
    I start some Java code with -Xmx1024m, and at some point I get an hprof dump due to an OOM. The hprof shows just 320 MB, and gives me a stack trace:

        at java.util.Arrays.copyOfRange([CII)[C (Arrays.java:3209)
        at java.lang.String.<init>([CII)V (String.java:215)
        at java.lang.StringBuilder.toString()Ljava/lang/String; (StringBuilder.java:430)
        ...

    This comes from a large string I am copying. I remember reading somewhere (I cannot find where) that what happens in these cases is: the process still has not consumed 1 GB of memory and the heap is well below 1 GB, but it needs some amount of memory, and for copyOfRange() it has to be contiguous memory; so even though it is not over the limit yet, it cannot find a large enough contiguous piece of memory on the host, and it fails with an OOM. I have tried to look for documentation on this (copyOfRange() needing a block of contiguous memory), but could not find any. The other possible culprit would be not enough permgen memory. Can someone confirm or refute the contiguous-memory hypothesis? Any pointer to some documentation would help too.
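
    A minimal sketch (not the asker's code) of the pattern the stack trace points at: StringBuilder.toString() copies the builder's backing array via Arrays.copyOfRange into one new char[], so for a moment both copies are live, and the new array has to fit in a single contiguous block of the Java heap.

        // Illustrative only: building a large string roughly doubles the live
        // character data at the toString() call, which can trigger an OOM well
        // below -Xmx if the heap cannot supply one contiguous array that big.
        public class BigStringCopy {
            public static void main(String[] args) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 50_000_000; i++) {
                    sb.append('x');           // backing char[] grows by doubling
                }
                String s = sb.toString();     // copyOfRange of ~100 MB of chars
                System.out.println(s.length());
            }
        }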

    Read the article

  • How to force the operating system to give Java more memory?

    - by Denny
    Hello, I've got a problem with Java jar files and memory. I use NetBeans 6.7 to develop an application, and this application needs more memory to run because it converts other files. Whenever this application converts a 6-10 MB file, it crashes. So I set the NetBeans VM options to -Xms32m -Xmx256m, and the application can convert 6-10 MB files with no problem. I Clean and Build the project so it makes a jar file of my application. I run the jar on my computer and use jconsole to monitor the memory. The maximum memory available to the application shows as 256 MB. But whenever I move it to some other computers, jconsole shows 65-66 MB, and the application crashes when converting 6-10 MB files. So I need to use the command prompt, java -jar -Xmx256m myjar.jar, to execute the jar with the maximum memory. Why can this happen: on my computer the maximum memory shows 256 MB, but on another computer 65-66 MB? Can I force another computer to give extra maximum memory to my application? Thank you for your answer. I'm sorry for my inadequate English; if you find my question hard to understand, please let me know. Best regards, Denny. PS: FYI, the computer I used to develop the application has 2 GB of RAM; the other computers I tested on have 1-2 GB of RAM.
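
    The difference between machines usually comes down to the default maximum heap the JVM picks when no -Xmx is given (often around 64 MB on older 32-bit client JVMs), so launching the jar directly gets the default while the NetBeans run configuration passes -Xmx256m. A quick, hedged way to see what a given machine grants is a sketch like this (HeapCheck is a hypothetical helper, not part of the asker's application):

        // Print the maximum heap this JVM was actually given.
        public class HeapCheck {
            public static void main(String[] args) {
                long maxBytes = Runtime.getRuntime().maxMemory();
                System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
            }
        }

    To avoid asking users to type the -Xmx flag themselves, the usual workaround is a small launcher script or shortcut that runs java -Xmx256m -jar myjar.jar.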

    Read the article

  • Can you catch a memory-limit-exceeded error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that gets rows from the database and assigns the returned associative array to a variable, but I never know what the size of the database result is, or how much memory it will take once the database request is made. This means that my software can fail simply because memory is exceeded, and I want to avoid that somehow. One way, obviously, is to make smaller database requests, but what if this is not possible, or what if I do not know the size of the data that is returned from the database? Is it possible to 'catch' situations where memory use is exceeded in PHP? Something like this:

        $requestOk = memory_test(function(){
            return doSomething();
        });
        if($requestOk){
            // Memory seems fine
            // $requestOk now has the value from the memory_test() function
        } else {
            // Function would have exceeded memory
        }

    I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because it is a fatal error. Any help would be appreciated!

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be either more strict (require information about what is guarded by a monitor) or more loose, and restrict potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of "strict" and "loose" was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
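
    For readers skimming this, a minimal sketch of the visibility rule being discussed (an illustration, not the JSR-133 text itself): a write is only guaranteed to become visible to another thread through a happens-before edge, such as releasing and later acquiring the same monitor (or reading a volatile field).

        class VisibilityDemo {
            private final Object monitor = new Object();
            private boolean plainFlag = false;    // plain field: no visibility guarantee
            private boolean guardedFlag = false;  // visibility guaranteed via the monitor

            void writer() {
                plainFlag = true;                 // may never become visible to reader()
                synchronized (monitor) {
                    guardedFlag = true;           // happens-before any later acquire of monitor
                }
            }

            void reader() {
                synchronized (monitor) {
                    if (guardedFlag) {            // seen if writer() released the monitor first
                        System.out.println("guarded write observed");
                    }
                }
                if (plainFlag) {                  // data race: may or may not be observed
                    System.out.println("plain write observed (not guaranteed)");
                }
            }
        }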

    Read the article

  • What is the exact difference between MEM_RESERVE and MEM_COMMIT states?

    - by pj4533
    As I understand it, MEM_RESERVE is actually 'free' memory, i.e. available to be used by my process, but just hasn't been allocated yet? Or it was previously allocated, but has since been freed? Specifically, see in my !address output below how I am nearly out of virtual address space (99900 KB free, 2307872 KB as MEM_PRIVATE). But the State summary shows that 44.75% is actually MEM_RESERVE. Does that mean it is actually free in my process... but maybe fragmented?

        0:000> !address -summary
        --------- PEB a8bd8000 not found ----
        -------------------- Usage SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
           259af000 (  616124) : 22.29%    23.12%    : RegionUsageIsVAD
            618f000 (   99900) : 03.61%    00.00%    : RegionUsageFree
           13e22000 (  325768) : 11.78%    12.22%    : RegionUsageImage
           42c04000 ( 1093648) : 39.56%    41.04%    : RegionUsageStack
             42d000 (    4276) : 00.15%    00.16%    : RegionUsageTeb
           2625d000 (  625012) : 22.61%    23.45%    : RegionUsageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePeb
               1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
               1000 (       4) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
            Tot: a8bf0000 (2764736 KB) Busy: a2a61000 (2664836 KB)
        -------------------- Type SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
            618f000 (   99900) : 03.61%   : <free>
           13e22000 (  325768) : 11.78%   : MEM_IMAGE
            1e77000 (   31196) : 01.13%   : MEM_MAPPED
           8cdc8000 ( 2307872) : 83.48%   : MEM_PRIVATE
        -------------------- State SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
           57235000 ( 1427668) : 51.64%   : MEM_COMMIT
            618f000 (   99900) : 03.61%   : MEM_FREE
           4b82c000 ( 1237168) : 44.75%   : MEM_RESERVE
        Largest free region: Base 7e4a1000 - Size 000ff000 (1020 KB)

    Read the article

  • C++ Dynamic Allocation Mismatch: Is this problematic?

    - by acanaday
    I have been assigned to work on some legacy C++ code in MFC. One of the things I am finding all over the place are allocations like the following: struct Point { float x,y,z; }; ... void someFunc( void ) { int numPoints = ...; Point* pArray = (Point*)new BYTE[ numPoints * sizeof(Point) ]; ... //do some stuff with points ... delete [] pArray; } I realize that this code is atrociously wrong on so many levels (C-style cast, using new like malloc, confusing, etc). I also realize that if Point had defined a constructor it would not be called and weird things would happen at delete [] if a destructor had been defined. Question: I am in the process of fixing these occurrences wherever they appear as a matter of course. However, I have never seen anything like this before and it has got me wondering. Does this code have the potential to cause memory leaks/corruption as it stands currently (no constructor/destructor, but with pointer type mismatch) or is it safe as long as the array just contains structs/primitive types?

    Read the article

  • pthread_create followed by pthread_detach still results in possibly lost error in Valgrind.

    - by alesplin
    I'm having a problem with Valgrind telling me I have some memory possible lost: ==23205== 544 bytes in 2 blocks are possibly lost in loss record 156 of 265 ==23205== at 0x6022879: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) ==23205== by 0x540E209: allocate_dtv (in /lib/ld-2.12.1.so) ==23205== by 0x540E91D: _dl_allocate_tls (in /lib/ld-2.12.1.so) ==23205== by 0x623068D: pthread_create@@GLIBC_2.2.5 (in /lib/libpthread-2.12.1.so) ==23205== by 0x758D66: MTPCreateThreadPool (MTP.c:290) ==23205== by 0x405787: main (MServer.c:317) The code that creates these threads (MTPCreateThreadPool) basically gets an index into a block of waiting pthread_t slots, and creates a thread with that. TI becomes a pointer to a struct that has a thread index and a pthread_t. (simplified/sanitized): for (tindex = 0; tindex < NumThreads; tindex++) { int rc; TI = &TP->ThreadInfo[tindex]; TI->ThreadID = tindex; rc = pthread_create(&TI->ThreadHandle,NULL,MTPHandleRequestsLoop,TI); /* check for non-success that I've omitted */ pthread_detach(&TI->ThreadHandle); } Then we have a function MTPDestroyThreadPool that loops through all the threads we created and cancels them (since the MTPHandleRequestsLoop doesn't exit). for (tindex = 0; tindex < NumThreads; tindex++) { pthread_cancel(TP->ThreadInfo[tindex].ThreadHandle); } I've read elsewhere (including other questions here on SO) that detaching a thread explicitly would prevent this possibly lost error, but it clearly isn't. Any thoughts?

    Read the article

  • Instance of method leaking, iPhone

    - by Wolfert
    The following method shows up as leaking while performing a memory-leaks test with Instruments: - (NSDictionary*) initSigTrkLstWithNiv:(int)pm_SigTrkNiv SigTrkSig:(int)pm_SigTrkSig SigResIdt:(int)pm_SigResIdt SigResVal:(int)pm_SigResVal { NSArray *objectArray; NSArray *keyArray; if (self = [super init]) { self.SigTrkNiv = [NSNumber numberWithInt:pm_SigTrkNiv]; self.SigTrkSig = [NSNumber numberWithInt:pm_SigTrkSig]; self.SigResIdt = [NSNumber numberWithInt:pm_SigResIdt]; self.SigResVal = [NSNumber numberWithInt:pm_SigResVal]; objectArray = [NSArray arrayWithObjects:SigTrkNiv,SigTrkSig,SigResIdt,SigResVal, nil]; keyArray = [NSArray arrayWithObjects:@"SigTrkNiv", @"SigTrkSig", @"SigResIdt", @"SigResVal", nil]; self = [NSDictionary dictionaryWithObjects:objectArray forKeys:keyArray]; } return self; } code that invokes the instance: NSDictionary *lv_SigTrkLst = [[SigTrkLst alloc]initSigTrkLstWithNiv:[[tempDict objectForKey:@"SigTrkNiv"] intValue] SigTrkSig:[[tempDict objectForKey:@"SigTrkSig"] intValue] SigResIdt:[[tempDict objectForKey:@"SigResIdt"] intValue] SigResVal:[[tempDict objectForKey:@"SigResVal"] intValue]]; [[QBDataContainer sharedDataContainer].SigTrkLstArray addObject:lv_SigTrkLst]; [lv_SigTrkLst release]; Instruments informs that 'SigTrkLst' is leaking. Even though I have released the instance? (I know that adding it to the array increments the retainCount by 1 but releasing it twice removes it from the array?)

    Read the article

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have an ASP.NET/SQL Server 2005 website running on a production server (Win2003, quad core, 4 GB). The site normally runs smoothly, but after 2-3 weeks I notice slow performance (specifically on one particular page). I also notice that the SQL Server process is using about 2 GB of RAM. So I restart the service; the site runs fast again and the process drops to 300-400 MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes so much space and degrades performance? What can I do to avoid this? I'm trying to avoid restarting SQL Server every time this happens. Thank you!

    Read the article

  • JavaScript new keyword and memory management

    - by Whyamistilltyping
    Coming from C++, it is ingrained in my mind that every time I call new I call delete. In JavaScript I find myself calling new occasionally in my code, hoping that the garbage collection in the browser will take care of the mess for me. I don't like this - is there a 'delete' method in JavaScript, and is the way I use it different from C++? Thanks.

    Read the article

  • Memory management in Objective-C

    - by prathumca
    I have this code in one of my classes: - (void) processArray { NSMutableArray* array = [self getArray]; . . . [array release]; array = nil; } - (NSMutableArray*) getArray { //NO 1: NSMutableArray* array = [[NSMutableArray alloc]init]; //NO 2: NSMutableArray* array = [NSMutableArray array]; . . . return array; } NO 1: I create an array and return it. In the processArray method I release it. NO 2: I get an array by simply calling array. As I'm not owner of this, I don't need to release it in the processArray method. Which is the best alternative, NO 1 or NO 2? Or is there a better solution for this?

    Read the article

  • CUDA: Memory copy to GPU 1 is slower in multi-GPU

    - by zenna
    My company has a setup of two GTX 295s, so a total of 4 GPUs per server, and we have several servers. GPU 1 specifically was slow in comparison to GPUs 0, 2 and 3, so I wrote a little speed test to help find the cause of the problem.

        //#include <stdio.h>
        //#include <stdlib.h>
        //#include <cuda_runtime.h>
        #include <iostream>
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <cutil.h>

        __global__ void test_kernel(float *d_data)
        {
            int tid = blockDim.x*blockIdx.x + threadIdx.x;
            for (int i=0;i<10000;++i) {
                d_data[tid] = float(i*2.2);
                d_data[tid] += 3.3;
            }
        }

        int main(int argc, char* argv[])
        {
            int deviceCount;
            cudaGetDeviceCount(&deviceCount);
            int device = 0; //SELECT GPU HERE
            cudaSetDevice(device);

            cudaEvent_t start, stop;
            unsigned int num_vals = 200000000;
            float *h_data = new float[num_vals];
            for (int i=0;i<num_vals;++i) {
                h_data[i] = float(i);
            }

            float *d_data = NULL;
            float malloc_timer;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord( start, 0 );
            cudaMemcpy(d_data, h_data, sizeof(float)*num_vals,cudaMemcpyHostToDevice);
            cudaMalloc((void**)&d_data, sizeof(float)*num_vals);
            cudaEventRecord( stop, 0 );
            cudaEventSynchronize( stop );
            cudaEventElapsedTime( &malloc_timer, start, stop );
            cudaEventDestroy( start );
            cudaEventDestroy( stop );

            float mem_timer;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord( start, 0 );
            cudaMemcpy(d_data, h_data, sizeof(float)*num_vals,cudaMemcpyHostToDevice);
            cudaEventRecord( stop, 0 );
            cudaEventSynchronize( stop );
            cudaEventElapsedTime( &mem_timer, start, stop );
            cudaEventDestroy( start );
            cudaEventDestroy( stop );

            float kernel_timer;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord( start, 0 );
            test_kernel<<<1000,256>>>(d_data);
            cudaEventRecord( stop, 0 );
            cudaEventSynchronize( stop );
            cudaEventElapsedTime( &kernel_timer, start, stop );
            cudaEventDestroy( start );
            cudaEventDestroy( stop );

            printf("cudaMalloc took %f ms\n",malloc_timer);
            printf("Copy to the GPU took %f ms\n",mem_timer);
            printf("Test Kernel took %f ms\n",kernel_timer);

            cudaMemcpy(h_data,d_data, sizeof(float)*num_vals,cudaMemcpyDeviceToHost);
            delete[] h_data;
            return 0;
        }

    The results are:

        GPU0
        cudaMalloc took 0.908640 ms
        Copy to the GPU took 296.058777 ms
        Test Kernel took 326.721283 ms

        GPU1
        cudaMalloc took 0.913568 ms
        Copy to the GPU took 663.182251 ms
        Test Kernel took 326.710785 ms

        GPU2
        cudaMalloc took 0.925600 ms
        Copy to the GPU took 296.915039 ms
        Test Kernel took 327.127930 ms

        GPU3
        cudaMalloc took 0.920416 ms
        Copy to the GPU took 296.968384 ms
        Test Kernel took 327.038696 ms

    As you can see, the cudaMemcpy to the GPU takes well over double the time on GPU 1. This is consistent across all our servers; it is always GPU 1 that is slow. Any ideas why this may be? All servers are running Windows XP.

    Read the article

  • Malloc corrupting already malloc'd memory in C

    - by Kyte
    I'm currently helping a friend debug a program of his, which includes linked lists. His list structure is pretty simple:

        typedef struct nodo{
            int cantUnos;
            char* numBin;
            struct nodo* sig;
        }Nodo;

    We've got the following code snippet:

        void insNodo(Nodo** lista, char* auxBin, int auxCantUnos){
            printf("*******Insertando\n");
            int i;
                if (*lista) printf("DecInt*%p->%p\n", *lista, (*lista)->sig);
            Nodo* insert = (Nodo*)malloc(sizeof(Nodo*));
                if (*lista) printf("Malloc*%p->%p\n", *lista, (*lista)->sig);
            insert->cantUnos = auxCantUnos;
            insert->numBin = (char*)malloc(strlen(auxBin)*sizeof(char));
            for(i=0 ; i<strlen(auxBin) ; i++) insert->numBin[i] = auxBin[i];
            insert->numBin[i] = '\0';
            insert->sig = NULL;
            Nodo* aux;
            [etc]

    (The lines with extra indentation were my addition for debug purposes.) This yields me the following:

        *******Insertando
        DecInt*00341098->00000000
        Malloc*00341098->2832B6EE

    (*lista)->sig is previously and deliberately set to NULL, which checks out until here, and I fixed a potential buffer overflow (he'd forgotten to copy the NULL terminator into insert->numBin). I can't think of a single reason why that would happen, and I've got no idea what else I should provide as further info. (Compiling on the latest stable MinGW under fully-patched Windows 7; my friend is using MinGW under Windows XP. On my machine, at least, it only happens when GDB's not attached.) Any ideas? Suggestions? Possible exorcism techniques? (The current hack is copying the sig pointer to a temp variable and restoring it after malloc. It breaks anyway. Turns out the 2nd malloc corrupts it too. Interestingly enough, it resets sig to the exact same value as the first one.)

    Read the article

  • Objective-C selector memory management (does this leak memory)?

    - by James Jones
    - (IBAction) someButtonCall { if(!someCondition) { someButtonCallBack = @selector(someButtonCall); [self presentModalViewController:someController animated:YES]; } else ... } //Called from someController - (void) someControllerFinished:(BOOL) ok { [self dismissModalViewControllerAnimated:YES]; if(ok) [self performSelector:someButtonCallBack]; else ... } I'm wondering if the user keeps getting into the !someCondition clause if the selector is leaked by assigning a new selector each time (the code above is hypothetical and not what i'm doing). Any help is appreciated. Thanks, James Jones

    Read the article

  • JVM memory initialization error after Windows update

    - by Pier Luigi
    Hi all, I have three Windows Server 2003 machines with 2 GB RAM:

        Server1: Tomcat 5.5.25, JVM version Sun 1.6.0_11-b03
        Server2: Tomcat 5.5.25, JVM version Sun 1.6.0_14-b08
        Server3: Tomcat 6.0.18, JVM version Sun 1.6.0_14-b08

    For the three servers the JVM parameters are:

        -XX:MaxPermSize=256m
        -Dcatalina.base=C:\Apache Group\apache-tomcat-5.5.25
        -Dcatalina.home=C:\Apache Group\apache-tomcat-5.5.25
        -Djava.endorsed.dirs=C:\Apache Group\apache-tomcat-5.5.25\common\endorsed
        -Djava.io.tmpdir=C:\Apache Group\apache-tomcat-5.5.25\temp
        vfprintf
        -Xms512m
        -Xmx1024m

    For some months everything worked fine. Last Friday we installed some Windows updates. After the reboot Tomcat doesn't start anymore, with the error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    We reduced the parameter -Xmx1024m to -Xmx768m and now Tomcat starts, but we need a greater max heap size. What happened to our servers? Thanks in advance.

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >