Search Results

Search found 258446 results on 10338 pages for 'stack memory'.

Page 92 of 10338

  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file via the CreateFileMapping() API), and other 32-bit client processes then use this shared memory to communicate with each other. I am planning to port the host application to a 64-bit platform, and once it is ready I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original code written for the 32-bit host uses "size_t" almost everywhere; since size_t grows from 4 bytes to 8 bytes in the move from x32 to x64, I am looking to replace it. I intend to replace "size_t" with "unsigned long long", so that its size is the same on 32-bit and 64-bit. Can you suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I would guess yes.

    Research done so far: a) found a very useful article, "20 issues in porting from 32 bit to 64 bit" (www.viva64.com); b) there is no way to restrict or change "size_t" to 4 bytes on an x64 platform using compiler flags or any hooks, since it is a typedef.
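    A common way to pin down a cross-bitness layout (a suggestion, not part of the original post) is to declare every shared field with the fixed-width types from <cstdint> and to pass offsets instead of pointers, since the mapping can land at a different base address in each process. A minimal C++ sketch, with the struct and field names purely illustrative:

        #include <cstdint>

        // Layout shared by the 32-bit and 64-bit processes. Fixed-width
        // types keep sizeof() and field offsets identical on both sides.
        #pragma pack(push, 4)              // pin the packing on both builds
        struct SharedHeader {
            std::uint32_t version;         // protocol version
            std::uint32_t writerPid;       // PID of the writing process
            std::uint64_t payloadSize;     // 64-bit-safe stand-in for size_t
            std::uint64_t payloadOffset;   // offset into the mapping, not a pointer
        };
        #pragma pack(pop)

        // Catch accidental layout drift at compile time (C++11).
        static_assert(sizeof(SharedHeader) == 24, "SharedHeader layout changed");

    On the performance question: 64-bit arithmetic on a 32-bit build is emulated with instruction pairs, so some overhead is real, but it usually only matters in hot loops.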

    Read the article

  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app runs fine on the simulator, but when I try to run it on the iPhone it crashes (with or without the debugger attached) with signal "0". I am using Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments, using the Memory tracer from the Library, and when the app died the final memory consumed was about 60 MB real and 90+ MB virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any more information, please say so and I will try to provide it.

    I am creating thousands of textures at load time, which is why the memory consumption is so high, and I really can't do anything about reducing the number of pictures being loaded. I was previously using plain UIImage, but it gave me really low frame rates, and I read on this site that I should use OpenGL ES for higher frame rates.

    A sub-question as well: is there any way to avoid using UIImage to load the PNG file before handing it to the provided Texture2D class for OpenGL ES drawing? Is there some function in OpenGL ES that will create a texture straight from a PNG file?
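    For what it's worth, OpenGL ES itself has no image-decoding call; a texture is always created from raw pixels that something else decoded, so some loader (UIImage or otherwise) is unavoidable. A minimal sketch of just the upload step, assuming an already-decoded RGBA8888 buffer (the function and variable names are illustrative):

        #include <OpenGLES/ES1/gl.h>

        // Upload one already-decoded RGBA8888 buffer as a GL texture.
        // 'pixels', 'width' and 'height' come from your PNG decoder of choice.
        GLuint makeTexture(const void *pixels, int width, int height) {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            return tex;
        }

    With thousands of textures, the width x height x 4 bytes that each upload occupies is what adds up; texture atlases or PVRTC-compressed textures are the usual ways to cut it down.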

    Read the article

  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm that transforms any piece of information or data into a specific, appropriate and reasonable memory location in any data structure I will be devising. To give you an idea: I have a JPanel instance, and I created another Container-type instance of some subtype (note this is in Java, because I love this language), then I collected those instances into a data structure that is not specific to those instances but applicable to any type of object. Now, my procedure for fetching the data again is to extract the features the object shares with every object in that data structure and transform them into an integer memory location (as specifically as possible), or into whatever type of data suits this transformation. I can then access that memory location directly, without sorting or O(n) searching (which I know works perfectly well, but I wanted to do it my own way XD). The data structure can be of any type: binary tree, linked list, array, set, and the like XD. What is important is that I don't need successive comparison and analysis of data just to locate information in big structures.

    To give you a technical idea: I have an array DS that contains a JLabel instance with the specific name "HelloWorld", among a multitude of other object types. This JLabel sits at index [124324], and any searching algorithm would be conceivably slow in getting to that location (please disregard the efficiency of the data structure used; I just want to explain the concept XD). Now I want to map "HelloWorld" to 124324 with a conceptual function applicable to all data types, so that I can do a direct lookup, DS[extractLocation("HelloWorld")], to get that JLabel instance. I know this may sound crazy, but I want to test my concept of a non-sorting, feature-extracting search algorithm for any data structure. My main problem is how to transform the information to be stored into the memory location where it was stored.
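    What the post describes is, in standard terms, a hash function: a deterministic mapping from an object's identifying feature (here its name) to a slot index, giving direct lookup with no scanning. A minimal sketch of the idea in C++ (names illustrative; a real table must also handle two keys colliding on one slot, which this ignores):

        #include <functional>
        #include <iostream>
        #include <string>
        #include <vector>

        // Map a key straight to a slot index: no scanning, no sorting.
        std::size_t extractLocation(const std::string &key, std::size_t tableSize) {
            return std::hash<std::string>{}(key) % tableSize;
        }

        int main() {
            std::vector<std::string> ds(1024);               // the "DS" array
            std::size_t slot = extractLocation("HelloWorld", ds.size());
            ds[slot] = "the JLabel would live here";         // store by computed index
            std::cout << ds[extractLocation("HelloWorld", ds.size())] << "\n";
        }

    Collision handling is what separates this sketch from Java's own HashMap, which packages exactly this mechanism.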

    Read the article

  • Servlet File Upload Memory Consumption

    - by Scott
    Hi, I'm using a servlet to do a multi-file upload (with Apache Commons FileUpload helping out). A portion of my code is posted below. My problem is that if I upload several files at once, the memory consumption of the app server jumps rather drastically. This would probably be OK if it only lasted until the upload finished, but the app server seems to hang on to the memory and never return it to the OS. I'm worried that when I put this into production I'll end up getting an out-of-memory exception on the server. Any ideas on why this is happening? I'm thinking the server may have started a session and will give the memory back after it expires, but I'm not 100% positive.

        if (ServletFileUpload.isMultipartContent(request)) {
            ServletFileUpload upload = new ServletFileUpload();
            FileItemIterator iter = upload.getItemIterator(request);
            while (iter.hasNext()) {
                FileItemStream license = iter.next();
                if (license.getFieldName().equals("upload_button") || license.getName().equals(""))
                    continue;
                //DataInputStream stream = new DataInputStream(license.openStream());
                InputStream stream = license.openStream();
                ArrayList<Integer> byteArray = new ArrayList<Integer>();
                int tempByte;
                do {
                    tempByte = stream.read();
                    byteArray.add(tempByte);
                } while (tempByte != -1);
                stream.close();
                byteArray.remove(byteArray.size() - 1);
                byte[] bytes = new byte[byteArray.size()];
                int i = 0;
                for (Integer tByte : byteArray) {
                    bytes[i++] = tByte.byteValue();
                }
            }
        }

    Thanks in advance!!

    Read the article

  • Can reducing index length in Javascript associative array save memory

    - by d777
    I am trying to build a large associative array in JavaScript (22,000 elements). Do I need to worry about the length of the property names with regard to memory usage? In other words, which of the following options saves memory, or are they the same in memory consumption?

    Option 1:

        var student = new Array();
        for (i = 0; i < 22000; i++)
            student[i] = {
                "studentName": token[0],
                "studentMarks": token[1],
                "studentDOB": token[2]
            };

    Option 2:

        var student = new Array();
        for (i = 0; i < 22000; i++)
            student[i] = {
                "n": token[0],
                "m": token[1],
                "d": token[2]
            };

    I tried to test this in the Google Chrome DevTools, but the numbers are too inconsistent to base a decision on. My best guess is that because the property names repeat, the browser can optimize memory usage by not repeating them for each student[i], but that is just a guess. Thanks.

    Read the article

  • Bitnami stacks compatible with Ubuntu 11?

    - by pthesis
    Has anyone successfully installed Bitnami native stacks on Ubuntu 11? After changing the .bin file's permissions to make it executable, I get the following errors in the terminal:

        bitnami-dokuwiki-2011-05-25a-0-linux-x64-installer.bin: 1: Syntax error: Unterminated quoted string

    for the DokuWiki stack, and

        bitnami-lampstack-3.0.6-0-linux-x64-installer.bin: 1: Syntax error: Unterminated quoted string

    for the LAMP stack.

    Read the article

  • Why is my CentOS, nginx, PHP, MySQL server using so much memory?

    - by kb2tfa
    I'm not sure where the problem lies. I have: CentOS 6, MySQL, PHP, PHP-FPM, and WordPress (1 site). This is a dedicated server I'm learning on. As soon as I hit the site's URL, memory usage slammed to 125% of a 512 MB server and I had to upgrade to 1 GB, so I'm not sure where the problem lies. I thought that by switching from Apache to nginx I would have more memory free, but I'm still stuck with the problem. Before launching the site, with nginx and mysqld running, the server sat at about 5% memory. I renamed "my-small.cnf" to my.cnf and put it in my /etc folder where the original was, but that seems not to have done it. After looking at my top results, I'm starting to think it may be php-fpm eating my memory, but I'm not sure. Is php-fpm the preferred way, or is there something better to use? I read about possible memory leaks in php-fpm. Here is my php-fpm.conf:

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;

        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.

        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;

        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid

        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log

        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice

        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0

        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0

        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0

        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        ;daemonize = yes

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;

        ; See /etc/php-fpm.d/*.conf

    (end of php-fpm.conf)
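    For context (an editorial note, not the poster's): php-fpm's memory ceiling is roughly the worker count times the per-worker usage seen in top, and the worker count is set in the pool files that the [global] section includes, not in php-fpm.conf itself. An illustrative pool excerpt in the same format, with hypothetical values for a 1 GB box:

        ; Illustrative excerpt of a pool file such as /etc/php-fpm.d/www.conf
        [www]
        ; Hard cap on worker processes, and therefore on total PHP memory:
        ; roughly pm.max_children x the per-worker RSS observed in top.
        pm = dynamic
        pm.max_children = 8
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        ; Recycle each worker after N requests to contain slow leaks.
        pm.max_requests = 500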

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After some reading about JVM memory (here, here, here, and others I forget), I expected the resident set size of my Java process to be roughly equal to the current heap capacity. That's not what the numbers say; it seems to be roughly equal to the maximum heap capacity.

    Resident set size:

        # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
        11507912

        # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};'
        Number of processes = 1
        Memory usage per process = 11237.8 MB
        Total memory usage = 11237.8 MB

    Java heap:

        # jmap -heap 1
        Attaching to process ID 1, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 24.55-b03

        using thread-local object allocation.
        Garbage-First (G1) GC with 18 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 10
           MaxHeapFreeRatio = 20
           MaxHeapSize      = 10737418240 (10240.0MB)
           NewSize          = 1363144 (1.2999954223632812MB)
           MaxNewSize       = 17592186044415 MB
           OldSize          = 5452592 (5.1999969482421875MB)
           NewRatio         = 2
           SurvivorRatio    = 8
           PermSize         = 20971520 (20.0MB)
           MaxPermSize      = 85983232 (82.0MB)
           G1HeapRegionSize = 2097152 (2.0MB)

        Heap Usage:
        G1 Heap:
           regions  = 2560
           capacity = 5368709120 (5120.0MB)
           used     = 1672045416 (1594.586769104004MB)
           free     = 3696663704 (3525.413230895996MB)
           31.144272834062576% used
        G1 Young Generation:
        Eden Space:
           regions  = 627
           capacity = 3279945728 (3128.0MB)
           used     = 1314914304 (1254.0MB)
           free     = 1965031424 (1874.0MB)
           40.089514066496164% used
        Survivor Space:
           regions  = 49
           capacity = 102760448 (98.0MB)
           used     = 102760448 (98.0MB)
           free     = 0 (0.0MB)
           100.0% used
        G1 Old Generation:
           regions  = 147
           capacity = 1986002944 (1894.0MB)
           used     = 252273512 (240.5867691040039MB)
           free     = 1733729432 (1653.413230895996MB)
           12.702574926293766% used
        Perm Generation:
           capacity = 39845888 (38.0MB)
           used     = 38884120 (37.082786560058594MB)
           free     = 961768 (0.9172134399414062MB)
           97.58628042120682% used

        14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap to be able to grow during spikes (to avoid very slow full GCs), but I would like the resident set size to stay as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that?

    Environment: Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running Elasticsearch 1.2.0:

        /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch

    There are actually 5 Elasticsearch nodes, each in a different Docker container, and all have about the same memory usage. Some stats about the index: size 9.71Gi (19.4Gi); docs 3,925,398 (4,052,694).

    Read the article

  • Why might PC3200 ECC ram be incompatible when upgrading?

    - by Zak
    I had some 512 MB PC3200 Kingston memory modules in a server, and I ordered replacement Kingston memory in 1 GB sticks. The vendor said those were out of stock, and I agreed to let them ship me "compatible" Samsung memory. However, after installing it, the motherboard reported errors at POST and wouldn't even boot to a BIOS screen. Both sets of memory were ECC PC3200. Any ideas why that would happen?

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process, for example the working set size and, if it makes sense, the total virtual memory allocation for the session? To turn the question around: we have an application that cannot allocate as much virtual memory when running on a terminal server as it can on a desktop PC (both of which I would expect to give user-mode address space a 2 GB limit), and I was wondering whether there is another limit for processes or users on a terminal server. Perhaps it is even 2 GB per user rather than per process.
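    For reference (an editorial addition, not part of the original question): the general Windows mechanism for capping a process's memory is a job object, so one thing worth checking is whether something on the terminal server wraps user sessions in one. A minimal sketch of imposing such a cap yourself, with the 256 MB figure purely illustrative:

        #include <windows.h>

        int main() {
            // Create a job object and cap per-process committed memory.
            HANDLE job = CreateJobObjectW(NULL, NULL);

            JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
            limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
            limits.ProcessMemoryLimit = 256 * 1024 * 1024;   // 256 MB cap, illustrative
            SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                    &limits, sizeof(limits));

            // Every process later assigned to the job inherits the cap.
            AssignProcessToJobObject(job, GetCurrentProcess());
            return 0;
        }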

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP-stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time; using the site URL, the problematic assets resolve only 20% of the time.

    You can test one of the problematic assets here: http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg. The alias link always resolves correctly: http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg

    Stranger still, if I attempt to access outdated content that definitely does not exist on the server, the live URL returns the content roughly 50% of the time, while the alias link 404s 100% of the time, which is the correct behavior. The error log and PHP error log are clean. A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link, where I am not seeing any 404s:

        10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying the 404 page content:

        zero-cost.jpg  /wp-content/uploads/2014/05  GET  404 Not Found  text/html  Other  15.9 KB  73.2 KB  953 ms  947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below, and can provide other parts of the Apache config if necessary.

    Virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
            ServerName www.mreco.org
            ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
            ErrorLog logs/mr-eco-error_log
            CustomLog logs/mr-eco-access_log common
            <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
                AllowOverride All
                SetOutputFilter DEFLATE
            </Directory>
        </VirtualHost>

    .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

        ;; ANSWER SECTION:
        mreco.org.  60  IN  A  50.18.58.174

    I'm fairly new to systems administration and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer; in this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.

    What I've done so far:

    - Reset WordPress permalinks
    - Disabled WordPress plugins
    - Re-generated the WordPress .htaccess file
    - Swapped the ServerName and ServerAlias directives
    - Cleared the browser cache
    - Confirmed the disk location of the resources
    - Checked the PHP, access, and error logs
    - Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!

    Read the article

  • Why does my 64-bit IIS app pool show 3 gigabytes more virtual memory than private memory?

    - by Brett
    I have an ASP.NET application that I am running on 64-bit IIS 6 on Windows XP x64. When I open the performance counters after one hit on a trivial page, I see Private Bytes of about 88 MB but Virtual Bytes of about 3 GB. When I try the same thing with a very trivial ASP.NET app, I get the same result. We see something similar on Windows Server 2003 in production, where it is an issue because we recycle the app pool when the virtual memory consumed outgrows a limit. Before we make any changes to our recycling settings, we'd like to answer the following questions: why does the app pool grab such a large chunk of virtual memory, and is the amount of virtual-memory headroom the app requests configurable? Thanks! Brett

    Read the article

  • Mobile Intel® GMA 4500MHD boost

    - by Andy Smith
    My machine has a Mobile Intel® GMA 4500MHD integrated graphics chipset and is currently running 64-bit Windows 7 Premium with 3 GB of RAM (1 x 1 GB and 1 x 2 GB). I note that the Mobile Intel® GMA 4500MHD shares physical memory for graphics processing: the total available graphics memory can be up to 1,340 MB with a 32-bit operating system and 3 GB of system memory, or 1,759 MB with a 64-bit operating system and 4 GB of system memory. I am considering investing in a 4 GB stick to replace the 1 GB stick, bringing the total up to 6 GB, mainly for an increase in graphics processing ability. Can anyone tell me what sort of gain (if any, over a 4 GB configuration) I could expect by upgrading to 6 GB?

    Read the article

  • ACT! Professional for Windows: memory leak?

    - by Dan
    I have ACT! Professional for Windows v11.1 with the latest SQL service pack (SP3), and there is an apparent memory leak on the server. After a restart, the ACT! SQL instance (SQLSERVR) consumes almost all the available memory on the server. We have added more memory to the server (it is running under Hyper-V), but the instance continues to consume it all. I have not been able to connect to the SQL Server instance using Management Studio in order to limit the amount of RAM it is allocated. Are there any potential solutions for this, or should I just continue to restart the services?

    Read the article

  • Windows Server 2008: Terminal Services

    - by JohnyD
    I have a Dell R710 with 72 GB of memory running Hyper-V. Within Hyper-V I have a Windows 2008 (32-bit) VM running Terminal Services. How do I allocate memory so that any user who connects to this terminal server (from a thin client) is allocated 2 GB (or whatever amount I choose) of memory? Currently I have provisioned the TS with 2 GB of memory, but it seems this is shared among everyone who connects. Please let me know if there is further information I can provide. Thank you.

    Read the article

  • iPhone: memory gets freed in debug mode but not in release mode

    - by gdr
    I have been testing my iPhone debug build on both the device and the simulator with Activity Monitor, Leaks, and Object Allocations. The code is pretty well optimized, so I decided to test the release build. I went into the project menu, set the target build configuration to Release, added the necessary header paths my app uses to the header search paths, and ran the release build on the device with the above-mentioned instruments. What I noticed is that memory that was freed in the debug build does not get freed in the release version. There is one place in my app where I remove a scroll view with some images, which frees up a significant amount of memory in the debug build, but no memory is freed there in the release version. Does anyone have an idea where I need to start looking? Did I set up my release build wrong?

    Read the article

  • Bind texture with pinned mapped memory in CUDA

    - by sjchoi
    I was trying to bind host memory that was mapped for zero-copy to a texture, but it looks like that isn't possible. Here is a code sample:

        float* a;
        float* d_a;
        cudaSetDeviceFlags(cudaDeviceMapHost);
        cudaHostAlloc((void **)&a, bytes, cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&d_a, (void *)a, 0);

        texture<float, 2, cudaReadModeElementType> tex;
        cudaBindTexture2D(0, &tex, d_a, &channelDesc, width, height, pitch);

    Is it recommended to use pinned memory and just copy it over to device memory that is bound to the texture?
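    A sketch of that copy-based alternative, staging the pinned host buffer into pitched device memory and binding the texture to that instead (same float data assumed; names illustrative; error checking omitted):

        #include <cuda_runtime.h>

        // Texture references live at file scope in the runtime API.
        texture<float, 2, cudaReadModeElementType> tex;

        void uploadAndBind(const float *pinnedHost, size_t width, size_t height) {
            float *d_buf;
            size_t pitch;
            cudaMallocPitch((void **)&d_buf, &pitch, width * sizeof(float), height);

            // The source being pinned is what makes this transfer fast.
            cudaMemcpy2D(d_buf, pitch, pinnedHost, width * sizeof(float),
                         width * sizeof(float), height, cudaMemcpyHostToDevice);

            cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
            cudaBindTexture2D(0, &tex, d_buf, &desc, width, height, pitch);
        }

    The trade-off is one extra host-to-device copy in exchange for reads served from device memory through the texture cache, rather than every fetch crossing the PCIe bus as with zero-copy.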

    Read the article

  • Help, I need to debug my BrowserHelperObject (BHO) (in C++) after an Internet Explorer 8 crash in Release mode

    - by BHOdevelopper
    Hi, here is the situation: I'm developing a Browser Helper Object (BHO) in C++ with Visual Studio 2008, and I learned that memory isn't managed the same way in Debug mode as in Release mode. When I run my BHO in Debug mode, Internet Explorer 8 works just fine, I get no errors at all, and the browser stays alive forever; but as soon as I compile it in Release mode I get no errors, no message, nothing, yet after 5 minutes I can see through Task Manager that the Internet Explorer instances are just eating memory, and then the browser stops responding every time. Please, I really need some hint on how to get feedback on what the error could be. I have heard that this often happens because of memory mismanagement. I need some software that grabs a memory dump or the like when iexplore crashes, to help me find the problem. Any help is appreciated; I'll be checking for responses every single day. Thank you.

    Read the article

  • wanting a good memory + disk caching solution

    - by brofield
    I'm currently storing generated HTML pages in a memcached in-memory cache. This works great; however, I want to increase the storage capacity of the cache beyond available memory. What I would really like is:

    - memcached semantics (i.e. not reliable, just a cache)
    - the memcached API preferred (but not required)
    - a large in-memory first-level cache (MRU)
    - a huge on-disk second-level cache (main)
    - eviction from the on-disk cache at maximum storage, using LRU or LFU
    - a proven implementation

    In searching for a solution I've found the candidates below, but they all miss my marks in some way. Does anyone know of either other options that I haven't considered, or a way to make memcachedb do evictions?

    Already considered:

    - memcachedb: the best fit, but it doesn't do evictions; it is explicitly "not a cache", and I can't see any way to do evictions (either manual or automatic)
    - Tugela Cache: abandoned, no support; I don't want to recommend it to customers
    - nmdb: doesn't use the memcache API; new and unproven; I don't want to recommend it to customers

    Read the article

  • Server Load Check

    - by ntechi
    Is it possible to trace which file, process, or database query is affecting the load on a VPS? I am using CentOS with 512 MB guaranteed memory and 1 GB burst memory, and I am running 3 WordPress sites from it, each with daily traffic of 30-100 visitors. Every 2-3 days I need to restart my VPS because memory usage runs high. I tried running the top command and it shows Apache as the big consumer, but is it possible to check which website is causing the load? Here are my 'top -c' command output results.

    Read the article

  • More memory usage for IIS 6 asp.net 2.0 on webserver 2003

    - by Alan King
    I'm running Web Server 2003 SP2 (x86) with IIS 6 and ASP.NET 2.0. The box is running mostly dynamic ASP.NET pages that connect to a SQL Server 2008 server. At any given time there is over 1 GB of memory available out of the 2 GB in the box. It seems like there should be a way for it to make better use of the free memory. It is using the default machine.config file and the default http.sys settings. I would like to maximize incoming internet connections and database connections. Is there something I can do to make better use of the available memory?

    Read the article

  • How to free memory in try-catch blocks?

    - by Kra
    Hi, I have a simple question, hopefully: how does one free memory that was allocated in a try block when an exception occurs? Consider the following code:

        try {
            char *heap = new char [50];
            //let exception occur here
            delete[] heap;
        }
        catch (...) {
            cout << "Error, leaving function now";
            //delete[] heap; doesn't work of course, heap is unknown to the compiler
            return 1;
        }

    How can I free the memory if the exception occurs after heap was allocated but before delete[] heap is called? Is there a rule not to allocate heap memory in these try..catch blocks? Thanks
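    For contrast (an editorial addition, not part of the original question), the usual C++ answer is to let a scoped owner release the allocation automatically during stack unwinding, so no catch-side cleanup is needed. A minimal sketch using std::vector as the owner:

        #include <iostream>
        #include <stdexcept>
        #include <vector>

        int main() {
            try {
                std::vector<char> heap(50);          // owns the 50 bytes
                throw std::runtime_error("boom");    // exception occurs here
                // no delete needed: heap's destructor runs during unwinding
            }
            catch (...) {
                std::cout << "Error, leaving function now";
                return 1;                            // memory already released
            }
        }

    The same pattern works for any resource: tie its release to a destructor (RAII), and try/catch blocks never need to free anything by hand.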

    Read the article

  • ESXi 5.1 - Unable to register host

    - by deanvz
    I downloaded and successfully installed ESXi 5.1. I am, however, unable to get the licence key I received installed. An error occurred when assigning the specified licence key: "The system Memory is not satisfied with the 32 GB of Maximum memory limit. Current with 80.00 GB of Memory." Is there no way around this? A quick Google search revealed that this is a widespread problem with no real answer or resolution. The only workaround is to remove physical RAM chips, but as this machine is going into production I don't want to do that, since it would mean downtime when I have to reinsert the memory.

    Read the article

  • Memory leak with ContextMenuStrip

    - by Dave
    I'm creating a lot of custom controls and adding them to a FlowLayoutPanel. There is also a ContextMenuStrip, created and populated at design time. Every time a control is added to the panel, its ContextMenuStrip property is assigned to this menu, so that all controls "share" the same menu. But I noticed that when the controls are removed from the panel and disposed of, the memory in use in Task Manager doesn't drop; it rises by around 50 KB every time a control is created and added to the layout panel. I downloaded the trial of .NET Memory Profiler, and it showed there were references to the menu strip hanging around after the controls were disposed. I changed the code to explicitly set the ContextMenuStrip property to null before disposing of each control and, yep, the memory is now released. Why is this? Shouldn't the GC clean up this sort of thing?

    Read the article

  • AS3 try/catch out of memory

    - by StfnoPad
    Hi, I'm loading a few huge images in my Flex/AS3 app, but I can't manage to catch the error when the Flash player runs out of memory. Here is what I was thinking might work (I use ???? because I don't know what to catch):

        try {
            images = new Array(frames);
            for (var i:uint = 0; i < frames; i++) {
                imagesBA[i] = new BitmapData(width, height, false, 0x000000FF);
            }
        }
        catch (error:????) {
            Alert.show("Out of memory!");
        }

    Any idea what ???? can be? Or does anyone know how to catch the case where there is no memory left for a variable?

    Read the article
