Search Results

Search found 17317 results on 693 pages for 'memory upgrade'.


  • mod_rewrite not working after upgrade to 12.10

    - by CrowderSoup
    I'm hoping this is a quick and simple fix and that I just need a fresh set of eyes. However, I'm fearful that it might actually be an error in the latest build of the rewrite module. I have a .htaccess file that turns on the rewrite engine (I've made sure the module is enabled), creates some rewrite conditions, and finally a rewrite rule. Here's my .htaccess file for reference:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC]
        </IfModule>

    Now for the problem: if I go to hostname.com it works fine. If I go to hostname.com/Index it works fine. However, if I go to hostname.com/index it doesn't rewrite the request and I get a 404. I'm not sure what's going on here. I've used a rewrite rule tester and there doesn't appear to be any issue with my rewrite rule itself. Again, this issue didn't manifest until after I upgraded to 12.10, at which point I know that Apache was updated. Any thoughts? Has anyone else here experienced this? I know that two other people besides myself have experienced this here. Thanks in advance for any help you can provide!
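    For debugging on my side, mod_rewrite's logging should still be available, since 12.10 ships Apache 2.2. A sketch of what I can add to the virtual host (these directives go in the server config, not the .htaccess; the log path is arbitrary):

        RewriteLog "/var/log/apache2/rewrite.log"
        RewriteLogLevel 3

    Tailing that log while requesting /index versus /Index should show whether the rule is being matched at all, which would separate a mod_rewrite regression from something else in the request handling.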


  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks,

    We have a linux system (kubuntu 7.10) that runs a number of CORBA Server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory. Swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G bytes. It can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'.

    This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc.

    The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds - i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system.

    BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request.

    Cheers, Brian.
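    The mallopt tuning I have in mind looks roughly like this (a sketch only; the threshold values are guesses, and it assumes glibc's allocator):

        #include <malloc.h>   // mallopt, M_MMAP_THRESHOLD, M_TRIM_THRESHOLD
        #include <cstddef>

        int main()
        {
            // Ask glibc to satisfy large requests with mmap so the pages go
            // straight back to the kernel on free (threshold is a guess).
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
            // Return free heap space above 1MB to the kernel promptly.
            mallopt(M_TRIM_THRESHOLD, 1024 * 1024);

            std::size_t n = 1200u * 1024 * 1024;   // ~1.2G, as in our requests
            char *buf = new char[n];
            // ... process request ...
            delete[] buf;
            return 0;
        }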


  • JavaME - LWUIT images eat up all the memory

    - by Marko
    Hi, I'm writing a MIDlet using LWUIT and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms and each has a couple of labels and buttons, however I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).

    The application worked well in 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out of memory errors to show up. What the application does, among other things, is download remote images. The application seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out of memory error is encountered.

    Naturally, I looked into the amount of memory the main menu uses and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. Images range in size from 15 to 25KB, but after removing two of the three images used for every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1MB decrease in memory usage.

    The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling, or there's really no problem involved and I just need to scale down. If anyone has any insight to offer, I would greatly appreciate it.
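    In case the image handling is the "terribly wrong" part: each Image.createImage call decodes its own copy of the bitmap, so one thing I could try is decoding each icon once and sharing the instance across all buttons. A sketch (the resource names are made up):

        import com.sun.lwuit.Button;
        import com.sun.lwuit.Image;
        import java.io.IOException;

        class IconHelper {
            // One decoded instance per icon, shared by every button
            static void shareIcons(Button[] buttons) throws IOException {
                Image pressed = Image.createImage("/btn_pressed.png");   // made-up name
                Image rollover = Image.createImage("/btn_rollover.png"); // made-up name
                for (int i = 0; i < buttons.length; i++) {
                    buttons[i].setPressedIcon(pressed);
                    buttons[i].setRolloverIcon(rollover);
                }
            }
        }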


  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3GB. There is a custom memory manager that uses memory mapped files to achieve reading of such a huge data set. The files are mapped into the process memory space only when needed, and with this the process memory is well under control.

    But what is observed is that with memory mapping, the system cache keeps on increasing until it occupies all the available physical memory. This leads to the slowing down of the entire system. My question is: how do I prevent the system cache from hogging physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down the application's performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases?

    I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining it would also be helpful.
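    For context, the mapping pattern our memory manager uses is roughly the following (a simplified sketch; the file name and view size are placeholders, and error checks are omitted):

        #include <windows.h>

        void ProcessFileWindow(const char *path, DWORD window)
        {
            HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                      OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
            HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
            // Map one window of the file rather than the whole 3GB data set
            const void *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, window);
            // ... read from view ...
            UnmapViewOfFile(view);   // view released, yet cached pages linger
            CloseHandle(mapping);
            CloseHandle(file);
        }

    Even with views unmapped promptly like this, the cache manager apparently keeps the pages resident, which is the behavior in question.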


  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and I noticed that a lot of them had to do with gesture support, and that many of them were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what are the capabilities that this adds? Which applications already support these capabilities, and can I expect others to add support in the near future?

    Here are the packages that were installed:

        Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1),
                 libgeis1:amd64 (2.2.9.2-0ubuntu1),
                 libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including many with touch support):

        Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1),
                 eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1),
                 ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are:

        libgeis1: Gesture engine interface support
          A common API for clients of a systemwide gesture recognition and
          propagation engine.
        libframe6: Touch Frame Library
          This library handles the buildup and synchronization of a set of
          simultaneous touches. The library is input agnostic, with bindings
          for mtdev, frame and XI2.1.
        libgrail5: Gesture Recognition And Instantiation Library
          This library consists of an interface and tools for handling gesture
          recognition and gesture instantiation. Applications can use the
          grail callbacks to receive gesture primitives and raw input events
          from the underlying kernel device.

    And the descriptions for the upgraded packages are:

        libgrip0: provides multitouch gestures to GTK+ apps
          Libgrip hooks gesture recognition into GTK+ applications.
        ginn: Gesture Injector: No-GEIS, No-Toolkits
          A daemon with jinn-like wish-granting capabilities: it gives
          applications the ability to support a subset of multi-touch gestures
          without having to integrate GEIS or multi-touch GTK/Qt libs.

    Adding in a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Anyone have any info about this?


  • Updating Ubuntu server from 8.10 to 10.04

    - by Ward
    I have a VPS that has Ubuntu 8.10 Server Edition installed on it and I would like to upgrade it to 10.04. What would be the correct way of doing this? I only have ssh access to it and a "Start/Shutdown VPS" button in the vendor's client panel. In other words, I do not have physical access to it.

    Also worth noting is that I apparently cannot install packages any more, since the sources this server is set to (osuosl.org?) are no longer online:

        # apt-get update
        Ign http://ubuntu.osuosl.org intrepid Release.gpg
        Ign http://ubuntu.osuosl.org intrepid/main Translation-en_US
        Ign http://ubuntu.osuosl.org intrepid/universe Translation-en_US
        Ign http://ubuntu.osuosl.org intrepid Release
        Ign http://ubuntu.osuosl.org intrepid/main Packages
        Ign http://ubuntu.osuosl.org intrepid/universe Packages
        Err http://ubuntu.osuosl.org intrepid/main Packages
          404 Not Found
        Err http://ubuntu.osuosl.org intrepid/universe Packages
          404 Not Found
        W: Failed to fetch http://ubuntu.osuosl.org/ubuntu/dists/intrepid/main/binary-amd64/Packages.gz  404 Not Found
        W: Failed to fetch http://ubuntu.osuosl.org/ubuntu/dists/intrepid/universe/binary-amd64/Packages.gz  404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.
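    A hedged sketch of the usual route for an EOL release: repoint the sources at old-releases.ubuntu.com (which still hosts intrepid), then upgrade one release at a time, since 8.10 has to pass through 9.04 and 9.10 before reaching 10.04. Running it inside screen is a sensible precaution over ssh:

        sudo sed -i 's|http://ubuntu.osuosl.org/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core
        sudo do-release-upgrade        # repeat once per intermediate release

    After each step, the sources may need to point at old-releases again, since the intermediate releases (9.04, 9.10) are themselves EOL.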


  • Cannot login in account with encrypted home after update from 11.04 to 11.10

    - by martin
    After upgrading from Ubuntu 11.04 to 11.10 I cannot access my encrypted home partition anymore. I can log in, however all data stays encrypted. ecryptfs-mount-private gives:

        ERROR: Encrypted private directory is not setup properly

    Any idea how to fix this?

    Update: I have several kernels installed (after the upgrade my menu.lst looks like this: http://paste.org/pastebin/view/35591) and the problem is the same for all kernels. Booting from 2.6.32-27-generic and running adduser --encrypt-home tes gives:

        Adding user `tes' ...
        Adding new group `tes' (1008) ...
        Adding new user `tes' (1007) with group `tes' ...
        Creating home directory `/home/tes' ...
        Setting up encryption ...

        ************************************************************************
        YOU SHOULD RECORD YOUR MOUNT PASSPHRASE AND STORE IT IN A SAFE LOCATION.
          ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
        THIS WILL BE REQUIRED IF YOU NEED TO RECOVER YOUR DATA AT A LATER TIME.
        ************************************************************************

        Error: Your kernel does not support filename encryption
        ERROR: Could not add passphrase to the current keyring
        adduser: `/usr/bin/ecryptfs-setup-private -b -u tes' returned error code 1.  Exiting.
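    Two hedged things worth trying: boot the newest kernel in that menu.lst (the "Your kernel does not support filename encryption" error suggests 2.6.32 is too old for how the home directory was created), and let the ecryptfs-utils recovery helper mount the data read-only (the username in the path is a placeholder):

        sudo ecryptfs-recover-private /home/.ecryptfs/martin/.Private

    It prompts for the login passphrase and mounts the decrypted contents under /tmp, so the files can at least be copied out.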


  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)
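    A hedged starting point, since the failures are in the packages' own post-installation scripts rather than in what is being installed: re-run the pending configuration step directly and watch which script fails first, then let apt repair the rest:

        sudo dpkg --configure -a     # re-run the pending post-installation scripts
        sudo apt-get -f install      # let apt repair broken/half-configured packages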


  • How can I get a user account back?

    - by Ilan
    With all my computers I make one partition for the root and another for /home. This is useful for disasters where I need to reformat the root for ubuntu, but leave my /home data untouched.

    With the upgrade to 13.10 I had troubles on my wife's computer so I reinstalled 13.10. My own /home files came up, as expected, as if nothing had happened. For my wife, it is a different story - and that is the part where I need help.

    If I go into Files, computer I can see the home directory. There I can see ilan (my files) and yona (my wife's files). I can open yona, documents and see all her work. This means that all is well and I just need to hook up to her files.

    So the problem is that I need to create a user called Yona or yona, but something which will get me to exactly the files of interest. I'm not sure if I created her account as standard or an administrator. Is there any way I could tell by looking at the files in /home?

    I created a new user called Yona as a standard user (hoping that this is the right guess). The account came up as disabled. I pressed on the disabled button so I could change the password. I put in her password but it was refused as too short. Too short, too short, but that is what was used and that is what I need.

    Can anyone help me before my wife comes home and shoots me? Thanks, Ilan
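    A hedged sketch of the command-line route (the UID shown is a placeholder - read the real one with ls): make the new yona account own the old files, and set the short password as root, which may bypass the length check the GUI enforces:

        ls -ln /home/yona                    # numeric UID/GID that owns the old files
        id yona                              # UID of the newly created account
        sudo chown -R yona:yona /home/yona   # give the new account the old files
        sudo passwd yona                     # root can usually set a 'too short' password

    As for standard versus administrator: that isn't recorded in /home at all, but in group membership, so it can simply be set again when recreating the account.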


  • Unity is broken after upgrading to 12.10 (Optimus laptop)

    - by SyS
    I upgraded to GNU/Linux Ubuntu 12.10 but have been unable to use Unity properly afterwards. Indeed, I encountered the exact same problem as a lot of people: the Unity side and top bars are not displaying, although in my case Unity seems completely broken, as I can't even right-click. However, in my case it's worth noticing that I have an Optimus laptop with an Nvidia graphics card (GeForce GT 540M). Bumblebee and its 'optirun' command are working just fine, as usual, after the upgrade.

    I tried several things: resetting Compiz and Unity (with the command 'setsid unity') - which works, but I have to do it every time I boot and it resets all my settings - updating/reinstalling/reconfiguring my Nvidia drivers as well as Bumblebee, trying the Nouveau drivers instead of nvidia-current, and checking that linux-headers-generic were installed (they were).

    However, I couldn't reset the xorg.conf files, as they're just not there. There is neither an xorg.conf file nor its backup in /etc/X11. I think this is where the problem comes from, although I'm far from an expert. Maybe retrieving an xorg.conf file will fix this mess, but I have no idea how to do that. I'm just tired and don't know what to do. So, here I am, begging for your help.
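    For what it's worth, having no /etc/X11/xorg.conf is normal on a Bumblebee setup: the Intel GPU drives the display and X autodetects it, so the missing file is probably not the culprit. A hedged sketch for making the per-boot reset unnecessary - clear the stored Compiz profile once, then reload Unity:

        dconf reset -f /org/compiz/    # wipe compiz settings kept in dconf
        setsid unity                   # reload unity, detached from this shell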


  • How to fix Black screen?

    - by stupidwhiteguy
    I recently had my question deleted and merged into a standard how-to for blank screens. I am relatively new to this type of computer work and I don't understand the steps necessary to diagnose my problem well enough to solve it, so that helpful how-to has me feeling helpless.

    I can use Ctrl + Alt + F1 and log in. So how do I use sudo commands to fix the blank screen on my old Dell? What I have tried so far:

        sudo lspci -nn                  tells me my video card is an ATI Rage 128 Pro Ultra TF
        tail -f /var/log/Xorg.0.log     tells me 'Permission denied' - that is all I get
        sudo apt-get install --reinstall unity     tried that
        apt-get update / apt-get upgrade            also tried

    Please don't close this question without providing an actual answer, or if you think it is an exact duplicate, provide a solution that worked for that question. I see a lot of these questions closed and no real answer given. I will try any solutions available and report results, so that others can also solve their problems and not be overwhelmed by overly broad troubleshooting guides that do nothing to help solve specific issues.

    The nomodeset change from quiet splash also yields no results on reboot: I still get a blank screen. This screen still has Ctrl + Alt + F1 abilities, but that is it. Ctrl + Alt + F8 causes a blinking cursor, and F7 gets a crazy flash with green and blue, then a blank screen. Ctrl + Alt + F1 gives a login prompt in text only.

    When run in recovery mode with failsafe graphics it says: "The system is running in low-graphics mode. Your screen, graphics card and input device settings could not be detected correctly. You will need to configure these yourself." How do I do that?

    I got /var/log/failsafeX-backup-120909200641.tar as the location of my log files, but I have no idea how to access it. Sound works on the blank screen, and the screen responds or flashes after my login is typed and the password entered. Really, any help is good - I don't even know where to start. I believe 12.04 is installed and functioning, but I don't think I can see it.

    At the end of the error log it says: error setting mtrr (base=0xf0000000, size=0x01000000, type=1) inappropriate ioctl for device (25). I have tried to provide as much info as I understand how to provide.
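    A hedged sketch of one thing to try from the Ctrl + Alt + F1 console, given how old an ATI Rage 128 is: install the legacy r128 X driver and write a minimal xorg.conf that forces it (if it still blanks, "vesa" on the Driver line is the fallback):

        sudo apt-get install xserver-xorg-video-r128
        printf 'Section "Device"\n    Identifier "Card0"\n    Driver "r128"\nEndSection\n' | sudo tee /etc/X11/xorg.conf
        sudo service lightdm restart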


  • How do I prevent software packages from being downloaded until I know it's safe?

    - by Dave M G
    Recently, an update that caused a problem with the Gnome session cost me a day's work. The solution was to roll back some packages to a previous version. The update manager is now telling me that my old packages should be updated... but I don't want to do that until I know that whatever bug or problem the latest version has is resolved.

    I understand that with any upgrade there is a risk of instability. However, in the 8 years or more that I've been on Ubuntu, using the latest releases has been stable enough, and with the benefit of the latest features and security. So, I'm not looking for general advice on how to handle upgrades. What I'm saying is that in this one particular instance, the bug introduced by these upgrades is severe and time-wasting. But, as an end user, when I encounter a problem like this, I have no idea how to address a specific concern about a specific package. I don't, for example, know which of these packages is the problem, and I can't take time from my work schedule to experiment with each package.

    So, my question is: How do I find out who exactly is responsible for these, or any, packages so that I can contact them and let them know about the problem? And how do I freeze these packages only, but allow other upgrades to happen?

        ubuntu-session
        gnome-session-common
        gnome-session-bin
        gnome-session
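    A hedged sketch of both halves (assuming a reasonably recent apt): hold the four packages so everything else keeps upgrading, and use apport to route the problem to the package's maintainers:

        # Freeze just these packages; other updates proceed as normal
        sudo apt-mark hold ubuntu-session gnome-session-common gnome-session-bin gnome-session

        # Later, once the fix lands:
        sudo apt-mark unhold ubuntu-session gnome-session-common gnome-session-bin gnome-session

        # File the problem against the suspect package; this reaches its maintainers
        ubuntu-bug gnome-session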


  • How to upgrade a project built in Visual Studio 2005 to Visual Studio 2008?

    - by Shailesh Jaiswal
    I have an OPC (OLE for Process Control) server project which was developed in Visual Studio 2005. I want to build it in Visual Studio 2008. The coding for the OPC server project is done in VC++. I want to connect my OPC client to this OPC server.

    When I first opened the OPC server project (built in Visual Studio 2005) in Visual Studio 2008, it asked for the conversion wizard. I went through that wizard and finished it successfully. But when I build (by right-clicking on the project and choosing Build Solution) it gives lots of errors - about 64 of them. Most are like:

        fatal error C1083: Cannot open type library file: 'msxml4.dll': No such file or directory
        fatal error LNK1181: cannot open input file 'rpcndr.lib'
        error C2051: case expression not constant

    Only these 3 types of errors appear; repeated in the error list, they become the bunch of 64 errors. Please provide me a solution for this issue. Can you give me any suggestion or link, or any way through which I can resolve it?


  • When is it appropriate to use C++ exceptions?

    - by krebstar
    I'm trying to design a class that needs to dynamically allocate some memory. I had planned to allocate the memory it needs during construction, but how do I handle failed memory allocations? Should I throw an exception? I read somewhere that exceptions should only be used for "exceptional" cases, and running out of memory doesn't seem like an exceptional case to me. Should I allocate memory in a separate initialization routine instead, check for failures, and then destroy the class instance gracefully? Or should I use exceptions? The class won't have anything useful to do if these memory allocations fail.

    EDIT: The consensus seems to be that running out of memory IS an exceptional case. Will see how to go about this. Thanks. :)
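    A minimal sketch of the constructor-allocation route (Buffer is a made-up class, not from the question): operator new already throws std::bad_alloc on failure, so the constructor can simply let it propagate and the caller decides how to react:

        #include <cstddef>
        #include <iostream>
        #include <new>

        class Buffer {
        public:
            // new[] throws std::bad_alloc on failure; nothing to check by hand.
            explicit Buffer(std::size_t n) : data_(new unsigned char[n]), size_(n) {}
            ~Buffer() { delete[] data_; }
            // Copying is left out of this sketch; the rule of three applies in real code.
        private:
            unsigned char *data_;
            std::size_t size_;
        };

        int main()
        {
            try {
                Buffer big(1024u * 1024 * 1024);   // may throw on a small machine
            } catch (const std::bad_alloc &) {
                std::cerr << "allocation failed\n";
            }
            return 0;
        }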


  • Efficient batch SQL query execution on Android for a database upgrade

    - by Pentium10
    As we Android developers know, the SQLiteDatabase execSQL method can execute only one statement. The docs say:

        Execute a single SQL statement that is not a query. For example, CREATE TABLE,
        DELETE, INSERT, etc. Multiple statements separated by ;s are not supported.

    I have to load in a batch of records, 1000 and counting. How do I insert these efficiently? And what's the easiest way to deliver these SQL statements with your apk? I should mention that there is already a system database, and I will run this in the onUpgrade callback. I have this code so far:

        List<String[]> li = new ArrayList<String[]>();
        li.add(new String[] { "-1", "Stop", "0" });
        li.add(new String[] { "0", "Start", "0" });
        /* the rest of the assign */
        try {
            for (String[] elem : li) {
                getDb().execSQL(
                        "INSERT INTO " + TABLENAME + " (" + _ID + "," + NAME + ","
                                + PARSE_ORDER + ") VALUES (?,?,?)", elem);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
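    A hedged sketch of the usual speedup (TABLENAME and the column constants are the ones from the snippet above): wrap the loop in a single transaction and reuse one compiled statement, so SQLite commits once instead of once per row:

        SQLiteDatabase db = getDb();
        db.beginTransaction();
        try {
            SQLiteStatement stmt = db.compileStatement(
                    "INSERT INTO " + TABLENAME + " (" + _ID + "," + NAME + ","
                            + PARSE_ORDER + ") VALUES (?,?,?)");
            for (String[] elem : li) {
                stmt.bindString(1, elem[0]);
                stmt.bindString(2, elem[1]);
                stmt.bindString(3, elem[2]);
                stmt.executeInsert();
            }
            db.setTransactionSuccessful();   // commit everything at once
        } finally {
            db.endTransaction();
        }

    For delivering the data with the apk, a common choice is a file in assets/ or res/raw/ that is read and parsed at upgrade time, rather than thousands of SQL strings in code.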


  • iPhone mapView / mapKit using removeAnnotation & addAnnotation results in memory leak?

    - by user266618
    To update the location of a GPS indicator on mapView...

        [mapView removeAnnotation:myGpsAnnotation];
        [myGpsAnnotation release];
        myGpsAnnotation = nil;
        myGpsAnnotation = [[MapLocationAnnotation alloc]
            initWithCoordinate:region.center
                annotationType:MapAnnotationTypeGps
                         title:MAP_ANNOTATION_TYPE_GPS];
        [mapView addAnnotation:myGpsAnnotation];

    ...I see net memory slowly climbing in Instruments (simulator). No "Leak" blip, but "Net Bytes" and "# Net" slowly incrementing... unless this code is commented out. So I'm 100% certain this is the offending code. BUT if I do the following...

        [mapView removeAnnotation:myGpsAnnotation];
        [myGpsAnnotation release];
        myGpsAnnotation = nil;
        myGpsAnnotation = [[MapLocationAnnotation alloc]
            initWithCoordinate:region.center
                annotationType:MapAnnotationTypeGps
                         title:MAP_ANNOTATION_TYPE_GPS];
        [mapView addAnnotation:myGpsAnnotation];
        [mapView removeAnnotation:myGpsAnnotation];
        [mapView addAnnotation:myGpsAnnotation];
        [mapView removeAnnotation:myGpsAnnotation];
        [mapView addAnnotation:myGpsAnnotation];

    ...then the "Net Bytes" and "# Net" increase much faster. Is it possible this isn't my mistake, and I'm trying to track down a leak in MapKit? Am I really leaking memory? Again, nothing appears under "Leaks", but then I don't see why the Net values would be continually climbing. Thanks for any help, -Gord


  • System.AccessViolationException: Attempted to read or write protected memory.

    - by Ananth
    I get the following exception when I try to "find and replace" in a Word 2007 document, on Windows Vista and Windows 7:

        System.AccessViolationException: Attempted to read or write protected memory.
        This is often an indication that other memory is corrupt.
           at Microsoft.Office.Interop.Word.Find.Execute(Object& FindText,
              Object& MatchCase, Object& MatchWholeWord, Object& MatchWildcards,
              Object& MatchSoundsLike, Object& MatchAllWordForms, Object& Forward,
              Object& Wrap, Object& Format, Object& ReplaceWith, Object& Replace,
              Object& MatchKashida, Object& MatchDiacritics, Object& MatchAlefHamza,
              Object& MatchControl)

    Is there any solution for this? I am using .NET 3.5 and C#.


  • How Can I Prevent Memory Leaks in IE Mobile?

    - by Jake Howlett
    Hi All,

    I've written an application for use offline (with Google Gears) on devices using IE Mobile. The devices are experiencing memory leaks at such a rate that the device becomes unusable over time.

    The problem page fetches entries from the local Gears database and renders a table of each entry, with a link in the last column of each row to open the entry (the link is just onclick="open('myID')"). When they've done with the entry they return to the table, which is RE-rendered. It's the repeated building of this table that appears to be the problem - mainly the onclick events. The table is generated, in essence, like this:

        var tmp = "";
        for (var i = 0; i < 100; i++) {
            tmp += "<tr><td>row " + i + "</td><td><a href=\"#\" id=\"LINK-" + i + "\"" +
                   " onclick=\"afunction();return false;\">link</a></td></tr>";
        }
        document.getElementById('view').innerHTML = "<table>" + tmp + "</table>";

    I've read up on common causes of memory leaks and tried setting the onclick event for each link to "null" before re-rendering the table, but it still seems to leak. Anybody got any ideas?

    In case it matters, the function being called from each link looks like this:

        function afunction() {
            document.getElementById('view').style.display = "none";
        }

    Would that constitute a circular reference in any way?

    Jake
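    One hedged idea, sketched against the markup above: skip the inline onclick entirely and attach a single delegated handler on the container, so re-rendering the table never creates new handler references for old IE to leak:

        // Build rows without any inline handlers
        var tmp = "";
        for (var i = 0; i < 100; i++) {
            tmp += "<tr><td>row " + i + "</td><td>" +
                   "<a href=\"#\" id=\"LINK-" + i + "\">link</a></td></tr>";
        }
        document.getElementById('view').innerHTML = "<table>" + tmp + "</table>";

        // One handler for the whole table, set once at startup; it lives on
        // the container, which survives every innerHTML re-render
        document.getElementById('view').onclick = function (e) {
            e = e || window.event;                     // old-IE event model
            var el = e.target || e.srcElement;
            if (el.tagName === "A" && el.id.indexOf("LINK-") === 0) {
                afunction();
                return false;                          // suppress navigation
            }
        };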


  • Throttling CPU/Memory usage of a Thread in Java?

    - by Nalandial
    I'm writing an application that will have multiple threads running, and want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to try and avoid using C++ and JNI if possible. I realize this might not be possible using a higher level language, but I'm curious to see if anyone has any ideas.

    EDIT: Added a bounty; I'd like some really good, well thought out ideas on this.

    EDIT 2: The situation I need this for is executing other people's code on my server. Basically it is completely arbitrary code, with the only guarantee being that there will be a main method on the class file. Currently, multiple completely disparate classes, which are loaded in at runtime, are executing concurrently as separate threads. I inherited this code (the original author is gone). The way it's written, it would be a pain to refactor to create separate processes for each class that gets executed. If that's the only good way to limit memory usage via the VM arguments, then so be it. But I'd like to know if there's a way to do it with threads. Even as a separate process, I'd like to be able to somehow limit its CPU usage, since as I mentioned earlier, several of these will be executing at once. I don't want an infinite loop to hog up all the resources.

    EDIT 3: An easy way to approximate object size is with Java's Instrumentation classes; specifically, the getObjectSize method. Note that there is some special setup needed to use this tool; see the sketch below.
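    A minimal sketch of that setup (the class and jar names are made up): getObjectSize is only reachable through an agent, so the JVM hands over the Instrumentation instance via premain:

        // SizeAgent.java - package as sizeagent.jar with a manifest line:
        //   Premain-Class: SizeAgent
        // and launch the app with: java -javaagent:sizeagent.jar ...
        import java.lang.instrument.Instrumentation;

        public class SizeAgent {
            private static volatile Instrumentation inst;

            public static void premain(String agentArgs, Instrumentation i) {
                inst = i;
            }

            // Shallow size of one object, in bytes (referenced objects excluded)
            public static long sizeOf(Object o) {
                return inst.getObjectSize(o);
            }
        }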


  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS.

    The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page.

    Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.
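    Not an answer to the root cause, but a hedged stopgap while investigating (the app pool name is a placeholder): IIS can recycle the worker process when its private memory passes a limit, which at least keeps w3wp.exe from starving a 1GB box. The value is in kilobytes:

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:800000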


  • Memory leak when changing Text field of a Scintilla object.

    - by PlaZmaZ
    I have a relatively large program that I'm optimizing for ASCII input files around 10-80MB in size. The program reads every line of the file into a StringBuilder and then sets the Text field of the ScintillaNET object to the StringBuilder's contents. The StringBuilder is then set to null.

        private void ReloadFile(string sFile)
        {
            txt_log.ResetText();
            try
            {
                StringBuilder sLine = new StringBuilder("");
                using (StreamReader sr = new StreamReader(sFile))
                {
                    while (true)
                    {
                        string temp = sr.ReadLine();
                        if (temp == null)
                            break;
                        sLine.AppendLine(temp);
                    }
                    sr.Close();
                }
                txt_log.Text = sLine.ToString();
                sLine = null;
            }
            catch (Exception ex)
            {
                MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                    "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            GC.Collect();
        }

    The program has an option to reload or open a file. This is irrelevant, as any assignment to txt_log.Text seems to not get rid of the memory used by the previous .Text contents. Commenting out the txt_log.Text line gives proper memory behavior. The GC.Collect() line seems pointless, and I have tried both with and without it. Is there something I'm missing here? I HIGHLY doubt it's a problem with the ScintillaNET component itself - rather something in this code.


  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2-million-line file.

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip: http://download.geonames.org/export/dump/.

    I started by running the following, but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient, so I'm posting that below. Still, with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects on each pass of the for loop?

        def run():
            from geonames.models import POI
            f = file('data/US.txt')
            for l in f:
                li = l.split('\t')
                try:
                    p = POI()
                    p.geonameid = li[0]
                    p.name = li[1]
                    p.asciiname = li[2]
                    p.alternatenames = li[3]
                    p.point = "POINT(%s %s)" % (li[5], li[4])
                    p.feature_class = li[6]
                    p.feature_code = li[7]
                    p.country_code = li[8]
                    p.ccs2 = li[9]
                    p.admin1_code = li[10]
                    p.admin2_code = li[11]
                    p.admin3_code = li[12]
                    p.admin4_code = li[13]
                    p.population = li[14]
                    p.elevation = li[15]
                    p.gtopo30 = li[16]
                    p.timezone = li[17]
                    p.modification_date = li[18]
                    p.save()
                except IndexError:
                    pass

        if __name__ == "__main__":
            run()
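    Two hedged suggestions: check that DEBUG is False in settings (with DEBUG on, Django keeps a record of every executed query in memory, which alone would explain unbounded growth over 2 million inserts), and commit in batches rather than per row. A sketch of the batching using the pre-1.6 transaction API (the batch size is arbitrary):

        from django.db import transaction

        @transaction.commit_manually
        def run():
            from geonames.models import POI
            f = open('data/US.txt')
            for i, l in enumerate(f):
                li = l.split('\t')
                try:
                    p = POI()
                    # ... assign the 19 fields exactly as above ...
                    p.save()
                except IndexError:
                    pass
                if i % 10000 == 0:
                    transaction.commit()   # flush a batch; frees per-query state
            transaction.commit()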


  • Ajax-heavy JS apps using excessive amounts of memory over time.

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40KB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table each time because the data is almost always new. I am attaching a few events to the table: approx 5 per line, 30 lines in the table.

    I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements being overwritten. I then also delete the large html variables once they are sent to the DOM, using delete my_var. I have checked for circular references and for attached events that are never cleared a few times, but never REALLY dug into it.

    I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet.

    To give an idea of how much memory this is using: after about 4 hours, it is using about 420,000K in Chrome, and much more in Firefox or IE. Thanks!
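    One hedged pointer on the ~150 per-row handlers (the selector and handler names are made up): jQuery 1.4's .delegate() binds a single handler on the table's container that keeps working across every redraw, so nothing per-row is ever attached or needs cleanup:

        // Bind once at startup; rows replaced by .html() need no rebinding
        $('#table-container').delegate('a.row-action', 'click', function (e) {
            e.preventDefault();
            handleRowAction(this.id);   // hypothetical per-row handler
        });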


  • Where does a process's memory space start, and where does it end?

    - by nhaa123
    Hi, I'm trying to dump the memory from my application where the variables lie. Here's the function:

        void MyDump(const void *m, unsigned int n)
        {
            const unsigned char *p = reinterpret_cast<const unsigned char *>(m);
            char buffer[16];
            unsigned int mod = 0;

            for (unsigned int i = 0; i < n; ++i, ++mod) {
                if (mod % 16 == 0) {
                    mod = 0;
                    std::cout << " | ";
                    for (unsigned short j = 0; j < 16; ++j) {
                        switch (buffer[j]) {
                            case 0xa:
                            case 0xb:
                            case 0xd:
                            case 0xe:
                            case 0xf:
                                std::cout << " ";
                                break;
                            default:
                                std::cout << buffer[j];
                        }
                    }
                    std::cout << "\n0x" << std::setfill('0') << std::setw(8)
                              << std::hex << (long)i << " | ";
                }
                buffer[i % 16] = p[i];
                std::cout << std::setw(2) << std::hex
                          << static_cast<unsigned int>(p[i]) << " ";
                if (i % 4 == 0 && i != 1)
                    std::cout << " ";
            }
        }

    Now, how can I know from which address my process' memory space starts, where all the variables are stored? And how do I know how long the area is? For instance:

        MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */);

    Best regards, nhaa123
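    A hedged sketch of one way to see the ranges on Linux (on Windows, walking memory with VirtualQuery plays the same role; not shown): a process doesn't have one contiguous space but many regions - heap, stack, mapped libraries - and /proc/self/maps lists each with its start, end and permissions, which is exactly what MyDump needs:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            // Each line: start-end perms offset dev inode [path]
            std::ifstream maps("/proc/self/maps");
            std::string line;
            while (std::getline(maps, line))
                std::cout << line << '\n';
            return 0;
        }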

