Search Results

Search found 6511 results on 261 pages for 'everyday usage'.


  • SQL 2008 Memory Usage

    - by Danilo Brambilla
    I have a SQL Server 2008 (ver 10.0.1600) running on a Windows Server 2008 R2 Enterprise server with 8 GB of physical RAM. If I open Task Manager, I can see in the 'Physical Memory' section of the 'Performance' tab that only 340 MB are available out of 8191 MB total, but I can't see any process using that amount of memory. Please note that SQL Server's memory is limited to 6 GB (Maximum Server Memory = 6000). If I open Sysinternals Process Explorer, I can see that the sqlservr.exe process has: Private Bytes: 227.000 K, Working Set: 140.000 K, Virtual Size: 8.762.000 K. What does this mean? Is there any way to free up this memory for other processes? Why does Virtual Size show up as allocated memory? I thought that Virtual Size was 'reserved memory' only.

    Read the article

  • Bugzilla random lag and CPU usage

    - by Serty Oan
    Here is the configuration: a Bugzilla (3.4.2) application running on an Ubuntu 8.04 server with Perl 5.8.8. Here is the problem: sometimes (randomly), pages take a very long time to load. It can be any page: the login page, query.cgi, buglist.cgi, etc. Using top on the server, I tried to see what was wrong and saw that sometimes a Bugzilla script would use 30% memory and run for quite a while (1 to 5 seconds), and sometimes it would not even show up in the list because it responded too quickly, or would use just 2%. The mysqld process does not seem to be the problem.

    Read the article

  • How to debug high memory usage by registry?

    - by bkr
    I have a Windows 2008 R2 server running ADFS 2.0 and some web apps that is having issues with low memory. Digging around, I found that 5.5 GB was being used under Kernel Memory (paged). Digging further with Poolmon, I discovered that the majority of that (5 GB+) was being used by CM, the configuration manager, also known as the registry. I'm really not sure how to tell why the registry is using so much memory, however, or how to release it. Looking at the physical registry files, they don't appear that large. EDIT #1: Using the PowerShell script at http://jdhitsolutions.com/blog/2011/05/get-registry-size-and-age/ confirms what I saw when looking at the physical registry files, that they're relatively small:
    Computername : (removed)
    Status : OK
    CurrentSize : 67
    MaximumSize : 2048
    FreeSize : 1981
    PercentFree : 96.728515625
    Created : 4/1/2011 11:38:02 AM
    Age : 454.23:41:28.2540682

    Read the article

  • CPU usage from the top command

    - by kairyu
    How can I get a result like the following example? Any command or script? Snapshot:
    u1234 3971 1.9 0.0 0 0 ? Z 20:00 0:00 [php] <defunct>
    u1234 4243 3.8 0.2 64128 43064 ? D 20:00 0:00 /usr/bin/php /home/u1234/public_html/index.php
    u1234 4289 5.3 0.2 64128 43064 ? R 20:00 0:00 /usr/bin/php /home/u1234/public_html/index.php
    u1234 4312 9.8 0.2 64348 43124 ? D 20:01 0:00 /usr/bin/php /home/u1234/public_html/index.php
    u1235 4368 0.0 0.0 30416 6604 ? R 20:01 0:00 /usr/bin/php /home/u1235/public_html/index.php
    u1236 4350 2.0 0.0 34884 13284 ? D 20:01 0:00 /usr/bin/php /home/u1236/public_html/index.php
    u1237 4353 13.3 0.1 51296 30496 ? S 20:01 0:00 /usr/bin/php /home/u1237/public_html/index.php
    u1238 4362 63.0 0.0 0 0 ? Z 20:01 0:00 [php] <defunct>
    u1238 4366 0.0 0.1 51352 30532 ? R 20:01 0:00 /usr/bin/php /home/u1238/public_html/index.php
    u1239 4082 3.0 0.0 0 0 ? Z 20:00 0:01 [php] <defunct>
    u1239 4361 26.0 0.1 49104 28408 ? R 20:01 0:00 /usr/bin/php /home/u1239/public_html/index.php
    u1240 1980 0.4 0.0 0 0 ? Z 19:58 0:00 [php] <defunct>
    CPU TIME = 8459.71999999992
    I got this result from HostGator support :) I used "top -c", but it does not show "/home/u1239/public_html/index.php". Thanks.
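    For what it's worth, the closest ready-made command is probably ps aux (or ps auxww to avoid truncation of the command line; top -c cuts it off at the window width). Below is a rough Python equivalent, offered only as a sketch: it assumes the third-party psutil package is installed (pip install psutil), filters on "php" for illustration, and is not the script HostGator used. The summed value at the end is the total CPU time (user + system) accumulated by the matching processes, which appears to be what the "CPU TIME" line in the snapshot reports.

      import psutil  # third-party: pip install psutil

      total_cpu_seconds = 0.0
      for proc in psutil.process_iter(attrs=["pid", "username", "name",
                                             "memory_percent", "cmdline"]):
          info = proc.info
          cmdline = " ".join(info["cmdline"] or [])
          # Keep only PHP workers, like the snapshot above (zombies keep their name).
          if "php" not in (info["name"] or "") and "php" not in cmdline:
              continue
          try:
              times = proc.cpu_times()
          except psutil.NoSuchProcess:
              continue  # the process exited between listing and inspection
          total_cpu_seconds += times.user + times.system
          print("%-8s %6d %5.1f  %s" % (info["username"] or "?", info["pid"],
                                        info["memory_percent"] or 0.0,
                                        cmdline or "[php] <defunct>"))

      print("CPU TIME = %.2f" % total_cpu_seconds)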

    Read the article

  • Windows 2008 R2 IIS worker process memory usage increase

    - by nLL
    I have a web site written in C# with around 400-500 users online at any time. It was on a Windows 2008 32-bit machine before and never locked up or slowed down due to increased memory consumption, up until I upgraded its server to Windows 2008 R2 64-bit. The old server had only 4 GB of RAM and a quad-core CPU at 2 GHz, and the site worked just fine. Since I upgraded the server I have noticed (twice within 10 days) that it starts to eat RAM; last night it went up to 4 GB. As RAM usage increases, response slows down quite a lot. Recycling the app pool doesn't help; I have to restart its worker process to recover. I've noticed this usually happens if there are continuous errors. Since I didn't change anything in the code, am I safe to assume it is not related to a memory leak in the code? Did anyone come across something like this? The same thing happens if I generate continuous errors with classic ASP. Thanks.

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a little funny experiment in the game/sim "Train Simulator 2013". I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic: it takes all things into account, like speed/acceleration, G-forces, comfort levels, possible wheel slip and many more, and most of those things on each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so. I have a Logitech G15 keyboard which has an LCD, and it allows me to monitor CPU/RAM and video card load on it. The strange thing is, all CPU cores were busy, but the total load was only about 60% maximum at all times. The video card was only at 30% load (a possibly important note: its memory was full, which is however not unusual for the game in question). The RAM had plenty of room and there weren't many operations, as it didn't grow or shrink much. I just have the feeling that the game would run smoother if it used more of my hardware power. Why is it not doing so? I had the same in another game, The Elder Scrolls: Morrowind, when using more than 100 mods (that all use scripting) and a few high-res texture mods, plus a full-on graphics improvement program. The engine is very old (2003), so I thought this might be the cause (not being optimised for multithreading). I had thought of possible causes, like: the operating system doesn't let the games use all the resources, or the games don't make use of multi-threading appropriately. To eliminate the former, I tried a CPU stress tool and it got 100% CPU juice as I let it run, so the OS is not the problem. I gave its thread the "higher" priority, though.
    My actual question: in both games, I did things the engine was not really built to do or support. Can those games' framerate be limited because their own engine is not able to cope? What is the real reason and, more importantly, can I help it? And in any case, could something actually be wrong with my hardware? It's all reasonably new, a couple of months old, and I (almost) never experience any other trouble. Modern and much more demanding games work absolutely fine.
    Specs:
    CPU: AMD Phenom II 965 X4 @ 3.4 GHz
    RAM: 8 GB of DDR3 RAM
    Video: MSI GTX560 (nVidia chip) with 1 GB of GDDR5 memory
    OS: Windows 7 Ultimate 64-bit
    Nothing overclocked.

    Read the article

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone. I'm looking for an app or a line of code that could let me observe a process, save the info in a number of variables, and then write the gathered info to a file. I've been trying variations of top, but no luck. I am running several CentOS virtual servers; each VM has 2 GB of RAM and 2 processors. Maybe a script that runs for a specified amount of time while writing lines with the info to a text file, so at the end I can have a sort of table with the data. The thing is, I'm going to stress test the server and I would like to have the data to compute some statistics. Any comments and suggestions are most welcome.
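    One possible approach, sketched below: sample overall CPU and memory at a fixed interval and append one CSV row per sample, which is easy to turn into a table or statistics afterwards. It assumes the third-party psutil package is installed (pip install psutil); the interval, duration and output path are placeholders. On CentOS, the sysstat package's sar command collects much the same data without any scripting.

      import csv
      import time

      import psutil  # third-party: pip install psutil

      INTERVAL = 5          # seconds between samples (placeholder)
      DURATION = 30 * 60    # total run time in seconds (placeholder)
      OUTFILE = "usage_log.csv"

      with open(OUTFILE, "a", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["timestamp", "cpu_percent", "mem_used_mb", "mem_percent"])
          end = time.time() + DURATION
          while time.time() < end:
              # cpu_percent(interval=...) blocks for the interval and averages over it
              cpu = psutil.cpu_percent(interval=INTERVAL)
              mem = psutil.virtual_memory()
              writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                               cpu, mem.used // (1024 * 1024), mem.percent])
              f.flush()  # so the file can be tailed while the stress test runs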

    Read the article

  • Graphing per-user CPU usage on a Linux machine

    - by mart1n
    I want to graph (graphical output would be great, i.e. a .png file) the following situation: I have users A, B, and C. I limit their resources so that when all users run a CPU-intensive task at the same time, those processes will use 25%, 25%, and 50% of the CPU. I know I can get the real-time stats using top, but I have no idea what to do with them. I've searched through the huge top man page but haven't found much on the subject of outputting data that can be graphed. Ideally, the graph would show a span of maybe 30 seconds. Any ideas on how to achieve this?
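    A sketch of one way to do it, assuming Python with matplotlib is available on the machine: sample ps once per second for about 30 seconds, sum %CPU per user, and save the curves as a .png. Note that ps reports %CPU averaged over each process's lifetime; for truly instantaneous numbers, parse top -b -n 1 output instead. The user names and output file name below are placeholders.

      import subprocess
      import time
      from collections import defaultdict

      import matplotlib
      matplotlib.use("Agg")           # no display needed; write straight to a file
      import matplotlib.pyplot as plt

      USERS = ["A", "B", "C"]         # placeholder user names
      SAMPLES = 30                    # ~30 seconds at one sample per second

      history = {u: [] for u in USERS}
      for _ in range(SAMPLES):
          usage = defaultdict(float)
          out = subprocess.check_output(["ps", "-eo", "user=,pcpu="], text=True)
          for line in out.splitlines():
              parts = line.split()
              if len(parts) != 2:
                  continue
              usage[parts[0]] += float(parts[1])
          for u in USERS:
              history[u].append(usage.get(u, 0.0))
          time.sleep(1)

      for u in USERS:
          plt.plot(range(SAMPLES), history[u], label=u)
      plt.xlabel("seconds")
      plt.ylabel("%CPU (summed over the user's processes)")
      plt.legend()
      plt.savefig("per_user_cpu.png")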

    Read the article

  • High CPU usage from system [on hold]

    - by user195641
    Can anyone explain this? Is it bad for my CPU? Can anyone help me? I don't know what to do! There is a high delay when I open a new task, even though it's an Intel Core Duo Extreme at 3.0 GHz. Thanks. There are about 100 items monitored with the agent. They are also monitored on other identical hosts where the Zabbix agent does not consume so much CPU. The agents send collected data to a Zabbix proxy. The agent configuration is the default. The host CPU has 8 cores (2.4 GHz). The smallest time interval for monitored items is 60 seconds.

    Read the article

  • How to configure Kubuntu for lightweight low-memory usage?

    - by augustin
    I just upgraded an old, secondary computer to the latest Kubuntu (10.10). It seems the effort was a bit too much for the hardware, and one 512 MB memory module died. I tried to take it out, clean the connectors, and put it back several times, but to no avail. Until such time as I can find a second-hand DDR memory module, I am left with a meagre 256 MB of RAM, which is below the official requirement (384 MB) to run Kubuntu/KDE. Indeed, the computer constantly swaps memory, making everything painfully slow. Since Kubuntu is already installed and I use it on all my computers (and I want to keep KDE for when I really need it), how can I configure Kubuntu to squeeze out every bit of unnecessary memory usage? This is a secondary computer but still very useful; we use it mostly for web browsing. A "lightweight" tag is missing.

    Read the article

  • How to read Unix usage syntax

    - by sixtyfootersdude
    I did some searching but I cannot find documentation on how Unix usage (synopsis) notation works. I know some things (mostly through trial and error), but, for example, how do I know that /usr/bin/ls [-aAbcCdeEfFghHilLmnopqrRstuvVx1@] [file]... means that you can include more than one option, i.e. ls -la? Can someone point me to some documentation on what the usage syntax is?

    Read the article

  • Poll C# app's memory usage at runtime?

    - by maxfridbe
    I have an app that, while running, needs to poll its own memory usage. It would be ideal if it could list out the memory usage for each object instantiated. I know this can be achieved by WMI, but I was hoping for something that doesn't rely on WMI.

    Read the article

  • PNGs in Java on Mac cause massive CPU usage

    - by alexganose
    Hey, I've been having this problem for a while now and was hoping someone could help. I make small games using Java on Mac OS X 10.6.3, and if I use PNGs as the image format, my CPU usage by Java skyrockets to, say, 50% (on a very small 2D game). However, if I use GIF as the format, my CPU usage by Java stays constant at 10%, which is reasonable. What is causing this problem? It occurs in every game I develop using PNGs, so I always just switch to GIFs. The problem now is that I need to use a PNG for its variable alpha properties rather than just plain transparency, which is not available with GIFs. The problem is present on Java SE 6 and previous versions. I am using an early 2009 MacBook Pro 15". The problem does not occur on a Windows PC running the same game: the CPU usage due to Java using PNGs on Windows (I have tried XP, Vista and 7) is always constantly low at ~10%. Any help would be greatly appreciated. Thanks :)

    Read the article

  • msvcrt: memory usage goes wild, but not under debugger

    - by al_miro
    I have C++ code compiled with the Intel compiler, 32-bit, in MS VC6 mode, so it uses either msvcrt.dll or msvcrtd.dll. The process does heavy memory allocation and deallocation. I monitor the memory usage with WMI and look at VirtualSize and WorkingSetSize:
    - with the debug runtime (msvcrtd.dll): virtual constant at 1.7 GB, working set constant at 1.2 GB
    - with the non-debug runtime (msvcrt.dll): virtual rising from 1.7 to 2.1 GB, working set rising from 1.2 to 1.4 GB
    - with the non-debug runtime but under a debugger (windbg): virtual constant at 1.7 GB, working set constant
    At 2.1 GB virtual the process crashes (as expected). But why would the virtual usage increase only with the (non-debug) msvcrt.dll, and only when not under a debugger? In all cases the compilation flags are identical; only the runtime libs are different.

    Read the article

  • How to get total CPU usage in Linux (C++)

    - by cz-david
    I am trying to get total CPU usage in %. First I should say that "top" simply will not do: there is a delay between CPU samples, it requires two samples and several seconds, and that hangs my program (I do not want to give it its own thread). The next thing I tried is "ps", which is instant but always gives a very high total (20+), and when I actually made my CPU do something it stayed at about 20... Is there any other way to get total CPU usage? It does not matter whether it is over one second or a longer period of time... Longer periods would be more useful, though.
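    The usual trick is to read /proc/stat, remember the previous counters, and report usage as the delta since the last call, so nothing blocks and no extra thread is needed. Below is a sketch of the idea in Python (it maps almost line for line to C++ with an ifstream); the aggregated "cpu" line holds jiffy counters in the order user, nice, system, idle, iowait, irq, softirq, steal.

      class CpuUsage:
          def __init__(self):
              self._prev_idle, self._prev_total = self._read()

          @staticmethod
          def _read():
              with open("/proc/stat") as f:
                  # first line is "cpu  user nice system idle iowait irq softirq steal ..."
                  fields = [float(x) for x in f.readline().split()[1:9]]
              idle = fields[3] + fields[4]      # idle + iowait
              return idle, sum(fields)

          def percent(self):
              """CPU usage (all cores combined) since the previous call, in percent."""
              idle, total = self._read()
              d_idle, d_total = idle - self._prev_idle, total - self._prev_total
              self._prev_idle, self._prev_total = idle, total
              if d_total == 0:
                  return 0.0
              return 100.0 * (1.0 - d_idle / d_total)

      if __name__ == "__main__":
          import time
          cpu = CpuUsage()
          time.sleep(1)       # only for the demo; call percent() whenever convenient
          print("%.1f%%" % cpu.percent())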

    Read the article

  • Java memory usage

    - by xdevel2000
    I know I keep posting similar questions about array memory usage, but now I want to post the question more specifically. After reading this article: http://www.javamex.com/tutorials/memory/object_memory_usage.shtml I didn't understand some things. Is the size of a data type always the same across platforms (Linux / Windows, 32 / 64 bit), so an int will always be 32 bits? When I compute the memory usage, must I also count the reference itself? If I have a reference to an object of a class that has one int field, will its memory be 12 (object header) + 4 (reference) + 4 (the int field) + 4 (padding) = 24 bytes?

    Read the article

  • Can C++ memory leaks negatively affect CPU usage?

    - by Dan
    Hi all, I have a C++ program that has a pretty terrible memory leak, about 4MB / second. I know where it's coming from and can fix it, but that's not my main problem. My program is taking up a very large amount of CPU usage and it isn't running as fast as I want it to. I have two different threads in the program. One by itself takes ~50% CPU, which is fine, and the other by itself takes ~15% CPU, which is fine. Together however CPU usage is 100% and the program cannot run as fast as it needs to. Can a memory leak by itself cause a problem like this? I know the program will eventually crash due to the leaked memory, but does a memory leak immediately lead to a slower program? By immediately I mean the program is too slow at the very start, not just when the memory footprint is huge. Thanks!

    Read the article

  • How does a company proxy server act in reporting internet usage of employees?

    - by Mehper C. Palavuzlar
    Our company recently set up a proxy server. As far as I understand, they want to apply access policies to undesired sites, log/audit internet usage, and generate employee internet usage reports. My question is related to the latter part. Before they were using a proxy server, they were also generating internet usage reports. Given that, what kind of contribution will the new proxy server make? Does it have further advantages for reporting internet usage?

    Read the article

  • Amazon EC2: how to find out detailed CPU usage?

    - by j0nes
    I am running several EC2 instances, and I want to know the exact work my CPU is doing. On "normal" machines I do this with Munin and its CPU plugin, which looks at the statistics provided by /proc/stat. On my EC2 machines, however, I get incorrect graphs. The machine has two cores, so the max CPU usage should be 200%, yet it gets as high as 400%. I know that I should use Amazon CloudWatch to see the total CPU usage (and this is the officially recommended way from Amazon to do this), but I am specifically looking at how the CPU usage is spent (e.g. system, user, iowait). Is there a way to get detailed CPU usage statistics on EC2 instances?
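    For the per-category breakdown, the same /proc/stat counters that Munin reads can be sampled directly. A rough Python sketch follows, with steal time included because that is the column that most often surprises people on virtualized EC2 instances; percentages come from the delta between two samples, and the one-second interval is arbitrary.

      import time

      FIELDS = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]

      def snapshot():
          with open("/proc/stat") as f:
              return [int(x) for x in f.readline().split()[1:9]]  # aggregated 'cpu' line

      before = snapshot()
      time.sleep(1)
      after = snapshot()

      deltas = [b - a for a, b in zip(before, after)]
      total = sum(deltas) or 1
      for name, d in zip(FIELDS, deltas):
          print("%-8s %5.1f%%" % (name, 100.0 * d / total))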

    Read the article

  • Software usage analytics in C#

    - by TiernanO
    I have a project I am currently working on and would like to implement some sort of usage tracking in the code: ideally, things like how often it is launched, how long it runs for, feature tracking, etc. I already use Exceptioneer for unhandled exceptions, but would like something similar for usage tracking. This data should all be anonymous and ideally run as a service by someone else, and I would like to give users the option to turn it off if they wish. So, is this something I should implement myself, or are there third parties out there that do this sort of thing? I know it might be a sticky area, but I have seen stats about iPhone app usage; they do it, so why can't we (if the user agrees, of course)? [Update] Based on the comments, I should have been clearer. This is a WinForms .NET 4 application, though I am thinking of updating it later with WCF. I would only be tracking my own application, though I would also want to know minor information about the environment (Windows OS version, SP, maybe processor and RAM...).

    Read the article

  • Should one replace the usage of jqGrid's addJSONData with setGridParam() and trigger('reloadGrid')?

    - by Oleg
    Hi everybody who uses jqGrid! I am new on stackoverflow.com, and it seems to me that many people who use stackoverflow.com are not only people with a problem that must be solved quickly. A lot of people read stackoverflow.com to look at well-known things from another side, and sometimes the reason is self-training (to stay in good form) by solving other people's problems. My question is for all those guys who want more than just their own problem solved.

    I recently wrote an answer to the question "jqGrid display default "loading" message when updating a table / on custom update". While writing the answer I thought: why does he use the addJSONData() function to refresh the data in the grid instead of changing the URL with setGridParam() and refreshing the jqGrid data with trigger('reloadGrid')? At first I wanted to recommend 'reloadGrid', but after thinking about it I understood that I am not quite sure which way is best. At the very least, I can't explain in two sentences why I prefer the second way, so I decided it could be an interesting subject for a discussion.

    So, to be exact: we have a typical situation. We have a web page with at least one jqGrid and some other controls, like combo-boxes (selects), checkboxes, etc., which give the user the possibility to change the scope of the information displayed in the jqGrid. Typically we define an event handler like jQuery("#selector").change(myRefresh).keyup(myKeyRefresh), and we need to reload the jqGrid contents based on the user's choices. After reading and analyzing the additional user input, we can refresh the jqGrid contents in at least two ways:

    1. Call $.ajax() manually, then inside the success or complete handler of the $.ajax call use jQuery.parseJSON() (or eval) and then call the addJSONData function of jqGrid. I found a lot of examples on stackoverflow.com that use addJSONData.

    2. Update the url of the jqGrid based on the user input, reset the current page number to 1, and optionally change the caption of the grid. All this can be done with setGridParam() and optionally setCaption(). At the end, call the grid's trigger('reloadGrid') method. To construct the url, by the way, I mostly use the jQuery.param function to be sure that all url parameters are packed correctly with respect to encodeURIComponent.

    I want us to discuss the advantages and disadvantages of both ways. I currently use the second way, so I will start with its advantages. One could tell me: "I call an existing web service, convert the received data to the jqGrid format and call addJSONData. That is the reason why I use the addJSONData method!" OK, I choose another way: jqGrid can call the web service directly and fill the results into the grid. There are a lot of jqGrid options which allow you to customize this process. First of all, one can delete or rename any standard parameter sent to the server with the prmNames option of jqGrid, or add any number of additional parameters with the postData option (see http://www.trirand.com/jqgridwiki/doku.php?id=wiki:options). One can modify all constructed parameters immediately before jqGrid makes the corresponding $.ajax request by defining a serializeGridData() function (one more jqGrid option). More than that, one can change every $.ajax parameter by setting the ajaxGridOptions option of jqGrid. I use ajaxGridOptions: {contentType: "application/json"}, for example, as a general setting in $.jgrid.defaults. The ajaxGridOptions option is very powerful.

    With the ajaxGridOptions option one can redefine any parameter of the $.ajax request sent by jqGrid, like the error, complete and beforeSend events. I see it as potentially interesting to define the dataFilter event, to be able to make any modification of the row data returned from the server. One more argument for the trigger('reloadGrid') way is blocking of the jqGrid during ajax request processing. Mostly I use the parameter loadui: 'block' to block the jqGrid while the JSON request is sent to the server. With the jQuery blockUI plugin (http://malsup.com/jquery/block/) one can block more parts of the web page than the grid only. To do this one can call jQuery('#main').block({ message: '<h1>Die Daten werden vom Server geladen...</h1>' }) (German for "The data is being loaded from the server...") before calling trigger('reloadGrid'), and jQuery('#main').unblock() inside the loadComplete and loadError functions. The loadui option could be set to 'disable' in this case.

    So I don't see why the addJSONData() function should be used. Can somebody who uses addJSONData() explain its advantages to me? Should one replace the usage of jqGrid's addJSONData with setGridParam() and trigger('reloadGrid')? I am open to the discussion.

    Read the article
