Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • monitoring services, CPU, memory remotely on a Windows server machine

    - by ToastMan
    I'm looking for a tool that can (remotely) monitor CPU and memory on a Windows server and, most importantly, show which service/process is using them. Or, is it possible to monitor a specific running service? We've got a server that freezes on a regular basis and we're trying to find the culprit without using a local debugger. It would be great if the monitoring software came with an agent that we could install on the remote clients for maximum accuracy. Any suggestions are very much appreciated.
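
    A minimal sketch of the agent idea, assuming Python and the psutil package are acceptable on the server (both are assumptions; the question doesn't name a tool): run it as a scheduled task and log which processes hold the most CPU and memory in the minutes before a freeze.

        import time
        import psutil  # third-party: pip install psutil

        def top_consumers(n=5):
            # Collect (rss, cpu_percent, pid, name) for every visible process.
            procs = []
            for p in psutil.process_iter(['pid', 'name', 'cpu_percent', 'memory_info']):
                mem = p.info['memory_info']
                if mem is not None:
                    procs.append((mem.rss, p.info['cpu_percent'], p.info['pid'], p.info['name']))
            return sorted(procs, reverse=True)[:n]

        while True:
            for rss, cpu, pid, name in top_consumers():
                print('%s (pid %d): %.1f MB RSS, %s%% CPU' % (name, pid, rss / 2.0**20, cpu))
            print('---')
            time.sleep(30)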

    Read the article

  • Centos swap cache memory leak

    - by user30008
    We have an image server that keeps running out of memory and crashing. We thought there was a hardware issue with the machine, because the code base has not changed and this is a new problem. We brought a new machine online with a newer kernel and a fresh CentOS 5.4 install, brought up just one subdomain, and the exact same error is occurring on the new machine. How should I troubleshoot this issue?
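
    One way to narrow this down, sketched in Python (the script and its log path are illustrative assumptions): sample /proc/meminfo once a minute, so that after the next crash the log shows whether the memory went to a process, the page cache, or swap.

        import time

        FIELDS = ('MemTotal', 'MemFree', 'Buffers', 'Cached', 'SwapFree')

        def sample():
            # /proc/meminfo lines look like "MemFree:   146116 kB".
            info = {}
            for line in open('/proc/meminfo'):
                key, rest = line.split(':', 1)
                info[key] = int(rest.split()[0])  # value in kB
            return dict((k, info[k]) for k in FIELDS if k in info)

        log = open('/var/log/memwatch.log', 'a')
        while True:
            log.write('%s %s\n' % (time.ctime(), sample()))
            log.flush()
            time.sleep(60)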

    Read the article

  • iTunes memory usage

    - by Jordan S. Jones
    Why does iTunes use upwards of 70 MB of RAM when it is minimized to my system tray playing music? -- Update -- I understand that iTunes is a resource hog :) What I'm trying to find out is what part of iTunes is using all that RAM. Is it the music library? If I have a smaller music library, will it use less RAM? Is it loading all the album artwork into RAM for some dumb reason? Additionally, are there any recommendations on how someone could reduce the amount of RAM it is using?

    Read the article

  • Identify an instance of Google Chrome by PID

    - by Laramie
    While working I generally need around 40 windows open at a time and run 100-200 processes. When memory constraints become an issue, I start picking off the processes that are the most resource-intensive and disposable. Often these are chrome.exe. It would be helpful to be able to match a particularly memory-hungry instance of Chrome to its PID so I can selectively close it. That is, if I knew which page title it currently has open, I could choose whether it lives or dies. I've tried Process Explorer to no avail. Any ideas?
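
    For what it's worth, a small Python sketch (psutil is an assumption) that ranks chrome.exe instances by resident memory, which at least identifies the hungriest PID. Mapping a PID to a tab title is easier from inside Chrome itself: its built-in task manager (Shift+Esc) shows the process ID next to each tab.

        import psutil  # third-party: pip install psutil

        chrome = []
        for p in psutil.process_iter(['pid', 'name', 'memory_info']):
            if p.info['name'] == 'chrome.exe' and p.info['memory_info']:
                chrome.append((p.info['memory_info'].rss, p.info['pid']))

        # Largest working set first: prime candidates for selective killing.
        for rss, pid in sorted(chrome, reverse=True):
            print('PID %d: %.1f MB' % (pid, rss / 2.0**20))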

    Read the article

  • How can we tell what drove the private bytes spike?

    - by ronin
    I have websites running in a .NET Framework 2.0 environment. Recently, every day my website becomes slow at a certain time and I need to recycle my app pool. I checked the log files and found that private bytes spike during that time slot. Through some research I already know that private bytes consist of both managed and unmanaged allocations, and that we can identify which one caused the spike based on the "# Bytes in all Heaps" counter. But I can't find a way to dig deeper. Is there any way to find out what drove the private bytes spike? How can we see what the private bytes are being used for? Thanks, Ronin

    Read the article

  • Current trends in Random Access Memory speed [closed]

    - by Vetal
    As far as I know, because of the laws of physics there will not be any tangible improvement in CPU cycles per second for the foreseeable future. However, because of the von Neumann bottleneck, that seems not to be the main issue for non-server applications. So what about RAM: are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?

    Read the article

  • What does CPU Time consist of?

    - by Sid
    What exactly does CPU time consist of? For instance, is the time taken to fetch a page from RAM (during which the CPU is most likely stalled) part of the CPU time? I'm not talking about fetching the page from disk here, just fetching it from RAM. Thanks
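
    A small Python illustration of the distinction (the numbers are illustrative): time.process_time() counts only CPU time charged to the process, while time.perf_counter() counts wall-clock time. A RAM access that stalls the pipeline is still charged as CPU time, because the process stays scheduled on the core; a page fetched from disk blocks the process, so that wait shows up only in wall time.

        import time

        wall0, cpu0 = time.perf_counter(), time.process_time()
        time.sleep(1)                             # blocked: wall time only
        total = sum(i * i for i in range(10**7))  # busy: wall and CPU time
        wall1, cpu1 = time.perf_counter(), time.process_time()

        # Expect wall to exceed cpu by about 1s: sleep never becomes CPU time.
        print('wall: %.2fs, cpu: %.2fs' % (wall1 - wall0, cpu1 - cpu0))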

    Read the article

  • Strange memory usage pattern on Windows Server 2008 on login through Remote Desktop

    - by headsling
    I'm running Windows Server 2008 Datacenter Service Pack 2 on a VMware instance with 10 GB of RAM allocated. I'm not running IIS or SQL Server. Under 'normal' conditions, the machine uses ~5.5 GB of memory. However, when I log in to the server through Remote Desktop, memory usage slowly climbs to 9.8 GB in use. After several minutes the memory slowly creeps back down to the 5.5 GB mark. I've tried killing, on login, all the processes associated with my login (barring Task Manager), without success, and I can't see any individual process whose memory usage grows while the total is increasing. I'm assuming this is some system-level cache that is growing and shrinking... but why is it doing this?

    Read the article

  • Stupid Geek Tricks: Compare Your Browser’s Memory Usage with Google Chrome

    - by The Geek
    Ever tried to figure out exactly how much memory Google Chrome or Internet Explorer is using? Since they each show up a bunch of times in Task Manager, it’s not so easy! Here’s the quick and easy way to compare them. Both Chrome and IE use multiple processes to isolate tabs from each other, to make sure that one tab doesn’t kill the whole browser. Firefox, on the other hand, just uses a single process for everything. Rather than pulling out a calculator and adding them all up, you can just open up Google Chrome and type about:memory into the location bar to see a full list of each browser’s memory usage. On my test system with 6 GB of system RAM, I’m running the Development channel version of Chrome, and I’ve got about 40 different tabs open, which is why the memory usage is so high. Firefox has 8 tabs open, and IE is enjoying being opened for the first time in forever. Want to help cut down on memory usage and keep your Chrome browser running fast? Disable all unnecessary extensions, and then make sure you disable any plug-ins that you don’t need either.
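
    The same comparison can be scripted. A rough Python sketch (psutil is an assumption, and the process names are hard-coded for illustration) that totals resident memory per browser instead of adding Task Manager rows by hand:

        from collections import Counter
        import psutil  # third-party: pip install psutil

        BROWSERS = ('chrome.exe', 'iexplore.exe', 'firefox.exe')

        totals = Counter()
        for p in psutil.process_iter(['name', 'memory_info']):
            if p.info['name'] in BROWSERS and p.info['memory_info']:
                totals[p.info['name']] += p.info['memory_info'].rss

        for name, rss in totals.most_common():
            print('%s: %.1f MB across all processes' % (name, rss / 2.0**20))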

    Read the article

  • Does ReleaseStringUTF do more than free memory?

    - by Bayou Bob
    Consider the following C code segments.

    Segment 1:

        char * getSomeString(JNIEnv *env, jstring jstr) {
            char * retString;
            /* GetStringUTFChars returns a const char *; the cast discards that */
            retString = (char *)(*env)->GetStringUTFChars(env, jstr, NULL);
            return retString;
        }

        void useSomeString(JNIEnv *env, jobject jobj, char *mName) {
            jclass cl = (*env)->GetObjectClass(env, jobj);
            jmethodID mId = (*env)->GetMethodID(env, cl, mName, "()Ljava/lang/String;");
            jstring jstr = (jstring)(*env)->CallObjectMethod(env, jobj, mId, NULL);
            char * myString = getSomeString(env, jstr);
            /* ... use myString without modifying it ... */
            free(myString);
        }

    Because myString is freed in useSomeString, I do not think I am creating a memory leak; however, I am not sure. The JNI spec specifically requires the use of ReleaseStringUTFChars. Since I am getting a C-style 'char *' pointer from GetStringUTFChars, I believe the memory reference exists on the C stack and not in the Java heap, so it is not in danger of being garbage collected; however, I am not sure. I know that changing getSomeString as follows would be safer (and probably preferable).

    Segment 2:

        char * getSomeString(JNIEnv *env, jstring jstr) {
            char * retString;
            const char * intermedString;
            intermedString = (*env)->GetStringUTFChars(env, jstr, NULL);
            retString = strdup(intermedString);
            (*env)->ReleaseStringUTFChars(env, jstr, intermedString);
            return retString;
        }

    Because of our 'process' I need to build an argument for why getSomeString in Segment 2 is preferable to Segment 1. Is anyone aware of any documentation or references which detail the behavior of GetStringUTFChars and ReleaseStringUTFChars in relation to where the memory is allocated, and what (if any) additional bookkeeping is done (i.e. a local reference pointer to the Java heap being created, etc.)? What are the specific consequences of ignoring that bookkeeping? Thanks in advance.

    Read the article

  • PHP memory: how much is too much?

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need; I've no need for something like Zend or CakePHP). I've done a lot of work in making sure everything is cached properly, caching pages in files to avoid SQL queries and generally limiting the number of SQL queries. Overall it looks like it's very speedy. The average time taken for the front page (measured over 100 requests) is 0.046152 seconds. But one thing I'm not sure about is whether I've done enough to reduce PHP memory usage. The only time I've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used while the script has been running, the average (taken over 100 times) is 1572864 bytes (exactly 1.5 MB). Is that good? I realise you don't know what it is I'm doing (it's rather simple: get the 10 latest articles, the comment count for each, the user controls, popular tags in the sidebar, etc.). But would you be at all worried about a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times? I realise that this is a very open-ended question. Hopefully you can understand that it's a bit of a stab in the dark and I'm really just looking for some reassurance that it's not going to die horribly come re-launch day.

    Read the article

  • Applying memory limits to screen sessions

    - by CollinJSimpson
    You can set memory usage limits for standard Linux applications in /etc/security/limits.conf. Unfortunately, I previously thought these limits only apply to user applications and not system services. This means that users can bypass their limits by launching applications through a system service such as screen. I'd like to know if it's possible to let users use screen but still enforce application limits. Jeff had the great idea of using nohup, which obeys user limits (wonderful!), but I would still like to know if it's possible to mimic the useful windowing features of screen. EDIT: It seems my screen sessions are now obeying the hard address-space limits defined in /etc/security/limits.conf. I must have been making some mistake. I recently installed cpulimit, but I doubt that's the solution. Thanks for the nohup tip, Jeff! It's very useful. Link to CPU Limit package
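
    For reference, the limit being discussed is the kernel's RLIMIT_AS (the 'as' item in limits.conf, with values in KB), and it is inherited across fork/exec, which is why a screen session started from a limited login shell should obey it. A short Python demonstration (the 512 MB figure is arbitrary):

        import resource

        print('RLIMIT_AS before:', resource.getrlimit(resource.RLIMIT_AS))

        # Cap this process -- and anything it spawns -- at 512 MB of address space.
        cap = 512 * 2**20
        resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

        try:
            block = bytearray(1024 * 2**20)  # 1 GB: should exceed the cap
        except MemoryError:
            print('allocation refused by RLIMIT_AS')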

    Read the article

  • Memory usage in Linux drops frequently

    - by FunkyChicken
    I run a CentOS 5.6 (64-bit) machine with Nginx (latest version) and php-fpm (latest version). Things run very well, but for about two weeks I have noticed in my Munin graphs that roughly every 2 hours the 'cache' usage drops. Before, it used to be a steady, full graph that didn't seem to reset every so often. PHP-FPM settings:

        pm.max_children = 300
        daemonize = yes
        pm = static
        listen = /tmp/fpm.sock
        pm.max_requests = 1000

    I have checked php-fpm.log, and about once per 5 seconds a child process is killed and restarted. But that happens all the time, so it does not explain the sudden drops. I only run Nginx, PHP (via fpm), Munin and vsftpd on this machine. No crons run at exactly the time of the drops. My question: what could be causing these drops in cache usage?

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that its memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters: -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m. We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online and folks say that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly standard on those nodes, but they would not be receiving as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS dependent?
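
    As a sanity check on that output, the -/+ buffers/cache row is just arithmetic on the Mem row: Linux counts buffers and page cache as "used" even though it hands them back to applications on demand. A tiny sketch with the numbers above:

        total, used, free, buffers, cached = 1657, 1380, 277, 158, 773

        app_used = used - buffers - cached   # memory applications really hold
        app_free = free + buffers + cached   # memory applications could still get

        # Prints 449 and 1208; the 447/1209 in the output differ only by
        # kB-to-MB rounding inside free itself.
        print(app_used, app_free)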

    Read the article

  • WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    Hi! I am developing a WPF application, and a client reports extremely high CPU usage (90%), which I am unable to reproduce. I have traced the bottleneck down to these lines. It is a simple glowing animation for a small single-LED control (a blinking LED). What could be the reason for this simple animation taking up SO much CPU?

        <Trigger Property="State">
          <Trigger.Value>
            <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
          </Trigger.Value>
          <Trigger.EnterActions>
            <BeginStoryboard Name="beginStoryBoard">
              <Storyboard>
                <DoubleAnimation Storyboard.TargetName="glow"
                                 Storyboard.TargetProperty="Opacity"
                                 AutoReverse="True"
                                 From="0.0" To="1.0"
                                 Duration="0:0:0.5"
                                 RepeatBehavior="Forever"/>
              </Storyboard>
            </BeginStoryboard>
          </Trigger.EnterActions>
          <Trigger.ExitActions>
            <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
          </Trigger.ExitActions>
        </Trigger>

    Read the article

  • Does committed memory go to physical RAM or reserve space in the paging file?

    - by Sil
    When I call VirtualAlloc with MEM_COMMIT, this "allocates physical storage in memory or in the paging file on disk for the specified reserved memory pages" (quote from the MSDN article http://msdn.microsoft.com/en-us/library/aa366887%28VS.85%29.aspx). All is fine up until now, BUT: the description of the Committed Bytes counter says that "Committed memory is the physical memory which has space reserved on the disk paging file(s)." I also read "Windows via C/C++, 5th edition", and this book says that committing memory means reserving space in the page file... The last two cases don't make sense to me. If you commit memory, doesn't that mean that you commit to physical storage (RAM), with the page file being there for swapping out currently unused pages in case memory gets low? The book says that when you commit memory you actually reserve space in the paging file. If this were true, then that would mean that for a committed page there is space reserved in the paging file and a page frame in physical memory, so twice as much space is needed?! Isn't the page file's purpose to make the total physical memory larger than it actually is? If I have 1 GB of RAM with a 1 GB page file, that makes 2 GB of usable "physical memory" (the book also states this, but right after that it says what I described at point 2). What am I missing? Thanks.
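
    One way to reconcile the two descriptions: committing does not set aside RAM and paging-file space at once. It charges the size against the system-wide commit limit (RAM plus paging file), which guarantees every committed page a home in one of the two; a committed page only occupies a physical frame once it is first touched. A hedged ctypes experiment for Windows (the constants are from winnt.h; watch the Committed Bytes counter and the process working set while it runs):

        import ctypes
        from ctypes import wintypes

        MEM_COMMIT, MEM_RESERVE, PAGE_READWRITE = 0x1000, 0x2000, 0x04
        SIZE = 256 * 2**20  # 256 MB

        VirtualAlloc = ctypes.windll.kernel32.VirtualAlloc
        VirtualAlloc.restype = ctypes.c_void_p
        VirtualAlloc.argtypes = (ctypes.c_void_p, ctypes.c_size_t,
                                 wintypes.DWORD, wintypes.DWORD)

        addr = VirtualAlloc(None, SIZE, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE)
        input('committed, untouched: commit charge is up, working set is not ')

        ctypes.memset(addr, 0, SIZE)  # first touch assigns physical frames
        input('touched: the working set has grown too ')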

    Read the article

  • RedHat 5.5 server does not show per-process memory utilization

    - by Mike S
    I have been searching all over the internet but am not finding any leads. I have a system with a memory leak that I am trying to troubleshoot. Unfortunately I am not able to see per-process memory utilization. Here are the outputs of the top and ps commands:

        Linux SERVER_NAME 2.6.18-194.8.1.el5 #1 SMP Wed Jun 23 10:52:51 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

        top - 09:17:13 up 18:43, 3 users, load average: 0.00, 0.00, 0.00
        Tasks: 375 total, 1 running, 373 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 32922828k total, 32776712k used, 146116k free, 267128k buffers
        Swap: 5245212k total, 0k used, 5245212k free, 32141044k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
            1 root  15   0 10348  744  620 S  0.0  0.0  0:05.65 init
            2 root  RT  -5     0    0    0 S  0.0  0.0  0:00.05 migration/0
            3 root  34  19     0    0    0 S  0.0  0.0  0:00.00 ksoftirqd/0
            4 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 watchdog/0
            5 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 migration/1
            6 root  34  19     0    0    0 S  0.0  0.0  0:00.00 ksoftirqd/1
            7 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 watchdog/1
            8 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 migration/2
            9 root  34  19     0    0    0 S  0.0  0.0  0:00.00 ksoftirqd/2
           10 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 watchdog/2
           11 root  RT  -5     0    0    0 S  0.0  0.0  0:00.01 migration/3
           12 root  34  19     0    0    0 S  0.0  0.0  0:00.01 ksoftirqd/3
           13 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 watchdog/3
           14 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 migration/4
           15 root  34  19     0    0    0 S  0.0  0.0  0:00.01 ksoftirqd/4
           16 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 watchdog/4
           17 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 migration/5
           18 root  34  19     0    0    0 S  0.0  0.0  0:00.00 ksoftirqd/5
           19 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 watchdog/5
           20 root  RT  -5     0    0    0 S  0.0  0.0  0:00.00 migration/6

        % ps -auxf | sort -nr -k 4 | head -10
        Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
        xfs      6205  0.0  0.0  23316 3892 ?  Ss  Aug19  0:00 xfs -droppriv -daemon
        uuidd    6101  0.0  0.0  60976  224 ?  Ss  Aug19  0:00 /usr/sbin/uuidd
        USER      PID %CPU %MEM    VSZ  RSS TTY STAT START  TIME COMMAND
        smmsp    6130  0.0  0.0  57900 1784 ?  Ss  Aug19  0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
        rpc      5126  0.0  0.0   8052  632 ?  Ss  Aug19  0:00 portmap
        root       99  0.0  0.0      0    0 ?  S<  Aug19  0:00 [events/1]
        root       98  0.0  0.0      0    0 ?  S<  Aug19  0:00 [events/0]
        root       97  0.0  0.0      0    0 ?  S<  Aug19  0:00 [watchdog/31]
        root       96  0.0  0.0      0    0 ?  SN  Aug19  0:00 [ksoftirqd/31]
        root       95  0.0  0.0      0    0 ?  S<  Aug19  0:00 [migration/31]

    Any help with this is appreciated.

    Read the article

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem

    My concern is the following: I am storing a relatively large dataset in a classical Python list, and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item from a Python list costs O(N), since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm. I am hoping to find a solution that is cost-effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate?

    Keeping a local index:

        while processingdata:
            index = 0
            while index < len(somelist):
                item = somelist[index]
                dosomestuff(item)
                if somecondition(item):
                    del somelist[index]
                else:
                    index += 1

    This is the original solution I came up with. Not only is it not very elegant, but I am hoping there is a better way to do it that remains time- and memory-efficient.

    Walking the list backwards:

        while processingdata:
            for i in xrange(len(somelist) - 1, -1, -1):
                item = somelist[i]
                dosomestuff(item)
                if somecondition(somelist, i):
                    somelist.pop(i)

    This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wishes to process the items in the same order as they appear in the original list.

    Making a new list:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            newlist = []
            for item in somelist:
                if somecondition(item):
                    newlist.append(item)
            somelist = newlist
            gc.collect()

    This is a very naive strategy for eliminating elements from a list, and it requires lots of memory since an almost full copy of the list must be made.

    Using list comprehensions:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist[:] = [x for x in somelist if somecondition(x)]

    This is very elegant, but under the cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge, and that any solution that iterates through it only once per run will probably always win.

    Using the filter function:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist = filter(lambda x: not subtle_condition(x), somelist)

    This also creates a new list occupying lots of RAM.

    Using itertools' filter function:

        from itertools import ifilterfalse
        while processingdata:
            for item in ifilterfalse(somecondition, somelist):
                dosomestuff(item)

    This version of the filter call does not create a new list, but it will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list.

    Moving items up the list while walking:

        while processingdata:
            index = 0
            for item in somelist:
                dosomestuff(item)
                if not somecondition(item):
                    somelist[index] = item
                    index += 1
            del somelist[index:]

    This is a subtle method that seems cost-effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.

    Abandoning Python lists:

        class Doubly_Linked_List:
            def __init__(self):
                self.first = None
                self.last = None
                self.n = 0

            def __len__(self):
                return self.n

            def __iter__(self):
                return DLLIter(self)

            def iterator(self):
                return self.__iter__()

            def append(self, x):
                x = DLLElement(x)
                x.next = None
                if self.last is None:
                    x.prev = None
                    self.last = x
                    self.first = x
                    self.n = 1
                else:
                    x.prev = self.last
                    x.prev.next = x
                    self.last = x
                    self.n += 1

        class DLLElement:
            def __init__(self, x):
                self.next = None
                self.data = x
                self.prev = None

        class DLLIter:
            # etc...

    This type of object resembles a Python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here, since this would require massive amounts of code refactoring almost everywhere.

    Read the article

  • C++: overloading delete, retrieving the size

    - by user300713
    Hi, I am currently writing a small custom memory allocator in C++, and I want to use it together with operator overloading of new/delete. My memory allocator basically checks whether the requested memory is over a certain threshold and, if so, uses malloc to allocate the requested chunk; otherwise the memory is provided by some fixed-pool allocators. That generally works, but my deallocation function looks like this:

        void MemoryManager::deallocate(void * _ptr, size_t _size) {
            if (_size > heapThreshold)
                deallocHeap(_ptr);
            else
                deallocFixedPool(_ptr, _size);
        }

    So I need to provide the size of the chunk pointed to, in order to deallocate from the right place. The problem is that the delete keyword does not provide any hint about the size of the deleted chunk, so I would need something like this:

        void operator delete(void * _ptr, size_t _size) {
            MemoryManager::deallocate(_ptr, _size);
        }

    But as far as I can see, there is no way to determine the size inside the delete operator. If I want to keep things the way they are right now, would I have to track the size of the memory chunks myself? Any ideas on how to solve this are welcome! Thanks!

    Read the article

  • Justifying a memory upgrade

    - by AngryHacker
    My employer has over a thousand servers (running SQL Server 2005 x64 and a couple of other apps) all across the country, and in my opinion they are all massively underpowered for what they need to do. Specifically, I feel that the servers simply do not have enough RAM for the volume of work the machines are asked to do. All the servers currently have 6 GB of RAM. The users are pretty much always complaining about performance (mostly, IMO, because the servers dip into the paging file quite often). I finally convinced the powers that be to at least try a memory upgrade on one box and see the results. However, they want before-and-after metrics, so that they can see that the expense will be justified. My question is: what metrics should I collect to see whether the performance truly improves on the box? I am a dev, so I am not sure how and what to collect (I have only a passing knowledge of Perfmon).

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB of memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. First, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Second, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers etc. (relevant doco) Thanks!
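
    Not an authoritative answer, but the widely cited rules of thumb for a dedicated Postgres box are easy to compute. The divisors below are conventions rather than measurements, so treat the results purely as starting points to benchmark:

        ram_mb = 12 * 1024
        connections = 5  # 4 worker processes + 1 admin, per the question

        shared_buffers = ram_mb // 4             # ~25% of RAM is the usual advice
        effective_cache_size = ram_mb * 3 // 4   # planner hint: buffers + OS cache
        # work_mem applies per sort/hash node, so leave headroom per query.
        work_mem = (ram_mb - shared_buffers) // (connections * 4)

        print('shared_buffers = %dMB' % shared_buffers)              # 3072MB
        print('effective_cache_size = %dMB' % effective_cache_size)  # 9216MB
        print('work_mem = %dMB' % work_mem)                          # 460MB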

    Read the article

  • USB Adapter for Memory Cards

    - by ktm5124
    I am looking for something like a USB adapter for memory cards: a cable that on one end hooks into a USB port on a computer and on the other accepts any one of a variety of memory cards; essentially an "all-in-one" USB card reader. I'm told that they are sold all over... does anyone know what it is called or where to find one? I should clarify, by the way, that by "memory card" I mean the card from a video camera, and that the goal is to read the video data onto a computer, from a variety of card types, through a USB port. That way, if you go to your friend's house and bring your computer, you can transfer the video data from his video camera to your computer, trusting that the adapter will have a slot for his kind of card.

    Read the article

  • Justifying a memory upgrade, take 2

    - by AngryHacker
    Previously I asked a question about what metrics I should measure (e.g. before and after) to justify a memory upgrade. Perfmon was suggested. I'd like to know which specific Perfmon counters I should be measuring. So far I've got:

        PhysicalDisk\Avg. Disk Queue Length (for each drive)
        PhysicalDisk\Avg. Disk Write Queue Length (for each drive)
        PhysicalDisk\Avg. Disk Read Queue Length (for each drive)
        Processor\% Processor Time
        SQLServer:Buffer Manager\Buffer cache hit ratio

    What other ones should I use?

    Read the article

  • Increase memory to memcached

    - by Petrus
    I need to increase the memory size for memcached. I have done this before, but I cannot remember all the steps I took. If I remember correctly, I edited /etc/sysconfig/memcached and changed CACHESIZE=64 to CACHESIZE=1024. However, I am not sure that is how it is supposed to be done. Could anyone guide me through this? Also, a command that confirms the change would be useful. I am running Red Hat x86_64 ES5.
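
    That is the right file on Red Hat style systems; after changing CACHESIZE, restart the memcached service so the new size takes effect. To confirm it, memcached reports its memory cap as limit_maxbytes in the output of the text-protocol "stats" command, which a few lines of Python can query (the host and port below are the defaults; adjust as needed):

        import socket

        s = socket.create_connection(('127.0.0.1', 11211))
        s.sendall(b'stats\r\n')
        data = b''
        while b'END' not in data:
            data += s.recv(4096)
        s.close()

        for line in data.decode().splitlines():
            if 'limit_maxbytes' in line:
                print(line)  # CACHESIZE=1024 should show 1073741824 bytes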

    Read the article

  • How to troubleshoot memory card read?

    - by shinjin
    The built-in memory card reader in my laptop mounts SD cards as read-only. This happens in both Windows 7 and Ubuntu. Most of the time, that is: every now and then it works. After some non-deterministic combination of uninstalling/reinstalling/disabling/enabling the driver, with the mandatory reboots, the card reader works for a while. Is there any sane way to troubleshoot whether it's an actual hardware problem or just a matter of drivers? I've tested it with several SD cards that work just fine in other devices. System: Acer Aspire 8951G, Windows 7 64-bit

    Read the article
