Search Results

Search found 5572 results on 223 pages for 'cpu'.

Page 28 of 223

  • Eclipse uses 100% CPU randomly

    - by Florian Gutmann
    Hi everyone! My Eclipse sometimes spontaneously starts using 100% of my CPU. I can't figure out why it needs that much CPU; there is no background task like "building workspace" running. After some time the CPU load drops to 0 and everything is normal again. I can't find any information related to the problem in the workspace/.metadata/.log file. Does anybody have a tip on how I can figure out which part of Eclipse is using the CPU so heavily? Is there a way to get a thread dump of Eclipse? kill -3 on the Eclipse process doesn't do anything. Eclipse version: Galileo Java EE. Operating system: Linux 2.6.31. Thanks in advance! Florian
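
    If kill -3 produces nothing visible (Eclipse's stdout is usually not attached to a terminal, so the dump the JVM writes there is easy to miss), the JDK's jps and jstack tools are another way to get a thread dump. A rough sketch, assuming a full JDK is on the PATH; the helper below is illustrative, not an existing utility:

        import subprocess

        def dump_eclipse_threads():
            """Print a thread dump for any running JVM whose description mentions Eclipse/Equinox."""
            jps = subprocess.run(["jps", "-l"], capture_output=True, text=True, check=True)
            for line in jps.stdout.splitlines():
                pid, _, desc = line.partition(" ")
                if "eclipse" in desc.lower() or "equinox" in desc.lower():
                    dump = subprocess.run(["jstack", pid], capture_output=True, text=True)
                    print(dump.stdout)

        dump_eclipse_threads()

    The thread names and stack traces in the dump usually point at the plug-in or job that is burning the CPU.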

    Read the article

  • CPU consumption of my process

    - by Abruzzo Forte e Gentile
    Hi all, I would like to use Performance Monitor to check the CPU consumption of my process. I am working on a multi-core machine. If I look at my process in Task Manager I see that it consumes 20% of the CPU. If I start Performance Monitor and select Process -> % Processor Time, I see values peaking at and over 100%. Do you know why, and how to get the real measure? I also looked at the CPU consumption of each of my 4 cores, but I don't know exactly how to attribute that consumption to my process. If you can suggest a link or URL about how to read CPU usage I would really appreciate it! Thanks a lot! AFG
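
    The Process -> % Processor Time counter treats each core as 100%, so on a 4-core machine a busy process can legitimately read up to 400%; dividing by the number of cores gives a Task Manager-style figure. A minimal sketch of that normalization, assuming the third-party psutil package (whose per-process cpu_percent behaves the same way) and a made-up PID:

        import psutil

        def normalized_cpu_percent(pid, interval=1.0):
            """Per-process CPU as a fraction of the whole machine (0-100%)."""
            proc = psutil.Process(pid)
            raw = proc.cpu_percent(interval=interval)   # 0..100 per core, so it can exceed 100
            return raw / psutil.cpu_count()

        print(normalized_cpu_percent(1234))             # 1234 is a hypothetical PID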

    Read the article

  • Visual Studio "Any CPU" target

    - by galets
    I have some confusion about the .NET platform build options in VS 2008. Does anyone have a clear understanding of what the "Any CPU" compilation target is and what sort of files it generates? I examined the output executable of an "Any CPU" build and found that it is (who wouldn't see that coming!) an x86 executable. So, is there any difference between targeting an executable at x86 vs. "Any CPU"? Another thing I noticed is that managed C++ projects do not have this platform as an option; I'm wondering why that is. Does that mean my suspicion that "Any CPU" executables are plain 32-bit ones is right?

    Read the article

  • Video capture Performance

    - by volting
    I have noticed high CPU utilization in a number of applications (except mplayer) which read from the embedded webcam on my laptop. Bizarrely, CPU utilization varies in proportion to the level of illumination present. I know that the high CPU usage has nothing to do with rendering the video, as I have written a simple app using the OpenCV library that simply grabs frames from the webcam, and CPU usage is still high. I think that mplayer might be using my GPU (and the other apps aren't), but since it's not an issue with rendering, I don't think this explains anything. Cheese: low light ~12% CPU, bright light ~63% CPU. Camorama: low light ~7% CPU, bright light ~30% CPU. OpenCV C++ library (display in a single highgui window): low light ~13% CPU, bright light ~40% CPU (same test on Windows 7: 4-9%). Mplayer: no problem, 1-2% regardless of light levels. Note: if all I wanted to do was capture a feed from my webcam I would use mplayer and forget about it, but I'm developing an application which uses OpenCV to capture a video feed among other things, so performance is important.
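
    One way to narrow this down is a capture-only loop with no display at all, timing the effective frame rate under both lighting conditions; many webcams lengthen the exposure in low light, which lowers the frame rate and with it the per-second conversion work, and that could explain the correlation with illumination. A minimal sketch using the OpenCV Python bindings (device index 0 assumed to be the embedded webcam):

        import time
        import cv2  # OpenCV Python bindings

        cap = cv2.VideoCapture(0)          # open the embedded webcam
        frames, t0 = 0, time.time()
        while frames < 300:                # grab ~300 frames, no rendering at all
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
        elapsed = time.time() - t0
        print(f"{frames} frames in {elapsed:.1f} s -> {frames / elapsed:.1f} fps")
        cap.release()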

    Read the article

  • Measuring daemon CPU utilization over a portion of its wall-clock run time

    - by WhirlWind
    I am dealing with a network-related daemon: it takes data in, processes it, and spits it out. I would like to increase the performance of this daemon by profiling it and reducing its CPU utilization. I can do this easily on Linux with gprof. However, I would also like to use something like "time" to measure its total CPU utilization over a period of time. If possible, I would like to time it over a period that is less than its total run time: that is, I would like to start the daemon, wait a while, start generating CPU statistics, stop generating them, and then stop the daemon at some later time. The "time" command would work well for me, but it seems to require that I start and stop the daemon as a child of time. Is there a way to measure CPU utilization for only a portion of the daemon's wall-clock time?
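
    On Linux the kernel already accumulates each process's CPU time in /proc/<pid>/stat, so sampling that file twice measures CPU use over just the chosen window without launching the daemon under time. A sketch of that idea (the PID and window length are placeholders; field offsets follow proc(5)):

        import os
        import time

        CLK_TCK = os.sysconf("SC_CLK_TCK")                    # clock ticks per second

        def cpu_seconds(pid):
            """Total user+system CPU seconds consumed by pid so far."""
            with open(f"/proc/{pid}/stat") as f:
                # Split off "pid (comm)" first, since comm may contain spaces.
                rest = f.read().rsplit(")", 1)[1].split()
            utime, stime = int(rest[11]), int(rest[12])       # fields 14 and 15 of proc(5)
            return (utime + stime) / CLK_TCK

        def measure(pid, window=30.0):
            before, t0 = cpu_seconds(pid), time.time()
            time.sleep(window)
            used, wall = cpu_seconds(pid) - before, time.time() - t0
            print(f"pid {pid}: {used:.2f} s CPU over {wall:.1f} s wall ({100 * used / wall:.1f}%)")

        measure(4242)   # 4242 is a hypothetical daemon PID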

    Read the article

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if (WSAStartup(0x0202, &wsaData)) {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }

            sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock == INVALID_SOCKET) {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*)&target, sizeof(struct sockaddr_in)) == SOCKET_ERROR) {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into non-blocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since the CPU is used up before any packet arrives and before the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.

    Read the article

  • What does %st mean in top?

    - by Ben
    Here is an example from my top: Cpu(s): 6.0%us, 3.0%sy, 0.0%ni, 78.7%id, 0.0%wa, 0.0%hi, 0.3%si, 12.0%st. I am trying to figure out the significance of the %st field. I read that it means "steal" CPU and represents time spent by the hypervisor, but I want to know what that actually means for me. Does it mean I may be on a busy physical server where someone else is using too much CPU and taking it from my VM? If I am using EBS, could it be related to handling EBS I/O at the hypervisor level? Is it related to things running on my VM, or is it completely unaffected by me?
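
    For reference, top derives %st from the "steal" field of /proc/stat, which counts the time this virtual CPU was ready to run but the hypervisor was running something else on the physical core. A small sketch that computes roughly the same percentage over an interval:

        import time

        FIELDS = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]

        def cpu_times():
            with open("/proc/stat") as f:
                values = f.readline().split()[1:1 + len(FIELDS)]   # aggregate "cpu" line
            return dict(zip(FIELDS, map(int, values)))

        a = cpu_times()
        time.sleep(5)
        b = cpu_times()
        delta = {k: b[k] - a[k] for k in FIELDS}
        print(f"steal: {100 * delta['steal'] / sum(delta.values()):.1f}%")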

    Read the article

  • Where does power consumption go in a computer?

    - by Johannes Rössel
    Today we had a weird discussion over lunch: what exactly causes power consumption in a computer, particularly in the CPU? Figures you usually see indicate that only a percentage (albeit a large one) of the power consumption ends up as heat. However, what exactly happens to the rest? A CPU isn't (any more) a device that mechanically moves parts, emits light or uses other ways of transforming energy. Conservation of energy dictates that all energy going in has to go out somewhere, and for something like a CPU I seriously can't imagine that output being anything but heat. Us being computer science rather than electrical engineering students certainly didn't help in accurately answering the question.

    Read the article

  • Understanding top output in Linux

    - by Rayne
    Hi, I'm trying to determine the CPU usage of a program by looking at the output from top in Linux. I understand that %us means user space and %sy means system/kernel, etc. But say I see 100%us: does this mean that the CPU is really only doing useful work? If a CPU is tied up waiting for resources that are not available, or stalled on cache misses, would that also show up in the %us column, or in any other column? Thank you.

    Read the article

  • PC won't boot with more than one stick of RAM

    - by Aidan
    Hi guys, I've got the following computer and I've just put in a new CPU, a QX9650, and I've run into this problem since making the hardware change. Whenever I put more than one of my 4 sticks of RAM into the machine it won't load an OS. It gets through the BIOS but BSODs on the Windows load. It also won't let me install an OS from disk or boot into Linux. I've run memtest with all 4 sticks in and I get 10k+ errors on test 5. Each stick of RAM on its own is fine and functions properly; I only have problems when all 4 sticks are in the machine at the same time. System specs: CPU: QX9650; Mobo: Asus P5B, 2104 BIOS; RAM: 2x PC2-5400 DDR2, 2x PC2-6400, both OCZ. Is the problem on my end or is the CPU faulty?

    Read the article

  • Intel T5600 to upgrade T5250

    - by galets
    I want to upgrade my laptop, which has a T5250 CPU, to a T5600 CPU to get virtualization support. I ordered a T5600 on eBay, but it didn't fit. The T5250 spec says it supports the PPGA478 socket, so I assume that is what I have. The T5600 spec says it supports "PBGA479, PPGA478". Since the T5600 didn't fit as a replacement, I assume this means there are two models of the T5600, one for PBGA479 and one for PPGA478, and not, as I thought, one CPU that supports both. Is that correct? Does anybody know if such an upgrade is even possible, or am I wasting my time?

    Read the article

  • performance monitoring

    - by Sunny
    I want to monitor CPU usage and disk read/write usage for a particular process, say ./myprocess. To monitor CPU, the top command seems to be a nice option, and for reads and writes iotop seems to be a handy one. For example, to monitor read/write every second I use the command iotop -tbod1 | grep "myprocess". My difficulty is that I only want to store three variables, namely read/sec, write/sec and CPU usage/sec. Could you help me with a script that combines the output of top and iotop for these three variables and stores them in a log file? Thanks!
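
    One way to get just those three numbers without scraping top and iotop output is the psutil package, which exposes per-process CPU and I/O counters directly. A sketch under that assumption (the process name, log file and interval below are placeholders):

        import time
        import psutil   # assumed available: pip install psutil

        def monitor(name="myprocess", logfile="usage.log", interval=1.0):
            proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == name)
            proc.cpu_percent(None)                      # prime the per-process CPU counter
            prev = proc.io_counters()
            with open(logfile, "a") as log:
                while proc.is_running():
                    time.sleep(interval)
                    cur = proc.io_counters()
                    cpu = proc.cpu_percent(None)        # % CPU since the previous call
                    read_s = (cur.read_bytes - prev.read_bytes) / interval
                    write_s = (cur.write_bytes - prev.write_bytes) / interval
                    log.write(f"{time.time():.0f} cpu={cpu:.1f} read={read_s:.0f} write={write_s:.0f}\n")
                    prev = cur

        monitor()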

    Read the article

  • Assembling a number-crunching computer [closed]

    - by tugrul büyükisik
    What is needed for a CPU to keep a GPU fully fed with data? Is comparing their flops/s enough? For example, if I paired a very old (Pentium 3) CPU with an Nvidia Fermi GPU, the CPU would not be able to feed the GPU with enough data per second. What are the criteria for matching a CPU to a GPU when OpenCL or similar work is needed? Of course RAM and bus will be chosen in a similar way, but how exactly? Assume each GPU core will calculate a sqrt, a division and an addition 100 times for every iteration. Thanks.
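
    A rough way to sanity-check the balance is to compare how many work-items per second the GPU could retire against how many bytes per second the host can push over the bus; every constant in the sketch below is an illustrative assumption, not a measured value:

        # Back-of-envelope balance check; every number here is an assumption.
        gpu_cores      = 512          # Fermi-class core count
        clock_hz       = 1.0e9        # ~1 GHz shader clock
        ops_per_item   = 3 * 100      # sqrt + div + add, repeated 100 times (per the question)
        items_per_sec  = gpu_cores * clock_hz / ops_per_item   # crude 1-op-per-cycle upper bound

        bytes_per_item = 4            # one float transferred per work-item
        needed_bw      = items_per_sec * bytes_per_item          # host -> device bytes/s required
        pcie2_x16_bw   = 8e9          # theoretical PCIe 2.0 x16 bandwidth

        print(f"GPU wants ~{needed_bw / 1e9:.1f} GB/s, PCIe 2.0 x16 offers ~{pcie2_x16_bw / 1e9:.0f} GB/s")

    If the required transfer rate approaches what the bus or the host's RAM can deliver, the bottleneck is the transfer path rather than the CPU's arithmetic throughput.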

    Read the article

  • IIS 7 Application Pools using a different amount of memory on multiple servers behind a load balancer

    - by Jim March
    We have 6 servers in a web farm behind an F5. There are approximately 25 app pools on each of these servers. On servers 1-5 the app pools consume approx. 500 MB private bytes and 5 GB virtual bytes. On server 6 the app pools consume approx. 800 MB private bytes and 8 GB virtual bytes. I cannot figure out why we have this difference. The code is exactly the same on each box. We replicate the apphost.config between the boxes, so the application configs are identical. The only difference seems to be that this box consumes more RAM and in turn ends up using a lot more CPU. During Black Friday we observed the CPU on server 6 spiking to 100% and noticed that the % memory commit was also near 100%, while the rest of the farm was closer to 50% utilization. Pulling the 6th server from the load balancer dropped CPU/memory on it back to normal and did not cause noticeable strain on the other servers.

    Read the article

  • Xen virtual machine uses too high a percentage of CPU

    - by ki0
    Hi everyone, this is my question: I have a Xen server with 8 CPUs and 6 virtual machines running, and each virtual hard disk runs on a different physical hard disk. Everything worked fine, but sometimes one virtual machine takes almost the whole CPU: while Domain-0 at 90% is normal, that virtual machine sits at 500% CPU usage. I have checked that it does not depend on who is working with the VM; even when nobody is working on the server this still happens. I don't know what is happening. Does anyone have any idea, or has the same thing happened to anyone else?

    Read the article

  • AWStats on Plesk consumes all of the CPU and crashes the server - how do you disable AWStats?

    - by columbo
    I have Plesk 9.0.1 running on a Red Hat server. Every week or so, at about 4:10 AM, the server locks up. At this time the server's CPU usage shoots from 4% to 90%, at the same time as a mass of awstats.pl processes start (I can't see how many, as my data only shows the top 30 processes, but all of these are awstats.pl). I turned off AWStats through the Plesk control panel for all but 5 domains, but I still get 90% CPU usage and at least 30 instances of awstats.pl at 4:10 AM as usual. Does anyone know why this may be? Does anyone know how to disable AWStats (I have stats covered using Piwik)? Or how do I uninstall AWStats without snarling up Plesk?

    Read the article

  • IIS, multiple CPU cores, application pools and worker processes - best configuration for a single site

    - by Egghead Design
    Hi, we use Kentico CMS and I've exchanged emails with them about a web garden deployment. We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool's web garden setting from the default, i.e. it is set to a maximum of 1 worker process. Our experience is that the site only uses one of the CPU cores; the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use the other cores as necessary, even though the application pool only has a single worker process. Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me. Surely, if we want to use all the cores, we need to permit eight worker processes (and implement session state storage in SQL Server)? Many thanks, Tony

    Read the article

  • php-cgi CPU usage is super high

    - by Ryan Thompson
    I am getting constantly high and wildly fluctuating CPU usage for php-cgi commands, as seen via top on my CentOS server. I have a Server Density account and this seems to be a common trend:

        User  PID   CPU%  MEM%  VSZ     RSS    TT  Stat  Started  Time  Command
        500   6389  22.4  3     271136  32380  ?   S     20:26    0:40  /usr/bin/php-cgi

    There are about 6 or so of those records in my process list at any given check-in. Any ideas what's causing this? I have FastCGI installed and the module is loading, but I'm not sure why it isn't handling this. Any help would be greatly appreciated! Ryan

    Read the article

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our web app. Restarting Postgres fixes the problem. Here's what I've been able to observe:
    - First, usage of one CPU jumps to 100% for a few minutes.
    - Disk operations drop to ~0 during this time.
    - Database operations drop to 0 (blocks and tuples per sec).
    - The logs show, during this time: "WARNING: worker took too long to start; cancelled" (twice).
    - No queries appear in the logs (only those over 200 ms are logged), and no unusually long-running queries are logged before or during.
    - Then the second CPU jumps to 100%.
    - The number of postgres processes jumps from the usual 8-10 to ~20, matched by a spike in Postgres blocks per second (about twice normal).
    - The logs show: "LOG: could not accept SSL connection: EOF detected".
    - Queries are running, but slowly.
    - Restarting Postgres returns everything to normal.
    Setup: Amazon EC2 Large server, Ubuntu 10.04.2 LTS, Postgres 9.0.3, dedicated DB server. Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking?

    Read the article

  • Windows / Apache / PHP CPU at 100% under small load

    - by Bart
    I have a Windows box loaded with Apache 2.2 and PHP 5.2. It runs great with only a few users on it at a time, but under load testing (50 users for test #1) the CPU climbs to 100%. Nearly all of this CPU usage comes from httpd.exe. I currently have PHP set up via php5_module, but one of the first things I plan to try next is FastCGI instead. Is FastCGI better at handling multiple connections? Any other ideas on what might be causing Apache to run so high?

    Read the article

  • CPU I/O communication

    - by b-gen-jack-o-neill
    Hi, I know this question has already been discussed, but I still don't understand something, so please help me clarify it. As I understand it, there are two ways to do I/O, i.e. for the CPU to communicate with other hardware: one is to use the in and out instructions, and the second is memory-mapped I/O. What I don't understand is this: when the in and out instructions are used, you specify a source port. But what is this port? I mean, is it a different set of pins on the CPU, or what? And what is that port connected to? And for memory-mapped I/O, I'm missing just a tiny detail: does memory-mapped I/O first have to be set up with in and out instructions, or does the device actually somehow connect itself to the RAM and read it? Thanks.
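
    For the memory-mapped half of the question, a device's registers simply occupy a range of physical addresses, so on Linux (with root, and only if the kernel still allows access to /dev/mem) they can be read like ordinary memory; port I/O, by contrast, uses the separate in/out instructions and a distinct port address space that the CPU signals to the chipset as a different kind of bus cycle, not a separate set of per-port pins. A purely illustrative sketch of the memory-mapped side, with a made-up physical address:

        import mmap
        import os
        import struct

        MMIO_BASE = 0xFEBF0000          # made-up physical base address of some device's registers
        PAGE      = mmap.PAGESIZE

        # Map one page of physical address space and read a 32-bit "register" from it.
        fd = os.open("/dev/mem", os.O_RDONLY | os.O_SYNC)
        mem = mmap.mmap(fd, PAGE, mmap.MAP_SHARED, mmap.PROT_READ, offset=MMIO_BASE)
        value = struct.unpack_from("<I", mem, 0)[0]
        print(f"register at {MMIO_BASE:#x}: {value:#010x}")
        mem.close()
        os.close(fd)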

    Read the article

  • Finding throughput of CPU and hard drive on Solaris

    - by Jim
    How do I find the throughput of the CPU and the hard disk on an OpenSolaris machine, using mpstat or iostat? I'm having a hard time identifying the throughput, if it is given at all in the commands' output. For example, in mpstat there is very little explanation as to what the columns mean: http://docs.sun.com/app/docs/doc/816-5166/mpstat-1m?l=en&a=view&q=syscl+mpstat I've been using the syscl column divided by the time interval to find the throughput, but to be honest I have no idea what a system call truly is. I'm trying to analyze a hard drive and CPU while writing a file to the disk and while at rest. Thanks in advance. Jim
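
    If the goal is simply MB/s to disk and overall CPU busy time, rather than interpreting syscl, the cross-platform psutil package (which lists Solaris among its supported platforms, though not every counter may be available there) can report the deltas directly. A hedged sketch under that assumption:

        import time
        import psutil   # assumed available: pip install psutil

        def sample(interval=5.0):
            cpu0, io0 = psutil.cpu_times(), psutil.disk_io_counters()
            time.sleep(interval)
            cpu1, io1 = psutil.cpu_times(), psutil.disk_io_counters()

            total = sum(cpu1) - sum(cpu0)
            busy = total - (cpu1.idle - cpu0.idle)
            read_mb = (io1.read_bytes - io0.read_bytes) / interval / 2**20
            write_mb = (io1.write_bytes - io0.write_bytes) / interval / 2**20
            print(f"cpu busy {100 * busy / total:.1f}%  read {read_mb:.2f} MB/s  write {write_mb:.2f} MB/s")

        sample()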

    Read the article
