Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.


  • Quick CPU ring mode protection question

    - by b-gen-jack-o-neill
    Hi, me again :) I am very curious about messing with hardware, but so far my lowest-level "messing" has been linked or inline assembler in a C program. If my understanding of CPU rings is right, I cannot directly access low-level CPU features from a user-mode app, such as disabling interrupts or changing protected-mode segments, so I must use system calls for everything I want to do. But, if I am right, drivers can run in ring 0. I actually don't know much about drivers, so this is what I'm asking: is learning how to write my own drivers, and then calling them from my application, the way to go to do what I described? I know I could write a whole new OS (at least up to a point), but what I really want is to access low-level hardware features from a standard Windows application. So, is a driver the way to go?
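    For reference, the usual shape of the driver route is a kernel-mode driver (written in C against the Windows Driver Kit) that exposes a device object, plus an ordinary user-mode program that opens it and sends IOCTLs. The hedged C# sketch below shows only the user-mode half; CreateFile and DeviceIoControl are standard Win32 APIs, while the device name \\.\MyHwDriver and the IOCTL code are placeholders for whatever a hypothetical driver would define.

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        static class DriverClient
        {
            // Standard Win32 entry points for talking to a device object exposed by a driver.
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern SafeFileHandle CreateFile(string fileName, uint access, uint share,
                IntPtr security, uint creation, uint flags, IntPtr template);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool DeviceIoControl(SafeFileHandle device, uint ioControlCode,
                byte[] inBuffer, uint inSize, byte[] outBuffer, uint outSize,
                out uint bytesReturned, IntPtr overlapped);

            const uint GENERIC_READ = 0x80000000;
            const uint GENERIC_WRITE = 0x40000000;
            const uint OPEN_EXISTING = 3;

            // Placeholders: a real driver defines its own device name and IOCTL codes.
            const string DeviceName = @"\\.\MyHwDriver";
            const uint IOCTL_MYHW_DO_PRIVILEGED_THING = 0x222000;

            static void Main()
            {
                using (SafeFileHandle device = CreateFile(DeviceName, GENERIC_READ | GENERIC_WRITE,
                    0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
                {
                    if (device.IsInvalid)
                        throw new Win32Exception();   // driver not installed or not started

                    var output = new byte[8];
                    if (!DeviceIoControl(device, IOCTL_MYHW_DO_PRIVILEGED_THING,
                        null, 0, output, (uint)output.Length, out uint returned, IntPtr.Zero))
                        throw new Win32Exception();

                    Console.WriteLine("Driver returned {0} bytes", returned);
                }
            }
        }

    The ring-0 half (DriverEntry and an IRP_MJ_DEVICE_CONTROL dispatch routine) still has to be written against the WDK; the point is simply that the privileged code lives in the driver while the Windows application stays an ordinary user-mode program.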

    Read the article

  • How can I restart yum?

    - by Rob
    On my server, yum is using a LOT of system resources. My friend suggested I kill -9 it, but before I do, I want to know that I'll be able to restart it, preferably without rebooting the whole server. I'd also like some alternatives; I'd rather find out WHY yum is taking up all these resources. I'm fairly new to running servers, so I'd appreciate any help.

    Read the article

  • What affects video encoding speeds?

    - by Pig Head
    Fraps doesn't compress its videos when you record, so the files are enormous; a long recording can reach a few hundred gigabytes. Obviously, you usually need to convert/compress them afterwards. What affects the speed of this? I don't think RAM does, because when I converted 600 GB my RAM usage only went up to about 6 GB, but the processor was at 100%, which is surprising as I have a 6-core processor at 3.46 GHz. Would clock speed or core count help the most?

    Read the article

  • When modern computers boot, what initial setup of RAM do they execute, and how exactly does it work?

    - by user272840
    I know the title reeks of confusion, and some of you might assume I am just wondering about how the computer boots in general, but I'm not. I'll sort this out for you now:

    1. Onboard firmware is how almost all modern computer devices work, with or without EFI/UEFI (even without "onboard firmware", older computers still employed bank switching, or similar methods with snap-in firmware, cartridges, etc.).
    2. On startup there are no "programs" running in the traditional sense yet, i.e. no kernel, OS, or user applications; every instruction, especially the very first one, is specified by the instruction pointer, I am guessing. How is the IP/PC/etc. set to first point to an address holding a BIOS/firmware instruction, and how do the BIOS instructions map themselves into memory prior to startup?
    3. Aside from MMIO, the BIOS uses certain RAM addresses to hold instructions. The big question mark comes in when I ask this: how does the BIOS do that?

    Conclusion: I am assuming that with the very first instruction there is an initial hardware setup for the BIOS prior to complete OS bootup. What I want to know is whether it is hardware-engineered to always work this way, whether there is another step in this boot method I am missing or a gap of information I am unaware of, and how this all works from the very first instruction onwards, including the RAM data itself.

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT: I have been able to determine that the problem is not Compiz but is actually Xorg. I don't know why, but by quickly maximizing a terminal with top running and taking a screenshot before the problem went away, I can see Xorg taking up 72% of the CPU, with bamfdaemon taking 18% and compiz taking 14%. It seems the NVIDIA drivers are to blame; I will play more with the settings and perhaps do a clean nvidia-current install to try to fix the problem.

    I am having a very annoying problem with high CPU usage. I am running 12.04 with the latest drivers and nvidia-current installed. I had not had any issues for days; now I have a strange problem. Unity 3D runs great most of the time, 1-2% CPU usage with only Transmission running in the background, and windows open and close smoothly. However, no matter what programs are open, if I minimize all open programs to the Unity launcher on the left, my CPU jumps to about 80% and all maximize and minimize effects slow down. Mouse movement stays smooth the whole time, but Unity becomes unresponsive for up to 30 seconds at times. Hitting Alt+Tab to bring up even a single window fixes the problem; the window I bring back up doesn't even have to be maximized to fix it. Hitting the Super key to bring up the Dash makes the CPU drop back to idle until I close it, then the high CPU usage resumes. I believed the problem was Compiz, but even with only a terminal running top, I have to minimize it for the problem to show, so I can't see the offending process; I can only tell about the high CPU usage using indicator-sysmonitor. I even tried quitting the indicator, but I can still tell performance is very poor in all applications when everything is minimized. I have reset Compiz back to defaults, tried the post-release-update NVIDIA drivers, and played with vsync settings in both the NVIDIA settings and Compiz. I even forced the refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D. Specs are a Core 2 Duo 2.0 GHz, 4 GB DDR2 RAM, 2x 320 GB HDDs in RAID 0, and an NVIDIA GTX 260M graphics card.

    Read the article

  • WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    Hi! I am developing a WPF application and a client reports extremely high CPU usage (90%), which I am unable to reproduce. I have traced the bottleneck down to these lines. It is a simple glow animation for a small single-LED control (a blinking LED). What could be the reason for this simple animation taking up SO much CPU?

        <Trigger Property="State">
          <Trigger.Value>
            <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
          </Trigger.Value>
          <Trigger.EnterActions>
            <BeginStoryboard Name="beginStoryBoard">
              <Storyboard>
                <DoubleAnimation Storyboard.TargetName="glow"
                                 Storyboard.TargetProperty="Opacity"
                                 AutoReverse="True" From="0.0" To="1.0"
                                 Duration="0:0:0.5" RepeatBehavior="Forever"/>
              </Storyboard>
            </BeginStoryboard>
          </Trigger.EnterActions>
          <Trigger.ExitActions>
            <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
          </Trigger.ExitActions>
        </Trigger>
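    A common mitigation to try for continuously repeating WPF animations is capping the frame rate: by default WPF renders animations at up to 60 fps, and Timeline.DesiredFrameRate (a standard WPF attached property) lets you lower that. The hedged C# sketch below rebuilds the same blink animation in code with a 20 fps cap; the glow element is assumed to be the one named in the trigger above.

        using System;
        using System.Windows;
        using System.Windows.Media.Animation;

        static class BlinkHelper
        {
            // Builds the same 0 -> 1 -> 0 opacity blink, but capped at ~20 fps.
            public static Storyboard BuildBlinkStoryboard(UIElement glowElement)
            {
                var fade = new DoubleAnimation
                {
                    From = 0.0,
                    To = 1.0,
                    Duration = new Duration(TimeSpan.FromSeconds(0.5)),
                    AutoReverse = true,
                    RepeatBehavior = RepeatBehavior.Forever
                };
                Storyboard.SetTarget(fade, glowElement);
                Storyboard.SetTargetProperty(fade, new PropertyPath(UIElement.OpacityProperty));

                var storyboard = new Storyboard();
                storyboard.Children.Add(fade);

                // Fewer frames per second means less per-frame composition work,
                // which can make a large difference for a trivial but
                // forever-repeating animation.
                Timeline.SetDesiredFrameRate(storyboard, 20);
                return storyboard;
            }
        }

    The equivalent in markup is a Timeline.DesiredFrameRate setting on the Storyboard itself. It is also worth checking whether the client's machine has fallen back to software rendering, since a rendering tier of 0 (RenderCapability.Tier >> 16) means WPF is rasterizing on the CPU.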

    Read the article

  • High CPU in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago we started getting reports from one of our customers that the server hosting IIS is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and the network volume, and they showed no significant load beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .NET 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem; it only replaced those calls with calls to CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and WinDbg and try to figure out what's wrong. Has anyone encountered something like this, or has an idea which direction I should check? P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same.

    Edit: Problem solved. It turns out I had an SQL query that didn't limit the result set, and once the customer went over a certain number of rows it started bogging down the server. It took me two days to find because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.
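    For reference, the fix described in the edit amounts to bounding the result set on the server side instead of pulling every row. A hedged C# sketch of that kind of change is below, with table and column names invented for illustration.

        using System.Data;
        using System.Data.SqlClient;

        static class SalesRepository
        {
            // Returns at most maxRows of the newest sales instead of the whole table.
            public static DataTable LoadRecentSales(string connectionString, int maxRows)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT TOP (@maxRows) SaleId, SaleTime, Total " +
                    "FROM Sales ORDER BY SaleTime DESC", conn))
                {
                    cmd.Parameters.AddWithValue("@maxRows", maxRows);

                    var table = new DataTable();
                    using (var adapter = new SqlDataAdapter(cmd))
                    {
                        adapter.Fill(table);   // Fill opens and closes the connection itself
                    }
                    return table;
                }
            }
        }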

    Read the article

  • Can I reduce the CPU speed of my MacBook when on battery?

    - by Greg Hewgill
    I've got a MacBook with a Core 2 Duo CPU, and I've got CoreDuoTemp installed, which can show the current speed of the CPU. It appears to always show:

        Mini    : 1.0 GHz
        Maxi    : 2.0 GHz
        Current : 2.0 GHz

    I believe my laptop would run longer on battery if it were to run at a maximum of 1 GHz. Is there a way to configure this, or is the CPU speed adjustment completely automatic?

    Read the article

  • How can I see whether Turbo Boost is working on an i7 860 CPU?

    - by Jan Derk
    I just built myself a new system with an Intel i7 860 CPU. When I load it with a single-threaded application like Super PI, CPU-Z shows a speed of 2.933 GHz. Now, I understood that the i7 goes into Turbo Boost mode at up to 3.46 GHz for a single core. How can I check that? Is there a utility to monitor CPU speed per core?

    Read the article

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair II Formula motherboard and a Phenom X6 3.2 GHz CPU. My problem is that the computer often hangs all of a sudden, completely ceasing to respond. When that occurs, the reset key is inoperative, and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable. The problem can occur at any time: when I'm booting Windows, when I'm logging in, when I'm listening to a song, when I'm browsing the Internet, and so on. It always occurs after a very few minutes of 3D gameplay.

    I thought it was a video card fault; I had three 8800 GTX cards, so I could try all combinations of them, but that didn't fix it. I thought it was a RAM problem; I tried running with only a subset of my DDR2 banks, but that didn't fix it either. Almost every time I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live!, Cool'n'Quiet or other options in the CPU configuration menu, I can be sure the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during POST the computer always takes an unusually long time detecting USB devices (the LCD poster shows USB INIT); I've also tried disconnecting all USB devices, but POST didn't take any less time. The BIOS revision is 2702, the latest. Today I found a different behaviour for once: during the boot screen I got a BSOD with the error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking, but I have never overclocked my CPU.

    Judging from the description of my problem, hoping someone has had the same one and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a problem with a faulty CPU or a faulty motherboard, and whether I can perform additional tests (software tests, given my lack of spare components) to identify the component to replace.

    Read the article

  • How to limit my CPU power programmatically on Windows 7?

    - by Ivan
    Whenever I run a CPU-heavy task (like compressing a big set of files into an archive, for example) my CPU switches to its full throttle (maximum frequency) and shuts down from overheating in less than a minute. Instead, I would like it to stay slightly slowed down, doing the task a bit more slowly but actually reaching the finish. At the same time I don't want to dim my screen brightness or adjust anything else that the standard Windows power-saving scheme does. So how do I actually set a cap to limit my CPU power? The CPU is a Core 2 Duo T7250, the OS is Windows 7 32-bit, and there seem to be no BIOS settings or jumpers available to configure the frequencies.
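    The frequency cap itself lives in the Windows 7 power plan (Processor power management, "Maximum processor state"), which can be set below 100% to keep the CPU out of its top speed. A different, more programmatic angle is the hedged C# sketch below: start the heavy job pinned to a single core at low priority, which leaves the other core idle and reduces heat. The archiver path and arguments are placeholders.

        using System;
        using System.Diagnostics;

        class ThrottledLauncher
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    // Placeholder: any CPU-heavy tool could be launched this way.
                    FileName = @"C:\Program Files\7-Zip\7z.exe",
                    Arguments = @"a D:\backup.7z C:\data\*",
                    UseShellExecute = false
                };

                using (var proc = Process.Start(psi))
                {
                    proc.PriorityClass = ProcessPriorityClass.BelowNormal; // yield CPU to everything else
                    proc.ProcessorAffinity = (IntPtr)0x1;                  // run on core 0 only
                    proc.WaitForExit();
                }
            }
        }

    This does not slow the clock directly, so on a machine that overheats with a single busy core the power-plan setting (or fixing the cooling) is still the more direct answer.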

    Read the article

  • Why does my iTunes use so much CPU time?

    - by bikesandcode
    I have a roughly two-year-old MacBook (OS X 10.5) and iTunes 10. When iTunes is playing MP3s, I see the CPU usage of the iTunes process in the system monitor ranging from 65% to 75%. When I pause the music, I still see CPU usage of about 65% to 75%. I don't have any visualisations going, to my knowledge I haven't turned on any CPU-destroying features, and my music library isn't tiny but it's hardly huge (3 GB). This is mildly annoying when I'm plugged into the wall, since it only makes my compile times slightly longer, but when I'm out and about it's a major drain on the battery. Using VLC I see CPU loads of roughly 10% at most when listening to music, and generally lower. What the heck is iTunes doing?

    Read the article

  • Can an ESX server under heavy load cause CPU spikes on guest VMs?

    - by ReferentiallySeethru
    We have a number of VMs running on an ESX 4.1 server for product testing, and the ESX server is at times under heavy load. We've been seeing high CPU levels during some use cases, but we can't always duplicate this. If the ESX server as a whole is under heavy load, could that cause guest machines to show high CPU usage? To ask it a different way: if the guest machines require more CPU resources than the server has, how does this affect the CPU usage reported by the guest OS and its processes?

    Read the article

  • 100% CPU in QuickTime H.264 decoder on Windows 7, except when using XP compatibility mode

    - by user858518
    I have a Windows program that uses the Apple QuickTime API to play video. On Windows 7, CPU usage sits at 100% of one core, which I believe is why playback is choppy. If I turn on XP compatibility mode for this program, CPU usage is around 20% of one core and playback is normal. Using a profiling tool called Very Sleepy (http://www.codersnotes.com/sleepy), I was able to narrow the high CPU usage down to a function in the QuickTime H.264 decoder called JVTCompComponentDispatch. I can't imagine why there would be a difference in CPU usage between XP compatibility mode being off and on. Any ideas?

    Read the article

  • How can I automatically kill/restart a specific Windows application if it starts taking 100% CPU?

    - by Esko Luontola
    I have one program running in the background (so I can use a remote control with my PC), but every now and then the program crashes and starts using 100% of one CPU core (I have a quad-core, so it shows as 25% total CPU usage). When that happens, the program needs to be killed and restarted. Is there a program for Windows that can automatically detect when a specific application hogs a whole CPU core like this, and then automatically kill and restart that application?
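    If no existing tool fits, a small watchdog is straightforward to write; a hedged C# sketch is below. The process name, executable path, polling interval, and threshold are all placeholders, and the "% Processor Time" counter used here is per process and scales to 100 per core, so a single pegged core reads as roughly 100.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class CpuWatchdog
        {
            const string ProcessName = "remotecontrold";              // placeholder process name
            const string ExePath = @"C:\Tools\remotecontrold.exe";    // placeholder path

            static void Main()
            {
                while (true)
                {
                    var proc = Process.GetProcessesByName(ProcessName).FirstOrDefault();
                    if (proc == null)
                    {
                        Process.Start(ExePath);           // not running at all: just start it
                        Thread.Sleep(5000);
                        continue;
                    }

                    using (proc)
                    using (var cpu = new PerformanceCounter("Process", "% Processor Time", proc.ProcessName))
                    {
                        cpu.NextValue();                  // first sample always reads 0
                        Thread.Sleep(10000);
                        float percent = cpu.NextValue();  // ~100 means one core fully busy

                        if (percent >= 95)
                        {
                            proc.Kill();
                            proc.WaitForExit();
                            Process.Start(ExePath);
                        }
                    }
                }
            }
        }

    In practice you would probably require the threshold to hold across a couple of consecutive samples before killing, so a brief legitimate spike does not trigger a restart.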

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate).

    So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
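    As a rough illustration of the "measure, then extrapolate linearly" approach described above (every number here is invented for the example):

        using System;

        class SizingEstimate
        {
            static void Main()
            {
                double measuredRequestsPerSec = 5000;   // load applied during the measurement run
                double measuredCpuServers     = 0.20;   // CPU consumed at that load, in "whole servers"
                double targetRequestsPerSec   = 40000;  // expected production peak

                // Linear model: CPU consumption scales with the request rate.
                double projectedCpuServers = measuredCpuServers * (targetRequestsPerSec / measuredRequestsPerSec);

                // Keep ~40% headroom per server for GC, rebalancing and failover recovery.
                int serversNeeded = (int)Math.Ceiling(projectedCpuServers / 0.60);

                Console.WriteLine($"Projected CPU demand: {projectedCpuServers:F1} servers' worth");
                Console.WriteLine($"Servers needed with headroom: {serversNeeded}");
            }
        }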

    Read the article

  • debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might be better asked here. We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time, however pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine; only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs.

    I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users, but I have not gotten my test environment to reproduce the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred, to minimize downtime. Next time it happens they will get a developer to take a look. The question is: next time it happens, what can be done to determine the cause?

    We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way, when the problem next occurs, we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much, though. Should I enable remote JMX? That way, the next time the problem occurs, I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling remote JMX in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!

    Read the article
