Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • BIOS interrupts, privilege levels and paging

    - by Jack
    Hi, I was learning about Intel 8086-80486 CPUs and their interaction with hardware, but I still don't understand it very well. Please help me fill in the blank spots. First, I know that the CPU communicates with hardware using BIOS interrupts. But what really happens in the PC when I call some INT instruction? I know that, according to the interrupt vector table, some instructions begin to execute, but how can the BIOS recognize what I want to do just from executing those instructions? Because as far as I know, the CPU has no extra communication channel with the BIOS; it can only address memory and receive data. So how can I instruct the BIOS to do something when all I can do is address RAM? The next thing I don't understand is privilege levels. I know about the ring model and access rights, but how does the CPU know which privilege level the executing instruction has? I think these privileges only apply when an instruction tries to address memory, but how does an application get its privilege level? I mean, I know it runs at level 3, but how is that set? And lastly, I know that paging is an addressing scheme used to support application-transparent virtual memory, or swapping, but I could not find any information about how paging is tied to protected mode. Is paging a separate mode independent of protected mode, or is it implemented within protected mode? And if it is implemented in protected mode, isn't it too slow to first translate the application's segment and offset, and then walk the page directory and page table with yet another offset? Thank you for every response.

    Read the article

  • Linux AMD-FX 8350 temperature monitoring

    - by HyperDevil
    I'm trying to get the CPU temperature for my AMD FX-8350 on Debian Squeeze. I ran sensors-detect and then sensors, but I only get my motherboard sensors (it8720-isa-0228). There are three temperature values there, but I assume those are not for the CPU.

        it8720-isa-0228
        Adapter: ISA adapter
        in0:          +1.36 V  (min =  +0.00 V, max =  +4.08 V)
        in1:          +1.50 V  (min =  +0.00 V, max =  +4.08 V)
        in2:          +3.38 V  (min =  +0.00 V, max =  +4.08 V)
        in3:          +2.93 V  (min =  +0.00 V, max =  +4.08 V)
        in4:          +3.07 V  (min =  +0.00 V, max =  +4.08 V)
        in5:          +4.08 V  (min =  +0.00 V, max =  +4.08 V)
        in6:          +4.08 V  (min =  +0.00 V, max =  +4.08 V)
        in7:          +2.93 V  (min =  +0.00 V, max =  +4.08 V)
        Vbat:         +3.01 V
        fan1:        3375 RPM  (min =   10 RPM)
        fan2:           0 RPM  (min =    0 RPM)
        fan3:        1730 RPM  (min =   10 RPM)
        fan5:           0 RPM  (min =    0 RPM)
        temp1:        +27.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
        temp2:        +53.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermal diode
        temp3:        +65.0°C  (low = +127.0°C, high = +90.0°C)   sensor = thermal diode
        cpu0_vid:    +0.000 V

    Is there anything I am missing? I also loaded the k8temp and k10temp modules and ran sensors-detect again, without any results. I do see this message in dmesg: hwmon-vid: Unknown VRM version of your x86 CPU
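
    In case it's relevant, here is roughly what I tried for the on-die CPU sensor (a rough sketch from memory; the hwmon index differs per machine, and I'm not sure the 2.6.32-era k10temp shipped with Squeeze even knows the FX/family 15h dies yet):

        # load the AMD on-die sensor driver and re-read sensors
        sudo modprobe k10temp
        sensors | grep -A 3 k10temp
        # if the driver bound, the raw reading also shows up in sysfs (millidegrees C)
        cat /sys/class/hwmon/hwmon*/temp1_input 2>/dev/null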

    Read the article

  • Sun Solaris - Find out number of processors and cores

    - by Adrian
    Our SPARC server is running Sun Solaris 10; I would like to find out the actual number of processors and the number of cores for each processor. The output of psrinfo and prtdiag is ambiguous:

        $ psrinfo -v
        Status of virtual processor 0 as of: dd/mm/yyyy hh:mm:ss
          on-line since dd/mm/yyyy hh:mm:ss.
          The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor.
        Status of virtual processor 1 as of: dd/mm/yyyy hh:mm:ss
          on-line since dd/mm/yyyy hh:mm:ss.
          The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor.
        Status of virtual processor 2 as of: dd/mm/yyyy hh:mm:ss
          on-line since dd/mm/yyyy hh:mm:ss.
          The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor.
        Status of virtual processor 3 as of: dd/mm/yyyy hh:mm:ss
          on-line since dd/mm/yyyy hh:mm:ss.
          The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor.

        $ prtdiag -v
        System Configuration:  Sun Microsystems  sun4u Sun Fire V445
        System clock frequency: 199 MHZ
        Memory size: 32GB

        ==================================== CPUs ====================================
                       E$          CPU                    CPU
        CPU  Freq      Size        Implementation         Mask    Status      Location
        ---  --------  ----------  ---------------------  -----   ------      --------
          0  1592 MHz  1MB         SUNW,UltraSPARC-IIIi    3.4    on-line     MB/C0/P0
          1  1592 MHz  1MB         SUNW,UltraSPARC-IIIi    3.4    on-line     MB/C1/P0
          2  1592 MHz  1MB         SUNW,UltraSPARC-IIIi    3.4    on-line     MB/C2/P0
          3  1592 MHz  1MB         SUNW,UltraSPARC-IIIi    3.4    on-line     MB/C3/P0

        $ more /etc/release
                            Solaris 10 8/07 s10s_u4wos_12b SPARC
                  Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
                               Use is subject to license terms.
                                   Assembled 16 August 2007
        Patch Cluster - EIS 29/01/08(v3.1.5)

    What other methods can I use?

    EDITED: It looks like we have a 4-processor system with one core each:

        $ psrinfo -p
        4

        $ psrinfo -pv
        The physical processor has 1 virtual processor (0)
          UltraSPARC-IIIi (portid 0 impl 0x16 ver 0x34 clock 1592 MHz)
        The physical processor has 1 virtual processor (1)
          UltraSPARC-IIIi (portid 1 impl 0x16 ver 0x34 clock 1592 MHz)
        The physical processor has 1 virtual processor (2)
          UltraSPARC-IIIi (portid 2 impl 0x16 ver 0x34 clock 1592 MHz)
        The physical processor has 1 virtual processor (3)
          UltraSPARC-IIIi (portid 3 impl 0x16 ver 0x34 clock 1592 MHz)
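
    For reference, this is the kind of cross-check I had in mind (a rough sketch; on Solaris 10 the cpu_info kstat should expose chip_id/core_id, but the field names can vary by update level):

        psrinfo -p                                              # physical processors (sockets)
        psrinfo | wc -l                                         # virtual processors (hardware threads)
        kstat -m cpu_info | grep -w core_id | sort -u | wc -l   # distinct cores, if core_id is reported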

    Read the article

  • Auto load balancing two node Cluster Hyper-v 2008 R2 enterprise?

    - by Kristofer O
    My setup is a 2-node cluster with 72GB RAM each and a ~10TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on either server. I do a live migration to the other server if I need to run updates or whatever... Either one of the servers is capable of running all the VMs if needed, but the CPU load gets pretty high. Here are my issues. I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up and move them one at a time? If I multi-select, I don't get the option to live migrate them one at a time, or if I'm in the process of migrating one, it gives me an error that it's currently migrating a VM... Is there a button I missed that will tell a node that it needs to migrate all its VMs elsewhere? Another question: does anyone know the best way to load balance VMs based on CPU and/or network utilization? I have some VMs that don't do much, and some that thrash the CPU or network. I'd like to balance them out across both servers if at all possible. And is there any way to automate it? Last question... if I overcommit my cluster, is there a way to tell the cluster that I want certain VMs to be running and to save-state other VMs based on available system resources? Say my one node blue-screens and the other node begins starting the VMs up; I want the unimportant ones to shut down or save state so the important ones can stay running or come back online. Thanks just for reading all that. Any help would be great.

    Read the article

  • HTPC aka "Media Center": cheap and *silent*?

    - by Unknown
    It may be me, or the place I live (Italy), but it seems pretty hard to find a build, a prebuilt nettop or a laptop that fits the need. I need something: silent, able to play back all h.264 full-HD content without stuttering (and without losing the hardware acceleration because of soft subtitles...), silent, not ugly, silent, and (possibly) cheap. I'm going the Linux route, therefore I'm moving towards a CPU-based or NVIDIA-integrated solution (I don't think ATI hardware-accelerated playback - or the Intel "HD" acceleration - is usable yet). Ion nettop: it's either the Acer Revo (but here it's incredibly pricey and it's hard to find the dual-core version) or the ASRock Ion 330, which in the current version is rated "silent" at 26 dB. 26! Sounds pretty noisy to me, and the previous version was even worse. Was this product really aimed at the HTPC market? The Dell Zino is, I think, unfortunately ATI based. Laptop: correct me if I'm wrong, but sub-600 €/$ units are quite loud under full load (because of the tiny fans), and ULV laptops are quite similar: the tiniest fans mean high-pitched noise, and the CPU still lacks power for non-hw-accelerated video decoding. Handmade build: little money can be saved with underpowered CPUs, since a low-to-midrange CPU would help in the case of non-hw-accelerated content; the cases are quite pricey; the PSU one has to get ranges between 100-150 €/$ minimum to keep the noise down; and a low-mid build, all included, sums up to over 650 €/$ for a still-ugly unit, without the Blu-ray drive. Please help. What do you advise on this? ;) Am I ignoring laptops too much, maybe? Are low-priced Acers that noisy/high-pitched under full load?

    Read the article

  • Problems with freetype on OSX 10.7.4

    - by eythor
    I'm trying to install mplayer with OSD using Homebrew. I've added both --enable-menu and --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config to the brew recipe.

        ==> Downloading http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.1.tar.xz
        Already downloaded: /Library/Caches/Homebrew/mplayer-1.1.tar.xz
        xz -dc "/Library/Caches/Homebrew/mplayer-1.1.tar.xz" | /usr/bin/tar xf -
        ==> ./configure --prefix=/usr/local/Cellar/mplayer/1.1 --cc=cc --host-cc=cc --disable-cdparanoia --disable-libopenjpeg --enable-menu --disable-x11 --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config
        ./configure --prefix=/usr/local/Cellar/mplayer/1.1 --cc=cc --host-cc=cc --disable-cdparanoia --disable-libopenjpeg --enable-menu --disable-x11 --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config
        Checking for cc version ... clang 4.2.1 (experimental support only)
        Checking for working compiler ... yes
        Detected operating system: Darwin
        Detected host architecture: x86_64
        Checking for cross compilation ... no
        Checking for host cc ... cc
        Checking for CPU vendor ... GenuineIntel (6:15:10)
        Checking for CPU type ... Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz

    For freetype-config I've tried three separate paths: /usr/X11R6/bin/freetype-config, /usr/X11/bin/freetype-config and the one in Cellar. Checking for freetype always fails:

        Checking for freetype >= 2.0.9 ... no
        Checking for fontconfig ... no (FreeType support needed)

    Although freetype itself seems to be installed:

        mufasa:bin eythor$ freetype-config --version
        15.0.9
        mufasa:bin eythor$ freetype-config --ftversion
        2.4.10
        mufasa:bin eythor$ freetype-config --libs
        -L/usr/local/Cellar/freetype/2.4.10/lib -lfreetype -lz -lbz2
        mufasa:bin eythor$ freetype-config --cflags
        -I/usr/local/Cellar/freetype/2.4.10/include/freetype2 -I/usr/local/Cellar/freetype/2.4.10/include

    I'm not sure what to try next or how to figure out why freetype isn't recognized. Can anyone point me in a sensible direction?
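
    In case it helps to know what I'd check next, this is the rough idea (just a sketch; the Cellar paths are from my install, and I'm assuming MPlayer's configure records its probe results in config.log):

        # make sure the freetype-config that configure is pointed at is executable and sane
        /usr/local/Cellar/freetype/2.4.10/bin/freetype-config --cflags --libs
        # the pkg-config route, in case configure prefers it
        export PKG_CONFIG_PATH=/usr/local/Cellar/freetype/2.4.10/lib/pkgconfig
        pkg-config --modversion freetype2
        # see what the freetype test actually failed on
        grep -i -A 5 freetype config.log | tail -40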

    Read the article

  • Memory leak in Google Chrome

    - by jasondavis
    As a developer it is very common for me to have 2-3 different IDEs open, 10-15 Google Chrome windows holding up to 200 open tabs (I know I get out of hand sometimes), Photoshop, a couple of Twitter bots for promo, and a few other programs, yet my system still runs fast and smooth. I have an i7 processor with 12GB RAM. With all my usual stuff running, my physical memory usage is normally around 50-60%; however, over the course of the day (or much less, even), it will gradually grow to 98%. The highest memory-usage processes are from Google Chrome: if I sort the Task Manager by memory usage and end the single highest process, which is always a Chrome one, my memory usage jumps back down to about 60%. Also, by ending that one process, all my Chrome windows remain open and in use, so ending it doesn't affect me at all. Based on this I am assuming that that one runaway process is likely Adobe Flash, as I can also say it climbs to 98% much faster when I am using Flash items like video or music players. But even without using any of them it will still climb up to that high number eventually. Has anyone else experienced similar results?

    Read the article

  • How to improve Windows Server 2008 R2 to handle many connections?

    - by invisal
    It has been a few days now that I have been trying to figure out how to solve this problem. First of all, I am running a website with an average daily page view of 350,000. Previously, all ads management (tracking the clicks and impressions each ad has served) and all content were served from a single server with the following spec:

        Server 1
        OS: Windows 2008 R2 64-Bit
        CPU: Intel® Core™ i5 - 4 cores
        RAM: 8 GB
        Storage: 2 x 1 TB hard drives
        Bandwidth: 10 TB per month

    To improve our website speed, I decided to separate the ads management script onto another dedicated server, because we have 15 to 30 advertisers on each page.

        Server 2
        OS: Windows 2008 R2 64-Bit
        CPU: Intel® Core™ i5 - 4 cores
        RAM: 4 GB
        Storage: 2 x 300 GB hard drives
        Bandwidth: 10 TB per month

    The problem: Server 1 could handle both the content and the ads system. Now that I have taken the ads system away and put it on Server 2, Server 2 can barely serve the ads system alone.

    Test: first, I moved 75% of the ads to Server 2 and then performed a ping to the server: ping -t xxxxx. (I ran the ping for 10 minutes and it followed a pattern similar to the one below.)

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Reply from xxxxx bytes=32 time=289ms TTL=116
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=348ms TTL=116
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Then I moved 100% of the ads to Server 2 and performed a ping to the server again. (I ran the ping for 10 minutes and it followed a pattern similar to the one below.)

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Request timed out
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Request timed out
        Request timed out
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Attempts so far: increased MaxUserPort and TcpNumConnections, restarted the server, and increased the IIS Max Instances and Instance MaxRequests.

    Server resources: only 10-15% of the network connection is used, only 10-15% of the CPU is used, and only 25% of the memory is used.

    Read the article

  • Max. Bandwidth Cannot be Reached

    - by Poyraz Sagtekin
    I have a question regarding bandwidth usage. I don't think it's a very technical question, but I really want to know the reason behind the problem. I study at one of my university's campuses, where there is 200 Mbps of bandwidth available for approximately 2,100 students. Our main campus has 25,000 students and 2 Gbps of bandwidth, and there is no problem with the internet connection speed there. However, on my campus, we are having difficulties with the internet connection speed. I went to the IT department of my university and they showed me some graphs of the bandwidth usage. The usage has never reached 200 Mbps and is generally around 130-140 Mbps. According to this, the maximum bandwidth is never reached by the users, yet the browsing speed is not very convincing. What can the problem be? Maybe this is a silly question, but I really want to know the reason. They told me that they have reached 180 Mbps before.
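
    To make the question more concrete, this is roughly how I was thinking of measuring what a single client actually gets, independent of the aggregate graph (just a sketch; the host names are placeholders for whatever the IT department can provide):

        iperf3 -c iperf.campus.example -t 30      # raw throughput to a server on the campus network
        ping -c 50 campus-gw.example              # look for latency spikes and packet loss at the gateway
        traceroute www.example.com                # check whether traffic takes a congested or detoured path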

    Read the article

  • Why is my Compaq NC8430 laptop so darned HOT?

    - by Cheeso
    For a long time I've had a Compaq nc8430 laptop. It's nearly 3 years old now. Originally shipped with WinXP, but I installed Vista on it. From the very start it was not a good experience. This thing has one of those "stickpoint" mice, which I like. After a while, I noticed that the computer was generating lots and lots of heat. So much heat, that the stickpoint bumper would melt and disintegrate. Normally I would expect heat if the CPU was working hard, but even when the CPU was idle, the computer was hot. Much too hot to keep on my lap. Turns out this is not an uncommon problem. I installed the HWmonitor tool, and found that the CPU temp was 82C when it was plugged in - pretty darn hot. And because the temp was so high, the fan never turned off, so the laptop was as loud as a jet engine, always. If I unplugged it from A/C power, the screen would dim and the temperature would decrease, and the fan noise would lessen, but still, it was too loud. It's totally unusable. What is the problem?

    Read the article

  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All three of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (i.e. it can control the speed of a 3-pin fan) and the fourth pin is just a reserve. All my fans have a maximum speed of 2000 RPM. Normally, all the fans run around 1000 RPM when I'm not doing anything intensive, which proves that the motherboard can set the speed. However, when I run Folding@Home and my temperatures increase (to around 70°C), only the CPU fan ramps up to around 2000 RPM; the system fans stay around 1000 RPM. Through the BIOS I am able to disable system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out SpeedFan, but neither helped my situation. What I'd like is for the system fans to ramp up their RPM as needed. Any thoughts? TL;DR: My system (case) fans, but not my CPU fan, are always stuck around 1000 RPM out of 2000, no matter the temperature.

    Read the article

  • SQL Server 2008 cluster hangs when a heavy load is run

    - by Billy OT
    We have a SQL Server 2008 active/active cluster running on Windows 2008 R2, with 14GB RAM and 4 CPUs. We have set a memory ceiling of 12GB for SQL Server. We're running an agent job which loads 3 million records into a database. During this load the job fails and the cluster seems to attempt to fail over to the other node, but unsuccessfully, i.e. the cluster address is no longer accessible and we have to manually fail the cluster node back. Watching Task Manager during the load, we can see that memory usage hits a maximum of 12.5GB and the CPU at times hits 100% on all 4 CPUs, but for the most part fluctuates around an average of about 60%. I suppose my question is: will a cluster try to fail over if memory or CPU are taking a heavy hit, or am I barking up the wrong tree? Also, any ideas why it wouldn't fully fail over? We've crawled through the logs, of which there are a lot, and can't find anything useful. We've also tried recreating the issue, but it ran successfully at a later time. Also, 3 million rows doesn't seem like a lot, but in terms of resources should 14GB RAM and 4 CPUs not be sufficient? Further information on this: we ran the load again today and corrupted the database! We received the error message "LogWriter: Operating system error 170". It looks like, under the heavy load, the SQL cluster attempted to fail over and in doing so migrated a LUN (or drive), which meant the disk was no longer reachable (this is just our theory). The database is now 'suspect' and requires restoration. The 170 error above also indicates that, on failing over to the other node, the SQL service could not start as the resource was already in use, so it couldn't fail over fully? But I'm wondering why it would need to fail over in the first place. My assumptions could be completely wrong on this, so any ideas would be appreciated.

    Read the article

  • Officially announced RAM support size doesn't apply to one of twin rigs with just one difference

    - by Deniz
    It will take a little while to describe my situation, but here goes the story: in January 2009 we bought (as OEM parts) two similar systems with just one difference - one of them had a Phenom X4 CPU and the other one (mine) a Phenom X3. From the beginning we had problems powering on both systems with all of their RAM slots populated. We decided to install the systems with just 2 slots populated and to try installing the rest of the RAM sticks later. Both systems did end up supporting 3 sticks. We tried many different procedures to make the systems work with their fourth RAM slot populated: we waited for new BIOS updates and flashed the boards when they were available, we tried different RAM sticks with different frequencies, etc. One day, while we were trying to install the fourth stick, the X4 machine accepted it; the other one did not. The most mind-boggling thing is that after one of my attempts the X3 system stopped working even with the third slot populated. Our boards have AMD 770 chipsets, and we even tried swapping the board of the X3 machine for another 770-chipset board. Now my questions are: Should we change the CPU? What is causing the X3 system to not accept the fourth (or now the third) RAM stick? The manufacturers' sites claim that these boards accept 4 RAM sticks (but they only tested them with certain RAM brands and models). What are the limitations on maximum RAM configurations on motherboards? Are there any "rules of thumb" besides the frequency, voltage and chip type considerations against which we already checked our parts? Our boards are: Gigabyte GA-MA770-DS3 and Sapphire PC-AM2RX780 - PURE CrossFireX 770.

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster (backend) process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array of 250-300 values, using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimizes the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays). I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
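
    For context, this is what I have been checking so far on the settings side (just a sketch; whether the workload is CPU- or I/O-bound during the unnest() scans is exactly what I am unsure about):

        psql -c "SHOW shared_buffers;"         # the shared memory PostgreSQL allocates for itself
        psql -c "SHOW effective_cache_size;"   # only a planner hint about the OS page cache; no memory is reserved
        # watch disk vs CPU while the 8 threads are running
        iostat -x 5
        vmstat 5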

    Read the article

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading. Recently we upgraded our Windows 2008 R2 database server from an X5470 to an X5560. The theory is that both CPUs have very similar performance; if anything, the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so and CPU usage has been pretty high. Page life expectancy is massive and we are getting almost 100% cache hits for pages, so memory is not a problem. When I ran:

        SELECT * FROM sys.dm_os_wait_stats ORDER BY signal_wait_time_ms DESC

    I got:

        wait_type                     waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
        ----------------------------  -------------------  ------------  ----------------  -------------------
        XE_TIMER_EVENT                             115166    2799125790             30165           2799125065
        REQUEST_FOR_DEADLOCK_SEARCH                559393    2799053973              5180           2799053973
        SOS_SCHEDULER_YIELD                     152289883     189948844               960            189756877
        CXPACKET                                234638389    2383701040            141334            118796827
        SLEEP_TASK                              170743505    1525669557              1406             76485386
        LATCH_EX                                 97301008     810738519              1107             55093884
        LOGMGR_QUEUE                             16525384    2798527632          20751319              4083713
        WRITELOG                                 16850119      18328365              1193              2367880
        PAGELATCH_EX                             13254618       8524515             11263              1670113
        ASYNC_NETWORK_IO                         23954146       6981220              7110              1475699

        (10 row(s) affected)

    I also ran:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS (
            SELECT wait_type,
                   wait_time_ms / 1000. AS [wait_time_s],
                   100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                   ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
            FROM sys.dm_os_wait_stats
            WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
                'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
                'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
                'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
                'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN'))
        SELECT W1.wait_type,
               CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
               CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
               CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

    And got:

        wait_type            wait_time_s  pct    running_pct
        CXPACKET             554821.66    65.82  65.82
        LATCH_EX             184123.16    21.84  87.66
        SOS_SCHEDULER_YIELD  37541.17     4.45   92.11
        PAGEIOLATCH_SH       19018.53     2.26   94.37
        FT_IFTSHC_MUTEX      14306.05     1.70   96.07

    That shows huge amounts of time spent synchronizing queries involving parallelism (high CXPACKET). Additionally, anecdotally, many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code). The server has not been under load for more than a day or so. We are experiencing a large amount of variance in query execution times; typically many queries appear to be slower than they were on our previous DB server, and CPU usage is really high. Will disabling hyperthreading help reduce our CPU usage and increase throughput?

    Read the article

  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all new machines. I've recently performed a load test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup:

        - Firewall server running MS Forefront TMG 2010 on a Windows Server 2008 machine
        - Request routing done by IIS Application Request Routing on the firewall machine
        - Web server is a Hyper-V VM on the database server (which is the host OS)
        - These machines are hefty, with dual CPUs of six cores each (12 total procs)
        - Web server running IIS 7.5
        - Web applications built in ASP.NET 2.0, with 1 ISAPI filter (URL Rewrite) in front

    What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 requests at a time. The performance monitor shows nearly all of the counters moving through this pattern: when a burst of requests comes in, requests/sec jumps to 70, queued requests jump to 500, current requests jump up, the CPU jumps up, everything. Then, once it has handled that group of requests, there is a lull of nearly 10 seconds where almost nothing happens: 0-5 requests/sec, 0 queued requests, minimal CPU usage. Then, after 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests are coming through in bursts when I know that the load being generated is not sent that way, especially considering the various load-generating clients send traffic at different intervals with random think times between requests. Is there something in the layers between Hyper-V or perhaps in the hardware which might cause this coalescing of requests? The metric I watch most closely is Requests/sec, but the other critical counters go with it, particularly Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

    Read the article

  • High load without explanation

    - by Sebastian
    I have a very high load on my machine and don't know what is responsible or how to find out. The machine runs a JBoss app server and MySQL. Here is a top from the user at peak time:

        top - 16:23:01 up 101 days, 6:50, 1 user, load average: 23.42, 21.53, 24.73
        Tasks: 9 total, 1 running, 8 sleeping, 0 stopped, 0 zombie
        Cpu(s): 17.2%us, 1.6%sy, 0.0%ni, 80.4%id, 0.1%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem:  16440784k total, 16263720k used,   177064k free,  151916k buffers
        Swap: 16780872k total,    30428k used, 16750444k free, 8963648k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        27344 b     40   0 16.0g 6.5g  14m S  169 41.7  1184:09  java
         6047 b     40   0 11484 1232 1228 S    0  0.0   0:00.01 mysqld_safe
         6192 b     40   0  604m 182m 4696 S    0  1.1  93:30.40 mysqld
         7948 b     40   0 84036 1968 1176 S    0  0.0   0:00.07 sshd
         7949 b     40   0 14004 2900 1608 S    0  0.0   0:00.03 bash
         7975 b     40   0  8604 1044  840 S    0  0.0   0:00.44 top

    The CPU usage of the Java process is normal. The peaks only show up when I deploy a certain web application. Could the resulting network traffic drive up the load in such a way that I don't see it in top?
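
    For what it's worth, this is how I was planning to dig further the next time the load spikes (a rough sketch; 27344 is the JBoss PID from the top output above, and <tid> stands for whichever thread ID shows the highest CPU):

        top -H -p 27344                                # per-thread CPU view of the JVM
        printf '%x\n' <tid>                            # convert the hot thread ID to hex
        jstack 27344 | grep -A 20 'nid=0x<tid-hex>'    # find that thread's stack in a JVM thread dump
        vmstat 5                                       # check whether the load is CPU, I/O wait or run-queue pressure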

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little over a year. Of course it breaks 2 months after the warranty runs out. http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/ I have been using the box as a home media center, running mostly 24/7 and often pausing a video overnight. Since last week the fan has started running extremely loud. After some digging I found that the Intel Atom CPU in it is overheating and the built-in sensor is reporting temperatures way over 105°C. This got me worried, so I took the unit apart, completely vacuumed the heat sink and oiled the fan, but the unit still shows the same behaviour. After turning it on and just observing the hardware monitor in the BIOS, the temperature slowly rises from 40°C to over 95°C in approx. 5 minutes. I am running the newest BIOS and a lightweight Linux OS, OpenELEC with XBMC. Now I am wondering if it could be a faulty heat sensor in the Atom. The recommended running temperature is up to 85°C, but I have not detected any performance hit when running at the above-mentioned 105°C and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at zero load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What should I try to fix this? I would prefer not to damage the CPU, since it is soldered onto the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is transferring properly from the CPU to the heat pipe; the fan is running at full capacity and recently oiled, and warm air is making it out of the exhaust. Edit: one more note: the northbridge (or whatever it is called nowadays) is on the same heat pipe.
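
    In case it matters, this is roughly how I was going to cross-check the reading from inside OpenELEC rather than trusting the BIOS screen alone (just a sketch; I'm not sure which of these interfaces this Atom/ION board actually exposes):

        cat /sys/class/thermal/thermal_zone*/temp 2>/dev/null    # ACPI thermal zones, in millidegrees C
        cat /sys/class/hwmon/hwmon*/temp*_input 2>/dev/null      # any hwmon sensors the kernel picked up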

    Read the article

  • Resource Monitor (resmon) in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of varying with the data? It seems to me that the utility of a graph is to get a quick overview of the values it is showing. So if I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what. I don't really care if the CPU is jumping between 0.01% and 2%. Likewise, if the network usage graph is up near the top, I know that all my bandwidth is being used up and can go figure out what is using it. But the way things are now, the graphs are meaningless because the scales constantly shift: the network usage graph might be scaled to 100 Kbps one second and to 1 Mbps the next! So... is there a registry key or something that will peg the scale of these graphs (the ones down the right-hand side of the Resource Monitor window) to sensible maximums?

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have tried to address my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk out of the Django apps (the stack being open source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer the option of using swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, so that I don't have to request a VPS restart (at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a certain critical amount (for now, say, when free RAM falls to 10%) - which I've noticed occurs after the VPS has been up for long and when traffic to some of the heavier apps suddenly spikes (most are just staging apps anyway). I then wish to be able to kill/restart the offending process(es) - most likely Apache. Doing that manually in these situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak. In brief: monitor overall system RAM usage; when free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, then kill/restart it. Rinse and repeat...
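
    Something like this cron-driven watchdog is what I had in mind, in case it helps frame the question (a rough sketch only; the 10% threshold, the awk field positions for Wheezy's free output, and apache2 as the service to bounce are all my assumptions):

        #!/bin/sh
        # restart Apache when available memory (free + buffers + cache) drops below a threshold
        THRESHOLD=10
        FREE_PCT=$(free | awk '/^Mem:/ {printf "%d", ($4 + $6 + $7) * 100 / $2}')
        if [ "$FREE_PCT" -lt "$THRESHOLD" ]; then
            logger "low-memory watchdog: only ${FREE_PCT}% available, restarting apache2"
            service apache2 restart
        fi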

    Read the article

  • Tomcat performing terribly for no apparent reason

    - by John
    We're running a game application .WAR on Tomcat 6 on an Amazon EC2 server with an 8-core processor and 7GB of RAM. The application uses a MySQL database hosted on Amazon RDS. This Facebook application takes ages to respond when a mere 20-30 users are playing it - a big difference from 1-2 users. The entire .WAR is ~4MB, with all static content hosted elsewhere. The server has never come close to running out of RAM, and CPU utilization has never been higher than 13.5-14%, even with ~500 users, which slowed everything to a standstill. The thread count and thread pools aren't close to being maxed out. I raised maxThreads, but it didn't make a noticeable difference. My theory is that Tomcat can only use one processor core, which would explain why it slowed to a halt even though CPU usage sat stably at 13-14% during the activity spike. But I'm struggling to understand why it would only use one CPU core. There is no processor cap in server.xml, the app contains several servlets (4 or 5), and there is no mention of SingleThreadModel in the Java code. WHAT could be causing the application to run extremely slowly? If there are only 1-5 people on the application it runs fine; with 20-30 people it's barely reachable.
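
    To give an idea of what I've been trying to measure (a rough sketch; <tomcat-pid> is a placeholder, and I'm assuming the connector threads are named "http-..." as they are by default on Tomcat 6):

        jstack <tomcat-pid> | grep -c '"http-'                        # how many connector threads exist
        jstack <tomcat-pid> | grep -c 'SocketInputStream.socketRead'  # threads blocked on network I/O (e.g. waiting on RDS)
        netstat -an | grep :3306 | wc -l                              # connection backlog towards the database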

    Read the article

  • 16TB Volumes and SNMP On Windows

    - by John K
    As volumes larger than 16TB became more common, it was recognized that the 32-bit value used to report disk size and usage within the standard HOST-RESOURCES MIB in SNMP was not large enough to report the proper disk size. Net-SNMP seems to have addressed this issue by simply manipulating the value of "AllocationUnits" to keep the utilization value within 32 bits (since total disk size/usage is equal to the 32-bit block count times the allocation unit), allowing volumes larger than 8/16TB to be reported. Presuming you don't have any reporting interest in the allocation unit itself, this seems like a fine solution. https://bugzilla.redhat.com/show_bug.cgi?id=654384 Windows' built-in SNMP service, however, seems to continue to suffer from this error, simply reporting the used/assigned disk space modulo the 32-bit limit, resulting in inaccurate disk size reporting. Is there a way to make Windows correctly report disk usage for volumes over 16TB? We attempted to simply install Net-SNMP 5.5 x64 and disable the Windows SNMP service entirely; however, this unfortunately did not fix our issue. I've seen people in the Cacti community mention simply scripting out a solution. Unfortunately, we're using Observium for quick and basic systems monitoring. If the issue can't be corrected on the Windows side, can Observium be made to report custom MIBs?
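
    For reference, this is how I've been checking what the agent actually hands out (a sketch; <host> and the community string are placeholders):

        snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrStorageAllocationUnits
        snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrStorageSize
        snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrStorageUsed
        # reported bytes = hrStorageSize * hrStorageAllocationUnits; if the size wraps
        # past 2^32 the Windows agent truncates it, which is the symptom described above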

    Read the article

  • Hyper-V 2008 R2 synthetic networking stops working with linux 2.6.32.15

    - by luxifer
    Hi there, I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they simply fail: no packets go in or out anymore until I reboot the VM, and sometimes even that won't work, so the VM just keeps "Stopping" forever. Emulated networking works fine, but it is slow and uses more CPU; that way my firewall routes slower than when running under VirtualBox on an Atom N270. My server has an E6750 and the VM is limited to 25%, but that should still outperform that Atom CPU, especially since it never goes anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found this, but I can't quite follow the author on the part where he solved the issue - especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?
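
    If it helps, this is what I was going to capture from inside the VM the next time the NICs die (a sketch; the exact module names may differ in IPFire's 2.6.32 staging tree):

        lsmod | grep -i hv_                    # confirm the synthetic drivers (vmbus/netvsc) are actually loaded
        dmesg | grep -i -E 'netvsc|vmbus'      # look for driver errors logged right before traffic stops
        ip -s link                             # per-interface packet/error counters on both virtual NICs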

    Read the article

  • ESX Scheduler and NUMA issue

    - by babyg_wc
    On our 24-core BL685 (4 sockets x 6 cores), we find that NUMA nodes 0 and 1 are pretty busy (unfortunately resulting in elevated CPU ready times on the VMs), whilst NUMA nodes 2 and 3 are almost unused. I thought this might just be an ESX 4 U1 issue, so I had a colleague with a 32-core (DL785) farm investigate, and it seems that his last 3 or 4 NUMA nodes are also not really being utilised. ESX seems to have a weakness when it comes to balancing lightly loaded NUMA boxes. I'm going to enable node interleaving in the BIOS and see if the scheduler balances across all 24 cores instead of just 12... For those of you with large core counts, I would suggest you fire up your VI client and check physical CPU usage (or esxtop); I would be interested to hear what your results are. Please note that it's only the lightly loaded hosts (e.g. less than 30% CPU load on the ESX host) that seem to have the biggest issue with load imbalance. Thoughts/comments? PS: I've logged an SR with VMware to assist. Also, the other "problem" could be that we have 128GB of RAM in each host, and therefore the scheduler sees no good reason why it shouldn't try to cram all the VMs into the first two NUMA nodes, as we only have around 50GB of RAM worth of VMs on each host...

    Read the article

  • Does AMD Cool n Quiet Slow Down Your System?

    - by Software Monkey
    I discovered today that having AMD Cool'n'Quiet enabled in my BIOS appears to be slowing down my Windows XP SP2 system by about 29% on memory- and CPU-intensive workloads. I was wondering if (a) anyone else has encountered this, (b) anyone can offer an explanation, and (c) there are any negatives I need to be aware of if I keep Cool'n'Quiet disabled. With some superficial testing so far, I don't immediately notice any difference with CnQ off (other than the performance being what I expected from this new hardware). It seems to ramp up the CPU fan a little as my program maxes out one core, but that's the same as with CnQ on. And when I let the system idle, the CPU fan slows down and the system is as quiet as a mouse (after years of 6 small fans churning like they want to go into orbit, it's nice to again have a system where I can hear the HDDs seeking). Bonus question: does CnQ cause issues with system stability? I ask because the reason I disabled it was that I have had a few freezes and one spontaneous reboot with my new hardware.

    Read the article
