Search Results

Search found 5628 results on 226 pages for 'cpu hogging'.


  • Do Hyper-V guests see multiple CPUs (sockets) or multiple CPU cores when assigned more than 1 vCPU?

    - by Filip Kierzek
    I have SQL Server 2008 Express running on a Hyper-V-based virtual machine with two vCPUs. I've just been reading up on SQL Server 2012 Express and noticed that its CPU support is "Limited to lesser of 1 Socket or 4 cores" (http://msdn.microsoft.com/en-us/library/cc645993(v=SQL.110).aspx). My question is: how do the SQL Server 2012 limits on CPUs/cores translate to vCPUs? Are they "processors" or are they "cores"?
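
    One empirical way to settle this is to ask the guest OS how the vCPUs are presented to it. Below is a minimal sketch, assuming Python and the third-party wmi package are installed inside the guest; each Win32_Processor instance the guest reports corresponds to one socket.

      # Run inside the Hyper-V guest. If the two vCPUs show up as two
      # Win32_Processor instances with one core each, the guest sees two
      # sockets; if they show up as one instance with two cores, it sees
      # one socket with two cores. (NumberOfCores is not populated on
      # very old guest OSes.)
      import wmi

      for cpu in wmi.WMI().Win32_Processor():
          print(cpu.DeviceID,
                "cores:", cpu.NumberOfCores,
                "logical:", cpu.NumberOfLogicalProcessors)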

    Read the article

  • Does VMware ESX Fault Tolerance (FT) support depend on the CPU only?

    - by user71784
    I'm trying to find out whether VMware ESX 4.x Fault Tolerance (FT) is supported on a particular server, and VMware's HCL is confusing me. It says that some servers with FT-supported processors (specifically the Xeon 3400 Lynnfield) do not support FT, while some with almost identical specs (the same chipset, for instance) do support FT. Could this be a mistake in the HCL itself? To my understanding, FT support is based only on the CPU. Thanks. RC

    Read the article

  • How to limit a process to a single CPU core?

    - by Jonathan
    How do you restrict a single-process program in a Windows environment to run on only one CPU core of a multi-core machine? Is the method the same for a windowed program and a command-line program? UPDATE: The reason for doing this is benchmarking various aspects of programming languages. I need something that works from the very start of the process, so @akseli's answer, although great for other cases, doesn't solve my case.
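
    For reference, a small sketch of the two usual approaches, assuming Python with the third-party psutil package on the Windows machine; benchmark.exe is a hypothetical stand-in for the program under test.

      import subprocess
      import psutil

      # 1) Pin from the very first instruction: cmd.exe's built-in
      #    `start /affinity` takes a hex mask, so 1 means logical CPU 0 only.
      #    shell=True routes the command through cmd.exe, where `start` lives.
      subprocess.run("start /affinity 1 benchmark.exe", shell=True)

      # 2) Pin a process right after launch. Note the brief window where it
      #    may run on any core, which matters for strict benchmarking.
      p = psutil.Popen(["benchmark.exe"])
      p.cpu_affinity([0])   # restrict to logical CPU 0

    Because the first variant applies the mask at process creation, it fits the "from the very start" requirement; the second is simpler when a slightly late pin is acceptable.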

    Read the article

  • How can I sell HP and IBM server CPUs?

    - by elvayee
    I'm now working at a company exporting HP and IBM server CPUs. Our prices are very competitive, our quality is very high, and we have good after-sales service. But the problem is: we don't have a paid B2B listing. How can I find customers? If anyone knows, please contact me by MSN: melodyhua123 AT hotmail dot com or elvayee123 at gmail dot com. Thanks!

    Read the article

  • GPGPU

    What
    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why
    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").

    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

    However, GPGPUs have different characteristics from CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and, vice versa, the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria then you probably want to check out this technology.

    How
    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors, ATI (owned by AMD) and NVIDIA, are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

    If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology, accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into a GPU vendor, and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

    On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

    Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
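
    To make the host/kernel split described above concrete, here is a minimal data-parallel sketch. The post's own examples are Brook+, CUDA C and DirectCompute/HLSL; this one instead uses the OpenCL standard mentioned above, driven from Python via the pyopencl bindings (assumed installed, along with numpy and a working OpenCL driver).

      # Vector addition: the canonical "hello world" of GPGPU. Each work-item
      # computes one element -- a data-parallel kernel with no branching,
      # exactly the class of problem the post says GPUs suit.
      import numpy as np
      import pyopencl as cl

      a = np.random.rand(1000000).astype(np.float32)
      b = np.random.rand(1000000).astype(np.float32)

      ctx = cl.create_some_context()      # picks an available OpenCL device
      queue = cl.CommandQueue(ctx)

      mf = cl.mem_flags
      a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
      b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
      out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

      # The kernel source (OpenCL C); one invocation per array element.
      prg = cl.Program(ctx, """
      __kernel void add(__global const float *a,
                        __global const float *b,
                        __global float *out)
      {
          int i = get_global_id(0);
          out[i] = a[i] + b[i];
      }
      """).build()

      prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

      out = np.empty_like(a)
      cl.enqueue_copy(queue, out, out_buf)  # the transfer cost the post warns about
      assert np.allclose(out, a + b)

    A kernel this trivial will usually lose to the CPU once transfer costs are counted; it only illustrates the programming model.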

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU.

    What I've tried so far:
    - All the latest stable Ubuntu releases in the period, plus one Linux Mint release.
    - All the latest stable AMD Catalyst Proprietary Display Drivers, currently 13.1.
    - The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions.
    - All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember).

    Results:
    - Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version).
    - Around 10 FPS in Team Fortress 2 using the official Steam client.
    - Choppy video playback, in Flash and with VLC.
    - CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC).
    - glxgears shows 12.5 FPS when maximized; fgl_glxgears shows 10 FPS when maximized.

    Hardware details from lshw:
    - Motherboard: ASUS P6X58D-E
    - CPU: Intel Core i7 950 @ 3.07GHz (never overclocked; 64-bit)
    - 6 GB RAM
    - Video card: product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)"
    - 2 x 1920x1200 monitors, both connected with HDMI

    I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual-monitor setup completely mess up the driver?

      $ fglrxinfo
      display: :0  screen: 0
      OpenGL vendor string: Advanced Micro Devices, Inc.
      OpenGL renderer string: AMD Radeon HD 6900 Series
      OpenGL version string: 4.2.11995 Compatibility Profile Context

      $ glxinfo | grep 'direct rendering'
      direct rendering: Yes

    I am currently using the open source driver, with the following results:
    - Full frame rate and low CPU load when playing 1080p video.
    - Black screen (but music in the background) in Team Fortress 2.
    - Similar performance in Minecraft as with the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU.

    My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1 contains some possibly important lines:

      (WW) Falling back to old probe method for fglrx
      (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found

    In the generated xorg.conf, the disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI.

    Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall:
    - Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%.
    - Still 150% CPU use for 1080p video rendering on YouTube in Chromium.
    - Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter.
    - glxgears performance about doubled to 25-30 FPS when maximized; fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

      $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
      Verifying archive integrity... All good.
      Uncompressing VirtualBox for Linux installation...........
      VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
      Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
      Installing VirtualBox to /opt/VirtualBox
      tar: Record size = 8 blocks
      Python found: python, installing bindings...
      Building the VirtualBox kernel modules
      Error! Bad return status for module build on kernel: 3.2.6 (i686)
      Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
      ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

      $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
      DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
      Sun May 13 14:32:52 CEST 2012
      make: Entering directory `/usr/src/linux-headers-3.2.6'
      /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
      make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'. Stop.
      make: Leaving directory `/usr/src/linux-headers-3.2.6'

    Contents of the /usr/src/linux-headers-3.2.6/arch/x86/ directory:

      $ ls /usr/src/linux-headers-3.2.6/arch/x86/
      Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
      Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
      Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    References to "cpu" in the Makefile:

      $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
      include $(srctree)/arch/x86/Makefile_32.cpu
      # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.x I didn't have this problem; the script would install VirtualBox correctly. Any ideas on what might be causing this? Thanks in advance!

    Read the article

  • How to determine the CPU and RAM needed for a Rails app?

    - by Ben
    What is the most accurate way to determine the amount of CPU speed and RAM needed to run my Rails app? I believe there are stress-testing tools like Tsung, but how do I determine, for example, that I need X more RAM or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for deciding whether to upgrade one machine, or get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues and how to anticipate them. Thanks in advance for the help!
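
    Short of a full Tsung run, a crude probe can at least show which resource saturates first. Below is a minimal standard-library sketch; the URL, request count, and worker counts are placeholders, not anything Tsung- or Rails-specific.

      # Ramp up concurrency against one endpoint and watch where throughput
      # flattens. Run `top` (or `vmstat`) alongside: if req/s stops climbing
      # while CPU is pegged, you are CPU-bound; if CPU stays idle while req/s
      # stalls, look at RAM, IO, or the database instead.
      import time
      import urllib.request
      from concurrent.futures import ThreadPoolExecutor

      URL = "http://localhost:3000/"   # hypothetical Rails endpoint
      REQUESTS = 200

      def hit(_):
          with urllib.request.urlopen(URL) as r:
              r.read()

      for workers in (1, 2, 4, 8, 16, 32):
          start = time.time()
          with ThreadPoolExecutor(max_workers=workers) as pool:
              list(pool.map(hit, range(REQUESTS)))
          print(f"{workers:>2} workers: {REQUESTS / (time.time() - start):6.1f} req/s")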

    Read the article

  • Google App Engine - What causes cold start latency to be high, even though my CPU usage is relatively low?

    - by Spines
    I've optimized my code to use only lightweight libraries. I'm even using the low-level datastore rather than JDO. My cold-start CPU usage has dropped from about 5 seconds to about 1.5 seconds. However, the time it takes to respond is often about 4.5 seconds, though it varies a lot. Here are some lines from my logs:

      03-19 09:16PM 57.368 /donothing 200 4506ms 1516cpu_ms 0kb Mozilla/5.0
      03-19 09:22PM 54.884 /donothing 200 4452ms 1477cpu_ms 0kb Mozilla/5.0

    What is App Engine doing for those extra 3 seconds that apparently isn't using any CPU?

    Read the article

  • Is there any difference between processor and core?

    - by Salvador
    The following two commands seem to give me different information about the same hardware:

      srs@ubuntu:~$ cat /proc/cpuinfo | grep -e processor -e cores
      processor  : 0
      cpu cores  : 4
      processor  : 1
      cpu cores  : 4
      processor  : 2
      cpu cores  : 4
      processor  : 3
      cpu cores  : 4

      srs@ubuntu:~$ sudo dmidecode -t processor
      # dmidecode 2.9
      SMBIOS 2.6 present.

      Handle 0x0004, DMI type 4, 42 bytes
      Processor Information
          Socket Designation: LGA1155
          Type: Central Processor
          Family: <OUT OF SPEC>
          Manufacturer: Intel
          ID: A7 06 02 00 FF FB EB BF
          Version: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
          Voltage: 1.0 V
          External Clock: 100 MHz
          Max Speed: 3800 MHz
          Current Speed: 3300 MHz
          Status: Populated, Enabled
          Upgrade: Other
          L1 Cache Handle: 0x0005
          L2 Cache Handle: 0x0006
          L3 Cache Handle: 0x0007
          Serial Number: To Be Filled By O.E.M.
          Asset Tag: To Be Filled By O.E.M.
          Part Number: To Be Filled By O.E.M.
          Core Count: 4
          Core Enabled: 1
          Characteristics: 64-bit capable

    Until today I thought I had a single processor with 4 independent cores. I also thought that different threads could run within each core.
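
    The two outputs reconcile as follows: dmidecode describes the physical package (one socket with 4 cores), while /proc/cpuinfo lists one entry per logical processor. A small sketch that derives all three numbers from /proc/cpuinfo (Linux only; note that some VMs omit the physical/core id fields).

      # Count sockets, physical cores, and logical processors. On an
      # i5-2500K (no Hyper-Threading) this prints 1 socket, 4 cores,
      # 4 logical processors -- consistent with both commands above.
      from collections import defaultdict

      sockets = defaultdict(set)   # physical id -> set of core ids
      logical = 0
      phys = None
      with open("/proc/cpuinfo") as f:
          for line in f:
              key, _, value = line.partition(":")
              key, value = key.strip(), value.strip()
              if key == "processor":
                  logical += 1
              elif key == "physical id":
                  phys = value
              elif key == "core id" and phys is not None:
                  sockets[phys].add(value)

      print("sockets:", len(sockets))
      print("physical cores:", sum(len(c) for c in sockets.values()))
      print("logical processors:", logical)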

    Read the article

  • (When) Does hardware, especially the CPU(s), deliver wrong results?

    - by sub
    What I'm talking about is: is it possible that under certain circumstances the CPU "bugs out" and suddenly returns a wrong result for a computation as simple as 1+1? In which parts of the computer can that happen (HDD, RAM, mainboard)? What could be the causes? Bad quality? Overheating? Does that even happen? If yes, how frequently? If everything is okay with the CPU (not a single fault in production, good temperature), can it still happen sometimes? What would be the consequences of, let's say, one to three wrong computations? This is programming-related, as it would be nice to know whether you can even rely on the hardware to return the right results.

    Read the article

  • Why does C++ linking use virtually no CPU? (updated)

    - by John
    On a native C++ project, linking right now can take a minute or two, yet during this time CPU usage drops from 100% during compilation to virtually zero. Does this mean linking is primarily a disk activity? If so, is this the main area where an SSD would make a big difference? But why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to save a lot of disk access and make it CPU-bound again, no? Update: so the obvious follow-up is, can the VC++ compiler and linker cooperate better to streamline things and keep OBJ files in memory, similar to how Delphi does?

    Read the article

  • How to check CPU temperature on an HP P2000?

    - by Pavel
    I have an HP StorageWorks MSA Storage P2000 G3 SAS. show sensor-status gives something like:

      # show sensor-status
      Sensor Name                    Value  Status
      ----------------------------------------------------
      On-Board Temperature 1-Ctlr A  53 C   OK
      On-Board Temperature 1-Ctlr B  52 C   OK
      On-Board Temperature 2-Ctlr A  61 C   OK
      On-Board Temperature 2-Ctlr B  63 C   OK
      On-Board Temperature 3-Ctlr A  53 C   OK
      On-Board Temperature 3-Ctlr B  53 C   OK
      Disk Controller Temp-Ctlr A    34 C   OK
      Disk Controller Temp-Ctlr B    32 C   OK
      Memory Controller Temp-Ctlr A  66 C   OK
      Memory Controller Temp-Ctlr B  67 C   OK
      [...]
      Overall Unit Status            OK     OK
      Temperature Loc: upper-IOM A   40 C   OK
      Temperature Loc: lower-IOM B   38 C   OK
      Temperature Loc: left-PSU      36 C   OK
      Temperature Loc: right-PSU     40 C   OK
      [...]

    Is one of these values the CPU/FPGA temperature? If not, how do I get it? Thanks!

    Read the article

  • explorer.exe eating all CPU, how to detect the culprit?

    - by JohnDoe
    Windows 7 64-bit. I am using Process Explorer from Sysinternals, and it says that the offending call is ntdll.dll!RtlValidateHeap+0x170; however, the call stack leading to that entry is always different, so it's hard for me to track down the problem. Maybe it's a badly programmed trojan causing exceptions in explorer.exe, but that is only wild speculation. explorer.exe then consumes 25% CPU (one core on a dual-core machine). Killing the process makes the taskbar go away; after respawning it from Task Manager, half a minute later it's again eating all CPU cycles.

    Read the article

  • Can I provision half a core as a virtual CPU?

    - by ramdaz
    I am a virtualization newbie. Please advise on these questions, and note that using commercial VM software like Citrix or VMware is not an option for me. I have at my disposal a couple of 2x 4-core servers with 32 GB RAM. I need to create 16 VMs on each server to test some web applications. Can I provision half a core as a virtual CPU for each VM? To my best knowledge I can't do so on Xen. Is it possible on KVM or some other free, open source VM solution? If it's not possible to assign half a core, how do I ensure that uniform processing power is available to all VMs? Since the job is to create separate instances for hosting 16 web apps on a physical server, do you recommend setting up a private cloud using Ubuntu Enterprise Cloud as a better option? Is there an HA solution under KVM, like Remus for Xen?
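
    As far as I know, KVM does not expose fractional vCPUs either; the usual workaround is to give each VM one full vCPU and cap its runtime with the Linux CFS bandwidth controls, which also yields the uniform share asked about above. Below is a sketch using the libvirt Python bindings, assuming a libvirt/KVM build that exposes the vcpu_period/vcpu_quota scheduler tunables; the domain name is a placeholder.

      # Cap a KVM guest at 50% of one physical core: in every 100 ms
      # scheduling period, its vCPU may run for at most 50 ms.
      import libvirt

      conn = libvirt.open("qemu:///system")
      dom = conn.lookupByName("webapp-vm-01")   # hypothetical domain name
      dom.setSchedulerParameters({
          "vcpu_period": 100000,   # period, in microseconds
          "vcpu_quota": 50000,     # allowed runtime per period, microseconds
      })
      conn.close()

    Applied to all 16 VMs on an 8-core host, this gives each guest a steady half-core share instead of letting one busy VM monopolize a core.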

    Read the article

  • How much thermal paste should I apply to the CPU?

    - by iconiK
    There are a million different pages around the Internet with conflicting information on how much thermal paste to apply and how to spread it. Some say a half-bean-sized drop in the middle, others say a circle or rectangle on the CPU. Some tell you to let the heatsink's base spread it, while others say to spread it with a knife or your finger with a plastic bag over it. Some coolers even come with it fully applied to the base, like the Corsair H50 and all Arctic Cooling products. What is the best way to apply and spread thermal paste, and how much of it?

    Read the article

  • Why should krfb use so much CPU when I never use it?

    - by Newton Falls
    I was playing around with KSysGuard and I noticed that the process using the most CPU was krfb, which is the server process for desktop sharing. I never use desktop sharing, so I suppose it is a process loaded by default. Why would this process use so much juice (around 15%) when I never use it and it really shouldn't be doing much of anything? I don't see any network activity, so I don't think I am being hacked. I have suspended the process and nothing bad seems to have happened. Can I assume this is a safe thing to do?

    Read the article

  • Dual Core or Quad Core CPU for NetBeans/Eclipse development?

    - by cdb
    I am going to buy a new desktop CPU. I am a programmer who mainly uses the NetBeans IDE for Java web application development, with the GlassFish application server. I went through the discussion regarding dual core versus quad core. My concern is that software like IDEs (NetBeans, Eclipse, etc., with a server running) may not be written with multiple cores in mind. I am not a game addict... So what is best for me, and which company should I choose, AMD or Intel?

    Read the article

  • Why might apache2 use 100% of CPU at startup?

    - by QuantumMechanic
    This is Apache 2.2.14 on SLES9. Out of nowhere (i.e. it had been working fine for ages) I am seeing apache2 suddenly start using 100% of the CPU at startup, and never completing startup. Nothing is getting written to /var/log/error_log (it was back when things were OK). ps only shows the main httpd process and not any of the spawned threads; when things were OK, it would show the spawned threads. So it appears httpd is going into some sort of infinite loop right at startup and isn't even completing startup. It's not an issue of being overloaded by connections -- this happens even when nothing is trying to contact it. The config files haven't changed (or at least not in a way that changed their last-modified time). I've tried adding -e debug -E /var/log/apache2/startup_info to the command line, but nothing is put in the file. Any ideas what could be happening?

    Read the article

  • Why does WMI Provider Host (WmiPrvSE.exe) keep spiking my CPU?

    - by Sathya
    I generally keep my laptop on 24x7, and at the end of the day it's really annoying to have my thighs burnt because of overheating. The overheating seems to be a result of WMI Provider Host (WmiPrvSE.exe) spiking the CPU utilization to 25% every few minutes. Any ideas why this is happening? I have an HP Envy 14 (with the HP bundled crapware) running Windows 7 Home Premium. (Note: based on @nhinkle's past observations, it seems that HP Wireless Manager might be the culprit; is there any way to confirm this?)

    Read the article
