Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.


  • GPGPU

    What

    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why

    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").

    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

    However, GPGPUs have different characteristics from CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria then you probably want to check out this technology.

    How

    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors, ATI (owned by AMD) and NVIDIA, are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

    If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into that GPU vendor, and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

    On Windows, there is already a cross-GPU-vendor way of programming GPUs, and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

    Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
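
    As a rough illustration of the kernel paradigm described above (not of DirectCompute itself), here is a minimal sketch using OpenCL through the PyOpenCL bindings. PyOpenCL is my choice for the example and is not mentioned in the post, and the element-wise squaring kernel is made up; the explicit buffer copies are the CPU-to-GPU transfer cost the post warns about.

        import numpy as np
        import pyopencl as cl

        # Host-side input data.
        a = np.random.rand(1_000_000).astype(np.float32)

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)

        mf = cl.mem_flags
        a_dev = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)  # CPU -> GPU transfer
        out_dev = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        # The "kernel": a data-parallel function executed once per element on the GPU.
        program = cl.Program(ctx, """
        __kernel void square(__global const float *a, __global float *out) {
            int gid = get_global_id(0);
            out[gid] = a[gid] * a[gid];
        }
        """).build()

        program.square(queue, a.shape, None, a_dev, out_dev)

        result = np.empty_like(a)
        cl.enqueue_copy(queue, result, out_dev)  # GPU -> CPU transfer of the results
        queue.finish()

    The host program above plays exactly the role described in the post: it prepares the input, ships it to the device, launches the kernel, and waits for the results to come back.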

    Read the article

  • Deploying Socket.IO App to Windows Azure Web Site with Azure CLI

    - by shiju
    In this blog post, I will demonstrate how to deploy a Socket.IO app to a Windows Azure Web Site using the Windows Azure Cross-Platform Command-Line Interface (Azure CLI), which leverages the new Web Sockets support in Windows Azure Web Sites. Windows Azure recently announced a lot of enhancements, including Web Sockets support in Windows Azure Web Sites, which lets Node.js developers deploy Socket.IO apps there. In this post, I am using the Azure CLI to create and deploy the Web Site.

    Install the Windows Azure CLI

    The Windows Azure CLI is available as an NPM module, so you can install it with NPM. After installing azure-cli, just enter the command "azure", which will show the useful commands provided by the Azure CLI.

    Import a Windows Azure Subscription Account

    In order to import our Azure subscription account, we need to download the Windows Azure subscription profile. The Azure CLI "account download" command lets you download the Windows Azure subscription profile: it redirects you to log in to the Windows Azure portal and lets you download the publish settings file. The "account import" command then imports the downloaded publish settings file so that you can create and manage Web Sites, Cloud Services, Virtual Machines and Mobile Services in Windows Azure.

    Create a Windows Azure Web Site and Enable Web Sockets

    In this post, we are going to deploy a Socket.IO app to a Windows Azure Web Site using the Web Socket support provided by Windows Azure. Let's create a Web Site named "socketiochatapp" using the Azure CLI. This will create a Windows Azure Web Site and also initialize a Git repository with a remote named azure. We can see the newly created Web Site in the Azure portal. By default, Web Sockets are disabled, so let's enable them by navigating to the Configure tab of the Web Site, selecting "ON" for the Web Sockets option and saving the configuration changes.

    Deploy a Node.js Socket.IO App to Windows Azure

    Now our Windows Azure Web Site supports Web Sockets, so we can easily deploy a Socket.IO app to it. Let's add a Node.js chat app which leverages the Socket.IO module. Please note that you have to add the npm module dependencies in the package.json file so that Windows Azure can install them when deploying the app. Let's add the Node.js app files to the git repository, commit the changes locally, and push them to the Windows Azure production environment. The successful deployment can be seen in the Windows Azure portal by navigating to the Deployments tab of the selected Web Site, and the chat app then runs successfully. You can follow me on Twitter @shijucv

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU.

    What I've tried so far:

    - All the latest stable Ubuntu releases in the period, plus one Linux Mint release.
    - All the latest stable AMD Catalyst Proprietary Display Drivers, currently 13.1.
    - The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions.
    - All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember).

    Results:

    - Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version).
    - Around 10 FPS in Team Fortress 2 using the official Steam client.
    - Choppy video playback, in Flash and with VLC.
    - CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC).
    - glxgears shows 12.5 FPS when maximized.
    - fgl_glxgears shows 10 FPS when maximized.

    Hardware details from lshw:

    - Motherboard: ASUS P6X58D-E
    - CPU: Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit)
    - 6 GB RAM
    - Video card: product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)"
    - 2 x 1920x1200 monitors, both connected with HDMI.

    I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual monitor setup completely mess up the driver?

        $ fglrxinfo
        display: :0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: AMD Radeon HD 6900 Series
        OpenGL version string: 4.2.11995 Compatibility Profile Context

        $ glxinfo | grep 'direct rendering'
        direct rendering: Yes

    I am currently using the open source driver, with the following results:

    - Full frame rate and low CPU load when playing 1080p video.
    - Black screen (but music in the background) in Team Fortress 2.
    - Similar performance in Minecraft as the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU.

    My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines:

        (WW) Falling back to old probe method for fglrx
        (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found

    The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI.

    Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall:

    - Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%.
    - Still 150% CPU use for 1080p video rendering on YouTube in Chromium.
    - Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter.
    - glxgears performance about doubled to 25-30 FPS when maximized.
    - fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

        $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox for Linux installation...........
        VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
        Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
        Installing VirtualBox to /opt/VirtualBox
        tar: Record size = 8 blocks
        Python found: python, installing bindings...
        Building the VirtualBox kernel modules
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
        ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

        $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
        DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
        Sun May 13 14:32:52 CEST 2012
        make: Entering directory `/usr/src/linux-headers-3.2.6'
        /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
        make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'. Stop.
        make: Leaving directory `/usr/src/linux-headers-3.2.6'

    The /usr/src/linux-headers-3.2.6/arch/x86/ directory:

        $ ls /usr/src/linux-headers-3.2.6/arch/x86/
        Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
        Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
        Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    Makefile references on "cpu":

        $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
        include $(srctree)/arch/x86/Makefile_32.cpu
        # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.X I didn't have this problem; the script would install VB correctly. Any ideas on what might be causing this? Thanks in advance!

    Read the article

  • storing and retrieving socket

    - by Trevor Newhook
    From what I can understand, once I create a socket, I can store it in an object with userArray[socket.nickname] = socket; and then send a message to it with: io.sockets.socket(userArray[data.to]).emit('private message', tstamp(), socket.nickname, message); The basic logic is to store a copy of each socket in an object, identified by nickname. When I want to send a message to that socket, I use the copy of the socket, and send the message via io.sockets.socket(id).emit(). The entire server code is below:

        io.sockets.on('connection', function (socket) {
          socket.on('user message', function (msg) {
            socket.broadcast.emit('user message', tstamp(), socket.nickname, msg);
            updateLog('user message', socket.nickname, msg);
          });
          socket.on('private message', function (data) {
            socket.get(data.nickname, function (err, name) {
              console.log('Chat message by ', name);
            });
            updateLog('private message', socket.nickname, data.message);
            message = data.message;
            io.sockets.socket(userArray[data.to]).emit('private message', tstamp(), socket.nickname, message);
          });
          socket.on('get log', function () {
            updateLog(); // Ensure old entries are cleared out before sending it.
            io.sockets.emit('chat log', log);
          });
          socket.on('nickname', function (nick, fn) {
            var i = 1;
            var orignick = nick;
            while (nicknames[nick]) {
              nick = orignick + i;
              i++;
            }
            fn(nick);
            nicknames[nick] = socket.nickname = nick;
            userArray[socket.nickname] = socket;
            socket.set('nickname', nick, function () {
              socket.emit('ready');
            });
            socket.broadcast.emit('announcement', nick + ' connected');
            // io.sockets.socket(userArray[nick]).emit('newID', 'Your name is: ' + nick, '. Your ID is: ' + userArray[nick]);
            io.sockets.emit('nicknames', nicknames);
          });
        }); // closing brace for the connection handler, missing from the excerpt

    Read the article

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool, I get this error on connection:

        AutoReconnect: could not find master/primary

    But when I change the _Pool class directly in the pymongo.connection module, it works pretty fine. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey patch with the same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need to implement a pool for all mongo connections (for all threads). I use this code:

        import os  # added here: needed for os.getpid() below, missing from the excerpt
        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """

            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
            #sock = None

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()
                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]
                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())
                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                    self.sock = None

        pymongo.connection._Pool = GPool

    Read the article

  • How to determine cpu, ram needed for rails app?

    - by Ben
    What is the most accurate way to determine the amount of CPU speed and RAM needed to run my Rails app? I believe there are stress testing tools like Tsung, but how do I determine, for example, that I need X more RAM or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for deciding whether to upgrade one machine, or get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues and how to anticipate them. Thanks in advance for the help!
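
    One rough way to gather these numbers (an illustration, not something from the question) is to sample the app server's CPU and memory while a stress tool such as Tsung drives increasing load, and see how usage scales with concurrency. The sketch below assumes Python with the psutil package is available on the server, and the process-name filter is a placeholder to adjust to whatever actually serves the app (unicorn, puma, passenger, ...).

        import time
        import psutil

        # Hypothetical name filter for the Rails app server processes.
        APP_PROCESS_NAMES = ("ruby", "puma", "unicorn")

        def sample(interval=5):
            """Print total CPU% and RSS memory of matching processes every `interval` seconds."""
            while True:
                procs = [p for p in psutil.process_iter(["name"])
                         if (p.info["name"] or "").lower().startswith(APP_PROCESS_NAMES)]
                # Note: cpu_percent() returns 0.0 on the first call for a process,
                # so discard the first sample of the run.
                cpu = sum(p.cpu_percent(interval=None) for p in procs)
                rss_mb = sum(p.memory_info().rss for p in procs) / (1024 * 1024)
                print("%s  procs=%d  cpu=%.1f%%  rss=%.1f MB"
                      % (time.strftime("%H:%M:%S"), len(procs), cpu, rss_mb))
                time.sleep(interval)

        if __name__ == "__main__":
            sample()

    Plotting those samples against the request rate from the load test gives a first approximation of how much headroom the current machine has.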

    Read the article

  • Google App Engine - What causes cold start latency time to be high, even though my CPU usage is relatively low?

    - by Spines
    I've optimized my code to use only lightweight libraries. I'm even using the low-level datastore rather than JDO, and my cold start CPU usage has dropped from about 5 seconds to about 1.5 seconds. However, the time it takes to respond is often about 4.5 seconds, though it varies a lot. Here are some lines from my logs:

        03-19 09:16PM 57.368 /donothing 200 4506ms 1516cpu_ms 0kb Mozilla/5.0
        03-19 09:22PM 54.884 /donothing 200 4452ms 1477cpu_ms 0kb Mozilla/5.0

    What is App Engine doing for those extra 3 seconds that apparently isn't using any CPU?

    Read the article

  • Is there any difference between processor and core?

    - by Salvador
    The following two commands seem to give me different information about the same hardware:

        $ cat /proc/cpuinfo | grep -e processor -e cores
        processor  : 0
        cpu cores  : 4
        processor  : 1
        cpu cores  : 4
        processor  : 2
        cpu cores  : 4
        processor  : 3
        cpu cores  : 4

        $ sudo dmidecode -t processor
        # dmidecode 2.9
        SMBIOS 2.6 present.

        Handle 0x0004, DMI type 4, 42 bytes
        Processor Information
            Socket Designation: LGA1155
            Type: Central Processor
            Family: <OUT OF SPEC>
            Manufacturer: Intel
            ID: A7 06 02 00 FF FB EB BF
            Version: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
            Voltage: 1.0 V
            External Clock: 100 MHz
            Max Speed: 3800 MHz
            Current Speed: 3300 MHz
            Status: Populated, Enabled
            Upgrade: Other
            L1 Cache Handle: 0x0005
            L2 Cache Handle: 0x0006
            L3 Cache Handle: 0x0007
            Serial Number: To Be Filled By O.E.M.
            Asset Tag: To Be Filled By O.E.M.
            Part Number: To Be Filled By O.E.M.
            Core Count: 4
            Core Enabled: 1
            Characteristics: 64-bit capable

    Until today I thought I had a single processor with 4 independent cores. I also thought that different threads could run within each core.
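
    As an illustration of the distinction (mine, not the asker's), the sketch below parses /proc/cpuinfo and separates the three counts that usually get conflated: physical packages (sockets), cores, and logical processors. On an i5-2500K it should report 1 package, 4 cores and 4 logical processors, since that model has no Hyper-Threading.

        def cpu_topology(path="/proc/cpuinfo"):
            """Return (physical packages, cores, logical processors) on Linux."""
            logical = 0
            packages = set()
            cores = set()

            def record(block):
                # "physical id" identifies the socket, "core id" the core within it;
                # each block in /proc/cpuinfo is one logical processor.
                packages.add(block.get("physical id", "0"))
                cores.add((block.get("physical id", "0"),
                           block.get("core id", str(len(cores)))))

            current = {}
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        if current:
                            logical += 1
                            record(current)
                        current = {}
                        continue
                    key, _, value = line.partition(":")
                    current[key.strip()] = value.strip()
            if current:  # file may not end with a blank line
                logical += 1
                record(current)
            return len(packages), len(cores), logical

        if __name__ == "__main__":
            sockets, cores, logical = cpu_topology()
            print("physical packages: %d, cores: %d, logical processors: %d"
                  % (sockets, cores, logical))

    On a CPU with Hyper-Threading the logical count would be twice the core count, which is exactly where the "processor" lines and the "cpu cores" field start to disagree.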

    Read the article

  • (When) Does hardware, especially the CPU(s), deliver wrong results?

    - by sub
    What I'm talking about is: Is it possible that under certain circumstances the CPU "bugs out" and suddenly gets a computation like 1+1 wrong? In which parts of the computer can that happen (HDD, RAM, mainboard)? What could be the causes? Bad quality? Overheating? Does that even happen? If yes, how frequently? If everything is okay with the CPU (not a single fault in production, good temperature), can that still happen sometimes? What would be the results of, let's say, one to three wrong computations? This is programming related, as it would be nice to know whether you can even rely on the hardware to return the right results.

    Read the article

  • Why does C++ linking use virtually no CPU? (updated)

    - by John
    On a native C++ project, linking right now can take a minute or two, yet during this time CPU usage drops from 100% during compilation to virtually zero. Does this mean linking is primarily a disk activity? If so, is this the main area where an SSD would make a big difference? But why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to save a lot of disk access and make it CPU-bound again, no? Update: so the obvious follow-up is, can the VC++ compiler and linker talk to each other better to streamline things and keep OBJ files in memory, similar to how Delphi does?

    Read the article

  • How to check CPU temperature on an HP P2000?

    - by Pavel
    I have an HP StorageWorks MSA Storage P2000 G3 SAS. show sensor-status gives something like:

        # show sensor-status
        Sensor Name                     Value   Status
        ----------------------------------------------------
        On-Board Temperature 1-Ctlr A   53 C    OK
        On-Board Temperature 1-Ctlr B   52 C    OK
        On-Board Temperature 2-Ctlr A   61 C    OK
        On-Board Temperature 2-Ctlr B   63 C    OK
        On-Board Temperature 3-Ctlr A   53 C    OK
        On-Board Temperature 3-Ctlr B   53 C    OK
        Disk Controller Temp-Ctlr A     34 C    OK
        Disk Controller Temp-Ctlr B     32 C    OK
        Memory Controller Temp-Ctlr A   66 C    OK
        Memory Controller Temp-Ctlr B   67 C    OK
        [...]
        Overall Unit Status             OK      OK
        Temperature Loc: upper-IOM A    40 C    OK
        Temperature Loc: lower-IOM B    38 C    OK
        Temperature Loc: left-PSU       36 C    OK
        Temperature Loc: right-PSU      40 C    OK
        [...]

    Is one of these values the CPU/FPGA temperature? Or, if not, how do I get it? Thanks!

    Read the article

  • explorer.exe eating all CPU, how to detect the culprit?

    - by JohnDoe
    Windows 7 64-bit. I am using Process Explorer from Sysinternals, and it says that the offending call is ntdll.dll!RtlValidateHeap+0x170; however, the call stack leading to that entry is always different, so it's hard for me to track down the problem. Maybe it's a badly programmed trojan causing exceptions in explorer.exe, but that is only wild speculation. explorer.exe is then consuming 25% (a core on a dual core). Killing the process makes the task bar go away; after respawning it from Task Manager, half a minute later it's again eating all the CPU cycles.

    Read the article

  • Can I provision half a core as a virtual CPU?

    - by ramdaz
    I am a virtualization newbie, so please advise on these questions. Please note that using commercial VM software like Citrix or VMware is not a choice for me. I have at my disposal a couple of 2x 4-core servers with 32 GB RAM. I need to create 16 VMs on each server to test some web applications. Can I provision half a core as a virtual CPU for each VM? To the best of my knowledge I can't do so on Xen. Is it possible on KVM or some other free open source VM solution? If it's not possible to assign half a core, how do I ensure that uniform processing power is available for all VMs? Since the job is to create separate instances for hosting 16 web apps on a physical server, do you recommend setting up a private cloud using Ubuntu Enterprise Cloud as a better option? Is there an HA solution under KVM, like Remus for Xen?
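
    Not from the question, but one way to approximate "half a core" on a Linux KVM host, where each guest is just a qemu-kvm process, is the CFS bandwidth controller in cgroups: setting the quota to half the period caps the group at roughly 50% of one core. The sketch below is only an illustration; it assumes a reasonably recent kernel with the cgroup v1 cpu controller mounted at /sys/fs/cgroup/cpu, root privileges, and a placeholder group name and PID. Libvirt exposes the same mechanism through its cputune period/quota settings if you manage guests that way.

        import os

        CGROUP_ROOT = "/sys/fs/cgroup/cpu"   # assumes the cgroup v1 cpu controller is mounted here

        def cap_to_half_core(group_name, pid, period_us=100000):
            """Create a cpu cgroup and cap its members to ~0.5 of one core.

            cpu.cfs_period_us / cpu.cfs_quota_us are the CFS bandwidth knobs:
            quota = period / 2 means the group may run for at most half of one
            CPU's time in every period.
            """
            group = os.path.join(CGROUP_ROOT, group_name)
            os.makedirs(group, exist_ok=True)
            with open(os.path.join(group, "cpu.cfs_period_us"), "w") as f:
                f.write(str(period_us))
            with open(os.path.join(group, "cpu.cfs_quota_us"), "w") as f:
                f.write(str(period_us // 2))      # half a core
            with open(os.path.join(group, "tasks"), "w") as f:
                f.write(str(pid))                 # move the qemu-kvm process into the group

        # Usage (hypothetical PID of one guest's qemu-kvm process):
        # cap_to_half_core("vm01", 12345)

    Applying the same quota to every guest's group also gives the uniform share of processing power asked about above, regardless of how many vCPUs each guest thinks it has.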

    Read the article

  • How much thermal paste should I apply to the CPU?

    - by iconiK
    There are a million different pages around the Internet with conflicting information on how much thermal paste to apply and how to spread it. Some say a half-bean-sized drop in the middle, others say a circle or rectangle on the CPU. Some tell you to let the heatsink's base spread it, while others say to spread it with a knife or your finger with a plastic bag over it. Some coolers even come with it applied fully on the base, like the Corsair H50 and all Arctic Cooling products. What is the best way to apply and spread thermal paste, and how much of it?

    Read the article

  • Why should krfb use so much cpu when I never use it?

    - by Newton Falls
    I was playing around with KSysGuard and I noticed the process using the most cpu was krfb, which is the server process for desktop sharing. I never use desktop sharing so I suppose it is a default loaded process. Why would this process use so much juice (around 15%) when I never use it and it really shouldn't be doing much of anything? I don't see any network activity so I don't think I am being hacked. I have suspended the process and nothing bad seems to have happened. Can I assume this is a safe thing to do?

    Read the article

  • Dual Core or Quad Core CPU for NetBeans/Eclipse development?

    - by cdb
    I am going to buy a new desktop CPU. I am a programmer who mainly uses the NetBeans IDE for Java web application development, with the GlassFish application server. I went through the discussion regarding dual core vs. quad core. My concern is that software like IDEs (NetBeans, Eclipse, etc., with a server running) may not be written with multiple cores in mind. I am not a game addict... So what is best for me, and which company should I choose, AMD or Intel?

    Read the article

  • Why might apache2 use 100% of CPU at startup?

    - by QuantumMechanic
    This is Apache 2.2.14 on SLES9. Out of nowhere (i.e. it had been working fine for ages) I am seeing apache2 suddenly start using 100% of the CPU at startup, and never completing startup. Nothing is getting written to /var/log/error_log (which it did back when things were OK). ps only shows the main httpd process and not any of the spawned threads; when things were OK, it would show the spawned threads. So it appears httpd is going into some sort of infinite loop right at startup and isn't even completing startup. It's not an issue of being overloaded by connections -- this happens even when nothing is trying to contact it. The config files haven't changed (or at least they haven't changed in a way that changed their last-modified time). I've tried adding -e debug -E /var/log/apache2/startup_info to the command line, but nothing is put in the file. Any ideas what could be happening?

    Read the article

  • Why does WMI Provider Host (WmiPrvSE.exe) keep spiking my CPU?

    - by Sathya
    I generally keep my laptop on 24x7, and at the end of the day it's really annoying to have my thighs burnt because of overheating. The overheating seems to be a result of WMI Provider Host (WmiPrvSE.exe) spiking the CPU utilization to 25% every few minutes. Any ideas why this is happening? I have an HP Envy 14 (w/ the HP bundled crap) running Windows 7 Home Premium. (Note: Based on @nhinkle's past observations, it seems that HP Wireless Manager might be the culprit; is there any way to confirm this?)

    Read the article

  • For an Intel CPU, if the chipset and motherboard are also from Intel then it will give best performance. Is it true?

    - by metal gear solid
    I'm going to purchase an Intel® Core™2 Duo Processor E7500 (3M Cache, 2.93 GHz, 1066 MHz FSB), and for the motherboard my local vendor is suggesting the Intel DG41RQ. He is also saying that if I'm purchasing an Intel CPU, then purchasing Intel's own motherboard with an Intel chipset will give the best performance. Is it true? To get good onboard graphics I'm thinking of purchasing an NVIDIA-chipset-based motherboard from another company like Asus, Gigabyte, MSI, etc. Is that OK? Although I never play games on my PC, I'm thinking the onboard NVIDIA graphics will be better for running Photoshop and watching movies than Intel's onboard graphics. Or is it OK to purchase the Intel DG41RQ motherboard as suggested by the local vendor? Would Intel's onboard graphics be enough for Photoshop and watching movies? If you know any other good motherboard for the Intel® Core™2 Duo Processor E7500 (3M Cache, 2.93 GHz, 1066 MHz FSB), please tell me.

    Read the article

  • How can I reduce the CPU usage of Offline Files?

    - by Diego
    Whenever I have the Offline Files service running, there is a constant 25% CPU usage on svchost.exe (this is a quad core, so that means it's using up one core). This, in turn, triples the power consumption and keeps the machine hot. I do have several GB synchronized (my music collection), but they are not changing at all on either side. Am I misusing this feature? Is there anything I can configure to keep it down when there's nothing to do? Or should I forget about it and synchronize the big folders manually?

    Read the article

  • Good free software to track and graph memory and CPU usage on Windows?

    - by TheLQ
    There are many questions on Linux memory tracking, but I haven't seen any for Windows. In my case, however, it's a Windows XP Pro box whose memory and CPU usage I need to track. The reason I need it is a server program I'm trying out that is eating all my processor and some of my memory, which freezes my RDP session and System Explorer and even makes it difficult to log in physically. As this is a very constrained server I'm working off of (768 MB RAM with a Pentium 4, which this program swallows), I need a program that doesn't run/require a webserver. I can give it a MySQL database if necessary, however. Are there any suggestions for such a program?
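
    No recommendation from the thread here, but as a sketch of a do-it-yourself fallback: a few lines of Python with the psutil package can log system-wide CPU and memory to a CSV file that any spreadsheet can graph, with no web server or database required. Whether a suitable Python/psutil build is practical on the XP-era box is an assumption on my part.

        import csv
        import time
        import psutil

        INTERVAL = 5          # seconds between samples
        OUTPUT = "usage.csv"  # open this in a spreadsheet to graph it

        with open(OUTPUT, "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "cpu_percent", "mem_used_mb", "mem_percent"])
            while True:
                cpu = psutil.cpu_percent(interval=INTERVAL)   # blocks for INTERVAL seconds
                mem = psutil.virtual_memory()
                writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                                 cpu,
                                 round(mem.used / (1024 * 1024), 1),
                                 mem.percent])
                f.flush()                                     # keep the file readable while it runs

    At a 5-second interval the logger itself adds essentially no load, which matters on a machine that is already starved for CPU and RAM.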

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is when shutting down a memory-intensive application: it can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and usually sorting by CPU activity shows which process is causing the problem, but sometimes there is no activity showing. And yes, I "show processes from all users". I have been wondering this since the days of Win2k :)

    Read the article

  • Why does my HDD produce a high-pitched noise when the CPU is in use?

    - by CyberOptic
    I know this is strange. Some time ago, I bought a new 7200 rpm HDD for my desktop system (I'll look for the model later). Every time the CPU is used, a high-frequency chirp comes from the HDD. I'm sure it's the HDD because the problem does not occur if the HDD is not attached or is in energy-saving mode (I cross-checked by booting from a live CD). What could be the reason for the chirping sounds? Could it be the power supply?

    Read the article
