Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.

Page 114/682 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully in Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale. My question is this: has anyone had real-world experience running a high-scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost seem to be lower than rolling lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated calibration of 10,000 CCs as a single 1.2 GHz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • How can I stop wmplayer.exe causing CPU Usage spikes?

    - by SwanWhisperer
    I've found that on a very fast new machine CPU usage runs between 0-8% normally, but with wmplayer running it hovers between 8-18%. The problem is specific to my new Windows 7 machine and doesn't occur on my old Vista machine. I believe it may be because every time I open wmplayer it tries to load every media file on my computer into the startup screen. Assuming I want to keep using wmplayer (and since I've got a lot of playlists set up there, I do), how can I fix this problem?

    Read the article

  • How to make Linux reliably boot on multi-CPU machines?

    - by Adam Tabi
    I've got two machines, one with 4x12 AMD Opteron cores (AMD Opteron(tm) Processor 6176) and one with 2x8 Xeon cores (HT disabled; Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz). On both machines I experience difficulties booting Linux with recent kernels. The system hangs during kernel initialization, before or just as the initramfs starts initializing the hardware. The last thing displayed is a stack trace like this:

        CPU: 31 PID: 0 Comm: swapper/31 Tainted: G D 3.11.6-hardened #11
        Hardware name: Supermicro X9DRT-HF+/X9DRT-HF+, BIOS 3.00 07/08/2013
        task: ffff880854695500 ti: ffff880854695a28 task.ti: ffff880854695a28
        RIP: 0010:[<ffffffff8100a82e>] [<ffffffff8100a82e>] default_idle+0x6/0xe
        RSP: 0000:ffff8808546b3ec8 EFLAGS: 00000286
        RAX: ffffffff8100a828 RBX: ffff880854695a28 RCX: 00000000ffffffff
        RDX: 0100000000000000 RSI: 0000000000000000 RDI: ffff88107fdec690
        RBP: ffff8808546b3ec8 R08: 0000000000000000 R09: ffff880854695500
        R10: ffff880854695500 R11: 0000000000000001 R12: ffff880854695a28
        R13: ffff880854695a28 R14: ffff880854695a28 R15: 0000000000000000
        FS: 0000000000000000(0000) GS:ffff88107fde0000(0000) knlGS:0000000000000000
        CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 000002b43256a960 CR3: 00000000016b5000 CR4: 00000000000607f0
        Stack:
         ffff8808546b3ed8 ffffffff8100aec9 ffff8808546b3f10 ffffffff8109ce25
         334ab55852ec7aef 000000000000001f ffffffff8102d6c0 0000000000000000
         0000000000000000 ffff8808546b3f48 ffffffff810276e0 ffff8808546b3f28
        Call Trace:
         [<ffffffff8100aec9>] arch_cpu_idle+0x20/0x2b
         [<ffffffff8109ce25>] cpu_startup_entry+0xed/0x138
         [<ffffffff8102d6c0>] ? flat_init_apic_ldr+0x80/0x80
         [<ffffffff810276e0>] start_secondary+0x2c9/0x2f8

    I compiled the kernel myself and it works fine if I boot with nolapic, but then only one core is used. The RHEL6 kernel also seems to work fine; I suspect it carries patches that make things work. Using the kernel config file from RHEL6 to build a more recent kernel yields the same problems. On the Xeon machine, things got better after disabling Hyperthreading completely: the machine now boots successfully at least 4 out of 5 times, and once it boots, multicore operation works just fine. However, I'm wondering what to do about the AMD machine. To sum it up:
    - Gentoo kernels 3.6 - 3.11 won't reliably boot these machines unless the number of active cores is reduced (e.g. via nolapic).
    - The RHEL6 kernel (which is 2.6.32) boots just fine.
    - The RHEL6 kernel config used to build a 3.x kernel won't yield a working kernel.
    - The issue is not distribution specific (apart from the kernel being used).
    These stack traces get printed every minute or so; the kernel seems to be stuck in an endless loop. Yet a recent kernel is needed for various reasons. So the question is: what does the RHEL6 kernel do that vanilla or Gentoo kernels don't? Is there a boot option that might lead to a reliable boot with all cores enabled? Best, Adam
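
    For readers who want to try the nolapic workaround mentioned above, here is a minimal sketch of passing a kernel boot parameter via GRUB 2; the file location and the command that regenerates the boot configuration vary by distribution:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet nolapic"
        # then regenerate the GRUB configuration, e.g.:
        #   update-grub                               (Debian/Ubuntu)
        #   grub2-mkconfig -o /boot/grub2/grub.cfg    (RHEL/Fedora-style)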

    Read the article

  • Has anyone seen .NET 4 RC MVC2 RTM web apps hogging CPU on Win2008 R2?

    - by kim3er
    We have a number of .NET4 RC ASP.NET MVC2 RTM web applications running on a Windows 2008 R2 server. All behave very well except one that we regularly find running at 99% CPU. It is the most complex of the applications, but is not doing anything extraordinary. It relies on ASP.NET Cache quite heavily, but we have limited the amount of memory it is allowed to use. Does this sound like an issue with the environment? Rich
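
    For reference, one way to cap ASP.NET Cache memory usage (the kind of limit described above) is through web.config; the values below are purely illustrative placeholders, not the poster's actual settings:

        <!-- web.config: limit ASP.NET cache memory (illustrative values) -->
        <system.web>
          <caching>
            <cache privateBytesLimit="838860800"
                   percentagePhysicalMemoryUsedLimit="50"
                   privateBytesPollTime="00:02:00" />
          </caching>
        </system.web>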

    Read the article

  • Do Hyper-V guests see multiple CPUs (sockets) or multiple CPU cores when assigned more than 1 vCPU?

    - by Filip Kierzek
    I have SQL Server 2008 Express running on a Hyper-V-based virtual machine with two vCPUs. I've just been reading up on SQL Server 2012 Express and noticed that its CPU support is "Limited to lesser of 1 Socket or 4 cores" (http://msdn.microsoft.com/en-us/library/cc645993(v=SQL.110).aspx). My question is how the SQL Server 2012 limits on CPUs/cores translate into vCPUs. Are they "processors" or are they "cores"?
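
    As a side note, one way to check how the guest actually enumerates its vCPUs is to query WMI from inside the VM; each socket appears as a separate Win32_Processor instance, with its core counts listed alongside:

        wmic cpu get DeviceID,NumberOfCores,NumberOfLogicalProcessors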

    Read the article

  • Does VMware ESX Fault Tolerance (FT) support depend on the CPU only?

    - by user71784
    I'm trying to find out whether VMware ESX 4.x Fault Tolerance (FT) is supported on a particular server and VMware's HCL is confusing me. It says that some servers with FT-supported processors (specifically the Xeon 3400 Lynnfield) do not support FT and some with almost identical specs (same chipset for instance) do support FT. Could this be a mistake on the HCL itself? To my understanding FT support is based only on the CPU. Thanks. RC

    Read the article

  • How to limit a process to a single CPU core?

    - by Jonathan
    How do you limit a single-process program running in a Windows environment so that it runs on only one CPU core of a multi-core machine? Is the approach the same for a windowed program and a command-line program? UPDATE: The reason for doing this is benchmarking various aspects of programming languages, so I need something that takes effect from the very start of the process; therefore @akseli's answer, although great for other cases, doesn't solve my case.
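
    One commonly used approach (assuming Windows 7 / Server 2008 R2 or later, where cmd's start command accepts an affinity mask) is to launch the program with the mask already applied, so it holds from the process's very first instruction; the executable name below is just a placeholder:

        start /affinity 1 myprogram.exe
        rem the mask is hexadecimal: 1 = CPU 0 only, 2 = CPU 1 only, 3 = CPUs 0 and 1

    The same command works whether the target is a windowed or a console program.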

    Read the article

  • AWS Free Usage Tier + Cloudflare... possible?

    - by crashintoty
    If I throw my MySQL/PHP app up on an Amazon EC2 instance (using their AWS Free Usage Tier program) and couple it with CloudFlare (the free plan, of course), roughly how many daily visitors can I comfortably handle before performance starts to suffer? Just looking for a rough estimate or educated guess - I understand this setup might be less than ideal, but I'm still very curious nonetheless. Thanks in advance

    Read the article

  • How can I sell HP and IBM server CPUs?

    - by elvayee
    I'm now working in a company exporting HP and IBM server CPUs. Our prices are very competitive and our quality is very high; we also have good after-sales service. The problem is that we don't have a paid B2B presence. How can I find customers? If anyone knows, please contact me by MSN: melodyhua123 AT hotmail dot com or elvayee123 at gmail dot com. Thanks!

    Read the article

  • What would be a quick fix in case of server downtime due to sudden high traffic?

    - by PMoubed
    Let's consider a scenario like the one below: a small blog built on the LAMP stack and deployed on shared hosting suddenly becomes popular in one day and gets a million hits per day. Since the developer had not planned for high traffic, this causes server downtime and crashes. What would be a quick fix for such a scenario? BTW, I know that on cloud servers such as Amazon EC2 I might be able to add more RAM or CPU to avoid this.

    Read the article

  • GPGPU

    What
    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why
    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").
    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU leveraging its power (and hence execute faster than they could on the CPU) while the host CPU program waits for the results or asynchronously performs other tasks.
    However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.

    How
    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.
    If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.
    On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.
    Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
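
    As a rough illustration of the kernel-plus-host pattern described above, here is a minimal sketch in CUDA C (one of the vendor-specific options mentioned in the post; a DirectCompute/HLSL compute shader follows the same structure but needs more setup code). It assumes an NVIDIA GPU with the CUDA toolkit installed, and the kernel body and buffer sizes are made up for the example:

        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        /* The kernel: runs on the GPU, one thread per element. */
        __global__ void scale(const float *in, float *out, int n, float factor)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = in[i] * factor;   /* the "costly operation" (trivial here) */
        }

        int main(void)
        {
            const int n = 1 << 20;
            size_t bytes = n * sizeof(float);

            /* Host-side input and output buffers. */
            float *h_in  = (float *)malloc(bytes);
            float *h_out = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i)
                h_in[i] = (float)i;

            /* Device buffers: input must be copied CPU -> GPU and results
               GPU -> CPU, which is the transfer overhead the post warns about. */
            float *d_in, *d_out;
            cudaMalloc((void **)&d_in, bytes);
            cudaMalloc((void **)&d_out, bytes);
            cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

            /* Launch the kernel; the host then waits for the result. */
            scale<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 2.0f);
            cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

            printf("out[42] = %f\n", h_out[42]);

            cudaFree(d_in);
            cudaFree(d_out);
            free(h_in);
            free(h_out);
            return 0;
        }

    The same division of labour applies in the other programming models: the host allocates buffers, copies the input across, launches the kernel, and copies the results back once the GPU is done.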

    Read the article

  • Does heavy use of libraries and code snippets make you a bad programmer?

    - by Henrik P.
    Overall I've been programming for about 8 years now, and it seems to me that I'm relying more and more on open source libraries and snippets (damn you GitHub!) to "get the job done". I know that in time I could write my own implementation, but I like to focus on the overall design. Is this normal (in a non-corporate environment)? Does it make you a bad programmer if "programming" is nothing more than gluing different libraries together? It feels like it. I know about "don't reinvent the wheel", but what happens when you don't invent a single wheel anymore? What's your take on this?

    Read the article

  • What is the most accurate and frequently updated report on browser usage on the Internet?

    - by Ryan Hayes
    I'm determining which browsers a new site should support. I'm looking for a respected and (as far as possible) accurate report on the browser versions that are currently in use. This report should, at minimum, cover the percentage of people who use each browser and each version of that browser. Is there a widely accepted source for this kind of report? If so, are the reports regularly released and available for free? Bonus points for other metrics such as breakdowns by OS, Flash versions, JS versions, etc.

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU.
    What I've tried so far:
    - All the latest stable Ubuntu releases in the period, plus one Linux Mint release.
    - All the latest stable AMD Catalyst Proprietary Display Drivers, currently 13.1.
    - The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions.
    - All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember).
    Results:
    - Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version).
    - Around 10 FPS in Team Fortress 2 using the official Steam client.
    - Choppy video playback, in Flash and with VLC.
    - CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC).
    - glxgears shows 12.5 FPS when maximized.
    - fgl_glxgears shows 10 FPS when maximized.
    Hardware details from lshw:
    - Motherboard ASUS P6X58D-E
    - CPU Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit)
    - 6 GB RAM
    - Video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)"
    - 2 x 1920x1200 monitors, both connected with HDMI
    I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual-monitor setup completely mess up the driver?

        $ fglrxinfo
        display: :0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: AMD Radeon HD 6900 Series
        OpenGL version string: 4.2.11995 Compatibility Profile Context

        $ glxinfo | grep 'direct rendering'
        direct rendering: Yes

    I am currently using the open source driver, with the following results:
    - Full frame rate and low CPU load when playing 1080p video.
    - Black screen (but music in the background) in Team Fortress 2.
    - Similar performance in Minecraft as with the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU.
    My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines:

        (WW) Falling back to old probe method for fglrx
        (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found

    The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI.
    Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall:
    - Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%.
    - Still 150% CPU use for 1080p video rendering on YouTube in Chromium.
    - Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter.
    - glxgears performance about doubled to 25-30 FPS when maximized.
    - fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Why should we use low-level languages if a high-level one like Python can do almost everything? [closed]

    - by killown
    I know Python is not suitable for things like microcontrollers, writing drivers, etc., but besides that you can do almost everything with Python. Companies get stuck on speed optimizations for hard real-time systems but forget other factors: you can often just upgrade your hardware to make your Python program fast enough. Think about how much it costs a company to maintain a system written in C. The comparison looks like this: ten programmers to maintain a system written in C versus just one programmer to maintain a system written in Python, and with Python you can buy better hardware to run your program on. I think that low-level languages tend to cost more, since programmers aren't as cheap as a hardware upgrade. So this is my point: why should a system be written in C instead of Python?

    Read the article

  • Can throwing the iPhone high in the air launch my app or trigger a desired function in iOS 7 or later?

    - by aMother
    My app is an emergency app. It will be used by people in emergencies and disasters. It's possible that they'll get stuck in situations where they just don't have the time to enter or draw their password, launch the app and push a button. Is it possible to ask the OS to launch the app if the user throws their iPhone up in the air, shakes it vigorously, or something similar? PS: I think it's possible with the accelerometer.

    Read the article

  • How can I create a partition without using a Live CD or USB?

    - by Ariel
    Is it possible to create a partition while the system is running? When I try to do it in GParted, the options seem to be disabled because the disk is mounted, and it cannot be unmounted because I am using it to run the system. I want to create a new partition without removing or affecting the existing file system; just create a new partition, but without using a Live CD or USB.

    Read the article
