Search Results

Search found 5756 results on 231 pages for 'cpu utilization'.

  • Is there any faster way to parse than by walking each byte?

    - by uray
    Is there any faster way to parse text than by walking each byte? I wonder whether there is any special CPU (x86/x64) instruction for string operations, used by string libraries, that could be exploited to optimize a parsing routine. For example, an instruction that finds a token in a string in hardware, instead of looping over each byte until the token is found.
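    There is: SSE4.2 added the PCMPISTRx string-compare instructions, which compilers expose as intrinsics and which scan 16 bytes at a time. Below is a minimal C sketch (an illustration, not a library-grade routine) that looks for the first byte from a delimiter set; note the stated assumption that the buffer may be read in whole 16-byte chunks:

        /* Build with: gcc -O2 -std=c99 -msse4.2 find_delim.c */
        #include <nmmintrin.h>   /* SSE4.2 intrinsics */
        #include <stdio.h>
        #include <string.h>

        /* Index of the first delimiter in s, or len if there is none.
           PCMPISTRI tests 16 bytes per iteration instead of one. */
        static size_t find_delim(const char *s, size_t len)
        {
            char delims[16] = " \t\n,;";                 /* NUL-padded set */
            __m128i set = _mm_loadu_si128((const __m128i *)delims);

            for (size_t i = 0; i < len; i += 16) {
                /* Assumption: s is allocated in multiples of 16 bytes,
                   so reading past len (up to 15 bytes) stays in bounds. */
                __m128i chunk = _mm_loadu_si128((const __m128i *)(s + i));
                int idx = _mm_cmpistri(set, chunk,
                                       _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY);
                if (idx < 16 && i + idx < len)
                    return i + idx;
            }
            return len;
        }

        int main(void)
        {
            const char text[32] = "parse this,quickly;please";
            size_t pos = find_delim(text, strlen(text));
            printf("first delimiter at byte %zu\n", pos);   /* prints 5 */
            return 0;
        }

    In practice the C library already exploits this: on mainstream platforms memchr, strchr, and strpbrk dispatch to vectorized implementations, so calling them usually beats a hand-written byte loop before you ever reach for intrinsics.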

  • How do I upgrade the BIOS to boot the motherboard when the CPU is not supported?

    - by Matt
    So I have a Tyan S8225 motherboard with a Valencia (Opteron 4200) CPU. Two of us have tried everything: we have swapped the whole motherboard, the power supply, and the memory, and even tried a different CPU (still a Valencia). We even tried it without any memory installed and with only a single CPU. There are no beep codes; I suspect even those are controlled by the BIOS boot process. We put the whole thing on the bench with just the power supply, VGA, keyboard, and network (for IPMI) connected. The IPMI works, but it does not even show the BIOS starting. After some hunting around, I discovered that the vendor's website claims Valencia CPUs are not supported on older BIOS revisions. I don't know what our BIOS revision is, or whether it counts as older, but it's the only possibility left. Could the BIOS cause the board not to boot at all? If so, is there any way to update the BIOS without buying an old CPU that would go straight back into a box once the update is done? Yes, we even tried updating the BIOS through IPMI, but that doesn't work either.

  • Processes sharing cores on Ubuntu system

    - by muckabout
    My coworkers and I share an 8-core server running Ubuntu for our batch processes. I tend to run 4 processes at a time, each of which consumes 100% of a core when nothing else is running. When a coworker runs his processes (typically about 4 at a time), each of his also gets 100% of a core. However, when both of us run ours (he always goes first), his still get 100% while mine seem to divide the remaining processing power and linger in the 10-40% range. I even reniced his processes to a lower value, and it did not change anything. What issues could cause this?
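    One way to take the scheduler's balancing decisions out of the diagnosis is to pin each batch process to its own core, so neither user's processes can crowd the other's off a CPU. A minimal C sketch using the Linux-specific sched_setaffinity() call (the same effect is available from the shell with taskset):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Pin the calling process to a single core (0-based index). */
        static int pin_to_core(int core)
        {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(core, &mask);
            return sched_setaffinity(0, sizeof(mask), &mask);  /* pid 0 = self */
        }

        int main(void)
        {
            if (pin_to_core(3) != 0) {   /* e.g. core 3 on an 8-core box */
                perror("sched_setaffinity");
                return 1;
            }
            printf("pid %d pinned to core 3\n", (int)getpid());
            /* ... run the batch workload here ... */
            return 0;
        }

    If the pinned processes still only reach 10-40%, the bottleneck is more likely shared I/O or memory bandwidth than CPU scheduling, which would also explain why renice made no difference.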

  • C# performance analysis: how to count CPU cycles?

    - by Lirik
    Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting:

        class PerformanceTest
        {
            static double last = 0.0;
            static List<object> numericGenericData = new List<object>();
            static List<double> numericTypedData = new List<double>();

            static void Main(string[] args)
            {
                double totalWithCasting = 0.0;
                double totalWithoutCasting = 0.0;
                for (double d = 0.0; d < 1000000.0; ++d)
                {
                    numericGenericData.Add(d);
                    numericTypedData.Add(d);
                }

                Stopwatch stopwatch = new Stopwatch();
                for (int i = 0; i < 10; ++i)
                {
                    stopwatch.Start();
                    testWithTypecasting();
                    stopwatch.Stop();
                    totalWithCasting += stopwatch.ElapsedTicks;

                    stopwatch.Start();
                    testWithoutTypeCasting();
                    stopwatch.Stop();
                    totalWithoutCasting += stopwatch.ElapsedTicks;
                }

                Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/10));
                Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting/10));
                Console.ReadKey();
            }

            static void testWithTypecasting()
            {
                foreach (object o in numericGenericData)
                {
                    last = ((double)o*(double)o)/200;
                }
            }

            static void testWithoutTypeCasting()
            {
                foreach (double d in numericTypedData)
                {
                    last = (d * d)/200;
                }
            }
        }

    The output is:

        Avg with typecasting = 468872.3
        Avg without typecasting = 501157.9

    I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?

  • ICC vs GCC - Optimization and CPU architecture

    - by Rayne
    Hi all, I'm interested in knowing how GCC differs from Intel's ICC in terms of optimization levels and catering to specific processor architectures. I'm using GCC 4.1.2 20070626 and ICC v11.1 for Linux. How do ICC's optimization levels (O1 to O3) differ from GCC's, if they differ at all? ICC can target specific architectures (IA-32, Intel 64, and IA-64). I've read that GCC has the -march compiler option, which I think is similar, but I can't find a list of the values it accepts. I'm using an Intel Xeon X5570, which is 64-bit. Are there any other GCC compiler options I could use to tune my applications for 64-bit Intel CPUs? Thank you. Regards, Rayne

  • WPF Application Typing in Custom TextBox CPU Jumping from 3 to 80 percent

    - by azamsharp
    I have created a RichTextBox called SharpTextBox which indicates and limits the number of characters that can be typed into it. The implementation is shown at the following link: http://www.highoncoding.com/Articles/673_Creating_SharpRichTextBox_for_Live_Character_Count_in_WPF.aspx Anyway, when I start typing in the TextBox, CPU usage goes from 3% to 78%. The TextBox updates a Label control that shows the number of characters remaining. How can I improve the TextBox's performance? UPDATE: I have read that there seems to be a problem with the TextRange.Text property that kills performance.

  • Minimal "Task Queue" with stock Linux tools to leverage Multicore CPU

    - by Manuel
    What is the best/easiest way to build a minimal task queue system for Linux using bash and common tools? I have a file with 9'000 lines; each line holds a bash command line, and the commands are completely independent:

        command 1 > Logs/1.log
        command 2 > Logs/2.log
        command 3 > Logs/3.log
        ...

    My box has more than one core and I want to execute X tasks at the same time. I searched the web for a good way to do this. Apparently, a lot of people have this problem, but nobody has a good solution so far. It would be nice if the solution had the following features:

    - can interpret more than one command per line (e.g. command; command)
    - can interpret stream redirects on the lines (e.g. ls > /tmp/ls.txt)
    - only uses common Linux tools

    Bonus points if it works on other Unix clones without too exotic requirements.

  • MKMapView CPU usage on iPhone 3G

    - by mozveren
    Hello all, I have some trouble with the iPhone 3G and MKMapView. After a certain random time, my application freezes. When I profile it with the Performance tools, I can see that the application spends a lot of time retrieving map tiles from the cache:

        19.6  7006  Foundation  +[NSURLConnection(NSURLConnectionReallyInternal)
        18.5  6613  GMM         GMM::TileCachePrivate::runCacheThread()

    It seems that the MKMapView component launches several threads to load the tiles into the cache. How can I avoid this behavior? The problem does not seem to occur on the 3GS. Thanks

  • C#: Perform Operations on GPU, not CPU (Calculate Pi)

    - by Alex
    Hello, I've recently read a lot about software (mostly scientific/math and encryption related) that moves part of its calculations onto the GPU, which yields a 100-1000-fold (!) increase in speed for supported operations. Is there a library, API, or other way to run something on the GPU from C#? I'm thinking of a simple Pi calculation. I have a GeForce 8800 GTX, if that's relevant at all (I would prefer a card-independent solution, though). Any hints are appreciated!

  • Measure CPU performance via JS

    - by Nicholas Kyriakides
    A webapp has as its central component a relatively heavy algorithm that handles geometric operations. There are two options for making the whole thing accessible from both high-end machines and relatively slow mobile devices: I will use RPCs if I detect that the user's machine is "slow"; otherwise, if I detect that the machine can handle it, I provide the webapp with the script to run client side. Now, what would be a reliable way to detect the speed of the user's machine? I was thinking of running a sample script when the page loads and measuring how long it takes to execute. Any ideas?

  • Distributing cpu-bound compression jobs to multiple computers?

    - by barnaby
    The other day I needed to archive a lot of data on our network, and I was frustrated that I had no immediate way to harness the power of multiple machines to speed up the process. I understand that creating a distributed job-management system is a leap from a command-line archiving tool. I'm now wondering what the simplest solution to this type of distributed performance scenario could be. Would a custom tool always be a requirement, or are there ways to use standard utilities and somehow distribute their load transparently at a higher level? Thanks for any suggestions.

  • CPU emulator in C for assembler

    - by krlzx00
    Hi there. I have a problem. I'm working on a small security application, and I receive an array of bytes that can be interpreted as assembler code. My question is: does anyone know of a library I can use in my application to execute these bytes and show what they do, or something like that?
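    Executing untrusted bytes safely means emulating them rather than running them on the real CPU; a disassembler library (e.g. udis86 for x86) can at least show what the bytes would do. The heart of any such tool is a fetch-decode-execute loop. Here is a toy C sketch for an invented one-byte instruction set, purely to illustrate the structure; real x86 decoding is far more involved:

        #include <stdint.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Toy VM: one accumulator, 1-byte opcodes, optional immediate byte. */
        enum { OP_HALT = 0x00, OP_LOAD = 0x01, OP_ADD = 0x02, OP_PRINT = 0x03 };

        static void run(const uint8_t *code, size_t len)
        {
            int32_t acc = 0;
            size_t pc = 0;                       /* program counter */
            while (pc < len) {
                uint8_t op = code[pc++];         /* fetch */
                switch (op) {                    /* decode + execute */
                case OP_HALT:  return;
                case OP_LOAD:  acc  = code[pc++]; break;
                case OP_ADD:   acc += code[pc++]; break;
                case OP_PRINT: printf("acc = %d\n", acc); break;
                default:
                    fprintf(stderr, "illegal opcode 0x%02x at %zu\n", op, pc - 1);
                    return;
                }
            }
        }

        int main(void)
        {
            const uint8_t program[] = { OP_LOAD, 40, OP_ADD, 2, OP_PRINT, OP_HALT };
            run(program, sizeof program);        /* prints "acc = 42" */
            return 0;
        }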

  • Getting the alternative to the 200-Line Linux Kernel patch to work

    - by Gödel
    Apparently, there is a comparable alternative to the 200-line kernel patch that involves no kernel upgrade. It is presented here and discussed here. However, I am not sure whether webupd8's solution (under the section "Use it in Ubuntu") actually works on Ubuntu. In particular, one commenter on ./ says he gets an error message. Could anyone post the "correct" method that actually works?

    Suggested solution: Based on the comments I've read so far, the following seems to work.

    (1) In /etc/rc.local, add the following lines above exit 0:

        mkdir -p /dev/cgroup/cpu
        mount -t cgroup cgroup /dev/cgroup/cpu -o cpu
        mkdir -m 0777 /dev/cgroup/cpu/user
        echo "/usr/local/sbin/cgroup_clean" > /dev/cgroup/cpu/release_agent

    (2) Create a file named /usr/local/sbin/cgroup_clean with the following content:

        #!/bin/sh
        rmdir /dev/cgroup/cpu/$1

    (3) In your ~/.bashrc, add:

        if [ "$PS1" ] ; then
            mkdir -m 0700 /dev/cgroup/cpu/user/$$
            echo $$ > /dev/cgroup/cpu/user/$$/tasks
            echo "1" > /dev/cgroup/cpu/user/$$/notify_on_release
        fi

    (4) Make sure the execute bit is set:

        sudo chmod +x /usr/local/sbin/cgroup_clean /etc/rc.local

    (5) Reboot.

  • Running a 64-bit Ubuntu distribution from 32-bit Ubuntu

    - by csg
    Related to the question "How do I run qemu with 64bit processor on a 64bit machine?", I'm trying to run the latest Ubuntu 11.10 64-bit distribution under Ubuntu 11.04 32-bit, using qemu on a Core 2 Duo (64-bit CPU) machine, with the following qemu parameters, with no success. The error under qemu:

        This kernel requires an x86-64 CPU, but only detected an i686 CPU.
        Unable to boot - please use a kernel appropriate for your CPU.

    Isn't qemu supposed to emulate a 64-bit machine? I think I'm missing something, but I can't figure out what.

        qemu -cpu (kvm64|core2duo|qemu64) -boot d -cdrom ubuntu-11.10-desktop-amd64.iso
        qemu-system-x86_64 -boot d -cdrom ubuntu-11.10-desktop-amd64.iso

    Here is my uname -m:

        i686

    Here is my /proc/cpuinfo:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU P8400 @ 2.26GHz
        stepping        : 6
        cpu MHz         : 800.000
        cache size      : 3072 KB
        physical id     : 0
        siblings        : 2
        core id         : 1
        cpu cores       : 2
        apicid          : 1
        initial apicid  : 1
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 10
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 4522.45
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background:
    I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far, though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions:
    The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question:
    Strictly speaking, and preferably without assuming any particular CPU, is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not yet have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References:
    These are the related articles and books I've been reading (posted as plain text because new users aren't allowed to format links, sorry):

    - Shared Memory Consistency Models: A Tutorial (hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf)
    - Memory Barriers: a Hardware View for Software Hackers (rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf)
    - Linux Kernel Memory Barriers (kernel.org/doc/Documentation/memory-barriers.txt)
    - Computer Architecture: A Quantitative Approach, 4th ed. (amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk)
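    For what it's worth, POSIX answers this for pthreads: pthread_create() is among the functions the standard requires to synchronize memory (see the "Memory Synchronization" section of POSIX), so writes made before the call are visible to the new thread without an explicit barrier; propagating the invalidation is the coherence protocol's job. A minimal C sketch of the scenario under that assumption:

        /* Build with: gcc -O2 demo.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        int shared_var = 0;

        static void *my_thread(void *arg)
        {
            (void)arg;
            /* Writes made before pthread_create() are visible here,
               so this prints 1. */
            printf("%d\n", shared_var);
            return NULL;
        }

        int main(void)
        {
            pthread_t t;
            shared_var = 1;          /* synchronized by the create call */
            pthread_create(&t, NULL, my_thread, NULL);
            pthread_join(t, NULL);
            return 0;
        }

    Without such a synchronizing call (say, if the thread were already running and polling sharedVar), the concern raised in the question is real, and a memory barrier or atomic access would be needed.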

  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for a hard-disk and CPU combination:

        DWORD hw_hash()
        {
            char drv[4];
            char szNameBuffer[256];
            DWORD dwHddUnique;
            DWORD dwProcessorUnique;
            DWORD dwUniqueKey;

            char *sysDrive = getenv("SystemDrive");
            strcpy(drv, sysDrive);
            drv[2] = '\\';
            drv[3] = 0;

            GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique,
                                 NULL, NULL, NULL, NULL);

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture
                              + si.wProcessorRevision;

            dwUniqueKey = dwProcessorUnique + dwHddUnique;
            return dwUniqueKey;
        }

    It returns different numbers if I format my hard disk and install a new Windows. Any ideas why? Thank you.

    Edit: OK, got it: "This function returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber." I should do more research before posting my problems online. Sorry to bother you; let's keep this here in case anybody else needs it.

  • Why is this JLabel continuously repainting?

    - by Morinar
    I've got an item that appears to repaint continuously whenever it exists, causing the CPU to spike whenever it is in any of my windows. It inherits directly from JLabel and, unlike the other JLabels on the screen, it has a red background and a border. I have NO idea why it would be different enough to repaint continuously. The call stack looks like this:

        Thread [AWT-EventQueue-1] (Suspended (breakpoint at line 260 in sItem))
            sItem.paint(Graphics) line: 260
            sItem(JComponent).paintToOffscreen(Graphics, int, int, int, int, int, int) line: 5124
            RepaintManager$PaintManager.paintDoubleBuffered(JComponent, Image, Graphics, int, int, int, int) line: 1475
            RepaintManager$PaintManager.paint(JComponent, JComponent, Graphics, int, int, int, int) line: 1406
            RepaintManager.paint(JComponent, JComponent, Graphics, int, int, int, int) line: 1220
            sItem(JComponent)._paintImmediately(int, int, int, int) line: 5072
            sItem(JComponent).paintImmediately(int, int, int, int) line: 4882
            RepaintManager.paintDirtyRegions(Map<Component,Rectangle>) line: 803
            RepaintManager.paintDirtyRegions() line: 714
            RepaintManager.seqPaintDirtyRegions() line: 694 [local variables unavailable]
            SystemEventQueueUtilities$ComponentWorkRequest.run() line: 128
            InvocationEvent.dispatch() line: 209
            summitEventQueue(EventQueue).dispatchEvent(AWTEvent) line: 597
            summitEventQueue(SummitHackableEventQueue).dispatchEvent(AWTEvent) line: 26
            summitEventQueue.dispatchEvent(AWTEvent) line: 62
            EventDispatchThread.pumpOneEventForFilters(int) line: 269
            EventDispatchThread.pumpEventsForFilter(int, Conditional, EventFilter) line: 184
            EventDispatchThread.pumpEventsForHierarchy(int, Conditional, Component) line: 174
            EventDispatchThread.pumpEvents(int, Conditional) line: 169
            EventDispatchThread.pumpEvents(Conditional) line: 161
            EventDispatchThread.run() line: 122 [local variables unavailable]

    It basically just hits that over and over again, as fast as I can press continue. The code that is "unique" to this particular label looks approximately like this:

        bgColor = OurColors.clrWindowTextAlert;
        textColor = Color.white;
        setBackground(bgColor);
        setOpaque(true);
        setSize(150, getHeight());

        Border border_warning = BorderFactory.createCompoundBorder(
            BorderFactory.createMatteBorder(1, 1, 1, 1, OurColors.clrXBoxBorder),
            Global.border_left_margin);
        setBorder(border_warning);

    It obviously does more, but this particular block only exists for the labels that are causing the spike/continuous repaint. Any ideas why it would keep repainting this particular label?
