Search Results

Search found 5572 results on 223 pages for 'cpu'.


  • Semantic is consuming all CPU, causing emacs to hang

    - by Cheeso
    I upgraded to emacs 23.2.1 on Windows 7 not long ago. Since then I've been unable to use Semantic. As soon as I start it, the CPU goes to max (actually, Windows reports it at 50%, but this is a dual-core machine, so emacs is effectively consuming 100% of one core). Emacs becomes non-responsive. Is there a particular combination of semantic and emacs versions that is unsafe to use together? How would I debug this spin/hang? I've seen suggestions elsewhere to change semantic-idle-scheduler-idle-time from its default of 2 to something very large. I tried that, but got the same results.

    Read the article

  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    OK, our new build is having 100% CPU spikes on each server at random intervals. For long durations it makes the site totally unresponsive - and this happens at peak times, as people in different countries log on to the site, etc. We've looked at perfmon, memory profilers, the CLR profiler, SQL profilers, Red Gate ANTS profiler, and tried load testing in UAT - but we cannot even reproduce the problem. That could mean it only happens when thousands of users hit the live site. One pattern we did notice is that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact - so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?

    Read the article

  • Brand New Monitor Won't Get Input From CPU

    - by HollerTrain
    I have an old Dell 2350. I found a HD a few months ago, plugged it in, and it booted up. So I just purchased a monitor, plugged it in, put the HD inside, and connected the two connections (the bus cable and a four-pronged power connector), but no signal is being sent to the monitor. Is this a monitor issue, or is the HD just dead? The HD is spinning. The CPU is turned on :) Yet no signal... any thoughts?

    Read the article

  • Fortigate 200A firewall high CPU usage

    - by user119720
    This morning I received complaints from several end users saying that their whole department's network is slow and intermittent. So I checked our firewall to see whether something was wrong with the device. From what I observe in the FortiGate dashboard status, the CPU usage is very high (99 percent). My first thought was to clear the log, since the alert log mentions that it is already 90% full. As I understand it, the log can be cleared by restarting the firewall. After restarting the firewall the network seemed okay again, but after several minutes the CPU usage went up again, and the condition still persists now. Can someone show me where else I can check to fix this issue? I'd really appreciate any help I can get here. Thanks. Edit: diag sys top command

    Read the article

  • Mac OS X Server 10.6.6 "disables" CPU in VMware Fusion

    - by wjlafrance
    Hello! I installed Mac OS X Server 10.6.0 in VMware Fusion the other night and it worked perfectly, until I ran Software Update. I upgraded to 10.6.6 through the combo updater, and now when I start the VM it says: "The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point." I've switched the operating system in the options to OS X Server 32-bit, 64-bit, and even to Windows 7, and nothing has worked. Does anyone have any ideas?

    Read the article

  • CPU Affinity on ARM processors

    - by dsljanus
    I am using some Raspberry Pi boards for a data acquisition system. They are nice boards, with plenty of community support around them, but they are really slow. I am thinking of gradually replacing them with ODROID multicore boards built around Samsung Exynos processors. I have some experience using taskset to set CPU affinity on my servers, because I am always running Node.js applications that are by definition single-threaded. Now, is it possible to do this on an ARM board? I do not see why it would not work in theory, but I have doubts about how well it is going to work. Does anyone have experience with this kind of hack? Also, I would appreciate any comments about ARM CPUs and how they differ from x86.
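
    For reference, the Linux CPU-affinity interface is architecture-independent, so the same approach used on x86 servers should carry over to an ARM board. Below is a minimal, hedged sketch (assuming Python 3 on Linux; "app.js" is a hypothetical stand-in for the single-threaded Node.js worker) that pins a child process to one core, much like taskset -c 0 would:

        import os
        import subprocess

        def pin_to_cpu(pid, cpu):
            # The affinity syscalls are generic Linux interfaces, so this
            # should behave the same on ARM as it does on x86.
            os.sched_setaffinity(pid, {cpu})

        if __name__ == "__main__":
            proc = subprocess.Popen(["node", "app.js"])   # hypothetical worker
            pin_to_cpu(proc.pid, 0)                       # pin it to core 0
            print("pinned", proc.pid, "to", os.sched_getaffinity(proc.pid))
            proc.wait()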

    Read the article

  • How to understand cpu family/model/stepping fields in /proc/cpuinfo [closed]

    - by Victor Sorokin
    I have the following in cpuinfo:

        processor    : 0
        vendor_id    : AuthenticAMD
        cpu family   : 15
        model        : 107
        model name   : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
        stepping     : 2

    According to the Wikipedia page there are two kinds of 5600+ -- one built on a 90nm process, the other on 65nm. How can I tell which one I have? There seems to be no direct correspondence between the contents of cpuinfo and the info on the Wikipedia page, and AMD's site seems to use yet another naming scheme for its processors. How can I map the values of family, model and stepping from cpuinfo to the data available from Wikipedia/AMD?
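
    One detail that often helps when cross-referencing: AMD's revision guides (and many third-party CPU databases) tend to identify cores by family/model/stepping written in hexadecimal, while /proc/cpuinfo prints them in decimal. A small helper like the hedged sketch below (plain Python, assuming the field names shown above) prints both forms, e.g. model 107 becomes 0x6B, which is easier to look up:

        def cpu_ids(path="/proc/cpuinfo"):
            ids = {}
            with open(path) as f:
                for line in f:
                    key, _, value = line.partition(":")
                    key = key.strip()
                    if key in ("cpu family", "model", "stepping"):
                        ids[key] = int(value)
                    if len(ids) == 3:
                        break   # only need the first processor's entries
            return ids

        if __name__ == "__main__":
            for key, value in cpu_ids().items():
                print("%-10s : %3d (0x%02X)" % (key, value, value))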

    Read the article

  • Rack processes taking over CPU under Passenger

    - by pjmorse
    I have a Spree site running the following stack: Nginx 1.0.8, Passenger 3.0.9, Ruby 1.9.2-p290, Rack 1.3.6, Rails 3.1.4, Spree 0.70.5. I recently upgraded from Spree 0.70.3, which also brought a Deface upgrade from 0.7.x to 0.8.0. Since then things have been very unstable. Recently we've seen some CPU-hogging processes which drive load up on the server and grind the whole thing to a stop. They're Rack processes and it looks like Passenger is starting them; they're owned by the site-runner user, an unprivileged user who owns the application code. (Passenger automatically runs the site code as the user who owns it.) If I restart Nginx and kill the runaway processes, it helps for a while, but eventually similar processes return and bog things down again. How can I figure out what's starting these processes, what they're trying to do, and how to stop them?

    Read the article

  • How do I stop someone from saturating my line & wasting CPU cycles

    - by JoshRibs
    My web host shows inbound & outbound traffic with mrtg. I have a steady 3.5 Mbps of inbound traffic from Nigeria. Even if the source IPs and destination ports are blocked with iptables, and I've verified nothing is listening on those ports, will the traffic still always pass through the switch and "get" to my server (where my server wastes CPU cycles "dropping" the packets)? If I were set up with a hardware firewall, would the traffic still show in mrtg, assuming the firewall is behind the switch? So is there any way to stop someone from saturating your 100 Mbps line if they also have a 100 Mbps line? Other than filing an abuse complaint with the kind folks in Nigeria?

    Read the article

  • HP 4530s: Fan is Always On even When Laptop/CPU is Idle

    - by tolitius
    Just bought an HP 4530s from Newegg. The laptop is great, but... the fan is always on. I did some googling and found that it is a known problem, but there is no easily googlable solution. Things I have tried:
    - Updated the BIOS to the latest version
    - Disabled the "CPU fan always on when plugged in" BIOS setting
    - Installed Windows 7 Home (which it came with), live-CDed Ubuntu, and Windows XP
    - Spent 2 hours with horrible HP support
    - Some other things that I can't recall = spent too much time on it
    The laptop is not refundable (learned that the hard way, after the fact, by looking at Newegg's clever policy hidden in the "details"). I would really appreciate a workable solution / workaround / hack. The laptop is for my friend, who will most likely be running Windows XP/7.

    Read the article

  • What is good RAM for my CPU?

    - by Jason94
    I'm going to upgrade my RAM, but I have no clue what to look for. There is cheap RAM and there is expensive RAM. What should I look for? What are good attributes for an Intel i5? The i5 CPU and motherboard support 1600 MHz (PC3-12800), but there is still too much to choose from. There are:
    - CL10 (10-10-10-27) @ 1.5V
    - CL11 @ 1.35V
    - CL9 (9-9-9-24) @ 1.5V
    - CL9 (9-9-9-24) @ 1.65V
    Is higher CL better? Is CL a classification? And I guess a higher voltage is also good, since that's what overclockers do to the RAM (increase the voltage)?

    Read the article

  • Mac OS X Server 10.6.6 "disables" CPU in VMware Fusion [closed]

    - by wjlafrance
    Hello! I installed Mac OS X Server 10.6.0 in VMware Fusion the other night and it worked perfectly, until I ran Software Update. I upgraded to 10.6.6 through the combo updater, and now when I start the VM it says: "The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point." I've switched the operating system in the options to OS X Server 32-bit, 64-bit, and even to Windows 7, and nothing has worked. Does anyone have any ideas?

    Read the article

  • MySQL process goes over 100% CPU usage

    - by Temnovit
    Hello! I'm experiencing some problems with my LAMP server. Recently everything became very slow, even though the visitor count on my websites didn't change much. When I run the top command, it says that the mysql process has taken 150-200% of the CPU. How is that possible? I always thought that 100% was the maximum. I'm running Ubuntu 9.04 server edition with 1.5 GB of RAM. My my.cnf settings:
        key_buffer = 64M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 200
        table_cache = 512
        table_definition_cache = 512
        thread_concurrency = 2
        read_buffer_size = 1M
        sort_buffer_size = 4M
        join_buffer_size = 1M
        query_cache_limit = 1M    # the maximum size of individual query results
        query_cache_size = 128M
    Here is the output of MySQLTuner and the top command: (screenshots not included). What could be the cause of this problem? Can I make changes to my.cnf to prevent the server from hanging?

    Read the article

  • MSSQL 2005 Snapshot Agent Uses 100% CPU

    - by matth1jd
    When setting up a new subscription to a publication (transactional replication) from 64-bit SQL Server 2005 to 64-bit SQL Server 2005, the Snapshot Agent on the publisher consumes 100% of the CPU. I am using SSMS to create the new subscription. My initial impression is that this could be caused by row locking occurring during the generation of the snapshot, but I have read that a concurrent snapshot is generated by default in SQL Server 2005 and that row locking shouldn't be a concern. As this is a production server, I would like to be able to initialize replication without bringing the box to its knees. Any suggestions would be helpful and appreciated.

    Read the article

  • sqlserver.exe uses 100% CPU

    - by Markus
    I've created an ASP.NET application that syncs an entire database from XML files once a day. The sync first opens a transaction, then clears the database's tables, and then starts parsing and inserting the new rows into the database. When all the parsing is complete, it commits the transaction. This works fine on SQL Server 2005 (on another machine), but on SQL Server 2005 Express the process starts to use 100% CPU after a while, and as I log the inserts being made I can see that it just stops inserting. No exception - it just stops inserting. Anyone got any idea what this may be? I've previously run the synchronization on another SQL Server 2005 Express instance (also on another computer), and that worked. The server has only 2 GB of RAM - could this be the problem?

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background
    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:
    - shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles),
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example), and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).
    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management
    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust the distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost". Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art
    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that is needed to remove the bottleneck?

    Actual workload management
    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications
    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary
    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
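
    As a rough illustration of the idea described above - hold the service level objective constant and let a feedback loop adjust the resource allocation - here is a deliberately simplified sketch in Python. It is not the patented implementation; measure_response_time() and set_cpu_shares() are hypothetical stand-ins for DTrace-style instrumentation and the OS resource-manager interface:

        import time

        TARGET_MS = 200              # service level objective: response time in ms
        STEP = 10                    # shares to add or remove per adjustment
        MIN_SHARES, MAX_SHARES = 10, 1000

        def workload_manager(measure_response_time, set_cpu_shares, shares=100):
            while True:
                observed = measure_response_time()   # externally visible metric
                if observed > TARGET_MS:
                    # Not meeting the objective: give the workload more CPU.
                    shares = min(MAX_SHARES, shares + STEP)
                elif observed < 0.8 * TARGET_MS:
                    # Comfortably ahead of the objective: give some CPU back.
                    shares = max(MIN_SHARES, shares - STEP)
                set_cpu_shares(shares)
                time.sleep(5)                        # re-evaluate periodically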

    Read the article

  • Daemon program that uses select() inside infinite loop uses significantly more CPU when ported from Solaris to Linux

    - by Jake
    I have a daemon app written in C that is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux. I have had to make minimal changes. During testing it passes all test cases, and there are no issues with its functionality. However, when I view its CPU usage while 'idle' on my Solaris machine, it is using around 0.03% CPU. On the virtual machine running Red Hat Enterprise Linux 4.8, that same process uses all available CPU (usually somewhere in the 90%+ range). My first thought was that something must be wrong with the event loop. The event loop is an infinite loop ( while(1) ) with a call to select(). The timeval is set up so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped timeval.tv_sec to 1, but even after doing that I saw the same issue. Is there something I am missing about how select works on Linux vs. Unix? Or does it work differently with an OS running on a virtual machine? Or maybe there is something else I am missing entirely? One more thing: I am not sure which version of VMware Server is being used. It was just updated about a month ago, though. Thanks.

    Read the article

  • File transfer eating a lot of CPU

    - by Dan C.
    I'm trying to transfer a file through an IHttpHandler; the code is pretty simple. However, when I start a single transfer it uses about 20% of the CPU. If I were to scale this to 20 simultaneous transfers, the CPU usage would be very high. Is there a better way I can do this to keep the CPU usage lower? The client code just sends over chunks of the file, 64 KB at a time.

        public void ProcessRequest(HttpContext context)
        {
            if (context.Request.Params["secretKey"] == null)
            {
            }
            else
            {
                accessCode = context.Request.Params["secretKey"].ToString();
            }

            if (accessCode == "test")
            {
                string fileName = context.Request.Params["fileName"].ToString();
                byte[] buffer = Convert.FromBase64String(context.Request.Form["data"]);
                string fileGuid = context.Request.Params["smGuid"].ToString();
                string user = context.Request.Params["user"].ToString();
                SaveFile(fileName, buffer, user);
            }
        }

        public void SaveFile(string fileName, byte[] buffer, string user)
        {
            string DirPath = @"E:\Filestorage\" + user + @"\";
            if (!Directory.Exists(DirPath))
            {
                Directory.CreateDirectory(DirPath);
            }

            string FilePath = @"E:\Filestorage\" + user + @"\" + fileName;
            FileStream writer = new FileStream(FilePath,
                File.Exists(FilePath) ? FileMode.Append : FileMode.Create,
                FileAccess.Write, FileShare.ReadWrite);
            writer.Write(buffer, 0, buffer.Length);
            writer.Close();
        }
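
    For context, here is a rough sketch of what the client side described above ("sends over chunks of the file 64KB at a time") might look like. The field names (secretKey, fileName, data, smGuid, user) come from the handler code; the handler URL, the choice of Python, and the requests library are assumptions made purely for illustration:

        import base64
        import os
        import uuid
        import requests

        HANDLER_URL = "http://example.com/upload.ashx"   # hypothetical endpoint
        CHUNK_SIZE = 64 * 1024                           # 64 KB per request

        def upload(path, user, secret="test"):
            sm_guid = str(uuid.uuid4())
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    # Each chunk is base64-encoded and appended server-side,
                    # mirroring the FileMode.Append logic in SaveFile().
                    requests.post(HANDLER_URL, data={
                        "secretKey": secret,
                        "fileName": os.path.basename(path),
                        "data": base64.b64encode(chunk).decode("ascii"),
                        "smGuid": sm_guid,
                        "user": user,
                    })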

    Read the article

  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program and have a GUI running in Tkinter. To check when new messages have arrived, I create a new thread, so Tkinter keeps doing its thing without locking up while the new thread grabs what I need and updates the Tkinter window. This, however, becomes a huge CPU hog, and my guess is that it has something to do with the thread being started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done, and by itself it does not use too much processing power: when I run it unthreaded it doesn't take up much CPU, but it locks up Tkinter). Note: this is inside a class, hence the extra indentation.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000, self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog' + str(myport), 'r')
                temp = f.readline().replace('\n', '')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n', '')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber += 1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n', '')
                f.close()
            except IOError:
                # the log file may not exist yet
                pass

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?

    Read the article

  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set CPU affinity for running processes. I'm recreating the built-in "shield" function manually with set and proc, to add some subsets for specific threads of my application. I have a bash script that calls cset to create the sets and move the correct threads into the correct sets. It works when run with sudo. Now I'd like to make this script executable by another user, who does not have sudo powers. I trust this user enough to be responsible with cset, but I don't want to open up the wide powers of root. I thought that CAP_SYS_NICE -- which is needed for sched_setaffinity, which I just assume cset must use -- on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin Python wrapper around the cset Python library). No dice. The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?

    Read the article

  • Replacing stock Core 2 Duo heatsink fan (just the fan really) with a Dell CPU fan

    - by user647345
    My old heatsink fan broke and I'm trying to reconnect its plug to a new fan. My Dell CPU fan has some custom Dell plug, so I snipped the old fan's wire in half and kept the plug on the end of it. I want to connect the Dell fan's wire to that plug. The motherboard is a P5Q-E; the stock Core 2 Duo fan was 0.20 A and the Dell fan is 0.70 A. Is that going to matter? The cable from the fan has four wires, and the cable with the plug has four wires. They share three of the four colors: red, black, and blue. Dell's fourth wire is white, while the plug's fourth wire is yellow. Is it safe to assume that I can just connect the yellow and white wires together and match the rest up? I don't want to take any risk of damaging anything. It runs fine passively without a fan, but I have SpeedStep on, so I would like to use this fan and just fasten it to the heatsink with some twist ties and paperclips and call it a day.

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. The server is:

        Intel(R) Xeon(TM) MP CPU 3.16GHz

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
            description: RAID bus controller
            product: MegaRAID
            vendor: LSI Logic / Symbios Logic
            physical id: 7
            bus info: pci@0000:13:07.0
            logical name: scsi2
            version: 01
            width: 32 bits
            clock: 66MHz
            capabilities: storage pm bus_master cap_list rom
            configuration: driver=megaraid latency=32
            resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x148Gb SCSI U320 15k in RAID5:

        /dev/sdb1        807G  674G   93G  88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig
        bond0     Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
                  inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
                  inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                  UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                  RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                  TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:10000
                  RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server.

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    It is Debian 6. Here is what happens: once per day or two, the load average goes up and can reach around 40. If I do an nfs-kernel-server restart, everything is OK, but on the next day (or a little bit later) the load average goes up again, and the condition still persists now. The servers are connected to a D-Link DGS 1016D with 24 GBit ports. I have tried everything to find out what the problem is and why it is happening, but I still cannot resolve this issue. Any ideas on what is happening here?

    Read the article

  • High load average due to high system cpu load (%sys)

    - by Nick
    We have a server with a high-traffic website. Recently we moved from a 2 x 4 core server (8 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 5.x, to a 2 x 4 core server (16 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 6.3. The server runs nginx as a proxy, a MySQL server, and sphinx-search. Traffic is high, but the MySQL and sphinx-search databases are relatively small, and usually everything works blazing fast. Today the server experienced a load average of 100+. Looking at top and sar, we noticed that system CPU time (%sys) is very high - 50 to 70%. Disk utilization was less than 1%. We tried to reboot, but the problem persisted after the reboot. At any moment the server had at least 3-4 GB of free RAM. The only message shown by dmesg was "possible SYN flooding on port 80. Sending cookies.". Here is a snippet of sar:

        11:00:01     CPU   %user   %nice   %system   %iowait   %steal   %idle
        11:10:01     all   21.60    0.00     66.38      0.03     0.00    11.99

    We know that this is a traffic issue, but we do not know how to proceed further or where to look for a solution. Is there a way we can find out where exactly that 66.38% is being spent? Any suggestions would be appreciated.

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin between different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
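
    If nothing ready-made fits, the "load-balancing tee" idea above is also straightforward to approximate with a process pool: lines are fanned out to workers and the results merged back into a single ordered stream. This is only a hedged sketch (Python 3; transform() stands in for the per-line sed expression, and the chunk size is a guess to be tuned):

        import sys
        from multiprocessing import Pool

        def transform(line):
            # Stand-in for: sed 's/something//'
            return line.replace("something", "")

        if __name__ == "__main__":
            with Pool(4) as pool:
                # imap keeps the output in input order while spreading the
                # work across the pool; a large chunksize limits IPC overhead.
                for out in pool.imap(transform, sys.stdin, chunksize=10000):
                    sys.stdout.write(out)

    Usage would be roughly: pv infile | python3 lee.py > outfile. (Also worth checking: newer releases of GNU parallel ship a --pipe option that distributes blocks of stdin across jobs, which may cover this use case directly.)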

    Read the article

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, the GPU is the primary factor limiting gaming performance, with everything else such as RAM/motherboard/PSU/CPU being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance, nobody would be silly enough to play modern games with 512 MB of RAM and the very latest graphics card (such as an HD 7970), as I bet the performance increase over the same system with a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: if the system only had a Pentium II, a current high-end graphics card would be wasted on it! So my core question is: how do you determine at what point spending on extra GPU power is completely "wasted" for your system? (Also, a slightly more nuanced question is trying to work out at what point the extra graphics power might not be "wasted" but would be "sub-optimal" value for money, where the expenditure should instead be split between the graphics card and other components. Obviously a gamer shouldn't always just spend on upgrading the graphics card, but needs to balance it out.)

    Read the article
