Search Results

Search found 10536 results on 422 pages for 'cpu usage'.

Page 128/422 | < Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >

  • Workaround for API limits [closed]

    - by blunders
Problem: I'm planning to build out a client-services company that requires access to APIs. Most APIs are limited per user, IP, etc. - and even though the API calls would be made on a per-client basis, there's no way to get usage that isn't tied to IPs. (Theoretical) Solution: Have each client install on their network a proxy/VPN that would allow my systems to connect and use their assigned quota. It's possible there's a better solution than the one I've thought of, but it's the only one I've been able to come up with.

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front end, and they're experiencing timeouts once SQL Server RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM, and 'max server memory' is currently set to 6GB. I've put the results of the trace in a table and queried them using this query: SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE textdata is not null and EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC As I expected, some values are very high. Top Reads, in pages: 2504188 1965910 1445636 1252433 1239108 1210153 1088580 1072725. Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20'000 MB, 20GB? These queries are executed often and can take quite some time to run. Eventually RAM is used up because the cache keeps growing, and timeouts occur once SQL Server can no longer keep pages in the buffer pool; costs go up. Is my understanding correct? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM - or maybe it would just slow down the process of chewing up all the RAM. If far fewer pages are read, maybe it will all run much better even when usage is high (less time swapping, etc.). Currently our only option is to restart SQL Server once a week when RAM usage is high; the timeouts suddenly disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I already know (not much yet) is much appreciated. Leo.
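    For reference, a SQL Server data page is 8 KB, so 2,504,188 page reads are 2,504,188 × 8 = 20,033,504 KB, roughly 19.1 GB - the ~20 GB estimate above is about right. A minimal sketch of the same trace query with the conversion added (same table and columns as above, shown purely as an illustration):

          -- Sketch: convert logical reads (8 KB pages) to MB and GB.
          SELECT  TextData,
                  ApplicationName,
                  Reads,
                  Reads * 8 / 1024.0          AS ReadsMB,   -- 8 KB per page
                  Reads * 8 / 1024.0 / 1024.0 AS ReadsGB
          FROM    [TraceWednesday]
          WHERE   TextData IS NOT NULL
            AND   EventClass = 12
          GROUP BY TextData, ApplicationName, Reads
          ORDER BY Reads DESC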

    Read the article

  • Applications starts very slowly from a network path

    - by Snowfox
    Hi, we have a Windows 2008 server which hosts the network share \\srvcompany\lib. This share contains several applications needed for the daily business. Every client/user (all Windows XP) has desktop shortcuts to these apps. The problem is that on several (but not all) clients the apps start very slowly. If I copy the application's program files to a local folder, they start quickly. When I watch memory usage in Task Manager on such a "slow" machine while an application starts, I notice that memory usage grows much more slowly than when I start the app on a "fast" machine. But when I copy files from this share with Windows Explorer, the speed is nearly the same. I've also checked the network driver; both tested clients have the same network card with the same driver version. Does anyone have an idea what I should check next to solve this problem? Thanks for any answers.

    Read the article

  • HTTP transfer speeds start fast then slows to a crawl

    - by AnITAdmin
    We just got a new dedicated 1 gigabit server running IIS. The CPU is at 15% or less, and 3 GB of the 4 GB of RAM is unused. We are pushing 110 Mbit/s, yet transfer speeds are really slow. In fact, here's how it happens: we connect, the speeds are really fast at first, then quickly decline to 40 kBps or less. What's going on? It seems the server just won't go above 120 Mbit/s. The files are all very large, 50 MB to 500 MB - could this be a factor? Again, CPU, RAM, and UI responsiveness when accessing remotely all seem fine.

    Read the article

  • Bash Shell Scripting Errors: ./myDemo: 56: Syntax error: Unterminated quoted string [EDITED]

    - by ???
    Could someone take a look at this code and find out what's wrong with it? #!/bin/sh while : do echo " Select one of the following options:" echo " d or D) Display today's date and time" echo " l or L) List the contents of the present working directory" echo " w or W) See who is logged in" echo " p or P) Print the present working directory" echo " a or A) List the contents of a specified directory" echo " b or B) Create a backup copy of an ordinary file" echo " q or Q) Quit this program" echo " Enter your option and hit <Enter>: \c" read option case "$option" in d|D) date ;; l|L) ls $PWD ;; w|w) who ;; p|P) pwd ;; a|A) echo "Please specify the directory and hit <Enter>: \c" read directory if [ "$directory = "q" -o "Q" ] then exit 0 fi while [ ! -d "$directory" ] do echo "Usage: "$directory" must be a directory." echo "Re-enter the directory and hit <Enter>: \c" read directory if [ "$directory" = "q" -o "Q" ] then exit 0 fi done printf ls "$directory" ;; b|B) echo "Please specify the ordinary file for backup and hit <Enter>: \c" read file if [ "$file" = "q" -o "Q" ] then exit 0 fi while [ ! -f "$file" ] do echo "Usage: \"$file\" must be an ordinary file." echo "Re-enter the ordinary file for backup and hit <Enter>: \c" read file if [ "$file" = "q" -o "Q" ] then exit 0 fi done cp "$file" "$file.bkup" ;; q|Q) exit 0 ;; esac echo done exit 0 There are some syntax errors that I can't figure out. However I should note that on this unix system echo -e doesn't work (don't ask me why I don't know and I don't have any sort of permissions to change it and even if I wouldn't be allowed to) Bash Shell Scripting Error: "./myDemo ./myDemo: line 62: syntax error near unexpected token done' ./myDemo: line 62: " [Edited] EDIT: I fixed the while statement error, however now when I run the script some things still aren't working correctly. It seems that in the b|B) switch statement cp $file $file.bkup doesn't actually copy the file to file.bkup ? In the a|A) switch statement ls "$directory" doesn't print the directory listing for the user to see ? #!/bin/bash while $TRUE do echo " Select one of the following options:" echo " d or D) Display today's date and time" echo " l or L) List the contents of the present working directory" echo " w or W) See who is logged in" echo " p or P) Print the present working directory" echo " a or A) List the contents of a specified directory" echo " b or B) Create a backup copy of an ordinary file" echo " q or Q) Quit this program" echo " Enter your option and hit <Enter>: \c" read option case "$option" in d|D) date ;; l|L) ls pwd ;; w|w) who ;; p|P) pwd ;; a|A) echo "Please specify the directory and hit <Enter>: \c" read directory if [ ! -d "$directory" ] then while [ ! -d "$directory" ] do echo "Usage: "$directory" must be a directory." echo "Specify the directory and hit <Enter>: \c" read directory if [ "$directory" = "q" -o "Q" ] then exit 0 elif [ -d "$directory" ] then ls "$directory" else continue fi done fi ;; b|B) echo "Specify the ordinary file for backup and hit <Enter>: \c" read file if [ ! -f "$file" ] then while [ ! -f "$file" ] do echo "Usage: "$file" must be an ordinary file." echo "Specify the ordinary file for backup and hit <Enter>: \c" read file if [ "$file" = "q" -o "Q" ] then exit 0 elif [ -f "$file" ] then cp $file $file.bkup fi done fi ;; q|Q) exit 0 ;; esac echo done exit 0 Another thing... is there an editor that I can use to auto-parse code? I.e something similar to NetBeans?
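    The "Unterminated quoted string" error and the quit check that never works both trace back to tests like [ "$directory = "q" -o "Q" ]: the closing quote is missing, and -o "Q" only asks whether the literal string "Q" is non-empty, which is always true. A minimal sketch of the corrected checks, using the same variable names as the script:

          #!/bin/sh
          # Sketch: corrected tests from the menu script above.
          read directory

          # Quote the variable and compare against both "q" and "Q" explicitly.
          if [ "$directory" = "q" ] || [ "$directory" = "Q" ]; then
              exit 0
          fi

          # Run ls directly; `printf ls "$directory"` only prints the word "ls".
          ls "$directory"

          # In the case statement, the who branch should read `w|W) who ;;`
          # (the original `w|w)` never matches an upper-case W).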

    Read the article

  • Can a motherboard be faulty even if it's getting power and so are components hooked up to it?

    - by Davy8
    Sort of a followup to this question. The mobo's getting power, the lights are on. The GPU fan is spinning (it doesn't use auxiliary power, it's only connected to the mobo). I'm not getting any video signal, and it's not the video card (nor monitor) that's faulty, so I'm suspecting mobo or CPU (possibly RAM?) and I'm trying to pinpoint which part is at fault. Is the motherboard a candidate for being broken or is it not very likely if it's getting power and powering other components? The CPU fan is getting power as well.

    Read the article

  • Tool to track bandwidth by domain name?

    - by Grant Limberg
    I'm running an Ubuntu 10.04 server that hosts several domain names. All domains point to the same IP address and use the same network interface. I'm really only concerned with the main domain names, such as my-domain1.com and my-domain2.com; subdomains such as www.my-domain1.com should be included in the totals for my-domain1.com. Is there a tool out there that can be configured to track bandwidth usage on a per-domain basis? Edit: I'm not looking for web usage only; I'm looking for all traffic.

    Read the article

  • How do I change the output line length from the "top" linux command running in batch mode

    - by Tom
    The following command is useful for capturing the processes currently taking up the most CPU into a file: top -c -b -n 1 > top.log The -c flag is particularly useful because it gives you the command-line arguments of each process rather than just the process name. The problem is that each line of output is truncated to fit the current terminal window. This is OK if you have a wide terminal, because you see most of the output, but if your terminal is only 165 characters wide you only get 165 characters of information per process, which is often not enough to show the full process command. This is a particular problem when the command is executed without a terminal, for example via a cron job. Does anyone know how to stop top truncating data, or force top to display a certain number of characters per line? This is not urgent, because there is an alternative way of getting the top 10 CPU-using processes: ps -eo pcpu,pmem,user,args | sort -r -k1 | head -n 10
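    One commonly suggested workaround, assuming a procps-based top, is to override the width that top detects: in batch mode it falls back to the COLUMNS environment variable when no terminal is attached, and newer procps-ng builds also take an explicit width flag. A sketch (verify against `man top` on your system):

          # Sketch: force a wide output line for top in batch mode.
          COLUMNS=512 top -c -b -n 1 > top.log

          # On newer procps-ng versions there is an explicit width option:
          #   top -w 512 -c -b -n 1 > top.log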

    Read the article

  • How to set a low process priority for everything spawned from a command prompt in XP?

    - by Binary Worrier
    As a developer, once or twice a week I run a full build on my XP dev machine. This runs at 100% CPU for 30 or 40 minutes, making my machine useless for anything other than basic browsing and email. Is there any way I can specify that a given process (i.e. a command prompt) and any process spawned by it will have a lower priority, say taking up no more than 60-70% of CPU, leaving my machine more usable? I don't mind the build taking 30 or 40% longer if I still have use of my machine while it's running. Thanks, BW. P.S. I'd love to be able to throw more hardware at the problem, but that isn't under my control.
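    One possible approach on XP, assuming the build is launched from that prompt, is to start the command prompt itself with a lower priority class; processes it spawns normally inherit it. Note that a priority class only makes the build yield to interactive work - it does not hard-cap usage at 60-70%. A sketch:

          rem Sketch: open a prompt whose child processes inherit a lower priority.
          start /belownormal cmd.exe

          rem Or wrap a single build invocation (build.cmd is a placeholder name):
          start /belownormal /wait cmd /c build.cmd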

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage: this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage: presumably due to Redis' memory management, my memory usage will gradually climb and then drop back down, but it has never once been my bottleneck. System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200 static files per second comparable to the nginx performance others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any Linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
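    On question (c), a few Linux-level limits are commonly implicated at high connection rates; a hedged sketch of settings to inspect (the example value is illustrative, not a tuned recommendation):

          # Sketch: kernel/user limits that commonly cap high connection rates.
          ulimit -n                              # per-process open file/socket limit
          sysctl net.core.somaxconn              # listen() backlog ceiling
          sysctl net.core.netdev_max_backlog     # NIC receive backlog
          sysctl net.ipv4.ip_local_port_range    # ephemeral ports
          sysctl net.ipv4.tcp_max_syn_backlog    # pending SYN queue

          # Example temporary change; persist in /etc/sysctl.conf if it helps:
          sudo sysctl -w net.core.somaxconn=4096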

    Read the article

  • Win7 Professional x64 16GB (4.99GB usable)

    - by Killrawr
    I've installed Corsair Vengeance CMZ16GX3M2A1600C10 (2x8GB, DDR3-1600, PC3-12800, CL10, DIMM). The BIOS picks up that there is 16GB, Windows says there is 16GB, and CPU-Z says there is 16GB, but Windows also says only 4.99GB of the 16GB is usable. The motherboard is a P55-GD65 (MS-7583), which supports four unbuffered 1.5 V DDR3 1066/1333/1600*/2000*/2133* (OC) DIMMs, 16GB max. The System properties screenshot confirms a 64-bit OS, and Microsoft says that the physical memory limit for 64-bit Windows 7 Professional is 192GB. (Screenshots of Windows, CPU-Z, dxdiag, and two BIOS pages were attached.) Why is my OS limiting me to around 5GB of the 16GB installed? Is there any way to increase it?

    Read the article

  • AWS EC2: how to compute the cost

    - by EsseTi
    I'm new to AWS; I'm using the free tier right now and it's terrific. In a year, though, the free tier expires. I went to the pricing page at http://aws.amazon.com/ec2/pricing/ but I didn't really understand how to compute the cost. Prices are listed in $ per hour, but does that mean that if I need my application running 24h/365d I have to multiply by 8760, or not? They write about usage, but how do I compute that value? If I have one website where people spend a total of something like 10 minutes a month and another where people spend 750 hours a month, do I pay the same? I can't believe it would be the same price. PS: if I have a scheduled task, does it affect the usage?
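    The per-hour price is charged for every hour the instance is running, however idle it is, so an always-on instance accrues roughly 730 hours a month (8,760 a year); a site that only sees 10 minutes of visitors still pays for the full month unless the instance is stopped. A sketch of the arithmetic with a placeholder rate, not a real AWS quote:

          # Sketch: cost of an always-on instance (RATE is hypothetical).
          RATE=0.10                                   # $ per instance-hour
          echo "monthly: $(echo "$RATE * 730" | bc -l) USD"
          echo "yearly:  $(echo "$RATE * 8760" | bc -l) USD"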

    Read the article

  • Good Choice of Memory for Asus K52F-BBR5

    - by Christopher Painter
    I recently purchased an Asus K52F-BBR5 notebook. It's a basic laptop with an Intel P6100 CPU and Mobile Intel® HM55 Express Chipset. It came with 3GB of DDR3 SODIMM memory and I'd like to expand it to 8GB. I'm a little confused by DDR3 nomenclature and not up to date on my knowledge of chipsets, and I'd like to make a good choice when selecting memory for it. Crucial's database suggests using either PC3-8500 with CAS 7 or PC3-10600 with CAS 9. Is the PC3-8500 better because of its CAS 7, or will my chipset run the memory asynchronously at a higher speed and get better performance? Which would be a better choice for my chipset and CPU? The price difference is negligible.

    Read the article

  • Why am I having trouble loading Ubuntu alongside Windows as an application?

    - by STEVE PEAVEY
    I have two good CD ISO files. Both load OK, but when I boot into Ubuntu the screen is fragmented by dozens of white lines. The program works but is unusable. I'm running Windows XP SP3 on a D201GLY motherboard with a Celeron 220 CPU at 1.02 GHz and 512 MB of RAM. What could be my problem? The CPU? Not enough RAM? Or maybe the graphics card? To be clearer, I am trying to load either Ubuntu 8.04 or 9.04 inside Windows as an application from known-good CDs, using the Wubi installer included on the CDs. The graphics are SiS Mirage (SiS 662) with 32 MB of video memory.

    Read the article

  • Server Sizing Methodology

    - by adbrpc
    Our development environment consists of JBoss 5.0.1, a SQL Server 2008 database server, and Oracle IDM. Hardware is Windows 2008 32-bit with 4GB RAM. We have reached a stage where our environment can no longer handle the application: JBoss shuts down with out-of-memory errors and CPU usage reaches 90%. I am looking for a methodology to calculate correct server sizing, where I input TPS, maximum number of concurrent users, maximum CPU utilization, etc., and get back the number of servers, RAM size, and number of cores. I expect the application to grow 10% annually. Load balancing and failover should also be taken into account when sizing.
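    There is no single formula, but a common back-of-the-envelope starting point is to divide peak load by the measured per-server capacity at a safe utilization target, then add headroom for growth and failover. A sketch with made-up placeholder numbers:

          # Sketch: rough server count from peak TPS (all figures are placeholders).
          PEAK_TPS=500          # expected peak transactions per second
          PER_SERVER_TPS=120    # measured in a load test on one server
          TARGET_UTIL=0.6       # keep servers at ~60% CPU at peak
          GROWTH=1.10           # 10% annual growth, one year out
          echo "servers needed (round up, then add one for failover):"
          echo "($PEAK_TPS * $GROWTH) / ($PER_SERVER_TPS * $TARGET_UTIL)" | bc -l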

    Read the article

  • Question about getting information off an old HD...

    - by user37983
    OK, so my old computer was a Sony Vaio. It's kind of old and had XP on it. My girlfriend thrashed it - the CD drive went, and she tried fixing it by messing with the configuration, BIOS, etc. The actual laptop is really fried: the keyboard doesn't work, and so on. However, the hard drive is still intact. I tried putting the HD in my new computer (a Toshiba running Win7) and it looks like it's going to boot up: it goes to the screen with the logo and the status bar starts to load, then it flashes a blue screen for a split second and goes to the black screen that says Windows did not shut down properly, with options to start Windows normally, in safe mode, safe mode with networking, or safe mode with command prompt. I've tried every option but it always goes back to this screen. I need to get into the HD because I have very important files on it. Is there any way?

    Read the article

  • Solr performance (tomcat) - High load

    - by Ward Loockx
    I'm relatively new to Solr. I have a production site running on a VPS, but now I'm having serious load issues and I don't know where to start in order to get the load down. VPS specs (linode.com 512): 512 MB RAM, 4 CPUs (1x priority). It looks like my Solr server (Tomcat) is using a lot of CPU power. You can find my solrconfig.xml at http://pastebin.com/qdfi8Med and my schema.xml at http://pastebin.com/rRusDP8b. I've tried to increase the cache size, but this didn't do anything for the load. You can see the stats page below. EDIT - Because the screenshot was unclear, I took smaller screenshots of what (I think) is important: the dismax query handler stats and the cache stats. Thanks for the help!

    Read the article

  • What is this PHP process? It is crippling my server

    - by user1019588
    This process has been using 65% of my server's CPU and has lasted for about 10 minutes now (aren't processes only supposed to run for a couple of seconds?). It is obviously something to do with MySQL. That makes sense because I have a lot of queries going, but something still seems a bit odd. It could have something to do with the bad PDO connection that I mentioned in a previous question - perhaps I am opening too many connections or something like that? Here are the stats on it: Owner: mysql Priority: 0 CPU %: 61.1 Memory %: 0.4 Command: /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/cvps54834319.myhost.com.err --pid-file=/var/lib/mysql/cvps54834319.myhost.com.pid Thanks for any help on this. I have over 10 GHz of CPU on my server, so this is very concerning to me.
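    The command line shown is the mysqld server daemon itself, which is expected to run indefinitely; the CPU is being burned by whatever queries it is serving. A hedged sketch of how to see them, assuming you can log in as a privileged MySQL user:

          # Sketch: see what mysqld is busy with and how many connections are open.
          mysql -u root -p -e 'SHOW FULL PROCESSLIST;'
          mysql -u root -p -e 'SHOW GLOBAL STATUS LIKE "Threads_connected";'
          mysql -u root -p -e 'SHOW GLOBAL STATUS LIKE "Max_used_connections";'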

    Read the article

  • Origin of display connector numbers in XServer (e.g. HDMI1, HDMI2, DP1)

    - by Andreas N
    A custom mainboard has a DVI and a DisplayPort connector on the board. Currently, everything connected to DVI is named "HDMI2" in the X server; I can see that by calling the "xrandr" tool (in Ubuntu Trusty Tahr). A display connected to the DP connector is named "DP1", or "HDMI1" if I use a DP-to-DVI adapter. We are now testing a slightly upgraded board version, which has a newer CPU (Intel J1800, Bay Trail) among other things, and the positions of the DVI and DP connectors are switched. Also, everything on the DVI port is now called "HDMI1" and something connected to the DP port gets "DP2" or "HDMI2". Q: What causes these numbers to be assigned in this manner, and where (probably in the kernel) is it happening? I suspect the cause is hardware related - specifically, which CPU pins the connector pins are routed and attached to. Q: Would it be possible to influence this numbering scheme in order to retain the previous numbering behaviour?

    Read the article

  • Computer loads BIOS, but won't load OS

    - by LEGEND383
    I have just purchased a brand new motherboard bundle and installed it into the old case with the old PSU and the old HDD. I can get into the BIOS, but whenever I try to boot from the HDD it just sits with the fans going and nothing is displayed. The monitor works (tested with another machine), and I'd hope there are no problems with the motherboard, CPU or RAM, because as I said I only bought them today. The only things I can think of are: the PSU's motherboard connector is 20-pin and the motherboard has a 24-pin connector (not a problem with the previous board), or the OS is not supported (doesn't seem likely to me, but possible I guess). Here is my system configuration: Motherboard: ASUS F1A55-M LE; CPU: AMD APU A6-3500; HDD: 1TB SATA; RAM: 4GB DDR3; OS: Ubuntu Satanic v666.9; PSU: Winpower ATX-400 (this thing is REALLY old). If anyone is able to offer a reason why this is not working, or a possible solution, it would be greatly appreciated.

    Read the article

  • DIagnosing another Windows 7 Lockup

    - by MSEoris
    I'm running Windows 7 on a fairly modern machine (8GB RAM, AMD FX-6100, GTX 560 Ti) and I notice that periodically Windows seems to just hang for a little while. Frequently this occurs after a cold boot when I start up five or six small-to-medium-sized programs, but it also occasionally occurs during normal usage. Basically, the screen locks up and there is no keyboard responsiveness for a period of 30 seconds to a full minute; after a bit of patience, control is returned, but I'm interested in figuring out what is causing such lockups. I checked the event log and don't see any issues, and all I can see in Task Manager is a spike in CPU and memory usage right after this occurs. Any tips on how to even begin to diagnose this? Thanks.

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233MHz Pentium MMX CPU and a graphics card with no 3D acceleration. It was able to run games like Quake II at a decent frame rate. My current computer has vastly more performance and a mid-range GPU, yet it struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so in theory it should be a lot faster than it is. What am I missing here, and is it possible to avoid or minimize whatever is making browsers slow, in order to build more efficient websites?

    Read the article

  • monitor internet bandwidth on LAN

    - by Dimal Chandrasiri
    I'm on an office network where around 12 PCs are connected to a switch and then to a Prolink router. The Internet connection is very slow, and I'm the one who manages all the computers. I want to monitor the bandwidth usage of each computer on the LAN. How can I achieve this from my computer? I cannot install any software on the client computers, only on mine. I tried Wireshark, but it only shows my own network usage, so the data I get from it isn't useful. Is there any specific software I can use on my PC to get the bandwidth details of the other computers? I'm on a Windows 7 x64 PC with admin privileges. Thank you.

    Read the article

  • non-mapped virtual memory & total number of connections

    - by tszming
    We have two MongoDB data nodes (a replica set) - primary and secondary. I noticed that the non-mapped virtual memory is relatively high, and I'm wondering if it is hurting our MongoDB performance (the server usually peaks at around 6-7K queries per second). In MMS, it is stated: "The most common case of usage of a high amount of memory for non-mapped is that there are very many connections to the database." So we checked the memory usage with db.serverStatus().mem on our secondary: { "bits" : 64, "resident" : 6846, "virtual" : 416797, "supported" : true, "mapped" : 205549, "mappedWithJournal" : 411098, "note" : "virtual minus mapped is large. could indicate a memory leak" } Note: we are using 2.0.4, where the default stack size should be 1MB per connection. The current number of connections is around 1.1K, but the non-mapped virtual memory (virtual minus mappedWithJournal) is around 5699 MB. The trend is quite stable, so I can't say there is a leak here, but where has the memory gone? Any idea?
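    For reference, the non-mapped figure quoted above comes straight from that serverStatus output; a minimal sketch in the mongo shell:

          // Sketch: non-mapped virtual memory from db.serverStatus().mem
          var mem = db.serverStatus().mem;
          var nonMappedMB = mem.virtual - mem.mappedWithJournal;  // 416797 - 411098 = 5699 here
          print("non-mapped virtual MB: " + nonMappedMB);
          // At ~1 MB of stack per connection, ~1.1K connections account for only ~1.1 GB of it.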

    Read the article
