Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 128/572 | < Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to optimise it to handle 1 Gbit/s of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large with 4 other machines hitting it using ab with -i (HEAD requests), -k (keepalives), -r (ignore failed requests), -n 500000 and -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s going over the network. Are there any tools or approaches I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it throws away responses? I'm hitting a very small (40 byte) static asset, and have peaked at 50k concurrent connections and 25k reqs/s when using just a single load generator machine.
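
    As a sketch of the ab invocation described above, run on each of the load generator machines (the hostname and asset path below are placeholders, not from the question):

        ab -i -k -r -n 500000 -c 20000 http://nginx-target.example/tiny-asset.html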

    Read the article

  • Would upgrading memory from 4GB to 8GB on my laptop solve swapping issues?

    - by Tom
    I have a laptop with 4GB of memory running Windows 7, and I often find that Eclipse gets swapped out to disk. On the net they usually write that 4GB of RAM is more than enough for average use, and aside from Eclipse + the Android Emulator I don't really use other extra apps, yet Eclipse is always swapped out if I haven't used it for a while (say, 1 day), and it is annoying to wait for it to be resurrected from swap. My question is: would an upgrade to 8GB solve the issue of applications being swapped out? With 8GB would Windows 7 keep everything in memory? Or would it not change anything, with Eclipse swapped out regardless of the amount of memory, because Windows 7 has a habit of kicking out of memory every application that hasn't been used for a while?

    Read the article

  • Good free software for freeing up RAM in Windows 7 (64-bit)

    - by Flavius Frantz
    I am looking for good Windows 7 software to free up RAM on my PC. I tried some I found on Google, but they were bad stuff, with viruses, spyware etc. I want a free, clean, professional piece of software; if you don't know a good free one, please recommend a paid version. Also other tips/software to speed up my PC (on Win7 64-bit) and similar utilities. Software to measure temperature would be great too. If you can make a "must have" list of such software, thank you. I am a graphic designer, usually using the graphic design Stack Exchange; now I realised there is this Super User one... nice :) I usually have a lot of programs running at the same time, such as Photoshop, Flash, Illustrator and InDesign, with only 4GB of RAM, so any tips to improve my PC performance would be great. I have an Asus K50IP notebook.

    Read the article

  • Use the same database or replicate it for reports and web

    - by developer
    I would like to know, if I have a website with a huge database and it runs expensive (time-consuming) reports, whether the best approach is one database for the web and a replicated one for reports, or a single database for both. I'm worried that users will run reports spanning 5 or more years because they need that information, and the website will crash because of it.

    Read the article

  • Why does Oracle SQL Developer take so long to open?

    - by oscilatingcretin
    I think anyone who's used Oracle SQL Developer will agree that it's painfully slow to load. My research has led me to a fix that seems to have helped a little, and that's telling OSQLD not to check for updates on startup. However, it still takes several minutes to open. What could OSQLD possibly be doing during load time? Is there any way to get it to open right away? Edit: adding potentially relevant system specs: CPU: Intel i5-2520M 2.5 GHz, Windows 7 32-bit, RAM: 4 GB.

    Read the article

  • High frequency, kernel bypass vs tuning kernels?

    - by Keith
    I often hear tales about high-frequency trading shops using network cards which do kernel bypass. However, I also often hear about them using operating systems where they "tune" the kernel. If they are bypassing the kernel, do they need to tune the kernel? Is it a case of doing both because, whilst the network packets bypass the kernel thanks to the card, there is still all the other stuff going on which tuning the kernel would help with? So in other words, they use both approaches: one just speeds up network activity and the other makes the OS generally more responsive/faster? I ask because a friend of mine who works in this industry once said they don't really bother with kernel tuning anymore, because they use kernel bypass network cards. This didn't make too much sense to me, as I thought you would always want a faster kernel for all the calculations that stay on the CPU.

    Read the article

  • Applications start very slowly from a network path

    - by Snowfox
    Hi. We have a Windows 2008 server which hosts the network share \\srvcompany\lib. This share contains several applications needed for the daily business. Every client/user (all Win XP) has desktop shortcuts to these apps. The problem is that on several (but not all) clients the apps start very slowly. If I copy the application's program files to a local folder, they start quickly. When I watch memory usage in Task Manager on such a "slow" machine while an application starts, I notice that memory usage grows much more slowly than when I start the app on a "fast" machine. Yet when I copy files from this share with Windows Explorer, the speed is nearly the same on both. I've also checked the network driver; both tested clients have the same network card with the same driver version. Has anyone an idea where or what I should check next to solve this problem? Thanks for any answers.

    Read the article

  • Best use of new express card on Windows

    - by jckdnk111
    I just bought a 48GB SSD express card for my laptop and I am trying to decide how best to use it. I will be running some sort of virtualization (prob VirtualBox) to test / learn Windows Server administration. I am running Windows 7 Ultimate 64 bit. I have 4GB of RAM and a 7200 RPM SATA hard disk. The express card will read at 115MB/s and write at 65MB/s. So how best to use this new disk? Readyboost, relocate pagefile, store VM disks, mix / match?

    Read the article

  • How do you disable the magnifier in Windows Vista?

    - by PhantomDrummer
    Every so often while I'm working, if I accidentally jolt the mouse, the magnifier starts. I'm fairly sure the cause is that some combination of mouse/keyboard input is supposed to start the magnifier, and I occasionally hit the right keys accidentally. However, I've never been able to reproduce the behaviour deliberately, so I don't know which combination. This is very annoying, since switching the magnifier off is non-trivial (control panel, or the run dialog, which are hard to use when the magnifier keeps - ummm - windowing and magnifying the bit of screen you're about to click on :-) ). So I'm wondering if there's any way to disable the magnifier completely, so that it doesn't start when you do whatever it is you'd normally do to the mouse to start it? Anyone know?

    Read the article

  • What does the 'Burst Rate' stat mean in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which raises some questions, e.g. if this is the highest, then how did the benchmarking tool record the 103 MB/s maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in the hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks.

    Read the article

  • troubleshooting really slow login on a (linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific Linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped. It doesn't appear to be $PATH search either (where $PATH can sometimes include bad NFS mounts), which I've seen historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow login for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal: "Utmp slot not found -> not removed", and I am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, how to fix it, and whether it would be related? Failing that, what sort of problem determination tools are available to investigate what is slowing down this login process?
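
    As a rough sketch for narrowing down where the login time goes (the user and host names below are placeholders):

        time ssh user@slowserver exit                       # time a complete login round trip from a client
        strace -f -tt -o /tmp/login.trace bash -l -c exit   # on the server, trace a fresh login shell
        less /tmp/login.trace                               # look for large gaps between consecutive timestamps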

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external web services, and we have a very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes (ps -ef | grep httpd | wc -l = 300). I have googled and found numerous suggestions but nothing seems to work. The following is some configuration I have done, taken directly from online resources; I have increased the max connection and max client limits in both Apache and Tomcat. Here are the configuration details:

        //apache
        <IfModule prefork.c>
        StartServers        100
        MinSpareServers     10
        MaxSpareServers     10
        ServerLimit         50000
        MaxClients          50000
        MaxRequestsPerChild 2000
        </IfModule>

        //tomcat
        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

        //Sysctl.conf
        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure the m2xlarge server should serve more than 300 requests; I'm probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second delayed] web service to respond. Please help.

    Read the article

  • SQL Management Studio external database only

    - by Robuust
    I'm trying to speed up my PC, and I figured out that a full installation of SQL Server Management Studio 2012 is installed, including a localhost server instance. I only need to connect to remote hosts, so the local server shouldn't need to run by default. Is there an easy way to disable certain parts so I can speed up my PC and its boot time? Thanks in advance. I really have no clue which processes I can disable without ruining everything.

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a funny little experiment in the game/sim Train Simulator 2013. I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic; it takes into account things like speed/acceleration, G-forces, comfort levels, possible wheel slip and much more, and most of those things for each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so. I have a Logitech G15 keyboard with an LCD that lets me monitor CPU/RAM and video card load. The strange thing is, all CPU cores were busy, but the total load was only about 60% at most at all times. The video card was only at 30% load (possibly an important note: its memory was full, which is however not unusual for this game). The RAM had plenty of room and didn't grow or shrink much. I just have the feeling that the game would run more smoothly if it used more of my hardware's power. Why is it not doing so?

    I saw the same thing in another game, The Elder Scrolls: Morrowind, when using more than 100 mods (which all use scripting), a few high-res texture mods, and a full-on graphics improvement program. The engine is very old (2003), so I thought that might be the cause (not being optimised for multithreading). Possible causes I had thought of: the operating system doesn't let the games use all the resources, or the games don't make appropriate use of multithreading. To eliminate the former, I ran a CPU stress tool and it got 100% of the CPU as I let it run, so the OS is not the problem. I did give its thread "higher" priority, though.

    My actual question: in both games I did things the engine was not really built to do or support. Can those games' framerates be limited by their own engines not being able to cope? What is the real reason and, more importantly, can I help it? And in any case, could something actually be wrong with my hardware? It's all reasonably new (a couple of months old) and I (almost) never experience any other trouble; modern and much more demanding games work absolutely fine.

    Specs:

        CPU: AMD Phenom II 965 X4 @ 3.4 GHz
        RAM: 8GB of DDR3 RAM
        Video: MSI GTX560 (nVidia chip) with 1GB of GDDR5 memory
        OS: Windows 7 Ultimate 64-bit

    Nothing is overclocked.

    Read the article

  • How can I pinpoint a USB file transfer bottleneck in Unix?

    - by HankHendrix
    I'm experiencing very slow data transfer speeds over USB 2.0 on my *nix box and was wondering how I can pinpoint the cause of the problem. I've looked at iotop and top, but the CPU and memory figures look normal (compared to guides I have checked). The affected box is Ubuntu 12.04 32-bit Server running on an Asus EEE 701 2G, and I am transferring from the OS over USB 2.0 to an external HDD (which transfers at 30 MB/s+ on Windows 7 on another machine). I get rsync write speeds of 1 MB/s from the OS to the USB HDD, which seems ridiculously slow. These speeds are consistent across other USB HDDs and sticks.
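
    As a minimal sketch for separating raw USB write speed from rsync and filesystem overhead (the mount point and device name below are assumptions):

        sync
        dd if=/dev/zero of=/media/usbdisk/ddtest.bin bs=1M count=256 conv=fsync   # sustained write to the external disk
        sudo hdparm -t /dev/sdb                                                   # buffered read speed of the disk itself
        rm /media/usbdisk/ddtest.bin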

    Read the article

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop connected via Cat 5e to a Linksys WRT320N (sporting DD-WRT). After making sure that the link speed on both systems showed 1 Gbps, I proceeded to copy a 2.4GB MP4 from the Mini to the Windows 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (a proper step up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing there were other factors that resulted in lower speed, SMB being one. So, just for the sake of doing it, I ran iperf on both systems, and guess what it showed: around 875 Mbps on both! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file with Windows 8's regular copier: 109 MB/s. Molten brains :) What exactly is causing this? And can I get such speeds with Teracopy? I really dig the extra features Teracopy has and will surely miss them now :D

    Read the article

  • High Steal Time utilization on Apache Linux Server

    - by JMC
    I have a CentOS "development / testing" server that runs extremely slowly. It's running Apache and MySQL with PHP. top reports that 98% of CPU utilization is frequently spent on "st" (steal time). What could cause a server to spend so much CPU on steal time, and how can I diagnose the problem? I didn't notice the problem until after I granted a third-party developer root access (for all I know there's a rootkit running, though it's unlikely).
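
    As a rough sketch, these commands show how much CPU time the hypervisor is taking away from the guest (mpstat and sar assume the sysstat package is installed):

        vmstat 1 5          # last column, st, is CPU time stolen by the hypervisor
        mpstat -P ALL 1 5   # %steal broken down per virtual CPU
        sar -u 1 10         # %steal sampled over time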

    Read the article

  • Using an Internal HDD as an External HDD too, or an External HDD for installing SAP?

    - by Asterix
    Is it possible and advisable to use an internal hard drive as an external hard drive as well? I want to install SAP ECC 6 on my system, which has only 250 GB, but at least 300 GB is required. I wanted to buy an external drive first, but then I heard loading SAP onto an external drive would make it extremely slow. I'll be using it only as a beginner, so even if it is a little slow I don't mind. Is it feasible to run such a big application from an external hard disk? So can I purchase a 500 GB or 1 TB internal hard disk and also use it as an external one by fitting it with the necessary USB 3.0 hard drive case and cables? Or should I purchase an external drive and load SAP onto it? Thank you.

    Read the article

  • Why do most of us use 'i' as a loop counter variable?

    - by kprobst
    Has anyone thought about why so many of us repeat this same pattern using the same variable names? for (int i = 0; i < foo; i++) { // ... } It seems most code I've ever looked at uses i, j, k and so on as iteration variables. I suppose I picked that up from somewhere, but I wonder why this is so prevalent in software development. Is it something we all picked up from C or something like that? Just an itch I've had for a while in the back of my head.

    Read the article

  • Rule of thumb in RAM estimate for static pages? [closed]

    - by IMB
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Web Sites

    I've seen tutorials saying you can run a decent website on 64MB of RAM (Debian/Lighttpd/PHP/MySQL), but it's never clearly defined how many hits or how much traffic a "decent" site gets. Is there a rule of thumb for how much RAM a web server needs? To keep things simple, let's say you're running a site with static content that averages 100,000 hits per hour (HTML + images combined, no MySQL). How much RAM is the minimum requirement for that?
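
    As a very rough back-of-envelope for the scenario above (the arithmetic and the reasoning in the comments are illustrative assumptions, not a sizing formula):

        # 100,000 hits per hour spread evenly over 3600 seconds:
        echo $((100000 / 3600))    # prints 27, i.e. roughly 28 requests/second
        # with purely static content, the files end up in the OS page cache after the first
        # read, so RAM demand is dominated by the web server's per-connection buffers
        # rather than by the content itself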

    Read the article

  • What to do before trying to benchmark

    - by user23950
    What are the things I should do before trying to benchmark my computer? I've got these tools for benchmarking: 3DMark, Cinebench, Geekbench, Juarez DX10 and Open Source Mark. Do I need to run a full spyware and virus scan before proceeding? What else should I do in order to get accurate readings?

    Read the article

  • How can I determine the breaking point of my web application using JMeter?

    - by Gopu Alakrishna
    How can I determine the breaking point of my web application using JMeter? I have executed the JMeter test plan with different concurrent user loads, e.g. 300 users (0% error), 400 users (7% error in one sample, 5% error in another), 500 users (more than 10% error in 4 out of 6 samples). At what value of % error can I say the system has reached its breaking point? I used 300, 400 and 500 concurrent users against a PHP website. Should I consider any other parameter to determine the breaking point? How many concurrent users can my application support at maximum?
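
    As a sketch of stepping the load up from the command line in non-GUI mode (the plan file name and the "users" property are assumptions; the thread group would need to read the property, e.g. via ${__P(users,300)}):

        for users in 300 400 500 600; do
            jmeter -n -t webtest.jmx -Jusers=$users -l results_${users}.jtl
        done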

    Read the article

  • How do I remove 1,000,000 directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a directory due to a bug. I want to remove all these directories, let's say in the directory WebsiteCache. My first approach was to use the command line tool:

        cd WebsiteCache
        rmdir /Q /S .

    This removes all subdirectories except the directory WebsiteCache itself, since it is the current working directory. After two hours I noticed that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is the fastest way to delete such a number of directories?

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about the disk cache, and no, it is not my case :) Sorry for this preamble :) I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

                     total       used       free     shared    buffers     cached
        Mem:          3952       3929         22          0          1         18
        -/+ buffers/cache:       3909         42
        Swap:        16383         46      16337

    So, I actually have only 42 MB to use! As far as I understand, -/+ buffers/cache doesn't count the disk cache, so I really do have only 42 MB, right? I thought I might be wrong, so I tried to switch off the disk caching and it had no effect - the picture remained the same. So, I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. Same picture when I try to run other services or applications - everything goes to swap, and top shows that MEM is not used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers
        Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         3214 ntp    15   0 23412 5044 3916 S  0.0  0.1 0:00.00   17m ntpd
         2319 root    5 -10 12648 4460 3184 S  0.0  0.1 0:00.00  8188 iscsid
         2168 root   RT   0 22120 3692 2848 S  0.0  0.1 0:00.00   17m multipathd
         5113 mysql  18   0  474m 2356  856 S  0.0  0.1 0:00.11  472m mysqld
         4106 root   34  19  251m 1944 1360 S  0.0  0.0 0:00.11  249m yum-updatesd
         4109 root   15   0 90152 1904 1772 S  0.0  0.0 0:00.18   86m sshd
         5175 root   15   0 90156 1896 1772 S  0.0  0.0 0:00.02   86m sshd

    Restarting doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4GB RAM, RAID1). So, with that, I'm pretty sure it is not the disk cache that is using the RAM, because normally it would have been reduced to let other processes use RAM rather than letting them go to swap. So, finally, the question is: does anyone have any ideas how to find out what process is actually using the memory so heavily?
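
    As a minimal sketch of where to look next, since top's per-process numbers don't account for kernel-side memory (slabtop may need to be installed separately):

        ps aux --sort=-rss | head -n 15                       # largest resident processes
        grep -i -E 'slab|pagetables|vmalloc' /proc/meminfo    # kernel-side memory usage
        slabtop -o | head -n 20                               # biggest kernel slab caches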

    Read the article

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer on my network to another computer on the same network over a 100 Mbit/s line. For this I did:

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make it much smaller, so that the amount to transfer would be less. So I did:

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip also increase the transfer rate by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rate instead, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, but I'm just wondering. ;)
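
    One hypothesis worth testing (just a sketch, reusing the paths from the question): dd's default 512-byte block size forces many tiny writes onto the network filesystem, whereas gzip buffers its output into larger chunks. Forcing a bigger block size on the direct copy would show whether that is the difference:

        dd if=/local/path of=/remote/path/in/local/network/backup.img bs=1M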

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >