Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

  • Running game server and webserver on EC2

    - by mazzzzz
    Hey guys, I have a webhost and an EC2 server (to run a game server on). The problem is that I want to access/modify the EC2 instance's files with PHP admin programs. I looked into a lot of options for having the webhost communicate with the EC2 server (SSH, etc.), but none of them panned out. My question is: if I were to install a lightweight webserver (think lighttpd) on my EC2 server, how badly would it hurt the game server's performance? I was leaning away from this solution, even though the webserver (on the EC2 server) wouldn't get many hits (fewer than 100 a day). Thanks for your thoughts, Max

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do: cp /home/joe/BigFile /home/joe/BigFileCopy where BigFile is 20G, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it only happens when doing read/write to the same array.

  • need for tcp fine-tuning on heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 updates 6 & 8, under quite heavy load, i.e. 8k established connections during peak hours. Without depending much on the application provider's expertise, I want to get the maximum out of Linux. With respect to that, I have the following questions: How do I find out whether there is scope for further TCP fine-tuning (without exhausting available resources), given that the benchmark values quoted by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me this? If there is scope, how should I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter? After tuning, how can I clearly measure the performance enhancement / degradation?
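
    As a starting point for the first question, a minimal read-only sketch (assuming a standard Linux /proc layout) that captures the current limits and live socket counters before any tuning:

        import os

        # Baseline snapshot: a few commonly tuned TCP sysctls, plus the
        # kernel's live socket counters from /proc/net/sockstat.
        SYSCTLS = [
            "net/ipv4/tcp_fin_timeout",
            "net/ipv4/tcp_max_syn_backlog",
            "net/ipv4/ip_local_port_range",
            "net/core/somaxconn",
        ]
        for name in SYSCTLS:
            with open(os.path.join("/proc/sys", name)) as f:
                print(name.replace("/", "."), "=", f.read().strip())

        with open("/proc/net/sockstat") as f:
            print(f.read())  # inuse/orphan/tw counts to compare against limits

    Re-running the same snapshot under peak load, before and after each sysctl change, also goes some way toward answering the measurement question.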

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad-core x86, 4GB RAM) that constantly shows almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each of which has to be recompiled, but that is not the case here. The sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret / troubleshoot this phenomenon? BTW, the server's general performance counters (CPU, I/O, RAM) all show very modest utilization.
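
    A minimal sketch for watching plan reuse from the DMV mentioned above (assumes the pyodbc package; the server name and authentication details are placeholders):

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=SQLHOST;Trusted_Connection=yes"
        )
        cursor = conn.cursor()
        # If the workload really is parameterized, plans with a large
        # execution_count but a low plan_generation_num should dominate.
        cursor.execute("""
            SELECT TOP 20 execution_count, plan_generation_num, total_worker_time
            FROM sys.dm_exec_query_stats
            ORDER BY execution_count DESC
        """)
        for row in cursor.fetchall():
            print(row.execution_count, row.plan_generation_num, row.total_worker_time)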

  • Why does restarting the modem fix latency?

    - by Giovanni Galbo
    In the last few days I've noticed poor internet performance. Today I ran a speed test and the results were abysmal... 10 Mbps down and 0.18 Mbps up (which really hurt, because I was trying to RDC from another location). I pay for 30 Mbps down and 5 Mbps up. Latency was at 128ms. Before calling my ISP to give them a verbal lashing, I unplugged the modem and plugged it back in. I pretty much got top speed after doing that (with a latency of 7ms). I'm the type of guy that likes to know what goes on under the hood. So what's the deal? What mysterious powers does restarting give to my modem?

  • Monitoring bespoke software with Zenoss

    - by Andy S
    We've got a lot of back-end applications whose performance we need to monitor (metrics such as orders waiting to be processed, time since last run, etc.). Currently, this is done by an in-house watchdog application that fires off emails whenever a threshold is exceeded, but there's no way to acknowledge an issue and squelch these alerts. Rather than build our own complete alerting system, we'd like to tie in to the Zenoss installation we use to monitor our servers. I've found a few articles on creating events programmatically, but I'd rather Zenoss itself monitor the values that the current watchdog app is looking at (so we get the benefits of graphing and history as well). Is it possible, then, to programmatically provide a data feed (rather than an event) to Zenoss? Or is there another way to go about this?
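
    One possible direction, sketched below under the assumption that a poll-based datasource is acceptable: have the watchdog expose its numbers over plain HTTP as JSON, so the monitoring system can sample them on a schedule. The metric names and port are invented for illustration:

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def collect_metrics():
            # Hypothetical stand-in for whatever the in-house watchdog measures.
            return {"orders_waiting": 42, "seconds_since_last_run": 310}

        class MetricsHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = json.dumps(collect_metrics()).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8900), MetricsHandler).serve_forever()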

  • Intel cpu hyperthreading on or off for ibm db2?

    - by rtorti19
    Has anyone ever done any database performance comparisons with hyper-threading enabled vs. disabled? We are running IBM DB2 and I'm curious if anyone has any recommendations for enabling hyper-threading or not. With hyper-threading enabled it becomes quite difficult to do capacity planning for CPU usage. For example: with 8 physical cores represented as 16 "threads" on the OS and a CPU-bound workload, does that mean that when your CPU usage hits 50% you are actually running at 100%? What real benefits do I gain from leaving hyper-threading enabled on an Intel server running DB2? Does hyper-threading help if your workload is truly disk-I/O bound? If so, up to what percentage? These are the types of questions I'm trying to answer. Any thoughts?

  • Do SSD hybrid drives perform better than HDD + ReadyBoost flash?

    - by Chris W. Rea
    Seagate has released a product called the Momentus XT Solid State Hybrid Drive. This looks exactly like what Windows ReadyBoost attempts to do with software at the OS level: pairing the benefits of a large hard drive with the performance of solid-state flash memory. Does the Momentus XT outperform a similar ad hoc pairing of a decent hard drive with similar flash memory storage under Windows ReadyBoost? Other than the obvious "a hardware implementation ought to be faster than a software implementation", why would ReadyBoost not be able to perform as well as such a hybrid device?

  • Do you run anti-virus software?

    - by Paolo Bergantino
    Do you find the crippling effect that most antivirus software has on a computer's performance worth the "security" it provides? I've never been able to really convince myself it's worth it, and have used my computer without "protection" for years without any problems. Jeff Atwood wrote about this a while back, taking a similar stance. So I'm looking for some discussion on the merits and downfalls of antivirus software, and whether you personally think it's worth the hassle. One point I do think is valid is that I am probably okay with not running it because I know if something goes wrong I have the ability to make it right (most of the time), but I can't really recommend the same for family as they may not be able to...

  • Tuning (and understanding) table_cache in mySQL

    - by jotango
    Hello, I ran the excellent MySQL performance tuning script and started to work through the suggestions. One I ran into was:

        TABLE CACHE
        Current table_cache value = 4096 tables
        You have a total of 1073 tables. You have 3900 open tables.
        Current table_cache hit rate is 2%, while 95% of your table cache is in use.
        You should probably increase your table_cache

    I started to read up on table_cache but found the MySQL documentation quite lacking. It does say to increase table_cache "if you have the memory". Unfortunately, the table_cache variable is only defined as "The number of open tables for all threads." How will the memory used by MySQL change if I increase this variable? What is a good value to set it to?
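
    A minimal sketch of pulling the counters that hit rate is derived from (the PyMySQL package and the credentials are illustrative assumptions; any MySQL client works the same way):

        import pymysql

        conn = pymysql.connect(host="localhost", user="root", password="secret")
        cur = conn.cursor()
        # Open_tables = tables currently held open; Opened_tables = tables
        # opened since startup. Fast growth in the latter means the cache
        # is thrashing.
        cur.execute("SHOW GLOBAL STATUS LIKE 'Open%tables'")
        for name, value in cur.fetchall():
            print(name, value)
        cur.execute("SHOW VARIABLES LIKE 'table_cache'")  # table_open_cache in 5.1+
        print(cur.fetchall())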

  • Benchmarks relevant for a Visual Studio .Net development workstation

    - by user30715
    I am developing on a Windows 7 64-bit, Visual Studio and SharePoint system on a virtual workstation on some kind of VMware server. The system is painfully slow: VS lags behind when entering code, IntelliSense lags, and opening and saving files takes ages compared to a normal budget laptop. As far as I can see, the virtual machine has OK specs and does not seem to be swapping, etc., and the IT dept also says that they can't see anything wrong when they're monitoring the system. As long as the problem is not well documented, the IT dept and management do not want to throw money (= upgraded laptops) at us, so I need to show some sort of benchmark. It has been many years since I did any system benchmarking and I don't know the current benchmark software, so my question is: which benchmark will be most relevant for Visual Studio performance? Not just for compiling fast, but also one that reflects the "responsiveness" of the system. Cheers, user30715
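
    For the "responsiveness" side, one crude proxy is lots of small-file I/O, which is closer to what an IDE does than raw sequential throughput. A sketch of the idea (an illustration, not a substitute for a real benchmark suite):

        import os, tempfile, time

        def small_file_churn(n=2000, size=4096):
            # Time n create/read/delete cycles on small files.
            payload = os.urandom(size)
            with tempfile.TemporaryDirectory() as d:
                start = time.perf_counter()
                for i in range(n):
                    path = os.path.join(d, f"f{i}.tmp")
                    with open(path, "wb") as f:
                        f.write(payload)
                    with open(path, "rb") as f:
                        f.read()
                    os.remove(path)
                return time.perf_counter() - start

        print(f"{small_file_churn():.2f} s for 2000 write/read/delete cycles")

    Running the same script on the virtual workstation and on a budget laptop would give the IT dept a concrete number to compare.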

  • Is a larger hard drive with the same cache, rpm, and bus type faster?

    - by Joel Coehoorn
    I recently heard that, all else being equal, larger hard drives are faster than smaller ones. It has to do with more bits passing under the read head as the drive spins: since a large drive packs the bits more tightly, the same amount of rotation presents more data to the read head. I had not heard this before, and was inclined to believe the read heads expect bits at a specific rate and the drive would instead stagger the data, so that the two drives would be the same speed. I now find myself looking at purchasing one of two computer models for the school where I work. One model has an 80GB drive, the other a 400GB (for ~$13 more). The size of the drive is immaterial, since users will keep their files on a file server where they can be backed up. But if the 400GB drive will really deliver a performance boost, the extra money is probably worth it. Thoughts?
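
    The claim is easy to put into numbers. A back-of-the-envelope sketch (the bit counts are invented for illustration, not specs of these drives):

        # Sequential throughput ~ bits per track x revolutions per second.
        # Suppose both drives spin at 7200 rpm and the denser 400GB platters
        # hold 1.5x as many bits per track (hypothetical figures).
        revs_per_sec = 7200 / 60.0
        for label, bits_per_track in [("80GB", 4_000_000), ("400GB", 6_000_000)]:
            mb_per_sec = bits_per_track * revs_per_sec / 8 / 1_000_000
            print(f"{label}: ~{mb_per_sec:.0f} MB/s sequential on the outer track")

    Same RPM means the same rotational latency and roughly the same seek times; the denser drive simply streams more data per revolution, so the benefit shows up in sequential transfers rather than in random access.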

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550kB/s, even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test its plain HTTP speed, and it gave me a healthy 3.3MByte/s. I am at a total loss here. I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576    # (tried multiples of that too, up to 100MBytes)
        EnableSendfile Off        # (minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off       # (default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. The results so far:

        AnalogX HTTP: 3300kB/s
        Gene6 FTPD, plain: 3500kB/s
        Gene6 FTPD, Implicit and Explicit SSL, AES256 cipher: 1800-2000kB/s
        freeSSHD: 1100kB/s
        SMB shared folder: about 3000kB/s
        Apache HTTP, plain: 550kB/s
        Apache HTTPS: 2700kB/s

    Clients used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS), Firefox 8 (HTTP, HTTPS), Chrome 13 (HTTP, HTTPS), Opera 11.60 (HTTP, HTTPS), wget under Cygwin (HTTP, HTTPS), FileZilla (FTP, FTPS, FTP+ES, SFTP), and Windows Explorer (SMB).

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at least 2MByte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio WinRoute Firewall, but with no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors, and there is nothing strange to be seen in access.log either. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried the newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output was generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an open-source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com.
    Here it is:

    LiveHTTPHeaders, plain HTTP: http://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
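
    To measure the two rates the same way from any client, a minimal sketch (the URL is the placeholder domain from above; the HTTPS case assumes the client trusts the server's certificate):

        import time
        import urllib.request

        def measure(url, chunk=65536, limit=50 * 1024 * 1024):
            # Download up to `limit` bytes and return the average rate in kB/s.
            received = 0
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                while received < limit:
                    data = resp.read(chunk)
                    if not data:
                        break
                    received += len(data)
            return received / 1024 / (time.perf_counter() - start)

        for scheme in ("http", "https"):
            url = f"{scheme}://www.mydomain.com/elephantsdream_source.264"
            print(scheme, f"{measure(url):.0f} kB/s")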

  • Does anyone still use Iometer?

    - by Brian T Hannan
    "Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It is used as a benchmark and troubleshooting tool and is easily configured to replicate the behaviour of many popular applications." link text Does anyone still use this tool? It seems helpful, but I'm not sure if it's for the thing I am trying to work on. I am trying create a benchmark computer performance test that can be run before and after a Windows Optimization program does its stuff (ex: PC Optimizer Pro or CCleaner). I want to be able to make a quick statement like CCleaner makes the computer run 50% faster or something along those lines. Are there any newer tools like this one?

  • What are the quality metrics for RAM?

    - by Hi-Tech KitKat Android
    I have been shopping for RAM and found modules with the same capacity but different specifications. What are the differences, and how do they compare in performance? For example:

    RAM 1 (Transcend):
        Memory Type: 2 GB (8 x 128 MB) DDR2 DIMM
        Memory Standard: DDR2-800/PC-6400
        Compatible Device: PC
        Pins: 240-pin
        Burst Length: 4, 8
        Buffered/Unbuffered: Unbuffered
        Memory Clock: 400 MHz
        Technology: DDR2 SDRAM
        CAS Latency: 4, 5, 6

    RAM 2 (Transcend):
        Memory Type: 2 GB (8 x 128 MB) DDR2 DIMM
        Memory Standard: DDR2-667/PC2-5300
        Compatible Device: PC
        Pins: 240-pin
        Burst Length: 4, 8
        Buffered/Unbuffered: Unbuffered
        Memory Clock: 333 MHz
        Technology: DDR2 SDRAM
        CAS Latency: 3, 4, 5

    RAM 3 (Kingston):
        Memory Type: 2 GB (64 x 256 MB) 800 MHz DDR2 DIMM
        Compatible Device: PC
        Pins: 240-pin
        Error Check: Non-ECC
        Buffered/Unbuffered: Unbuffered
        Memory Clock: 200 MHz
        Technology: DDR2 SDRAM
        CAS Latency: 6

    Specifically, what is the effect of Memory Type (given as 8 x 128 MB), Memory Clock (given in MHz), and CAS Latency (given as 4, 5, 6)? My requirement is a 2 GB DDR2 module for a desktop. Please help.
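
    Two of those numbers combine into a first-word latency figure, which makes the modules easier to compare. A quick sketch of the arithmetic:

        # First-word latency (ns) = CAS cycles / I/O bus clock.
        # The I/O bus clock is 400 MHz for DDR2-800 and 333 MHz for DDR2-667.
        for name, cas, clock_hz in [("DDR2-800 CL5", 5, 400e6),
                                    ("DDR2-667 CL4", 4, 333e6)]:
            print(f"{name}: {cas / clock_hz * 1e9:.1f} ns to first word")

    The faster-clocked module offers more bandwidth at a similar access latency, which is why a higher CAS number does not automatically mean slower RAM.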

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input. The peaks in available entropy and MySQL slow queries occur in sync and at regular intervals. The number of MySQL slow queries is proportional to our number of Drupal users, whereas the peaks in available entropy are much more constant and less proportional to those two metrics. We therefore think available entropy reflects a root cause which, combined with the traffic to our website, is causing the slow queries (and not the opposite, i.e. slow queries influencing the entropy). Accordingly: Q: What underlying problem do you think could cause regular peaks in available entropy that could have an influence on MySQL's ability to process queries?
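
    To line entropy up against the slow query log at a finer grain than Munin's sampling interval, a minimal polling sketch (the /proc path is the standard Linux interface):

        import time

        # Sample the kernel's available entropy every 5 seconds so the
        # series can be correlated with MySQL slow query timestamps.
        while True:
            with open("/proc/sys/kernel/random/entropy_avail") as f:
                entropy = int(f.read().strip())
            print(f"{time.strftime('%H:%M:%S')} entropy_avail={entropy}")
            time.sleep(5)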

  • How complex of a daemon should be run through inetd?

    - by amphetamachine
    What is the general rule for which daemons should be started through inetd? Currently, on my server, sshd, Apache, and sendmail are set up to run all the time, whereas simple *NIX services are set up to be started by inetd. I'm the only one who uses SSH on my computer, and break-in attempts aren't a problem because I have it running on a non-standard port, and my HTTP server gets maybe 5 hits a day that aren't GoogleBot. My question is: what are the benefits vs. the performance hits of running a complex daemon like sshd or Apache through a superserver, and what successes or failures, if any, have you had running your own daemons in this manner?

  • reducing Windows 7 size

    - by Sejanus
    My Windows 7 installation uses around 16 GB of hard disk space while Windows XP only uses around 4 GB, which seems weird. I use Windows only for gaming, so I don't need a lot of the stuff it has to offer. What is the best way to reduce Windows 7's size? What can I delete / uninstall, and how? I'd also like to reduce RAM and processor usage as much as possible (as long as it doesn't hurt game performance): turn off all that fancy stuff and so on. What can I turn off, and how? Thanks in advance!

  • Challenges w.r.t. proximity between application hosted outside Amazon and Amazon persistence service

    - by Kabeer
    Hello. This is about hosting a web portal. Earlier my topology was entirely based on Amazon AWS, but the price factor (especially for EC2) now makes me re-think. I'll now quickly come to what I have finally arrived at: I'll launch the portal hosted on GoDaddy (unlimited plan on Windows). The portal uses SimpleDB for storing metadata and S3 for blobs. A locally available MySQL instance will be used for the ASP.NET provider services. Once the portal is profitable, I intend to move to Amazon entirely. Now, considering the proximity between GoDaddy and Amazon, would I face 'substantial' performance problems? Are there any suggestions to improve my topology?
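
    The proximity question comes down to per-request round trips, which are easy to sample from the web host. A rough sketch (the endpoint hostnames are the public AWS ones; adjust for your region):

        import socket
        import time

        def connect_time_ms(host, port=443, samples=5):
            # Average TCP connect time: a floor for per-call latency.
            total = 0.0
            for _ in range(samples):
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=5):
                    total += time.perf_counter() - start
            return total / samples * 1000

        for host in ("sdb.amazonaws.com", "s3.amazonaws.com"):
            print(host, f"{connect_time_ms(host):.0f} ms average connect")

    If a single page view triggers several SimpleDB calls, that latency multiplies, which is usually where a split topology hurts.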

  • Rail's FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (the Rails file-system cache store). I was thinking about changing to the memcached store, but a short test shows there isn't a big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there would be around 40-45GB of RAM left which isn't needed by the application and which Linux could use to disk-cache this Rails file cache store. The disks form a RAID 10 system with almost 120MB/s throughput. How can I tell Linux to use the free RAM more deliberately and not be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get a performance advantage from putting the file store root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)
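
    Linux already uses otherwise-free RAM for the page cache automatically; the knobs below only bias how it is kept. A read-only sketch of the two usually mentioned in this context (writing them requires root):

        # vm.swappiness: lower values keep application pages in RAM longer.
        # vm.vfs_cache_pressure: lower values keep dentry/inode caches longer.
        for path in ("/proc/sys/vm/swappiness",
                     "/proc/sys/vm/vfs_cache_pressure"):
            with open(path) as f:
                print(path, "=", f.read().strip())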

  • RAID 1 not performing as expected

    - by Faken
    I recently bought a new 320GB hard drive for my computer to set up RAID 1 on it for some added security. Installation went as smoothly as could possibly be (plug in power, plug in data cable, start up computer, Intel software recognized new drive, right-click, create RAID 1, done!). However, for some inexplicable reason, I seem to get strange test results when using BENCH32. On my old configuration, a single 7200rpm drive, I achieved about 60 MB/s write and 70 MB/s read. With the new RAID 1 configuration, I would expect write to be slightly diminished but read to be significantly improved (though not exactly double speed). However, with the new configuration I am getting 90 MB/s write and only about 80 MB/s read. I should NOT be getting improved write performance, and especially NOT write that beats read! What's going on? My system setup is: Q6600 2.4GHz CPU, 4GB DDR2 667MHz RAM, onboard Intel ICH9R RAID chip, 2x Seagate 7200rpm 320GB drives in RAID 1, Windows 7 Home Premium 64-bit.
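
    One common culprit in benchmarks like this is write-back caching: "writes" that land in RAM look faster than reads that miss the cache. A minimal sketch that forces data to disk (the path is illustrative; the test file should ideally exceed RAM size):

        import os
        import time

        PATH = "D:/bench.tmp"          # illustrative path on the RAID volume
        SIZE = 2 * 1024**3             # 2 GiB test file
        CHUNK = os.urandom(4 * 1024**2)

        start = time.perf_counter()
        with open(PATH, "wb") as f:
            for _ in range(SIZE // len(CHUNK)):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())       # without this, "write speed" measures RAM
        elapsed = time.perf_counter() - start
        print(f"write: {SIZE / 1024**2 / elapsed:.0f} MB/s")
        os.remove(PATH)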

  • PC is very slow

    - by Appoos
    Hi all, my Windows XP system is very slow. I tried all possible ways of improving the performance, but no luck. I have 4GB RAM and an AMD Phenom II X2 B53 processor. I don't see any applications consuming CPU resources, but the page file usage is 4.18 GB (system-managed size, under My Computer > Properties > Performance). There is enough RAM available, so why is the OS using the page file this much? How can I reduce the page file usage? Please help me.

  • Suggested benchmark for testing CPU footprint of antivirus software

    - by Alex Chernavsky
    Our organization is currently running Symantec Corporate Antivirus, which is rumored to be a big resource hog, and I know that we have a lot of older machines that run slowly. Our PCs all run Windows XP Pro and are used only for business applications (mostly Microsoft Office), e-mail, and web surfing. They're not used for gaming (one would hope not, anyway). I'd like to take one of the old PCs and run a speed benchmark while it's running Symantec AV, then another test with no antivirus, and a third test with ESET NOD32. As I said, I don't care much about graphics performance. What would be an appropriate benchmarking program to use? Freeware is best, of course. Thank you for considering my question.

  • Signal strength and Speed of wireless network

    - by Tim
    As shown by Lenovo Access Connections on my Windows 7 machine, the wireless network I am using has a speed of 54.0Mbps but a signal strength of 88%. I am using WinSCP, with its speed limit set to unlimited, to download files, and WinSCP shows that the speed fluctuates between 100 and 120KiB/s. I was wondering: what is the difference between the two speeds reported by Lenovo Access Connections and WinSCP? How can I tell the actual speed performance from the above measurements, i.e. the two speeds and the signal strength? Thanks and regards!
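
    The two numbers are not measuring the same thing, so a unit conversion helps put them side by side:

        # 54.0 Mbps is the 802.11g link (signaling) rate that Lenovo Access
        # Connections reports; WinSCP shows actual application throughput.
        winscp_kib_per_s = 120
        winscp_mbps = winscp_kib_per_s * 1024 * 8 / 1_000_000
        print(f"WinSCP: {winscp_mbps:.2f} Mbps of a nominal 54.0 Mbps link rate")

    That is roughly 1 Mbps, about 2% of the nominal rate; protocol overhead, SSH encryption, distance/interference, and the remote server's own limits all sit between the two figures.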

  • Having to Toggle Between Two Sets of Video Drivers

    - by Maarx
    So, I've got a Win7 64-bit gaming PC with GTX 260s. Recently, StarCraft 2 had an issue with flickering, which NVIDIA fixed with a new set of drivers. However, these new drivers introduce unplayable graphical errors in Neverwinter Nights 2, something my friends and I still play from time to time. I am seeking advice on the "best" way to rectify this situation: to be able to switch between two driver releases without compromising the stability of my system (if Windows stability isn't an oxymoron). I'm wondering if Windows 7 is structured in such a way that I can constantly reinstall these two sets of drivers back and forth over top of each other, possibly six or eight times a day, without very quickly driving myself to reformat to maintain that "just like new" performance. I'm loath to have to reformat the drive and maintain two copies of the operating system, but I'll do it if I have to.
