Search Results

Search found 13206 results on 529 pages for 'performance measurement'.

  • SQL Performance Problem IA64

    - by Vendoran
    We’ve got a performance problem in production. QA and DEV environments are 2 instances on the same physical server: Windows 2003 Enterprise SP2, 32 GB RAM, 1 quad 3.5 GHz Intel Xeon X5270 (4 cores, x64), SQL 2005 SP3 (9.0.4262), SAN drives. Prod: Windows 2003 Datacenter SP2, 64 GB RAM, 4 dual-core 1.6 GHz Intel Family 80000002, Model 6 Itanium (8 cores, IA64), SQL 2005 SP3 (9.0.4262), SAN drives, Veritas Cluster. I am seeing excessive Signal Wait Percentages ( 250%), and Page Reads/s (50) and Page Writes/s (25) are both occasionally high. I tested this query on both QA and PROD, and it has the same execution plan and even the same stats: SELECT top 40000000 * INTO dbo.tmp_tbl FROM dbo.tbl GO (Scan count 1, logical reads 429564, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0). As you can see it’s just logical reads; however: QA: 0:48, Prod: 2:18. So it seems like a processor-related issue, but I’m not sure where to go next. Any ideas? Thanks, Aaron
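
    A minimal sketch (assuming VIEW SERVER STATE permission) of how the signal-wait share is usually derived on SQL 2005: the fraction of total wait time spent waiting for a CPU after the blocking resource was already available. A consistently high share points at CPU/scheduler pressure rather than I/O.

        SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5, 2))
                   AS signal_wait_pct
        FROM sys.dm_os_wait_stats;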

    Read the article

  • Raid0 performance degradation?

    - by davy8
    Not sure if this belongs here or on SuperUser, feel free to move as appropriate. I've noticed the performance of my RAID0 setup seems to have degraded over the past months. The throughput is fine, but I think the random access time has increased or something. In use I generally see about 1-5 MB/sec when loading stuff in Visual Studio and other apps, and it doesn't seem like the CPU is the bottleneck, as CPU utilization is pretty low. I don't recall what the access time used to be, but HD Tune is reporting 12.6 ms. Read throughput is averaging about 125 MB/sec, so it should be great for sequential reads. I defrag daily and fragmentation levels are low, so that shouldn't be an issue. Additional info: Windows 7 x64, Intel RAID controller on the motherboard, WD Black 500GB (I think 32 MB cache) x2.

    Read the article

  • Improving Chrome performance on OSX

    - by Giannis
    There are a number of sites that do not display properly on Safari, so I need to switch to Chrome. However, when a site's content requires Flash Player, Chrome consumes a significant amount of CPU. Running more than 3 windows will cause my MBP to overheat, spin up the fans, and reduce battery life far more than Safari does. What I am looking for is suggestions on ways to improve the performance of Chrome running Flash. I know Safari is optimised for OSX, but any improvement is welcome. Below is a demo showing the issue: I run the same YouTube video on Safari 6 and Chrome 21, both up to date, at the same time. Both browsers have been reset and have no extensions. This is on a MBP 13" 2012 with a 2.9 GHz i7 running OSX 10.8.1. P.S. If any additional details would help, please let me know.

    Read the article

  • Join performance on MyISAM and InnoDB tables

    - by j0nes
    I am thinking about converting some tables from MyISAM to InnoDB on my MySQL server. The tables will certainly benefit from the change, because a lot of write requests hit these tables while there are also quite a lot of read requests at the same time. However, they are often joined with tables that get almost no writes. Is there a performance penalty when joining MyISAM and InnoDB tables together, or should everything work fine? Second question: during backups at night, I copy data from the InnoDB tables to MyISAM tables for archiving purposes. During these backups a lot of write requests happen, but there are almost no reads from these archive tables. Would these tables also benefit from using InnoDB, or is that just a waste of space and RAM?
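
    A minimal sketch of the two pieces involved (table and column names are invented for illustration, not taken from the post): converting one write-heavy table to InnoDB and joining it against a MyISAM table that stays read-mostly, which MySQL allows within a single query.

        -- hypothetical schema: "orders" gets heavy writes, "countries" is read-only reference data
        ALTER TABLE orders ENGINE=InnoDB;

        SELECT o.id, o.total, c.name
        FROM orders o                                -- InnoDB after the conversion
        JOIN countries c ON c.id = o.country_id;     -- still MyISAM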

    Read the article

  • Database Performance Across 3 NetApp FAS2050

    - by bjd145
    I recently inherited a SQL Server 2005 box that has LUNs mounted on it from three different NetApp FAS2050 filers. The reason there are three is that the first two filers are out of space and our databases continually need more of it. What kind of performance impact (if any) does this setup have? Is it better to move the LUNs from the two older filers to the newer one, so that we have only one filer/controller for caching, etc.? This is a general question; I apologize for not having more specifics.

    Read the article

  • MySQL & tmpfs : performance

    - by Serty Oan
    I was wondering whether, and how much, using tmpfs could improve MySQL performance, and how it should be done. My guess would be to do mount -t tmpfs -o size=256M /path/to/mysql/data/DatabaseName and to use the database normally, but maybe I'm wrong (I'm using MyISAM tables only). Will an hourly rsync between the tmpfs /path/to/mysql/data/DatabaseName and /path/to/mysql/data/DatabaseName_backup hurt performance? If so, how should I back up the tmpfs database? So, is this a good way to do things, is there a better way, or am I wasting my time?
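
    A rough sketch of the idea in the question, with example paths and sizes rather than anything verified against this setup. Note that tmpfs contents disappear on reboot or power loss, so the on-disk copy is the only durable one.

        # mount an in-memory filesystem over the database directory (this hides whatever is
        # already there, so the existing MyISAM files must be copied back in afterwards)
        mount -t tmpfs -o size=256M tmpfs /var/lib/mysql/DatabaseName
        cp -a /var/lib/mysql/DatabaseName_backup/. /var/lib/mysql/DatabaseName/

        # hourly copy back to disk; ideally run it under FLUSH TABLES WITH READ LOCK so the
        # table files are consistent while they are copied
        rsync -a --delete /var/lib/mysql/DatabaseName/ /var/lib/mysql/DatabaseName_backup/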

    Read the article

  • xinet vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are: a web proxy (Apache, nginx, etc.), xinet, iptables, or setuid. The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinet is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.
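
    A minimal sketch of the iptables approach, assuming the Java server listens on an unprivileged port such as 8080 (the port number is an example, not from the post): inbound port 80 gets redirected in the nat table, which happens in the kernel with no extra process per connection.

        # external clients hitting port 80 get redirected to the application's port
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # locally generated traffic does not pass PREROUTING; add an OUTPUT rule if
        # connections from the box itself also need to work
        iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-ports 8080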

    Read the article

  • RAID P410i and P812 performance issues

    - by Alexey
    I'm having a lot of trouble with the I/O performance of an HP DL360 server with two RAID controllers - P410i and P812 - running Windows Server 2008 with 36 GiB RAM and 16 x Intel Xeon X5550. The server runs a bunch of tasks producing heavy sequential I/O, and after about 20-30 minutes of intensive work it looks like the tasks get stuck, not using CPU and with plenty of free memory (so that cannot be the bottleneck). The same tasks were running quite well on the older server (Windows Server 2003, 4 x Intel Xeon, 12 GiB RAM). RAID cache is present and the write-cache battery is installed. The cache is configured as 25% read-ahead / 75% write-back. The swap file resides on the logical disk served by the P410i, and the other logical disks are on the P812. Can someone tell me what might be causing this? Is it a hardware problem or a misconfiguration?

    Read the article

  • Hyper-V R2 Performance Counters

    - by Ascendo
    Hi all, I've been playing around with the WMI performance counters for Hyper-V. Of interest to me are the Virtual NIC bytes/sec input and output counters. I notice that the results are very "spiky". Over what time period is the latest counter value averaged? I'm trying to calculate the total traffic volume per VM, but sometimes a very high instantaneous reading inflates the total, as I only poll the result once a minute. I would prefer to read a 'bytes total' counter instead of a 'bytes/sec' counter - is there such a thing? Thanks, Ascendo
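
    A hedged illustration of polling these counters from PowerShell at a shorter interval, so the per-minute figure can be averaged from many samples instead of one spike; the counter set name is a best guess and should be confirmed with Get-Counter -ListSet "*Hyper-V*" on the host.

        # sample every 5 seconds for a minute, then average the readings
        Get-Counter -Counter "\Hyper-V Virtual Network Adapter(*)\Bytes/sec" `
            -SampleInterval 5 -MaxSamples 12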

    Read the article

  • How to open ports on router for better torrent performance

    - by Mehper C. Palavuzlar
    I've been using uTorrent to download and upload torrents for a long time. Recently someone told me that I need to open a port (or ports) for uTorrent in my router settings for better downloading and uploading performance. Is that true? If yes, how can I do it? My uTorrent version is 2.0, and the port used for incoming connections is 61829. My router is a Yaksu S200 ADSL modem/router, and I can reach its settings via a web interface. I looked at the settings but they seem a bit complicated to me. Other info you may need to know: I have a dynamic IP and I'm using Win7 x64.

    Read the article

  • Using LDAP Attributes to improve performance for large directories

    - by Vineet Bhatia
    We have an LDAP directory with more than 50,000 users in it. The LDAP vendor suggests a maximum of 40,000 users per LDAP group. We have a number of inactive users that are being purged, but what if we don't get below 40,000 users? Would switching to a multivalued attribute at the user record level, instead of using LDAP groups, yield better performance during authentication, adding new users, etc.? I know most server software (portals, application servers, etc.) uses LDAP groups. But we have a standardized web service interface for access control instead of relying on the server software to map LDAP groups to security roles. Each application uses this common "access control web service". Security roles are used within each application to build the fine-grained ACLs used within each enterprise application.
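
    A purely illustrative LDIF fragment of the multivalued-attribute approach (the DN and the "appRole" attribute are invented, and a real entry would also need the appropriate objectClass): the roles live on the user entry, so an authorization check reads one record instead of resolving membership in very large groups.

        dn: uid=jsmith,ou=people,dc=example,dc=com
        appRole: crm-user
        appRole: billing-admin
        appRole: reporting-viewer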

    Read the article

  • How to improve performance of Ubuntu Server 10.04 for my DMS system?

    - by prasanna
    I will be using a DMS (document management system) based on Java + Jackrabbit + PostgreSQL + JBoss + OpenOffice, on Ubuntu Server 10.04. This is the only application I will be running on the server, and I want to speed up its performance. Can you give me tips for improving Ubuntu Server? Can I change any settings that would give faster system performance? My application will be used concurrently by around 70-80 people. We have 600 users in total; they will constantly upload and download files in the DMS. I am going to use a dedicated Dell server with a minimum of 4 GB of RAM. I appreciate the help. Thanks and regards, Prasanna
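
    A sketch of the kind of knobs that usually matter when a JVM application server and PostgreSQL share one 4 GB box; the file names are the usual defaults and the values are illustrative starting points to be validated under real load, not recommendations from this thread.

        # /etc/sysctl.conf - prefer dropping page cache to swapping the JVM out
        vm.swappiness = 10

        # postgresql.conf - give PostgreSQL a modest dedicated buffer pool
        shared_buffers = 512MB

        # JBoss run.conf - cap the heap so JBoss, PostgreSQL and the OS all fit in RAM
        JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m"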

    Read the article

  • Why is IOMeter performance slower after the first run?

    - by Dan
    I'm doing some benchmarking with IOMeter, and I'm seeing a consistent and substantial drop-off in performance after running the first test. The drop-off is about the same on the three systems I've tested, which makes me think it's a configuration setting, or just a fact of life about using IOMeter. For example, one system (local RAID 10) went from 388 I/Os per second on the first run to about 211 I/Os per second on every run after that. Everything else about the test was identical, and I also rebooted the machine between runs. So, is this expected behavior, or am I missing something?

    Read the article

  • Slow performance over network (Ubuntu)

    - by Filipe Santos
    I set up this Node.js TCP server and tested it with a message flooder, just to see what the performance of the server is like. While the message throughput is great if I run the server and the message flooder on the same computer (Ubuntu), the throughput decreases dramatically if I start the server on computer1 (Ubuntu) and the message flooder on computer2 (also Ubuntu). Both PCs are on the same network; in fact, they are directly connected to each other. I started searching the internet for reasons, and I suppose I need to tune TCP on both Ubuntu PCs, but so far I haven't been successful at all. Has anyone experienced such problems, or could someone help me out? Thanks. Here is the flooding code:

        var net = require('net')
        var client = net.createConnection(5000, "10.0.0.2")
        client.addListener("connect", function () {
            for (var i = 0; i < 1000; i++) {
                client.write("message ");
            }
        })
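
    One hedged guess at a common culprit rather than a confirmed fix for this setup: with many small write() calls, Nagle's algorithm can batch and delay packets once a real network link is involved, which shows up as a large throughput drop compared to running both ends on the same machine. Node exposes a per-socket switch for it:

        var net = require('net');
        var client = net.createConnection(5000, "10.0.0.2");
        client.addListener("connect", function () {
            client.setNoDelay(true);          // disable Nagle: send small writes immediately
            for (var i = 0; i < 1000; i++) {
                client.write("message ");
            }
        });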

    Read the article

  • Can USB Hubs affect peripheral performance?

    - by Daniel Rasmussen
    Like many laptops, mine has 2 USB 2.0 ports. I'm looking to purchase a USB hub for my peripheral devices. I'm also planning on purchasing a laptop cooler with two USB powered fans, which connects to a USB port but can extend the plug, so it doesn't use one up. (Sorry for the poor description; see the link for a picture.) My questions are these: Can one plug 'too many' peripherals into a USB port? Can I plug my fans into a port, then the hub into the fans, then my keyboard and mouse, mic, and webcam into the extender..? Is it possible to draw too much power from a port? Secondly, will a USB hub affect the performance of any of my devices? I'm mostly worried about my mouse and keyboard. I like wired mice because I've noticed some lag in my Bluetooth mouse.

    Read the article

  • Exchange 2010 SP2 OWA performance

    - by Frederik Nielsen
    How do I increase performance in OWA 2010 SP2? I am running the CAS role on a separate installation, which has 8 GB RAM and 4 CPU cores, running virtualized in a VMware environment. However, the load times are pretty bad, so is there any way to improve them? I am thinking of installing a Linux caching server in front of OWA, but will that work? And how should it be done? Alright, I "fixed" it; it was just a temporary issue. Thanks for your replies.

    Read the article

  • Apache mpm-itk Performance

    - by Matt Beckman
    I manage a bunch of VPSs with memory ranging from 1 GB to 8 GB. Most of the websites on them are Joomla sites, and the servers must support multiple sites/users/S-FTP. I use mpm-itk almost exclusively (mostly due to its convenience in these shared environments). However, I'm aware it isn't known for performance, so I need some advice on making it faster. Due to the lack of documentation when I first went the mpm-itk route, I included only one setting in the config, which limits each user to 50 clients (the rest I left at the defaults): <IfModule mpm_itk_module> MaxClientsVHost 50 </IfModule> Are there any better alternatives available? Are there any settings supported in mpm-prefork or mpm-worker that are also supported in mpm-itk? Thanks!
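
    Since mpm-itk is built on top of prefork, the usual prefork process-management directives still apply alongside the MaxClientsVHost block above; a sketch with placeholder numbers that would need to be sized against RAM per Apache child, not values recommended in the thread.

        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          100
        MaxRequestsPerChild 2000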

    Read the article

  • Increase performance of Samsung 830 256GB SSD

    - by Robert Koritnik
    I have a Samsung 830 SSD in my notebook, connected to a SATA interface. This is a rather old HP notebook (nc8430), which means the interface is SATA I, not the SATA II or even III that the disk supports. But SATA I still supports speeds up to 150 MB/s, so I expected at least double the values in the image below. CrystalDiskMark shows rather slow performance. I've been using this SSD for over a year now, and I would like to know what to do to make it as blazingly fast as other reports say it should be. Edit: As suggested, I'm adding an AS SSD screenshot of the test, and Samsung Magician's benchmark, which is likely biased...

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms, where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measurements to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed, and everything looks better than it was before. We even have an 82% on Google's Page Speed tool, which is quite cool for a site with some ads in it :) Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period. Have you ever seen the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check that it is what Google wants? Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have some info. Thanks again!

    Read the article

  • Firefox 3.6 performance increase tricks

    - by metal gear solid
    I use many add-ons that help me with web development, so I can't uninstall them, and I usually keep lots of tabs open in Firefox. I also almost always keep Firefox running on my system. I use the default profile only, and I always keep every add-on and Firefox itself updated. I found some add-ons on the Firefox add-ons site that are meant to reduce memory use, but the user reviews for them were not good. Still, are there any tested tricks to increase performance and reduce memory use of Firefox 3.6 that really work?
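
    For what it's worth, a few about:config preferences that were commonly suggested for the 3.x series, shown here in user.js form; the values are illustrative examples rather than tested advice, and the effect varies by machine.

        user_pref("browser.sessionhistory.max_total_viewers", 2);  // cache fewer pages for back/forward
        user_pref("browser.cache.memory.capacity", 65536);         // cap the memory cache (value in KB)
        user_pref("config.trim_on_minimize", true);                // Windows only: trim memory when minimized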

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s can I expect, depending on the kind of disk (SSD vs. non-SSD), assuming simple operations (select one row by primary key, update one row, correctly indexed)? I assume this limit is mostly dependent on disk seeks/writes. EDIT: My question is more about getting rough metrics for the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out to additional servers.
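
    A back-of-the-envelope sketch of that disk-bound reasoning, using typical order-of-magnitude figures rather than measured numbers: a spinning disk is limited by seek plus rotational latency per random operation, while an SSD is limited by its advertised random IOPS.

        # rough random-I/O ceilings per device, assuming no write-back cache or batching
        seek_ms, rotational_ms = 9.0, 4.2                    # typical 7200 rpm figures
        hdd_random_iops = 1000 / (seek_ms + rotational_ms)   # ~75 random ops/s per spindle
        ssd_random_iops = 10000                              # conservative consumer-SSD figure

        # so ~300 uncached random inserts/s would saturate a single spindle several times
        # over, but sits comfortably within what one SSD (or a cached/batched setup) can do
        print(round(hdd_random_iops), ssd_random_iops)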

    Read the article

  • RHEL raw device (over VMware RDM) performance issues

    - by jifa
    I'm running RHEL 5.3 over vSphere 4.0U1. I configured multiple LUNs on my NetApp (Fibre Channel) storage and added the RDMs to two (Linux) VMs, using the paravirtual SCSI adapter. One LUN is 100 GB in size and is successfully mapped to /dev/sdb on both VMs; 5 more are 500 MB in size (mapped to /dev/sd{c-g}). I also created one partition per device. I have encountered two issues. First, writing directly to /dev/sdb1 gives me ~50 MB/s, while any of /dev/sd{c-g}1 gives me ~9 MB/s. There is no difference in the configuration of the LUNs apart from their size. I am wondering what causes this, but it is not my main problem, as I would settle for 9 MB/s. I created raw devices using udev pretty straightforwardly, with one rule per device: ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N" Writing to any of the new raw devices dramatically slows performance down to just over 900 KB/s. Can anyone point me in a helpful direction? Thanks in advance, -- jifa
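
    One hedged way to narrow this down (destructive to any data on the partition used, so only on a scratch LUN): compare buffered writes against direct I/O on the same block device, since the raw binding bypasses the page cache much like O_DIRECT does. If the direct-I/O run collapses to a similar figure, the slowdown is about unbuffered small writes rather than the raw driver itself, and larger write sizes from the application would be the thing to try.

        dd if=/dev/zero of=/dev/sdc1 bs=1M count=256               # through the page cache
        dd if=/dev/zero of=/dev/sdc1 bs=1M count=256 oflag=direct  # bypassing the page cache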

    Read the article

  • Performance issues with new dedicated server [closed]

    - by Pierre Espenan
    I have just subscribed to a new dedicated server and am getting worse than expected PHP execution performance. Execution times are twice as high as on my old shared server! I'm definitely not an expert at server management, so I'm wondering what I missed. Here is some information that may help you understand what's wrong: My server (in French, but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/ PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/ PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/ Is it normal to get such poor performance after a kernel update and a basic "apt-get install" for apache2 and php? Thanks!
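
    Two quick things worth ruling out first; the commands are standard, but the interpretations are only guesses at likely causes of a halved PHP benchmark on otherwise faster hardware.

        # a "powersave"/"ondemand" CPU governor can keep the cores at a low clock during short benchmarks
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

        # no opcode cache installed means every request recompiles the PHP scripts from scratch
        php -m | grep -i -E 'apc|xcache|eaccelerator'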

    Read the article

  • Performance improvements of VBulletin by integrating the plugins

    - by reggie
    I have amassed quite a lot of plugins and code that hook into vBulletin's plugin system. There are good uses for this system, but since I am now locked in to the vB 3 branch and it is no longer updated, I wonder what kind of performance improvement I would see if I integrated all the plugins into the vBulletin files and turned the plugin system off completely. My site has about 1.5 million posts, about 100,000 threads, and 100,000 members (of which 10,000 are "active"). I estimate I have about 200 plugins from different products in the plugin manager. Has anybody ever tried this move and could share their experiences?

    Read the article
