Search Results

Search found 18148 results on 726 pages for 'performance monitor'.


  • How to speed up Apache

    - by Zen_silence
    We have a server with 8 cores, 16GB of RAM, and RAID 0 SAS 10K drives. Our goal is to use this to serve a fairly simple PHP application quickly. We have tested all the other components and think we have narrowed the bottleneck down to Apache. I am no Apache guru; I have done some research and tested a couple of things, but when I test with JMeter, launching 100 concurrent connections against the server, the first 10-20 come back quickly (30-100ms) while the rest take between 1000ms and 3000ms. Does anyone have any ideas on what to change in our Apache config to make this faster? Right now it is a vanilla install of Apache.
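
    That symptom (the first batch of requests fast, the rest queuing) often points at the MPM's worker limits. A minimal sketch, assuming Apache 2.2 with the prefork MPM (typical for mod_php); the values are illustrative starting points, not tuned numbers:

        # httpd.conf: prefork MPM sizing (illustrative values)
        <IfModule mpm_prefork_module>
            StartServers          20
            MinSpareServers       20
            MaxSpareServers       50
            ServerLimit          256
            MaxClients           256    # named MaxRequestWorkers on Apache 2.4
            MaxRequestsPerChild 4000
        </IfModule>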

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have an HP DL185 G6 server (12-disk model) with the following spec:
    Quad Core Xeon 2.27GHz
    6GB RAM
    HP P212 RAID controller with battery backup
    2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system)
    4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space)
    The operating system is Ubuntu Server 9.10, and both arrays are formatted as ext4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below:

        sudo hdparm -tT /dev/cciss/c0d1p1
        /dev/cciss/c0d1p1:
         Timing cached reads:   15284 MB in 2.00 seconds = 7650.18 MB/sec
         Timing buffered disk reads:   74 MB in 3.02 seconds = 24.53 MB/sec

    For comparison, the RAID-1 array performs as follows:

        sudo hdparm -tT /dev/cciss/c0d0p1
        /dev/cciss/c0d0p1:
         Timing cached reads:   15652 MB in 2.00 seconds = 7834.26 MB/sec
         Timing buffered disk reads:   492 MB in 3.01 seconds = 163.46 MB/sec

    We thought this was because, with no battery, the read/write cache is disabled. We have since bought and installed the battery backup, used the HP bootable CD to change the cache settings to 50% read / 50% write, and checked that cache is enabled on both the drives and the controller. Is there something I'm missing?
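
    A hedged way to double-check the controller and drive cache state from inside the OS, assuming HP's hpacucli utility is installed (exact parameter names vary by version):

        # Show controller, cache, and logical drive details
        hpacucli ctrl all show config detail
        # Set the read/write cache ratio (illustrative)
        hpacucli ctrl slot=0 modify cacheratio=50/50
        # Enable the physical drives' own write cache (understand the data-loss risk first)
        hpacucli ctrl slot=0 modify drivewritecache=enable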

    Read the article

  • master-slave-slave replication: master will become bottleneck for writes

    - by JMW
    Hi, the MySQL database has around 2TB of data, and I have master-slave-slave replication running. The application that uses the database sends read (SELECT) queries to just one of the two slaves and write (DELETE/INSERT/UPDATE) queries to the master. The application does far more reads than writes. If we have a problem with the read (SELECT) queries, we can just add another slave database and tell the application that there is another slave, so reads scale well. Currently the master is running at around 40% disk I/O due to the writes, so I'm thinking about how to scale the database in the future, because one day the master will be overloaded. What could be a solution there? Maybe MySQL Cluster? If so, are there any pitfalls or limitations in switching the database to NDB? Thanks a lot in advance... :)

    Read the article

  • Slow Write Speed on ESXi host

    - by Gregg Leventhal
    I have a free ESXi 5.0 host on a Dell PowerEdge R620 server (32GB RAM, 12-core Xeon) with an internal datastore: a five-disk RAID 5 array of 7.2K drives on a PERC 710 mini RAID controller. I seem to get slow write speeds in the guests, so I checked ESXTOP, and I see a 15MB/s write speed there on the host, which is comparable to the guests. What could be causing such horrible write speeds? Is RAID 5 really this slow to write?
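
    A hedged first check in esxtop is whether the latency comes from the device or from queuing in the kernel (field names per VMware's esxtop documentation; it is also worth confirming the PERC cache is in write-back rather than write-through mode, e.g. not mid battery learn cycle):

        # On the ESXi shell:
        esxtop        # press 'd' for the disk adapter view
        # DAVG/cmd = device latency (points at the array / RAID 5 write penalty)
        # KAVG/cmd = kernel queuing latency (points at queue depth / host config)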

    Read the article

  • Justifying a memory upgrade, take 2

    - by AngryHacker
    Previously I asked a question about which metrics I should measure (e.g. before and after) to justify a memory upgrade, and Perfmon was suggested. I'd like to know which specific Perfmon counters I should be measuring. So far I have:
    PhysicalDisk\Avg. Disk Queue Length (for each drive)
    PhysicalDisk\Avg. Disk Write Queue Length (for each drive)
    PhysicalDisk\Avg. Disk Read Queue Length (for each drive)
    Processor\% Processor Time
    SQLServer:Buffer Manager\Buffer cache hit ratio
    What other ones should I use?
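
    A hedged sketch for capturing those counters from PowerShell, so the before/after numbers land in a log you can open in Perfmon later (counter paths assume a default SQL Server instance; named instances use MSSQL$NAME instead of SQLServer):

        # Sample every 15 seconds, 240 samples (~1 hour), saved as a .blg log
        Get-Counter -Counter @(
            '\PhysicalDisk(*)\Avg. Disk Queue Length',
            '\Processor(_Total)\% Processor Time',
            '\SQLServer:Buffer Manager\Buffer cache hit ratio'
        ) -SampleInterval 15 -MaxSamples 240 |
            Export-Counter -Path C:\perf\baseline.blg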

    Read the article

  • Is my large Windows folder slowing down my machine?

    - by Moses
    I have a problem with my Windows installation running very slowly and my Windows folder being too large; I suspect the two problems are related.
    My Windows folder is 17.4 GB.
    I have 1807 folders prefaced with a $, totalling 2.4 GB.
    My System32 folder is 1.55 GB.
    My Microsoft.NET folder is 654 MB; I don't know what programs, if any, are using it.
    My Service Pack folder is 568 MB.
    The Software Distribution folder is 536 MB.
    The ie8updates folder is 380 MB.
    How can I reduce the size of these folders, and could their size be why I am running so slow?
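
    Before deleting anything, a hedged PowerShell sketch (compatible with PowerShell 2.0) to see which subfolders actually dominate:

        # Report the 15 largest subfolders under C:\Windows, in MB
        Get-ChildItem C:\Windows | Where-Object { $_.PSIsContainer } | ForEach-Object {
            $bytes = (Get-ChildItem $_.FullName -Recurse -Force -ErrorAction SilentlyContinue |
                      Where-Object { -not $_.PSIsContainer } |
                      Measure-Object Length -Sum).Sum
            New-Object PSObject -Property @{ Folder = $_.Name; MB = [math]::Round($bytes/1MB, 1) }
        } | Sort-Object MB -Descending | Select-Object -First 15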

    Read the article

  • My KDE is very slow in certain operations

    - by Pietro
    I have a problem with my Linux installation. The KDE code that deals with directory windows seems extremely slow (in both Dolphin and Konqueror). This happens both when I click on a directory icon and when I open/save a file from many KDE applications; the window can take a minute or more to open. The same happens when I right-click on an icon. CPU usage during this is very low (less than 10%). Am I the only one with this problem, or is it well known and maybe already fixed? Note that I cannot update to a more recent version of openSUSE. Thank you, Pietro
    Configuration:
    Linux version: openSUSE 11.4
    KDE 4.6.0
    System: Dell Precision T3500, Intel Xeon
    Home directory mounted on a remote drive. <-- could this be the reason?
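
    Low CPU with long stalls usually means waiting on I/O, and a remote home directory is a prime suspect. A hedged quick test from a terminal:

        time ls -la ~          # if this alone takes seconds, blame the mount, not KDE
        strace -tt -e trace=file dolphin 2>&1 | tail -50   # shows which file access stalls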

    Read the article

  • How to set the request start time with HAProxy?

    - by Tupy
    I would like to measure the time of the full request stack. New Relic captures time spent in the middleware (e.g. Java, Python, Ruby) and the request time (see https://newrelic.com/docs/features/tracking-front-end-time). For this, I need to set the X-Request-Start header as the request passes through the HAProxy load balancer. The haproxy.cfg should look like:

        backend www
            balance roundrobin
            mode http
            reqadd "X-Request-Start" UNKNOWN_TIME_FUNCTION()
            server servername 192.168.0.1:80 weight 1 check

    Is there a native HAProxy function to replace UNKNOWN_TIME_FUNCTION()?
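
    For what it's worth, a hedged sketch of how this is commonly done on HAProxy 1.5 and later, where http-request set-header accepts log-format variables (%Ts is the accept timestamp in seconds, %ms the millisecond part); older versions may need a different approach:

        backend www
            balance roundrobin
            mode http
            http-request set-header X-Request-Start t=%Ts%ms
            server servername 192.168.0.1:80 weight 1 check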

    Read the article

  • Why is a single thread spread across CPUs?

    - by Marcus Lindblom
    I'm just curious why the scheduler constantly moves an app between CPUs rather than keeping it on one. It looks a bit silly to have 4 cores at 25% rather than one at 100%. Does it have to do with heat, or is it more efficient somehow? Do other OSes do it differently? Insights or links to in-depth material would be nice. (I couldn't find much myself.) Update: By "spread out" I don't mean that it executes on several CPUs at once, but that it is moved from one to the other several times per second, creating the effect that it looks spread out.
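
    A hedged way to observe and experiment with this on Linux (the PID is hypothetical): pin the process to one core and watch where it runs:

        taskset -cp 2 12345                            # restrict PID 12345 to CPU 2
        watch -n1 'ps -o pid,psr,pcpu,comm -p 12345'   # psr = CPU it last ran on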

    Read the article

  • Why can't I get more speed from iperf on Windows XP?

    - by SledgehammerPL
    I test my bandwidth and throughput using iperf (jperf) on a desktop PC with Windows XP. I can't get more than 3Mbit/s to the outside until I change the TCP window size; about 84KB is fine, but I can't force XP to use this value by default. I have tried many magic spells in the Registry and used many TCP optimizers, but nothing works. I will accept that everything is OK when I can reboot the PC, run iperf, and see 18.1Mbit/s, like the Linux box standing right next to my Windows XP box. Is it possible?
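
    A hedged .reg sketch of the XP-era values usually involved (numbers are illustrative for an ~84 KB window; back up the registry first and reboot afterwards):

        Windows Registry Editor Version 5.00

        ; 0x15000 = 86016 bytes (84 KB)
        ; Tcp1323Opts = 3 enables window scaling + timestamps (needed for windows > 64 KB)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
        "TcpWindowSize"=dword:00015000
        "GlobalMaxTcpWindowSize"=dword:00015000
        "Tcp1323Opts"=dword:00000003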

    Read the article

  • MongoDB: ReplicaSet slower than a corresponding Master/Slave config

    - by SecondThought
    Is it true that MongoDB configured as a replica set (let's say two nodes plus an arbiter) will always be slower than the same DB and server specs configured as a master? I've run some tests and found that for a fresh DB the replica set is a little quicker than the master/slave config, but once the DB grows past ~100k records the latter gets much snappier. Am I missing something here? PS: I was testing with the mongoid driver for Ruby.

    Read the article

  • Linux server became extremely slow

    - by Ariel Aharonson
    I have a file-sharing website, and my files are hosted on a server with these specifications:
    32GB RAM
    12 x 3TB drives
    2 x Intel Quad Core E5620
    Individual files on this server are up to 4GB each, and 446GB of the 36TB is in use:

        [root@hosted-by ~]# df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/sda2                         50G  2.7G   44G   6% /
        tmpfs                             16G     0   16G   0% /dev/shm
        /dev/sda1                         97M   57M   36M  62% /boot
        /dev/mapper/VolGroup01-LogVol00   33T  494G   33T   2% /home

    And take a look at this: (screenshot of top output not included) Why is the wa% so high? (I think that is what makes the server so slow.)
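
    A hedged set of commands for pinning down what generates the iowait (iostat comes from the sysstat package):

        top -b -n1 | head -5                   # "wa" in the Cpu(s) line is iowait
        iostat -x 1 5                          # per-device await and %util show the busy disk
        ps -eo state,pid,comm | awk '$1=="D"'  # processes stuck in uninterruptible I/O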

    Read the article

  • IIS7 ASP.NET application - 2 identical apps in 2 identical app pools, 1 is responsive and 1 is not

    - by Ben
    I have an ASP.NET (v4.0) web app that is installed in a virtual directory (as an application) and hosted in its own app pool. This is repeated for each instance of the app (i.e. per customer). The app pools run in Integrated (not Classic) mode with LoadUserProfile set to true, and otherwise default settings. Each instance currently has its own copy of the code/config and its own data folder (basic file reads/writes). One instance of this app runs well (the operation used for comparison takes ~4 seconds). Every other instance runs slowly (from 10-25 seconds for the same operation). If I move a slower instance to the "fastest" app pool, that instance springs to life; if I move the faster instance into a slower app pool, it slows to a crawl. The app pools were created in the same way initially (manually). I later used the PowerShell copy routine to ensure an exact copy of the faster app pool, and still the same behaviour. Comparing the app pool config files shows they are identical barring the virtual directory assignments. There are no shared resources being blocked, as far as I can tell, and I tested that by shutting down the performant app pool and restarting: slow is still slow, and when I restart that app pool (so it's loaded last) it's still faster.
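
    A hedged way to rule out config drift between the two pools (the pool names here are placeholders):

        REM Dump each pool's effective configuration and diff the output
        %windir%\system32\inetsrv\appcmd list apppool "FastPool" /config > fast.txt
        %windir%\system32\inetsrv\appcmd list apppool "SlowPool" /config > slow.txt
        fc fast.txt slow.txt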

    Read the article

  • Set up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build, let's call it, a low-cost Ra*san which would host the images for our social site (many millions); we keep 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB, and 80 KB per image. My idea is to build a server with 24 consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for photo storage. For HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I am thinking of going with consumer MLC SSDs. I have read many endurance reviews and don't see a problem there, as long as we don't rewrite the cells. What do you think about my idea?
    - I'm not sure between RAID 6 and RAID 10 (more IOPS, SSD cost).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an extender backplane?
    If anyone has built something similar I would be happy to get real-world numbers.
    UPDATE: I have bought 12 (plus some spares) OCZ Talos 480GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 (1GB NV cache, manufactured by LSI, with FastPath) controller. I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
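
    A hedged fio sketch for measuring the 4K random-read IOPS target (the device name is a placeholder; run it against a test volume, and drop --direct=1 only if you want page-cache effects included):

        fio --name=randread --filename=/dev/sdX --direct=1 --ioengine=libaio \
            --rw=randread --bs=4k --iodepth=32 --numjobs=8 --runtime=60 \
            --time_based --group_reporting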

    Read the article

  • Client-based program to track response time for online webservice

    - by Søren Haagerup
    I am helping a customer with general IT support, and they have a problem with a hosted web-based system being slow. The provider of the system blames the client's computer, and the client calls me for help. I blame the provider, but it is hard to get them to do something about it without rock-solid evidence. And every time the provider comes around for a TeamViewer session, everything of course runs smoothly. Does there exist a client program or browser plugin that tracks response-time statistics for specific web services?
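
    Even without a dedicated tool, a hedged curl loop on the client machine builds the evidence trail (the URL is a placeholder):

        # Log connect time, time to first byte, and total time once a minute
        while true; do
          date +%FT%T | tr '\n' ' ' >> response-times.log
          curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}\n' \
               https://hosted-service.example.com/ >> response-times.log
          sleep 60
        done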

    Read the article

  • How To Troubleshoot Excess Time From Connect to First Byte?

    - by Gaia
    I measured load times for a WordPress 2.9.2 install on Apache 2.2.3, and I was intrigued by the long periods between connect and first byte for the CSS and image files. The load average is 0.0, 0.0, 0.0 and there is 150MB of free RAM on the VPS. Pingdom results are at http://imagebin.ca/img/6UaiOU.png. How do I gain insight into the possible causes of this problem, and how would I troubleshoot it? Thanks
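
    A hedged first step is watching what the Apache workers are doing while a slow request is in flight, via mod_status (Apache 2.2 syntax; requires mod_status to be loaded, and access should stay restricted):

        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then running curl http://localhost/server-status?auto during a slow page load shows whether workers are busy, idle, or exhausted.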

    Read the article

  • How does MySQL 5.5 and InnoDB on Linux use RAM?

    - by Loren
    Does MySQL 5.5 InnoDB keep indexes in memory and tables on disk? Does it ever do its own in-memory caching of part or whole tables? Or does it rely completely on the OS page cache (I'm guessing it does, since Facebook's SSD cache built for MySQL works at the OS level: https://github.com/facebook/flashcache/)? Does Linux by default use all of the available RAM for the page cache? So if the RAM size exceeds the table size plus the memory used by processes, then when the MySQL server starts and reads the whole table for the first time, it will be read from disk, and from that point on the whole table is in RAM? So using Alchemy Database (SQL on top of Redis, everything always in RAM: http://code.google.com/p/alchemydatabase/) shouldn't be much faster than MySQL, given the same size RAM and database?
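
    For what it's worth, a hedged my.cnf sketch of the settings that govern this (values illustrative): InnoDB caches both data and index pages itself, in its buffer pool, rather than leaning on the OS page cache:

        [mysqld]
        # Main in-memory cache for data AND index pages
        innodb_buffer_pool_size = 12G
        # Bypass the OS page cache so pages aren't cached twice
        innodb_flush_method = O_DIRECT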

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well, but the part about TABLE CACHE makes no sense:

        TABLE CACHE
        Current table_cache value = 900 tables.
        You have a total of 0 tables
        You have 900 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use.
        You should probably increase your table_cache

    When I run SHOW STATUS; I get the following table-related numbers:

        Open_tables = 900
        Opened_tables = 0

    It seems like something is going wrong. I have some extra memory I could use to increase the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning:

        TEMP TABLES
        Current max_heap_table_size = 90 M
        Current tmp_table_size = 90 M
        Of 11032358 temp tables, 40% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables. Note! BLOB and TEXT
        columns are not allowed in memory tables. If you are using these columns,
        raising these values might not impact your ratio of on-disk temp tables.
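
    A hedged way to sanity-check the script's numbers straight from MySQL (variable names as of MySQL 5.1; later versions renamed table_cache to table_open_cache):

        -- Opened_tables should grow over time when the cache misses;
        -- a static 0 next to 900 Open_tables suggests the counters were read oddly
        SHOW GLOBAL STATUS LIKE 'Open%tables';
        SHOW GLOBAL VARIABLES LIKE 'table%cache%';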

    Read the article

  • Large keepalive_requests values are severely slowing-down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we tried several keepalive_requests values (from 10 to 100,000), and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply:

        HTTP/1.1 200 OK
        Server: nginx/1.5.6
        Date: Wed, 23 Oct 2013 12:39:45 GMT
        Content-Type: image/gif
        Content-Length: 43
        Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT
        Connection: keep-alive

    Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.
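
    A hedged way to reproduce the comparison with ApacheBench (the URL is a placeholder; -k enables keep-alive, so keepalive_requests actually comes into play):

        ab -n 200000 -c 100 -k http://server.example.com/pixel.gif
        ab -n 200000 -c 100    http://server.example.com/pixel.gif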

    Read the article

  • How do you debug why Windows is slow?

    - by aaron
    I've got Vista Biz, and when my machine chugs I think it is because of paging, but I never know how to verify this. Process Explorer (procexp) doesn't seem to provide useful information, because it appears that nothing is going on when the chugs happen. Perfmon seems like it has the counters I need, but I'm never sure which counters I should add to cover the information I want. For Perfmon I prefer numbers that are percentages, so I can gauge load. Here are the counters I have up, but they don't always seem to correlate with the chugs:
    - LogicalDisk\% Disk Time
    - Memory\Page Faults/sec (an indicator of lots of paging activity)
    - Processor\% Privileged Time
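
    A hedged tip plus a sketch: Page Faults/sec includes soft faults, which never touch the disk; hard faults are what make a machine chug. Counters like these (typeperf ships with Windows) isolate real paging:

        typeperf "\Memory\Pages/sec" "\Memory\Available MBytes" ^
                 "\Paging File(_Total)\% Usage" -si 5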

    Read the article
