Search Results

Search found 13461 results on 539 pages for 'optimizing performance'.


  • PostgreSQL: performance decrease due to index bloat

    - by Henry-Nicolas Tourneur
    I'm running PgSQL 8.1 on a CentOS 4.4 system (not upgradable, unfortunately). There's a Java app running on top of the PgSQL daemon and we have to reindex the database every 2 months or so. Also important: the database isn't growing. It looks like the bloat is now coming faster than before and this tends to increase. My config is available here, and the autovacuum daemon is enabled and running quite often: pastebin.com/RytNj7dK You can also find the output of this query wiki.postgresql.org/wiki/Show_database_bloat 3 hours after running reindex: http://pastebin.com/raw.php?i=75fybKyd 72 hours after running reindex: http://pastebin.com/raw.php?i=89VKd7PC Does anyone have any idea what I should tweak to get rid of that growing bloat? Thanks for your help. PS: due to the antispam prevention system, I had to remove the http:// prefixes from my first two links.
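
    A hedged sketch of one cheap check before further tuning: on a release as old as 8.1, an undersized free space map (max_fsm_pages) is a classic reason bloat outruns autovacuum, and a database-wide VACUUM VERBOSE reports at the end whether the setting is big enough. The database name below is a placeholder, not taken from the question.

      # Sketch only: run a database-wide VACUUM VERBOSE and read the free-space-map
      # summary printed at the end (psql emits INFO messages on stderr, hence the redirect).
      psql -d mydb -c "VACUUM VERBOSE;" 2>&1 | tail -n 30

    If that summary warns that the page slots needed exceed max_fsm_pages, raising max_fsm_pages (and max_fsm_relations) in postgresql.conf usually lets autovacuum keep up, so the bloat stops reappearing between REINDEX runs.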

    Read the article

  • Apache and AJP performance

    - by user12145
    I have an Apache server sitting in front of two Tomcat app servers (one on the same physical server, the other on a different one) that do time-consuming work (0.5 sec to 10 sec per request). The Apache HTTP server is getting killed by an average of 1 to 2 concurrent requests per second. Both servers have about 2GB of RAM. Is there a way to optimize Apache to handle the load? Any advice is welcome. BalancerMember ajp://localhost:8009/whoisserver BalancerMember ajp://XXX.XX.XXX.XX:8009/whoisserver I keep getting the following in the Apache 2.2 log: [Mon Dec 28 00:31:02 2009] [error] ajp_read_header: ajp_ilink_receive failed [Mon Dec 28 00:31:02 2009] [error] (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)
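
    A hedged sketch of the usual first tuning pass for this symptom: ajp_ilink_receive failures often mean the backend request outlived the proxy's patience or Tomcat ran out of AJP threads. The balancer name, file path and values below are assumptions, not taken from the question.

      # Candidate additions (Apache 2.2 mod_proxy syntax; balancer name, path and values are placeholders),
      # e.g. in the vhost or proxy config that already defines the BalancerMember lines:
      #
      #   ProxyTimeout 60
      #   <Proxy balancer://whoiscluster>
      #       BalancerMember ajp://localhost:8009/whoisserver timeout=60 retry=10
      #       BalancerMember ajp://XXX.XX.XXX.XX:8009/whoisserver timeout=60 retry=10
      #   </Proxy>
      #
      # Then validate and reload:
      apachectl configtest && apachectl graceful

    On the Tomcat side, the matching change is usually raising maxThreads on the AJP connector in server.xml so the two backends can actually absorb Apache's concurrency.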

    Read the article

  • Identifying Hard Drive as performance bottleneck for desktop machines

    - by Programming Hero
    I'm working in a development team where we all use laptops so we can work in multiple locations. These laptops are proving notoriously slow for development work, but at a glance they all look to have the specification for a much faster experience: CPU - Intel Core 2 Duo T7500, Memory - 2GB of RAM. We all experience the biggest delays when the hard drives are being accessed, particularly when swap files are being thrashed. After doing a little bit of profiling, a colleague discovered that our HDDs are seeing read/write speeds of about 10MB/sec. This seems abnormally low and we believe it is the cause of the problem. Sensibly (though somewhat annoyingly) our business won't blow money on faster drives just to see if it fixes the problem; we need to illustrate that this is definitely the problem and that buying some solid-state drives will make it go away. I need some way of showing how 90% of the system resources aren't being used over the course of a day, and that whenever there is utilization, it's all in HDD reads or writes. Are there any tools I could use to provide this information? Does it seem likely the problem is going to be fixed by a faster drive? Should I be looking for alternatives?
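
    A hedged sketch of one way to collect that evidence with nothing but built-in Windows tools; the counter set, interval and output file are assumptions, and the point is simply to chart low CPU alongside a saturated disk over a working day.

      REM Log CPU, disk queue length and disk throughput every 15 seconds for ~8 hours (1920 samples).
      typeperf "\Processor(_Total)\% Processor Time" ^
               "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
               "\PhysicalDisk(_Total)\Disk Bytes/sec" ^
               -si 15 -sc 1920 -o disk.csv

    A sustained Avg. Disk Queue Length well above the number of spindles while the CPU idles is usually the chart that convinces the people holding the budget.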

    Read the article

  • Fine-Grained Performance Reporting on svchost.exe

    - by Randolpho
    This is something that's always bothered me, so I'll ask the serverfault community. I love me some Process Explorer for keeping track of more than just the high-level tasks you get in the Task Manager. But I constantly want to know which of those dozen services hosted in a single process under svchost is making my processor spike. So... is there any non-intrusive way to find this information out?
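
    Two hedged, built-in approaches; the service name below is only an example. First map PIDs to the services each svchost instance hosts, then temporarily break a suspect service out into its own process so Process Explorer (or Task Manager) shows its CPU on its own.

      REM Which services live in which svchost.exe instance (the PID ties back to Process Explorer).
      tasklist /svc /fi "imagename eq svchost.exe"

      REM Move one suspect service into its own svchost process (the space after "type=" is required),
      REM then watch it in isolation; revert with "type= share" when done.
      sc config wuauserv type= own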

    Read the article

  • FreeBSD: Samba performance over Gigabit Ethernet

    - by Axel Gneiting
    I'm using a FreeBSD NAS with RAID-Z. I can read ~300MB/s from the ZFS disks to /dev/null on the box, but only get about 50MB/s over Gigabit Ethernet with SMB to Windows 7 (Samba 3.5.6). Both systems have Intel PCIe NICs and are connected directly. Samba is configured to use AIO and I have already tried to tune TCP/IP: kern.ipc.maxsockbuf=16777216 net.inet.tcp.sendspace=1048576 net.inet.tcp.recvspace=1048576 net.inet.tcp.sendbuf_max=8388608 net.inet.tcp.recvbuf_max=8388608 net.inet.tcp.delayed_ack=0 Any ideas what's causing the bottleneck? I think the link should handle 100 MB/s easily.
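
    A hedged sketch of the next things usually tried in this situation; the buffer and AIO sizes are guesses to experiment with, not verified values, and it is worth first confirming with a raw TCP benchmark that the wire itself really does better than 50MB/s.

      # Candidate additions to the [global] section of smb.conf (Samba 3.5.x parameter names;
      # the values are placeholders to experiment with):
      #
      #   socket options = TCP_NODELAY SO_SNDBUF=262144 SO_RCVBUF=262144
      #   use sendfile = yes
      #   aio read size = 16384
      #   aio write size = 16384
      #
      # Validate the config and restart Samba (rc script name/path may differ per port version):
      testparm -s
      /usr/local/etc/rc.d/samba restart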

    Read the article

  • monitoring TCP/IP performance on Solaris

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck?
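
    A hedged starting point with stock Solaris tooling, nothing specific to this system: listen-queue drops and retransmits are the counters that usually climb first when the stack is the bottleneck.

      # Per-protocol TCP counters; watch ListenDrop/ListenDropQ0 and retransmit counts under load.
      netstat -s -P tcp | egrep -i 'listendrop|retrans'

      # The same counters as kstats, convenient for sampling in a loop.
      kstat -m tcp | grep -i listendrop

      # Current listen-backlog tunables (often raised for high connection-rate servers).
      ndd -get /dev/tcp tcp_conn_req_max_q
      ndd -get /dev/tcp tcp_conn_req_max_q0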

    Read the article

  • How to calculate RAM value as performance per dollar spent

    - by Stucko
    Hi, I'm trying to make decisions on buying a new PC. I have most specifications (processor/graphics card/hard disk) pinned down except for RAM. I am wondering what the best RAM configuration is for the amount of money I'm spending. As the question of "best" is subjective, I'd like to know how I would calculate the value of the RAM sticks sold. 1. (sample) The value of the amount of memory: 1) CORSAIR PC1333 D3 2GB = costs $80 2) CORSAIR PC1333 D3 4GB = costs $190. Would it be better to buy 2 of item 1) instead of 1 of item 2)? Although I would normally choose 1 of item 2), since the difference is only (190-(80*2)) = 30 and I would save 1 DIMM slot, what I need is the value per amount: 1) 80 / 2 = $40 per 1GB 2) 190 / 4 = $47.5 per 1GB. 2. The value of frequency: 1) CORSAIR PC1333 4GB = costs 190 2) CORSAIR PC1600C7 4GB = costs 325. I'm not even sure of the denominator... $ per 1 GHz of speed? 3. The value of latency: 1) CORSAIR CMP1600C8 8-8-8-24 2GBx3 (triple channel) = costs 589 2) CORSAIR CMP1600C7D 7-7-7-20 2GBx3 (triple channel) = costs 880. I'm not sure of the denominator here either. Just for your information, I'd like to get the best out of the money I'm going to spend to put on a 6-DIMM-slot Core i7 motherboard.
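
    A small hedged sketch of the arithmetic using the prices quoted above; dollars per GB is the standard figure, while the frequency-weighted number is just one illustrative way to fold speed in, not an accepted metric.

      # Cost per GB = price / capacity
      echo "2GB @ \$80  -> \$$(echo 'scale=2; 80/2'  | bc) per GB"
      echo "4GB @ \$190 -> \$$(echo 'scale=2; 190/4' | bc) per GB"
      # One possible speed-weighted figure: price / (GB * MHz), purely illustrative
      echo "PC1333 4GB @ \$190 -> \$$(echo 'scale=5; 190/(4*1333)' | bc) per GB*MHz"
      echo "PC1600 4GB @ \$325 -> \$$(echo 'scale=5; 325/(4*1600)' | bc) per GB*MHz"

    On those numbers the 2GB sticks win on dollars per GB ($40 vs $47.50), and the PC1333 kit wins on dollars per GB*MHz; whether the extra frequency or the spare DIMM slot is worth the premium is the subjective part.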

    Read the article

  • NAS Performance issues

    - by Markus
    I bought a Conceptronic CH3MNAS NAS and put in two Western Digital 1.5TB Green drives. I only get a write speed of 6MB/s over the LAN. The configuration of the drives is as follows: RAID 0, EXT2. Is that a normal speed?

    Read the article

  • Vista startup performance

    - by PeterMmm
    After 2 years my Vista (32-bit) machine now boots quite slowly. The Event Viewer tells me two programs are coming up slowly: explorer.exe and svchost.exe. Fine. But what can I do so that these programs come up as quickly as before?

    Read the article

  • Webserver: Performance impact when storing session files on /dev/shm

    - by GetFree
    I have a website running on a typical setup: Linux, Apache, PHP, MySQL. However, what's not typical about it is that it's getting tons of traffic (400,000+ visits a day) and so efficiency is becoming more and more important to me. I'm constantly looking for things I could optimize and, right now, my attention is focused on PHP's session files. There's a hell of a lot of session files constantly being read and created in the /tmp directory. So my question is: is it a good idea to store the session files in /dev/shm (tmpfs) in order to speed things up a little bit?
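
    It generally is a win for exactly this pattern, since it takes the per-request reads and writes of small session files off the disk, at the cost of sessions living in RAM and not surviving a reboot. A hedged sketch using a dedicated tmpfs mount rather than dropping files straight into /dev/shm; the size, path and php.ini location are assumptions.

      # Dedicated tmpfs for session files, capped so sessions can't eat all RAM.
      mkdir -p /var/lib/php/sessions-tmpfs
      mount -t tmpfs -o size=512m,mode=1733 tmpfs /var/lib/php/sessions-tmpfs

      # Make it permanent across reboots.
      echo 'tmpfs /var/lib/php/sessions-tmpfs tmpfs size=512m,mode=1733 0 0' >> /etc/fstab

      # Then point PHP at it (php.ini or a vhost-level override):
      #   session.save_path = "/var/lib/php/sessions-tmpfs"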

    Read the article

  • Hard Drive Fundamentals And Verifying Disk Performance

    - by Agnel Kurian
    Over the past few months, my Windows XP machine has slowed down to a crawl. It takes about 10-15 minutes to go from power-up to reaching a responsive state. I have reasons to believe that this is a result of the hard disk slowing down. Questions: Do hard disks slow down as a result of mechanical wear and tear ...or age? How do I check if my disk has slowed down? Conversely, how can I verify that my disk is indeed running at the speed it's designed to run at? Could drivers be at fault here? Do hard disks come with drivers or does Windows use a generic driver?
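
    Healthy drives don't get meaningfully slower just from age, but a drive that is quietly retrying and remapping sectors does, and that shows up in SMART data long before outright failure; on XP another classic cause of a machine-wide crawl is the IDE channel silently falling back from UDMA to PIO mode after repeated errors (visible under the channel's Advanced Settings in Device Manager). A hedged sketch, assuming smartmontools is installed (it is not built into Windows); device naming varies between builds, so the scan step lists what it can see.

      REM List detectable drives, then dump SMART health and attributes for the system disk.
      smartctl --scan
      smartctl -a /dev/sda

      REM Attributes worth watching: Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count.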

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
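
    Single-stream SFTP is a poor yardstick on a long link: SSH's own flow-control window plus ~200 miles of round-trip time caps each flow well below line rate regardless of the circuit, so the usual first step is to measure raw TCP with iperf, in both directions, single- and multi-stream. A hedged sketch; host name, duration and window size are placeholders.

      # On the far end (listens on TCP 5001 by default):
      iperf -s

      # From the near end: 60-second run, 4 parallel streams, larger TCP window to cover the
      # bandwidth-delay product of the path. Repeat with -P 1 and in the reverse direction.
      iperf -c server.example.com -t 60 -P 4 -w 1M

    If iperf fills the link but SFTP doesn't, the evidence points at the endpoints (TCP window scaling, SSH buffering) rather than the provider; if iperf can't fill it either, that's the report to hand over.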

    Read the article

  • Solaris TCP/IP performance tuning

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck? PS I posted this on StackOverflow originally. One person suggested snoop and dtrace. dtrace seems pretty general - are there any additional pointers on how to use it to diagnose TCP issues?

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's on WordPress 3.4.2 and I have a cron job that is staggered to run five times an hour on another server to pull in the stories and then publish them to the front page of this site. This is so I didn't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
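
    Two hedged levers, neither specific to this install: Googlebot ignores the Crawl-delay directive, so its request rate has to be turned down in Google Webmaster Tools (Site settings, Crawl rate), while robots.txt can keep it off the tens of thousands of archive and pagination links. The paths below are placeholders for whatever the archive URLs look like on this site.

      # Sketch: keep crawlers on the front page and out of the deep archive (adjust paths to the real URL scheme).
      printf '%s\n' \
        'User-agent: *' \
        'Disallow: /page/' \
        'Disallow: /tag/' \
        'Disallow: /author/' \
        > /var/www/html/robots.txt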

    Read the article

  • Poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96GB RAM and a single local hard drive of 278GB divided into 4 partitions: 1. c: 40GB; 3GB free 2. d: 40GB; 37GB free 3. e: 198.32GB; 198.1GB free 4. 100MB (EFI system partition); formatted with GPT. The other is a pizza-box server with 4 cores, 8GB RAM and a single local hard drive of 273GB divided into 3 partitions: 1. c: 136.81GB; 20GB free 2. d: 88.74GB; 87.91GB free 3. e: 47.85GB; 46.91GB free; formatted with MBR. I have two scripts: the first creates 20,000 files in one directory, each file 192KB in size; the second deletes the folder (recursively) and prints how much time it took to delete all the files. The problem is that on the first server (blade) it takes about 2 minutes to delete all 20,000 files, while on the second (pizza box) it takes about 4 seconds!? Both servers have a clean Windows Server 2008 R2 with no special applications running in the background. Any ideas what is going on?
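
    A couple of hedged things worth comparing between the two boxes before blaming hardware; both are classic reasons mass deletes crawl on NTFS, and both are visible with the built-in fsutil (the "set" commands are shown only as the follow-up if the fast box differs from the slow one).

      REM 8.3 short-name generation makes huge directories far slower to create and delete.
      fsutil behavior query disable8dot3

      REM Last-access-time updates add an extra metadata write for every file touched.
      fsutil behavior query disablelastaccess

      REM If the fast server has these disabled and the slow one doesn't:
      REM   fsutil behavior set disable8dot3 1
      REM   fsutil behavior set disablelastaccess 1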

    Read the article

  • SamFS performance problem on file creation

    - by Gregor Longariva
    I have two samfs filesystems (samfs1 and samfs2), both on the same 6130, both with the same config/watermarks/timeouts etc. Creating a file on samfs2 works as it should; on samfs1 it does not. A simple little script shows that every now and then the file creation needs between 11 and 28 seconds:

      stan 12:32 [scratch]# while ( 1 )
      while? echo -
      while? time echo test file
      while? time mv file file2
      while? echo +
      while? sleep 1
      while? end
      0.00u 0.00s 0:00.01 0.0%
      0.00u 0.00s 0:00.00 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.03 0.0%
      +
      0.00u 0.00s 0:23.71 0.0%
      0.00u 0.00s 0:00.14 0.0%
      +
      0.00u 0.00s 0:00.18 0.0%
      0.00u 0.00s 0:00.13 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.05 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.06 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.05 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.05 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.05 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.04 0.0%
      +
      0.00u 0.00s 0:00.04 0.0%
      0.00u 0.00s 0:00.05 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.01 0.0%
      +
      0.00u 0.00s 0:26.05 0.0%
      0.00u 0.00s 0:00.50 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.06 0.0%
      +
      0.00u 0.00s 0:00.00 0.0%
      0.00u 0.00s 0:00.12 0.0%
      +

    Any idea where the problem could be?

    Read the article

  • Getting More Performance out of a MacBook Pro

    - by 5arx
    So I've got a mid-2009 MacBook Pro 13". Integrated GPU so not a games machine but fast enough for doing .Net development in VMs. I love the little thing and wanted to give it a Christmas present so thought I'd mod it up a bit and give it a boost. I'm thinking of swapping out the stock 5400rpm HD with a faster hybrid drive (has 4GB RAM and spins at 7200rpm) but was wondering if any of you had tried or knew of anything else I could change/upgrade/mod to squeeze more out of my laptop. Before you answer though, please be aware that I'm not sure I can run to putting in the 8GB of RAM Apple have suggested :-( Thanks in advance.

    Read the article

  • Refresh Windows network performance counters on the command line

    - by michalv82
    I am testing a USB device connected to a Windows PC. When the device is connected, Windows gets another network interface going through the device. I need to get the bytes transferred for that specific interface; basically I need the data shown in the Networking tab of Task Manager for my interface adapter. I found this question, which helped to get this info: ms windows network activity bytes send and receive in command line. However, I have a problem when I run multiple tests - each time I disconnect and reconnect the device there's another line for the interface, like below. In the Task Manager Networking tab I only see one record for my interface, but I don't know how I can tell from the command line which is the latest and current instance (it's not as if the first or last line is always the current interface; I noticed it's not consistent):

      wmic path Win32_PerfRawData_Tcpip_NetworkInterface Get Name,PacketsReceivedPerSec,PacketsSentPerSec,BytesReceivedPersec,BytesSentPersec

      BytesReceivedPersec  BytesSentPersec  Name                                          PacketsReceivedPersec  PacketsSentPersec
      422666370            6317989292       Intel[R] 82579LM Gigabit Network Connection  2715169                8109643
      49150                375973           My USB Device                                 432                    568
      0                    0                My USB Device _2                              0                      0
      0                    0                My USB Device _3                              0                      0
      0                    0                My USB Device _6                              0                      0
      0                    0                Local Area Connection* 9                      0                      0
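
    One hedged way around the instance-name problem: query the formatted (per-second) counters while a transfer is actually running, so the live instance is the only "My USB Device*" row with non-zero rates, and read its exact Name from that output. The WMI class is standard; the name filter is an assumption based on the device name shown above.

      REM Rates rather than raw totals; only the currently connected instance shows non-zero values mid-transfer.
      wmic path Win32_PerfFormattedData_Tcpip_NetworkInterface get Name,BytesReceivedPersec,BytesSentPersec

      REM Narrow the output to the device's instances only.
      wmic path Win32_PerfFormattedData_Tcpip_NetworkInterface where "Name like 'My USB Device%'" get Name,BytesReceivedPersec,BytesSentPersec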

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup: Single m1.small instance (web server) Single RDS m1.small db Joomla 1.5 Generally, the site is performant, but is fairly low-traffic - say around 50-100 visits / hour. However, at peak time, we see about double that traffic. During peak time, pretty much every day: CPU usage on the web server slowly climbs to 100% CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15 Database connections shoot up to about 140, from a normal average of about 2 or 3 The site is then occasionally unreachable, certainly according to pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the mysql process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate. UPDATE At least, can someone confirm that I'm definitely right in saying that the level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked, and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
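
    Slow-query logging is in fact available on RDS by pointing the log at a table instead of a file, which would confirm or rule out the long-running-query theory directly. A hedged sketch; the parameter values, endpoint and user are placeholders.

      # In the instance's DB parameter group (AWS console): slow_query_log=1, long_query_time=2, log_output=TABLE.
      # The slow log then becomes an ordinary table you can query:
      mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p -e \
        "SELECT start_time, query_time, lock_time, rows_examined, LEFT(sql_text, 120) AS query
           FROM mysql.slow_log ORDER BY query_time DESC LIMIT 20;"

    Long lock_time values on writes to the same table would also support the table-locking theory mentioned above.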

    Read the article

  • SSD performance

    - by Tom
    I recently upgraded to a Kingston HyperX 120GB SSD. When I run CrystalDiskMark my scores look really slow. My motherboard (Gigabyte, socket 775) does not have an option for AHCI in the BIOS, and I'm wondering if that's an issue. The scores were: Seq read 233 / write 176.8; 512K read 224 / write 175.8; 4K read 25 / write 80; 4K QD32 read 23 / write 102. This drive is rated at over 500 MB/s. Any help or input would be greatly appreciated.
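
    The missing AHCI option is very likely the main factor: legacy IDE mode means no NCQ, which is exactly what a 4K QD32 score no better than plain 4K suggests, and a SATA-2-era socket-775 board also caps sequential transfers well below the drive's ~500MB/s rating. One more hedged check that costs nothing is partition alignment, since a misaligned partition also drags down small writes.

      REM StartingOffset should divide evenly by 4096; an offset like 32256 means the old misaligned XP-style layout.
      wmic partition get Name, Index, StartingOffset, BlockSize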

    Read the article
