Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 151/594 | < Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >

  • How does MySQL 5.5 and InnoDB on Linux use RAM?

    - by Loren
    Does MySQL 5.5 with InnoDB keep indexes in memory and tables on disk? Does it ever do its own in-memory caching of part of a table, or of whole tables? Or does it rely completely on the OS page cache (I'm guessing it does, since Facebook's SSD cache built for MySQL was done at the OS level: https://github.com/facebook/flashcache/)? Does Linux by default use all of the available RAM for the page cache? So if RAM size exceeds table size plus the memory used by processes, then when the MySQL server starts and reads the whole table for the first time it will read it from disk, and from that point on the whole table is in RAM? In that case, using Alchemy Database (SQL on top of Redis, everything always in RAM: http://code.google.com/p/alchemydatabase/) shouldn't be much faster than MySQL, given the same amount of RAM and the same database?
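
    For reference, a quick way to check how much caching InnoDB itself is doing (a small sketch, assuming shell access to the MySQL server; InnoDB keeps both data and index pages in its own buffer pool rather than relying only on the OS page cache):

        # Size of InnoDB's own cache for data and index pages:
        mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

        # Logical read requests vs. reads that actually had to go to disk:
        mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"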

    Read the article

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to be from the file: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale: My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions. Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.

    Read the article

  • setup lowcost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's name it a lowcost Ra*san) a server which would host the images for our social site (many millions of them); we keep 5 sizes of every photo, at roughly 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which would give me some 5 TB of disk space for the photo storage. For HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I'm thinking of going with consumer MLC SSDs. I have read many endurance reviews and don't see a problem there as long as we don't rewrite the cells often. What do you think about my idea? I'm not sure between RAID 6 and RAID 10 (more IOPS, but higher SSD cost). Is ext4 OK for the filesystem? Would you use 1 or 2 RAID controllers, with an expander backplane? If anyone has built something similar I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
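
    As a back-of-the-envelope sketch (pure arithmetic, no benchmark numbers), the capacity and per-drive load the two RAID levels imply:

        DRIVES=24; SIZE_GB=240
        echo "RAID 6 usable:  $(( (DRIVES - 2) * SIZE_GB )) GB"   # 22 data drives -> 5280 GB, roughly the 5 TB above
        echo "RAID 10 usable: $(( DRIVES / 2 * SIZE_GB )) GB"     # 12 mirrored pairs -> 2880 GB
        echo "Read IOPS per drive for a 150k target: $(( 150000 / DRIVES ))"   # ~6250, modest for SSD random reads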

    Read the article

  • Large keepalive_requests values are severely slowing-down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we tried several keepalive_requests values (from 10 to 100,000), and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply: HTTP/1.1 200 OK Server: nginx/1.5.6 Date: Wed, 23 Oct 2013 12:39:45 GMT Content-Type: image/gif Content-Length: 43 Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT Connection: keep-alive Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.
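
    For reference, a minimal sketch of how the comparison could be reproduced with ApacheBench (hostname and path are placeholders); run it once per keepalive_requests setting and compare requests per second:

        # -k keeps connections alive, -n total requests, -c concurrent clients
        ab -k -n 100000 -c 100 http://your-server/pixel.gif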

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well but the part about TABLE CACHE makes no sense: TABLE CACHE Current table_cache value = 900 tables. You have a total of 0 tables You have 900 open tables. Current table_cache hit rate is 1% , while 100% of your table cache is in use. You should probably increase your table_cache When I do SHOW STATUS; I get the following table-related numbers: Open_tables = 900 Opened_tables = 0 It seems like something is going wrong. I have some extra memory I could use on increasing the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning: TEMP TABLES Current max_heap_table_size = 90 M Current tmp_table_size = 90 M Of 11032358 temp tables, 40% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables. Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables.
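
    For what it's worth, the tuner's hit rate can be cross-checked against the server's own counters (a small sketch, assuming shell access; Opened_tables is cumulative since startup, so it only means much when read alongside Uptime):

        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%tables';"     # Open_tables vs. Opened_tables
        mysql -e "SHOW GLOBAL STATUS LIKE 'Uptime';"
        mysql -e "SHOW VARIABLES LIKE 'table%cache%';"        # table_cache (table_open_cache on newer servers)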

    Read the article

  • VMWare Pre-Allocated vs. Growable, which is faster?

    - by tekiegreg
    In an effort to increase speed in my VMware setup, I was thinking about converting a 32-bit Windows XP guest I have from growable to pre-allocated. I'm currently running VMware Workstation 7 with 64-bit Windows 7 as the host. Specs: dual-core CPU, one core allocated to the guest; 4GB of RAM, 2GB to the guest; HD max capacity is 500GB, 150GB allocated to the guest (I have 300GB left and don't mind parting with the space; the guest disk currently uses 80GB, so converting would obviously add another 70GB); the HD the guest runs on is separate from the host OS drive. Either that, or any other suggestions you might have would be appreciated. Thanks!
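
    A hedged sketch of the conversion itself: vmware-vdiskmanager ships with Workstation and can rewrite a growable disk as a pre-allocated one. The -t type codes below are from memory, so run the tool with no arguments first to confirm them on your install; file names are placeholders.

        # -r converts a source .vmdk into a new disk of the given type:
        #   0 growable, single file     2 preallocated, single file
        #   1 growable, split files     3 preallocated, split files
        vmware-vdiskmanager -r "WinXP-growable.vmdk" -t 2 "WinXP-preallocated.vmdk"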

    Read the article

  • SQL Management Studio is painfully slow on 32-bit Windows 7

    - by Sergei
    I've been having issues running anything in SQL Management Studio on Win 7. Basically, doing anything through the Management Studio interfaces completely freezes it up for a few minutes. Running a query is nearly impossible because it takes nearly 2 minutes just for the IDE to parse it and another minute to run it when the query itself completes instantaneously outside of the IDE. I'm not even going to go into the query designer. Anything with heavy user interaction such as editing a row in the result set where i have to click a cell freezes up the front-end. I tried reinstalling to no avail. Also tried running in compatibility mode without any difference whatsoever. Anybody had a similar experience? I'm running SQL Management Studio 2008 version 10.0.2531.0 on 32-bit Windows 7. Connecting to a remote SQL Server instance (2008 R2). Thanks.

    Read the article

  • How to interpret IOZone results?

    - by homer5439
    Here are the results of running IOZone on an ext3 filesystem on an LVM volume residing on a SAN LUN (it was run with 5 parallel processes). "Throughput report Y-axis is type of test X-axis is number of processes" "Record size = 4 Kbytes " "Output is in Kbytes/sec" " Initial write " 81628.55 " Rewrite " 83354.72 " Read " 115595.02 " Re-read " 119306.09 " Reverse Read " 47684.20 " Stride read " 10011.09 " Random read " 16751.27 " Mixed workload " 5659.77 " Random write " 1661.85 " Pwrite " 36030.83 Now this is all nice and dandy, but my question is: how do I know whether the values are as good as they could be, or whether there is something to tweak (and if so, what)? The actual use I will have for that logical volume is as a virtual disk for a VM.
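
    One way to judge whether those figures are reasonable is to run the same pattern with a second tool and compare; a sketch using fio (path, size and runtime are placeholders, chosen only to mirror the 4K/5-process setup above):

        fio --name=randread --directory=/mnt/lvtest --rw=randread --bs=4k --size=1g \
            --numjobs=5 --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting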

    Read the article

  • How do you debug why Windows is slow?

    - by aaron
    I've got Vista Biz, and when my machine chugs I think it is because of paging, but I never know how to verify this. Procexp doesn't seem to provide useful information, because it appears that nothing is going on when the chugs happen. Perfmon seems like it has the counters I need, but I'm never sure which counters I should add to cover the information I want. For perfmon, I prefer numbers that are percentages, so I can gauge load. Here are the counters I have up, but they don't always seem to correlate with the chugs: % Disk Time (logical), Page Faults/sec (an indicator of heavy paging activity), and Processor\% Privileged Time.

    Read the article

  • Multiple columns in a single index versus multiple indexes

    - by Tim Coker
    The short version of my question is: what's the difference between three indexes each indexing a single column and one index indexing three columns? Background follows. I'm primarily a programmer but have to do DBA work because we don't have a DBA. I'm evaluating our indexes against the queries run on a particular table. The table has 3 columns that I'm often filtering against or getting the max value of. Most of the time the queries look like select max(col_a) from table where col_b = 'avalue' or select col_c from table where col_b = 'avalue' and col_a = 'anothervalue' All columns are independently indexed. My question is: would I see any difference if I had an index that indexed col_b and col_a together, since they can appear in a WHERE clause together?
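
    For illustration, the composite-index variant looks like this (table and column names are taken from the question; the database and index names are invented). A composite index on (col_b, col_a) serves a filter on col_b alone and on col_b plus col_a, but not on col_a alone:

        # "mydb" is a placeholder database name.
        mysql mydb -e "CREATE INDEX idx_colb_cola ON \`table\` (col_b, col_a);"

        # Compare the plans before and after adding it:
        mysql mydb -e "EXPLAIN SELECT MAX(col_a) FROM \`table\` WHERE col_b = 'avalue';"
        mysql mydb -e "EXPLAIN SELECT col_c FROM \`table\` WHERE col_b = 'avalue' AND col_a = 'anothervalue';"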

    Read the article

  • Can I change a MySQL table back and forth between InnoDB and MyISAM without any problems?

    - by Daniel Magliola
    I have a site with a decently big database, 3 GB in size, with a couple of tables holding a dozen million records each. It's currently 100% on MyISAM, and I have the feeling that the server is going slower than it should because of too much locking, so I'd like to try moving to InnoDB and see if that makes things better. However, I need to do that directly in production, because without load this obviously doesn't make any difference. I'm a bit worried about it, though, because InnoDB actually has the potential to be slower, so the question is: if I convert all tables to InnoDB and it turns out I'm worse off than before, can I go back to MyISAM without losing anything? Can you think of any problems I might encounter? (For example, I know that InnoDB stores all data in ONE big file that only gets bigger; can this be a problem?) Thank you very much, Daniel
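
    For reference, the conversion in either direction is a single statement (table and database names below are placeholders); it rebuilds the table, so expect locking and a long run on dozen-million-row tables. On the "one big file" worry: enabling innodb_file_per_table before converting keeps each table in its own .ibd file instead of the shared ibdata file, which grows but never shrinks.

        mysql mydb -e "ALTER TABLE big_table ENGINE=InnoDB;"    # same statement with ENGINE=MyISAM to go back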

    Read the article

  • Diagnostic high load sys cpu - low io

    - by incous
    A Linux server running Ubuntu 12.04 LTS with LAMP has shown strange behaviour since last week: CPU %sys is higher than before, nearly equal to %usr (previously %sys was small compared with %usr), and IO has dropped to a half or a third of what it was the week before. I have tried to diagnose the processes/CPU with several commands (top/vmstat/mpstat/sar), and it looks like the timer/resched interrupts may be a bit high. I don't know what that means; I'm open to any suggestion.
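
    A few standard places to look when %sys climbs and interrupts/reschedules are suspected (all stock sysstat/procfs tools; a sketch rather than a recipe):

        mpstat -P ALL 1 5                  # is the system time on one core or spread across all?
        vmstat 1 5                         # "in" (interrupts/s) and "cs" (context switches/s)
        pidstat -w 5                       # per-process voluntary/involuntary context switches
        watch -d -n1 cat /proc/interrupts  # which interrupt source is actually climbing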

    Read the article

  • GNOME/KDE Linux entirely in RAM?

    - by František Žiacik
    Hi. I'd like to have a very responsive Linux system, but I also like modern, elegant and functional desktops like GNOME or KDE, not lightweight ones like Xfce or LXDE. I once tried Puppy Linux and was impressed by how responsive it was when I clicked an application. In my Ubuntu install it bothers me a lot when I click Chromium and must wait 5 seconds of disk activity until the main window appears. The same goes for Evolution or anything else. Is it possible to make GNOME or KDE run entirely in RAM like Puppy Linux does (of course, I mean frequently used applications and services, not everything), if you have enough of it? I don't care if boot time is longer. I tried using "preload" but it didn't help much.
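
    One possible hack, assuming the third-party vmtouch utility is installed (the paths below are only examples): pre-fault, and optionally lock, the binaries and libraries of the applications you care about into the page cache after boot.

        vmtouch -vt /usr/bin/chromium-browser /usr/lib/chromium-browser   # -t touches the files into RAM
        # add -l to also mlock() them so they are never evicted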

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems: root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync 146+1 records in 146+1 records out 307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s real 0m23.680s user 0m0.000s sys 0m0.932s When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~ 1. When doing the above I've also noticed very weird iostat results: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50 sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 And it stays this way until the file is actually written to the device (flushed out of swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first. For some reason the wear is different between the 2 members of the array: root [~]# smartctl --attributes /dev/sda | grep -i wear 177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180 root [~]# smartctl --attributes /dev/sdb | grep -i wear 177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005 The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as shown by the first iteration of iostat (the averages since reboot): Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64 sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97 md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00 What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below). I have checked /proc/mdstat and the array is Optimal (second listing below). root [~]# fdisk -ul /dev/sda Disk /dev/sda: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00026d59 Device Boot Start End Blocks Id System /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary.
    /dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect root [~]# fdisk -ul /dev/sdb Disk /dev/sdb: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003dede Device Boot Start End Blocks Id System /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect /proc/mdstat: root # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 204736 blocks super 1.0 [2/2] [UU] md2 : active raid1 sdb3[1] sda3[0] 404750144 blocks super 1.0 [2/2] [UU] md1 : active raid1 sdb1[1] sda1[0] 2096064 blocks super 1.1 [2/2] [UU] unused devices: Running a read test with hdparm: root [~]# hdparm -t /dev/sda /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec root [~]# hdparm -t /dev/sdb /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec But look at what happens if I add --direct: root [~]# hdparm --direct -t /dev/sda /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec root [~]# hdparm --direct -t /dev/sdb /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec Both tests increase, but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner, I've done another read test, with dd this time, and it confirms the hdparm test: root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE: Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he expected, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
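
    One asymmetry that might be worth ruling out before the RMA (a sketch, not a diagnosis): the two bays negotiating different SATA link speeds, which by itself can produce a large sequential-read gap between otherwise identical drives.

        smartctl -a /dev/sda | grep -i 'sata version'
        smartctl -a /dev/sdb | grep -i 'sata version'
        dmesg | grep -i 'sata link up'     # e.g. "SATA link up 6.0 Gbps" vs "3.0 Gbps" per port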

    Read the article

  • How to tell if linux disk IO is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application producing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3 SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to ext3/VxFS (Veritas File System) functionality (and/or how it interacts with the OS) is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Revised the title to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behavior like I am observing: Solving the ext3 latency problem. Additional information: Red Hat Enterprise Linux Server release 5.3 (Tikanga), kernel 2.6.18-194.32.1.el5. The primary application disk is a fibre-channel SAN: lspci | grep -i fibre 14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03) Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0 cat /sys/block/VxVM123456/queue/scheduler noop anticipatory [deadline] cfq
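
    To tie a multi-second pause to writeback rather than the application itself, a sketch with standard tools (run while a stall is happening):

        iostat -x 1                                       # await/%util on the SAN LUN during the pause
        pidstat -d 1                                      # per-process read/write rates
        grep -i -E 'dirty|writeback' /proc/meminfo        # size of the writeback backlog
        sysctl vm.dirty_ratio vm.dirty_background_ratio   # thresholds that govern that backlog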

    Read the article

  • scp vs netatalk, samba, and/or vsftpd with External USB drive

    - by KitsuneYMG
    I set up an Ubuntu server machine to share an ext2-formatted external USB drive. When attempting to copy a single 275 MB file from said device through netatalk, I get an estimated download time of around 45 minutes. With Samba and FTP (using vsftpd) I get 1+ hours! Using scp to copy the file completes the download within 5 minutes. Another option, ssh+cp from the external device to ~ and then using netatalk to grab it from there, results in a total time of around 7 minutes. Does anyone have a clue what is misconfigured? Assuming that nothing is, is there any fs/pseudo-fs that would use the internal HDD as an intermediate location/onion-layer for the external HDD (for reads only)? Details: AppleVolumes.default: /mnt/ext USB allow:username cnidscheme:cdb options:usedots,upriv

    Read the article

  • Important hardware components to avoid bottlenecks/improve speed on a laptop?

    - by joelhaus
    I'm looking for a powerful general-use (including web development) laptop running Windows. Price points seem to be all over the place: many less powerful machines are priced much higher than machines with better specs. How does one navigate this market? Are there any unpublished/under-publicized specs or bottlenecks you look for? Understanding that hardware improves over time, is there an efficient ratio that can be used (or something similar, like the Windows Experience Index) which indicates how powerful a system is? Thanks in advance! P.S. Here is an example from a laptop released on September 17, 2010. Can anyone pick apart these specs? Is there missing information you would be looking for? OS: Win 7; Display: 16.4" LED backlit; Processor: Intel Core i7-740QM, 6MB L3 Cache; RAM: 6GB DDR3 1333MHz (8GB max.); Graphics: NVIDIA GeForce GT 425M (1 GB of dedicated DDR3); HDD: 500GB 7200RPM SATA hard drive; Removable disc: Blu-ray with DVD±R/RW; Misc: webcam/mic/speakers/Bluetooth (via Sony Vaio VPC-F137FX/B)

    Read the article

  • How to fix audio/game stuttering in Google Chrome's Flash plug-in?

    - by Simon Belmont
    I'm having an issue. Windows XP, running the latest Chrome 23 build. I'm using Flash 11.5 built into Chrome (Pepper Flash). It runs horribly. Chrome 22 did not have this issue as far as I recall. What a shame. YouTube videos stutter badly and after a while, they begin to lag and lose sync with the video. I disabled Pepper Flash and tested HTML5 video in YouTube and it was smooth as glass. Additionally, certain Flash based games are almost unusable now. The plug-in is using 100% CPU and it lags horribly in these games. Google/Adobe, please fix this. I shouldn't have to disable the built-in Flash plug-in (with added sandboxing security) and use regular Flash to resolve this. Short of waiting for an update to Chrome, does anyone have a better solution to fixing this? I am all ears.

    Read the article

  • 503 error Varnish cache when eAccelerator is started

    - by Netismine
    I have a Magento installation running on an x-large Amazon server. I have Varnish, memcached and eAccelerator installed on the server. At first everything was working fine, but then at some point it stopped working, throwing a 503 error with the Varnish cache stamp below it. When I disable eAccelerator, the error is gone and the site works. This is my eAccelerator config: extension="eaccelerator.so" eaccelerator.shm_size = "512" eaccelerator.cache_dir = "/var/cache/php-eaccelerator" eaccelerator.enable = "1" eaccelerator.optimizer = "1" eaccelerator.debug = 0 eaccelerator.log_file = "/var/log/httpd/eaccelerator_log" eaccelerator.name_space = "" eaccelerator.check_mtime = "1" eaccelerator.filter = "" eaccelerator.shm_ttl = "0" eaccelerator.shm_prune_period = "0" eaccelerator.shm_only = "0" eaccelerator.allowed_admin_path = "" Any hints?

    Read the article

  • Email server for huge number of subscriber

    - by bogha
    My company is thinking of providing a free email account for each of its customers. As a new company, we assume that our corporate email system will be MS Exchange Server, which will support about 1,000 employees. They are asking why we don't simply add the customer list as additional Exchange users. My suggestion was to separate the two systems: for the corporate side we can use Exchange, but for customers (around 30,000) we should use a Linux-based system. My only argument was that Linux can be used for an enterprise service like this where Microsoft may fall short. What do you suggest? And if you agree with me on choosing Linux as the server platform, what would you suggest as an alternative to Exchange on Linux? Thank you.

    Read the article

  • How can I simulate a slow machine in a VM?

    - by Nathan Long
    I'm testing an AJAX-heavy web application. I develop on a new Mac, but I use VMware Fusion (currently 3.1.2) to test in Windows XP, using IETester to simulate older versions of IE. This lets me see how older IE versions would render the site, but I'd also like to see how the site would perform on an older machine. I see in the VM's settings that I can decrease the RAM; is there a way to also dial down the processor speed? How else might I simulate a slow machine? (I am also going to look into how to simulate a slow internet connection.)

    Read the article

  • Why change net.inet.tcp.tcbhashsize in FreeBSD?

    - by sh-beta
    In virtually every FreeBSD network tuning document I can find: # /boot/loader.conf net.inet.tcp.tcbhashsize=4096 This is usually paired with some unhelpful statement like "TCP control-block hash table tuning" or "Set this to a reasonable value." man 4 tcp isn't much help either: tcbhashsize Size of the TCP control-block hash table (read-only). This may be tuned using the kernel option TCBHASHSIZE or by setting net.inet.tcp.tcbhashsize in the loader(8). The only document I can find that touches on this mysterious thing is the Protocol Control Block Lookup subsection beneath Transport Layer in Optimizing the FreeBSD IP and TCP Stack, but its description is more about potential bottlenecks in using it. It seems tied to matching new TCP segments to their listening sockets, but I'm not sure how. What exactly is the TCP Control Block used for? Why would you want to set its hash size to 4096 or any other particular number?
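
    For reference, both the current value and a rough idea of how many control blocks the hash has to cover can be read from a running system (stock FreeBSD commands):

        sysctl net.inet.tcp.tcbhashsize
        netstat -an -p tcp | wc -l         # rough count of TCP endpoints the hash has to index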

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites - just a few - take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problems, as fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the Mozilla site, with a 10M ADSL connection. I tried many tweaks I found by googling, like disabling IPv6 and changing the HTTP pipelining settings in Firefox's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck; the findings are in the screenshot. Well guys, any idea what the issue is? And how to solve it? I repeat, this only happens with Firefox (3.6.3 and prior versions), for a few sites only (even sites with many more requests, images, JavaScript files and stylesheets work fine), and the HTTP pipelining and IPv6 tweaks in about:config didn't work. Thanks

    Read the article
