Search Results

Search found 14253 results on 571 pages for 'css parsing'.

Page 394/571

  • HAProxy MySQL Failover is not starting

    - by thiesdiggity
    I am trying to set up HAProxy with MySQL failover on Ubuntu. I used a setup similar to this ServerFault question, but I am getting the following error when starting HAProxy:

        [ALERT] 341/220001 (17405) : parsing [/etc/haproxy/haproxy.cfg:29] : unknown option 'mysql-check'.
        [ALERT] 341/220001 (17405) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
        [ALERT] 341/220001 (17405) : Fatal errors found in configuration.

    I even tried installing the latest version of HAProxy (1.4.22). Does anyone know how to fix this? I have Googled the heck out of it and can't find any solution. Thanks for your time!
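
    option mysql-check only exists in HAProxy 1.4 and newer, so this error usually means the binary actually being started is an older one (for example, the stock distribution package rather than the 1.4.22 installed by hand); running haproxy -v against the path the init script uses is worth checking. For reference, a minimal sketch of a mysql-check backend; the addresses are placeholders, and haproxy_check is an assumed MySQL user created with no password and allowed to connect from the HAProxy host:

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            option mysql-check user haproxy_check
            balance roundrobin
            server db01 10.0.0.1:3306 check
            server db02 10.0.0.2:3306 check backup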

  • Apache ProxyPass Missing Images

    - by EpicOfChaos
    I have an Apache server that sits in front of my Glassfish server. mydomain.com goes directly to my static files on Apache; if you hit the subdomain forum.mydomain.com, it goes to the Glassfish webapp forum/ at 127.0.0.1:8080/forum/. The proxy seems to work, in that it takes me to the web app, but all of the images are missing! Here is my virtual host setup:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias subdomain.mydomain.com mydomain.com
            DocumentRoot "/usr/local/apache/htdocs"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName forum.mydomain.com
            # any logging config, etc, that you need
            ProxyPass / http://127.0.0.1:8080/forum/
            ProxyPassReverse / http://127.0.0.1:8080/forum/
        </VirtualHost>

    And this is what I am seeing in the access log:

        [15/Jan/2012:03:28:02 +0000] "GET /forums/list.page HTTP/1.1" 200 12861
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/logo.jpg HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/styles/style.css?1326582403934 HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_recentTopics.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_search.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_members.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/styles/en_US.css?1326582403934 HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_groups.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_big.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_login.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/whosonline.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_register.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/ping_session.jsp HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_lock.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_new.gif HTTP/1.1" 404 1075

    Any ideas why the images are not working?
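
    Judging from the log, this looks like a path-mapping problem rather than missing files: with ProxyPass / http://127.0.0.1:8080/forum/, a browser request for /forum/templates/... is forwarded to Glassfish as /forum/forum/templates/..., which it answers with 404. Since the app writes its links under /forum/, one hedged fix (a sketch, not tested against this setup) is to proxy only that prefix and optionally redirect the root:

        ProxyPass        /forum/ http://127.0.0.1:8080/forum/
        ProxyPassReverse /forum/ http://127.0.0.1:8080/forum/
        # optional: send visitors hitting the bare host into the forum
        RedirectMatch    ^/$ /forum/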

  • I can't view my website in Internet Explorer after moving from Apache to nginx, while it works in Firefox

    - by sauravdon
    After installing the nginx web server I opened my website in Firefox, and it works fine there: the template looks good. In Internet Explorer, however, the page renders badly: images are not loading and the content is unstyled, as if the CSS is not being applied. Before this I was running the website on Apache, at a different IP address, and then moved to nginx. Please help me sort out this problem. Thanks, Saurav
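
    One hedged guess worth ruling out first: Internet Explorer refuses to apply stylesheets that are not served with a text/css Content-Type, while Firefox is more forgiving, so a site that styles correctly in Firefox but not in IE right after an Apache-to-nginx move often just means nginx isn't mapping file extensions to MIME types. Check a stylesheet's response header (the URL is a placeholder) and make sure mime.types is included:

        curl -I http://yoursite.example/css/style.css   # look at the Content-Type line

        # in nginx.conf, inside the http block:
        http {
            include       mime.types;                   # maps .css to text/css, .js to application/javascript, ...
            default_type  application/octet-stream;
        }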

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that after 55 hits per second (according to blitz.io) Varnish starts giving out timeouts. CPU usage at this point is hardly 1%, and free memory stays above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts; beyond that, CPU usage goes to 100% and it stops responding. Can you help me optimize it to handle more? As I understand it, nginx has nothing to do here, so I am not including its config.

    php-fpm config:

        listen = /tmp/php5-fpm.sock
        listen.allowed_clients = 127.0.0.1
        user = nginx
        group = nginx
        pm = dynamic
        pm.max_children = 150
        pm.start_servers = 7
        pm.min_spare_servers = 2
        pm.max_spare_servers = 15
        pm.max_requests = 500
        slowlog = /var/log/php-fpm/www-slow.log
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on

    APC config:

        extension = apc.so
        apc.enabled=1
        apc.shm_size=512MB
        apc.num_files_hint=0
        apc.user_entries_hint=0
        apc.ttl=7200
        apc.use_request_time=1
        apc.user_ttl=7200
        apc.gc_ttl=3600
        apc.cache_by_default=1
        apc.filters
        apc.mmap_file_mask=/tmp/apc.XXXXXX
        apc.file_update_protection=2
        apc.enable_cli=0
        apc.max_file_size=1M
        apc.stat=1
        apc.stat_ctime=0
        apc.canonicalize=0
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix =upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.include_once_override=0
        apc.lazy_classes=0
        apc.lazy_functions=0
        apc.coredump_unmap=0
        apc.file_md5=0
        apc.preload_path

    Varnish VCL:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .connect_timeout = 6s;
            .first_byte_timeout = 6s;
            .between_bytes_timeout = 60s;
        }

        acl purgehosts {
            "localhost";
            "127.0.0.1";
        }

        # Called after a document has been successfully retrieved from the backend.
        sub vcl_fetch {
            # Uncomment to make the default cache "time to live" is 5 minutes, handy
            # but it may cache stale pages unless purged. (TODO)
            # By default Varnish will use the headers sent to it by Apache (the backend server)
            # to figure out the correct TTL.
            # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
            set beresp.ttl = 24h;

            # Strip cookies for static files and set a long cache expiry time.
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset beresp.http.set-cookie;
                set beresp.ttl = 24h;
            }

            # If WordPress cookies found then page is not cacheable
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set beresp.cacheable = false; # versions less than 3
                # beresp.ttl > 0 is cacheable so 0 will not be cached
                set beresp.ttl = 0s;
            } else {
                # set beresp.cacheable = true;
                set beresp.ttl = 24h;  # cache for 24hrs
            }

            # Varnish determined the object was not cacheable
            # if ttl is not > 0 seconds then it is cacheable
            if (!beresp.ttl > 0s) {
                # set beresp.http.X-Cacheable = "NO:Not Cacheable";
            } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # You don't wish to cache content for logged in users
                set beresp.http.X-Cacheable = "NO:Got Session";
                return(hit_for_pass);  # previously just pass but changed in v3+
            } else if (beresp.http.Cache-Control ~ "private") {
                # You are respecting the Cache-Control=private header from the backend
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                return(hit_for_pass);
            } else if (beresp.ttl < 1s) {
                # You are extending the lifetime of the object artificially
                set beresp.ttl = 300s;
                set beresp.grace = 300s;
                set beresp.http.X-Cacheable = "YES:Forced";
            } else {
                # Varnish determined the object was cacheable
                set beresp.http.X-Cacheable = "YES";
                if (beresp.status == 404 || beresp.status >= 500) {
                    set beresp.ttl = 0s;
                }
                # Deliver the content
                return(deliver);
            }
        }

        sub vcl_hash {
            # Each cached page has to be identified by a key that unlocks it.
            # Add the browser cookie only if a WordPress cookie found.
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set req.hash += req.http.Cookie;
                hash_data(req.http.Cookie);
            }
        }

        # vcl_recv is called whenever a request is received
        sub vcl_recv {
            # remove ?ver=xxxxx strings from urls so css and js files are cached.
            # Watch out when upgrading WordPress, need to restart Varnish or flush cache.
            set req.url = regsub(req.url, "\?ver=.*$", "");

            # Remove "replytocom" from requests to make caching better.
            set req.url = regsub(req.url, "\?replytocom=.*$", "");

            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;

            # Exclude this site because it breaks if cached
            if (req.http.host == "sr.ituts.gr") {
                return(pass);
            }

            # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
            set req.grace = 120s;

            # Strip cookies for static files:
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset req.http.Cookie;
                return(lookup);
            }

            # Remove has_js and Google Analytics __* cookies.
            set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
            # Remove a ";" prefix, if present.
            set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
            # Remove empty cookies.
            if (req.http.Cookie ~ "^\s*$") {
                unset req.http.Cookie;
            }

            if (req.request == "PURGE") {
                if (!client.ip ~ purgehosts) {
                    error 405 "Not allowed.";
                }
                # previous version ban() was purge()
                ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
                error 200 "Purged.";
            }

            # Pass anything other than GET and HEAD directly.
            if (req.request != "GET" && req.request != "HEAD") {
                return(pass);
            }
            /* We only deal with GET and HEAD by default */

            # remove cookies for comments cookie to make caching better.
            set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

            # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't
            if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
                return(pipe);
            }
            # don't cache authenticated sessions
            if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
                return(lookup);
            }
            # don't cache ajax requests
            if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
                return(pass);
            }
            return(lookup);
        }

    Varnish daemon options:

        DAEMON_OPTS="-a :80 \
                     -T 127.0.0.1:6082 \
                     -f /etc/varnish/ituts.vcl \
                     -u varnish -g varnish \
                     -S /etc/varnish/secret \
                     -p thread_pool_add_delay=2 \
                     -p thread_pools=8 \
                     -p thread_pool_min=100 \
                     -p thread_pool_max=1000 \
                     -p session_linger=50 \
                     -p session_max=150000 \
                     -p sess_workspace=262144 \
                     -s malloc,5G"

    I'm not sure where to start: should I first optimize php-fpm and then move on to Varnish, or is php-fpm already at its maximum, so I should start looking for the problem in Varnish?
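
    Given that CPU and memory are idle when the timeouts begin, a hedged first step is to check whether Varnish is queueing or dropping sessions rather than running out of hardware; counter names vary by Varnish version, so treat the grep pattern as a starting point:

        varnishstat -1 | egrep 'n_wrk|queue|drop|overflow'

    If queued or overflow counters climb during the blitz.io run, the bottleneck is thread or backend capacity (thread_pool_min/max, backend connection limits), not PHP.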

  • Nginx 500 Internal Server error on subdirectory

    - by juyoung518
    I'm getting a 500 Internal Server error, but only in subdirectories. For example, if my website is example.com, then example.com/index.php works, but example.com/phpbb/index.php doesn't: it just turns up a blank PHP page, and the HTTP headers show a 500 Internal Server error. If I enter example.com/phpbb/index.php/somedirectory, the index.php of my root directory shows up. This is all very strange. I have tried searching, and even re-installing nginx, but nothing fixed it. I'm sure the DNS is configured right.

    My nginx config, /sites-available/example.com:

        server {
            server_name www.example.com;
            return 301 https://example.com$request_uri;
        }
        server {
            listen 443;
            listen 80;
            #listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /var/www/example.com/public_html;
            index index.html index.php index.htm;

            ssl on;
            ssl_certificate /etc/nginx/ssl/cert.pem;
            ssl_certificate_key /etc/nginx/ssl/ssl.key;
            ssl_session_timeout 5m;
            ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
            ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
            ssl_prefer_server_ciphers on;
            ssl_stapling on;
            resolver 8.8.8.8;
            add_header Strict-Transport-Security max-age=63072000;

            # Make site accessible from http://localhost/
            server_name example.com;

            location ~* \.(jpg|jpeg|png|gif|ico|css|js|bmp)$ {
                expires 365d;
                add_header Cache-Control public;
            }

            if ($scheme = http) {
                return 301 https://example.com$request_uri;
            }

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            if ($http_user_agent ~ (musobot|screenshot|AhrefsBot|picsearch|Gender|HostTracker|Java/1.7.0_51|Java)) {
                return 403;
            }

            location /phpmyadmin {
                root /usr/share/;
                index index.php index.html index.htm;
                location ~ ^/phpmyadmin/(.+\.php)$ {
                    try_files $uri =404;
                    root /usr/share/;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include /etc/nginx/fastcgi_params;
                }
                location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
                    root /usr/share/;
                }
            }
            location /phpMyAdmin {
                rewrite ^/* /phpmyadmin last;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }

            # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
            #location /RequestDenied {
            #    proxy_pass http://127.0.0.1:8080;
            #}

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 16k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                location ~ /\.ht {
                    deny all;
                }
            }
        }

    nginx.conf:

        user www-data;
        worker_processes 1;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ## Block spammers and other unwanted visitors ##
            include /etc/nginx/blockips.conf;

            fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m;

            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 100;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log off;
            error_log /var/log/nginx/error.log;

            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 10m;
            ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
            ssl_prefer_server_ciphers on;

            ##
            # File Cache Settings
            ##
            open_file_cache max=5000 inactive=5m;
            open_file_cache_valid 2m;
            open_file_cache_min_uses 1;
            open_file_cache_errors on;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/x-js text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ## Uncomment it if you installed nginx-naxsi
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ## Uncomment it if you installed nginx-passenger
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
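
    A blank page with a 500 status usually means PHP itself failed and error display is off, so the first stop is the logs rather than the nginx config; the paths below are typical Debian/Ubuntu defaults and may differ on your system:

        tail -n 50 /var/log/nginx/error.log
        tail -n 50 /var/log/php5-fpm.log

    Separately, example.com/phpbb/index.php/somedirectory showing the root index.php is expected with this config: location ~ \.php$ only matches URIs that end in .php, so the PATH_INFO form falls through to location / and its /index.php fallback. If phpBB needs PATH_INFO-style URLs, one common adjustment (a sketch) is:

        location ~ \.php(/|$) {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            # ... same fastcgi_* settings as the existing block ...
        }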

  • Can I make ssh tell me which control file it would use for multiplexing?

    - by Ryan Thompson
    I am using the following options in my ~/.ssh/config in order to enable connection multiplexing:

        ControlMaster auto
        ControlPath ~/.ssh/control/master-%r@%h:%p

    However, this has the annoying problem that the first shell to connect to a particular server must be the last to disconnect, because it is the master connection that all the other connections are using. So if you log out of the master, it appears to just hang. To solve this, I would like to wrap ssh with a script that checks if the control master file exists, and if not, starts a master ssh process in the background. Then it would start a slave ssh session. In order to accomplish this, my script would have to determine the path to the control file that ssh would use. This would entail parsing the ssh command line options and config files and implementing the logic for determining the ControlPath. Is there any way to just ask ssh what path it would use, so I can check it?
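
    Newer OpenSSH can answer this directly, and any multiplexing-capable version can at least report on an existing master; a sketch:

        # OpenSSH 6.8+: print the fully resolved client config, including ControlPath
        ssh -G user@host | awk '/^controlpath/ {print $2}'

        # any version with multiplexing: ask ssh whether a master is already running
        ssh -O check user@host

    Since ssh -O check reports success or failure via its exit status, the wrapper may not need to compute the path at all: check, start a background master (ssh -fN -M user@host) if none exists, then open the slave session.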

  • Facebook, Twitter, Yahoo don't work. CDN problem. Akamai?

    - by Toktik
    Some sites don't load normally: they open, but without CSS or images, and with JavaScript errors. Facebook gets stuck on static.ak.fbcdn.net, Twitter on a1.twimg.com, Yahoo on l.yimg.com. In Firefox I just get "Waiting for ..." (any of those hosts). I can access Facebook only over SSL, as https://facebook.com. When I ping those hosts I only get "Request timed out". Update: when I ping static.ak.fbcdn.net it resolves to a749.g.akamai.net, and pinging that server also times out.
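
    One caveat before reading too much into the pings: many Akamai edge nodes drop ICMP, so "Request timed out" from ping alone is normal. More telling, hedged checks are whether DNS hands you a sane edge IP and whether a plain TCP connection on port 80 works:

        nslookup static.ak.fbcdn.net
        telnet a749.g.akamai.net 80

    If the TCP connection hangs too, trying another resolver (e.g. 8.8.8.8) helps tell a routing problem apart from a bad DNS answer.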

  • Get active network interface on Windows

    - by Kevin Walzer
    I'm developing an application that provides a UI to windump, the packet sniffer. Windump has a "-D" parameter that lists all network interfaces it can find, and then you can specify which interface to listen on. However, I'd like to avoid forcing the user to manually configure which interface to listen on. On Unix, I can obtain the right network interface (en0, en1, etc.) via a call to ifconfig and some parsing of the output, but I cannot locate any equivalent Windows API or command that can yield similar information--ipconfig doesn't seem to obtain this data. Can anyone suggest either a Windows command-line tool or an API that can be called via VBScript to obtain this data so that I don't have to present the user with a dialog in my GUI telling them to select the right interface?
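
    Since the question allows VBScript, one route is WMI: Win32_NetworkAdapterConfiguration filtered on IPEnabled = True lists only adapters that are actually configured, which is usually the interface worth sniffing on. This is a sketch; mapping the results onto windump's -D list still has to be done by adapter description or IP address, since WMI knows nothing about windump's device numbering:

        ' list-active-nics.vbs - run with cscript
        Set wmi  = GetObject("winmgmts:\\.\root\cimv2")
        Set nics = wmi.ExecQuery( _
            "SELECT * FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = True")
        For Each nic In nics
            WScript.Echo nic.Description & " : " & Join(nic.IPAddress, ", ")
        Next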

  • Syntax of mac2vlan file in freeradius?

    - by Edik Mkoyan
    Below is the content of the mac2vlan module file in FreeRADIUS. When I uncomment the line 00:01:02:03:04:05,VLAN1, it logs a parse error while including the configuration file /etc/raddb/modules/mac2vlan:

        /etc/raddb/modules/mac2vlan[10]: Parse error after "00:01:02:03:04:05"
        Errors reading /etc/raddb/radiusd.conf

    What is the correct syntax?

        # -*- text -*-
        #
        # $Id$
        # A simple file to map a MAC address to a VLAN.
        #
        # The file should be in the format MAC,VLAN
        # the VLAN name cannot have spaces in it, for example:
        # 00:01:02:03:04:05,VLAN1
        # 03:04:05:06:07:08,VLAN2
        # ...
        passwd mac2vlan {
            filename = ${confdir}/mac2vlan
            format = "*VMPS-Mac:=VMPS-VLAN-Name"
            delimiter = ","
        #}
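
    The comments in the file itself point at the answer: the MAC,VLAN example lines describe the format of the data file named by filename (${confdir}/mac2vlan), not of this module block, so uncommenting them inside the module definition makes the radiusd.conf parser fail on the stray MAC token. Note also that the closing brace must not be commented out. A hedged sketch of the two files:

        # /etc/raddb/modules/mac2vlan -- module definition only
        passwd mac2vlan {
            filename = ${confdir}/mac2vlan
            format = "*VMPS-Mac:=VMPS-VLAN-Name"
            delimiter = ","
        }

        # ${confdir}/mac2vlan -- separate data file, one mapping per line
        00:01:02:03:04:05,VLAN1
        03:04:05:06:07:08,VLAN2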

  • Blocking hotlinking with .htaccess only works for the plain domain; when preceded by www, no block

    - by casualprogrammer
    Having tried all sorts of suggestions popping up from Google, I am at my wit's end. Presently I use a solution created with htaccesstools.com/hotlink-protection/:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://(www\.)?mydomain.tld/.*$ [NC]
        RewriteRule \.(gif|jpg|jpeg|png|js|css)$ - [NC,F,L]

    Checking it out with the altlab.com/htaccess_tutorial.html testing facility (near the bottom of the page) shows no image if mydomain.tld/mypic.jpg is entered, while if prefixed with www (www.mydomain.tld/mypic.jpg) the pic is displayed. Any helpful comments welcome.
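
    Nothing in the quoted rules distinguishes www from the bare domain (the (www\.)? group allows both referers), so the asymmetry more likely comes from where the rules run: if www.mydomain.tld is served by a different virtual host or DocumentRoot, this .htaccess never applies there, and that is worth checking first. While at it, a slightly tightened sketch of the same rules (dots escaped, https referers also allowed):

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?mydomain\.tld(/|$) [NC]
        RewriteRule \.(gif|jpe?g|png|js|css)$ - [NC,F]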

  • Stripping cookies from image files?

    - by iTech
    Hi, I want to achieve cookie-free image serving, as discussed here: http://code.google.com/speed/page-speed/docs/request.html#ServeFromCookielessDomain. I have created a new sub-domain, static.example.com, serving only images, JavaScript and CSS (file-serving restrictions made via a filesmatch.conf file). Please tell me how to make it serve cookie-free images. Thanks
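
    Cookies follow domain scoping rules, so the key point is that the pages themselves should be served from www.example.com rather than the bare example.com (cookies set on the bare domain are also sent to every subdomain, including static.example.com); if that holds, the browser simply has no cookies to attach to asset requests. On the Apache side you can additionally make sure the static host never sets or receives one; a hedged mod_headers sketch:

        <FilesMatch "\.(png|jpe?g|gif|ico|js|css)$">
            Header unset Set-Cookie
            RequestHeader unset Cookie
        </FilesMatch>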

  • Why is the output of ls like this?

    - by dorelal
    I am using Snow Leopard, with bash as the default shell, and this is what I get in my terminal:

        > ls c*
        clock:
        PSD  demo.html  jquery.tzineClock  script.js  styles.css

        clock2:

        clojure-presentations:
        Clojure-1up.pdf             ClojureInTheField-1up.pdf  license.html
        Clojure-4up.pdf             README
        ClojureForRubyists-1up.pdf  keynote

        coffee-script:
        Cakefile  README    bin            examples  index.html  package.json  test
        LICENSE   Rakefile  documentation  extras    lib         src           vendor
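
    This is standard ls behavior rather than anything shell-specific: c* expands to the four directories, and when ls is handed directories as arguments it prints each one as a "name:" heading followed by its contents (clock2: is simply empty). To list just the matching names themselves:

        ls -d c*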

  • mac + parallels and https site test = router restarts

    - by Erik
    OK, I have an interesting and very frustrating problem happening; I'm going to explain it the best I can.

    I work as a graphic designer and web designer on a Mac, with a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an Airport Extreme Base Station that runs my office network. I run OS X 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. That's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side we have started to run into some sort of network problem. The site is https, if that matters at all; I suspect it may. Basically, when I run Parallels for testing I am constantly having to restart the router in order to connect to the test site (it's hosted). The funny thing is I can access the rest of the internet fine, just not this site I'm working on, until I restart the router (it's sort of like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSS Edit (connected to the hosting server via FTP). The real problem is that once it starts, I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling; I cannot get any work done when I have to restart the router every couple of minutes.

    And if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple of miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider and connects his router to an Airport, and he primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC on his network. Once he fires up the PC to do testing he runs into the same issue described above: after a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to be restarted.

    Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

  • How To Troubleshoot Excess Time From Connect to First Byte?

    - by Gaia
    I measured load times for a WordPress 2.9.2 install on Apache 2.2.3, and I was intrigued by the long periods between connect and first byte for the CSS and image files. Load average is 0.0, 0.0, 0.0 and there is 150 MB of free RAM on the VPS. Pingdom results are at http://imagebin.ca/img/6UaiOU.png. How do I gain insight into the possible causes of this problem, and how would I troubleshoot it? Thanks
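
    curl can break a single request into phases, which separates network time from server think-time; a long gap between connect and first byte even for static files usually points at the web server itself (busy Apache children, swapping, slow disk) rather than WordPress. A sketch, with the URL as a placeholder:

        curl -o /dev/null -s \
             -w 'dns %{time_namelookup}  connect %{time_connect}  first-byte %{time_starttransfer}  total %{time_total}\n' \
             http://example.com/wp-content/themes/yourtheme/style.css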

  • sub domains with /etc/hosts and apache for gitorious

    - by QLands
    I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual host, but nothing seems to work. See for example my /etc/hosts file:

        127.0.0.1      localhost
        172.26.17.70   darkstar.ilri.org darkstar
        172.26.17.70   git.darkstar.ilri.org

    My vhosts.conf has the following entries:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        <VirtualHost *:80>
            <Directory /srv/httpd/htdocs>
                Options Indexes FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ServerName darkstar.ilri.org
            DocumentRoot /srv/httpd/htdocs
            ErrorLog /var/log/httpd/error_log
            AddHandler cgi-script .cgi
        </VirtualHost>

        <VirtualHost *:80>
            <Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
                Options FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from All
            </Directory>
            AddHandler cgi-script .cgi
            DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
            ServerName git.darkstar.ilri.org
            ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
            CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </FilesMatch>
            FileETag None
            RewriteEngine On
            RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
            RewriteCond %{SCRIPT_FILENAME} !maintenance.html
            RewriteRule ^.*$ /system/maintenance.html [L]
        </VirtualHost>

    Now, when I go with Firefox to darkstar.ilri.org it shows the default Apache screen: "It works!". But when I go to git.darkstar.ilri.org it waits a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80   is a NameVirtualHost
               default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
               port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
               port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
        Syntax OK

    The funny thing is that if I configure Gitorious on a host called gitrepository, add "127.0.0.1 gitrepository" to /etc/hosts, and point Firefox at gitrepository, Gitorious works. But why not with git.darkstar.ilri.org? Many thanks in advance.
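
    A hedged way to split the problem in half is to take the browser and /etc/hosts out of the loop and hand Apache the Host header directly:

        curl -s -D - -o /dev/null -H 'Host: git.darkstar.ilri.org' http://172.26.17.70/

    If that returns the Gitorious app, the vhost setup is fine and the issue is client-side name resolution (a cached DNS answer for git.darkstar.ilri.org, or a proxy that bypasses /etc/hosts); if it still returns the default page, Apache really is routing the name to the first vhost.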

  • How to take a backup of an online file via email?

    - by Jitendra vyas
    For example, take this file: http://sstatic.net/su/all.css. I want to automatically back up this file to my email every hour, every 5 hours, etc. I need a free and portable solution; I don't have FTP or cPanel access to the server hosting the file, and I use Windows.
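
    With no server access, the job has to run on your own Windows machine; one free, built-in route is a small PowerShell script run hourly by Task Scheduler. This is a sketch: the SMTP host and addresses are placeholders, and your mail provider must accept the submission:

        $url  = 'http://sstatic.net/su/all.css'
        $file = "$env:TEMP\all.css"
        (New-Object Net.WebClient).DownloadFile($url, $file)
        Send-MailMessage -From 'me@example.com' -To 'me@example.com' `
            -Subject "all.css backup $(Get-Date)" -Attachments $file `
            -SmtpServer 'smtp.example.com'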

  • Code optimizer extension for Dreamweaver?

    - by Vercas
    Due to my neat coding style, my pages take up about 30% more space than they need to, both on my server and in the output HTML. Is there any free extension for Dreamweaver to automatically optimize my pages when uploading them? I mean not only HTML, but also PHP, CSS and JS. Actually, just removing unnecessary tabs, spaces and new lines would do the trick. After removing the unnecessary spaces, tabs and new lines from my PHP code, the page loaded three times faster, so this is important.

  • SVN command that returns whether a user has a valid login for a repository?

    - by Zárate
    Hi there, I'm trying to find an SVN command that would return some kind of true/false value depending on whether the user running it has access to a given repository. I'm building a tool for automated deployment, and part of the process is checking out the code from the SVN repository. I'd like to find out if the user running the tool already has a valid login. If there's no valid login, just show a message and exit the tool (handling the SVN login internally is not an option at the moment). Plan B would be parsing the file in svn.simple looking for the repo URL, but I thought I'd ask first. Thanks, Juan
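
    svn itself comes close to this: with --non-interactive it will not prompt, so it fails immediately when no cached credentials work, and the exit status is scriptable. A sketch, with the repository URL as a placeholder:

        if svn info --non-interactive https://svn.example.com/repo >/dev/null 2>&1; then
            echo "valid login cached"
        else
            echo "no valid login"
        fi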

  • Scan HTML for unused assets in folders

    - by Byran
    I have an aging website I'm managing, and I'd like to remove all unused external files (.css, .jpg, .js, etc.) that are currently in various folders all over the site. Is there a tool out there that can help me identify and/or remove these for me?
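
    If no ready-made tool turns up, a rough shell pass can shortlist candidates; treat the result as a to-review list rather than a delete list, since filenames assembled dynamically in JS or PHP will be reported as unused. A sketch, run from the site root:

        find . -type f \( -name '*.css' -o -name '*.js' -o -name '*.jpg' -o -name '*.png' -o -name '*.gif' \) |
        while read -r f; do
            name=$(basename "$f")
            grep -rqF "$name" --include='*.html' --include='*.htm' --include='*.php' --include='*.css' . \
                || echo "possibly unused: $f"
        done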

  • Addons which actually make Firefox run faster?

    - by Zombies
    I would like to know of add-ons which actually enhance Firefox's performance, whether intentionally or as a side effect. I find that Firefox tends to have major performance issues with certain websites: those with a fair amount of JavaScript and CSS, and probably a large DOM tree which may even be growing dynamically through JavaScript. The worst offenders are sites with heavy, excessive or non-performant JavaScript, heavy Facebook integration, and too many advertisements.

  • Nginx Reverse Proxy Node.js and Wordpress + Static Files Issue

    - by joemccann
    I have had quite a time trying to get nginx to serve static assets from my WordPress blog. Have a look at the config (see https://gist.github.com/1130332 for the entire thing) and let me know if you can help:

        server {
            listen 80;
            server_name subprint.com;
            access_log /var/www/subprint/logs/access.log;
            error_log /var/www/subprint/logs/error.log;
            root /var/www/subprint/server/public;

            # express serves static resources for subprint.com out of here
            location / {
                proxy_pass http://127.0.0.1:8124;
                root /var/www/subprint/server;
                access_log on;
            }

            # serve static assets
            location ~* ^(?!\/).+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
                expires max;
                access_log off;
            }

            # the route for the wordpress blog
            # unfortunately the static assets (css, img, etc.) are not being pathed/served properly
            location /blog {
                root /var/www/localhost/public;
                index index.php;
                access_log /var/www/localhost/logs/access.log;
                error_log /var/www/localhost/logs/error.log;
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php?q=$1 last;
                    break;
                }
                if (!-f $request_filename) {
                    rewrite /blog$ /blog/index.php last;
                    break;
                }
            }

            # actually serves the wordpress and subsequently phpmyadmin
            location ~* (?!\/blog).+\.php$ {
                fastcgi_pass localhost:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/localhost/public$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include /usr/local/nginx/conf/fastcgi_params;
            }

            # This works fine, but ONLY with a symlink inside the
            # /var/www/localhost/public directory pointing to /usr/share/phpmyadmin
            location /phpmyadmin {
                index index.php;
                access_log /var/www/phpmyadmin/logs/access.log;
                error_log /var/www/phpmyadmin/logs/error.log;
                alias /usr/share/phpmyadmin/;
                if (!-f $request_filename) {
                    rewrite /phpmyadmin$ /phpmyadmin/index.php permanent;
                    break;
                }
            }

            # opt-in to the future
            add_header "X-UA-Compatible" "IE=Edge,chrome=1";
        }
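
    Two observations, offered as a sketch rather than a definitive fix. First, a regex location anchored with ^(?!\/) can never match, because every request URI nginx sees begins with "/"; the "serve static assets" block is therefore dead, and everything falls through to the proxy or the /blog prefix. Second, inside location /blog the root directive means nginx maps /blog/foo.css to /var/www/localhost/public/blog/foo.css; if the WordPress files actually live directly in /var/www/localhost/public, every asset lookup misses (which would fit the symptoms). A quick check, using the paths from the config above:

        # where nginx is actually looking for a blog asset with "root":
        ls /var/www/localhost/public/blog/wp-content/   # should exist as-is;
        ls /var/www/localhost/public/wp-content/        # if the files are here instead, consider "alias"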

  • How to Change Bookmarks Toolbar Height

    - by IneedHelp
    I have tried using Multirow Bookmarks Toolbar Plus and Roomy Bookmarks Toolbar Firefox add-ons, but the problem is that they are constantly stressing my CPU (even when idling). It probably has something to do with version 11 of the browser. Anyway, is there some clean way to increase the height of the bookmarks toolbar? I've read a lot of articles that refer to userChrome.css and I have tried a dozen solutions, but none worked because they were outdated. Please help me with this.
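
    If the add-ons themselves are the CPU problem, a few lines of userChrome.css (placed in the chrome folder of your Firefox profile) can change the bar's size with no extension at all. A sketch: #PersonalToolbar is the bookmarks toolbar element, the pixel value is yours to adjust, and making items wrap into multiple rows may need further rules beyond the height itself:

        #PersonalToolbar {
            min-height: 40px !important;
            max-height: 40px !important;
        }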
