Search Results

Search found 3021 results on 121 pages for 'min hong tan'.

Page 24/121 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • APEX at DOAG2013

    - by Carsten Czarski
    This year the Oracle community will once again meet at DOAG2013, the largest Oracle conference in the German-speaking region. As in previous years, APEX gets its own stream with more than 20 talks. Well-known speakers such as Patrick Wolf, Niels de Bruijn, Peter Raganitsch and others will be there, so DOAG2013 is the place to meet, swap news and discuss. The APEX expert panel, very well received in recent years, returns this year as well: on 21 November at 11:00, the development team will answer your questions about Application Express in the Hong Kong room. Please submit your questions to us in advance on this website. See you at DOAG2013...?

    Read the article

  • Why am I getting messages from cloudfront in my error log?

    - by JK01
    I frequently see messages like this in my website's error log: "Script error.". URL: https://e3m4drct5m1ays.cloudfront.net/items/loaders/loader_21.js?pid=21&systemid=13504281c5a501837196c23300f84e66&aoi=1327214632&zoneid=16620&cid=HK&rid=Hong%20Kong%20(general)&ccid=Kowloon&dma=0. Line number: 0 Error name: Stack: Now I don't actually know what CloudFront is or what it does, and I do not refer to this script on my site. So why would I be getting a JS error logged as if it were a script running on my own site? This is using ELMAH logging.

    Read the article

  • Do you know some Information about train travel in China?

    - by user79989
    My friend and I are planning a trip to China next year. Train travel in China is an interesting experience, with the world's fastest train (Guangzhou to Wuhan), the world's highest train (in Tibet) and the world's oldest working train (from Tongliao to Baotou in the north of China). That said, travelling in China by train is not always easy. You can do a Hong Kong to Beijing train trip, and buy those tickets online, but to be honest with you, most of that journey is pretty boring; the best part of it is going through northern Guangdong and southern Hunan provinces. ChinaTour.com is a reliable China travel agency based in the USA that has specialized in inbound China travel for decades.

    Read the article

  • What is cloudfront.net and what does it do?

    - by JK01
    I frequently see messages like this in my website's error log: "Script error.". URL: https://e3m4drct5m1ays.cloudfront.net/items/loaders/loader_21.js?pid=21&systemid=13504281c5a501837196c23300f84e66&aoi=1327214632&zoneid=16620&cid=HK&rid=Hong%20Kong%20(general)&ccid=Kowloon&dma=0. Line number: 0 Error name: Stack: Now I don't actually know what CloudFront is or what it does, and I do not refer to this script on my site. So why would I be getting a JS error logged as if it were a script running on my own site? This is using ELMAH logging.

    Read the article

  • Google no longer censors its results in China: how will the Beijing government react?

    Update of 22.03.2010, by Katleen. Google no longer censors its results in China: how will the Beijing government react? It has happened - Google has taken the step. As we announced on Friday, Google officially took a position this Monday: the company has stopped censoring its search results in China. From now on, Chinese internet users who go to Google.cn are automatically redirected to Google.com.hk, the Hong Kong site, as chief legal officer David Drummond explained this morning: "Today we stopped censoring our search services - Google Search, Google News and Google Images - on Google.cn. Users visiting Google.cn...

    Read the article

  • Apache OpenOffice in the Cloud: first HTML5 prototype, ahead of a smartphone version?

    OpenOffice in the Cloud: Apache demos a first HTML5 prototype. Ahead of a smartphone version? So says the Apache Foundation. OpenOffice was the star of the second day of ApacheCon Europe, and it is hard to argue with that, since two of its members - Jian Hong Cheng and Fan Zheng - demonstrated the much-awaited first "Cloud" version of the open-source office suite. [IMG]http://ftp-developpez.com/gordon-fowler/OpenOffice%20Cloud/OpenOffice%20Cloud%201small.jpg[/IMG...

    Read the article

  • Nginx location regex is not matching

    - by shtuff.it
    The following has been working to cache css and js for me: location ~ "^(.*)\.(min.)?(css|js)$" { expires max; } results: $ curl -I http://mysite.com/test.css HTTP/1.1 200 OK Server: nginx Date: Thu, 16 Jan 2014 18:55:28 GMT Content-Type: text/css Content-Length: 19578 Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT Connection: keep-alive Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 X-Backend: stage01 Accept-Ranges: bytes I am trying to get versioning setup for my js / css using a 10 digit unix timestamp and am having issues getting a regex match with the following valid a regex. location ~ "^(.*)([\d]{10})\.(min\.)?(css|js)$" { expires max; } results: $ curl -I http://mysite.com/test_1234567890.css HTTP/1.1 200 OK Server: nginx Date: Thu, 16 Jan 2014 19:05:03 GMT Content-Type: text/css Content-Length: 19578 Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT Connection: keep-alive X-Backend: stage01 Accept-Ranges: bytes

    Read the article
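
    For the nginx question above, one way to narrow things down is to check whether the pattern itself is at fault or whether another location block is winning. A minimal sketch, assuming GNU grep is available and that test_1234567890.css is just the example path from the question ([\d] is written [0-9] for grep):

        # The pattern from the question does match the versioned path, which suggests
        # the problem lies in how nginx selects locations (ordering of regex locations,
        # the server block the rule lives in, or a missed reload) rather than in the regex.
        echo "/test_1234567890.css" | grep -E '^(.*)([0-9]{10})\.(min\.)?(css|js)$' \
          && echo "pattern matches"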

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin i'm currently stuck on is for monitoring bandwidth usage, and is as follows: /etc/munin/plugins/bandwidth2 #!/bin/sh if [ "$1" = "config" ]; then echo 'graph_title Bandwidth Usage 2' echo 'graph_vlabel Bandwidth' echo 'graph_scale no' echo 'graph_category network' echo 'graph_info Bandwidth usage.' echo 'used.label Used' echo 'used.info Bandwidth used so far this month.' echo 'used.type DERIVE' echo 'used.min 0' echo 'remain.label Remaining' echo 'remain.info Bandwidth remaining this month.' echo 'remain.type DERIVE' echo 'remain.min 0' exit 0 fi cat /var/log/zen.log The contents of /var/log/zen.log are: used.value 61.3251953125 remain.value 20.0146484375 And the resulting database is: <!-- Round Robin Database Dump --><rrd> <version> 0003 </version> <step> 300 </step> <!-- Seconds --> <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST --> <ds> <name> 42 </name> <type> DERIVE </type> <minimal_heartbeat> 600 </minimal_heartbeat> <min> 0.0000000000e+00 </min> <max> NaN </max> <!-- PDP Status --> <last_ds> 61.3251953125 </last_ds> <value> NaN </value> <unknown_sec> 5 </unknown_sec> </ds> <!-- Round Robin Archives --> <rra> <cf> AVERAGE </cf> <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds --> <params> <xff> 5.0000000000e-01 </xff> </params> <cdp_prep> <ds> <primary_value> NaN </primary_value> <secondary_value> NaN </secondary_value> <value> NaN </value> <unknown_datapoints> 0 </unknown_datapoints> </ds> </cdp_prep> <database> <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row> <snip> The value for last_ds is correct, it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log I've been all over the (sparse) docs for munin plugins, and can't find my mistake. Modifying an existing plugin didn't work for me either.

    Read the article
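
    For the munin question above, a likely culprit (stated as an assumption): RRDtool's COUNTER and DERIVE data source types only accept integer values, while GAUGE accepts floats, so readings like 61.3251953125 end up as NaN - which matches the dump showing last_ds populated but value NaN. A minimal sketch of a fetch section that emits whole numbers instead:

        #!/bin/sh
        # Hypothetical fix: round the float readings from zen.log to integers
        # before handing them to munin, since DERIVE/COUNTER reject fractions.
        awk '/^(used|remain)\.value/ { printf "%s %d\n", $1, $2 + 0.5 }' /var/log/zen.log

    Note also that DERIVE graphs a per-second rate of change, so the plugin only plots something meaningful if zen.log holds an ever-growing counter rather than a point-in-time reading.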

  • AWS Autoscaling issue with existing nodes in ELB

    - by Ram Prasad
    I already have an ELB set up called MyLoadBalancer, with 2 nodes already running behind it and health checks (which hit a URL on each node to see if it is up). I created an autoscaling group (min 2, max 10), associated a launch config, mylaunchconfig, that provisions a node from an AMI, and created a trigger that checks for average connections between a min of 100 and a max of 500 (it watches the load balancer and is supposed to increase the node count by 1 when average connections hit the upper bound and decrease it by 1 below the lower bound): as-create-or-update-trigger MyTrigger --auto-scaling-group MyAutoScalingGroup --namespace "AWS/ELB" --measure RequestCount --statistic Average --dimensions "LoadBalancerName=MyLoadBalancer" --period 60 --lower-threshold 500 --upper-threshold 800 --lower-breach-increment=-1 --upper-breach-increment=1 --breach-duration 600 Now the issue is that as soon as I put in the trigger, it starts 2 nodes - but there are already two nodes in the LB. So why is it provisioning 2 more nodes when the nodes are already there? Is it because it is not recognizing the existing 2 nodes? If so, how do I add the existing nodes to the autoscaling group?

    Read the article

  • Migrate apache->tomcat to nginx->tomcat

    - by Slezhuk
    Right now we are using apache2 as the frontend and Tomcat as the backend, with mod_proxy_balancer and AJP. We also use sticky sessions keyed on the JSESSIONID cookie: <Proxy balancer://backend> BalancerMember ajp://127.0.0.1:8008 min=10 max=100 ping=5 connectiontimeout=40 ttl=60 retry=20 route=node-1 BalancerMember ajp://127.0.0.1:8009 min=10 max=100 ping=5 connectiontimeout=40 ttl=60 retry=20 route=node-2 ProxySet lbmethod=byrequests timeout=30 ProxySet stickysession=JSESSIONID|jsessionid nofailover=Off </Proxy> and the jvmRoute parameter to append the route to the JSESSIONID cookie: <Engine name="Catalina" defaultHost="localhost" jvmRoute="node-1"> So far I have not found a way to do this in nginx. Is there any solution for this? We are not using session replication, so sending successive requests to the same backend is crucial.

    Read the article

  • Bash scripting - Munin plugin doesn't work

    - by FTV Admin
    i have written a munin-plugin to count the http-statuscodes of lighttpd. The script: #!/bin/bash ###################################### # Munin-Script: Lighttpd-Statuscodes # ###################################### ##Config # path to lighttpd access.log LIGHTTPD_ACCESS_LOG_PATH="/var/log/lighttpd/access.log" # rows to parse in logfile (higher value incrase time to run plugin. if value to low you may get bad counting) LOG_ROWS="200000" # #munin case $1 in autoconf) # check config AVAILABLE=`ls $LIGHTTPD_ACCESS_LOG_PATH` if [ "$AVAILABLE" = "$LIGHTTPD_ACCESS_LOG_PATH" ]; then echo "yes" else echo "No: "$AVAILABLE echo "Please check your config!" fi exit 0;; config) # graph config cat <<'EOM' graph_title Lighhtpd Statuscodes graph_vlabel http-statuscodes / min graph_category lighttpd 1xx.label 1xx 2xx.label 2xx 3xx.label 3xx 4xx.label 4xx 5xx.label 5xx EOM exit 0;; esac ## calculate AVAILABLE=`ls $LIGHTTPD_ACCESS_LOG_PATH` if [ "$AVAILABLE" = "$LIGHTTPD_ACCESS_LOG_PATH" ]; then TIME_NOW=`date` CODE_1xx="0" CODE_2xx="0" CODE_3xx="0" CODE_4xx="0" CODE_5xx="0" for i in 1 2 3 4 5; do TIME5=`date +%d/%b/%Y:%k:%M --date "$TIME_NOW -"$i"min"` CODE_1xx=$(( $CODE_1xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 1' | grep -c " "` )) CODE_2xx=$(( $CODE_2xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 2' | grep -c " "` )) CODE_3xx=$(( $CODE_3xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 3' | grep -c " "` )) CODE_4xx=$(( $CODE_4xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 4' | grep -c " "` )) CODE_5xx=$(( $CODE_5xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 5' | grep -c " "` )) done CODE_1xx=$(( $CODE_1xx / 5 )) CODE_2xx=$(( $CODE_2xx / 5 )) CODE_3xx=$(( $CODE_3xx / 5 )) CODE_4xx=$(( $CODE_4xx / 5 )) CODE_5xx=$(( $CODE_5xx / 5 )) echo "1xx.value "$CODE_1xx echo "2xx.value "$CODE_2xx echo "3xx.value "$CODE_3xx echo "4xx.value "$CODE_4xx echo "5xx.value "$CODE_5xx else echo "1xx.value U" echo "2xx.value U" echo "3xx.value U" echo "4xx.value U" echo "5xx.value U" fi If i run the script on local machine it runs perfectly: root@server1 /etc/munin/plugins # ll lrwxrwxrwx 1 root root 45 2011-12-19 15:23 lighttpd_statuscodes -> /usr/share/munin/plugins/lighttpd_statuscodes* root@server1 /etc/munin/plugins # ./lighttpd_statuscodes autoconf yes root@server1 /etc/munin/plugins # ./lighttpd_statuscodes config graph_title Lighhtpd Statuscodes graph_vlabel http-statuscodes / min graph_category lighttpd 1xx.label 1xx 2xx.label 2xx 3xx.label 3xx 4xx.label 4xx 5xx.label 5xx root@server1 /etc/munin/plugins #./lighttpd_statuscodes 1xx.value 0 2xx.value 5834 3xx.value 1892 4xx.value 0 5xx.value 0 But Munin shows no graph: http://s1.directupload.net/images/111219/3psgq3vb.jpg I have tested the Plugin from munin-server via telnet: root@munin-server /etc/munin/plugins/ # telnet 123.123.123.123 4949 Trying 123.123.123.123... Connected to 123.123.123.123. Escape character is '^]'. # munin node at server1.cluster1 fetch lighttpd_statuscodes 1xx.value U 2xx.value U 3xx.value U 4xx.value U 5xx.value U . Connection closed by foreign host. You can see in the script that value = U only printed, when the script can't check the lighttpd's access.log. But why can't script do it, when running via munin, and when running on local machine all is ok? Is there a bug in my bash-script? I have no Idea. Thanks for helping!

    Read the article
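
    For the lighttpd plugin above, the "U" values are printed only when the access.log test fails, so a plausible explanation (an assumption, not a diagnosis) is that munin-node runs the plugin as an unprivileged user that cannot read /var/log/lighttpd/access.log (or list its directory), whereas the manual runs were done as root. A quick way to reproduce the node's view:

        # Run the plugin as the user munin-node would use (often "munin" or "nobody")
        sudo -u munin /etc/munin/plugins/lighttpd_statuscodes autoconf
        sudo -u munin /etc/munin/plugins/lighttpd_statuscodes
        # If these print "No: ..." or U values, grant read access to the log, e.g.
        #   chmod o+r /var/log/lighttpd/access.log
        # or declare "user root" for this plugin in /etc/munin/plugin-conf.d/munin-node.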

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using Rackconnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss. Also there are no errors on any interfaces on our cloud or dedicated servers or on the managed firewall from Rackspace. When Redis timesout, there is nothing logged within redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout. Network topology: RHEL 6.1 cloud hosts <--> Alert logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts (web nodes) (two way NAT) (db hosts running redis) Ping from db host to web host: 64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms 64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms 64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms --- web1.xxxxxx.com ping statistics --- 1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms Ping from web host to db host: 64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms 64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms 64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms --- data1.xxxxxx.com ping statistics --- 1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms Redis config: daemonize yes pidfile /var/run/redis/6379/redis_6379.pid port 6379 timeout 0 loglevel debug logfile /var/lib/redis/log syslog-enabled yes syslog-ident redis-6379 syslog-facility local0 databases 16 save 900 1 save 300 10 save 60 10000 rdbcompression yes dbfilename dump-6379.rdb dir /var/lib/redis maxclients 10000 maxmemory-policy volatile-lru maxmemory-samples 3 appendfilename appendonly-6379.aof appendfsync everysec no-appendfsync-on-rewrite no auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb slowlog-log-slower-than 10000 slowlog-max-len 1024 vm-enabled no vm-swap-file /tmp/redis.swap vm-max-memory 0 vm-page-size 32 vm-pages 134217728 vm-max-threads 4 hash-max-zipmap-entries 512 hash-max-zipmap-value 64 list-max-ziplist-entries 512 list-max-ziplist-value 64 set-max-intset-entries 512 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 activerehashing yes Redis-cli info: redis-cli info redis_version:2.4.16 redis_git_sha1:00000000 redis_git_dirty:0 arch_bits:64 multiplexing_api:epoll gcc_version:4.4.6 process_id:4174 uptime_in_seconds:79346 uptime_in_days:0 lru_clock:1064644 used_cpu_sys:13.08 used_cpu_user:19.81 used_cpu_sys_children:1.56 used_cpu_user_children:7.69 connected_clients:167 connected_slaves:0 client_longest_output_list:0 client_biggest_input_buf:0 blocked_clients:6 used_memory:15060312 used_memory_human:14.36M used_memory_rss:22061056 used_memory_peak:15265928 used_memory_peak_human:14.56M mem_fragmentation_ratio:1.46 mem_allocator:jemalloc-3.0.0 loading:0 aof_enabled:0 changes_since_last_save:166 bgsave_in_progress:0 last_save_time:1352823542 bgrewriteaof_in_progress:0 total_connections_received:286 total_commands_processed:507254 expired_keys:0 evicted_keys:0 keyspace_hits:1509 keyspace_misses:65167 pubsub_channels:0 pubsub_patterns:0 latest_fork_usec:690 vm_enabled:0 role:master db0:keys=6,expires=0 edit 1: add redis-cli info output

    Read the article
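
    For the Redis timeouts above: the server config has timeout 0, so Redis is not closing idle clients itself; the error is more plausibly the client-side timeout in the Ruby redis gem (typically a few seconds by default) firing when an individual command stalls. A hedged diagnostic sketch, run from a web node against the db host (hostname is a placeholder):

        redis-cli -h <db-host> --latency          # sustained round-trip latency over the NAT path
        redis-cli -h <db-host> slowlog get 10     # commands slower than slowlog-log-slower-than (10 ms)
        redis-cli -h <db-host> info | grep -E 'latest_fork_usec|bgsave_in_progress|blocked_clients'
        # blocked_clients:6 in the INFO above means some clients sit in blocking calls
        # (BLPOP/BRPOP); a worker blocking on the same connection it uses for other
        # commands can also surface as a "timeout" in the application.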

  • The ping response time doesn't reflect the real network response time

    - by yangchenyun
    I encountered a weird problem that the response time returned by ping is almost fixed at 98ms. Either I ping the gateway, or I ping a local host or a internet host. The response time is always around 98ms although the actual delay is obvious. However, the reverse ping (from a local machine to this host) works properly. The following is my route table and the result: route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 100 0 0 eth1 60.194.136.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth1 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 # ping the gateway ping 192.168.1.1 PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=98.7 ms 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=97.0 ms 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=96.0 ms 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=94.9 ms 64 bytes from 192.168.1.1: icmp_req=5 ttl=64 time=94.0 ms ^C --- 192.168.1.1 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4004ms rtt min/avg/max/mdev = 94.030/96.149/98.744/1.673 ms #ping a local machine ping 192.168.1.88 PING 192.168.1.88 (192.168.1.88) 56(84) bytes of data. 64 bytes from 192.168.1.88: icmp_req=1 ttl=64 time=98.7 ms 64 bytes from 192.168.1.88: icmp_req=2 ttl=64 time=96.9 ms 64 bytes from 192.168.1.88: icmp_req=3 ttl=64 time=96.0 ms 64 bytes from 192.168.1.88: icmp_req=4 ttl=64 time=95.0 ms ^C --- 192.168.1.88 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 95.003/96.696/98.786/1.428 ms #ping a internet host ping google.com PING google.com (74.125.128.139) 56(84) bytes of data. 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=1 ttl=42 time=99.8 ms 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=2 ttl=42 time=99.9 ms 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=3 ttl=42 time=99.9 ms 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=4 ttl=42 time=99.9 ms ^C64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=5 ttl=42 time=99.9 ms --- google.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 32799ms rtt min/avg/max/mdev = 99.862/99.925/99.944/0.284 ms I am running iperf to test the bandwidth, the rate is quite low for a LAN connection. iperf -c 192.168.1.87 -t 50 -i 10 -f M ------------------------------------------------------------ Client connecting to 192.168.1.87, TCP port 5001 TCP window size: 0.06 MByte (default) ------------------------------------------------------------ [ 4] local 192.168.1.139 port 54697 connected with 192.168.1.87 port 5001 [ ID] Interval Transfer Bandwidth [ 4] 0.0-10.0 sec 6.12 MBytes 0.61 MBytes/sec [ 4] 10.0-20.0 sec 6.38 MBytes 0.64 MBytes/sec [ 4] 20.0-30.0 sec 6.38 MBytes 0.64 MBytes/sec [ 4] 30.0-40.0 sec 6.25 MBytes 0.62 MBytes/sec [ 4] 40.0-50.0 sec 6.38 MBytes 0.64 MBytes/sec [ 4] 0.0-50.1 sec 31.6 MBytes 0.63 MBytes/sec

    Read the article
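
    For the fixed ~98 ms ping above, a hedged guess: a round-trip time pinned just under 100 ms regardless of target (gateway, LAN host, internet) while the reverse direction is fine is the classic signature of Wi-Fi power saving on the pinging host, where incoming replies wait for the next wake-up interval. If eth1 is in fact a wireless interface, this is cheap to test:

        iwconfig eth1                 # look for "Power Management:on" in the output
        sudo iwconfig eth1 power off  # disable it temporarily, then repeat the pings and iperf

    If eth1 is wired, the same reasoning still points at something on the pinging host delaying receives (NIC power management or interrupt coalescing) rather than at the network itself.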

  • What is the nameserver in the SOA used for?

    - by John Lee
    Hey, can you tell me what the nameserver in the SOA record is for?

        name ttl class rr name-server email-addr (sn ref ret ex min)
        example.com. IN SOA ns.example.com. (this nameserver) hostmaster.example.com. (
                     2003080800 ; sn  = serial number
                     172800     ; ref = refresh = 2d
                     900        ; ret = update retry = 15m
                     1209600    ; ex  = expiry = 2w
                     3600       ; min = minimum = 1h
                     )
        ; the following are also valid using @ and blank
        @ IN SOA ns.example.com. hostmaster.example.com. (
          IN SOA ns.example.com. hostmaster.example.com. (

    So if I were to add 5 nameservers and put the first nameserver in the SOA, and that server was not working, would the user go to the next nameserver?

    Read the article
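
    For the SOA question above: the name in that field (the MNAME, ns.example.com here) identifies the primary master for the zone, which is used, for example, as the default target for dynamic DNS updates and, together with the serial/refresh/retry/expire timers, by secondaries deciding when to pull a fresh copy. Ordinary resolution does not consult it at all; resolvers pick from the NS records and simply try another listed nameserver if one does not answer, so a dead server in the SOA does not stop lookups. Two quick ways to look at both sets of data:

        dig +multiline SOA example.com   # shows mname, rname, serial, refresh, retry, expire, minimum
        dig NS example.com               # the servers clients actually query and fail over between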

  • Bypass cache for mobile user agents, VARNISH+NGINX+W3CACHE

    - by Mike McGhee
    Right now I'm running WordPress with W3 Total Cache on nginx, with a Varnish front end. I'm trying to use the WPtouch Pro plugin for WordPress to display mobile sites, but it is not working: it still shows the desktop theme. I've put the mobile user agents in the rejected user agents box in W3 Total Cache. Here is the nginx config W3 Total Cache spat out:

        # BEGIN W3TC Page Cache cache
        location ~ /wp-content/w3tc/pgcache.*html$ { expires modified 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; add_header Vary "Accept-Encoding, Cookie"; }
        location ~ /wp-content/w3tc/pgcache.*gzip$ { gzip off; types {} default_type text/html; expires modified 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; add_header Vary "Accept-Encoding, Cookie"; add_header Content-Encoding gzip; }
        # END W3TC Page Cache cache
        # BEGIN W3TC Browser Cache
        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ { expires 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; }
        location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; }
        # END W3TC Browser Cache
        # BEGIN W3TC Minify core
        rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
        rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
        # END W3TC Minify core
        # BEGIN W3TC Page Cache core
        rewrite ^(.*\/)?w3tc_rewrite_test$ $1?w3tc_rewrite_test=1 last;
        set $w3tc_rewrite 1;
        if ($request_method = POST) { set $w3tc_rewrite 0; }
        if ($query_string != "") { set $w3tc_rewrite 0; }
        if ($http_host != "mysite.com") { set $w3tc_rewrite 0; }
        set $w3tc_rewrite2 1;
        if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
        if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
        if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
        set $w3tc_rewrite3 1;
        if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|\/feed\/|wp-.*\.php|index\.php)") { set $w3tc_rewrite3 0; }
        if ($request_uri ~* "(wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $w3tc_rewrite3 1; }
        if ($w3tc_rewrite3 != 1) { set $w3tc_rewrite 0; }
        if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
        if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4|iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry9500|blackberry9520|blackberry9530|blackberry9550|blackberry\ 9800|blackberry\ 9780|webos|s8000|bada)") { set $w3tc_rewrite 0; }
        set $w3tc_ua "";
        if ($http_user_agent ~* "(acer\ s100|android|archos5|blackberry9500|blackberry9530|blackberry9550|blackberry\ 9800|cupcake|docomo\ ht\-03a|dream|htc\ hero|htc\ magic|htc_dream|htc_magic|incognito|ipad|iphone|ipod|kindle|lg\-gw620|liquid\ build|maemo|mot\-mb200|mot\-mb300|nexus\ one|opera\ mini|samsung\-s8000|series60.*webkit|series60/5\.0|sonyericssone10|sonyericssonu20|sonyericssonx10|t\-mobile\ mytouch\ 3g|t\-mobile\ opal|tattoo|webmate|webos)") { set $w3tc_ua _high; }
        set $w3tc_ref "";
        set $w3tc_ssl "";
        set $w3tc_enc "";
        if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
        set $w3tc_ext "";
        if (-f "$document_root/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") { set $w3tc_ext .html; }
        if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
        if ($w3tc_rewrite = 1) { rewrite .* "/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last; }
        # END W3TC Page Cache core

    And here is what I have in my Varnish VCL:

        sub vcl_recv {
            # Add a unique header containing the client address
            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;
            # Device detection
            set req.http.X-Device = "desktop";
            if ( req.http.User-Agent ~ "iP(hone|od|ad)" || req.http.User-Agent ~ "Android" ) {
                set req.http.X-Device = "smart";
            } elseif ( req.http.User-Agent ~ "(SymbianOS|BlackBerry|SonyEricsson|Nokia|SAMSUNG|^LG)" ) {
                set req.http.X-Device = "cell";
            }

    Any help is greatly appreciated; I've been banging my head against this for 2 days.

    Read the article
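
    For the mobile-bypass question above: the VCL sets an X-Device header in vcl_recv, but unless vcl_hash (or a Vary header from the backend) includes that header, desktop and mobile requests share one cached object, so phones keep getting whatever was cached first. That is an assumption about the rest of the VCL, which is not shown; a quick outside check, with mysite.com standing in for the real host:

        # Request the front page with a phone user agent and see which layer answers
        curl -sI -A "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26" \
             http://mysite.com/ | grep -iE '^(age|vary|x-powered-by|x-device|x-cache)'
        # A non-zero Age with no device-specific Vary/hash means the cache is serving the
        # shared desktop object; W3TC's rejected-user-agent list never gets a chance to run.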

  • How to make a PHP crontab silent

    - by BandonRandon
    I set up a crontab in cPanel to run every minute. It's working great, but I don't want an e-mail every minute. I have a second cron job that runs every day, and I do want the output of that one. Is there a way to tell the crontab to be silent, or to e-mail only on error? I have: * * * * * php /home/public_html/folder/file.php 2>&1 I added the last bit, 2>&1, because I thought it would make it silent. From the cPanel docs: You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job you can redirect the command's output to /dev/null like this: mycommand > /dev/null 2>&1

    Read the article
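
    For the crontab above: 2>&1 on its own only merges stderr into stdout, and cron mails whatever lands on stdout, so the mail keeps coming. Redirect stdout to /dev/null first. A sketch of both variants, reusing the path from the question:

        # completely silent
        * * * * * php /home/public_html/folder/file.php > /dev/null 2>&1
        # silent on success, but still mails anything the script prints to stderr
        * * * * * php /home/public_html/folder/file.php > /dev/null

    Cron also honours a MAILTO="" line at the top of the crontab if you want to mute everything it runs.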

  • Overwrite SOA expiry in a bind9 slave name server.

    - by Joachim Breitner
    I run a slave name server for a domain that I do not have full control over (i.e. changing the SOA is not possible). The SOA specifies an expiry time of one week. For various reasons, I'd like to override that value on my particular slave server with something larger. Is there a way to do that? N.B.: I know that for the refresh and retry fields, bind9 provides the options min-refresh-time, max-refresh-time, min-retry-time and max-retry-time to overrule the SOA, as mentioned in the documentation. For some reason this just does not include expiry.

    Read the article

  • Haproxy Slow Reload in DB Mode

    - by com
    Recently I started using a great tool for load balancing - HAProxy. There is only one disturbing thing that I cannot figure out how to deal with. We use haproxy for load balancing MySQL traffic. When there is a lot of traffic and many connections it takes ages for haproxy to reload (~30 min); with less traffic it reloads within 1 min. I do the reload with: service haproxy reload Of course, if I need to make an urgent change to the configuration I expect haproxy to reload very fast. Killing the haproxy instances that are waiting for clients to disconnect drops the MySQL connections. It looks like I made a mistake in the settings of haproxy or of the application. If you know how to solve this, please help me. Thanks!

    Read the article
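
    For the slow haproxy reload above: on most distributions "service haproxy reload" starts a new process with -sf, which tells the old process to keep serving its existing connections and exit only when they close; with long-lived MySQL connections that drain can easily take the ~30 minutes described. A sketch of the soft and hard variants (paths are the usual defaults and may differ on your system):

        # graceful: old process lingers until every existing connection closes
        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
        # hard: old process closes its connections and exits immediately
        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)

    So the choice is between waiting for the drain and cutting the surviving MySQL sessions; newer haproxy releases also let you cap how long the old process may linger, which is a middle ground.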

  • Cutting up videos (excerpting) on Mac OS X -- iMovie produces super-large files

    - by markvgti
    I need to cut out parts of a video (plus the associated audio, of course) to make a short clip: for example, take 2 minutes from one location, 3 minutes from another part of the video, 30 seconds from a third location, and join it all together to form one single clip. The format of the input video is mp4 (H.264 encoding, AFAICR). I don't need very sophisticated merges or transitions from one part to the next, or sophisticated on-screen banners (text), but some ability to do so would be a plus. I've done this with iMovie in the past, but where the original file was under 5MB/min of play time, the chopped-up version was over 11MB/min of play time, which seems really bad to me. Is there a better/different way of doing this on OS X? Looking for free (gratis) solutions. OS: OS X 10.9.3

    Read the article
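
    One free route for the question above is ffmpeg (installable on OS X via Homebrew): cut each piece without re-encoding, then join them with the concat demuxer. A sketch with made-up timestamps and filenames; because -c copy reuses the original H.264/AAC streams, the output stays close to the source's MB-per-minute, at the cost of cuts snapping to keyframes and no transitions or titles:

        ffmpeg -ss 00:02:00 -t 00:02:00 -i input.mp4 -c copy part1.mp4
        ffmpeg -ss 00:14:30 -t 00:03:00 -i input.mp4 -c copy part2.mp4
        ffmpeg -ss 00:21:00 -t 00:00:30 -i input.mp4 -c copy part3.mp4
        printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
        ffmpeg -f concat -i list.txt -c copy joined.mp4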

  • Boxee on Dell 32in TV causes headaches, how to troubleshoot?

    - by brown145
    I have revamped an old PC, installing 4GB of RAM and a 256MB video card, so that I could run Boxee on a Dell W3200 series TV that came with my apartment. It is connected via VGA; the resolution is 1360x768, 24-bit color, 60Hz refresh rate. Unfortunately, every time I use it I end up with a headache after less than 30 min. I have had my eyes checked recently, and I am able to play Xbox (connected via HDMI) on the same TV without problems. What are the potential causes? My system seems to meet or exceed the minimum requirements for Boxee: XP, AMD Athlon 64 X2 dual core, 4GB RAM. Could it be the VGA cable? What is the most cost-effective way to troubleshoot?

    Read the article

  • Why can a "scope link" IPv6 address be pinged via interfaces it is not active on?

    - by olagu
    [root@2_01 ~]# /sbin/ip -6 addr show pubeth0 inet6 2001:1::6/64 scope global inet6 2001:1::1/64 scope global inet6 fe80::20c:29ff:fe69:f9e8/64 scope link [root@v2_01 ~]# /sbin/ip -6 addr show pubeth1 inet6 fe80::20c:29ff:fe69:f906/64 scope link [root@2_01 ~]# ping6 fe80::20c:29ff:fe69:f9e8%pubeth1 PING fe80::20c:29ff:fe69:f9e8%pubeth1(fe80::20c:29ff:fe69:f9e8) 56 data bytes 64 bytes from fe80::20c:29ff:fe69:f9e8: icmp_seq=1 ttl=64 time=0.259 ms --- fe80::20c:29ff:fe69:f9e8%pubeth1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 286ms rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms [root@2_01 ~]# ping6 fe80::20c:29ff:fe69:f9e8%pubeth0 PING fe80::20c:29ff:fe69:f9e8%pubeth0(fe80::20c:29ff:fe69:f9e8) 56 data bytes 64 bytes from fe80::20c:29ff:fe69:f9e8: icmp_seq=1 ttl=64 time=0.057 ms --- fe80::20c:29ff:fe69:f9e8%pubeth0 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 390ms rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms Why can I ping6 "fe80::20c:29ff:fe69:f9e8" via pubeth1?

    Read the article
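
    A note on the question above, offered as an assumption about the setup: fe80::20c:29ff:fe69:f9e8 is the host's own pubeth0 address, and Linux (a weak-host-model stack) will answer pings to any of its own addresses regardless of which interface the zone suffix names, so the %pubeth1 form can still get a reply; and if the two NICs also sit on the same L2 segment (common with VMware vSwitches), neighbour discovery over either port would succeed anyway. Watching the wire settles which of the two is happening:

        tcpdump -ni pubeth1 icmp6   # run while pinging ...%pubeth1; no packets means the
                                    # reply never left the box and was answered locally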

  • Batch file to open multiple cmd prompts

    - by JHarris
    I am trying to write a batch file that will automate the following manual process: Open a new cmd prompt (prompt1) Run a bat file (b1) Run a program (that will continue to run) Minimize prompt1 Open a new cmd prompt (prompt2) Run a bat file (b1) Run a different program (that will continue to run) Minimize prompt2 I've found ways to open multiple instances of cmd to run different things, but after I've run the first thing (b1), I then need to run a program in that same cmd window. I currently have start /min cmd /k C:\Users\db2admin\python_environment\Scripts\activate.bat start /min cmd /k C:\Users\db2admin\python_environment\Scripts\activate.bat This opens the two windows and runs the bat, great, but now I need to execute another command (running a python file) in each of the cmd windows. How do I send commands to each prompt?

    Read the article

  • Time of splitting and encoding video with ffmpeg increases exponentially

    - by Rnd_d
    I'm trying to split a video by subtitles and encode the pieces into .mp4 (h.264/aac) using ffmpeg, but it takes so much time! The first pieces are split and encoded really fast, but the time increases with each iteration, so a 40 min video takes all night or more. A small 3 min video takes at most 10 minutes. The cmd for splitting and encoding: ffmpeg -i filename.avi -ss 00:00:0(time of sub start) -t 0:0:3(time of sub duration) -acodec libfaac -vcodec libx264 -bf 0 -f mp4 filename.mp4 ffmpeg version N-34849-g07c7ffc (the latest, I think) How can I make it faster? Are there, maybe, some magic arguments for ffmpeg, or some hacks?

    Read the article
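
    For the ffmpeg question above, a likely cause (an assumption about the build's behaviour): with -ss placed after -i, ffmpeg decodes the input from the beginning and discards frames until the start time, so every later clip costs more than the previous one. Putting -ss before -i makes it seek first, keeping the rest of the command from the question unchanged:

        ffmpeg -ss 00:25:00 -i filename.avi -t 0:0:3 \
               -acodec libfaac -vcodec libx264 -bf 0 -f mp4 filename.mp4

    With older builds, input-side seeking lands on a keyframe and can be slightly early; a common compromise is a coarse -ss before -i plus a small corrective -ss after it.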

  • Raidz in FreeNAS eating more space than expected

    - by swood
    I just got 6 new 2TB drives and added them to my FreeNAS box. I have only dealt with RAID1 previously, and each setup has given what I was expecting. However, with the 6*2TB drives I wanted to maximize the available space, so I went with raidz. But I seem to be missing space: I have 8.6TB available after the raidz was built. Maybe I did my math horribly wrong, but (N-1) x S(min) (where N=6 and S(min)=2TB) should result in 10TB (I understand it would be more like 9.something). Does raidz actually consume more than one drive's worth of space, or could there possibly be another problem? (All drives have been independently verified to have 2TB of space available.)

    Read the article
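
    The arithmetic for the raidz question above mostly comes down to units plus the parity drive; a rough sketch (decimal-TB drives, FreeNAS reporting binary TiB):

        echo "6*2*10^12/2^40" | bc -l   # raw pool:               ~10.91 TiB
        echo "5*2*10^12/2^40" | bc -l   # minus one parity drive: ~9.09 TiB

    So (N-1) x S(min) is right, but the "2TB" in it is 2x10^12 bytes, not 2 TiB. The rest of the gap down to 8.6 is plausibly the swap partition FreeNAS puts on each disk plus ZFS metadata, reservations and raidz allocation padding, rather than raidz eating a second drive.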

  • Intermittent SSH with ssh_exchange_identification error

    - by rafamvc
    My SSH connection to my server works for around 10 minutes out of every 30. Things I have figured out that might be the problem: The server is under load (it is a database server), but in those spare moments when I can connect it is still under the same load, which doesn't make sense. The server runs Ubuntu, and ConsoleKit was using a lot of virtual memory; I restarted ConsoleKit and it seems to be using a reasonable amount of memory now. It is not hosts.allow or hosts.deny - those are set up properly. It is not a firewall problem: those settings were working before, and the same settings work for other similar machines. It is on EC2, the Amazon cloud.

    Read the article
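
    A diagnostic sketch for the SSH issue above, under the assumption that the on/off pattern on a loaded host points at sshd's unauthenticated-connection limit (MaxStartups) being hit in bursts, or at some tool on a timer rewriting /etc/hosts.deny even though its static contents look right (denyhosts/fail2ban style):

        ssh -vvv user@server 2>&1 | tail -n 20   # from outside: shows exactly where the exchange dies
        # on the server, while it is failing:
        tail -f /var/log/auth.log                # refused connections, PAM or DNS complaints
        grep -v '^#' /etc/hosts.deny             # entries appearing and disappearing on a cycle?
        sshd -T | grep -i maxstartups            # effective limit (OpenSSH 5.1+); raise it if it is low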
