Search Results

Search found 893 results on 36 pages for 'gzip'.

Page 13/36

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>

    However, when I use Firebug it still points to Firefox not caching images and CSS. Response headers:

        Cache-Control public,max-age=604800
        Content-Type text/css
        Content-Encoding gzip
        Last-Modified Mon, 27 Jun 2011 03:53:22 GMT
        Accept-Ranges bytes
        Etag "507968c27d34cc1:0"
        Vary Accept-Encoding
        Server Microsoft-IIS/7.5
        X-Powered-By ASP.NET
        Date Mon, 27 Jun 2011 13:06:41 GMT
        Content-Length 5067

    Request headers:

        Host www.xx.com
        User-Agent Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
        Accept text/css,*/*;q=0.1
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip, deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://www.xx.com/
        Cookie __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397
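
    A quick way to double-check what the server is really doing, independent of Firebug, is to request the stylesheet twice and replay the validators. A minimal sketch in Python, assuming the third-party requests library is available; the URL is a placeholder for the real stylesheet:

        import requests

        url = "http://www.xx.com/style.css"  # hypothetical stylesheet URL
        first = requests.get(url)
        print(first.headers.get("Cache-Control"), first.headers.get("ETag"))

        # Replay the validators: a 304 means revalidation works and the
        # browser can reuse its cached copy instead of re-downloading.
        second = requests.get(url, headers={
            "If-None-Match": first.headers.get("ETag", ""),
            "If-Modified-Since": first.headers.get("Last-Modified", ""),
        })
        print(second.status_code)  # expect 304 Not Modified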

    Read the article

  • Windows software to copy from/to image/disk/partition with offset&compression

    - by Alex131089
    I tried to put everything in the title: I'm looking for software that is able:
    - to work with an image (raw file), a partition, or a whole disk, without distinction
    - to copy a whole image or only a selected part (let's say from 0 to the end of the last partition, excluding free space, for example; or with a start + offset/end system)
    - to handle compression (at least gzip)
    You recognized it: I'm looking for a "dd | gzip" utility with a GUI on Windows. The closest tool I found so far is http://www.dubaron.com/diskimage/ but it's a bit old and doesn't have compression support. Any ideas?
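
    For comparison, the core of a "dd | gzip" copy with an offset is only a few lines. A rough sketch in Python, assuming a raw image file as input (on Windows, a device path such as \\.\PhysicalDrive0 opened with administrator rights also works):

        import gzip

        def copy_range(src_path, dst_path, start, length, chunk=1024 * 1024):
            # Copy `length` bytes from `src_path` beginning at byte `start`,
            # gzip-compressing into `dst_path` (dd skip/count piped to gzip).
            with open(src_path, "rb") as src, gzip.open(dst_path, "wb") as dst:
                src.seek(start)
                remaining = length
                while remaining > 0:
                    data = src.read(min(chunk, remaining))
                    if not data:
                        break
                    dst.write(data)
                    remaining -= len(data)

        # Hypothetical example: the first 2 GiB of an image file.
        copy_range("disk.img", "disk.img.gz", start=0, length=2 * 1024**3)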

    Read the article

  • Serving Compressed Files Amazon vs Lightty

    - by tike
    We are currently using Amazon CloudFront to serve CSS, and according to Amazon itself, CloudFront can serve both compressed and uncompressed files from an origin server. But when I check compression, everything shows fine on the origin server, while the same check against the CloudFront link reports the file as uncompressed. E.g.

        http://www.port80software.com/tools/compresscheck.asp?url=http%3A%2F%2Fimgsrv.mydomain.com%2Fen-UK%2Fsomething.css

    results in "Compression status: (gzip)", while with CloudFront

        http://www.port80software.com/tools/compresscheck.asp?url=http%3A%2F%2hereisit.cloudfront.net%2F%2Fsomething.css

    gives "Compression status: Uncompressed". The origin server is running lighttpd with mod_deflate; the allowed config is:

        deflate.allowed_encodings = ("bzip2", "gzip", "deflate")

    (I would think putting extra allowed encodings in won't affect this as such.) Here I am clueless as to what the real issue is.
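
    One way to take the checking tool out of the equation is to send the Accept-Encoding header yourself and compare what the origin and the distribution return. A small sketch with Python's standard library (hostnames are the placeholders from the question); note that CloudFront caches whichever variant it happened to fetch from the origin, so a missing Content-Encoding here can mean the edge stored an uncompressed copy:

        import urllib.request

        def content_encoding(url):
            req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
            with urllib.request.urlopen(req) as resp:
                return resp.headers.get("Content-Encoding", "(uncompressed)")

        print(content_encoding("http://imgsrv.mydomain.com/en-UK/something.css"))
        print(content_encoding("http://hereisit.cloudfront.net/something.css"))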

    Read the article

  • BEAST / CRIME / BREACH attacks and stopping them

    - by user2143356
    I have read so much on all this but I'm not entirely sure I understand what has gone on. Also, is this one, two or three problems? It looks to me like three (BEAST, CRIME, BREACH), but it's all very confusing. It seems the solution may be to simply not use compression with HTTPS traffic (or is that just for one of them?). I use gzip compression. Is that okay, or is that part of the problem? I also use Ubuntu 12.04 LTS. Also, is non-HTTPS traffic okay? So after reading all the theory I just want the solution. I think this may be it, but can someone please confirm I have understood everything, so that I am not likely to suffer from these attacks: SOLUTION: use gzip compression on HTTP traffic, but don't use any compression on HTTPS traffic.

    Read the article

  • Forcing rsync to convert file names to lower case

    - by SvrGuy
    We are using rsync to transfer some (millions of) files from a Windows (NTFS/Cygwin) server to a Linux (RHEL) server. We would like to force all file and directory names on the Linux box to be lower case. Is there a way to make rsync automagically convert all file and directory names to lower case? For example, let's say the source file system had a file named /foo/BAR.gziP; rsync would create (on the destination system) /foo/bar.gzip. Obviously, with NTFS being a case-insensitive file system, there cannot be any conflicts... Failing the availability of an rsync option, is there an enhanced build or some other way to achieve this effect? Perhaps a mount option on Cygwin? Perhaps a similar mount option on Linux? It's RHEL, in case that matters.
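
    rsync itself has no case-folding option, so a common workaround is a post-transfer pass on the Linux side. A minimal sketch in Python, walking bottom-up so files are renamed before the directories that contain them (the root path is a placeholder):

        import os

        ROOT = "/data/dest"  # hypothetical rsync destination

        for dirpath, dirnames, filenames in os.walk(ROOT, topdown=False):
            for name in filenames + dirnames:
                lower = name.lower()
                if lower != name:
                    src = os.path.join(dirpath, name)
                    dst = os.path.join(dirpath, lower)
                    if os.path.exists(dst):
                        raise RuntimeError("case collision: " + dst)
                    os.rename(src, dst)

    One caveat: a later rsync run will see the renamed files as missing and copy them again, so the rename pass has to follow every sync.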

    Read the article

  • Cron job failing to back up a Postgres database

    - by user705142
    I'm unsure what's going on here: I've got a backup script which runs fine under root, producing a 300 kB database dump in the proper directory. When running it as a cron job with exactly the same command, however, an empty gzip file appears with nothing in it. The cron log shows no error, just that the command has been run. This is the script:

        #! /bin/bash
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")
        su -c "pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" " postgres
        # delete backup files older than 60 days
        OLD=$(find $DIR -type d -mtime +60)
        if [ -n "$OLD" ] ; then
          echo deleting old backup files: $OLD
          echo $OLD | xargs rm -rfv
        fi

    And the cron job:

        01 10 * * * root sh /opt/daily_backup_script.sh

    It produces a database_backup file, just an empty one. Anyone know what's going on here?
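
    Worth noting when debugging this kind of thing: cron runs with a minimal environment (no login PATH), so commands that resolve fine in a root shell may not resolve under cron, and a failure inside the su -c pipeline still leaves gzip writing a valid but empty archive. A hedged sketch of the same backup with absolute paths and explicit error reporting, in Python (the paths are assumptions):

        import datetime
        import gzip
        import subprocess

        DIR = "/opt/backup"            # same target directory as the script
        PG_DUMP = "/usr/bin/pg_dump"   # assumed absolute path to pg_dump
        ymd = datetime.date.today().strftime("%Y-%m-%d")
        out_path = "%s/database_backup.%s.gz" % (DIR, ymd)

        proc = subprocess.Popen([PG_DUMP, "-U", "postgres", "mydatabasename"],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        with gzip.open(out_path, "wb") as out:
            for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
                out.write(chunk)
        _, err = proc.communicate()
        if proc.returncode != 0:
            raise RuntimeError("pg_dump failed: " + err.decode())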

    Read the article

  • Solaris 11: packages installed in a Zone

    - by Homma
    This post compares the packages installed in the Global Zone and in a Non-global Zone on Solaris 11.

    Packages installed in the Global Zone: installing Solaris 11 itself puts the solaris-large-server package group into the Global Zone. The documentation (http://docs.oracle.com/cd/E26924_01/html/E25785/gihfn.html#gkkqw) notes that the default AI manifest installs the solaris-large-server group. The packages that solaris-large-server pulls in as group dependencies can be listed like this:

        # pkg search -o fmri -H '*/solaris-large-server:depend:group:'
        archiver/gnu-tar
        compress/bzip2
        compress/gzip
        compress/p7zip
        compress/unzip
        compress/zip
        crypto/pwgen
        developer/build/gnu-make
        ...

    Packages installed in a Non-global Zone: a Solaris 11 Non-global Zone is installed with the smaller solaris-small-server group instead, as zone_default.xml shows:

        # grep pkg /usr/share/auto_install/manifest/zone_default.xml
        <name>pkg:/group/system/solaris-small-server</name>

    Its group dependencies can be listed the same way:

        # pkg search -o fmri -H '*/solaris-small-server:depend:group:'
        compress/bzip2
        compress/gzip
        compress/p7zip
        compress/unzip
        compress/zip
        developer/debug/mdb
        ...

    Installing solaris-large-server in a Non-global Zone: to give an existing Non-global Zone the same package set as the Global Zone, install the group explicitly:

        # pkg install solaris-large-server

    To create a Zone with solaris-large-server from the start, customize the AI manifest used for the zone installation, as described at http://docs.oracle.com/cd/E26924_01/html/E25829/z.pkginst.ov-14.html#glxbn

    Read the article

  • An introduction to OSW (OSWatcher Black Box)

    - by Feng
    OSWatcher Black Box (OSW for short) is a small tool provided by Oracle for collecting OS-level performance data - CPU, memory, swap, network I/O and disk I/O - at regular intervals.

    +++ Why use OSW? OSW is not the only tool of its kind; similar data can be collected with mrtg, cacti, sar, nmon or Enterprise Manager Grid Control. Its strengths are that it is lightweight and simple to deploy, has low overhead, and keeps a rolling history of OS statistics, so when a problem is analysed after the fact there is data to look at for finding the root cause. Typical situations where OSW output helps:
    1. When a problem may lie below the database layer, OSW data shows whether the OS itself was healthy at the time, instead of guessing between the database and the machine.
    2. Oracle database performance problems where the OS is a suspect - for example swapping, which can surface in an AWR report as latch/mutex contention.
    3. Cross-checking an AWR report's top-5 events against the CPU, memory, swap and disk I/O figures OSW recorded over the same window.
    4. ORA-04030 errors involving background processes such as CJQ0, P00X or J00X, where the OS-side memory picture at the time of the error matters.
    5. A server process that appears hung: OSW snapshots show whether it was suspended or starved of CPU/memory.
    6. A hung listener, where the OS data from around that time can be examined.
    7. Login storms: bursts of connections are hard to reconstruct from ASH/AWR alone, but OSW's ps samples show the surge of Oracle server processes being created.
    In short, OSW is of great value in analysing OS-related database issues, and a DBA is well advised to keep it running; if a performance problem strikes while it is not deployed, there is simply no OS history to analyse.

    +++ Things OSW does not do:
    1. It does not gather database-side data; it only records what the OS shows.
    2. OSW collects its data by running standard OS commands - ps, vmstat, netstat, mpstat, top - at a fixed interval, covering CPU, disk I/O, disk space and memory, but it raises no alerts of its own: it will not warn you when CPU usage passes 90% or free space is about to run out.

    +++ Installing and running OSW on UNIX/Linux:
    1. Download OSW as described in MOS note 301137.1.
    2. Extract it into a directory (/tmp will do); root is not required:

        $ tar xvf osw.tar

    3. Start it - here sampling every 60 seconds, keeping 48 hours of history, and compressing old archives with gzip:

        $ nohup ./startOSWbb.sh 60 48 gzip &

    4. To stop it:

        $ ./stopOSWbb.sh

    The collected data lands in the archive directory, from which the OS behaviour at the time of a problem can be reviewed.

    Read the article

  • Session Cookies and IE 8

    - by Matt Luongo
    I recently built a simple web app deployed on Tomcat. The app uses pretty standard session-based security where a user who has logged in is given a session. Sessions work fine in Firefox and Chrome, but require the use of jsessionid in the URL for IE (tested 7 & 8) set to medium privacy. In IE 8, I tried to override cookie handling, setting "Allow all 3rd party cookies" and "Allow all session cookies" - no dice. However, when I run Tomcat on my local machine, IE accepts the cookie and sessions work just fine. And now for the HTTP headers. From Chrome, a logged-in user gets a session:

        GET http://devl:8080/testing/ HTTP/1.1
        Host: devl:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
        Set-Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing
        Content-Type: text/html;charset=UTF-8
        Content-Language: en-US
        Content-Length: 2450
        Date: Fri, 26 Mar 2010 14:14:40 GMT

        GET http://devl:8080/testing/logout HTTP/1.1
        Host: devl:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
        Referer: http://devl:8080/testing/
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397
        ...

    From IE 8, with standard medium-level security and privacy:

        GET http://devl:8080/testing/ HTTP/1.1
        Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
        UA-CPU: AMD64
        Accept-Encoding: gzip, deflate
        Host: devl:8080
        Connection: Keep-Alive

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
        Set-Cookie: JSESSIONID=192999F922D6E9C868314452726764BA; Path=/testing
        Content-Type: text/html;charset=UTF-8
        Content-Language: en-US
        Content-Length: 2450
        Date: Fri, 26 Mar 2010 14:32:34 GMT

        GET http://devl:8080/testing/logout HTTP/1.1
        Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
        Referer: http://devl:8080/testing/;jsessionid=6371A83EFE39A46997544F9146AA5CEA
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
        UA-CPU: AMD64
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: devl:8080
        ...

    I thought it might be P3P, but on adding a compact policy, nothing changes. This is the standard Tomcat session, so I'm really surprised I haven't been able to find other people with the same problem so far. Anyone have any ideas?

    Read the article

  • Getting 500 Error when trying to access Rails application through Apache2

    - by cojones
    Hey, I'm using Apache2 as proxy and mongrel_cluster as server for my Rails applications. When I try to access it by typing in the url I get a 500 "Internal Server Error" but when try to locally access the website with "lynx http://localhost:8200" it works. This is my config: <Proxy balancer://sportfreundewitold_cluster> BalancerMember http://127.0.0.1:8200 BalancerMember http://127.0.0.1:8201 </Proxy> # httpd [example.org] dmn entry BEGIN. <VirtualHost x.x.x.x:80> <IfModule suexec_module> SuexecUserGroup vu2025 vu2025 </IfModule> ServerAdmin [email protected] DocumentRoot /var/www/virtual/example.org/htdocs/current/public ServerName example.org ServerAlias www.example.org example.org *.example.org vu2025.admin.roughneck-media.de Alias /errors /var/www/virtual/example.org/errors/ RedirectMatch permanent ^/ftp[\/]?$ http://admin.roughneck-media.de/ftp/ RedirectMatch permanent ^/pma[\/]?$ http://admin.roughneck-media.de/pma/ RedirectMatch permanent ^/webmail[\/]?$ http://admin.roughneck-media.de/webmail/ RedirectMatch permanent ^/ispcp[\/]?$ http://admin.roughneck-media.de/ ErrorDocument 401 /errors/401.html ErrorDocument 403 /errors/403.html ErrorDocument 404 /errors/404.html ErrorDocument 500 /errors/500.html ErrorDocument 503 /errors/503.html <IfModule mod_cband.c> CBandUser example.org </IfModule> # httpd awstats support BEGIN. # httpd awstats support END. # httpd dmn entry cgi support BEGIN. ScriptAlias /cgi-bin/ /var/www/virtual/example.org/cgi-bin/ <Directory /var/www/virtual/example.org/cgi-bin> AllowOverride AuthConfig #Options ExecCGI Order allow,deny Allow from all </Directory> # httpd dmn entry cgi support END. <Directory /var/www/virtual/example.org/htdocs/current/public> # httpd dmn entry PHP support BEGIN. # httpd dmn entry PHP support END. Options -Indexes Includes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> # httpd dmn entry PHP2 support BEGIN. <IfModule mod_php5.c> php_admin_value open_basedir "/var/www/virtual/example.org/:/var/www/virtual/example.org/phptmp/:/usr/share/php/" php_admin_value upload_tmp_dir "/var/www/virtual/example.org/phptmp/" php_admin_value session.save_path "/var/www/virtual/example.org/phptmp/" php_admin_value sendmail_path '/usr/sbin/sendmail -f vu2025 -t -i' </IfModule> <IfModule mod_fastcgi.c> ScriptAlias /php5/ /var/www/fcgi/example.org/ <Directory "/var/www/fcgi/example.org"> AllowOverride None Options +ExecCGI -MultiViews -Indexes Order allow,deny Allow from all </Directory> </IfModule> <IfModule mod_fcgid.c> Include /etc/apache2/mods-available/fcgid_ispcp.conf <Directory /var/www/virtual/example.org/htdocs> FCGIWrapper /var/www/fcgi/example.org/php5-fcgi-starter .php Options +ExecCGI </Directory> <Directory "/var/www/fcgi/example.org"> AllowOverride None Options +ExecCGI MultiViews -Indexes Order allow,deny Allow from all </Directory> </IfModule> # httpd dmn entry PHP2 support END. Include /etc/apache2/ispcp/example.org.conf RewriteEngine On # Make sure people go to www.myapp.com, not myapp.com RewriteCond %{HTTP_HOST} ^myapp\.com$ [NC] RewriteRule ^(.*)$ http://www.myapp.com$1 [R=301,L] # Yes, I've read no-www.com, but my site already has much Google-Fu on # www.blah.com. Feel free to comment this out. 
# Uncomment for rewrite debugging #RewriteLog logs/myapp_rewrite_log #RewriteLogLevel 9 # Check for maintenance file and redirect all requests RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f RewriteCond %{SCRIPT_FILENAME} !maintenance.html RewriteRule ^.*$ /system/maintenance.html [L] # Rewrite index to check for static RewriteRule ^/$ /index.html [QSA] # Rewrite to check for Rails cached page RewriteRule ^([^.]+)$ $1.html [QSA] # Redirect all non-static requests to cluster RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L] # Deflate AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \\bMSIE !no-gzip !gzip-only-text/html # Uncomment for deflate debugging #DeflateFilterNote Input input_info #DeflateFilterNote Output output_info #DeflateFilterNote Ratio ratio_info #LogFormat '"%r" %{output_info}n/%{input_info}n (%{ratio_info}n%%)' deflate #CustomLog logs/myapp_deflate_log deflate </VirtualHost> # httpd [example.org] dmn entry END. Does anyone know what could be wrong with it?

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches tpiphem for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }

    Read the article

  • JSON Feed Appears to be XHR when it should be JS

    - by Oscar Godson
    I don't get why it's doing this with the 2nd feed (appearing as an XHR call rather than just JS, looking at it in Firefox/Firebug). The 2nd feed has the exact same MIME type as Flickr's JSON feed, yet the PortlandOregon.gov one shows as XHR, and I get a NULL callback when using $.getJSON; if I use $.ajax with a 'json' or 'jsonp' type I get nothing at all. If I do the Flickr one I get the normal "[object Object]" callback. What's going on? Please help! This has been such a headache for about a week. And I have authorization to change the feed, but I have to request the change, so if anyone knows for absolute sure, let me know that!

    Response headers from Flickr's API (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?) [JS]:

        Date Mon, 15 Mar 2010 21:56:06 GMT
        P3P policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
        Expires Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified Mon, 15 Mar 2010 21:52:17 GMT
        Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma no-cache
        Vary Accept-Encoding
        Content-Encoding gzip
        Content-Length 3647
        Connection close
        Content-Type application/x-javascript; charset=utf-8

    Request headers:

        Host api.flickr.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://oscargodson.com/dev/addWidget/test.html
        Cookie BX=4lflj455amesp&b=3&s=iv; fltoto=0%2C0%2C0%2C0%2C1%2C0%3B0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%3B1%3B0%3B; search_z=t; localization=en-us%3Bus%3Bus

    PortlandOregon.gov (http://www.portlandonline.com/shared/cfm/json.cfm?c=27321) [XHR], response headers:

        Connection close
        Date Mon, 15 Mar 2010 21:57:49 GMT
        Server Microsoft-IIS/6.0
        Set-Cookie CONTACT_ID=0;path=/ LAST_USER=;path=/ BIGipServercgis_pol_web_pool-http=1191537418.20480.0000; path=/
        Content-Type application/x-javascript; charset=utf-8

    Request headers:

        Host www.portlandonline.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept application/json, text/javascript, */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://oscargodson.com/dev/addWidget/test.html
        Origin http://oscargodson.com
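
    For what it's worth, the visible difference between the two feeds is what $.getJSON keys on: Flickr honours the jsoncallback=? parameter and wraps the body in that function, so jQuery can load it as a script tag (JSONP), while a feed that ignores the callback parameter returns bare JSON, which jQuery can only fetch via XHR, and a cross-domain XHR comes back empty. A quick check in Python for whether a feed wraps its output in a callback (the parameter name is Flickr's; other feeds may use a different one or none at all):

        import urllib.request

        def wrapped_in_callback(url):
            sep = "&" if "?" in url else "?"
            body = urllib.request.urlopen(url + sep + "jsoncallback=cb").read()
            return body.lstrip().startswith(b"cb(")

        print(wrapped_in_callback(
            "http://api.flickr.com/services/feeds/photos_public.gne"
            "?tags=cat&format=json"))
        print(wrapped_in_callback(
            "http://www.portlandonline.com/shared/cfm/json.cfm?c=27321"))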

    Read the article

  • Server removes all custom HTTP header fields

    - by MartinMoizard
    Hello, I've been trying to receive HTTP requests with custom fields in the headers, but it seems like my server removes them... I printed the headers of the request when I arrive on my page.php. I see this:

        body
        uri http://url.com/oauth.php/request_token
        parameters
        headers Array
        ....*/*
        ....gzip, deflate
        ....en-us
        ....keep-alive
        ....s320650601.onlinehome.fr
        ....DearStranger/1.0 CFNetwork/485.12.7 Darwin/10.6.0
        method POST

    when I should be seeing this (it is working in a local version):

        body
        uri http://localhost:8888/oauth.php/request_token
        parameters
        headers Array
        ....*/*
        ....gzip, deflate
        ....en-us
        ....OAuth realm="", oauth_consumer_key="582d95bd45d455fa2e5819f88fc0c5a104d2c7ff3", oauth_signature_method="HMAC-SHA1", oauth_signature="7mKKzEw0Clv237nBHFzcTcA3SCE%3D", oauth_timestamp="1295267612", oauth_nonce="C546644E-8918-4FA3-A2A0-DAADCF7D1E5A", oauth_version="1.0"
        ....keep-alive
        ....0
        ....localhost:8888
        ....DearStranger/1.0 CFNetwork/485.12.7 Darwin/10.6.0
        method POST

    I am using PHP 5.2.17 on the server. Do you have any idea how to fix this issue? Thanks!
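
    A useful isolation step is to send a request with the header set from a small client and compare what the two environments receive. If the header is on the wire but missing in PHP, a frequent culprit is Apache running PHP as CGI/FastCGI, which does not pass the Authorization header through to scripts unless explicitly configured to. A hedged test client in Python (the URL is the host from the question's dump; the OAuth value is a stand-in):

        import urllib.request

        req = urllib.request.Request(
            "http://s320650601.onlinehome.fr/oauth.php/request_token",
            data=b"",  # empty POST body, just to exercise the POST path
            headers={"Authorization": 'OAuth oauth_version="1.0"'},
        )
        # Note: urlopen raises HTTPError on non-2xx responses.
        with urllib.request.urlopen(req) as resp:
            print(resp.getcode(), resp.read()[:200])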

    Read the article

  • Best Installation Software?

    - by Chris
    I am interested in knowing what the best software would be to build an installation package that performs the following:
    (1) Installs the client application.
    (2) Detects all SQL Server instances on the network, allowing the user to select the specific database to upgrade (which would then be upgraded using an embedded SQL script).
    (3) Installs a website on a server/location specified by the user, and configures IIS 6.0 and/or 7.0 based on settings that I specify.
    (4) Creates a simple setup.exe and allows the user to choose installation components (listed above, i.e. install client app, SQL Server database, and/or website), and then download the selected components from a remote server.
    I have tried NSIS and was able to create an installation package that downloads a compressed (gzip) component from a remote server, decompresses the file, installs the components, and then removes the gzip file. So that worked beautifully. The part where I am stuck is being able to perform the database upgrade and website install. Any suggestions would be great. Thanks. Chris

    Read the article

  • PHP's apache_setenv function causes 500 Internal Server Error

    - by guitar-
    apache_setenv('no-gzip', 1) - I'm trying to disable gzip for a certain page's output, but only that page. This works fine on testing servers, but not on the production server, which is running the same thing (CentOS and Apache); it works on Ubuntu, though. Anyway, do you know why? Or is there some other alternative? I was thinking of using ob_start() to put all output in a buffer, and then unzip it myself with a PHP function, then call ob_end_flush()... or would it not be gzipped until right before Apache sends it to the client? Thanks for any help.

    Read the article

  • Is my form password being passed in clear text?

    - by liinkas
    This is what my browser sent when logging into some site:

        POST http://www.some.site/login.php HTTP/1.0
        User-Agent: Opera/8.26 (X2000; Linux i686; Z; en)
        Host: www.some.site
        Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1
        Accept-Language: en-US,en;q=0.9
        Accept-Charset: iso-8859-1, utf-8, utf-16, *;q=0.1
        Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
        Referer: http://www.some.site/
        Proxy-Connection: close
        Content-Length: 123
        Content-Type: application/x-www-form-urlencoded

        lots_of_stuff=here&e2ad811=my_login_name&e327696=my_password&lots_of_stuff=here

    Can I state that anyone can sniff my login name and password for that site? Maybe just on my LAN? If so (even if only on the LAN), then I'm shocked. I thought using <input type="password"> did something more than make all characters look like '*'.
    P.S. If it matters, I played with netcat (on Linux) and made the connection browser <= netcat (logged here) <= proxy <= remote_site
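
    Yes: over plain HTTP the form fields travel as readable urlencoded text, and anyone who can see the traffic (certainly on the same LAN segment, plus every hop in between) can recover them; type="password" only masks the characters on screen. Decoding the captured body is one call, as a quick Python illustration:

        from urllib.parse import parse_qs

        body = ("lots_of_stuff=here&e2ad811=my_login_name"
                "&e327696=my_password&lots_of_stuff=here")
        print(parse_qs(body))
        # {'lots_of_stuff': ['here', 'here'],
        #  'e2ad811': ['my_login_name'], 'e327696': ['my_password']}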

    Read the article

  • Webserver parsing Chrome input from POST request

    - by ravenspoint
    I am developing a small embedded web server. I want to add parsing of POST requests, but I am having a problem with input password fields from Chrome. Firefox and IE work perfectly. The HTML:

        <form action=start.webem method=post>
        <input value="START" type=submit /><!--#webem start -->
        <p>Password: <input TYPE=PASSWORD name=yourname AUTOCOMPLETE=OFF /><br>
        </form>

    From Firefox I get:

        POST /stop.webem HTTP/1.1
        Host: 127.0.0.1:8080
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.9) Gecko/20100315 Firefox/3.5.9 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Referer: http://127.0.0.1:8080/
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 13

        yourname=test

    However from Chrome, about 90% of the time, the "yourname=test" is missing:

        POST /start.webem HTTP/1.1
        Host: 127.0.0.1:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5
        Referer: http://127.0.0.1:8080/
        Content-Length: 13
        Cache-Control: max-age=0
        Origin: http://127.0.0.1:8080
        Content-Type: application/x-www-form-urlencoded
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Though occasionally it does work:

        POST /start.webem HTTP/1.1
        Host: 127.0.0.1:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5
        Referer: http://127.0.0.1:8080/start.webem
        Content-Length: 13
        Cache-Control: max-age=0
        Origin: http://127.0.0.1:8080
        Content-Type: application/x-www-form-urlencoded
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        yourname=test

    I cannot find what causes it to work sometimes.
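
    A classic cause with hand-rolled servers is reading the request with a single recv(): Chrome frequently ships the headers and the body in separate TCP segments, so one read yields only the headers and the body appears to be missing, while requests that happen to arrive in one segment work. The robust pattern is to keep reading until Content-Length bytes of body have arrived; a sketch in Python:

        import socket

        def read_request(conn):
            # Read until the end of the header block.
            buf = b""
            while b"\r\n\r\n" not in buf:
                chunk = conn.recv(4096)
                if not chunk:
                    raise ConnectionError("client closed early")
                buf += chunk
            head, _, body = buf.partition(b"\r\n\r\n")
            # Find Content-Length, then keep reading until the body is complete.
            length = 0
            for line in head.split(b"\r\n")[1:]:
                name, _, value = line.partition(b":")
                if name.strip().lower() == b"content-length":
                    length = int(value.decode().strip())
            while len(body) < length:
                body += conn.recv(4096)
            return head, body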

    Read the article

  • Curl Wrapper Class does not return any data even though it worked previously?

    - by Scott Faisal
    We changed servers and installed all necessary software and just cannot seem to pin point what is going on. A simple CURL request does not return anything. Command Line CURL commands work just fine. We are using a wrapper for CURL utilizing streams. Do PHP streams require any out of the ordinary configuration? We are using the latest Lamp stack. This is the var_dump: object(cURL_Response)#180 (14) { ["cURL:private"]= resource(288) of type (curl) ["data_stream:private"]= object(elTempStream)#178 (1) { ["fp"]= resource(290) of type (stream) } ["request_header:private"]= NULL ["response_header:private"]= object(cURL_Headers)#179 (1) { ["headers:private"]= string(0) "" } ["response_headers:private"]= array(1) { [0]= object(cURL_Headers)#179 (1) { ["headers:private"]= string(0) "" } } ["error:private"]= string(0) "" ["errno:private"]= int(0) ["info:private"]= array(21) { ["url"]= string(21) "http://www.yahoo.com/" ["content_type"]= string(23) "text/html;charset=utf-8" ["http_code"]= int(200) ["header_size"]= int(1195) ["request_size"]= int(1153) ["filetime"]= int(-1) ["ssl_verify_result"]= int(0) ["redirect_count"]= int(1) ["total_time"]= float(0.486924) ["namelookup_time"]= float(0.003692) ["connect_time"]= float(0.005709) ["pretransfer_time"]= float(0.005714) ["size_upload"]= float(0) ["size_download"]= float(28509) ["speed_download"]= float(58549) ["speed_upload"]= float(0) ["download_content_length"]= float(211) ["upload_content_length"]= float(0) ["starttransfer_time"]= float(0.149365) ["redirect_time"]= float(0.312743) ["request_header"]= string(973) "GET / HTTP/1.0 User-Agent: cURL_ClientBase (PHP v/5.2.6-1+lenny4) Host: www.yahoo.com Accept: / Accept-Encoding: gzip, deflate, compress Referer: http://yahoo.com Cookie: B=e5iber15t7u05&b=3&s=ie; fpc_s=d=GGX6WCTIR29HWsjgLxFejKc_YJWxRqm3jYdEd6lu7W5ophpuAHBm6JGtNvhv97anG4VtaIMHQBPg3JAMOZGq59Lz_tRn_TFXgUT8T_at5HdCktVJLycy&v=2; fpt=d=nt1OT7HPe9wVIkHbMkpzQOgbP3.mQ3o1SPX7k5ztrFrWeeSWK5IgQooRY.8KtTeRMiaSEZ0kv3sO1MWtEsAzjVlRCDAZBoxqOs17v6PaZbPRqmDc92ivoMia.CqjufRs4_guOO4AyhRZ7_ml8rzxFrYeexpR2jLN0oPMyEWT0nbEf6Sdf._Bkh0HMfmI7KBnEx5uZBEEmV.wTfGRLG7zSd9sA4itOFv.r6AjP39CnogSn7NTJnqg_kEcKoiCM.lR5w_MqMc8IgWMBgSAZZgGEZpfmvxlQGnUzPwNh2pSpTe2wxFS3v1zPopDgoo2VsO3uzeyA3A_j7Hlk1P8T08DHbfr6ApDMUcr7d0QIt4pGYIxVV45XzfgpT7mgUdMei6VZrD9ozVQF0oqxrs1Ufri.XzPdB3NdQ--&v=1; fpc=d=sRPCfUfBTW96.RGiQn4hSkfi3p7WnPCAqYl5YoHecI7zjg7gH7PolscoPcq1Esm8dR.Rg1.AbQCpo2WBPXn1St96PpcjeCC.pj2.Upb3mKSRQkYPIVP1vQcL9nL7J8s9Z0VIXjiBFgSUcxyzDeUdP4us2YbVO3PbaVIwaIEfFsX3WI7YgiTbkrTGtwnFgoSYq6l8tnw-&v=2" } ["info_flagged:private"]= array(20) { [1048577]= string(21) "http://www.yahoo.com/" [2097154]= int(200) [2097166]= int(-1) [3145731]= float(0.486924) [3145732]= float(0.003692) [3145733]= float(0.005709) [3145734]= float(0.005714) [3145745]= float(0.149365) [3145747]= float(0.312743) [3145735]= float(0) [3145736]= float(28509) [3145737]= float(58549) [3145738]= float(0) [2097163]= int(1195) [2]= string(973) "GET / HTTP/1.0 User-Agent: cURL_ClientBase (PHP v/5.2.6-1+lenny4) Host: www.yahoo.com Accept: / Accept-Encoding: gzip, deflate, compress Referer: http://yahoo.com Cookie: B=e5iber15t7u05&b=3&s=ie; fpc_s=d=GGX6WCTIR29HWsjgLxFejKc_YJWxRqm3jYdEd6lu7W5ophpuAHBm6JGtNvhv97anG4VtaIMHQBPg3JAMOZGq59Lz_tRn_TFXgUT8T_at5HdCktVJLycy&v=2; 
fpt=d=nt1OT7HPe9wVIkHbMkpzQOgbP3.mQ3o1SPX7k5ztrFrWeeSWK5IgQooRY.8KtTeRMiaSEZ0kv3sO1MWtEsAzjVlRCDAZBoxqOs17v6PaZbPRqmDc92ivoMia.CqjufRs4_guOO4AyhRZ7_ml8rzxFrYeexpR2jLN0oPMyEWT0nbEf6Sdf._Bkh0HMfmI7KBnEx5uZBEEmV.wTfGRLG7zSd9sA4itOFv.r6AjP39CnogSn7NTJnqg_kEcKoiCM.lR5w_MqMc8IgWMBgSAZZgGEZpfmvxlQGnUzPwNh2pSpTe2wxFS3v1zPopDgoo2VsO3uzeyA3A_j7Hlk1P8T08DHbfr6ApDMUcr7d0QIt4pGYIxVV45XzfgpT7mgUdMei6VZrD9ozVQF0oqxrs1Ufri.XzPdB3NdQ--&v=1; fpc=d=sRPCfUfBTW96.RGiQn4hSkfi3p7WnPCAqYl5YoHecI7zjg7gH7PolscoPcq1Esm8dR.Rg1.AbQCpo2WBPXn1St96PpcjeCC.pj2.Upb3mKSRQkYPIVP1vQcL9nL7J8s9Z0VIXjiBFgSUcxyzDeUdP4us2YbVO3PbaVIwaIEfFsX3WI7YgiTbkrTGtwnFgoSYq6l8tnw-&v=2" [2097164]= int(1153) [2097165]= int(0) [3145743]= float(211) [3145744]= float(0) [1048594]= string(23) "text/html;charset=utf-8" } ["request_url:private"]= string(16) "http://yahoo.com" ["response_url:private"]= string(21) "http://www.yahoo.com/" ["status_code:private"]= int(200) ["cookies:private"]= array(0) { } ["request_headers"]= string(973) "GET / HTTP/1.0 User-Agent: cURL_ClientBase (PHP v/5.2.6-1+lenny4) Host: www.yahoo.com Accept: / Accept-Encoding: gzip, deflate, compress Referer: http://yahoo.com Cookie: B=e5iber15t7u05&b=3&s=ie; fpc_s=d=GGX6WCTIR29HWsjgLxFejKc_YJWxRqm3jYdEd6lu7W5ophpuAHBm6JGtNvhv97anG4VtaIMHQBPg3JAMOZGq59Lz_tRn_TFXgUT8T_at5HdCktVJLycy&v=2; fpt=d=nt1OT7HPe9wVIkHbMkpzQOgbP3.mQ3o1SPX7k5ztrFrWeeSWK5IgQooRY.8KtTeRMiaSEZ0kv3sO1MWtEsAzjVlRCDAZBoxqOs17v6PaZbPRqmDc92ivoMia.CqjufRs4_guOO4AyhRZ7_ml8rzxFrYeexpR2jLN0oPMyEWT0nbEf6Sdf._Bkh0HMfmI7KBnEx5uZBEEmV.wTfGRLG7zSd9sA4itOFv.r6AjP39CnogSn7NTJnqg_kEcKoiCM.lR5w_MqMc8IgWMBgSAZZgGEZpfmvxlQGnUzPwNh2pSpTe2wxFS3v1zPopDgoo2VsO3uzeyA3A_j7Hlk1P8T08DHbfr6ApDMUcr7d0QIt4pGYIxVV45XzfgpT7mgUdMei6VZrD9ozVQF0oqxrs1Ufri.XzPdB3NdQ--&v=1; fpc=d=sRPCfUfBTW96.RGiQn4hSkfi3p7WnPCAqYl5YoHecI7zjg7gH7PolscoPcq1Esm8dR.Rg1.AbQCpo2WBPXn1St96PpcjeCC.pj2.Upb3mKSRQkYPIVP1vQcL9nL7J8s9Z0VIXjiBFgSUcxyzDeUdP4us2YbVO3PbaVIwaIEfFsX3WI7YgiTbkrTGtwnFgoSYq6l8tnw-&v=2" }

    Read the article

  • Serving GZipped files from s3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except S3 is not serving up the gzipped css/js files (just the uncompressed versions). I've enabled gzip compression, to no avail:

        config.gzip_compression = true

    According to "Using GZIP with html pages served from Amazon S3" I need to add metadata to the S3 object when uploading. How would I do this in concert with the Asset Pipeline? Thank you for any help.
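
    Independent of the gem, the underlying requirement is that a gzipped object on S3 carries its own Content-Encoding: gzip metadata, because S3 does no on-the-fly compression or content negotiation. A hedged sketch of such an upload with boto3 (bucket, key and file names are placeholders):

        import boto3

        s3 = boto3.client("s3")
        s3.upload_file(
            "public/assets/application-abc123.css.gz",  # hypothetical local file
            "my-assets-bucket",                         # hypothetical bucket
            "assets/application-abc123.css",            # served under the .css key
            ExtraArgs={
                "ContentType": "text/css",
                "ContentEncoding": "gzip",  # browsers inflate transparently
                "CacheControl": "public, max-age=31536000",
            },
        )

    The flip side is that S3 then serves the gzipped bytes to every client, whether or not it sent Accept-Encoding: gzip.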

    Read the article

  • Is there a way in CXF to disable the SoapCompressed header for debugging purposes?

    - by Don Branson
    I'm watching CXF service traffic using DonsProxy, and the CXF client sends an HTTP header "SoapCompressed":

        HttpHeadSubscriber starting...
        Sender is CLIENT at 127.0.0.1:2680
        Packet ID:0-1
        POST /yada/yada HTTP/1.1
        Content-Type: text/xml; charset=UTF-8
        SoapCompressed: true
        Accept-Encoding: gzip,gzip;q=1.0, identity; q=0.5, *;q=0
        SOAPAction: ""
        Accept: */*
        User-Agent: Apache CXF 2.2
        Cache-Control: no-cache
        Pragma: no-cache
        Host: localhost:9090
        Connection: keep-alive
        Transfer-Encoding: chunked

    I'd like to turn SoapCompressed off in my dev environment so that I can see the SOAP on the wire. I've searched Google and grepped the CXF source code, but don't see anything in the docs or code that references this. Any idea how to make the client send "SoapCompressed: off" instead, without routing it through Apache HTTPD or the like? Is there a way to configure it at the CXF client, in other words?

    Read the article

  • Rewriting Live TCP/IP (Layer 4) (i.e. Socket Layer) Streams

    - by user213060
    I have a simple problem which I'm sure someone here has done before... I want to rewrite Layer 4 TCP/IP streams (not lower-layer individual packets or frames). Ettercap's etterfilter command lets you perform simple live replacements of Layer 4 TCP/IP streams based on fixed strings or regexes. Example ettercap scripting code:

        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "gzip")) {
                replace("gzip", " ");
                msg("whited out gzip\n");
            }
        }
        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "deflate")) {
                replace("deflate", " ");
                msg("whited out deflate\n");
            }
        }

    (http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833)

    I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to VPN software or something? I would like to have a configuration similar to ettercap's silent bridged sniffing configuration between two Ethernet interfaces. This way I can silently filter traffic coming from either direction with no NATing problems. Note that my filter is an application that acts as a pipe filter, similar to the design of unix command-line filters:

        [eth0] <----------> [my filter] <----------> [eth1]

    What I am already aware of, but is not suitable:
    - Tun/Tap: works at the lower packet layer; I need to work with the higher-layer streams.
    - Ettercap: I can't find any way to do replacements other than the restricted capabilities in the example above.
    - Hooking into some VPN software? I just can't figure out which, or exactly how.
    - libnetfilter_queue: works with lower-layer packets, not TCP/IP streams.
    Again, the rewriting should occur at the transport layer (Layer 4) as it does in this example, instead of a lower-layer packet-based approach. Exact code will help immensely! Thanks!
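
    The transport-layer version of this is an ordinary TCP proxy that accepts each connection, pipes both directions through a filter function, and reconnects to the real destination. A stripped-down sketch in Python (the addresses and the filter are placeholders; a replacement that changes the payload length would also have to fix up things like Content-Length, which the equal-length whiteout below sidesteps):

        import socket
        import threading

        def filter_bytes(data):
            # Placeholder stream filter: same idea as the ettercap example.
            return data.replace(b"gzip", b"    ")

        def pump(src, dst, fn):
            while True:
                data = src.recv(4096)
                if not data:
                    dst.close()
                    return
                dst.sendall(fn(data))

        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 8080))  # hypothetical listen side
        listener.listen(5)
        while True:
            client, _ = listener.accept()
            server = socket.create_connection(("10.0.0.2", 80))  # hypothetical real server
            threading.Thread(target=pump, args=(client, server, filter_bytes)).start()
            threading.Thread(target=pump, args=(server, client, lambda d: d)).start()

    One caveat of this naive version: a match split across two recv() boundaries is missed, so a real filter would buffer across reads.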

    Read the article

  • How to download file into string with progress callback?

    - by Kaminari
    I would like to use the WebClient (or is there another, better option?) but there is a problem. I understand that opening up the stream takes some time and this cannot be avoided. However, reading it takes a strangely large amount of time compared to reading it entirely immediately. Is there a best way to do this? I mean two ways: to a string and to a file. Progress is my own delegate and it's working well.

    FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions which made me realize that the problem lay elsewhere. I've tested custom WebResponse and WebRequest objects, the library libCURL.NET and even Sockets. The difference in time was gzip compression. The compressed stream length was simply half the normal stream length, and thus download time was less than 3 seconds with the browser. I put up some code in case someone wants to know how I solved this (some headers are not needed):

        public static string DownloadString(string URL)
        {
            WebClient client = new WebClient();
            client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5";
            client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            client.Headers["Accept-Encoding"] = "gzip,deflate,sdch";
            client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3";

            Stream inputStream = client.OpenRead(new Uri(URL));
            MemoryStream memoryStream = new MemoryStream();
            const int size = 32 * 4096;
            byte[] buffer = new byte[size];

            if (client.ResponseHeaders["Content-Encoding"] == "gzip")
            {
                inputStream = new GZipStream(inputStream, CompressionMode.Decompress);
            }

            int count = 0;
            do
            {
                count = inputStream.Read(buffer, 0, size);
                if (count > 0)
                {
                    memoryStream.Write(buffer, 0, count);
                }
            } while (count > 0);

            string result = Encoding.Default.GetString(memoryStream.ToArray());
            memoryStream.Close();
            inputStream.Close();
            return result;
        }

    I think the async functions will be almost the same. But I will simply use another thread to fire this function. I don't need precise progress indication.

    Read the article

  • What kind of IP is this in my Google App Engine log file?

    - by Christian Harms
    I get many normal log lines in my Google App Engine application, but today I got this instead of the usual 4-part number: 2a01:e35:2f20:f770:6c54:3ee8:67fb:df8. What format is this? IPv6 would be 6 numbers, a MAC address too... A normal log line:

        187.14.44.208 - - [19/Mar/2010:14:31:35 -0700] "GET /geo_data.js HTTP/1.1" 200 776 "http://www.xxx.com.br/spl19/index.php?refid=gv_av_ri" "Mozilla/5.0 (Windows; U; Windows NT 5.1; pt-BR; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729),gzip(gfe)"

    The special log line:

        2a01:e35:2f20:f770:6c54:3ee8:67fb:df8 - - [18/Mar/2010:17:00:37 -0700] "GET /geo_data.js HTTP/1.1" 500 450 "http://www.xxx.com.br/spl19/index.php?refid=cm_av_ri" "Mozilla/5.0 (Windows; U; Windows NT 6.1; pt-PT; rv:1.9.2) Gecko/20100115 Firefox/3.6,gzip(gfe)"
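
    That is an IPv6 address: eight colon-separated 16-bit hex groups, with leading zeros dropped (a MAC address has six two-digit groups). Python's standard library parses and expands it, which makes the format easy to confirm:

        import ipaddress  # standard library since Python 3.3

        addr = ipaddress.ip_address("2a01:e35:2f20:f770:6c54:3ee8:67fb:df8")
        print(addr.version)   # 6
        print(addr.exploded)  # 2a01:0e35:2f20:f770:6c54:3ee8:67fb:0df8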

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largish file in Python. All I'm doing is:

        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()

    This takes around 10m25s just to open the file, read the lines and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
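
    One way to close most of the gap is the same trick the Perl code uses: let an external zcat do the decompression in a separate process and read its buffered output. A hedged sketch (the path is a stand-in for the question's pathToLog):

        import subprocess

        path_to_log = "/var/log/app.log.gz"  # hypothetical
        proc = subprocess.Popen(["zcat", path_to_log], stdout=subprocess.PIPE,
                                bufsize=64 * 1024)
        counter = 0
        for line in proc.stdout:
            counter += 1
            if counter % 1000000 == 0:
                print(counter)
        proc.wait()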

    Read the article
