Search Results

Search found 1926 results on 78 pages for 'cookie monster'.

Page 53/78 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Allow HTTPS cookies but not HTTP?

    - by Ken
    I want to allow cookies for a domain but only over HTTPS -- not cookies from the same domain that come from HTTP. For example, I don't want any http://www.google.com cookies, but I do want to allow https://www.google.com cookies (because Calendars are there). Is there a way to do this? Does the goal even make sense? In Chrome, it only allows domain names, not URLs, to be added to the cookie exception list. In Firefox, it allows a protocol, but it only records the domain name, and if you click "Allow" or "Deny", it changes the same entry in the list.
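
    (A server-side aside, for anyone who controls the site rather than the browser: the standard way to say "this cookie may only travel over HTTPS" is the Secure attribute on the Set-Cookie header, as in the hypothetical header below. It does not help with the browser-side exception lists discussed above, which, as noted, are keyed by domain rather than by scheme.)

        Set-Cookie: SID=31d4d96e407aad42; Path=/; Secure; HttpOnly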

    Read the article

  • Date header returned by IIS7 is wrong

    - by James Hollingworth
    I am serving an ASP.NET application from IIS 7, but we are experiencing some weird cookie issues. The code works fine in other environments, so we are assuming this is specific to this server (related question). We have been looking at the HTTP headers returned, and someone pointed out that the Date HTTP header shows the 1st of Jan rather than today's date (so far it always shows that date regardless of what the current date is). The system clock is set correctly (and we can print the current time/date via DateTime.Now correctly as well), so we can't work out why it's not working. Does anyone have any ideas? Is this a red herring? Thanks, James
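
    A quick way to see exactly what the server is sending (the URL below is a placeholder) is to pull just the response headers from the command line, once on the server itself and once from a remote client:

        curl -sI http://your-server/ | grep -i '^Date:'

    The Date header is normally stamped by IIS/HTTP.sys from the system clock at response time, so if DateTime.Now is right but the header is frozen at 1 Jan, comparing the local and remote responses helps show whether IIS itself or something in between (a proxy or load balancer) is producing the stale value.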

    Read the article

  • X11 for apache user

    - by fuenfundachtzig
    We are using Inkscape to convert SVG images uploaded to our server via a web form. For this, Inkscape offers a batch mode via the -z option, but this batch mode has a flaw: when inkscape is run by the apache user, it breaks, saying:

        $ inkscape -z -W drawing.svg
        X11 connection rejected because of wrong authentication.
        The application 'inkscape' lost its connection to the display localhost:11.0;
        most likely the X server was shut down or you killed/destroyed the application.

    If you do the same as a normal user, you also get errors:

        Xlib: connection to "localhost:11.0" refused by server
        Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match
        (inkscape:24050): Gdk-CRITICAL **: gdk_display_list_devices: assertion `GDK_IS_DISPLAY (display)' failed
        301.27942

    but at least Inkscape gives the correct answer (the number is the width of the image). Does somebody know how to make this also work for the apache user? Does it make sense to authorize apache to use X (and if so, how)? In any case, it doesn't feel like the right solution...
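
    A workaround often used for exactly this situation (a sketch only, assuming the Xvfb package and its xvfb-run wrapper are installed on the server, which is an assumption) is to give the batch job its own throwaway virtual display instead of authorizing apache against a real X session:

        # hypothetical invocation from the web application, running as the apache user
        xvfb-run -a inkscape -z -W /path/to/drawing.svg

    Depending on the Inkscape version, -z/--without-gui may eventually be enough on its own; here it evidently still wants a display, which is what the virtual framebuffer provides.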

    Read the article

  • Apache ProxyPass/ProxyPassReverse to IIS

    - by Dana
    We have an ASP.NET web application which is mapped to a folder on an Apache-hosted PHP site using ProxyPass/ProxyPassReverse. A couple of problems are being encountered. Cookies are being lost, which breaks the site navigation; this can be overcome by setting the ASP.NET app as cookieless. Forms authentication is used on the ASP.NET site, and this is also broken with the ProxyPass in place; I suspect this is cookie related as well. The ASP.NET site works OK when run from its own domain/IP address. Use of a separate domain/sub-domain is not an option due to client requirements.
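
    Lost cookies behind ProxyPass are often a path/domain mismatch: the backend issues cookies for its own path and host, which the browser then refuses to send back for the proxied folder. Apache's mod_proxy has directives for rewriting exactly that; a sketch, with the folder name and backend address invented for illustration:

        ProxyPass        /app/ http://backend-iis/app/
        ProxyPassReverse /app/ http://backend-iis/app/
        ProxyPassReverseCookiePath   / /app/
        ProxyPassReverseCookieDomain backend-iis www.example.com

    Whether this is enough for Forms authentication depends on the path and domain the ASP.NET auth cookie is issued with, so it is worth inspecting the Set-Cookie headers before and after the proxy.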

    Read the article

  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it stand up against attacks better? Are there plugins? I'm looking for a way to RATE LIMIT and remain up without slowing down. My setup:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        worker_priority -2;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 40480;
        events {
            worker_connections 5120; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }
        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 9;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 194.145.208.19:80;
            server_name ipxnow.in www.ipxnow.in;
            access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/ipxnow.in combined;
            root /home/ipxnowin/public_html;
            location / {
                location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location @backend {
                internal;
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location ~ /\.ht {
                deny all;
            }
        }

    and proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
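
    nginx's own rate limiting (ngx_http_limit_req_module, plus limit_conn for concurrent connections) is the usual first line of defence here; no extra plugins are needed. A minimal sketch, with the zone names, sizes and rates chosen arbitrarily for illustration:

        # in the http {} block
        limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
        limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

        # in the server {} or location {} block
        limit_req  zone=req_per_ip burst=20 nodelay;
        limit_conn conn_per_ip 20;

    Requests over the limit get a 503 by default (newer versions can change this with limit_req_status), while legitimate traffic below the threshold is unaffected, which is the "remain up and not slow down" behaviour being asked for.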

    Read the article

  • What cookies are allowed from Internet Explorer using this option?

    - by kiamlaluno
    When I look at the cookie options in Internet Explorer, I read the following description: "Blocca i cookie di terze parti privi di una versione compatta dell'informativa sulla privacy", which translates roughly to "Blocks third-party cookies that do not have a compact version of the privacy policy". I don't get which cookies are blocked. From that description, it seems the cookies should include a compact description of the privacy policy, but I don't understand how cookies can contain that information, or what happens with cookies set by sites outside the European Union. What cookies are blocked? The screenshot was taken from Internet Explorer 9, but the same wording was used in previous versions. The settings are the ones shown on the "Privacy" tab; I don't recall whether the English version calls that tab the same thing. EDIT: English screenshot.
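
    This IE setting refers to P3P "compact policies": a site can attach a condensed, machine-readable summary of its privacy policy to the HTTP response, and IE's medium privacy levels block third-party cookies from responses that lack one. It travels as a response header, not inside the cookie itself; a hypothetical example:

        P3P: CP="NOI DSP COR NID ADMa OUR NOR STA"
        Set-Cookie: tracker=abc123; path=/

    The tokens (NOI, DSP, ...) are defined by the W3C P3P specification; roughly speaking, IE only checks that a compact policy is present and not an "unsatisfactory" combination, so the mechanism is largely self-declared and has nothing to do with whether the site is inside or outside the European Union.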

    Read the article

  • How can Facebook sessions get mixed up because of NAT and/or a proxy?

    - by Alex
    I have received some reports from a customer (a very large company) about issues from clients who are using Facebook. These clients claim that once in a while, when they log in to Facebook, they end up in someone else's session. I know that the network is NATed and then proxied before reaching Facebook.com, but I'm not able to explain how this issue can occur. Is it possible that the proxy is not sending the right session back to the clients? How can they end up with someone else's session, given that Facebook sessions are cookie-based? Has anyone seen this before?
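
    One way this can happen is a caching proxy serving a cached response that still carries another user's Set-Cookie header or personalised page. If the proxy in question were Squid, for example (an assumption; the post doesn't say which proxy is in use), a blunt way to rule that out is to stop caching Facebook entirely:

        acl facebook dstdomain .facebook.com .fbcdn.net
        cache deny facebook

    NAT by itself operates below HTTP and does not mix up cookies, so the proxy's handling of cookie-bearing responses is the usual suspect to check first.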

    Read the article

  • How does this main domain have a CNAME record?

    - by TRiG
    I was under the impression that only subdomains could have CNAME records: main domains need to define all their own records. However, apt-get.com seems to have only a CNAME record. How can this work?

        $ dig apt-get.com

        ; <<>> DiG 9.8.1-P1 <<>> apt-get.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45743
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;apt-get.com.                   IN      A

        ;; ANSWER SECTION:
        apt-get.com.               86336  IN  CNAME  thie5ku9.dsgeneration.com.
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.242
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.246
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.166
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.232
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.161
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.210.233
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.186
        thie5ku9.dsgeneration.com.    60  IN  A      208.73.211.188

        ;; Query time: 59 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Tue Jun 10 15:05:48 2014
        ;; MSG SIZE rcvd: 193

        $ dig apt-get.com ns

        ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43831
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;apt-get.com.                   IN      NS

        ;; Query time: 26 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Tue Jun 10 15:12:37 2014
        ;; MSG SIZE rcvd: 29

        $ dig apt-get.com ns @b.gtld-servers.net

        ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns @b.gtld-servers.net
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38228
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;apt-get.com.                   IN      NS

        ;; AUTHORITY SECTION:
        apt-get.com.             172800  IN  NS  ns1.domainrecover.com.
        apt-get.com.             172800  IN  NS  ns2.domainrecover.com.

        ;; ADDITIONAL SECTION:
        ns1.domainrecover.com.   172800  IN  A   66.45.232.66
        ns2.domainrecover.com.   172800  IN  A   65.23.159.179

        ;; Query time: 70 msec
        ;; SERVER: 192.33.14.30#53(192.33.14.30)
        ;; WHEN: Tue Jun 10 15:07:05 2014
        ;; MSG SIZE rcvd: 111

    The domain does resolve. I get the following headers:

        GET / HTTP/1.1
        User-Agent: Testing_Sniffer/4.15
        Host: apt-get.com
        Accept: */*

        HTTP/1.0 200 (OK)
        Cache-Control: private, no-cache, must-revalidate
        Connection: Keep-Alive
        Pragma: no-cache
        Server: Oversee Turing v1.0.0
        Content-Length: 1347
        Content-Type: text/html
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Keep-Alive: timeout=3, max=96
        P3P: policyref="http://www.dsparking.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA"
        Set-Cookie: parkinglot=1; domain=.apt-get.com; path=/; expires=Wed, 11-Jun-2014 14:10:37 GMT

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
        <!-- turing_cluster_prod -->
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>apt-get.com</title>
        <meta name="keywords" content="apt-get.com" />
        <meta name="description" content="apt-get.com" />
        <meta name="robots" content="index, follow" />
        <meta name="revisit-after" content="10" />
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />
        <script type="text/javascript"> document.cookie = "jsc=1"; </script>
        </head>
        <frameset rows="100%,*" frameborder="no" border="0" framespacing="0">
        <frame src="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A" name="apt-get.com">
        </frameset>
        <noframes>
        <body><a href="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A">Click here to go to apt-get.com</a>.</body>
        </noframes>
        </html>
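
    For context, the DNS standards (RFC 1034, clarified by RFC 2181) forbid a CNAME at a name that owns other records, and a zone apex always owns SOA and NS records, so a zone like this is technically invalid even though many resolvers will happily chase the CNAME, as the dig output shows. The conventional layout looks roughly like this (example values only):

        ; what the standard expects at a zone apex
        example.com.      IN  SOA  ns1.example.com. hostmaster.example.com. (
                              2014061001 7200 900 1209600 300 )
        example.com.      IN  NS    ns1.example.com.
        example.com.      IN  A     203.0.113.10
        www.example.com.  IN  CNAME example.com.

    Some managed DNS providers offer ALIAS/ANAME-style records that simulate a CNAME at the apex by answering with A records, which is the standards-friendly way to get the same effect.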

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my SERVER'S IP instead of the client's. Is this an nginx problem? /etc/nginx.conf:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;
        events {
            worker_connections 5120; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }
        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 63.6.1.12:80;
            server_name photo-rolldomain.com www.domain.com;
            access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/domain.com combined;
            root /home/mtech/public_html;
            location / {
                location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location @backend {
                internal;
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location ~ /\.ht {
                deny all;
            }
        }
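
    nginx is already forwarding the client address in X-Real-IP and X-Forwarded-For (see proxy.inc), so the missing piece is usually on the Apache side: Apache has to be told to trust those headers. On Apache 2.4 that is mod_remoteip; on 2.2 the third-party mod_rpaf does the same job. A sketch for 2.4, with the proxy address taken from the vhost above:

        # httpd.conf (Apache 2.4, mod_remoteip enabled)
        RemoteIPHeader X-Forwarded-For
        RemoteIPInternalProxy 63.6.1.12

    Without something like this, Apache's logs and REMOTE_ADDR will always show the nginx server's own IP, because that genuinely is the address the TCP connection comes from.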

    Read the article

  • Forms Authentication across Sub-Domains on local IIS

    - by Parminder
    I asked this question at SO (http://stackoverflow.com/questions/8278015/forms-nauthentication-across-sub-domains-on-local-iis) and am now asking it here. I know a cookie can be shared across multiple subdomains using the setting <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation" timeout="120" path="/" domain=".mydomain.com"/> in Web.config. But how do I replicate the same thing on a local machine? I am using Windows 7 and IIS 7 on my laptop, with sites such as localhost.users/ for my actual site users.mysite.com, localhost.host/ for host.mysite.com, and so on.
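
    One way to mirror the production setup locally (the host names below are invented for illustration) is to give the local sites real sub-domains of a common parent via the hosts file, bind the IIS sites to those names, and point the forms domain at the shared parent:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    users.mysite.local
        127.0.0.1    host.mysite.local

        <!-- Web.config on both sites -->
        <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation"
               timeout="120" path="/" domain=".mysite.local" />

    The auth cookie's domain attribute has to be a parent of the host name the browser sees, so names like localhost.users and localhost.host, which share no common parent, can never share the cookie. Both sites also need matching machineKey settings so that each can decrypt the other's forms ticket.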

    Read the article

  • How can I delete current session in Chrome?

    - by Eric
    I'm using Google Chrome and want to delete the current session data on the fly. I can do this on Firefox with the web developer extension, but Chrome doesn't seem to have the same option in their webdev extension. So how can I do this? I realize that session data is stored on the server side and tracked in the browser with cookies. So really, I think what I want to do is delete cookies that are set to live for the session lifetime. Is there a way to do THAT in Chrome? "Delete browsing data" lets me delete all cookies from within a certain time period (for example, the last hour), but that could delete OTHER cookies on the site that I don't want to erase. I just want to delete the cookie being used to track my current session. Thanks y'all...

    Read the article

  • Files downloaded via Firefox and curl have different sizes

    - by Arash Mousavi
    When I download a file from this link with Firefox, its size is 74580 B, but when I download it with curl using exactly the same headers Firefox sent, its size is 79891 B (I copied all the headers from Firefox and pasted them into the curl command). What is the problem? If you need any additional data, ask me in a comment. My curl command:

        curl --header 'Host: members.tsetmc.com' --header 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0' --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' --header 'Accept-Language: en-US,en;q=0.5' --header 'Referer: http://www.tsetmc.com/Loader.aspx?ParTree=15131F' --header 'Cookie: ASP.NET_SessionId=pwzbckbdpjlzqj45vcdbd455' --header 'Connection: keep-alive' 'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0' -o 'MarketWatchPlus-1393_3_14.xlsx' -L

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working), and I'd like to use it to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone makes a POST to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security; I will only need to use this for a few hours to debug, and I have root on the machine. I.e. I'm hoping there's something as easy to use as this:

        simple_http_server -p 12445 -f my_test_file

    I'm aware of Python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
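
    A self-contained sketch of exactly that, written for Python 3's http.server (the standard-library successor to the SimpleHTTPServer module mentioned above); the port and file name come from the command line, and text/xml is assumed because an XML-RPC reply is being replayed:

        #!/usr/bin/env python3
        # serve_one_file.py -- answer every request (GET or POST, any path) with the same file
        # usage: python3 serve_one_file.py 12445 my_test_file
        import sys
        from http.server import BaseHTTPRequestHandler, HTTPServer

        PORT = int(sys.argv[1])
        BODY = open(sys.argv[2], "rb").read()

        class OneFileHandler(BaseHTTPRequestHandler):
            def _reply(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/xml")
                self.send_header("Content-Length", str(len(BODY)))
                self.end_headers()
                self.wfile.write(BODY)

            # ignore the path and any cookies: every GET gets the same body
            do_GET = _reply

            def do_POST(self):
                # drain the request body so keep-alive clients don't stall, then reply
                length = int(self.headers.get("Content-Length", 0))
                if length:
                    self.rfile.read(length)
                self._reply()

        HTTPServer(("", PORT), OneFileHandler).serve_forever()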

    Read the article

  • Google Chrome doesn't stay logged in to Google sites when using pinned tabs

    - by Nick T
    Despite checking "stay logged in" or the like on Gmail or Docs, Chrome refuses to do so when I close and re-open it with Google sites pinned. If they're not pinned, it works fine. The "Clear cookies and other site and plug-in data when I close my browser" checkbox in the settings is not checked, and I don't have any cookie exceptions. All settings are defaults, and incognito mode is not being used. This occurs on all my computers using Chrome. I have deleted my cookies file (%userprofile%\AppData\Local\Google\Chrome\User Data\Default\Cookies) with no effect (other than losing the logins that ordinarily work fine). Of note: when I relaunch Chrome with Gmail pinned and it asks me to log in, doing so once fails (nothing happens; no errors), but it works on the second attempt. If I refresh the window before logging in, it works on the first attempt.

    Read the article

  • Sticky Load Balancing with AWS

    - by John Wheal
    I have just set up a load balancer with AWS for a few instances, as search engine crawlers were bringing down the site (it has millions of pages). Parts of the site allow you to log in, so I selected Enable Application Generated Cookie Stickiness, and everything works fine. I now wonder how this will affect my SEO and the crawlers. Since I selected sticky load balancing, does this mean that a crawler will be stuck on one server and therefore defeat the point of the load balancer? Any recommendations will be appreciated.

    Read the article

  • Remote X-windows between new RHEL5 and old Solaris 8

    - by joshxdr
    I have a very small lab network with three boxes: a modern x86-based RHEL3 box, an x86-based RHEL5 box, and a 1998-vintage SPARC Ultra5 with Solaris 8. I can use ssh -X to run a program on the RHEL5 box and view the windows on the RHEL3 box; I believe this uses xauth and magic cookies? I have followed the X-Windows HOWTO to set up xauth on the Solaris box, but so far no dice. I would like to be able to use the X-Windows server on the RHEL3 box with a client program on the Solaris box (program running on the Solaris host, windows appearing on the Linux host). Is there a trick to this, or have I made a mistake following the instructions for setting up xauth and the magic cookie?
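
    If the Solaris 8 box has an sshd with X11 forwarding enabled (an assumption; a box that old may be running a very early OpenSSH, or none at all), the simplest route avoids hand-managing magic cookies entirely: ssh from the RHEL3 box to the Solaris box with forwarding, and let ssh set DISPLAY and the cookie for you:

        # run on the RHEL3 box (the machine whose screen should show the windows)
        ssh -X user@ultra5
        # then, in that session on the Solaris box:
        /usr/openwin/bin/xclock   # test client; should appear on the RHEL3 display

    If ssh forwarding isn't available, the manual equivalent is to copy the cookie with xauth (xauth extract on the Linux box, xauth merge on Solaris), set DISPLAY=rhel3box:0 on Solaris, and make sure the Linux X server accepts TCP connections, which modern X servers often disable with -nolisten tcp.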

    Read the article

  • Creating a separate static content site for IIS7 and MVC

    - by JK01
    With reference to the Server Fault blog post "A Few Speed Improvements", which talks about how static content for Stack Exchange is served from a separate cookieless domain... how would someone go about doing this on IIS 7.5 for an ASP.NET MVC site? The plan so far:

        1. Register a domain, e.g. static.com, and create a new website in IIS
        2. Manually copy the js / css / images folders from the MVC site as-is so that they have the same paths on the new server
        3. Enable IIS gzip settings (js/css = high compression, images = none)
        4. Set caching with far-future expiry dates: <clientCache cacheControlCustom="public" /> in the web.config
        5. Never set any cookies on the static.com site
        6. Combine and minify js / css
        7. Auto-deploy changes in static content with WebDeploy

    Is this plan correct? And how can you use WebDeploy to deploy the whole web app to one server and then only the static items to another? I can see there is a similar question, but for Apache (Creating a cookie-free domain to serve static content), so it doesn't apply.
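
    For the WebDeploy part, one approach (a sketch only; the site names, paths and server names are placeholders, and the exact flags may need adjusting for your WebDeploy version) is a normal full sync to the app server plus a contentPath sync limited to the static folders for the static server:

        rem full application to the app server
        msdeploy -verb:sync -source:contentPath="D:\build\MySite" ^
                 -dest:contentPath="D:\inetpub\MySite",computerName=appserver

        rem only the static folders to the static.com server
        msdeploy -verb:sync -source:contentPath="D:\build\MySite\Content" ^
                 -dest:contentPath="D:\inetpub\static\Content",computerName=staticserver

    Skip rules (-skip:objectName=...) can achieve the same split from a single package if a single deployment artifact is preferred.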

    Read the article

  • Session cache and tmp folder error on cPanel CentOS

    - by Danialzo
    One of my clients has come across multiple breakdowns in their websites with the following errors:

        Warning: session_start() [function.session-start]: open(/tmp/sess_1d6616afe1b8a0d91a8d9ec29254b453, O_RDWR) failed: No space left on device (28) in /home/***/public_html/system/library/session.php on line 11
        Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
        Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/index.php on line 177
        Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/currency.php on line 45
        Notice: Error: Can't create/write to file '/tmp/#sql_4c5_0.MYI' (Errcode: 28) Error No: 1

    They are experiencing this problem while:

        - Nothing has been recently changed on the server
        - tmp and other folders are occupied with not more than 10% of their total space
        - The error comes and goes

    I would really appreciate it if anyone could guide me through this. Thanks in advance
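
    "No space left on device" with apparently plenty of free space usually points at something other than bytes: either a separate small /tmp partition (on cPanel, often the loopback file /usr/tmpDSK mounted on /tmp) filling up intermittently, or the filesystem running out of inodes. Both are quick to check:

        df -h /tmp        # free space on whatever filesystem /tmp actually lives on
        df -i /tmp        # free inodes; 100% IUse% produces exactly this error
        ls /tmp | wc -l   # huge numbers of stale sess_* files are a common culprit

    If stale PHP session files turn out to be the cause, tightening session.gc_maxlifetime or the cPanel session/tmp cleanup cron is the usual follow-up.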

    Read the article

  • Blocking facebook's Like button in firefox

    - by Quiark
    Many sites today use widgets from facebook such as the Like button, list of friends who are fans of that site and so on. While it may be a nice feature, I perceive it to be a serious privacy intrusion, because facebook most likely stores information about which sites you visit. I also heard that when you are not logged into facebook, it still tracks the sites you visit (probably with a cookie) and once you log in attaches the data to your real account. For now, I want to keep using facebook, but I would like to block just these widgets so it can't track me. Is there any Firefox extension which could do that?
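
    One low-tech way to stop most of these widgets from loading at all (independent of any Firefox extension) is to null-route the domain that serves the Facebook JavaScript SDK in the hosts file. This is a blunt sketch: it applies system-wide rather than per-browser, and it may not catch iframe-based buttons served directly from facebook.com:

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
        127.0.0.1  connect.facebook.net
        127.0.0.1  static.ak.fbcdn.net

    A more selective option is a content-blocking extension with a third-party filter rule, which can block the widgets only when they are embedded in other sites while leaving facebook.com itself fully usable.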

    Read the article

  • Configure session length with htaccess

    - by brianpartridge
    My home web server is running the stock OSX Apache 2 install. I have some directories with content that I want to secure, so I setup htaccess files for those areas. However, I find it annoying to have to login to those areas as frequently as I do. Once I'm logged in I'd like to not have to login again for a long time, similar to setting a long time in a cookie. But, I'd like to increase the life time of the authenticated session with htaccess. I've googled but haven't found what I'm looking for, maybe because I'm looking for the wrong term. I want to configure the 'session length', 'session timeout', 'time limit', or 'expiration' for users authenticated via htaccess. Any thoughts?
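
    Part of why this is hard to search for is that htaccess-based Basic authentication has no session at all: the browser simply caches the credentials and resends them with every request, typically until the browser is closed, so there is no server-side timeout to lengthen. Getting cookie-like "stay logged in for a long time" behaviour means switching to form/session-based auth. Apache 2.4 has this in the stock modules (mod_session, mod_auth_form); a rough sketch, assuming those modules are enabled and with placeholder paths:

        Session On
        SessionCookieName httpd_session path=/
        # 30 days, in seconds
        SessionMaxAge 2592000

        <Directory "/Library/WebServer/Documents/protected">
            AuthType form
            AuthName "protected"
            AuthFormProvider file
            AuthUserFile "/Library/WebServer/passwords/.htpasswd"
            Require valid-user
            ErrorDocument 401 /login.html
        </Directory>

    The Apache that shipped with OS X at the time was 2.2, which lacks these modules, so this only applies if a newer Apache (or handling login in the application itself) is an option.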

    Read the article

  • Changing location in Google Chrome when searching

    - by Alex
    I've recently moved to the Czech Republic from Scotland and I can't find a way to permanently stop Google from automatically defaulting back to google.cz all the time. I've checked to ensure that all my google accounts and cookie based settings (e.g. Advanced Search Options) are set to English but it's still clearly doing an IP address lookup and disregarding everything else. The default Search Engine for Google Chrome (and switches to google.cz automatically): {google:baseURL}search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s I've tried hardcoding it to: http://www.google.com/search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s this kind of works, but won't work for inline searching, i.e. I always have to press enter in order to get any results which is a bit annoying as I've gotten so used to AJAX style searching I can't have been the only one to get this issue? Any help is appreciated
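
    Two things that may help, hedged because Google's redirect behaviour changes over time: visiting google.com/ncr ("no country redirect") sets a preference cookie that stops the IP-based bounce to google.cz, and the custom search-engine URL can pin the interface language explicitly with the hl parameter, e.g.:

        https://www.google.com/search?hl=en&q=%s

    The loss of as-you-type results is expected with any hard-coded URL, since the inline/Instant behaviour in the omnibox is tied to the built-in {google:baseURL} default engine rather than to a fixed search URL.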

    Read the article

  • IPv6 scope id issue with IE

    - by eych
    I have an ASP.NET website that works with Firefox because FF doesn't need the % in the scope-id to be encoded (%25). The website also works on the same machine using IE because I can leave out the scope-id. However, to access the website from another machine in the network, I need to add the scope-id to the IPv6 address. For some reason, using the scope-id doesn't allow an authentication cookie to be created, and the website keeps going back to the login page. Anyone using IE7+ to access an ASP.NET website on a network using IPv6 with an encoded %?
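
    A workaround worth trying (an assumption, not something verified with this exact setup): cookies are scoped to a host name or domain, and IE is historically unhappy issuing them for raw IPv6 literals, especially ones carrying a zone ID, so giving the server a name avoids the literal entirely. For a global (non-link-local) address this can be as simple as a hosts-file entry on the client machines, with a made-up name and documentation address shown here:

        # %SystemRoot%\System32\drivers\etc\hosts
        2001:db8::10    myaspnetsite.local

    and browsing to http://myaspnetsite.local/ instead. Link-local addresses are awkward here because the hosts file has no way to express the scope ID, in which case a routable (ULA or global) address or a real DNS entry on the network is the cleaner fix.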

    Read the article

  • nginx rewrite base url

    - by ptn777
    I would like the root url http://www.example.com to redirect to http://www.example.com/something/else This is because some weird WP plugin always sets a cookie on the base url, which doesn't let me cache it. I tried this directive: location / { rewrite ^ /something/else break; } But 1) there is no redirect and 2) pages start shooting more than 1,000 requests to my server. With this one: location / { rewrite ^ http://www.example.com/something/else break; } Chrome reports a redirect loop. What's the correct regexp to use?
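
    If the goal is an actual 301/302 for the bare root (and only the root), an exact-match location with return avoids both problems: rewrite ... break rewrites internally without issuing a redirect, and a prefix location / matches every URI, which is where the request storm and the loop come from. A minimal sketch:

        location = / {
            return 301 /something/else;
        }

    return with a relative URI redirects against the request's own host, and the exact match means /something/else itself is handled by other locations, so it won't loop. The equivalent with rewrite is to anchor the pattern: rewrite ^/$ /something/else permanent;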

    Read the article

  • Is it possible with Google searches to ban any and all results from a domain? [closed]

    - by Stu Thompson
    Is it possible to configure Google somehow to permanently ban search results from domains that I know 100% are never, ever going to make me happy? Something cookie/session based, maybe? E.g. I want to ban (permanently, forever and always) results from experts-exchange.com. Every time I click results that take me to their page I just want to scream. Update! Google has released a Chrome extension that allows users to block individual sites from Google search results: Personal Blocklist (by Google). (Since this question has been closed, I cannot answer it.)

    Read the article
