Search Results

Search found 50527 results on 2022 pages for 'http expires'.

Page 94 of 2022

  • Squid throws error, The requested URL could not be retrieved

    - by Supratik
    Sometimes I get the following error: The requested URL could not be retrieved. While trying to retrieve the URL: http://groups.google.com/ The following error was encountered: Unable to determine IP address from host name for groups.google.com The dnsserver returned: Refused: The name server refuses to perform the specified operation. This means that: The cache was not able to resolve the hostname presented in the URL. Check if the address is correct. Your cache administrator is root. What could be the reason for this error? Regards, Supratik
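
    The "Refused" status comes from the resolver Squid queried, not from the remote site. One quick check is to point Squid at resolvers that are known to answer recursive queries for the proxy; a minimal sketch of the relevant squid.conf directive, assuming Google's public resolvers are reachable from that box:

        # squid.conf - query these resolvers instead of the ones in /etc/resolv.conf
        dns_nameservers 8.8.8.8 8.8.4.4

    After reloading Squid, retest; if the error persists, the resolvers themselves (or an upstream ACL) are refusing recursion for the proxy's address.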

    Read the article

  • Animated HTTP request visualisation on Apache

    - by Simon Bennett
    This is more a question to jog my memory about something I saw a while ago. I remember being introduced to a realtime server visualisation tool that showed the current requests Apache was handling in a kind of fireworks effect on screen. Each request or group of requests would be shot across the screen in varying colours. I can't for the life of me remember what it was called, and hunting around here and Google has left me empty-handed. Just wondering if anybody else can pluck this gem from their memory and ease my pain! Thanks

    Read the article

  • Possible to find what IP address caused a 403.6 error?

    - by Abe Miessler
    I have a website set up in IIS that rejects all IP addresses except for a specific set. I thought I had added my machine's IP address to this list, but every time I try to access the site I get the error: HTTP Error 403.6 - Forbidden: IP address of the client has been rejected. Internet Information Services (IIS) I would like to figure out what IP address it thinks I am coming from when it rejects me. I tried looking at the IIS logs and in the Security Event Logs but I'm not seeing anything. Suggestions on where to look?

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible: Set up a wildcard cert for *.domain.com. Whenever I need an image, generate a number based on a hash mod 5 of the filename and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any filename always use the same subdomain, so the browser should be able to cache the images. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img. I am looking for feedback about this plan. My concerns are: Will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtualhost record I'm talking about even possible? Considering the amount of processing this would require, is it likely to produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half changing on each page refresh. Thanks in advance for your feedback.
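
    A minimal sketch of the hash-mod-5 scheme described above, in Python (the domain and helper name are placeholders; any stable hash works, since the goal is only that a given filename always maps to the same subdomain):

        import hashlib

        def image_url(filename):
            # Map the filename to one of five img subdomains (img0..img4).
            # The mapping is deterministic, so repeat views hit the same
            # hostname and the browser cache stays warm.
            bucket = int(hashlib.md5(filename.encode()).hexdigest(), 16) % 5
            return "https://img%d.domain.com/%s" % (bucket, filename)

    The wildcard cert covers every img# name, so no extra certificates are needed, and the per-request cost is one hash of a short string, which is negligible next to the image transfer itself.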

    Read the article

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel and 5 of my customers got infected with zbot. This means that their domains are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems it can't filter every request, and I don't really know why. I still see the connections coming in in the access log, and they are consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now return a 404 error (for the ones caught by mod_security I am dropping the connection). Are there any more ways to defend against these requests?
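
    One layer sometimes added below mod_security (not specific to zbot) is per-source rate limiting at the firewall, so the flood never reaches Apache at all. A sketch using the iptables recent module; the 20-hits-in-10-seconds threshold is only a placeholder to tune against real traffic:

        # Track new connections to port 80 per source IP and drop sources
        # that open more than 20 new connections within 10 seconds.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTPFLOOD
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 10 --hitcount 20 --name HTTPFLOOD -j DROP

    This only helps when the requests come from a modest number of IPs; a widely distributed botnet usually has to be absorbed upstream or at a CDN.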

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method that provides registration to the system; obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation: have a dummy username/password to be used when registering to the system, or have a separate Django/XML-RPC application on another URL (i.e. /register) that is not protected by basic auth. Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?
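
    Because Apache-level basic auth cannot see which XML-RPC method a POST to /RPC2 carries, one possible direction (a sketch only, not the approach taken in the linked article) is to drop the Apache-side Require and enforce the check inside Django, where the method name is available in the request body and the registration call can simply be exempted. Names like OPEN_METHODS and 'register' are hypothetical; it assumes the XML-RPC dispatcher is an ordinary Django view:

        import base64
        import xmlrpc.client   # xmlrpclib on Python 2
        from django.contrib.auth import authenticate
        from django.http import HttpResponse

        OPEN_METHODS = {'register'}   # hypothetical name of the unauthenticated call

        def xmlrpc_basic_auth(view):
            # Enforce HTTP basic auth for every XML-RPC call except the ones
            # listed in OPEN_METHODS. The method name is read from the XML-RPC
            # request body, which Apache-level basic auth cannot do.
            def wrapper(request, *args, **kwargs):
                _, method = xmlrpc.client.loads(request.body)
                if method in OPEN_METHODS:
                    return view(request, *args, **kwargs)
                header = request.META.get('HTTP_AUTHORIZATION', '')
                if header.startswith('Basic '):
                    decoded = base64.b64decode(header[6:]).decode('utf-8')
                    username, _, password = decoded.partition(':')
                    if authenticate(username=username, password=password):
                        return view(request, *args, **kwargs)
                response = HttpResponse('Unauthorized', status=401)
                response['WWW-Authenticate'] = 'Basic realm="Login Required"'
                return response
            return wrapper

    Apache then only terminates TLS; whether a given call needs credentials is decided next to the code that implements it.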

    Read the article

  • Nginx - basic http authentication on PHP-script

    - by half_bit
    I added a PHP script that serves as a "cgi-bin". Configuration:

        location ~^/cgi-bin/.*\.(cgi|pl|py|rb) {
            gzip off;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index cgi-bin.php;
            fastcgi_param SCRIPT_FILENAME /etc/nginx/cgi-bin.php;
            fastcgi_param SCRIPT_NAME /cgi-bin/cgi-bin.php;
            fastcgi_param X_SCRIPT_FILENAME /usr/lib/$fastcgi_script_name;
            fastcgi_param X_SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param GATEWAY_INTERFACE CGI/1.1;
            fastcgi_param SERVER_SOFTWARE nginx;
            fastcgi_param REQUEST_URI $request_uri;
            fastcgi_param DOCUMENT_URI $document_uri;
            fastcgi_param DOCUMENT_ROOT $document_root;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param REMOTE_USER $remote_user;
        }

    PHP script:

        <?php
        $descriptorspec = array(
            0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
            1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
            2 => array("pipe", "w")   // stderr is a file to write to
        );
        $newenv = $_SERVER;
        $newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"];
        $newenv["SCRIPT_NAME"] = $_SERVER["X_SCRIPT_NAME"];
        if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) {
            $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv);
            if (is_resource($process)) {
                fclose($pipes[0]);
                $head = fgets($pipes[1]);
                while (strcmp($head, "\n")) {
                    header($head);
                    $head = fgets($pipes[1]);
                }
                fpassthru($pipes[1]);
                fclose($pipes[1]);
                fclose($pipes[2]);
                $return_value = proc_close($process);
            } else {
                header("Status: 500 Internal Server Error");
                echo("Internal Server Error");
            }
        } else {
            header("Status: 404 Page Not Found");
            echo("Page Not Found");
        }
        ?>

    The problem with it, though, is that I cannot add basic authentication. As soon as I enable it for location ~/cgi-bin, it gives me a 404 error when I try to look it up. How can I solve this? I thought about restricting access to only my second server, where I would then add basic authentication over a proxy, but there must be a simpler solution. Sorry for the bad title, I couldn't think of a better one.
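
    A likely cause of the 404 (an assumption, since the exact change isn't shown above): adding the auth directives in a second location ~ /cgi-bin block means that block, which has no fastcgi configuration, can win the regex match, so nginx looks for the literal file on disk instead of handing the request to PHP. Placing the auth directives inside the existing location avoids that; a sketch, with the htpasswd path as a placeholder:

        location ~^/cgi-bin/.*\.(cgi|pl|py|rb) {
            auth_basic           "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
            gzip off;
            fastcgi_pass 127.0.0.1:9000;
            # ... the fastcgi_index and fastcgi_param lines from the block above ...
        }

    The REMOTE_USER fastcgi_param already passes $remote_user, so the CGI child will see the authenticated username once basic auth is active.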

    Read the article

  • HTTP Proxypass of subdomain

    - by enedebe
    I'm trying to set up a proxy on my gateway so that everything that comes in for a subdomain, for example sub.mydomain.com, goes to an inside server on port 3000. I'm installing a Redmine server inside my network that has to be reachable from outside. Any idea how to do that? I was thinking of httpd's ProxyPass, but I don't know how to match just that subdomain and proxy it. My gateway is currently a ClearOS machine. Thanks
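
    A minimal sketch of the ProxyPass idea in Apache httpd, using a name-based virtual host so that only requests whose Host header is sub.mydomain.com are forwarded (the internal address is a placeholder, and mod_proxy plus mod_proxy_http must be enabled):

        <VirtualHost *:80>
            ServerName sub.mydomain.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.50:3000/
            ProxyPassReverse / http://192.168.1.50:3000/
        </VirtualHost>

    Other virtual hosts on the gateway keep serving their own names; the subdomain match comes from ServerName, not from ProxyPass itself.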

    Read the article

  • BIG IP - HTTPS Health Monitor setup

    - by djo
    I have a web site for which we have set up health monitoring pages so we can take our servers in and out of the BIG-IP as we see fit. We have just moved onto BIG-IP, and the issue I have hit is that you set up health monitors for ports 80 and 443; the port 80 check works fine, but when I try to get the 443 check to look at our file it fails. I am aware that hitting this page on the IP address over HTTPS is going to cause a certificate error, but I would have guessed that BIG-IP could be set up just to accept the cert and carry on with the check. Is what I want to do possible? Also, is there a way of just using an HTTP monitor for HTTPS? Because if port 80 has stopped serving traffic and I use the same monitor for 443, it will stop traffic to that too. Any help would be great! Thanks

    Read the article

  • How can I reroute a sub-domain to localhost + port number?

    - by urig
    I have several web applications running on my developer machine. They mimic our production web applications, which are hosted on sub-domains. For example, consider: api.myserver.com - is mimicked by 127.0.0.1:8000 www.myserver.com - is mimicked by 127.0.0.1:8008 and so on... How can I make it so that, on my Windows 7 machine, HTTP calls to "api.myserver.com" (note the lack of port number) are redirected to 127.0.0.1:8000 etc.? Note that this needs to apply both to client-side calls (in the browser) and server-side calls (from IIS to the Python development server and vice versa). Do I need to run a proxy locally to achieve this? Can you recommend such a tool?
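
    The hosts file alone cannot map a name to a port, so something does have to listen on port 80 and forward by host name. A sketch of one way to do it (nginx chosen only for brevity; Fiddler or IIS with ARR can play the same role on Windows), with hosts entries pointing both names at the loopback address:

        # C:\Windows\System32\drivers\etc\hosts
        # 127.0.0.1  api.myserver.com
        # 127.0.0.1  www.myserver.com

        server {
            listen 80;
            server_name api.myserver.com;
            location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name www.myserver.com;
            location / { proxy_pass http://127.0.0.1:8008; proxy_set_header Host $host; }
        }

    This assumes nothing else (such as IIS itself) is already bound to port 80 on all addresses; if IIS is, using IIS with ARR as the port-80 router is the alternative.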

    Read the article

  • HTTPS vs. VPN for communication between business partners?

    - by Andrew H
    A business partner has asked to set up a site-to-site VPN just so that a few servers can communicate with each other over HTTPS. I'm convinced this isn't necessary, or even desirable. To be fair it must be part of a wider policy, potentially even a legal requirement. However I'd like to convince them to simply offer an IP to us (and us only) and a port of their choosing for HTTPS. Has anyone had a similar experience, or had to come up with a cast-iron argument against a VPN? Allow me to expand a little - we have a web service that initiates a connection to the partner's corresponding service using an encrypted HTTP connection. The connection uses a client certificate to authenticate. The connection is firewalled so only our IPs can contact the service. So why is a VPN necessary?

    Read the article

  • Proxy service like Apache httpd

    - by Aptos
    Currently I am trying to simulate my app as distributed servers, so I run them on localhost:9000 and localhost:9001. I tried using the Apache load balancer, but it is really hard to configure on a Mac. My idea is that the second server, localhost:9001, will be kept idle and requests only redirected to it when the first server is down. Is there any good free program that can do that (other than Apache httpd)? An extra requirement: my application is written in Java and maintains an in-memory object; is there any service that can synchronize that object between the 2 servers so each keeps an up-to-date view of the other's state (the second one takes the state of the first one)? Is there any app that can support that? Thank you very much.
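
    One free option for the active/passive part is nginx's upstream module, where a server marked backup only receives traffic while the primary is unreachable. A sketch using the ports from the question (the listen port is arbitrary):

        upstream app {
            server 127.0.0.1:9000;
            server 127.0.0.1:9001 backup;   # used only while :9000 is unreachable
        }
        server {
            listen 8080;
            location / {
                proxy_pass http://app;
            }
        }

    State synchronization between the two JVMs is a separate problem; it is usually solved in the application layer (for example with a replicated in-memory cache) rather than by the proxy.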

    Read the article

  • Domain redirection to port on Windows Server 2008

    - by Rauffle
    I have a Windows server running IIS. I wish to run a piece of software that hosts a web interface on a non-standard HTTP port (let's say, port 9999). I have static DNS entries on my router for two FQDNs, both of which point to the Windows server. I would like requests for 'website1' to continue to go to the IIS website on port 80, but requests for 'website2' to instead go to port 9999 to be handled by the other application. How can I accomplish this? Right now I can get to the application by going to 'website1:9999' or 'website2:9999'.
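
    One way to do this on IIS itself (a sketch, assuming the URL Rewrite module and Application Request Routing are installed and ARR's proxy mode is enabled) is a rule on the port-80 site that matches the 'website2' host header and proxies the request to port 9999. The rule name and host pattern are placeholders:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="website2 to port 9999" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^website2" />
                </conditions>
                <action type="Rewrite" url="http://localhost:9999/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Requests for 'website1' never match the condition, so they keep hitting the normal IIS site on port 80.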

    Read the article

  • Setting up a simple proxy

    - by waiwai933
    I'm going to China for a week, and I'd prefer to be able to watch YouTube while I'm there. Since it's blocked, I presume I'm going to need a proxy. I have a Mac and a Linux box at home that I can use, but I'm not sure how complicated setting up a proxy is. From what I understand, I should be able to do it with a browser that supports HTTP 1.1 CONNECT if I connect to my machine at home. Can I do this, and if so, what browser can I use, or if I have misunderstood something, do I have any other simple solutions?
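
    If running an HTTP proxy turns out to be fiddly, an SSH dynamic tunnel to either home machine gives the same result with nothing extra to install beyond sshd: the browser just needs SOCKS support, which Firefox, Chrome and Safari all have. A sketch, with the hostname and port as placeholders:

        # Run from the laptop abroad; opens a local SOCKS proxy on port 1080
        # that tunnels browser traffic through the home machine.
        ssh -D 1080 -N user@home.example.org

    Then point the browser's SOCKS proxy setting at localhost:1080 (in Firefox, also enable remote DNS so lookups go through the tunnel too).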

    Read the article

  • Can Squid 2.7 proxy gzipped content

    - by Tom Styles
    We have a forward proxy for our network which is Squid 2.7. This is managed for us by a third party. We noticed recently that http requests going from our network to the web were having the Accept-Encoding header removed. This was resulting in all web traffic across our network (approx 8000+ PCs) being uncompressed even though the browsers and server on each end were capable. We have asked the third party to look into this and they have said it is because Squid 2.7 does not support compression. I understand this to be true but I was under the impression that the compression happened on the webserver rather than the proxy. So... Can Squid 2.7 proxy and/or cache content that is gzipped? If it can, how/why might it be configured such that the Accept-Encoding header is being removed?
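
    Compression is indeed negotiated end-to-end: the browser sends Accept-Encoding, the web server responds with gzipped content, and the proxy only needs to pass the header through (and, for caching, honour Vary: Accept-Encoding). A quick way to verify whether the proxy is stripping the header, with the proxy address as a placeholder:

        # Direct request vs. request through the proxy; compare the response headers
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://www.example.com/
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' -x proxy.internal:3128 http://www.example.com/

    If Content-Encoding: gzip appears only on the direct request, it is the proxy configuration removing the request header, not an inability of Squid 2.7 to relay gzipped responses.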

    Read the article

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image I can successfully use NAT for connecting to the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses resolve correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly, but within a browser I cannot reach any page via hostname or IP. I feel almost like port 80 is being blocked, but I can't find any reason that would be the case. If anyone has had this occur before, I would love some insight into the problem. I initially asked this on Stack Overflow, and my eyes are now opened up to Super User. Thank you for any help you can provide.

    Read the article

  • Random timeout now and then

    - by KenavR
    Maybe this is too generic a question, but since we have had this issue for quite a while now, I'll give it a shot. We have some applications which use HTTP for the connection between the client (website or fat client) and the server. The computers that run these applications are on a network behind a firewall and a proxy; the server isn't inside the same network. The problem is that every now and then the HTTPS request times out, and depending on the client the application "hangs" or does some other funky stuff. The problem is definitely inside our network, because if I try the applications outside our network they work fine. Can you give me a hint where I can most likely find the problem?

    Read the article

  • HTTP transfer speeds start fast then slows to a crawl

    - by AnITAdmin
    We just got a new dedicated 1 gigabit server running IIS. The CPU is at 15% or less, and 3 GB of the 4 GB of RAM is unused... We are pushing 110 Mbit/s in total, but individual speeds are really slow. In fact, here's how it happens: we connect, the speed is really fast at first, then it quickly declines to 40 kB/s or less. What's going on? It seems the server just won't go above 120 Mbit/s. The files are all very large, 50 MB to 500 MB... Could this be a factor? Again, CPU, RAM, and UI responsiveness when accessing remotely all seem fine.

    Read the article

  • Enabling HTTP access on port 80 for CentOS 6.3 from the console

    - by Hugo
    I have a CentOS 6.3 box running on Parallels and I'm trying to open port 80 so it is accessible from outside. I tried the GUI solution from this post and it works, but I need to get it done from a script. I tried to do this:

        sudo /sbin/iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        sudo /sbin/iptables-save
        sudo /sbin/service iptables restart

    This creates exactly the same iptables entries as the GUI tool, except it does not work:

        $ telnet xx.xxx.xx.xx 80
        Trying xx.xxx.xx.xx...
        telnet: connect to address xx.xxx.xx.xx: Connection refused
        telnet: Unable to connect to remote host

    UPDATE:

        $ netstat -ntlp
        (No info could be read for "-p": geteuid()=500 but you should be root.)
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
        tcp        0      0 0.0.0.0:3306       0.0.0.0:*          LISTEN   -
        tcp        0      0 127.0.0.1:6379     0.0.0.0:*          LISTEN   -
        tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   -
        tcp        0      0 0.0.0.0:80         0.0.0.0:*          LISTEN   -
        tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   -
        tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   -
        tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   -
        tcp        0      0 0.0.0.0:37439      0.0.0.0:*          LISTEN   -
        tcp        0      0 :::111             :::*               LISTEN   -
        tcp        0      0 :::22              :::*               LISTEN   -
        tcp        0      0 ::1:631            :::*               LISTEN   -
        tcp        0      0 :::60472           :::*               LISTEN   -

        $ sudo cat /etc/sysconfig/iptables
        # Generated by iptables-save v1.4.7 on Wed Dec 12 18:04:25 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [5:640]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Wed Dec 12 18:04:25 2012
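
    One thing the saved rules above make visible once they are on separate lines: the port-80 ACCEPT was appended after the catch-all REJECT, and iptables stops at the first matching rule, so the ACCEPT is never reached. A sketch of inserting the rule above the REJECT and persisting it the CentOS 6 way (the position number 5 assumes exactly the rule order shown above):

        sudo /sbin/iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        sudo /sbin/service iptables save

    Note that /sbin/iptables-save on its own only prints the rules to stdout; "service iptables save" is what writes them to /etc/sysconfig/iptables so they survive a restart.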

    Read the article

  • Using trickle to slow down a browser

    - by tester
    According to trickle's man page, http://linux.die.net/man/1/trickle, I can limit the download speed of a process, e.g. trickle -u 10 -d 20 ncftp to launch ncftp(1) limiting its upload capacity to 10 KB/s and its download capacity to 20 KB/s. How would I go about limiting google-chrome or firefox with trickle? Edit: For those of you asking why I asked such an obvious question: I tried trickle -u 10 -d 20 firefox and I get the error "trickle: Could not reach trickled, working independently: No such file or directory". Firefox opens right after, but is definitely not rate limited...
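
    Two things worth checking (assumptions, since the exact setup isn't shown): the "Could not reach trickled" line is only a warning that the trickled daemon isn't running, after which trickle keeps working standalone; and if Firefox is already running, a second "firefox" command just hands the URL to the existing process, which trickle cannot shape. A sketch:

        # Make sure no existing Firefox process is running, then start it
        # standalone under trickle (-s skips the trickled daemon entirely).
        pkill firefox
        trickle -s -u 10 -d 20 firefox

    Multi-process browsers such as Chrome are harder to shape this way, because trickle's LD_PRELOAD trick does not follow every child process; a system-level tool like tc is more reliable there.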

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website update cycle: it starts from the moment a user visits the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while. That could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I perform client-side caching of content for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of is that, if website updates can be scheduled, the max-age returned by the server could be decreased every day accordingly, so that no matter when people visit the website the end of the caching period would coincide with the next update. But changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
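
    A widely used alternative (not mentioned in the question) is to keep the long max-age but change the asset URL whenever the asset changes, so a deploy invalidates the old copies instantly while untouched files stay cached for the full 30 days. A sketch in HTML, where the version string is a placeholder bumped at each monthly release (a content hash in the filename works the same way):

        <link rel="stylesheet" href="/css/site.css?v=2012-10">
        <script src="/js/app.js?v=2012-10"></script>

    The HTML page itself is served with a short or no-cache lifetime, so clients always pick up the new references and then fetch only the assets whose URLs changed.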

    Read the article
