Search Results

Search found 50062 results on 2003 pages for 'http 1 1'.


  • Need help with LogParser on IIS logs

    - by user36440
    I am using Log Parser 2.2 and need a script that does two things: find URLs where the referer contains a given value, and loop over 30 folders. This is what I have so far: logparser -rpt:-1 "SELECT COUNT(*) INTO feeds.txt FROM u_ex100302*.log WHERE TO_LOWERCASE(cs(Referer)) LIKE '/feeds%'"
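
    A batch loop is probably the simplest way to cover the 30 folders. A minimal sketch, assuming a hypothetical one-folder-per-site layout under C:\logs and LogParser.exe on the PATH; note that LIKE '/feeds%' only matches referers that begin with /feeds, so a contains-style match needs '%/feeds%' (percent signs doubled inside a batch file):

        @echo off
        rem Hypothetical layout: C:\logs\site1 ... C:\logs\site30
        for /D %%F in (C:\logs\site*) do (
            logparser -rpt:-1 "SELECT COUNT(*) INTO %%F\feeds.txt FROM %%F\u_ex100302*.log WHERE TO_LOWERCASE(cs(Referer)) LIKE '%%/feeds%%'"
        )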

    Read the article

  • Is it generally better to compress content on the proxy server or the app server?

    - by Dan
    We're using an F5 for load balancing and SSL proxying. Behind it we're serving up Java applications with Tomcat instances. These are fairly small applications, with hundreds of concurrent users. I'd like to compress some of the content, and I'm looking for advice on whether to configure compression on the F5 or on the Tomcat instances. Any big factors in the decision, or is it six of one, half a dozen of the other?
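
    For the Tomcat side, gzip can be enabled on the HTTP connector. A minimal server.xml sketch, assuming Tomcat 6/7 and a plain-HTTP connector behind the F5 (port and MIME list are illustrative):

        <Connector port="8080" protocol="HTTP/1.1"
                   compression="on"
                   compressionMinSize="2048"
                   compressableMimeType="text/html,text/css,application/javascript,application/json"/>

    One factor in the decision: since the F5 already terminates SSL, compressing there keeps CPU off the Tomcat instances and applies one consistent policy to every app behind it; at hundreds of concurrent users either choice usually works.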

    Read the article

  • Opening port 80 on the router has no effect

    - by Ricardo Pieper
    A friend of mine has an ADSL modem and I need to forward some ports. I have already forwarded port 1521 (Oracle) and it works fine. Now I need to forward port 80. I have already set up his IIS bindings for this port, and forwarded the port the way this video shows: https://www.youtube.com/watch?v=DLKD-fyexoo So I think I did everything correctly. The local IP address in the rule is also the same as that of the machine where the IIS server is running. (I'm sorry, but I can't post images since I don't have 10 points.) Somehow I can't open this port: yougetsignal.com keeps saying it is closed. When I try to open the port, the control panel tells me that I have to access the control panel on port 8080, because port 80 will be opened. OK, that's fine. But I can still access it on port 80, and when I try to access it on port 8080, it doesn't work. I'm trying this with a TP-Link 8816, but I also tried to open the port on an Opticom DsLink 279 (using another machine), and it didn't work either; I got the exact same results. He has a dynamic IP address, but he is also using No-IP, so I can always reach his Oracle database at a fixed hostname. Port 1521 is open. I also tried disabling the firewall in Windows, but that makes no sense to me, since the router never really opens port 80. Clearly I'm missing something. I have never done this before, so I don't know how to proceed. Restarting the router was the first thing I did, with no results. I'm accessing his laptop through TeamViewer, so I'm testing the port from outside his local network. Edit: my ISP says that they allow opening ports, and port 1521 is open. What can I do to open port 80?
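
    One way to narrow this down is to test the chain in stages, assuming the server's LAN address is 192.168.1.100 (hypothetical). If the LAN test succeeds but the public test fails, the router is still holding port 80 for its own web interface (i.e. remote management never actually moved to 8080), or the ISP filters inbound port 80 despite what support says:

        rem On the IIS machine: is anything listening on port 80?
        netstat -ano | findstr :80

        rem From another machine on the LAN: does IIS answer locally?
        telnet 192.168.1.100 80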

    Read the article

  • CentOS 6.5 proxy bypass/no_proxy not working

    - by Naruto Uzumaki
    I am running CentOS 6.5 on my desktop. I've set the network proxy using the Network Proxy application provided under Preferences, and added the following exceptions: localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16 But whenever I use wget (I'm testing the proxy settings with wget), wget tries to connect to the proxy for private addresses, while wget localhost works fine and doesn't use the proxy. I also removed all the proxy settings and set the proxy in the shell: export http_proxy="<proxy_url>:<port>" export https_proxy="<proxy_url>:<port>" export no_proxy="localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16" It works when I use wget <external_url> or wget localhost, but fails when I use wget <private address from the $no_proxy variable>. I also tried setting the variables on Ubuntu 14.04 and hit the same issue.
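
    One likely cause: wget does not understand CIDR ranges in no_proxy at all; it matches entries as plain host or domain suffixes, so 172.16.0.0/12 never matches anything. A minimal sketch of a value wget can actually honor, with hypothetical hosts; each private host (or a shared domain suffix) has to be listed explicitly:

        export http_proxy="http://proxy.example.com:3128"
        export https_proxy="http://proxy.example.com:3128"
        export no_proxy="localhost,127.0.0.1,192.168.1.10,.intranet.example.com"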

    Read the article

  • Apache whitelist a single location, but require basic auth for everything else

    - by Chris Lawlor
    I'm sure this is simple, but Google is not my friend this morning. The goal: /public is openly accessible; everything else (including /) requires basic auth. This is a WSGI app with a single WSGI script (it's a Django site, if that matters). I have this:

        <Location /public>
            Order deny,allow
            Allow from all
        </Location>

        <Directory />
            AuthType Basic
            AuthName "My Test Server"
            AuthUserFile /path/to/.htpasswd
            Require valid-user
        </Directory>

    With this configuration, basic auth works fine, but the Location directive is totally ignored. I'm not surprised: according to the Apache docs (see "How the Sections are Merged"), Directory sections are processed first. I'm sure I'm missing something, but Directory applies to a filesystem location, I really only have the one Directory at /, it's a Location that I want to open up, and Directory always overrides Location... EDIT: I'm using Apache 2.2, which doesn't support AuthType None.
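
    In Apache 2.2 the usual escape hatch is Satisfy Any on the open location, which tells Apache that passing either the host-based rules or the auth rules is enough there. A minimal sketch reusing the paths above:

        <Directory />
            AuthType Basic
            AuthName "My Test Server"
            AuthUserFile /path/to/.htpasswd
            Require valid-user
        </Directory>

        <Location /public>
            Order deny,allow
            Allow from all
            Satisfy Any
        </Location>

    Because Location sections merge after Directory sections, the Satisfy Any set here wins for /public, while everything else still falls through to Require valid-user.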

    Read the article

  • How to run a website domain without redirecting if the IP is already used for another website? [duplicate]

    - by SSpoke
    I bought a VPS host that gave me only one IP address, which I used for my first domain name; that works without any problems. For my second domain name, I can't use the same IP address, as it already points to the first domain. So I figured my only option was a GoDaddy-hosted iframe redirection to a subfolder of my first domain, which has worked so far. Now I'm trying to send the visitor to PayPal from PHP with header(), and I get a permission error because of that iframe: Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'. How do I avoid the iframe solution for my second domain without breaking my first domain? Somebody once told me it doesn't matter if you have one IP address, you can host multiple websites on it. How is that possible? DNS doesn't work off ports, as far as I know. Yes, I could host multiple websites in different folders, but that's not what I call hosting a real website: it has to be reachable by its own domain name, so that this iframe issue doesn't happen. My server is httpd (Apache) on CentOS 6 (Linux).
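
    What makes one IP serve many domains is name-based virtual hosting: the browser sends the domain in the HTTP Host header and Apache picks the matching block, no extra ports involved. A minimal httpd.conf sketch with hypothetical names and paths (Apache 2.2 needs the NameVirtualHost line):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.first-example.com
            DocumentRoot /var/www/first
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.second-example.com
            DocumentRoot /var/www/second
        </VirtualHost>

    Then point the second domain's DNS A record at the same IP and drop the iframe entirely.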

    Read the article

  • nginx dynamic server_name with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a folder per domain. To do this I'm using regular expressions in the server name, like so: server_name ~((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*); This creates three variables for nginx to use later on. For example, with the URL test.foo.example.com it evaluates to:

        $subdomain = test.foo
        $domain = example
        $tld = com

    The problem arises when a co.uk top-level domain is used. For the URL test.foo.example.co.uk it evaluates to:

        $subdomain = test.foo.example
        $domain = co
        $tld = uk

    How can I change the regular expression so that it also works for co.uk?
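
    One approach is to enumerate the two-part suffixes as alternatives in the tld group and stop the domain group from swallowing dots; a minimal sketch (the suffix list is illustrative, not exhaustive):

        server_name ~^((?<subdomain>.+)\.)?(?<domain>[^.]+)\.(?<tld>co\.uk|org\.uk|com|net|org)$;

    With [^.]+ for domain and co\.uk listed as a single tld alternative, test.foo.example.co.uk now evaluates to subdomain=test.foo, domain=example, tld=co.uk.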

    Read the article

  • How to rotate one Squid user among multiple IPs based on the number of requests processed by each IP

    - by Arvind
    I want to set up a Squid ACL in the following manner: say my Squid proxy server has 10 IP addresses, and I have a user 'demouser'. For the very first request 'demouser' sends, it should use IP address #1; for the second request, IP address #2; for the third request of the day, IP address #3; and so on until all IPs are used up. As a further level of control, once the user has sent one request per available IP address, no further proxy requests should go through. How do I set up such a configuration with Squid ACLs? Even a document or how-to would be very helpful. The official wiki covers one 'weird' case, choosing an IP address based on the time of day the request was made; the other cases are all regular use cases, nowhere near the requirement above.
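
    Stock Squid ACLs cannot count requests, so a strict first-request-on-IP-1, second-request-on-IP-2 rotation needs an external ACL helper that keeps the tally. A loose approximation that only spreads the load uses the random ACL type (Squid 3.2 or later) with tcp_outgoing_address; a minimal squid.conf sketch for three of the addresses, assuming proxy auth is already configured (all values hypothetical):

        acl demouser proxy_auth demouser
        # each random ACL matches with the given probability
        acl pick1 random 1/3
        acl pick2 random 1/2
        tcp_outgoing_address 10.0.0.1 demouser pick1
        tcp_outgoing_address 10.0.0.2 demouser pick2
        tcp_outgoing_address 10.0.0.3 demouser

    The 1/3, 1/2, fallback cascade spreads requests roughly evenly, but it neither guarantees strict order nor enforces the one-request-per-IP cutoff; both of those require the external-helper route.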

    Read the article

  • Google search results are downloaded as a file in Google Chrome

    - by i-g
    I'm behind a proxy at work, and Google Chrome insists on downloading Google search results pages instead of displaying them. Whether I search from the address bar, from google.com, or from a third-party site that has a Google search form, the results page ends up as a downloaded file called "search" in my downloads directory. I haven't seen this happen with any other search pages; Yahoo! Search, for example, works fine. Has anyone run into this before, and does anyone have ideas on how to fix it or what might be causing it? I'd try the Chrome support pages, but they're blocked by the proxy...

    Read the article

  • System for public internet access

    - by Pydev UA
    I'm looking for a solution for a public wifi spot. Everyone who connects to our wifi network should be redirected to a special website (after they open any internet page) where they must read and agree to our terms of service (by clicking a button). After that they can open any internet page. I have a router and a Linux box with the site (agreement page ready), so I'm looking for a software or hardware solution for this, preferably open source.
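
    What's described is a captive portal. Ready-made open-source portals exist, and the core redirect can also be built by hand with iptables on the Linux box. A minimal sketch, assuming the box routes the wifi clients on eth1 and the agreement site listens locally on port 8080 (interface, port, and MAC address are hypothetical):

        # send HTTP from unapproved clients to the local agreement page
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 8080

        # when a client clicks "agree", insert an exemption ahead of the redirect
        iptables -t nat -I PREROUTING -i eth1 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT

    The agreement page's button handler would run the second command (via a small privileged helper) for the accepting client's MAC.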

    Read the article

  • Stunnel too many clients

    - by davidsmalley
    I'm trying to hook up stunnel and haproxy to forward https connections through to some backend servers. I've got haproxy set up right, and I seem to have stunnel set up right. The trouble is that when I hit the setup with a load test, after a while I start to see these log entries:

        2010.05.05 11:24:43 LOG7[3498:3086792368]: https accepted FD=512 from 10.195.158.225:52579
        2010.05.05 11:24:43 LOG4[3498:3086792368]: Connection rejected: too many clients (=500)

    I guess I've hit a limit somewhere, but I'm not sure how to fix it; there doesn't seem to be a config file option in stunnel to change this. Does anyone know how to configure stunnel for a potentially large number of connections?
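
    That ceiling usually comes from the process file-descriptor limit rather than a stunnel option: stunnel needs roughly two descriptors per connection, so the common 1024-FD default yields the ~500-client cutoff in the log. A minimal sketch, assuming stunnel is started from a shell or init script where the limit can be raised:

        # raise the fd limit in the environment that launches stunnel
        ulimit -n 8192
        stunnel /etc/stunnel/stunnel.conf

    One caveat: builds that use select() are additionally capped by FD_SETSIZE regardless of ulimit; a build with poll support avoids that.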

    Read the article

  • Proxy Access to my Squid Proxy

    - by Fake4d
    I have a Squid proxy cluster that lets my users surf the internet and reach intranet resources. Now a particular user wants to run another Squid on the users' network, and that proxy needs to reach the internet through mine (a chained, proxy-behind-proxy configuration). It doesn't work at the moment. So here is the question: what configuration lines do I need in my squid.conf to allow an IP to use my Squid as an upstream proxy?
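
    On the upstream cluster this is just a src ACL for the downstream proxy's address plus an allow rule placed before the final deny; a minimal squid.conf sketch with a hypothetical address:

        acl downstream_proxy src 10.20.30.40
        http_access allow downstream_proxy

    The downstream Squid then chains to the upstream with cache_peer and is kept from going direct:

        cache_peer upstream.example.com parent 3128 0 no-query default
        never_direct allow all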

    Read the article

  • I keep getting a 403 Forbidden permission error on my Fedora server through my local network

    - by kdavis8
    Trying to view a JavaScript file on my home server, I get the following error:

        Forbidden
        You don't have permission to access /jquery-1.8.2.js on this server.
        Apache/2.2.22 (Fedora) Server at 192.168.1.3 Port 80

    I have given all users access to the file like this: sudo chmod -R 777 /var/www/html/jquery-1.8.2.js I have even gone as far as changing the user and group properties in the httpd.conf file.
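
    On Fedora, a 403 that survives chmod 777 is very often SELinux: a file copied or downloaded into /var/www/html can carry a security context httpd is not allowed to read. A minimal check-and-fix sketch:

        # is SELinux enforcing, and what context does the file have?
        getenforce
        ls -Z /var/www/html/jquery-1.8.2.js

        # restore the default context for files under /var/www/html
        sudo restorecon -v /var/www/html/jquery-1.8.2.js

    If the context was the problem, ls -Z should now show httpd_sys_content_t and the 403 should clear.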

    Read the article

  • Are you aware of any client-side malware that sends lots of junk requests for .gifs?

    - by Matt Sherman
    I am getting dozens of 404 errors on my site from requests for GIFs with apparently random names, like 4273uaqa.gif and 5pwowlag.gif. Most of them come from one user, so I assume something is happening in the background on her machine without her knowledge, i.e. a malware thing on the client. Has anyone seen this behavior before? I'd love to advise my customer that she has an issue, and I'd also like to stop getting these 404 reports. :)

    Read the article

  • Configure Apache with an .htaccess file to strip out unneeded response headers

    - by Koning Baard XIV
    For ultimate speed, I want my Apache server to strip unneeded headers from the response. Currently the headers look like this (excluding the status line):

        Connection: Keep-Alive
        Content-Length: 200
        Content-Type: text/html
        Date: Sat, 15 May 2010 16:28:37 GMT
        Keep-Alive: timeout=5, max=100
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Phusion_Passenger/2.2.7
        X-Powered-By: PHP/5.3.1

    Which I want reduced to:

        Connection: Keep-Alive
        Content-Type: text/html
        Keep-Alive: timeout=5, max=100

    How can I configure this in an .htaccess file? Thanks
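
    With mod_headers enabled, application-supplied headers can be unset per directory; a minimal .htaccess sketch:

        <IfModule mod_headers.c>
            Header unset X-Powered-By
        </IfModule>

    Two caveats: the Server header generally cannot be removed from .htaccess (ServerTokens Prod in the main server config at least shortens it), and Content-Length and Date are generated late in the response cycle, so Header unset does not touch them; dropping Content-Length would also hurt keep-alive more than it saves bytes. X-Powered-By can alternatively be silenced at the source with expose_php = Off in php.ini.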

    Read the article

  • OPTIONS request vs GET in Ajax

    - by user41172
    I have a PHP/JavaScript app that queries and returns info using an Ajax request. On every server I've used so far, this works as expected, sending an Ajax GET request to the server and returning JSON data. On a new install, the query fails and returns nothing. I inspected the request, and it turns out that rather than sending the query as a GET, the browser is sending it as an OPTIONS request. Is there any reason for this? I have no idea why this might happen. Thanks!
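
    The usual reason a browser sends OPTIONS before an Ajax GET is a CORS preflight: on the new install the page and the endpoint are presumably on different origins (different host, port, or scheme), and custom headers such as X-Requested-With push the request out of the "simple" category. A minimal PHP sketch that answers the preflight so the real GET can follow (the origin value is hypothetical):

        <?php
        // answer the CORS preflight and stop; the browser then retries with GET
        if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
            header('Access-Control-Allow-Origin: https://app.example.com');
            header('Access-Control-Allow-Methods: GET, OPTIONS');
            header('Access-Control-Allow-Headers: X-Requested-With, Content-Type');
            exit;
        }
        // normal GET handling continues below

    Serving the page and the Ajax endpoint from the same origin avoids the preflight entirely.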

    Read the article

  • Web page change monitoring for Mac OS X

    - by brucethehoon
    There is a site that I need to monitor for changes and pop an alarm if the change is over a certain threshold. Ideally it would use Growl to pop the notification but I'm open to alternatives. If there is an application out there that does this for purchase, I'll just buy it. If not, pointers to any Linux / Java / other recompilables that I could just add Growl support to would be very helpful!
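
    Absent a ready-made app, cron plus curl, diff, and Growl's growlnotify command-line tool (from Growl Extras) can approximate this; a minimal sketch with a hypothetical URL and a lines-changed threshold:

        #!/bin/sh
        # fetch the page, compare with the previous copy, alert on big diffs
        URL="http://example.com/page-to-watch"
        NEW=/tmp/watch.new
        OLD=/tmp/watch.old
        THRESHOLD=10

        curl -s "$URL" > "$NEW"
        if [ -f "$OLD" ]; then
            CHANGED=$(diff "$OLD" "$NEW" | grep -c '^[<>]')
            if [ "$CHANGED" -gt "$THRESHOLD" ]; then
                growlnotify -m "$URL changed ($CHANGED lines)" "Page watcher"
            fi
        fi
        mv "$NEW" "$OLD"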

    Read the article

  • Connecting to my home router web interface from work

    - by Joe
    Hi, I'm trying to connect to my home router's web interface from work. I use DynDNS, because I don't have a static IP at home, and it works perfectly from any other place except my workplace (update: I made a mistake, see the edit below). When trying to access the web interface from work I get a "500 Server Error" with the code SERVER_RESPONSE_RESET. I'm not trying to use any protocol such as remote desktop; I'm only trying to reach the web interface. I can access any other web page from my workplace with no problems, and I'd think my router's web interface is a web page like any other, isn't it? I thought maybe my workplace proxy blocks addresses of services like DynDNS, so I tried another trick: since I have a page on my own domain (say www.mydomain.com) which I can reach from work, I added a CNAME on my domain (router.mydomain.com) pointing at the DynDNS address. This way anyone entering router.mydomain.com from anywhere reaches my home router's web interface, and there's no way of knowing it's a DynDNS address (or is there?). However, it still doesn't work from my workplace (I get the same error message). Any ideas? Edit: I'm sorry to say I made a mistake earlier. I used to be able to access my home router's web interface from my old workplace, and I thought it was still possible, since I don't recall making any configuration changes. However, after reading the replies, I went over to my old workplace and checked, and it doesn't work from there either. I'm very sorry for giving out wrong and misleading information about my problem. So to summarize: my problem is that I can't access my home router's web interface from anywhere.

    Read the article

  • Comcast SMC Port Forwarding Issue

    - by Zach Fedora
    I have a Comcast SMCD3G modem/router and I've been having trouble getting a port to forward. When I check with an online open-port checker, it says the port is forwarded and visible on my IP (one static IP is assigned to the modem). But when I try to access port 80 in a browser, for example, it times out. Likewise, remote desktop to the server (Windows Server 2008 R2) doesn't work, yet canyouseeme.org says the port is open. Any ideas as to what the problem could be?

    Read the article

  • IE6 does not follow 302 redirect - displays 404 instead

    - by Dexter
    One of our clients has reported that she is experiencing 404 (file not found) errors when attempting to navigate a website that we support. The behaviour only appears to affect her: other users on the same machine can navigate the website fine, but the problem follows her from one PC to another. I've had a good look through the IIS server logs and have identified the requests in question. The normal request pattern is as follows:

        POST /page.aspx - 80 - ... 401 1 0
        POST /page.aspx - 80 DOMAIN/user ... 302 0 0
        GET /anotherPage.aspx Request=833f80a5-f34c-4b0e-addb-d73e1ee1663a 80 - ... 401 1 0
        GET /anotherPage.aspx Request=833f80a5-f34c-4b0e-addb-d73e1ee1663a 80 DOMAIN/user ... 200 0

    However, the log for the affected user contains neither a request for the redirected page nor an entry for the 404:

        POST /page.aspx - 80 - ... 401 1 0
        POST /page.aspx - 80 DOMAIN/user ... 302 0 0
        ... other unrelated requests

    Can anyone suggest what might trigger this behaviour, and how I might investigate the cause or prevent it from occurring? I read here that the Allow META refresh option in IE6 might trigger this behaviour, but I have not been able to replicate it by modifying that setting alone.

    Read the article
