Search Results

Search found 52885 results on 2116 pages for 'http redirect'.

  • Redirect packets directed to port 5000 to another port

    - by tdc
    I'm trying to use eboard to connect to the FICS servers (http://www.freechess.org), but it fails because port 5000 is blocked (company firewall). However, I can connect to the server through the telnet port (23):

        telnet freechess.org 23     (succeeds)
        telnet freechess.org 5000   (fails)

    Unfortunately the port number is hardcoded (see here: http://ubuntuforums.org/archive/index.php/t-1613075.html). I'd rather not have to hack the source code as the author of that thread ended up doing. Can I just forward the port on my local machine using iptables? I tried:

        sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5000 -j REDIRECT --to-port 23

    and

        sudo iptables -t nat -I OUTPUT --src 0/0 -p tcp --dport 5000 -j REDIRECT --to-ports 23

    but these didn't work... Note that:

        $ sudo iptables -t nat -L
        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:5000 redir ports 23

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:5000 redir ports 23

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination
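
    A note on why the rules above fall flat, plus a hedged alternative: the PREROUTING chain never sees locally generated packets, and REDIRECT rewrites the destination to the local machine itself, so the OUTPUT rule points eboard at localhost:23, where nothing is listening. A DNAT rule in the nat OUTPUT chain keeps the packets headed for freechess.org and only swaps the port (a sketch; omitting the address part of --to-destination changes just the port, and the hostname is resolved once when the rule is added):

        # locally generated traffic traverses nat OUTPUT, not PREROUTING;
        # DNAT preserves the original destination host and rewrites the port
        sudo iptables -t nat -A OUTPUT -p tcp -d freechess.org --dport 5000 \
             -j DNAT --to-destination :23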

    Read the article

  • Google.com and clients1.google.com/generate_204

    - by David Murdoch
    I was looking into google.com's Net activity in Firebug just because I was curious and noticed a request was returning "204 No Content." It turns out that a 204 No Content "is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view." Whatever. I've looked into the JS source code and saw that "generate_204" is requested like this:

        (new Image).src = "http://clients1.google.com/generate_204"

    No variable declaration/assignment at all. My first idea was that it was being used to track whether Javascript is enabled. But the "(new Image).src='...'" call is made from a dynamically loaded external JS file anyway, so that would be pointless. Anyone have any ideas as to what the point could be?

    UPDATE: "/generate_204" appears to be available on many Google services/servers (e.g., maps.google.com/generate_204, maps.gstatic.com/generate_204, etc...). You can take advantage of this by pre-fetching the generate_204 pages for each Google-owned service your web app may use. Like this:

        window.onload = function(){
            var two_o_fours = [
                // google maps domain ...
                "http://maps.google.com/generate_204",

                // google maps images domains ...
                "http://mt0.google.com/generate_204",
                "http://mt1.google.com/generate_204",
                "http://mt2.google.com/generate_204",
                "http://mt3.google.com/generate_204",

                // you can add your own 204 page for your subdomains too!
                "http://sub.domain.com/generate_204"
            ];

            for(var i = 0, l = two_o_fours.length; i < l; ++i){
                (new Image).src = two_o_fours[i];
            }
        };

    Read the article

  • C# SOCKS proxy service for HTTP requests

    - by Ed
    I'm trying to build a service that will forward HTTP requests from agents like a browser to the Tor service. Problem is, the Tor service only accepts SOCKS4a connections. So my solution is to listen for HTTP requests, get the URL they're requesting, and make a request via Tor with the help of the Starksoft.Net.Proxy library. Then return the response. The library kind of works, but I'm not happy. It returns HTTP headers with the response and it can't handle images. So the responses are messed up. How could I improve my code? I'm very new to network programming. Sorry for the long example.

        public AnonymiserService(ILogger logger)
        {
            try
            {
                _logger = logger;
                _logger.Log("Listening on port {0}...", Properties.Settings.Default.ListeningPort);
                StartListener(new string[] { string.Format("http://*:{0}/", Properties.Settings.Default.ListeningPort) });
            }
            catch (Exception ex)
            {
                _logger.LogError("Exception!", ex);
            }
        }

        private void StartListener(string[] prefixes)
        {
            if (!HttpListener.IsSupported)
            {
                _logger.LogError("HttpListener isn't supported on this machine!");
                return;
            }

            HttpListener listener = new HttpListener();
            foreach (string s in prefixes)
                listener.Prefixes.Add(s);

            while (true)
            {
                listener.Start();
                IAsyncResult result = listener.BeginGetContext(new AsyncCallback(ListenerCallback), listener);
                result.AsyncWaitHandle.WaitOne();
            }
        }

        private void ListenerCallback(IAsyncResult result)
        {
            try
            {
                // Get HTTP request
                HttpListener listener = (HttpListener)result.AsyncState;
                HttpListenerContext context = listener.EndGetContext(result);
                _logger.Log("Retrieving [{0}]", context.Request.RawUrl);

                // Create connection
                // Use Tor as proxy
                IProxyClient proxyClient = new Socks4aProxyClient("localhost", 9050);
                TcpClient tcpClient = proxyClient.CreateConnection(context.Request.UserHostName, 80);

                // Create message
                // Need to set Connection: close to close the connection as soon as it's done
                byte[] data = Encoding.UTF8.GetBytes(String.Format(
                    "GET {0} HTTP/1.1\r\nHost: {1}\r\nConnection: close\r\n\r\n",
                    context.Request.Url.PathAndQuery, context.Request.UserHostName));

                // Send message
                NetworkStream ns = tcpClient.GetStream();
                ns.Write(data, 0, data.Length);

                // Pass on HTTP response
                HttpListenerResponse responseOut = context.Response;
                if (ns.CanRead)
                {
                    byte[] buffer = new byte[32768];
                    int read = 0;
                    string responseString = string.Empty;

                    // Read response
                    while ((read = ns.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        responseString += Encoding.UTF8.GetString(buffer, 0, read);
                    }

                    // Remove headers
                    if (responseString.IndexOf("HTTP/1.1 200 OK") > -1)
                        responseString = responseString.Substring(responseString.IndexOf("\r\n\r\n"));

                    // Forward response
                    byte[] byteArray = Encoding.UTF8.GetBytes(responseString);
                    responseOut.OutputStream.Write(byteArray, 0, byteArray.Length);
                }

                // Close streams
                responseOut.OutputStream.Close();
                ns.Close();

                // Close connection
                tcpClient.Close();
                _logger.Log("Retrieved [{0}]", context.Request.RawUrl);
            }
            catch (Exception ex)
            {
                _logger.LogError("Exception in ListenerCallback!", ex);
            }
        }
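
    The image corruption comes from round-tripping the body through Encoding.UTF8: arbitrary bytes do not survive a decode/encode cycle. A hedged sketch of a binary-safe replacement for the read/forward section above, which relays the bytes untouched after skipping past the header block (the "\r\n\r\n" scan assumes the upstream always speaks plain HTTP/1.x):

        // read the whole upstream reply as raw bytes
        var ms = new System.IO.MemoryStream();
        byte[] buf = new byte[32768];
        int n;
        while ((n = ns.Read(buf, 0, buf.Length)) > 0)
            ms.Write(buf, 0, n);
        byte[] raw = ms.ToArray();

        // locate the end of the status line + headers (0x0D 0x0A 0x0D 0x0A)
        int bodyStart = 0;
        for (int i = 0; i + 3 < raw.Length; i++)
            if (raw[i] == 13 && raw[i + 1] == 10 && raw[i + 2] == 13 && raw[i + 3] == 10)
            {
                bodyStart = i + 4;
                break;
            }

        // forward only the body, byte for byte, so images survive intact
        responseOut.OutputStream.Write(raw, bodyStart, raw.Length - bodyStart);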

    Read the article

  • URL Rewrite – Protocol (http/https) in the Action

    - by OWScott
    IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used server variable that isn't readily available. That's the protocol: HTTP or HTTPS. You can easily check if a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule. There isn't a variable available to dynamically set the protocol in the action part of the rule. What I wish is that there were a variable like {HTTP_PROTOCOL} which would have a value of 'HTTP' or 'HTTPS'. There is a server variable called {HTTPS}, but its values of 'on' and 'off' aren't practical in the action. You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren't useful in the action.

    Let me illustrate. The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

        <rule name="Redirect to www">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="http://www.localtest.me/{R:1}" />
        </rule>

    The problem is that it forces the request to HTTP even if the original request was for HTTPS.

    Interestingly enough, I planned to blog about this topic this week when I noticed in my twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic. He beat me to the punch by just a couple days. However, I figured I would still write my blog post on this topic. While his solution is an excellent one, I personally handle this another way most of the time. Plus, it's a commonly asked question that isn't documented well enough on the web yet, so having another article on the web won't hurt.

    I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four. Don't let the choices overwhelm you though. Let's keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation. All four options will work for most situations.

    Option 1 – CACHE_URL, single rule

    There is a server variable that has the protocol in it: {CACHE_URL}. This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5). All we need to do is extract the HTTP or HTTPS and we'll be set. This tends to be my preferred way to handle this situation. Indeed, Jeff did briefly mention this in his blog post:

        … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical "or" match for conditions.

    Thus the problem. If you have multiple conditions set to "Match Any" rather than "Match All" then this option won't work. However, I find that 95% of all rules that I write use "Match All" and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule. The caveat is that if you use "Match Any" then you must consider one of the next two options.

    Enough with the preamble. Here's how it works. Add a condition that checks {CACHE_URL} against the pattern "^(.+)://". Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS. In URL Rewrite 2.0 or greater you can check "Track capture groups across conditions", make that condition the first condition, and you have yourself a back-reference of {C:1}. The "Redirect to www" example, with support for maintaining the protocol, becomes:

        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="true">
            <add input="{CACHE_URL}" pattern="^(.+)://" />
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
        </rule>

    It's not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it's pretty close. I also like this option since I often create rule examples for other people, and this type of rule is portable since it's self-contained within a single rule.

    Option 2 – Using a Rewrite Map

    For a safer rule that works for both "Match Any" and "Match All" situations, you can use the Rewrite Map solution that Jeff proposed. It's a perfectly good solution, with the only drawback being the ever so slight extra effort to set it up, since you need to create a rewrite map before you create the rule. In other words, if you choose to use this as your sole method of handling the protocol, you'll be safe.

    After you create a Rewrite Map called MapProtocol, you can use "{MapProtocol:{HTTPS}}" for the protocol within any rule action. Following is an example using a Rewrite Map.

        <rewrite>
          <rules>
            <rule name="Redirect to www" stopProcessing="true">
              <match url="(.*)" />
              <conditions trackAllCaptures="false">
                <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
              </conditions>
              <action type="Redirect"
                url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
            </rule>
          </rules>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on" value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
        </rewrite>

    Option 3 – CACHE_URL, multi-rule

    If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}. The potential issue with this option is that if you don't have access to the server level (e.g. in a shared environment) then you cannot set server variables without permission.

    First, create a rule and place it at the top of the set of rules. You can create this at the server, site or subfolder level. However, if you create it at the site or subfolder level then the HTTP_PROTOCOL server variable needs to be approved at the server level. This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking on "View Server Variables" from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level then this step is not necessary. Following is an example of the first rule, which creates HTTP_PROTOCOL, and then a rule that uses it. The "Create HTTP_PROTOCOL" rule only needs to be created once on the server.

        <rule name="Create HTTP_PROTOCOL">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{CACHE_URL}" pattern="^(.+)://" />
          </conditions>
          <serverVariables>
            <set name="HTTP_PROTOCOL" value="{C:1}" />
          </serverVariables>
          <action type="None" />
        </rule>

        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" />
        </rule>

    Option 4 – Multi-rule

    Just to be complete, I'll include an example of how to achieve the same thing with multiple rules. I don't see any reason to use it over the previous examples, but I'll include an example anyway. Note that it will only work with the "Match All" setting for the conditions.

        <rule name="Redirect to www - http" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="http://www.localtest.me/{R:1}" />
        </rule>
        <rule name="Redirect to www - https" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
            <add input="{HTTPS}" pattern="on" />
          </conditions>
          <action type="Redirect" url="https://www.localtest.me/{R:1}" />
        </rule>

    Conclusion

    Above are four working examples of methods to call the protocol (HTTP or HTTPS) from the action of a URL Rewrite rule. You can use whichever method you most prefer. I've listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice. In any case, hopefully you can use this as a reference for when you need the protocol in the rule's action when writing your URL Rewrite rules.

    Further information: Viewing all Server Variables for a site · URL Parts available to URL Rewrite Rules · Further URL Rewrite articles

    Read the article

  • Generic HTTP 500 Error Message On Hosted Sites (like GoDaddy)

    - by Jimbo
    I decided to post this because I battled to find out how to do it and couldn't see anything on Stack Overflow about it. Often when you host with a provider like GoDaddy, they have "Custom Error Messages" set to ON. What I didn't realise was that the web.config settings don't just apply to ASP.NET; they apply to all applications on YOUR IIS site, and hence will sort this problem out for Classic ASP as well (very few GoDaddy support people even know this). All you need to do is add the following to your web.config OR, for those using Classic ASP, just create a web.config file in your ROOT with this code in it.

        <configuration>
          <system.webServer>
            <asp scriptErrorSentToBrowser="true"/>
            <httpErrors errorMode="Detailed"/>
          </system.webServer>
        </configuration>

    Read the article

  • Getting HAProxy to redirect HTTP to HTTPS in the user's browser session

    - by Jon
    We are currently using an Internet cloud provider to host our SaaS platform. The platform consists of:

        Firewall -> Cloud Provider SLB -> Apache Web Server -> HAProxy SLB -> Liferay Platform

    We have had to use HAProxy because of an issue with the cloud provider's SLB that meant we were unable to use it for load balancing the Liferay platform applications. I have implemented HAProxy in our secure tier and it seems to do the trick of load balancing the requests quite adequately. However, during testing we encountered a functional issue whereby selecting a sub-menu from the web portal resulted in the application hanging. Using an HTTP analyser we saw that the request being passed back to the user's browser was in HTTP. From discussing this with the software vendor, it transpires that the Liferay application has some hard-coded http links, and that other customers have worked around this by using physical NLBs such as F5 and redirecting the HTTP traffic to HTTPS. The entry in the HAProxy logs reads:

        haproxy[2717]: <Apache Web Agent>:37957 [11/Apr/2013:08:07:00.128] http-uapi uapi/<ServerName> 0/0/0/9/10 200 4912 - - ---- 4/2/1/2/0 0/0 "GET /servicedesk/controller?docommand=renderradform&!key=esd_sfb001_frm_feedback_forms_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365667773097&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&sso_token=ALiYv2UqzLsAhSw1ZchRDlCHlq44Bhj9&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fworkflow-tools&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-05-18%2015:05:31.38 HTTP/1.1"

    The corresponding HTTP browser entry shows:

        http://<FQDN of ServiceDesk>/servicedesk/controller?docommand=renderradform&!key=esd_org019_frm_contact_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365665987887&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&sso_token=3NxsXYORMPp32SwL8ftVUCMH2QdWLH82&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fapplication-setup&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-10-26%2019:00:25.08

    From reading through the forums and other sites, it looks like we should be able to use HAProxy to redirect the traffic to HTTPS, but try as I might I can't get it to work. This is our HAProxy configuration:

        global
            log 127.0.0.1 local2
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon
            stats socket /var/lib/haproxy/stats

        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        frontend http-openfire
            bind *:7070
            default_backend openfire

        backend openfire
            balance roundrobin
            server <serverName> <IPv4 Address>:7070 check
            server <serverName> <IPv4 Address>:7070 check

        frontend http-uapi
            bind *:7080
            default_backend uapi

        backend uapi
            balance roundrobin
            server <serverName> <IPv4 Address>:7080 check
            server <serverName> <IPv4 Address>:7080 check

        frontend http-sec
            bind *:8080
            default_backend sec

        backend sec
            balance roundrobin
            server <serverName> <IPv4 Address>:8080 check
            server <serverName> <IPv4 Address>:8080 check

        frontend http-wall
            bind *:9080
            default_backend wall

        backend wall
            balance roundrobin
            server <serverName> <IPv4 Address>:9080 check
            server <serverName> <IPv4 Address>:9080 check

        frontend http-xmpp
            bind *:9090
            default_backend xmpp

        backend xmpp
            balance roundrobin
            server <serverName> <IPv4 Address>:9090 check
            server <serverName> <IPv4 Address>:9090 check

        frontend http-aim
            bind *:10080
            default_backend aim

        backend aim
            balance roundrobin
            server <serverName> <IPv4 Address>:10080 check
            server <serverName> <IPv4 Address>:10080 check

        frontend http-servicedesk
            bind *:8081
            default_backend servicedesk

        backend servicedesk
            balance roundrobin
            server <serverName> <IPv4 Address>:8081 check
            server <serverName> <IPv4 Address>:8081 check

        listen stats :1936
            mode http
            stats enable
            stats hide-version
            stats realm Haproxy\ Statistics
            stats uri /
            stats auth haproxy:<Password>

    I have tried following the articles posted at http://stackoverflow.com/questions/13227544/haproxy-redirecting-http-to-https-ssl and http://parsnips.net/haproxy-http-to-https-redirect/ but that hasn't made any difference. Am I on the right track with this, or are we trying to achieve the impossible? I'm hoping I'm just being an idiot and one of you good people can point me in the right direction.
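
    For what it's worth, a minimal sketch of the redirect those articles describe, assuming HAProxy 1.5 or later (where the 'redirect scheme' directive exists) and assuming plain-HTTP requests actually reach a HAProxy frontend rather than being terminated at the Apache tier:

        frontend http-redirect
            bind *:80
            # bounce every plain-HTTP request back to the browser as HTTPS;
            # pre-1.5 builds can use 'redirect prefix https://<FQDN of Web Portal>'
            # to achieve the same for a single host
            redirect scheme https code 301

    Note that this only rewrites what the browser requests; links hard-coded as http:// inside Liferay's HTML are untouched, which is why the F5 approach works at the point where port-80 traffic first lands.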

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello. When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page; Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine, but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to be looking at. Any possible solutions would be much appreciated. Cheers
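
    A hedged starting point: these are stock squid.conf directives that govern how long Squid will wait on a slow origin server (the names are standard, but the values here are only illustrative and the defaults differ between Squid 2.x and 3.x):

        # how long Squid waits for more data on an open server connection
        read_timeout 15 minutes
        # overall cap on forwarding a single request to completion
        forward_timeout 10 minutes
        # how long Squid waits for the client to finish sending its request
        request_timeout 5 minutes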

    Read the article

  • Multithread http downloader with webui [closed]

    - by kiler129
    I'm looking for software similar to JDownloader or PyLoad. JD is pretty good, but it uses heavyweight Java and for now has a very weak web interface. PyLoad is awesome and includes a simple but powerful web UI, but downloading 10 files (10 threads each, so about 100 connections in total, running at around 8 MB/s) consumes a lot of CPU - a whole core for me. Do you know any lightweight alternatives? aria2c is good for the console, but I have failed to find any good web UI for it; the official one is decent but after adding more files it almost crashes Chrome :)

    Read the article

  • Windows HTTP tunnel through 2 Linux hosts?

    - by Darkmage
    localhost only has a connection to Host1. Host1 has connections to Host2 and localhost. How can I set this up to use Host2 as a proxy for web traffic from localhost? I have seen similar topics but can't get it to work. How do I set it up on the Windows XP client?
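
    A hedged sketch using PuTTY's command-line plink on the XP client, assuming both Linux hosts run sshd and you have accounts on each (the host names are placeholders). The first hop exposes Host2's SSH port locally through Host1; the second opens a SOCKS proxy whose traffic egresses from Host2. Then point the browser at SOCKS proxy localhost:8080.

        rem hop 1: tunnel to Host2's sshd through Host1
        plink -N -L 2222:Host2:22 user@Host1

        rem hop 2 (in a second console): dynamic SOCKS forward via Host2
        plink -N -D 8080 -P 2222 user@localhost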

    Read the article

  • HTTP caching headers: how should must-revalidate work?

    - by Bobby Jack
    Using trac, I'm getting a response with the following header:

        Cache-Control: must-revalidate

    Moreover, no 'Expires' header is being sent. Our local proxy, however, is caching these responses, so when an edit is made, pages need to be 'hard refreshed' to update. Is the proxy misbehaving? Other headers that might be relevant:

        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Keep-Alive: timeout=15, max=100
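
    Arguably the proxy is within its rights: must-revalidate only forbids serving a *stale* entry without revalidation, and with no Expires or max-age the proxy may assign a heuristic freshness lifetime during which the response counts as fresh. A hedged fix is for trac to make every stored copy immediately stale, so each reuse requires revalidation:

        Cache-Control: max-age=0, must-revalidate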

    Read the article

  • Handling UTF-8 with BOM in HTTP

    - by Alois Mahdal
    Say I have a script which at some point serves a plain text file as content (right after "\n\n"). These files are provided by users, but I can expect they will be UTF-8, so I hard-wire Content-Type: text/plain; charset=UTF-8. But while I can teach users to save everything in UTF-8, I can't be very sure that the files will be without a BOM ("\xEF\xBB\xBF"), as at least on Windows this is not very clearly distinguished in common plain text editors, and not every one of them uses the same default. So what about these files created on Windows, where they may or may not start with a BOM? Should/will the server or UA get rid of this debris for me? Or is it my task to prepare clean UTF-8, i.e. open each file and check whether a BOM needs to be removed?
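
    In practice neither the web server nor the UA is obliged to strip it, so it is safest to do it in the script. A minimal sketch in C# (the language is illustrative; the asker's script could be anything):

        // strip a leading UTF-8 BOM (0xEF 0xBB 0xBF, the encoding of U+FEFF)
        static byte[] StripUtf8Bom(byte[] data)
        {
            if (data.Length >= 3 && data[0] == 0xEF && data[1] == 0xBB && data[2] == 0xBF)
            {
                var trimmed = new byte[data.Length - 3];
                System.Array.Copy(data, 3, trimmed, 0, trimmed.Length);
                return trimmed;
            }
            return data; // no BOM present; serve as-is
        }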

    Read the article

  • Why do users get an HTTP 404 error when attempting to clone a Mercurial repository over HTTP?

    - by Geoffrey van Wyk
    The repository is hosted on my PC. I use Apache with WAMP and TortoiseHg. I have set up users and passwords, and they are able to browse the repository in their browsers after entering their usernames and passwords. The problem is that when they try to clone the repository, they get an HTTP 404 file not found error. However, I can clone the repository on my own PC using their credentials. The problem must lie somewhere with the Mercurial setup.

    Read the article

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised by our Cacti graphs for June 4 web traffic. We ran Log Parser on our IIS logs, and it turns out this was a perfect storm of Yahoo and Google bots indexing us: in that 3-hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch? While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time? Perhaps some bit of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?

    Read the article

  • Tunneling over HTTP

    - by Morgan
    Hello. I have a network at work that is locked behind a firewall, and an Internet connection is available only through a proxy server. At work, I can connect to databases that are distributed across the network. However, at home, I cannot connect to the proxy server or the databases. How can this be done? I can access my workstation via LogMeIn, so I can install anything on it. I thought of installing some kind of tunneling mechanism on my workstation. Then, at home, I could connect to this mechanism, which would in turn make the required connections. So essentially, what I'd like to do can be represented by the following diagram: Home = Workstation = Database. For example, whenever I connect to, say, 10.140.0.1:1234 at home, this would be redirected through my workstation to 10.140.0.1:1234, because 10.140.0.1:1234 is only available through the corporate network. NOTE: I'm using Windows XP.
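
    A hedged sketch of one way to wire this up with PuTTY's plink, assuming you can run an SSH server on a machine you control at home (home.example.com is a placeholder) and that the corporate proxy permits outbound CONNECT (PuTTY can be pointed at the proxy under Connection > Proxy). From the workstation, open a reverse tunnel that publishes the database port on the home machine:

        rem run on the workstation (e.g. via LogMeIn); afterwards, connecting
        rem to localhost:1234 on the home machine reaches 10.140.0.1:1234
        plink -N -R 1234:10.140.0.1:1234 user@home.example.com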

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
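
    A hedged guess: if the resources carry validators (hence the 304s) but no explicit freshness lifetime, the browser may choose to refetch them on a fresh navigation, while a reload forces conditional requests instead. Giving each resource an explicit lifetime usually lets normal navigation reuse the cache without any request at all, e.g.:

        Cache-Control: public, max-age=86400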

    Read the article

  • cannot connect via http but can via ssh in Windows 7

    - by Tim
    Hi. I have a strange problem in my Windows 7. Sometimes web browsers (i.e. Firefox and Chrome) work, sometimes they don't. But SSH is always working. What could be the reason, and how do I fix it? My router is a Linksys WRT54GL. Web browsing via Firefox on my Ubuntu machine is okay. Thanks and regards!

    Read the article

  • How to implement a secure authentication over HTTP?

    - by Zagorax
    I know that we have HTTPS, but I would like to know if there's an algorithm/approach/strategy that grants a reasonable security level without using SSL. I have read many solutions on the internet. Most of them are based on adding some time metadata to the hashes, but that requires both server and client to have their clocks in sync. Moreover, it seems to me that none of these solutions could prevent a man-in-the-middle attack.
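
    For the record, a sketch of the usual SSL-free compromise, a nonce-based challenge-response (this avoids exposing the password on the wire, but as the asker suspects it does nothing to stop an active man-in-the-middle from hijacking the authenticated session):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class ChallengeResponse
        {
            // server issues a fresh random nonce with the login form; the
            // client returns HMAC-SHA256(secret, nonce) instead of the secret
            public static string Respond(string sharedSecret, string nonce)
            {
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
                    return Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.UTF8.GetBytes(nonce)));
            }
        }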

    Read the article

  • Redirecting HTTP traffic from a local server on the web

    - by MrJackV
    Here is the situation: I have a webserver (let's call it C1) that is running an Apache/PHP server and is port-forwarded so that I can access it from anywhere. However, there is another computer within the webserver's LAN that runs an Apache server too (let's call it C2). I cannot change the port forwarding, nor can I change the Apache server (i.e. install custom modules). My question is: is there a way to access C2 within a directory of C1? (E.g. going to www.website.org/random_dir would let me browse the root of C2's Apache server.) I am trying to change as little as possible of the configuration (e.g. activating modules etc.). Is there a possible solution? Thanks in advance.

    Read the article

  • Remote HTTP to FTP

    - by jamd12
    I am on a very slow internet connection; unfortunately I only have access to expensive and slow wireless or satellite. I have set up an FTP with a computer supplier locally who has a nice 2 Mbps connection, and I am trying to set up a way of adding links remotely (Hotfile, Rapidshare, Fileserve, etc.) so that they can be downloaded onto the FTP server and then transferred a few times a week manually onto a portable HDD. On my home PC I use Internet Download Manager for all my downloads. Is there a simple way that I can add links remotely to Internet Download Manager on the FTP server, or perhaps another solution? The OS on the FTP server is Linux; I use Windows XP SP3 and Windows 7. I have not used FTP very much before, so any suggestions on how best to do this would be much appreciated.

    Read the article

  • Browser http port-forwarding

    - by Kakao
    When using a browser like Firefox, I need any URL of the domain example.com to have the port :8008 appended - not only when I type it in the address bar, but anywhere it is referenced within the served HTML page. All other domains should be left as-is. I know I can set up a proxy like Squid or use a PAC file on a web site, but I want it simpler if possible.
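
    For reference, the PAC route is only a few lines. A hedged sketch (dnsDomainIs is a standard PAC helper; the catch is that a PAC file can only choose a proxy, so this relies on the server at :8008 accepting proxy-style absolute-URI requests, which HTTP/1.1 servers are supposed to support):

        function FindProxyForURL(url, host) {
            // send example.com (and subdomains) to port 8008, all else direct
            if (host == "example.com" || dnsDomainIs(host, ".example.com"))
                return "PROXY example.com:8008";
            return "DIRECT";
        }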

    Read the article

  • Logging Into a site that uses Live.com authentication

    - by Josh
    I've been trying to automate a log in to a website I frequent, www.bungie.net. The site is associated with Microsoft and Xbox Live, and as such makes use of the Windows Live ID API when people log in to their site. I am relatively new to creating web spiders/robots, and I worry that I'm misunderstanding some of the most basic concepts. I've simulated logins to other sites such as Facebook and Gmail, but live.com has given me nothing but trouble. Anyway, I've been using Wireshark and the Firefox addon Tamper Data to try and figure out what I need to post and what cookies I need to include with my requests. As far as I know, these are the steps one must follow to log in to this site:

    1. Visit https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917
    2. Receive the cookies MSPRequ and MSPOK.
    3. Post the values from the form ID "PPSX", the values from the form ID "PPFT", your username, and your password, all to a changing URL similar to https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct= (there are a few numbers that change at the end of that URL).
    4. Live.com returns the user a page with more hidden forms to post. The client then posts the values from the form "ANON", the value from the form "ANONExp" and the values from the form "t" to the URL http://www.bungie.net/Default.aspx?wa=wsignin1.0
    5. After posting that data, the user is returned a variety of cookies, the most important of which is "BNGAuth", the login cookie for the site.

    Where I am having trouble is on the fifth step, but that doesn't necessarily mean I've done all the other steps correctly. I post the data from "ANON", "ANONExp" and "t", but instead of being returned a BNGAuth cookie, I'm returned a cookie named "RSPMaybe" and redirected to the home page. When I reviewed the Wireshark log, I noticed something that instantly stood out to me as different between the log when I logged in with Firefox and when my program ran. It could be nothing, but I'll include the picture here for you to review. I'm being returned an HTTP packet from the site before I post the data in the fourth step. I'm not sure how this is happening, but it must be a side effect of something I'm doing wrong in the HTTPS steps.
    using System;
    using System.Collections.Generic;
    using System.Collections.Specialized;
    using System.Text;
    using System.Net;
    using System.IO;
    using System.IO.Compression;
    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;
    using System.Web;

    namespace SpiderFromScratch
    {
        class Program
        {
            static void Main(string[] args)
            {
                CookieContainer cookies = new CookieContainer();
                Uri url = new Uri("https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917");
                HttpWebRequest http = (HttpWebRequest)HttpWebRequest.Create(url);
                http.Timeout = 30000;
                http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                http.Headers.Add("Keep-Alive", "300");
                http.Referer = "http://www.bungie.net/";
                http.ContentType = "application/x-www-form-urlencoded";
                http.CookieContainer = new CookieContainer();
                http.Method = WebRequestMethods.Http.Get;
                HttpWebResponse response = (HttpWebResponse)http.GetResponse();
                StreamReader readStream = new StreamReader(response.GetResponseStream());
                string HTML = readStream.ReadToEnd();
                readStream.Close();

                // gets the cookies (they are set in the eighth header)
                string[] strCookies = response.Headers.GetValues(8);
                response.Close();
                string name, value;
                Cookie manualCookie;
                for (int i = 0; i < strCookies.Length; i++)
                {
                    name = strCookies[i].Substring(0, strCookies[i].IndexOf("="));
                    value = strCookies[i].Substring(strCookies[i].IndexOf("=") + 1,
                        strCookies[i].IndexOf(";") - strCookies[i].IndexOf("=") - 1);
                    manualCookie = new Cookie(name, "\"" + value + "\"");
                    Uri manualURL = new Uri("http://login.live.com");
                    http.CookieContainer.Add(manualURL, manualCookie);
                }

                // stores the cookies to be used later
                cookies = http.CookieContainer;

                // Get the PPSX value
                string PPSX = HTML.Remove(0, HTML.IndexOf("PPSX"));
                PPSX = PPSX.Remove(0, PPSX.IndexOf("value") + 7);
                PPSX = PPSX.Substring(0, PPSX.IndexOf("\""));

                // Get this random PPFT value
                string PPFT = HTML.Remove(0, HTML.IndexOf("PPFT"));
                PPFT = PPFT.Remove(0, PPFT.IndexOf("value") + 7);
                PPFT = PPFT.Substring(0, PPFT.IndexOf("\""));

                // Get the random URL you POST to
                string POSTURL = HTML.Remove(0, HTML.IndexOf("https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct="));
                POSTURL = POSTURL.Substring(0, POSTURL.IndexOf("\""));

                // POST with cookies
                http = (HttpWebRequest)HttpWebRequest.Create(POSTURL);
                http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                http.Headers.Add("Keep-Alive", "300");
                http.CookieContainer = cookies;
                http.Referer = "https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268158321&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917";
                http.ContentType = "application/x-www-form-urlencoded";
                http.Method = WebRequestMethods.Http.Post;
                Stream ostream = http.GetRequestStream();

                // used to convert strings into bytes
                System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();

                // Post information
                byte[] buffer = encoding.GetBytes("PPSX=" + PPSX +
                    "&PwdPad=IfYouAreReadingThisYouHaveTooMuc&login=YOUREMAILGOESHERE&passwd=YOURWORDGOESHERE" +
                    "&LoginOptions=2&PPFT=" + PPFT);
                ostream.Write(buffer, 0, buffer.Length);
                ostream.Close();
                HttpWebResponse response2 = (HttpWebResponse)http.GetResponse();
                readStream = new StreamReader(response2.GetResponseStream());
                HTML = readStream.ReadToEnd();
                response2.Close();
                ostream.Dispose();
                foreach (Cookie cookie in response2.Cookies)
                {
                    Console.WriteLine(cookie.Name + ": ");
                    Console.WriteLine(cookie.Value);
                    Console.WriteLine(cookie.Expires);
                    Console.WriteLine();
                }

                // SET POSTURL value
                string POSTANON = "http://www.bungie.net/Default.aspx?wa=wsignin1.0";

                // Get the ANON value
                string ANON = HTML.Remove(0, HTML.IndexOf("ANON"));
                ANON = ANON.Remove(0, ANON.IndexOf("value") + 7);
                ANON = ANON.Substring(0, ANON.IndexOf("\""));
                ANON = HttpUtility.UrlEncode(ANON);

                // Get the ANONExp value
                string ANONExp = HTML.Remove(0, HTML.IndexOf("ANONExp"));
                ANONExp = ANONExp.Remove(0, ANONExp.IndexOf("value") + 7);
                ANONExp = ANONExp.Substring(0, ANONExp.IndexOf("\""));
                ANONExp = HttpUtility.UrlEncode(ANONExp);

                // Get the t value
                string t = HTML.Remove(0, HTML.IndexOf("id=\"t\""));
                t = t.Remove(0, t.IndexOf("value") + 7);
                t = t.Substring(0, t.IndexOf("\""));
                t = HttpUtility.UrlEncode(t);

                // POST the Info and Accept the Bungie Cookies
                http = (HttpWebRequest)HttpWebRequest.Create(POSTANON);
                http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                http.Headers.Add("Accept-Encoding", "gzip,deflate");
                http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                http.Headers.Add("Keep-Alive", "115");
                http.CookieContainer = new CookieContainer();
                http.ContentType = "application/x-www-form-urlencoded";
                http.Method = WebRequestMethods.Http.Post;
                http.Expect = null;
                ostream = http.GetRequestStream();
                int test = ANON.Length;
                int test1 = ANONExp.Length;
                int test2 = t.Length;
                buffer = encoding.GetBytes("ANON=" + ANON + "&ANONExp=" + ANONExp + "&t=" + t);
                ostream.Write(buffer, 0, buffer.Length);
                ostream.Close();

                // Here lies the problem, I am not returned the correct cookies.
                HttpWebResponse response3 = (HttpWebResponse)http.GetResponse();
                GZipStream gzip = new GZipStream(response3.GetResponseStream(), CompressionMode.Decompress);
                readStream = new StreamReader(gzip);
                HTML = readStream.ReadToEnd();

                // gets both cookies
                string[] strCookies2 = response3.Headers.GetValues(11);
                response3.Close();
            }
        }
    }

    This has given me problems and I've put many hours into learning about HTTP protocols, so any help would be appreciated. If there is an article detailing a similar log in to live.com, feel free to point the way. I've been looking far and wide for any articles with working solutions. If I could be clearer, feel free to ask, as this is my first time using Stack Overflow.

    Read the article
