Search Results

Search found 53597 results on 2144 pages for 'http requests'.

Page 10/2144

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello, when running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page. Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine, but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one I should be looking at. Any possible solutions would be much appreciated. Cheers

    Read the article

  • http(/* argument here */) How is this Object (Http) being used without an explicit or implicit method?

    - by Randin
    In the example for coding with JSON using Databinder Dispatch, Nathan uses an object (Http) without a method, shown here: import dispatch._ import Http._ Http("http://www.fox.com/dollhouse/" >>> System.out ) How is he doing this? Thank you for all of the answers; unfortunately I was not specific enough... It looks like it is simply passing an argument to a constructor of the class or companion object Http. In another example, I've seen another form: http = new Http http(/* argument here */) Is this valid Scala? I guess it must be, because the author is a Scala expert. But it makes no sense to me. Actions are usually performed by invoking methods on objects, whether explicitly as object.doSomething() or implicitly as object(something) (using the apply() method underneath the syntactic sugar). All I can think of is that a constructor is being used to do something in addition to constructing an object. In other words, it is having side effects, such as in this case going off and doing something on the web.
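
    What the question is circling around is Scala's apply method: Http(...) is sugar for Http.apply(...) on the companion object, and http(...) on an instance desugars to http.apply(...), so no extra constructor is involved. Purely as an analogy (Python, not Dispatch's actual API), the same idea of invoking an object like a function via a special method looks like this:

      class Http:
          """Hypothetical stand-in: an object that runs a request when called."""
          def __call__(self, request):
              # http(request) is sugar for http.__call__(request), much as Scala's
              # http(request) desugars to http.apply(request).
              print("executing:", request)

      http = Http()
      http("GET http://www.fox.com/dollhouse/")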

    Read the article

  • Eclipse 3.5.1 update error (HTTP 503)

    - by PiedPiper
    I'm trying to update Eclipse 3.5.1 (on Gentoo Linux) from the Galileo Discovery Site and I get this error message: Network connection problems encountered during search. Unable to access "http://download.eclipse.org/releases/galileo". Error accessing site stream. [Server returned HTTP response code: 503 for URL: http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd] Server returned HTTP response code: 503 for URL: http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd Error accessing site stream. [Server returned HTTP response code: 503 for URL: http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd] Server returned HTTP response code: 503 for URL: http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd It seems the 503 error code is intended to stop software from constantly downloading this file from w3.org. But how do I persuade Eclipse to stop requesting it?

    Read the article

  • Mercurial mirror: abort: No such file or directory: http://[...]/00manifest.i

    - by Sridhar Ratnakumar
    I am trying to set up a daily mirror of a Mercurial repository - code.python.org in particular - within our local network, and serve that via Apache HTTPD. On the remote host that runs Apache, I did this: $ cd /var/www $ hg clone http://code.python.org/hg/trunk/ On my MacBook, I ran: $ hg -v clone http://remote/trunk/ (falling back to static-http) abort: No such file or directory: http://remote/trunk/.hg/store/00manifest.i Google does not show any relevant result for this particular error. I remember back in those days being able to set up Bazaar mirrors by a simple clone. Doesn't Mercurial work like that? How do I set up a mirror that can further act like a clone URL?

    Read the article

  • FTP server offering HTTP access?

    - by MikeJ
    Is there an FTP server that can also provide access via HTTP? Or what do I need to do to set up a mirror of the FTP content accessible over HTTP? Some of my clients cannot access our FTP server because of corporate policy and so cannot get updates from me. However, they can use HTTP. Currently I use FileZilla because it was fast and easy to set up, but I would switch to something with more flexibility.

    Read the article

  • Check file revision through HTTP only

    - by romant
    If the SVN repo is exposed to users through, say, http://svn, and there's a file called script.sh, is there a way to get the latest revision number of script.sh by means of just HTTP access? Something along the lines of http://svn/rev?script.sh? Thank you.
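
    One avenue worth exploring, assuming the repository is served by Apache with mod_dav_svn (the usual setup behind an http:// checkout URL): the server answers WebDAV PROPFIND requests, and the file's DAV version-name property should correspond to the revision it was last changed in (an assumption worth verifying against your server). A rough Python sketch, with the host and path taken from the question and everything else an assumption:

      import http.client

      # Host and path from the question; the rest is an assumption about mod_dav_svn.
      HOST, PATH = "svn", "/script.sh"

      body = ('<?xml version="1.0" encoding="utf-8"?>'
              '<propfind xmlns="DAV:"><prop><version-name/></prop></propfind>')

      conn = http.client.HTTPConnection(HOST)
      conn.request("PROPFIND", PATH, body=body,
                   headers={"Depth": "0", "Content-Type": "text/xml"})
      resp = conn.getresponse()
      print(resp.status, resp.reason)   # 207 Multi-Status if the server supports it
      print(resp.read().decode())       # look for a <version-name> element in the XML
      conn.close()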

    Read the article

  • Limit HTTP VERBS on Apache2

    - by user72295
    I am trying to limit the use of certain HTTP verbs on my site. I entered the following into my VirtualHost config file within the Directory element: <Limit GET POST HEAD> Allow from all </Limit> <Limit PUT DELETE OPTIONS> Deny from all </Limit> This seemed to work, but with unexpected results. I ran the following telnet/HTTP commands before and after this change: open server 80 OPTIONS server/abs_path HTTP/1.1 User-Agent: Telnet/1.0 Host: server Before the change I received a successful response with the Allow header. After the change, however, I was expecting to receive a 405 'Method Not Allowed' response, but instead I received a 403 'Access Forbidden' response. What do I need to change in Apache to return the 405 HTTP response? Many thanks
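
    For what it's worth, the same before/after check can be scripted rather than typed into telnet; a small sketch in Python, with the host name and path as placeholders:

      import http.client

      # Placeholder host and path; adjust to the server under test.
      conn = http.client.HTTPConnection("server", 80)
      conn.request("OPTIONS", "/abs_path", headers={"User-Agent": "Telnet/1.0"})
      resp = conn.getresponse()
      print(resp.status, resp.reason)   # e.g. 403 Forbidden vs. 405 Method Not Allowed
      print(resp.getheader("Allow"))    # methods the server advertises, if any
      conn.close()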

    Read the article

  • Configure Tomcat behind a load balancer to respond on HTTP and HTTPS

    - by user253530
    I have 2 Tomcat machines behind a load balancer on Amazon EC2. Until now the load balancer was configured to respond only on HTTPS, so in order to access our services you would go to https://url. Tomcat was configured to listen on 8080, but the connector had additional params that tell Tomcat it is behind a proxy and should respond on HTTPS 443. The connector looks like this: <Connector scheme="https" secure="true" proxyPort="443" proxyHost="my.domain.name" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" /> What I would like to do is open port 80 on the load balancer and basically allow traffic on both HTTP and HTTPS. I've configured the load balancer to redirect all HTTP traffic to the Tomcat machines on port 8088. I was thinking that I could define a new connector so that all HTTPS traffic goes to 8080 and HTTP to 8088. Unfortunately I did not succeed. Here is my connector: <Connector port="8088" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" /> Am I missing something? Thanks

    Read the article

  • Understanding tcptraceroute versus http response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request: wget -O/dev/null http://hostname/ --2013-10-18 11:03:08-- http://hostname/ Resolving hostname... 10.9.211.129 Connecting to hostname|10.9.211.129|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: ‘/dev/null’ 2013-10-18 11:04:11 (88.0 KB/s) - ‘/dev/null’ saved [13641] So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up: $ sudo tcptraceroute hostname 80 Password: Selected device en2, address 192.168.113.74, port 54699 for outgoing packets Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max 1 192.168.113.1 0.842 ms 2.216 ms 2.130 ms 2 10.141.12.77 0.707 ms 0.767 ms 0.738 ms 3 10.141.12.33 1.227 ms 1.012 ms 1.120 ms 4 10.141.3.107 0.372 ms 0.305 ms 0.368 ms 5 12.112.4.41 6.688 ms 6.514 ms 6.467 ms 6 cr84.phlpa.ip.att.net (12.122.107.214) 19.892 ms 18.814 ms 15.804 ms 7 cr2.phlpa.ip.att.net (12.122.107.117) 17.554 ms 15.693 ms 16.122 ms 8 cr1.wswdc.ip.att.net (12.122.4.54) 15.838 ms 15.353 ms 15.511 ms 9 cr83.wswdc.ip.att.net (12.123.10.110) 17.451 ms 15.183 ms 16.198 ms 10 12.84.5.93 9.982 ms 9.817 ms 9.784 ms 11 12.84.5.94 14.587 ms 14.301 ms 14.238 ms 12 10.141.3.209 13.870 ms 13.845 ms 13.696 ms 13 * * * … 30 * * * I tried it again with 100 hops, just to be sure – the packets never get there. So how is it that the server does respond to requests via http, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed debugging why this server is slow (as opposed to why it responds at all).

    Read the article

  • How to do asynchronous HTTP requests with epoll and Python 3.1

    - by flow
    There is an interesting page, http://scotdoyle.com/python-epoll-howto.html, about how to do asynchronous / non-blocking / AIO HTTP serving in Python 3. There is the Tornado web server, which does include a non-blocking HTTP client. I have managed to port parts of the server to Python 3.1, but the implementation of the client requires pycurl and seems to have problems (with one participant stating how 'libcurl is such a pain in the neck', and looking at the incredibly ugly pycurl page I doubt pycurl will arrive in py3+ any time soon). Now that epoll is available in the standard library, it should be possible to do asynchronous HTTP requests out of the box with Python. I really do not want to use asyncore or whatnot; epoll has a reputation for being the ideal tool for the task, and it is part of the Python distribution, so using anything but epoll for non-blocking HTTP is highly counterintuitive (prove me wrong if you feel like it). Oh, and I feel threading is horrible. No threading. I use Stackless. People further interested in the topic of asynchronous HTTP should not miss out on this talk by Peter Portante at PyCon 2010; also of interest is the keynote, where speaker Antonio Rodriguez at one point emphasizes the importance of having up-to-date web technology libraries right in the standard library.
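
    For what it's worth, a bare-bones sketch of the idea: a single GET made non-blocking and driven by select.epoll from the standard library (Linux only, placeholder host, no error handling, and nothing like a full client):

      import select
      import socket

      # Linux-only sketch: one HTTP GET made non-blocking and driven by epoll.
      # Host and path are placeholders; error handling and timeouts are omitted.
      HOST, PATH = "example.com", "/"

      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.connect((HOST, 80))            # blocking connect, kept short for clarity
      sock.setblocking(False)

      request = ("GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n"
                 % (PATH, HOST)).encode("ascii")

      epoll = select.epoll()
      epoll.register(sock.fileno(), select.EPOLLOUT)

      response, done = b"", False
      while not done:
          for fileno, event in epoll.poll(5):
              if event & select.EPOLLOUT:
                  sock.send(request)                    # small request, fits in one send
                  epoll.modify(fileno, select.EPOLLIN)  # now wait for the reply
              elif event & select.EPOLLIN:
                  chunk = sock.recv(4096)
                  if chunk:
                      response += chunk
                  else:                                 # server closed: reply complete
                      done = True

      epoll.unregister(sock.fileno())
      epoll.close()
      sock.close()

      print(response.split(b"\r\n", 1)[0])              # e.g. b'HTTP/1.1 200 OK'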

    Read the article

  • cURL requests changed

    - by Andriy Mytroshyn
    I've started working with the cURL library; before working with it I compiled the library. I send a request and have a problem. The C++ code I used for working with cURL: CURL *curl=NULL; CURLcode res; struct curl_slist *headers=NULL; // init to NULL is important curl_slist_append(headers, "POST /oauth/authorize HTTP/1.1"); curl_slist_append(headers, "Host: sp-money.yandex.ru"); curl_slist_append(headers, "Content-Type: application/x-www-form-urlencoded"); curl_slist_append(headers, "charset: UTF-8"); curl_slist_append(headers, "Content-Length: 12345"); curl = curl_easy_init(); if(!curl) return 0; curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); curl_easy_setopt(curl, CURLOPT_URL, "sp-money.yandex.ru"); curl_easy_setopt(curl, CURLOPT_PROXY, "127.0.0.1:8888"); if( curl_easy_perform(curl)!=CURLE_OK) return 1; I used a proxy, Fiddler2, to check what data is sent to the server. When I check the sent data I get this result: POST HTTP://sp-money.yandex.ru/ HTTP/1.1 Host: sp-money.yandex.ru Accept: */* Connection: Keep-Alive Content-Length: 151 Content-Type: application/x-www-form-urlencoded I also checked this data using Wireshark, and the result is the same. Do you know why in the first line cURL wrote POST HTTP://sp-money.yandex.ru/ HTTP/1.1 when I sent POST /oauth/authorize HTTP/1.1? I used VS 2010, and I don't use a framework.

    Read the article

  • What is correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access a page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx fit that description.

    10.3.1 300 Multiple Choices
    The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection. If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.

    10.3.2 301 Moved Permanently
    The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

    10.3.3 302 Found
    The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client.

    10.3.4 303 See Other
    The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303.

    10.3.5 304 Not Modified
    If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields:
    - Date, unless its omission is required by section 14.18.1. If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.
    - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request
    - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant
    If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.

    10.3.6 305 Use Proxy
    The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers. Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences.

    10.3.7 306 (Unused)
    The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.

    10.3.8 307 Temporary Redirect
    The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI. If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    I'm using 302 for now, until I find THE correct answer.
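
    For reference, the mechanics of the redirect itself are just a status code plus a Location header. A minimal sketch using Python's standard library (the /login path is a placeholder, and 302 is used here only because it is what the question settles on; 303 or 307 would be sent the same way):

      from http.server import BaseHTTPRequestHandler, HTTPServer

      LOGIN_PATH = "/login"   # placeholder login page

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.path == LOGIN_PATH:
                  self.send_response(200)
                  self.send_header("Content-Type", "text/plain")
                  self.end_headers()
                  self.wfile.write(b"please log in\n")
              else:
                  # Not authenticated: send the browser to the login page.
                  self.send_response(302)   # or 303/307, as discussed above
                  self.send_header("Location", LOGIN_PATH)
                  self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), Handler).serve_forever()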

    Read the article

  • Parsing every part of an HTTP header field-value

    - by brickner
    Hi all. I'm parsing HTTP data directly from packets (either TCP reconstructed or not, you can assume it is). I'm looking for the best way to parse HTTP as accurately as possible. The main issue here is the HTTP header. Looking at the basic RFC of HTTP/1.1, it seems that HTTP header parsing would be complex. The RFC describes very complex regular expressions for different parts of the header. Should I write these regular expressions to parse the different parts of the HTTP header? The basic parsing I've written so far for HTTP header is for the generic HTTP header: message-header = field-name ":" [ field-value ] And I've included replacing inner LWS with SP and repeating headers with the same field-name with comma separated values as described in section 4.2. However, looking at section 14.9 for example would show that in order to parse the different parts of the field-value I need a much more complex parsing scheme. How do you suggest I should handle the complex parts of HTTP parsing (specifically the field-value) assuming I want to give the parser users the full capabilities of HTTP and to parse every part of HTTP? Design suggestions for this would also be appreciated. Thanks.
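
    As a starting point, a small sketch of just the generic message-header rule quoted above: it unfolds continuation lines (leading LWS) into a single SP and joins repeated field-names with ", " as section 4.2 allows. The per-field grammars (e.g. section 14.9 for Cache-Control) would still need dedicated parsers layered on top; the example bytes are made up.

      import re

      def parse_headers(raw: bytes) -> dict:
          text = raw.decode("iso-8859-1")
          text = re.sub(r"\r\n[ \t]+", " ", text)      # unfold LWS continuations
          headers = {}
          for line in text.split("\r\n"):
              if not line:
                  continue
              name, _, value = line.partition(":")
              name = name.strip().lower()              # field-names are case-insensitive
              value = value.strip()
              headers[name] = headers[name] + ", " + value if name in headers else value
          return headers

      example = (b"Host: example.com\r\n"
                 b"Accept: text/html,\r\n\tapplication/xml\r\n"
                 b"Cache-Control: no-cache\r\n"
                 b"Cache-Control: no-store\r\n")
      print(parse_headers(example))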

    Read the article

  • .htaccess blocking images on some internal pages

    - by jethomas
    I'm doing some web design for a friend and I noticed that everywhere else on her site images will load fine except for the subdirectory I'm working in. I looked in her .htaccess file and sure enough it is setup to deny people from stealing her images. Fair Enough, except the pages i'm working on are in her domain and yet I still get the 403 error. I'm pasting the .htaccess contents below but I replaced the domain names with xyz, 123 and abc. So specifically the page I'm on (xyz.com/DesignGallery.asp) pulls images from (xyz.com/machform/data/form_1/files) and it results in a forbidden error. RewriteEngine on <Files 403.shtml> order allow,deny allow from all </Files> RewriteCond %{HTTP_REFERER} !^http://xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://xyz.com/machform/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://xyz.com/machform/data/form_1/files/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/machform/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/machform/$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/machform/data/form_1/files/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.xyz.com$ [NC] RewriteRule .*\.(jpg|jpeg|gif|png|bmp)$ - [F,NC] deny from 69.49.149.17 RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^vendors\.html$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^vendors\.asp$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^ArtGraphics\.html$ "http\:\/\/www\.xyz\.com\/Art_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^ArtGraphics\.asp$ "http\:\/\/www\.xyz\.com\/Art_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^Gear\.asp$ "http\:\/\/www\.xyz\.com\/Gear_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^Gear\.html$ "http\:\/\/www\.xyz\.com\/Gear_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^NewsletterSign\-Up\.html$ "http\:\/\/www\.xyz\.com\/Newsletter\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^NewsletterSign\-Up\.asp$ "http\:\/\/www\.xyz\.com\/Newsletter\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^KidzStuff\.html$ "http\:\/\/www\.xyz\.com\/KidzStuff1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^KidzStuff\.asp$ "http\:\/\/www\.xyz\.com\/KidzStuff1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^Vendors\.html$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule 
^Vendors\.asp$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L]

    Read the article

  • Erlang : Handling HTTP Requests and Responses

    - by Ted Karmel
    What is the best Erlang library for processing HTTP requests and responses from within an Erlang application? I have taken a look at inets, but as a standalone application it seems more like a replacement for curl. I would like to access external APIs from within the Erlang application, so I would need to parse responses and be able to make subsequent requests with cookies generated from the response.

    Read the article

  • Issuing multiple requests using HTTP/1.1 Pipelining

    - by Robert S. Barnes
    When using HTTP/1.1 pipelining, what does the standard say about issuing multiple requests without waiting for each request to complete? What do servers do in practice? I ask because I once tried writing a client which would issue a batch of GET requests for multiple files, and I remember getting errors. I wasn't sure if it was due to me incorrectly issuing the GETs or needing to wait for each individual request to finish before issuing the next GET.
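
    As an illustration of what pipelining means on the wire, here is a small sketch that writes two GETs back-to-back on one connection before reading anything (host and paths are placeholders; per the spec a server must return responses in request order, but it is also free to close the connection early):

      import socket

      HOST = "example.com"
      requests = ("GET / HTTP/1.1\r\nHost: {h}\r\n\r\n"
                  "GET /about HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
                  ).format(h=HOST).encode("ascii")

      with socket.create_connection((HOST, 80)) as sock:
          sock.sendall(requests)            # both requests go out without waiting
          raw = b""
          while True:
              chunk = sock.recv(4096)
              if not chunk:
                  break
              raw += chunk

      # Both responses arrive on the same stream, in request order.
      print(raw.count(b"HTTP/1.1 "), "status lines received (roughly)")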

    Read the article

  • "Proxying" HTTP requests

    - by Richard
    Hi all, I have some software which runs as a black box, I have no access to it. This software makes HTTP requests. What I want to do is intercept these requests, forward them on, catch the response, do something with it, before passing the response back to the software. Can this be done? What's the best method? Thanks
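
    One stdlib-only way to sketch this is a tiny intercepting forward proxy: the black-box software is pointed at it as its HTTP proxy, the proxy fetches the real URL, hands the body to a hook, then returns the (possibly modified) response. GET only, no HTTPS, no error handling; the port number and the inspect hook are made up for illustration:

      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.request import Request, urlopen

      class InterceptingProxy(BaseHTTPRequestHandler):
          # When a client is configured to use a proxy, the request line carries
          # the absolute URL, so self.path is the full http://... address.
          def do_GET(self):
              upstream = urlopen(Request(self.path, headers={"User-Agent": "proxy-sketch"}))
              body = self.inspect(upstream.read())      # "do something with it"
              self.send_response(upstream.status)
              self.send_header("Content-Type",
                               upstream.headers.get("Content-Type", "text/html"))
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

          def inspect(self, body: bytes) -> bytes:      # hypothetical hook
              print("intercepted", len(body), "bytes for", self.path)
              return body

      if __name__ == "__main__":
          # Point the black-box software's HTTP proxy setting at localhost:8888.
          HTTPServer(("localhost", 8888), InterceptingProxy).serve_forever()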

    Read the article

  • Multiple requests to server question

    - by embedded
    I have a DB with user account information. I've scheduled a cron job which updates the DB with any new user data it fetches from their accounts. I was thinking that this may cause a problem since all requests are coming from the same IP address, and the server may block requests from that IP address. Is this the case? If so, how do I avoid being banned? Should I be using a proxy? Thanks

    Read the article

  • http://localhost does not work, http://127.0.0.1 works

    - by dskanth
    I am running Zend with Apache and see some strange behaviour... If I type http://127.0.0.1 in my browser URL, it works fine, but after typing http://localhost I get a file download window, saying the file type is application/x-httpd-php. In my httpd.conf file, I have the following under the VirtualHost *:80 definition: ServerName localhost DocumentRoot E:\zend\Apache2\htdocs\my_project\public Directory E:\zend\Apache2\htdocs\my_project\public Perhaps some configuration problem... can anyone guide me?

    Read the article

  • C# SOCKS proxy service for HTTP requests

    - by Ed
    I'm trying to build a service that will forward HTTP requests from agents like a browser to the Tor service. Problem is, the Tor service only accepts SOCKS4a connections. So my solution is to listen for HTTP requests, get the URL they're requesting, and make a request via Tor with the help of the Starksoft.Net.Proxy library. Then return the response. The library kind of works, but I'm not happy. It returns HTTP headers with the response and it can't handle images. So the responses are messed up. How could I improve my code? I'm very new to network programming. Sorry for the long example. public AnonymiserService(ILogger logger) { try { _logger = logger; _logger.Log("Listening on port {0}...", Properties.Settings.Default.ListeningPort); StartListener(new string[] { string.Format("http://*:{0}/", Properties.Settings.Default.ListeningPort) }); } catch (Exception ex) { _logger.LogError("Exception!", ex); } } private void StartListener(string[] prefixes) { if (!HttpListener.IsSupported) { _logger.LogError("HttpListener isn't supported on this machine!"); return; } HttpListener listener = new HttpListener(); foreach (string s in prefixes) listener.Prefixes.Add(s); while (true) { listener.Start(); IAsyncResult result = listener.BeginGetContext(new AsyncCallback(ListenerCallback), listener); result.AsyncWaitHandle.WaitOne(); } } private void ListenerCallback(IAsyncResult result) { try { // Get HTTP request HttpListener listener = (HttpListener)result.AsyncState; HttpListenerContext context = listener.EndGetContext(result); _logger.Log("Retrieving [{0}]", context.Request.RawUrl); // Create connection // Use Tor as proxy IProxyClient proxyClient = new Socks4aProxyClient("localhost", 9050); TcpClient tcpClient = proxyClient.CreateConnection(context.Request.UserHostName, 80); // Create message // Need to set Connection: close to close the connection as soon as it's done byte[] data = Encoding.UTF8.GetBytes(String.Format("GET {0} HTTP/1.1\r\nHost: {1}\r\nConnection: close\r\n\r\n", context.Request.Url.PathAndQuery, context.Request.UserHostName)); // Send message NetworkStream ns = tcpClient.GetStream(); ns.Write(data, 0, data.Length); // Pass on HTTP response HttpListenerResponse responseOut = context.Response; if (ns.CanRead) { byte[] buffer = new byte[32768]; int read = 0; string responseString = string.Empty; // Read response while ((read = ns.Read(buffer, 0, buffer.Length)) > 0) { responseString += Encoding.UTF8.GetString(buffer, 0, read); } // Remove headers if (responseString.IndexOf("HTTP/1.1 200 OK") > -1) responseString = responseString.Substring(responseString.IndexOf("\r\n\r\n")); // Forward response byte[] byteArray = Encoding.UTF8.GetBytes(responseString); responseOut.OutputStream.Write(byteArray, 0, byteArray.Length); } // Close streams responseOut.OutputStream.Close(); ns.Close(); // Close connection tcpClient.Close(); _logger.Log("Retrieved [{0}]", context.Request.RawUrl); } catch (Exception ex) { _logger.LogError("Exception in ListenerCallback!", ex); } }

    Read the article

  • How do you send a HEAD HTTP request in Python?

    - by fuentesjr
    So what I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image, for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?
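
    A minimal sketch of exactly that, using Python 3's http.client (in the Python 2 of the question's era the module was called httplib); the URL is the question's placeholder, and only the headers ever come back:

      import http.client
      from urllib.parse import urlsplit

      def content_type(url: str) -> str:
          """Send a HEAD request and return the Content-Type; no body is downloaded."""
          parts = urlsplit(url)
          conn = http.client.HTTPConnection(parts.netloc)
          conn.request("HEAD", parts.path or "/")
          resp = conn.getresponse()
          ctype = resp.getheader("Content-Type", "")
          conn.close()
          return ctype

      # The question's placeholder URL; expect e.g. "text/html" or "image/jpeg".
      print(content_type("http://somedomain/foo/"))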

    Read the article

  • Nginx with HTTP/HTTPS - HTTP seemed to be redirected to HTTPS all the time

    - by dwarfy
    I have this really weird behaviour with my Ubuntu 10.04 / nginx 1.2.3 server. Basically I changed the SSL certificates this morning, and ever since it has been behaving weirdly on all apps. GoDaddy is reporting that the HTTPS/SSL setup is correct. When I try a page it still works correctly when I'm using HTTPS. But when I try using HTTP, nginx reports error: 400 Bad Request The plain HTTP request was sent to HTTPS port After looking around on Google for hours, I've tried different setups (while originally my setup had been working correctly for a long time; I just renewed the certificates). I kind of found a half solution by adding this to my config: error_page 497 $request_uri; The really weird thing is that when I use this setup: server { listen 80; server_name john.johnrocks.eu; access_log /home/john/envs/john_prod/nginx_access.log; error_log /home/john/envs/john_prod/nginx_error.log; location / { uwsgi_pass unix:///home/john/envs/john_prod/john.sock; include uwsgi_params; } location /media { alias /home/john/envs/john_prod/johntab/www; } location /adminmedia { alias /home/john/envs/john_prod/johntab/www/adminmedia; } } I still have the same error when using HTTP (while nothing is set up for HTTPS here)?! I'm going crazy over this!

    Read the article
