Search Results

Search found 51988 results on 2080 pages for 'http headers'.

Page 20/2080 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Relation between TCP/IP Keep Alive and HTTP Keep Alive timeout values

    - by Suresh Kumar
    I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP keep-alive timeout through some configuration. How do web servers use this value? Is it simply set on the underlying TCP/IP socket, i.e. are the HTTP keep-alive timeout and the TCP/IP keep-alive timeout the same thing, or are they treated differently? My understanding (possibly incorrect) is that the web server leaves the default timeout on the underlying TCP socket (i.e. indefinite) and creates a worker thread that counts down the specified HTTP timeout interval; when the worker thread hits zero, it closes the connection.
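
    A minimal Python sketch of the two mechanisms side by side, just to make the distinction concrete (the TCP_KEEPIDLE option is Linux-only, and example.com stands in for any server):

    ```python
    import socket

    # Transport level: TCP keep-alive is a socket option; the kernel sends
    # empty probe segments after the connection has been idle for a while.
    sock = socket.create_connection(("example.com", 80))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)  # Linux-only: idle seconds before first probe

    # Application level: HTTP keep-alive only asks the server to keep the
    # connection open for further requests; the server enforces its own idle
    # timeout (e.g. Apache's KeepAliveTimeout) independently of TCP keep-alive.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n")
    print(sock.recv(4096))
    sock.close()
    ```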

    Read the article

  • Using PHP cURL with an HTTP Debugging Proxy

    - by Kane
    I'm using the app Fiddler to debug a GET attempt to a website via PHP cURL. In order to see the cURL traffic I had to tell cURL to use the Fiddler proxy (see code below).

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1);
        curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888');
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_HEADERFUNCTION, 'read_header');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
        curl_setopt($ch, CURLOPT_REFERER, "http://domain.com");
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_COOKIEJAR, "my_cookies.txt");
        curl_setopt($ch, CURLOPT_COOKIEFILE, "my_cookies.txt");
        curl_setopt($ch, CURLOPT_URL, "http://domain.com");
        $response = curl_exec($ch);

    But the problem is that in Fiddler I can only see this request (domain.com is just an alias):

        CONNECT domain.com:80 HTTP/1.1

    and this response:

        HTTP/1.1 200 Blind-Connection Established

    If I manually load the website in a browser, Fiddler gives me far more information: I can see the cookies, the headers, and what I'm receiving via the GET. Any ideas why Fiddler can't see more useful information from PHP cURL? Edit: I tried turning on the "Enable HTTPS Decryption" option inside Tools / Fiddler Options / HTTPS (which I'm not sure why I'd need, as I didn't tell cURL to use HTTPS). Unfortunately, with that setting changed I now get a response of:

        HTTP/1.1 502 Connection failed

    Edit: If it helps, the app Charles shows me far more information than Fiddler, but I'd really like to figure out Fiddler since I like it better.

    Read the article

  • Can HTTP URIs have non-ASCII characters?

    - by Cheeso
    I tried to find this in the relevant RFC, IETF RFC 3986, but couldn't figure it out. Do URIs for HTTP allow Unicode, or non-ASCII characters of any kind? Please cite the section and the RFC that supports your answer. NB: For those who might think this is not programming related - it is; it's related to an ISAPI filter I'm building. Addendum: I've read section 2.5 of RFC 3986. But RFC 2616, which I believe is the current HTTP protocol, predates 3986, and for that reason I'd suppose it cannot be compliant with 3986. Furthermore, even if or when the HTTP RFC is updated, there will still be the issue of rationalization - in other words, does an HTTP URI support ALL of the RFC 3986 provisos, including whatever is appropriate for including non-US-ASCII characters?
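
    As a practical aside, a URI as defined by RFC 3986 cannot carry non-ASCII characters literally; they are first encoded (almost always as UTF-8) and then percent-encoded. A small Python 2 sketch of that step, independent of the ISAPI question:

    ```python
    # -*- coding: utf-8 -*-
    import urllib

    # RFC 3986 defines URIs over a subset of US-ASCII; anything outside that
    # set has to be percent-encoded, typically after encoding the text as UTF-8.
    path = u"/café/menü".encode("utf-8")
    print(urllib.quote(path))   # /caf%C3%A9/men%C3%BC
    ```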

    Read the article

  • How do I debug an HTTP 502 error?

    - by Bialecki
    I have a Python Tornado server sitting behind an nginx frontend. Every now and then, but not every time, I get a 502 error. In the nginx access log I see this:

        127.0.0.1 - - [02/Jun/2010:18:04:02 -0400] "POST /a/question/updates HTTP/1.1" 502 173 "http://localhost/tagged/python" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"

    and in the error log:

        2010/06/02 18:04:02 [error] 14033#0: *1700 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: _, request: "POST /a/question/updates HTTP/1.1", upstream: "http://127.0.0.1:8888/a/question/updates", host: "localhost", referrer: "http://localhost/tagged/python"

    I don't think any errors show up in the Tornado log. How would you go about debugging this? Is there something I can put in the Tornado or nginx configuration to help debug it? EDIT: In addition, I get a fair number of 504 Gateway Timeout errors. Is it possible that the Tornado instance is just busy or something?
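
    For what it's worth, "111: Connection refused" means nothing accepted the TCP connection on the upstream address at that moment. A tiny Python probe, assuming the upstream really is 127.0.0.1:8888 as in the error log, run while the 502s are happening:

    ```python
    import socket

    # If this prints "refused", the Tornado process was not listening on the
    # port when nginx tried to proxy the request (crashed, restarting, or
    # bound to a different port/interface).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect(("127.0.0.1", 8888))
        print("upstream is accepting connections")
    except socket.error as e:
        print("upstream refused/unreachable: %s" % e)
    finally:
        s.close()
    ```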

    Read the article

  • Making HTTP POST request

    - by infrared
    I'm trying to make a POST request to retrieve information about a book. Here is the code, which returns HTTP code 302, Moved:

        import httplib, urllib
        params = urllib.urlencode({
            'isbn' : '9780131185838',
            'catalogId' : '10001',
            'schoolStoreId' : '15828',
            'search' : 'Search'
        })
        headers = {"Content-type": "application/x-www-form-urlencoded",
                   "Accept": "text/plain"}
        conn = httplib.HTTPConnection("bkstr.com:80")
        conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch", params, headers)
        response = conn.getresponse()
        print response.status, response.reason
        data = response.read()
        conn.close()

    When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code? Thanks. EDIT: Here's what I get when I call print response.msg:

        302 Moved
        Date: Tue, 07 Sep 2010 16:54:29 GMT
        Vary: Host,Accept-Encoding,User-Agent
        Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Content-Type: text/plain; charset=utf-8

    It seems that the Location header points to the same URL I'm trying to access in the first place? EDIT2: I've tried using urllib2 as suggested here. Here is the code:

        import urllib, urllib2
        url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
        values = {'isbn' : '9780131185838',
                  'catalogId' : '10001',
                  'schoolStoreId' : '15828',
                  'search' : 'Search' }
        data = urllib.urlencode(values)
        req = urllib2.Request(url, data)
        response = urllib2.urlopen(req)
        print response.geturl()
        print response.info()
        the_page = response.read()
        print the_page

    And here is the output:

        http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        Date: Tue, 07 Sep 2010 16:58:35 GMT
        Pragma: No-cache
        Cache-Control: no-cache
        Expires: Thu, 01 Jan 1970 00:00:00 GMT
        Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
        Vary: Accept-Encoding,User-Agent
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=utf-8
        Content-Language: en-US
        Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
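
    One hypothetical explanation for the difference from the browser is the JSESSIONID session cookie: the browser carries it (and any cookie set on the earlier search page) across requests, while the bare httplib POST does not. A rough sketch using a cookie-aware urllib2 opener - the site may of course also require other hidden form fields that the real page submits:

    ```python
    import cookielib, urllib, urllib2

    # Hypothetical: visit the search page first so the session cookies get set,
    # then POST the form with the same opener so the cookies are sent back.
    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

    search_page = ("http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView"
                   "?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828")
    opener.open(search_page).read()

    values = {'isbn': '9780131185838', 'catalogId': '10001',
              'schoolStoreId': '15828', 'search': 'Search'}
    response = opener.open("http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch",
                           urllib.urlencode(values))
    print(response.geturl())
    print(response.read())
    ```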

    Read the article

  • Tool to monitor HTTP traffic

    - by Samuh
    I have an application on my iPhone which sends out HTTP requests; is it possible to look into the HTTP stream using some tool? I use the standalone version of IEInspector's HttpAnalyzer on my Windows PC to monitor HTTP traffic from all processes, including apps on an Android phone (thanks to the Android Debug Bridge interface). Is there a similar tool for OS X that I can use for iPhone apps? Is this even allowed? Thanks in advance.

    Read the article

  • Weird IIS7 http redirection behavior

    - by wows
    I have a web server running Windows Server 2008 with IIS7. I have a bunch of websites which are all bound to the same IP address, but with different host header values. Most of the host headers are something like www.sitename.com. I also have a corresponding website entry for each which listens for the host "sitename.com" and redirects to "www.sitename.com" within IIS7 (to cater for non-www requests). Now this is all pretty straightforward, but I've noticed that when setting up the HTTP Redirection, some weird things happen: Firstly, the "redirect" website entries must point at a different physical directory than the site they redirect to, otherwise the redirection settings get applied to both sites at once. Secondly, sometimes while setting up HTTP Redirection on an individual site, HTTP Redirection gets set at the server level, and all sites start redirecting to that one URL. How does this happen? Under what circumstances could setting HTTP Redirection on an individual site affect all sites? This is scary!

    Read the article

  • HTTP Digest Authentication Fails With URL Parameters (CakePHP)

    - by NathanGaskin
    I have a RESTful API set up and working with CakePHP using mapResources() and parseExtensions(). Authentication is handled by CakePHP's Security component using HTTP digest authentication. Everything works fine unless I add parameters to the URL, in the form http://example.com/locations.xml?distance=4, which causes the authentication to fail every time. Any ideas? Edit: This seems to be an issue with the regex in parseDigestAuthData(). There's a semi-fix here: http://old.nabble.com/paginator-conflicts-with-Security-%3ErequireLogin---td16301573.html which now allows me to use the format http://example.com/locations/index/distance:4/.xml - but that's not RESTful and doesn't look all that pretty. Still, getting closer!

    Read the article

  • Fast ruby http library for large XML downloads

    - by Vlad Zloteanu
    I am consuming various XML-over-HTTP web services that return large XML files (over 2 MB). What would be the fastest Ruby HTTP library to reduce the download time? Required features: both GET and POST requests, and gzip/deflate downloads (Accept-Encoding: deflate, gzip) - very important. I am deciding between open-uri, Net::HTTP and curb, but you can also come up with other suggestions. P.S. To parse the response, I am using a pull parser from Nokogiri, so I don't need an integrated solution like rest-client or hpricot.

    Read the article

  • After an HTTP GET request, the resulting string is cut off - content has been consumed

    - by Jayomat
    Hi all, I'm making an HTTP GET request like this:

        try {
            HttpClient client = new DefaultHttpClient();
            String getURL = "http://busspur02.aseag.de/bs.exe?SID=5FC39&ScreenX=1440&ScreenY=900&CMD=CR&Karten=true&DatumT="+day+"&DatumM="+month+"&DatumJ="+year+"&ZeitH="+hour+"&ZeitM="+min+"&Intervall=60&Suchen=(S)uchen&GT0=Aachen&T0=H&HT0="+start_from+"&GT1=Aachen&T0=H&HT1="+destination+"";
            HttpGet get = new HttpGet(getURL);
            HttpResponse responseGet = client.execute(get);
            HttpEntity resEntityGet = responseGet.getEntity();
            if (resEntityGet != null) {
                // do something with the response
                Log.i("GET RESPONSE", EntityUtils.toString(resEntityGet));
            }
            ........

    It all works well... the only problem: the output from Log.i is cut off - it's not the complete HTML page. If I make the same request in a browser, I get about 3x the output compared to making the request in the emulator with the above code. What's wrong? ERROR:

        04-30 14:01:01.287: WARN/System.err(1088): java.lang.IllegalStateException: Content has been consumed
        04-30 14:01:01.297: WARN/System.err(1088): at org.apache.http.entity.BasicHttpEntity.getContent(BasicHttpEntity.java:84)
        04-30 14:01:01.297: WARN/System.err(1088): at org.apache.http.conn.BasicManagedEntity.getContent(BasicManagedEntity.java:100)
        04-30 14:01:01.307: WARN/System.err(1088): at org.apache.http.util.EntityUtils.toString(EntityUtils.java:112)
        04-30 14:01:01.307: WARN/System.err(1088): at org.apache.http.util.EntityUtils.toString(EntityUtils.java:146)
        04-30 14:01:01.307: WARN/System.err(1088): at mjb.project.AVV.ParseHTML.start(ParseHTML.java:177)
        04-30 14:01:01.307: WARN/System.err(1088): at mjb.project.AVV.ParseHTML.onCreate(ParseHTML.java:139)
        04-30 14:01:01.307: WARN/System.err(1088): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        04-30 14:01:01.327: WARN/System.err(1088): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459)
        04-30 14:01:01.327: WARN/System.err(1088): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
        04-30 14:01:01.327: WARN/System.err(1088): at android.app.ActivityThread.access$2200(ActivityThread.java:119)
        04-30 14:01:01.347: WARN/System.err(1088): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
        04-30 14:01:01.347: WARN/System.err(1088): at android.os.Handler.dispatchMessage(Handler.java:99)
        04-30 14:01:01.347: WARN/System.err(1088): at android.os.Looper.loop(Looper.java:123)
        04-30 14:01:01.347: WARN/System.err(1088): at android.app.ActivityThread.main(ActivityThread.java:4363)
        04-30 14:01:01.347: WARN/System.err(1088): at java.lang.reflect.Method.invokeNative(Native Method)
        04-30 14:01:01.357: WARN/System.err(1088): at java.lang.reflect.Method.invoke(Method.java:521)
        04-30 14:01:01.357: WARN/System.err(1088): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        04-30 14:01:01.357: WARN/System.err(1088): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        04-30 14:01:01.357: WARN/System.err(1088): at dalvik.system.NativeStart.main(Native Method)

    Read the article

  • Maximizing the number of true concurrent/parallel HTTP requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to make a lot of small HTTP requests to the server. I believe that when the number of allowed concurrent requests is exceeded, the subsequent requests are put in a queue. I am also aware that SL 4 has both an HTTP browser stack and an HTTP client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT. Also, I think I read somewhere that the number of concurrent requests is limited per domain, not overall, but I'm not sure whether this applies to the HTTP client stack as well. That means you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. And I even believe that subdomains are considered different, so you can also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time. I have ownership of the services and domain names, so I could easily set up subdomains for my services. Given those considerations I'm trying to optimize the number of concurrent HTTP requests to my server. Here are a few questions: Is it possible to use both stacks at the same time? Is the subdomain/domain story true for both stacks? For neither? If so, that would mean I could potentially have a number of concurrent requests equal to (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS, which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with these things. Thank you.

    Read the article

  • Writing an http sniffer (or any other application level sniffer)

    - by Ishi
    Dear all, I am trying my hand at understanding the pcap libraries. I am able to apply a filter and get the TCP payload at port 80. But what next? How can I read the HTTP data - suppose I want to know the value of the "User-Agent" field in the HTTP header; how should I proceed? I have searched the website (and googled a lot too), and could only find a related thread here: http://stackoverflow.com/questions/2073183/writing-a-http-sniffer, but it doesn't really get me anywhere. Thanks!
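
    Leaving the pcap side alone, once the TCP payload of a request has been reassembled, the HTTP header block is plain text up to the first blank line. A rough sketch in Python of pulling out the User-Agent value (real requests can span several TCP segments, so reassembly still has to happen first):

    ```python
    def user_agent(payload):
        # The header block ends at the first CRLF CRLF; the first line is the
        # request line ("GET / HTTP/1.1"), the rest are "Name: value" headers.
        head, _, _ = payload.partition("\r\n\r\n")
        for line in head.split("\r\n")[1:]:
            name, _, value = line.partition(":")
            if name.strip().lower() == "user-agent":
                return value.strip()
        return None

    sample = ("GET / HTTP/1.1\r\n"
              "Host: example.com\r\n"
              "User-Agent: Mozilla/5.0\r\n"
              "\r\n")
    print(user_agent(sample))   # Mozilla/5.0
    ```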

    Read the article

  • Ruby Equivalent of Python Requests Library (HTTP Client)

    - by Hartator
    There is a library in Python that I love called requests. requests is an HTTP client built on urllib3 - top-notch :) (http://docs.python-requests.org/en/latest/) I am looking for something similar in Ruby. Basically what I need is:

        - File upload support (multipart/form-data)
        - Easy GET/POST requests
        - Cookies that can be passed from a response object to a request object (to build a login script by hand)
        - Stable and flexible
        - Session support (so cookies don't have to be handled manually when we don't want to)

    I've looked at Typhoeus, but the code example on the home page doesn't work (the code has moved on and the get method is no longer directly accessible like that), so it's not off to a good start! :) Curb seems nice and I like curl; there is also RestClient, which seems popular, and em-http seems pretty fast according to benchmarks. There are also Patron and CurlFu, which I haven't had time to try. And of course Net::HTTP. But there doesn't seem to be a mainstream solution that everyone points to. I think a lot of people have been in my situation and I wonder what they have chosen and why?
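
    For comparison, here is roughly what the listed features look like in Python's requests (current versions of the library; the example.com URLs are placeholders) - the bar any Ruby candidate would need to meet:

    ```python
    import requests

    session = requests.Session()              # cookies persist across requests
    session.post("http://example.com/login",
                 data={"user": "me", "password": "secret"})

    # multipart/form-data file upload
    with open("photo.jpg", "rb") as f:
        session.post("http://example.com/upload", files={"photo": f})

    # sent with the cookies collected above
    print(session.get("http://example.com/me").text)
    ```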

    Read the article

  • Erlang: HTTP GET Parameters with Inets

    - by Ted Karmel
    The following post shows how to make a simple HTTP GET request with Erlang's inets: exploring erlang's http client. Sometimes URLs have GET parameters: http://example.net/item?parameter1=12&parameter2=1431&parameter3=8765 Besides including the parameters in the URL itself, is there a way to create variables and then send them with the request? An example would be appreciated.
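
    Whatever the client library, GET parameters only exist as part of the URL - there is nowhere else to put them - so "sending variables" means encoding them into the query string before making the request. Purely as an illustration of that step (in Python rather than Erlang):

    ```python
    import urllib

    # Build the query string from key/value pairs, then append it to the base URL.
    params = [("parameter1", 12), ("parameter2", 1431), ("parameter3", 8765)]
    url = "http://example.net/item?" + urllib.urlencode(params)
    print(url)   # http://example.net/item?parameter1=12&parameter2=1431&parameter3=8765
    ```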

    Read the article

  • Unable to HTTP PUT with libcurl to django-piston

    - by Jesse Beder
    I'm trying to PUT data using libcurl to mimic the command

        curl -u test:test -X PUT --data-binary @data.yaml "http://127.0.0.1:8000/foo/"

    which works correctly. My options look like:

        curl_easy_setopt(handle, CURLOPT_USERPWD, "test:test");
        curl_easy_setopt(handle, CURLOPT_URL, "http://127.0.0.1:8000/foo/");
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1);
        curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
        curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_data);
        curl_easy_setopt(handle, CURLOPT_READDATA, &yaml);
        curl_easy_setopt(handle, CURLOPT_INFILESIZE, yaml.size());
        curl_easy_perform(handle);

    I believe the read_data function works correctly, but if you ask, I'll post that code. I'm using Django with django-piston, and my update function is never called! (It is called when I use the command line version above.) libcurl's output is:

        * About to connect() to 127.0.0.1 port 8000 (#0)
        *   Trying 127.0.0.1...
        * connected
        * Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
        * Server auth using Basic with user 'test'
        > PUT /foo/ HTTP/1.1
        Authorization: Basic dGVzdDp0ZXN0
        Host: 127.0.0.1:8000
        Accept: */*
        Content-Length: 244
        Expect: 100-continue

        * Done waiting for 100-continue
        ** this is where my read_data handler confirms: read 244 bytes **
        * HTTP 1.0, assume close after body
        < HTTP/1.0 400 BAD REQUEST
        < Date: Thu, 13 May 2010 08:22:52 GMT
        < Server: WSGIServer/0.1 Python/2.5.1
        < Vary: Authorization
        < Content-Type: text/plain
        <
        Bad Request
        * Closing connection #0

    Read the article

  • Running an Endpoint locally does not provide access to the API explorer when an HTTP proxy is enabled

    - by harik
    I'm using Android Studio (0.5.8) on Windows 7 x64 to develop my Android app with a Google App Engine backend. When my machine has direct internet access and I launch the backend locally (as a dev app server), accessing my API endpoints through a web browser (Chrome) works as expected. Accessing the API explorer from the browser also works fine: http://localhost:8080/_ah/api/explorer But if internet access is configured through an HTTP proxy (in Android Studio and also in the web browser), the browser displays the backend's initial page but can't access the endpoint API explorer. Deploying the app backend to Google App Engine also fails with errors: gradlew backend:appengineUpdate The same works fine when direct internet access is available (not via an HTTP proxy). How can we make it work with an HTTP proxy as well? Any help is appreciated, thanks.

    Read the article

  • Connecting to a web server over HTTP, code snippet

    - by Emanuil
    I've got the following piece of code:

        try {
            HttpClient httpClient = new DefaultHttpClient();
            HttpPost httpPost = new HttpPost("http://www.flashstall.com/json.txt");
            HttpResponse httpResponse = httpClient.execute(httpPost);
        } catch (Exception e) {
            Log.e("m40", "Error in http connection " + e.toString());
        }

    When I run it, it logs "Error in http connection java.net.UnknownHostException: www.flashstall.com". What am I doing wrong?

    Read the article

  • Erlang: HTTP Accept Header with Inets

    - by Ted Karmel
    I am trying to do the equivalent of the following curl command: curl -H "Accept: text/plain" http://127.0.0.1:8033/stats I tried with a simple inets HTTP request, but it isn't processed. How can I specify the Accept header requirement in inets (or some other Erlang HTTP client, for that matter)?

    Read the article

  • Translating from cURL to straight HTTP requests

    - by Joshua
    What would the following cURL command look like as a generic HTTP request (without cURL)?

        feedUri="https://www.someservice.com/feeds\
        ?prettyprint=true"
        curl $feedUri --silent \
        --header "GData-Version: 2"

    For example, how could such an HTTP request be expressed in the browser address bar? In particular, how do I express the --header information if I were to just type out the plain HTTP request?
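
    A URL typed into the address bar can carry the query string, but not a custom header such as GData-Version; that exists only as a header line inside the request itself. A rough Python 2 equivalent of the command, assuming the service answers a plain GET, with the raw request lines shown in the comment:

    ```python
    import httplib

    # The request on the wire would look roughly like:
    #
    #   GET /feeds?prettyprint=true HTTP/1.1
    #   Host: www.someservice.com
    #   GData-Version: 2
    #
    conn = httplib.HTTPSConnection("www.someservice.com")
    conn.request("GET", "/feeds?prettyprint=true", headers={"GData-Version": "2"})
    response = conn.getresponse()
    print("%s %s" % (response.status, response.reason))
    print(response.read())
    conn.close()
    ```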

    Read the article
