Search Results

Search found 57458 results on 2299 pages for 'http response codes'.

Page 19/2299 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Apache - Reverse Proxy and HTTP 302 status message

    - by Rob
    My team is trying to set up an Apache reverse proxy from a customer's site into one of our web applications, so that http://www.example.com/app1/some-path maps to http://internal1.example.com/some-path. Inside our application we use Struts and have redirect = true set on certain actions in order to provide certain functionality. The 302 responses from these redirects cause the user to break out of the proxy, resulting in an error page for the end user:
        HTTP/1.1 302 Found
        Location: http://internal.example.com/some-path/redirect
    Is there any way to set up the reverse proxy in Apache so that the redirects are mapped back correctly, e.g. to http://www.example.com/app1/some-path/redirect?
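
    The standard fix for backend redirects escaping a reverse proxy is mod_proxy's ProxyPassReverse directive, which rewrites the Location header of 3xx responses on the way back out. A minimal sketch, assuming the hostnames and paths from the question (adjust to the real vhost):

        # In the vhost for www.example.com -- hostnames/paths assumed from the question
        ProxyPass        /app1/ http://internal1.example.com/
        # Rewrite Location headers from the backend so 302s stay behind the proxy
        ProxyPassReverse /app1/ http://internal1.example.com/
        # Also cover the second hostname variant seen in the redirect
        ProxyPassReverse /app1/ http://internal.example.com/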

    Read the article

  • How to display characters in http get response correctly with the right encoding

    - by DixieFlatline
    Hello! Does anyone know how to read č, š and ž characters in an HTTP GET response properly? When I make the request in a browser, the browser displays all characters correctly, but in my Java program with the Apache jars I don't know how to set the encoding right. I tried client.getParams().setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, "UTF-8"); but it's not working. My code: HttpClient client = new DefaultHttpClient(); String getURL = "http://www.google.com"; HttpGet get = new HttpGet(getURL); try { HttpResponse responseGet = client.execute(get); HttpEntity resEntityGet = responseGet.getEntity(); if (resEntityGet != null) { Log.i("GET RESPONSE", EntityUtils.toString(resEntityGet)); } } catch (Exception e) { e.printStackTrace(); }
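
    EntityUtils.toString decodes the body using the charset named in the Content-Type response header and falls back to ISO-8859-1 when none is declared, which is what mangles č, š and ž; HTTP_CONTENT_CHARSET only supplies a default for entities the client creates, not a decoding override. Passing the charset explicitly usually fixes it, assuming the server really sends UTF-8 — a sketch continuing the code above:

        // Decode with an explicit charset instead of the ISO-8859-1 fallback.
        if (resEntityGet != null) {
            String body = EntityUtils.toString(resEntityGet, "UTF-8");  // or the server's real charset
            Log.i("GET RESPONSE", body);
        }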

    Read the article

  • No exception, no error, but still I don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code: final Thread t = new Thread() { public void run() { Looper.prepare(); HttpClient client = new DefaultHttpClient(); HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000); HttpResponse response; JSONObject obj = new JSONObject(); try { HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp"); obj.put("Model", ReadIn1); obj.put("Product", ReadIn2); obj.put("Manufacturer", ReadIn3); obj.put("RELEASE", ReadIn4); obj.put("SERIAL", ReadIn5); obj.put("ID", ReadIn6); obj.put("ANDROID_ID", ReadIn7); obj.put("Language", ReadIn8); obj.put("BOARD", ReadIn9); obj.put("BOOTLOADER", ReadIn10); obj.put("BRAND", ReadIn11); obj.put("CPU_API", ReadIn12); obj.put("DISPLAY", ReadIn13); obj.put("FINGERPRINT", ReadIn14); obj.put("HARDWARE", ReadIn15); obj.put("UUID", ReadIn16); StringEntity se = new StringEntity(obj.toString()); se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json")); post.setEntity(se); post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp"); response = client.execute(post); if (response != null) { InputStream in = response.getEntity().getContent(); } } catch (Exception e) { e.printStackTrace(); } Looper.loop(); } }; t.start(); } } I want to send a JSON object to a website. As far as I can see, I set the header, but I still get this exception; can someone help me? (I'm using Android Studio.) Edit: I don't get any exceptions anymore, but I still do not receive the JSON packet. When I call the website manually, I get a log file entry. Does anyone know what's wrong? Edit 2: When I debug, I get the response "HTTP/1.1 400 Bad Request". I'm sure it's not a permissions problem. Any ideas?
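
    One likely culprit for the 400 in Edit 2 (an assumption, since the server log isn't shown): the Host header is set to a full URL. Host must carry only the hostname, many servers reject anything else with 400 Bad Request, and HttpClient fills the header in automatically from the request URI anyway. A sketch of the POST without the override:

        // Continuing the snippet above: drop the bogus header and let
        // HttpClient derive Host from the request URI.
        HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");
        StringEntity se = new StringEntity(obj.toString(), "UTF-8");
        se.setContentType("application/json");
        post.setEntity(se);
        // (no post.setHeader("host", "http://...") -- that value is not a valid Host)
        response = client.execute(post);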

    Read the article

  • Groovy Grails, How do you stream or buffer a large file in a Controller's response?

    - by Julian Noye
    Hi guys, I have a controller that makes a connection to a URL to retrieve a CSV file. I am able to send the file in the response using the following code, and this works fine: def fileURL = "www.mysite.com/input.csv" def thisUrl = new URL(fileURL); def connection = thisUrl.openConnection(); def output = connection.content.text; response.setHeader "Content-disposition", "attachment; filename=${'output.csv'}" response.contentType = 'text/csv' response.outputStream << output response.outputStream.flush() However, I don't think this method is appropriate for a large file, as the whole file is loaded into the controller's memory. I want to be able to read the file chunk by chunk and write it to the response chunk by chunk. Any ideas?
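
    Streaming is a matter of copying the connection's InputStream to the response OutputStream through a small buffer, instead of materializing connection.content.text as one String. A sketch with an assumed 8 KB buffer (plain Java syntax, which is also valid inside a Groovy controller):

        // Copy the upstream CSV to the client in 8 KB chunks; memory use
        // stays constant no matter how large the file is.
        URLConnection connection = new URL("http://www.mysite.com/input.csv").openConnection();
        response.setHeader("Content-disposition", "attachment; filename=output.csv");
        response.setContentType("text/csv");
        InputStream input = connection.getInputStream();
        OutputStream out = response.getOutputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = input.read(buffer)) != -1) {
            out.write(buffer, 0, n);    // forward each chunk as it arrives
        }
        out.flush();
        input.close();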

    Read the article

  • [Java] Send cookie with http request problem

    - by nkr1pt
    I'm trying to get a certain cookie in a Java client by making a series of HTTP requests. It looks like I'm getting a valid cookie from the server, but when I send a request to the final URL with the seemingly valid cookie, I should get some lines of XML in the response, yet the response is blank: either the cookie is wrong, or it has been invalidated because a session has closed, or there is some other problem I can't figure out. The cookie handed out by the server expires at the end of the session. It seems to me the cookie is valid, because when I make the same calls in Firefox, a similar cookie (same name, same first three letters, same length) is stored in Firefox, also expiring at the end of the session. If I then make a request to the final URL with only this particular cookie stored in Firefox (all other cookies removed), the XML is rendered nicely on the page. Any ideas what I am doing wrong in this piece of code? One other thing: when I use the value from the very similar cookie generated and stored by Firefox in this piece of code, the last request does give XML feedback in the HTTP response! // Validate url = new URL(URL_VALIDATE); conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Cookie", cookie); conn.connect(); String headerName = null; for (int i = 1; (headerName = conn.getHeaderFieldKey(i)) != null; i++) { if (headerName.equals("Set-Cookie")) { if (conn.getHeaderField(i).startsWith("JSESSIONID")) { cookie = conn.getHeaderField(i).substring(0, conn.getHeaderField(i).indexOf(";")).trim(); } } } // Get the XML url = new URL(URL_XML_TOTALS); conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Cookie", cookie); conn.connect(); // Get the response StringBuffer answer = new StringBuffer(); BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream())); String line; while ((line = reader.readLine()) != null) { answer.append(line); } reader.close(); // Output the response System.out.println(answer.toString());
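
    Rather than hand-parsing Set-Cookie (which breaks easily on attribute order, casing, and multiple cookies in one response), java.net.CookieManager, available since Java 6, can capture and replay JSESSIONID automatically. A minimal sketch, assuming the same two URL constants:

        import java.net.CookieHandler;
        import java.net.CookieManager;
        import java.net.CookiePolicy;

        // Install once, before opening any connection; every HttpURLConnection
        // in the JVM then stores and resends cookies automatically.
        CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));

        new URL(URL_VALIDATE).openConnection().getInputStream().close();      // receives JSESSIONID
        HttpURLConnection conn =
                (HttpURLConnection) new URL(URL_XML_TOTALS).openConnection(); // replays it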

    Read the article

  • Server returned HTTP response code: 500 for URL

    - by user617162
    java.io.IOException: Server returned HTTP response code: 500 for URL: http://www.huadt.com.cn/zh-cn/i/l/@357671030745308@V500@0000@AUTOLOW@1@11d590f7$GPRMC,06548.000,A,3959.8587,N,11617.2447,E,0.00,55.32,210311,,,A*56@@
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
        at hdt.SendCmdToP.Sendplatform(SendCmdToP.java:67)
        at hdt.SendCmdToP.process(SendCmdToP.java:198)
        at hdt.SendCmdToP.run(SendCmdToP.java:131)
    java.lang.NullPointerException
        at hdt.SendCmdToP.Sendplatform(SendCmdToP.java:91)
        at hdt.SendCmdToP.process(SendCmdToP.java:198)
        at hdt.SendCmdToP.run(SendCmdToP.java:131)
    A NullPointerException appears together with the 500 error when the connection is closed. Is it a firewall issue? If not, is it a problem in my code? Please help me figure out how to solve this. Thanks!
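
    Two things stand out (assumptions, since SendCmdToP.java isn't shown). First, the URL embeds a raw NMEA sentence, and characters such as $, * and commas should be percent-encoded before going into a URL. Second, on a 4xx/5xx status HttpURLConnection.getInputStream() throws, so a stream variable left unassigned would explain the NullPointerException at line 91; the server's error body is on getErrorStream(). A sketch:

        // Encode the payload segment; raw '$' and '*' can make servers balk.
        String payload = "@357671030745308@V500@0000@AUTOLOW@1@11d590f7$GPRMC,...";  // as in the question
        String urlText = "http://www.huadt.com.cn/zh-cn/i/l/" + URLEncoder.encode(payload, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) new URL(urlText).openConnection();
        int code = conn.getResponseCode();
        // getInputStream() throws on 500; read the error body instead of
        // leaving the stream null and hitting an NPE later.
        InputStream in = (code >= 400) ? conn.getErrorStream() : conn.getInputStream();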

    Read the article

  • nginx status code 200 and 304

    - by Chamnap
    I'm using nginx + Passenger, and I'm trying to understand the nginx responses 200 and 304. What do the two mean? Sometimes it responds with 304 and other times with 200. Reading the YUI blog, it seems the browser needs the "Last-Modified" header to validate its copy with the server. I'm wondering why the browser needs to verify the last-modified date. Here is my nginx configuration:
        location / {
            root /var/www/placexpert/public;   # <--- be sure to point to 'public'!
            passenger_enabled on;
            rack_env development;
            passenger_use_global_queue on;
            if ($request_filename ~* ^.+\.(jpg|jpeg|gif|png|ico|css|js|swf)$) {
                expires max;
                break;
            }
        }
    How would I add the "Last-Modified" header to the static files? Which value should I set?
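
    The split between the two codes is a normal conditional-GET cycle. 200 means the full body was sent; for static files nginx also sets Last-Modified from the file's mtime automatically, so nothing needs to be added. On a revisit the browser sends that date back as If-Modified-Since, and if the file is unchanged nginx answers 304 with an empty body, saving the transfer. The exchange is easy to reproduce from Java (a sketch; any HTTP client shows the same):

        // First fetch: expect 200 plus a Last-Modified validator.
        URL url = new URL("http://example.com/app.css");           // hypothetical asset URL
        HttpURLConnection first = (HttpURLConnection) url.openConnection();
        long stamp = first.getLastModified();                      // validator from the 200

        // Revisit with the validator: expect 304 Not Modified, no body.
        HttpURLConnection second = (HttpURLConnection) url.openConnection();
        second.setIfModifiedSince(stamp);
        System.out.println(second.getResponseCode());              // 304 if unchanged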

    Read the article

  • XML over HTTP with JMS and Spring

    - by Will Sumekar
    I have a legacy HTTP server where I need to send an XML file over an HTTP request (POST) using Java (not a browser), and the server will respond with another XML in its HTTP response. It is similar to a Web Service, but there's no WSDL and I have to follow the existing XML structure to construct the XML to be sent. I have done some research and found an example that matches my requirement here. The example uses HttpClient from Apache Commons. (I also found other examples, but they use the java.net networking package (like URLConnection), which is tedious, so I don't want to use them.) But it's also my requirement to use Spring and JMS. I know from Spring's reference that it's possible to combine HttpClient, JMS and Spring. My question is, how? Note that it's NOT in my requirement to use HttpClient. If you have a better suggestion, I'm open to it. Appreciate it. For your reference, here's the XML-over-HTTP example I've been talking about: /* * $Header: * $Revision$ * $Date$ * ==================================================================== * * Copyright 2002-2004 The Apache Software Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * ==================================================================== * * This software consists of voluntary contributions made by many * individuals on behalf of the Apache Software Foundation. For more * information on the Apache Software Foundation, please see * <http://www.apache.org/>. * * [Additional notices, if required by prior licensing conditions] * */ import java.io.File; import java.io.FileInputStream; import org.apache.commons.httpclient.HttpClient; import org.apache.commons.httpclient.methods.InputStreamRequestEntity; import org.apache.commons.httpclient.methods.PostMethod; /** * * This is a sample application that demonstrates * how to use the Jakarta HttpClient API. * * This application sends an XML document * to a remote web server using HTTP POST * * @author Sean C.
Sullivan * @author Ortwin Glück * @author Oleg Kalnichevski */ public class PostXML { /** * * Usage: * java PostXML http://mywebserver:80/ c:\foo.xml * * @param args command line arguments * Argument 0 is a URL to a web server * Argument 1 is a local filename * */ public static void main(String[] args) throws Exception { if (args.length != 2) { System.out.println( "Usage: java -classpath <classpath> [-Dorg.apache.commons."+ "logging.simplelog.defaultlog=<loglevel>]" + " PostXML <url> <filename>]"); System.out.println("<classpath> - must contain the "+ "commons-httpclient.jar and commons-logging.jar"); System.out.println("<loglevel> - one of error, "+ "warn, info, debug, trace"); System.out.println("<url> - the URL to post the file to"); System.out.println("<filename> - file to post to the URL"); System.out.println(); System.exit(1); } // Get target URL String strURL = args[0]; // Get file to be posted String strXMLFilename = args[1]; File input = new File(strXMLFilename); // Prepare HTTP post PostMethod post = new PostMethod(strURL); // Request content will be retrieved directly // from the input stream // Per default, the request content needs to be buffered // in order to determine its length. // Request body buffering can be avoided when // content length is explicitly specified post.setRequestEntity(new InputStreamRequestEntity( new FileInputStream(input), input.length())); // Specify content type and encoding // If content encoding is not explicitly specified // ISO-8859-1 is assumed post.setRequestHeader( "Content-type", "text/xml; charset=ISO-8859-1"); // Get HTTP client HttpClient httpclient = new HttpClient(); // Execute request try { int result = httpclient.executeMethod(post); // Display status code System.out.println("Response status code: " + result); // Display response System.out.println("Response body: "); System.out.println(post.getResponseBodyAsString()); } finally { // Release current connection to the connection pool // once you are done post.releaseConnection(); } } }

    Read the article

  • Reading the Set-Cookie instructions in an HTTP Response header

    - by Eduardo León
    Is there any standard means in PHP to read the Set-Cookie instructions in an HTTP Response header, without manually parsing it? More specifically, I want to read the value of the ASP.NET_SessionId cookie returned by an ASP.NET Web Service I am consuming. EDIT: I am consuming the Web Service using PHP's native SoapClient class. I can use the __getLastResponseHeaders() method to retrieve the whole of the HTTP response header returned by the Web Service: HTTP/1.1 200 OK Cache-Control: private, max-age=0 Content-Type: text/xml; charset=utf-8 Server: Microsoft-IIS/7.5 Set-Cookie: ASP.NET_SessionId=ku501l55o300ik3sa2gu3vzj; path=/; HttpOnly X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Tue, 11 Jan 2011 23:34:02 GMT Content-Length: 368 But I want to extract the value of the ASP.NET_SessionID cookie: ku501l55o300ik3sa2gu3vzj And, of course, I don't want to do it manually.

    Read the article

  • Why can't my HttpPost receive all the response data?

    - by Johnny
    I'm on Android 1.5, and my code is like this: HttpPost httpPost = new HttpPost(url); HttpEntity entity = new UrlEncodedFormEntity(params, HTTP.UTF_8); httpPost.setEntity(entity); HttpResponse response = httpClient.execute(httpPost); HttpEntity respEntity = response.getEntity(); String result = EntityUtils.toString(respEntity, DEFAULT_CHARSET); After this code executes successfully, the result is a truncated string. I've tried using a browser to test the URL with the same parameters; it works fine and returns all the data. What's wrong with this code? Are there any parameters I need to specify?

    Read the article

  • Correct syntax of an HTTP 100 Continue response

    - by PartlyCloudy
    For me, one of the weakest points of the HTTP 1.1 RFC and the various implementations around is how to deal with 100 Continue headers. I searched the web for a while and had a look at different implementations, but there is one thing I'm not sure of: what is the correct syntax of a 100 Continue message? Several sources claim that it must be a single response line without any further header lines; however, I can't find that reflected in RFC 2616. So which is right? HTTP/1.1 100 Continue or HTTP/1.1 100 Continue [Additional Headers…] ?
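
    RFC 2616 defines 100 Continue as a full provisional response: a status line, zero or more headers, and the empty line that terminates the header block, so both forms above are syntactically legal; the blank line is mandatory either way. From a server, sending the minimal form is just a matter of writing those bytes before the final response. A sketch:

        // Provisional response: status line + mandatory empty line, then
        // read the request body and send the real status (e.g. 200 OK).
        OutputStream out = socket.getOutputStream();                 // 'socket' assumed
        out.write("HTTP/1.1 100 Continue\r\n\r\n".getBytes("US-ASCII"));
        out.flush();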

    Read the article

  • How to handle "100 continue" HTTP message ?

    - by Stephane
    Hello, I'm writing a simplistic HTTP server that will accept PUT requests, mostly from cURL as the client, and I'm having a bit of an issue handling the "Expect: 100-continue" header. As I understand it, the server is supposed to read the headers, send back an "HTTP/1.1 100 Continue" response on the connection, read the stream up to the length given in "Content-Length", and then send back the real response code (usually "HTTP/1.1 200 OK", but any other valid HTTP answer should do). Well, that's exactly what my server does. The problem is that, apparently, if I send a "100 Continue" answer, cURL fails to report any subsequent HTTP error code and assumes the upload was a success. For instance, if the upload is rejected due to the nature of the content (there is a basic data check happening), I want the calling client to detect the problem and act accordingly. Am I missing something obvious? Thanks

    Read the article

  • The remote host closed the connection. The error code is 0x80070057

    - by Jalpesh P. Vadgama
    While creating a PDF or any other file from ASP.NET pages, I was getting the following error:
        Exception Type: System.Web.HttpException
        The remote host closed the connection. The error code is 0x80072746.
        at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.FlushCore(Byte[] status, Byte[] header, Int32 keepConnected, Int32 totalBodySize, Int32 numBodyFragments, IntPtr[] bodyFragments, Int32[] bodyFragmentLengths, Int32 doneWithSession, Int32 finalStatus, Boolean& async)
        at System.Web.Hosting.ISAPIWorkerRequest.FlushCachedResponse(Boolean isFinal)
        at System.Web.Hosting.ISAPIWorkerRequest.FlushResponse(Boolean finalFlush)
        at System.Web.HttpResponse.Flush(Boolean finalFlush)
        at System.Web.HttpResponse.Flush()
        at System.Web.UI.HttpResponseWrapper.System.Web.UI.IHttpResponse.Flush()
        at System.Web.UI.PageRequestManager.RenderFormCallback(HtmlTextWriter writer, Control containerControl)
        at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
        at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
        at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer)
        at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output)
        at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer)
        at System.Web.UI.HtmlFormWrapper.System.Web.UI.IHtmlForm.RenderControl(HtmlTextWriter writer)
        at System.Web.UI.PageRequestManager.RenderPageCallback(HtmlTextWriter writer, Control pageControl)
        at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
        at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
        at System.Web.UI.Page.Render(HtmlTextWriter writer)
        at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    After searching and analyzing, I found that the client had disconnected while I was still flushing the response, which I do when creating PDF files from the stream. To fix this kind of error, we can use the Response.IsClientConnected property to check whether the client is still connected, and only then flush and end the response. Here is the sample code for the fix: if (Response.IsClientConnected) { Response.Flush(); Response.End(); } That's it. Hope this helps you; stay tuned for more. Until then, happy programming!

    Read the article

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been gzipped or deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary and not plain old XML. The client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called. Some further notes: (1) I can't change the WCF service or client as they are supplied by a 3rd party. I can however attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the SOAP body, but the entire message. Any ideas/solutions? * SOLVED * It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article which advocates a helper class: public class CompressibleHttpRequestCreator : IWebRequestCreate { public CompressibleHttpRequestCreator() { } WebRequest IWebRequestCreate.Create(Uri uri) { HttpWebRequest httpWebRequest = Activator.CreateInstance(typeof(HttpWebRequest), BindingFlags.CreateInstance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance, null, new object[] { uri, null }, null) as HttpWebRequest; if (httpWebRequest == null) { return null; } httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate; return httpWebRequest; } } and also an addition to the application configuration file: <configuration> <system.net> <webRequestModules> <remove prefix="http:"/> <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" /> </webRequestModules> </system.net> </configuration> What seems to be happening is that WCF eventually asks some factory deep down in System.Net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF. When I tried to apply this technique to Web Services I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Service proxy class and add this method: protected override System.Net.WebRequest GetWebRequest(Uri uri) { System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri); rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate; return rq; } Note that it did not matter whether the CompressibleHttpRequestCreator and the <webRequestModules> block from the application config file were present or not. For Web Services, only overriding GetWebRequest in the Web Service proxy worked.

    Read the article

  • insert null character using tamper data

    - by Jeremy Comulu
    Using the Firefox extension Tamper Data (for modifying HTTP requests that Firefox makes), how do I insert a null character into a POST field? I can enter normal characters, but binary characters are not urlencoded and are sent as-is, so how do I get a null character into a field? If you know of a Firefox extension like Tamper Data with which I can do this, or a way to do it using Tamper Data itself, please post.
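
    In an application/x-www-form-urlencoded body, a null byte is just the percent-encoded octet %00, and Tamper Data should pass the body through verbatim, so typing %00 into the raw POST data is usually enough. A quick Java one-off confirms the encoding:

        // A NUL character percent-encodes to %00 in form data.
        String encoded = URLEncoder.encode("\u0000", "UTF-8");
        System.out.println(encoded);   // prints %00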

    Read the article

  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our server admin set up two virtual Windows 2008 machines with IIS installed and set them up under NLB. I came in and changed the application pool the website was running under to our domain account that has proper access to the database and to the file share hosting our .NET web application Sitefinity, and changed it to .NET 4 Integrated. NLB and everything ran fine on both servers. He brought up the third server for our cluster on Tuesday and I performed the same actions. The only difference was that I was given admin rights on the third server so I could set it up remotely instead of going to his office. He has full control over the share and the NTFS permissions on \\hostname\Sitefinity, and I believe I only had read access. I pointed the web site to the same \\hostname\Sitefinity\sitename share that the others were on, and the authentication/authorization test settings passed. I hit the site from http://localhost (as I did successfully on the other two before trying the cluster's IP address) and received an HTTP Error 401.3 - Unauthorized. I've verified many times that the application pool is running under the same service account. I tried hitting just a simple test.htm: it works fine on the first two servers, but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory and re-pointed the website, and that ran perfectly. I turned on Failed Request Tracing, and it acts as if requests are still running under the local IUSR account (instead of my domain account). Here is an excerpt of the File Cache Access Start and the error from the trace:
        FileName \\hostname\sitefinity\sitename\test.htm
        UserName IUSR
        DomainName NT AUTHORITY
        ----------
        Successful false
        FileFromCache false
        FileAddedToCache false
        FileDirmoned true
        LastModCheckErrorIgnored true
        ErrorCode 2147942405
        LastModifiedTime
        ErrorCode Access is denied. (0x80070005)
        ----------
        ModuleName IIS Web Core
        Notification 2
        HttpStatus 401
        HttpReason Unauthorized
        HttpSubStatus 3
        ErrorCode 2147942405
        ConfigExceptionInfo
        Notification AUTHENTICATE_REQUEST
        ErrorCode Access is denied. (0x80070005)
        ----------
    My personal AD account was then granted read/write permissions on the share, so I created a new application pool and set the site under it in case there was an issue with the application pool, but no success. I created another under my own account and it still failed. It just seems like it's not trying to access the files under the account my application pools are running under, although that's the only way I've done things before. I set the Physical Path Credentials in Advanced Settings on the site to the service account, and it threw a 500 error of some sort, so I assume that's not the answer (and I don't have to do it on the other servers). It's as if I'm somehow forcing impersonation of the IUSR account?

    Read the article

  • Logging Into a site that uses Live.com authentication with C#

    - by Josh
    I've been trying to automate a login to a website I frequent, www.bungie.net. The site is associated with Microsoft and Xbox Live, and as such makes use of the Windows Live ID API when people log in. I am relatively new to creating web spiders/robots, and I worry that I'm misunderstanding some of the most basic concepts. I've simulated logins to other sites such as Facebook and Gmail, but live.com has given me nothing but trouble. Anyway, I've been using Wireshark and the Firefox add-on Tamper Data to try to figure out what I need to post and which cookies I need to include with my requests. As far as I know, these are the steps one must follow to log in to this site:
    1. Visit https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917
    2. Receive the cookies MSPRequ and MSPOK.
    3. Post the values from the form ID "PPSX", the values from the form ID "PPFT", your username and your password to a changing URL similar to: https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct= (a few numbers change at the end of that URL)
    4. Live.com returns the user a page with more hidden forms to post. The client then posts the values from the form "ANON", the value from the form "ANONExp" and the values from the form "t" to the URL: http://www.bungie.net/Default.aspx?wa=wsignin1.0
    5. After posting that data, the user is returned a variety of cookies, the most important of which is "BNGAuth", the login cookie for the site.
    Where I am having trouble is the fifth step, but that doesn't necessarily mean I've done all the other steps correctly. I post the data from "ANON", "ANONExp" and "t", but instead of being returned a BNGAuth cookie, I'm returned a cookie named "RSPMaybe" and redirected to the home page. When I reviewed the Wireshark log, I noticed something that instantly stood out to me as different between the Firefox login and my program's run. It could be nothing, but I'll include the picture here for you to review. I'm being returned an HTTP packet from the site before I post the data in the fourth step. I'm not sure how this is happening, but it must be a side effect of something I'm doing wrong in the HTTPS steps.
![alt text][1] http://img391.imageshack.us/img391/6049/31394881.gif using System; using System.Collections.Generic; using System.Collections.Specialized; using System.Text; using System.Net; using System.IO; using System.IO.Compression; using System.Security.Cryptography; using System.Security.Cryptography.X509Certificates; using System.Web; namespace SpiderFromScratch { class Program { static void Main(string[] args) { CookieContainer cookies = new CookieContainer(); Uri url = new Uri("https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917"); HttpWebRequest http = (HttpWebRequest)HttpWebRequest.Create(url); http.Timeout = 30000; http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)"; http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"; http.Headers.Add("Accept-Language", "en-us,en;q=0.5"); http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7"); http.Headers.Add("Keep-Alive", "300"); http.Referer = "http://www.bungie.net/"; http.ContentType = "application/x-www-form-urlencoded"; http.CookieContainer = new CookieContainer(); http.Method = WebRequestMethods.Http.Get; HttpWebResponse response = (HttpWebResponse)http.GetResponse(); StreamReader readStream = new StreamReader(response.GetResponseStream()); string HTML = readStream.ReadToEnd(); readStream.Close(); //gets the cookies (they are set in the eighth header) string[] strCookies = response.Headers.GetValues(8); response.Close(); string name, value; Cookie manualCookie; for (int i = 0; i < strCookies.Length; i++) { name = strCookies[i].Substring(0, strCookies[i].IndexOf("=")); value = strCookies[i].Substring(strCookies[i].IndexOf("=") + 1, strCookies[i].IndexOf(";") - strCookies[i].IndexOf("=") - 1); manualCookie = new Cookie(name, "\"" + value + "\""); Uri manualURL = new Uri("http://login.live.com"); http.CookieContainer.Add(manualURL, manualCookie); } //stores the cookies to be used later cookies = http.CookieContainer; //Get the PPSX value string PPSX = HTML.Remove(0, HTML.IndexOf("PPSX")); PPSX = PPSX.Remove(0, PPSX.IndexOf("value") + 7); PPSX = PPSX.Substring(0, PPSX.IndexOf("\"")); //Get this random PPFT value string PPFT = HTML.Remove(0, HTML.IndexOf("PPFT")); PPFT = PPFT.Remove(0, PPFT.IndexOf("value") + 7); PPFT = PPFT.Substring(0, PPFT.IndexOf("\"")); //Get the random URL you POST to string POSTURL = HTML.Remove(0, HTML.IndexOf("https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct=")); POSTURL = POSTURL.Substring(0, POSTURL.IndexOf("\"")); //POST with cookies http = (HttpWebRequest)HttpWebRequest.Create(POSTURL); http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)"; http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"; http.Headers.Add("Accept-Language", "en-us,en;q=0.5"); http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7"); http.Headers.Add("Keep-Alive", "300"); http.CookieContainer = cookies; http.Referer = "https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268158321&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917"; http.ContentType = "application/x-www-form-urlencoded"; http.Method = WebRequestMethods.Http.Post; Stream ostream = http.GetRequestStream(); //used to convert strings into bytes System.Text.ASCIIEncoding encoding = new 
System.Text.ASCIIEncoding(); //Post information byte[] buffer = encoding.GetBytes("PPSX=" + PPSX +"&PwdPad=IfYouAreReadingThisYouHaveTooMuc&login=YOUREMAILGOESHERE&passwd=YOURWORDGOESHERE" + "&LoginOptions=2&PPFT=" + PPFT); ostream.Write(buffer, 0, buffer.Length); ostream.Close(); HttpWebResponse response2 = (HttpWebResponse)http.GetResponse(); readStream = new StreamReader(response2.GetResponseStream()); HTML = readStream.ReadToEnd(); response2.Close(); ostream.Dispose(); foreach (Cookie cookie in response2.Cookies) { Console.WriteLine(cookie.Name + ": "); Console.WriteLine(cookie.Value); Console.WriteLine(cookie.Expires); Console.WriteLine(); } //SET POSTURL value string POSTANON = "http://www.bungie.net/Default.aspx?wa=wsignin1.0"; //Get the ANON value string ANON = HTML.Remove(0, HTML.IndexOf("ANON")); ANON = ANON.Remove(0, ANON.IndexOf("value") + 7); ANON = ANON.Substring(0, ANON.IndexOf("\"")); ANON = HttpUtility.UrlEncode(ANON); //Get the ANONExp value string ANONExp = HTML.Remove(0, HTML.IndexOf("ANONExp")); ANONExp = ANONExp.Remove(0, ANONExp.IndexOf("value") + 7); ANONExp = ANONExp.Substring(0, ANONExp.IndexOf("\"")); ANONExp = HttpUtility.UrlEncode(ANONExp); //Get the t value string t = HTML.Remove(0, HTML.IndexOf("id=\"t\"")); t = t.Remove(0, t.IndexOf("value") + 7); t = t.Substring(0, t.IndexOf("\"")); t = HttpUtility.UrlEncode(t); //POST the Info and Accept the Bungie Cookies http = (HttpWebRequest)HttpWebRequest.Create(POSTANON); http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)"; http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"; http.Headers.Add("Accept-Language", "en-us,en;q=0.5"); http.Headers.Add("Accept-Encoding", "gzip,deflate"); http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7"); http.Headers.Add("Keep-Alive", "115"); http.CookieContainer = new CookieContainer(); http.ContentType = "application/x-www-form-urlencoded"; http.Method = WebRequestMethods.Http.Post; http.Expect = null; ostream = http.GetRequestStream(); int test = ANON.Length; int test1 = ANONExp.Length; int test2 = t.Length; buffer = encoding.GetBytes("ANON=" + ANON +"&ANONExp=" + ANONExp + "&t=" + t); ostream.Write(buffer, 0, buffer.Length); ostream.Close(); //Here lies the problem, I am not returned the correct cookies. HttpWebResponse response3 = (HttpWebResponse)http.GetResponse(); GZipStream gzip = new GZipStream(response3.GetResponseStream(), CompressionMode.Decompress); readStream = new StreamReader(gzip); HTML = readStream.ReadToEnd(); //gets both cookies string[] strCookies2 = response3.Headers.GetValues(11); response3.Close(); } } } This has given me problems and I've put many hours into learning about HTTP protocols so any help would be appreciated. If there is an article detailing a similar log in to live.com feel free to point the way. I've been looking far and wide for any articles with working solutions. If I could be clearer, feel free to ask as this is my first time using Stack Overflow. Cheers, --Josh

    Read the article

  • how to do asynchronous http requests with epoll and python 3.1

    - by flow
    There is an interesting page, http://scotdoyle.com/python-epoll-howto.html, about how to do asynchronous / non-blocking / AIO HTTP serving in Python 3. There is the Tornado web server, which does include a non-blocking HTTP client. I have managed to port parts of the server to Python 3.1, but the implementation of the client requires pyCurl and seems to have problems (with one participant stating how 'libcurl is such a pain in the neck', and looking at the incredibly ugly pyCurl page I doubt pyCurl will arrive in py3+ any time soon). Now that epoll is available in the standard library, it should be possible to do asynchronous HTTP requests out of the box with Python. I really do not want to use asyncore or whatnot; epoll has a reputation for being the ideal tool for the task, and it is part of the Python distribution, so using anything but epoll for non-blocking HTTP is highly counterintuitive (prove me wrong if you feel like it). Oh, and I feel threading is horrible. No threading. I use Stackless. People further interested in the topic of asynchronous HTTP should not miss this talk by Peter Portante at PyCon 2010; also of interest is the keynote, where speaker Antonio Rodriguez at one point emphasizes the importance of having up-to-date web technology libraries right in the standard library.

    Read the article

  • OData / WCF Data Service - HTTP 500 Error

    - by Eric
    I have created an OData/WCF service using Visual Studio 2010 on Windows XP SP3 with all current patches installed. When I click "View in Browser", the service opens and I see the 3 tables from my EF model. However, when I add a table name ("Commands" in this case) to the end of the query string, rather than seeing the data from the table, I get an HTTP 500 error. (This error (HTTP 500 Internal Server Error) means that the website you are visiting had a server problem which prevented the webpage from displaying.) I have not only followed the examples from two sites, but have also tried running the sample application that the blog poster sent me (which works on his machine), and still have no luck. The blog post is "Exposing OData from an Entity Framework Model". Does anyone have an idea why this occurs and how to resolve it? Here is the output of "View in Browser":
        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        <service xml:base="http://localhost:1883/VistaDBCommandService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app">
          <workspace>
            <atom:title>Default</atom:title>
            <collection href="Commands">
              <atom:title>Commands</atom:title>
            </collection>
            <collection href="Databases">
              <atom:title>Databases</atom:title>
            </collection>
            <collection href="Statuses">
              <atom:title>Statuses</atom:title>
            </collection>
          </workspace>
        </service>
    Thanks, Eric

    Read the article

  • Relation between HTTP Keep Alive duration and TCP timeout duration

    - by Suresh Kumar
    I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP keep-alive timeout value through some configuration. How is this value used by the web server? Is it just set on the underlying TCP/IP socket, i.e. are the HTTP keep-alive timeout and the TCP/IP keep-alive timeout the same, or are they treated differently? My understanding is (maybe incorrectly): the web server uses the default timeout on the underlying TCP socket (i.e. indefinite) regardless of the configured HTTP keep-alive timeout, and creates a worker thread that counts down the specified HTTP timeout interval. When the worker thread hits zero, it closes the connection. EDIT: My question is about the relation, or difference, between the two timeout durations, i.e. what happens when the HTTP keep-alive timeout duration and the timeout on the socket (SO_TIMEOUT) which the web server uses are different? Should I even worry about whether these two are the same?
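
    The two timeouts live at different layers and are independent, so it is fine for them to differ. TCP keep-alive (SO_KEEPALIVE) makes the kernel probe an otherwise silent peer; the HTTP keep-alive timeout is an application policy for how long an idle persistent connection may wait for the next request, and servers commonly implement it as a plain read timeout rather than a countdown thread. A minimal sketch of that pattern in Java (the port and the 15-second limit are made up):

        import java.io.InputStream;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class KeepAliveSketch {
            public static void main(String[] args) throws Exception {
                ServerSocket server = new ServerSocket(8080);
                Socket client = server.accept();
                client.setSoTimeout(15000);        // HTTP keep-alive idle limit (application-level)
                // client.setKeepAlive(true);      // unrelated: kernel TCP keep-alive probes
                try (InputStream in = client.getInputStream()) {
                    // ...parse HTTP requests in a loop; read() blocks between requests...
                    in.read();
                } catch (SocketTimeoutException idle) {
                    // keep-alive window expired with no new request
                } finally {
                    client.close();
                }
            }
        }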

    Read the article

  • Reading data from an open HTTP stream

    - by allenjones
    Hi, I am trying to use the .NET WebRequest/WebResponse classes to access the Twitter streaming API at "http://stream.twitter.com/spritzer.json". I need to be able to open the connection and read data incrementally from the open connection. Currently, when I call the WebRequest.GetResponse method, it blocks until the entire response is downloaded. I know there is a BeginGetResponse method, but that just does the same thing on a background thread. I need access to the response stream while the download is still happening, which just does not seem possible to me with these classes. There is a specific comment about this in the Twitter documentation: "Please note that some HTTP client libraries only return the response body after the connection has been closed by the server. These clients will not work for accessing the Streaming API. You must use an HTTP client that will return response data incrementally. Most robust HTTP client libraries will provide this functionality. The Apache HttpClient will handle this use case, for example." They point to the Apache HttpClient, but that doesn't help much because I need to use .NET. Any ideas whether this is possible with WebRequest/WebResponse, or do I have to go for lower-level networking classes? Maybe there are other libraries that will allow me to do this? Thanks, Allen

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong): With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content, ready for the next request. With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy, which caches it and then serves it to the browser. If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article

  • How can I prevent HTTPS on another domain from wrongly showing on my HTTP-only domain?

    - by Earlz
    So, I have a blog at domain.com. This blog is HTTP-only because I would gain almost nothing from adding SSL support. I now have a web service that I want to enable SSL support on, and it runs on the same server and IP address as my blog. I got it all working pretty easily, but if I go to https://domain.com I see a huge warning about an SSL certificate error, and if I click "OK" through the warning, I see the web service with SSL support, not my blog. My biggest fear with this scheme is Google indexing an HTTPS version of the site and penalizing my blog because the content between the two doesn't match. How can I force my blog's domain to either serve nothing on HTTPS, redirect back to my HTTP blog, or serve my blog itself (even with an invalid SSL certificate)? What can I do, preferably without buying another dedicated IP for my website?
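
    With a single IP and a certificate for only one name, the browser warning on https://domain.com cannot be avoided, but the wrong-content (and wrong-indexing) problem can: add a name-based vhost for the blog's hostname on port 443 that does nothing except redirect back to HTTP. A hedged nginx-style sketch (names and paths are placeholders; the same idea works with Apache name-based virtual hosts):

        server {                                  # the real SSL web service
            listen 443 ssl;
            server_name service.example.com;
            ssl_certificate     /etc/ssl/service.crt;
            ssl_certificate_key /etc/ssl/service.key;
            # ...service config...
        }
        server {                                  # HTTPS hits on the blog's name
            listen 443 ssl;
            server_name domain.com;
            ssl_certificate     /etc/ssl/service.crt;    # mismatched, so the warning remains
            ssl_certificate_key /etc/ssl/service.key;
            return 301 http://domain.com$request_uri;    # crawlers and users land back on HTTP
        }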

    Read the article
