Search Results

Search found 55168 results on 2207 pages for 'http header'.

Page 165/2207 | < Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >

  • Serializing an object into the body of a WCF request using webHttpBinding

    - by Bert
    I have a WCF service exposed with a webHttpBinding endpoint:

        [OperationContract(IsOneWay = true)]
        [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json,
                   BodyStyle = WebMessageBodyStyle.Bare,
                   UriTemplate = "/?action=DoSomething&v1={value1}&v2={value2}")]
        void DoSomething(string value1, string value2, MySimpleObject value3);

    In theory, if I call this, the first two parameters (value1 & value2) are taken from the Uri and the final one (value3) should be deserialized from the body of the request. Assuming I am using Json as the RequestFormat, what is the best way of serialising an instance of MySimpleObject into the body of the request before I send it? This, for instance, does not seem to work:

        HttpWebRequest sendRequest = (HttpWebRequest)WebRequest.Create(url);
        sendRequest.ContentType = "application/json";
        sendRequest.Method = "POST";
        using (var sendRequestStream = sendRequest.GetRequestStream())
        {
            DataContractJsonSerializer jsonSerializer = new DataContractJsonSerializer(typeof(MySimpleObject));
            jsonSerializer.WriteObject(sendRequestStream, obj);
            sendRequestStream.Close();
        }
        sendRequest.GetResponse().Close();
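
    For what it's worth, here is a minimal, self-contained sketch (not from the original post) of one way to build the JSON body with DataContractJsonSerializer and POST it. MySimpleObject's members and the service URL are placeholders, and the actual fix may simply be making the [DataContract] shape match what the service expects:

        using System;
        using System.IO;
        using System.Net;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class MySimpleObject
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public int Count { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var obj = new MySimpleObject { Name = "test", Count = 1 };

                // Serialize to a string first so the payload can be inspected or logged.
                string json;
                var serializer = new DataContractJsonSerializer(typeof(MySimpleObject));
                using (var ms = new MemoryStream())
                {
                    serializer.WriteObject(ms, obj);
                    json = Encoding.UTF8.GetString(ms.ToArray());
                }

                // POST the JSON as the raw request body.
                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    client.UploadString("http://localhost/service/?action=DoSomething&v1=a&v2=b", "POST", json);
                }
            }
        }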

    Read the article

  • Why will IIS 6 not serve my custom 404 page when I set the URL in 'Custom Errors'?

    - by Glenn Slaven
    I've got an ASP.NET MVC site with an Errors controller whose NotFound action works great for 404 errors that pass through .NET, but for stuff that doesn't (like static files) I've set the Custom Errors value for 404 to URL with a value of /Errors/NotFound. But when I do this and hit a non-existent page, the site just gives me this:

        The system cannot find the path specified.

    Is this because it's a dynamic URL? Can IIS not redirect 404 requests to dynamic URLs, or have I screwed up the config somewhere?

    Read the article

  • How do I use NTLM authentication with Active Directory

    - by Jon Works
    I am trying to implement NTLM authentication on one of our internal sites and everything is working. The one piece of the puzzle I do not have is how to take the information from NTLM and authenticate with Active Directory. There is a good description of NTLM and the encryption used for the passwords, which I used to implement this, but I am not sure of how to verify if the user's password is valid. I am using Coldfusion but a solution to this problem can be in any language (Java, Python, PHP, etc). Edit: I am using Coldfusion on Redhat Enterprise Linux. Unfortunately we cannot use IIS to manage this and instead have to write or use a 3rd party tool for this.

    Read the article

  • handling filename* parameters with spaces via RFC 5987 results in '+' in filenames

    - by Peter Friend
    I have some legacy code I am dealing with (so no, I can't just use a URL with an encoded filename component) that allows a user to download a file from our website. Since our filenames are often in many different languages, they are all stored as UTF-8. I wrote some code to handle the RFC 5987 conversion to a proper filename* parameter. This works great until I have a filename with non-ASCII characters and spaces. Per the RFC, the space character is not part of attr_char, so it gets encoded as %20. I have new versions of Chrome as well as Firefox and they all convert the %20 to + on download. I have tried not encoding the space and putting the encoded filename in quotes, and get the same result. I have sniffed the response coming from the server to verify that the servlet container wasn't mucking with my headers, and they look correct to me. The RFC even has examples that contain %20. Am I missing something, or do all of these browsers have a bug related to this? Many thanks in advance. The code I use to encode the filename is below. Peter

        public static boolean bcsrch(final char[] chars, final char c) {
            final int len = chars.length;
            int base = 0;
            int last = len - 1; /* Last element in table */
            int p;

            while (last >= base) {
                p = base + ((last - base) >> 1);
                if (c == chars[p]) return true;          /* Key found */
                else if (c < chars[p]) last = p - 1;
                else base = p + 1;
            }

            return false; /* Key not found */
        }

        public static String rfc5987_encode(final String s) {
            final int len = s.length();
            final StringBuilder sb = new StringBuilder(len << 1);
            final char[] digits = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
            final char[] attr_char = {'!','#','$','&','\'','+','-','.',
                '0','1','2','3','4','5','6','7','8','9',
                'A','B','C','D','E','F','G','H','I','J','K','L','M',
                'N','O','P','Q','R','S','T','U','V','W','X','Y','Z',
                '^','_',
                'a','b','c','d','e','f','g','h','i','j','k','l','m',
                'n','o','p','q','r','s','t','u','v','w','x','y','z',
                '|','~'};
            for (int i = 0; i < len; ++i) {
                final char c = s.charAt(i);
                if (bcsrch(attr_char, c))
                    sb.append(c);
                else {
                    final char[] encoded = {'%', 0, 0};
                    encoded[1] = digits[0x0f & (c >>> 4)];
                    encoded[2] = digits[c & 0x0f];
                    sb.append(encoded);
                }
            }
            return sb.toString();
        }

    Update: here is a screenshot of the download dialog I get for a file with Chinese characters and spaces, as mentioned in my comment.

    Read the article

  • .NET: Preserving some, but not all query params during redirect

    - by kasper pedersen
    Hi all,

    Could someone tell me if the code below would achieve what I want, which is: check if the query parameters 'return_path' and/or 'user_state' are present in the query string, and if so append them to the query string of the redirect URI. As I'm not a .NET dev and don't have a server to test this on, I was hoping someone could give me some feedback.

        ArrayList vars = new ArrayList();
        vars.Add("return_path");
        vars.Add("user_state");

        string newUrl = "/new/request/uri" + "?";

        ArrayList params = new ArrayList();
        foreach ( string key in Request.QueryString )
        {
            if (vars.contains(key))
            {
                params.Add(key + "=" + HttpUtility.URLPathEncode(Request.QueryString[key]));
            }
        }

        String[] paramArr = (String[]) params.ToArray( typeof (string) );
        String queryString = String.join("&", paramArr);

        Response.Redirect(newUrl);

    Thank you :)
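
    Since the snippet can't be tested as posted, here is a hedged sketch of how the same idea could look once it compiles, not a drop-in answer: params is a reserved word in C#, Contains and string.Join need their .NET casing, the pairs are encoded with HttpUtility.UrlEncode here, and the assembled query string has to be appended to newUrl before the redirect. The page class name is made up:

        using System;
        using System.Collections.Generic;
        using System.Web;

        public partial class RedirectPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Only these query parameters are carried over to the new URL.
                string[] keep = { "return_path", "user_state" };
                var parts = new List<string>();

                foreach (string key in Request.QueryString)
                {
                    if (key != null && Array.IndexOf(keep, key) >= 0)
                    {
                        parts.Add(key + "=" + HttpUtility.UrlEncode(Request.QueryString[key]));
                    }
                }

                string newUrl = "/new/request/uri";
                if (parts.Count > 0)
                {
                    newUrl += "?" + string.Join("&", parts.ToArray());
                }

                Response.Redirect(newUrl);
            }
        }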

    Read the article

  • Facebook links to my site resolve as 403 forbidden

    - by filip
    Hi, I'm experiencing a super weird problem. Whenever I post links to my website on Facebook, they come up as Forbidden. The site itself works great and I have not seen this when linking on other sites. Could this be a server misconfiguration? Any thoughts on where to look? Here's some info:

    I have a dedicated server running WHM 11.25.0, with 2 sites hosted on it using cPanel 11.25.0. The error message is:

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
        Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at ---- Port 80

    Read the article

  • The remote server returned an error: (404) Not Found.

    - by John
    I am running this piece of code to get the source of my webpage. Why does this function return a 404 error?

        Private Function getPageSource(ByVal URL As String) As String
            Dim webClient As New System.Net.WebClient()
            Dim strSource As String = webClient.DownloadString(URL)
            webClient.Dispose()
            Return strSource
        End Function
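
    The function itself is a straightforward WebClient call, so the 404 is most likely the server's actual response for that URL. A hedged diagnostic sketch (written in C#, though the same System.Net types exist in VB.NET; the URL and User-Agent value are placeholders) that catches the WebException and prints the status code and body the server actually returned:

        using System;
        using System.IO;
        using System.Net;

        class Program
        {
            static void Main()
            {
                var client = new WebClient();
                // Some servers refuse requests that carry no User-Agent header, which can
                // surface as an error response, so setting one is worth trying.
                client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";

                try
                {
                    string source = client.DownloadString("http://example.com/some-page");
                    Console.WriteLine(source);
                }
                catch (WebException ex)
                {
                    // The failed response is usually still attached to the exception.
                    var response = ex.Response as HttpWebResponse;
                    if (response != null)
                    {
                        Console.WriteLine("Status: " + (int)response.StatusCode);
                        using (var reader = new StreamReader(response.GetResponseStream()))
                        {
                            Console.WriteLine(reader.ReadToEnd());
                        }
                    }
                }
                finally
                {
                    client.Dispose();
                }
            }
        }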

    Read the article

  • httpUnit class not found

    - by josh
    I am trying httpUnit for the first time and just trying to get a response back from google.com. However, I keep getting the following error:

        com.meterware.httpunit.dom.HTMLDocumentImpl not found

    I have placed httpUnit.jar in the libraries folder of my NetBeans project, though, and can actually see that the class file is there. Any experiences with this?

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
            curl.url = url_info[:url]
            curl.perform
            file = File.new(url_info[:file], "wb")
            file << curl.body_str
            file.close
            # ... some other stuff
        }

    I have tried to download an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all URLs into a file and do this in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. Thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:

    Curl::Multi - it is somehow faster, but misses 50-90% of files (does not download them and gives no reason/code)
    multiple threads with Curl::Easy - around the same speed as single threaded

    Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than handle downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file containing a list of URLs. However, not all errors were handled, and it was also not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.

    Read the article

  • Rails 3 get raw post data and write it to tmp file

    - by Andrew
    I'm working on implementing Ajax-Upload for uploading photos in my Rails 3 app. The documentation says:

        For IE6-8, Opera, older versions of other browsers you get the file as you normally do with regular form-base uploads. For browsers which upload file with progress bar, you will need to get the raw post data and write it to the file.

    So, how can I receive the raw post data in my controller and write it to a tmp file so my controller can then process it? (In my case the controller is doing some image manipulation and saving to S3.)

    Some additional info: As I'm configured right now the post is passing these parameters:

        Parameters: {"authenticity_token"=>"...", "qqfile"=>"IMG_0064.jpg"}

    ... and the CREATE action looks like this:

        def create
          @attachment = Attachment.new
          @attachment.user = current_user
          @attachment.file = params[:qqfile]
          if @attachment.save!
            respond_to do |format|
              format.js { render :text => '{"success":true}' }
            end
          end
        end

    ... but I get this error:

        ActiveRecord::RecordInvalid (Validation failed: File file name must be set.):
          app/controllers/attachments_controller.rb:7:in `create'

    Read the article

  • http_post_data adding extra characters in response

    - by Siyam
    Hey guys, I am getting some extra characters like '5ae' and '45c' interspersed with the valid data when using http_post_data. The data I am sending is XML and so is the response. The response contains these weird characters, which makes the XML invalid. If I use fsockopen I do not have this issue. Would really like some input on this.

    Read the article

  • Best Approach to process images in Django

    - by primalpop
    I have an application with an Android front end and Django as the back end. As part of the answers here, I'm confused about which approach I should take to send images to the Django server. I have 2 options at my disposal, as Piro pointed out there: 1) sending the image as a multipart entity, or 2) sending the image as a string after encoding it with Base64. So I am looking for the approach that would be easiest to process in Django. The images are small in size (<200kb) and few in number (<10). Any suggestions or pointers are most welcome.

    Read the article

  • Tomcat fails on first request in combination with jsvc

    - by Roalt
    I have a web application where the first request may take a few seconds as some singletons are initialised. I've used the mod_proxy and jsvc construction mentioned in this question and described on this page to connect Apache with Tomcat (data is served via SSL). For the sample Tomcat application, everything works as it should. However, when using my application I get the following errors in my Apache log:

        [Wed Feb 10 09:48:29 2010] [error] [client 130.12.1.26] (70014)End of file found: proxy: error reading status line from remote server localhost
        [Wed Feb 10 09:48:29 2010] [error] [client 130.12.1.26] proxy: Error reading from remote server returned by /MyWebApp/MyWebApp.faces

    and the following error in my Tomcat output:

        10/02/2010 09:48:29 9947 jsvc.exec error: Service exit with a return value of 1

    I'm not an expert on this, so I would like to know what the cause of the problem is and where I should look for an answer.
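
    Not a diagnosis, just two observations: the jsvc line suggests the Tomcat process itself is exiting, which the catalina/jsvc logs should explain, and the proxy error can also show up when the backend is simply too slow answering that first, singleton-initialising request. For the slow-first-request case, a hedged sketch of giving mod_proxy more patience (paths, port and timeout value are placeholders):

        # httpd.conf / vhost config (Apache 2.2 mod_proxy)
        ProxyPass        /MyWebApp http://localhost:8080/MyWebApp
        ProxyPassReverse /MyWebApp http://localhost:8080/MyWebApp
        # Wait up to 5 minutes for the backend before giving up with a proxy error.
        ProxyTimeout 300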

    Read the article

  • Open a new tab in a browser with the response to an ASP request

    - by user89691
    It's a bit complicated, this one... Let's say I have a listing of PDF files displayed in the user's browser. Each filename is a link pointing not to the file, but to an ASP page, say:

        <a href="viewfile.asp?file=somefile.pdf">somefile.pdf</a>

    I want viewfile.asp to fetch the file (I've done that bit OK), but I then want the file to be loaded by the browser as if the user had opened the PDF file directly. And I want it to open in a new tab or browser window. Here's (simplified) viewfile.asp:

        <%
        var FileID = Request.querystring ("file") ;
        var ResponseBody = MyGETRequest (SomeURL + FileID) ;
        if (MyHTTPResult == 200) {
           if (ExtractFileExt (FileID).toLowerCase = "pdf") {
              ?????? // return file contents in new browser tab
           }
           ....
        %>

    Read the article

  • Cookie not renewing/overwriting in IE

    - by deceze
    I have a weird quirk with cookies in IE. When a user logs into the site, I'm generating a new session id and hence need to overwrite the cookie. The flow is basically:

    1. Client goes to the https://secure.example.com/users/login page, automatically receiving a session id.
    2. Client POSTs login credentials to the same address.
    3. Client receives the following headers together with a 302 redirect to https://secure.example.com/users/mypage:

        CAKEPHP=deleted; expires=Sun, 05-Apr-2009 04:50:35 GMT; path=/
        CAKEPHP=98hnIO23...; expires=Mon, 12 Apr 2010 04:50:36 GMT; path=/; secure

    4. Client is supposed to visit https://secure.example.com/users/mypage, presenting the new session id.

    This works in all browsers except IE (tested in 7 & 8). IE retains the old, unauthenticated session id and is redirected back to the login page. It works on my local test environment (using a self-signed certificate at https://localhost:8443/...), but not on the live server. I'm using CakePHP and simply issue a $this->Session->renew(), which produces the above cookie headers. Any ideas how to get IE to accept the new cookie?

    Read the article

  • Tomcat reporting 404 error on all of newly deployed WAR files?

    - by dacracot
    I deployed a WAR file into $TOMCAT_HOME/webapps by copying the file into the directory, just like I've done a thousand times before. Tomcat detects the WAR and inflates it. I can traverse the directory tree on my server at the command line (it's Fedora). But when I address the webapp from my client machine's browser, I get nothing but 404 errors. This has happened to the last two deployments of completely separate WARs. The first was a replacement of an existing WAR: I deleted the WAR and its inflated directory, then copied in the new WAR, which inflated... 404. I deleted everything again and put back the previously working WAR from backup. It inflated and worked. The second was a completely new, never-before-deployed WAR... nothing but 404. Other WARs are working, but now I'm afraid to change anything until I know what is going on. Any clues?

    Edit: From my comment you can see that the logs included "SEVERE: Error listenerStart" after the WAR was deployed by Tomcat. There were no stack traces or other errors reported.
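
    For what it's worth, "SEVERE: Error listenerStart" normally means a ServletContextListener in the webapp threw during startup, and Tomcat hides the stack trace unless the webapp ships its own logging configuration. A hedged sketch (standard Tomcat juli handler names; the log file prefix is a placeholder) of a logging.properties that, dropped into the failing WAR's WEB-INF/classes, usually makes the underlying exception appear in the logs:

        # WEB-INF/classes/logging.properties inside the failing WAR
        handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

        org.apache.juli.FileHandler.level = FINE
        org.apache.juli.FileHandler.directory = ${catalina.base}/logs
        org.apache.juli.FileHandler.prefix = failing-app.

        java.util.logging.ConsoleHandler.level = FINE
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter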

    Read the article

  • ASP.NET MVC Custom Error Pages with Magical Unicorn

    - by FLClover
    My question is regarding Pure.Krome's answer to this post. I tried implementing my pages' custom error messages using his method, yet there are some problems I can't quite explain.

    a) When I provoke a 404 error by entering an invalid URL such as localhost:3001/NonexistantPage, it defaults to the ServerError() action of my error controller even though it should go to NotFound(). Here is my ErrorController:

        public class ErrorController : Controller
        {
            public ActionResult NotFound()
            {
                Response.TrySkipIisCustomErrors = true;
                Response.StatusCode = (int)HttpStatusCode.NotFound;

                var viewModel = new ErrorViewModel()
                {
                    ServerException = Server.GetLastError(),
                    HTTPStatusCode = Response.StatusCode
                };

                return View(viewModel);
            }

            public ActionResult ServerError()
            {
                Response.TrySkipIisCustomErrors = true;
                Response.StatusCode = (int)HttpStatusCode.InternalServerError;

                var viewModel = new ErrorViewModel()
                {
                    ServerException = Server.GetLastError(),
                    HTTPStatusCode = Response.StatusCode
                };

                return View(viewModel);
            }
        }

    My error routes in Global.asax.cs:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.IgnoreRoute("{*favicon}", new { favicon = @"(.*/)?favicon.ico(/.*)?" });

        routes.MapRoute(
            name: "Error - 404",
            url: "NotFound",
            defaults: new { controller = "Error", action = "NotFound" }
        );

        routes.MapRoute(
            name: "Error - 500",
            url: "ServerError",
            defaults: new { controller = "Error", action = "ServerError" }
        );

    And my web.config settings:

        <system.web>
          <customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="/ServerError">
            <error statusCode="404" redirect="/NotFound" />
          </customErrors>
          ...
        </system.web>
        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="/NotFound" responseMode="ExecuteURL" />
            <remove statusCode="500" subStatusCode="-1" />
            <error statusCode="500" path="/ServerError" responseMode="ExecuteURL" />
          </httpErrors>
          ...

    The Error views are located in /Views/Error/ as NotFound.cshtml and ServerError.cshtml.

    b) One funny thing is that when a server error occurs, it does in fact display the ServerError view I defined, but it also outputs a default error message saying that the error page could not be found. Here's what it looks like:

    Do you have any advice on how I could fix these two problems? I really like Pure.Krome's approach to implementing these error messages, but if there are better ways of achieving this, don't hesitate to tell me. Thanks!

    EDIT: I can navigate directly to my views through the ErrorController by accessing /Error/NotFound or /Error/ServerError. The views themselves only contain some text, no markup or anything. As I said, it actually works in some way, just not the way I intended it to work. There seems to be an issue with the redirect in the web.config, but I haven't been able to figure it out.
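
    Regarding (a), a technique that often complements this kind of setup is a catch-all route registered after every other route, so URLs that match no earlier route go straight to ErrorController.NotFound without relying on customErrors. This is a hedged sketch, not a fix verified against the poster's project; whether localhost:3001/NonexistantPage actually falls through to it depends on the default {controller}/{action}/{id} route, which isn't shown in the post and may claim the URL first:

        // Registered last in RegisterRoutes, after the routes shown above.
        // Route and parameter names are arbitrary.
        routes.MapRoute(
            name: "CatchAll",
            url: "{*url}",
            defaults: new { controller = "Error", action = "NotFound" }
        );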

    Read the article

  • <audio> element autobuffers no matter what

    - by pthulin
    I'm trying to make a web-based media player using the HTML5 audio element implemented in Firefox 3.5 and Chrome. Reading Mozilla's documentation, omitting the autobuffer attribute should result in the audio src not being requested:

        if specified, the audio will automatically begin being downloaded, even if not set to automatically play. This continues until the media cache is full, or the entire audio file has been downloaded, whichever comes first

    However, on the server side I notice the files are being requested anyway. My sample page is very simple:

        <html>
          <body>
            <audio src="1.ogg"></audio>
            <audio src="2.ogg"></audio>
          </body>
        </html>
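
    As an aside (not something from the original post): later revisions of the spec dropped autobuffer in favour of a preload attribute, so on current browsers the non-buffering version of the same markup would state its intent explicitly, along these lines:

        <audio src="1.ogg" preload="none"></audio>
        <audio src="2.ogg" preload="none"></audio>

    preload is only a hint, but browsers generally honour "none" and skip the upfront request.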

    Read the article

  • How do you get Lighttpd to compress CodeIgniter's "clean urls"?

    - by ocdcoder
    I was looking at PageSpeed on my test website and noticed that Lighttpd wasn't compressing my HTML (but was compressing my JavaScript and CSS files). I'm assuming this is because I'm using CodeIgniter and its clean-URL system, and since the requests don't have file extensions, Lighttpd doesn't have a rule to compress them. That being the case, how do I get Lighttpd to compress my HTML? Is this something I shouldn't be doing, or something I need to specially configure Lighttpd for?
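
    A hedged sketch of the two knobs usually involved here, with placeholder paths. One caveat: lighttpd 1.4's mod_compress historically only compresses static files, so for the dynamic HTML CodeIgniter generates, the compression typically has to happen on the PHP side (the zlib setting below, or CodeIgniter's own $config['compress_output'] option in config.php).

        # lighttpd.conf -- mod_compress handles static assets (CSS, JS, static HTML)
        server.modules += ( "mod_compress" )
        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype  = ( "text/html", "text/plain", "text/css", "application/javascript" )

        # php.ini -- gzip the dynamic output CodeIgniter produces
        zlib.output_compression = On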

    Read the article
