Search Results

Search found 50062 results on 2003 pages for 'http 1 1'.

Page 130/2003

  • Opening two connections to my Apache server?

    - by Ron Meretns
    Hi guys. I have an Apache server running PHP. Everything works fine, but if I have a script that runs for a long time and I try to open another page on the same server, the second page waits until the first one finishes. This happens to me on both Linux and Windows. I'm running Apache 2.2.9 and PHP 5.2.6. I'm not sure why this happens... is this normal behavior? Ron
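
    A common cause of exactly this symptom (an assumption here, not confirmed by the question) is PHP's file-based session locking: a long-running script that called session_start() holds the session file locked, so any other request from the same browser session queues up behind it. A minimal sketch of the usual workaround, releasing the lock once the script no longer needs to write to the session:

        <?php
        session_start();
        // Read whatever the long job needs from the session first...
        $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

        // ...then release the session lock so other pages are not serialized behind this request.
        session_write_close();

        // Long-running work continues; other requests to the server proceed normally.
        sleep(60);
        echo "done";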

    Read the article

  • Setting Curl's Timeout in PHP

    - by Moki
    I'm running a curl request against an eXist database through PHP. The dataset is very large, so the database consistently takes a long time to return an XML response. To deal with that, we set up the curl request with what is supposed to be a long timeout:

        $ch = curl_init();
        $headers["Content-Length"] = strlen($postString);
        $headers["User-Agent"] = "Curl/1.0";
        curl_setopt($ch, CURLOPT_URL, $requestUrl);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
        curl_setopt($ch, CURLOPT_TIMEOUT, 1000);
        $response = curl_exec($ch);
        curl_close($ch);

    However, the curl request consistently ends before the request is completed (< 1000 seconds when requested via a browser). Does anyone know if this is the proper way to set timeouts in curl?
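
    A few things are commonly worth checking here (these are general curl/PHP observations, not a confirmed diagnosis for this setup): CURLOPT_HTTPHEADER expects a numerically indexed array of "Name: value" strings, so an associative array like the one above sends malformed header lines, and curl_error() will usually name the real reason the transfer stopped. A minimal sketch:

        <?php
        set_time_limit(0);   // guard against PHP's own execution limit; harmless if that is not the issue
        $ch = curl_init($requestUrl);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            "User-Agent: Curl/1.0"                      // "Name: value" strings, not key => value pairs
        ));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);   // time allowed to establish the connection
        curl_setopt($ch, CURLOPT_TIMEOUT, 1000);        // total time allowed for the whole transfer
        $response = curl_exec($ch);
        if ($response === false) {
            error_log('curl failed: ' . curl_error($ch));   // e.g. "Operation timed out after ..."
        }
        curl_close($ch);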

    Read the article

  • Fetching custom Authorization header from incoming PHP request

    - by jpatokal
    So I'm trying to parse an incoming request in PHP which has the following header set: Authorization: Custom Username. Simple question: how on earth do I get my hands on it? If it were Authorization: Basic, I could get the username from $_SERVER["PHP_AUTH_USER"]. If it were X-Custom-Authorization: Username, I could get the username from $_SERVER["HTTP_X_CUSTOM_AUTHORIZATION"]. But neither of these is set for a custom Authorization header, var_dump($_SERVER) reveals no mention of the header (in particular, AUTH_TYPE is missing), and PHP 5 functions like get_headers() only work on responses to outgoing requests. I'm running PHP 5 on Apache with an out-of-the-box Ubuntu install.
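
    A minimal sketch of one way to reach the raw header when PHP runs as an Apache module (whether the function is available, and whether Apache passes the header through, depends on the SAPI and server config, so treat this as something to verify on that Ubuntu box):

        <?php
        // apache_request_headers() returns the request headers as Apache saw them,
        // including an Authorization header with a custom scheme.
        $headers = apache_request_headers();
        if (isset($headers['Authorization'])) {
            // "Custom Username" -> scheme = "Custom", credential = "Username"
            list($scheme, $credential) = explode(' ', $headers['Authorization'], 2);
        }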

    Read the article

  • How to decompress/inflate an XML response from ASP

    - by krisg
    Can anyone provide some insight into how I'd go about decompressing an XML response in classic ASP? We've been handed some code and asked to get it working:

        Set oXMLHttp = Server.CreateObject("MSXML2.ServerXMLHTTP")
        URL = HttpServer + re_domain + ".do;jsessionid=" + ue_session + "?" + data
        oXMLHttp.setTimeouts 5000, 60000, 1200000, 1200000
        oXMLHttp.open "GET", URL, false
        oXMLHttp.setRequestHeader "Accept-Encoding", "gzip"
        oXMLHttp.send()
        if oXMLHttp.status = 200 Then
            if oXMLHttp.responseText = "" then
                htmlrequest_get = "Empty Response from Server"
            else
                htmlrequest_get = oXMLHttp.responseText
            end if
        else
            ...

    Apparently, now that the response is compressed using gzip, we have to uncompress the XML response before we can start to work with the data. How should I go about this?
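
    Classic ASP/VBScript has no built-in gzip inflate, so decompressing in-process generally means a third-party COM component. If the remote server only compresses because the request asks for it (an assumption worth testing), the simpler route is to stop advertising gzip support and confirm what encoding actually comes back. A minimal sketch:

        Set oXMLHttp = Server.CreateObject("MSXML2.ServerXMLHTTP")
        oXMLHttp.open "GET", URL, false
        ' No Accept-Encoding header: most servers will then return the XML uncompressed
        oXMLHttp.send()

        If oXMLHttp.status = 200 Then
            If LCase(oXMLHttp.getResponseHeader("Content-Encoding") & "") = "gzip" Then
                ' The server compressed anyway; this body cannot be read as text here
                htmlrequest_get = "Server forced gzip - a decompression component is needed"
            Else
                htmlrequest_get = oXMLHttp.responseText
            End If
        End If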

    Read the article

  • Download a file from one ASP.NET web application to another (given the credentials)

    - by Tom S.
    Hi everybody! I'm working on an ASP.NET 3.5 web application (C#) where I have a file with some information that is updated frequently, and only a few accounts can access it (the application uses the ASP.NET authentication system, stored in a SQL database). My task is to parse that file, so I made a small parser (another web app) to show the information in a more friendly way. However, every time I want to parse it, I need to log in to the application with one of those accounts, download the file, and put it in the parser's folder. Is there any way, given the username and password, to download the file directly from the parser application and use that one? Thanks in advance
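
    One approach worth sketching, under the assumption that both applications use forms authentication and share the same machineKey (otherwise the cookie will not decrypt), is to build the authentication cookie in the parser and send it along with the download request. The account name, URL, and destination path below are placeholders:

        // Build a forms-auth ticket for an account that is allowed to see the file.
        var ticket = new System.Web.Security.FormsAuthenticationTicket(
            "someuser", false, 30 /* minutes */);
        string encrypted = System.Web.Security.FormsAuthentication.Encrypt(ticket);

        using (var client = new System.Net.WebClient())
        {
            // Attach the cookie the source application expects for authenticated users.
            client.Headers.Add(System.Net.HttpRequestHeader.Cookie,
                System.Web.Security.FormsAuthentication.FormsCookieName + "=" + encrypted);

            // Placeholder URL for wherever the protected file is served from.
            client.DownloadFile("https://source-app.example/protected/data.txt",
                                Server.MapPath("~/App_Data/data.txt"));
        }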

    Read the article

  • Serializing an object into the body of a WCF request using webHttpBinding

    - by Bert
    I have a WCF service exposed with a webHttpBinding endpoint.

        [OperationContract(IsOneWay = true)]
        [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json,
                   BodyStyle = WebMessageBodyStyle.Bare,
                   UriTemplate = "/?action=DoSomething&v1={value1}&v2={value2}")]
        void DoSomething(string value1, string value2, MySimpleObject value3);

    In theory, if I call this, the first two parameters (value1 and value2) are taken from the URI and the final one (value3) should be deserialized from the body of the request. Assuming I am using JSON as the RequestFormat, what is the best way of serialising an instance of MySimpleObject into the body of the request before I send it? This, for instance, does not seem to work:

        HttpWebRequest sendRequest = (HttpWebRequest)WebRequest.Create(url);
        sendRequest.ContentType = "application/json";
        sendRequest.Method = "POST";
        using (var sendRequestStream = sendRequest.GetRequestStream())
        {
            DataContractJsonSerializer jsonSerializer =
                new DataContractJsonSerializer(typeof(MySimpleObject));
            jsonSerializer.WriteObject(sendRequestStream, obj);
            sendRequestStream.Close();
        }
        sendRequest.GetResponse().Close();

    Read the article

  • Why will IIS 6 not serve my custom 404 page when I set the URL in 'Custom Errors'?

    - by Glenn Slaven
    I've got an ASP.NET MVC site with an Errors controller and a NotFound action, which works great for 404 errors that pass through .NET. But for stuff that doesn't (like static files) I've set the Custom Errors value for 404 to type URL with a value of /Errors/NotFound. When I do this and hit a non-existent page, the site just gives me this: "The system cannot find the path specified." Is this because it's a dynamic URL? Can IIS not redirect 404 requests to dynamic URLs, or have I screwed up the config somewhere?

    Read the article

  • handling filename* parameters with spaces via RFC 5987 results in '+' in filenames

    - by Peter Friend
    I have some legacy code I am dealing with (so no, I can't just use a URL with an encoded filename component) that allows a user to download a file from our website. Since our filenames are often in many different languages, they are all stored as UTF-8. I wrote some code to handle the RFC 5987 conversion to a proper filename* parameter. This works great until I have a filename with non-ASCII characters and spaces. Per the RFC, the space character is not part of attr_char, so it gets encoded as %20. I have new versions of Chrome as well as Firefox, and they all convert %20 to '+' on download. I have tried not encoding the space and putting the encoded filename in quotes, and get the same result. I have sniffed the response coming from the server to verify that the servlet container wasn't mucking with my headers, and they look correct to me. The RFC even has examples that contain %20. Am I missing something, or do all of these browsers have a bug related to this? Many thanks in advance. The code I use to encode the filename is below. Peter

        public static boolean bcsrch(final char[] chars, final char c) {
            final int len = chars.length;
            int base = 0;
            int last = len - 1; /* Last element in table */
            int p;

            while (last >= base) {
                p = base + ((last - base) >> 1);
                if (c == chars[p])
                    return true;            /* Key found */
                else if (c < chars[p])
                    last = p - 1;
                else
                    base = p + 1;
            }
            return false;                   /* Key not found */
        }

        public static String rfc5987_encode(final String s) {
            final int len = s.length();
            final StringBuilder sb = new StringBuilder(len << 1);
            final char[] digits = {'0','1','2','3','4','5','6','7','8','9',
                                   'A','B','C','D','E','F'};
            final char[] attr_char = {'!','#','$','&','\'','+','-','.',
                '0','1','2','3','4','5','6','7','8','9',
                'A','B','C','D','E','F','G','H','I','J','K','L','M',
                'N','O','P','Q','R','S','T','U','V','W','X','Y','Z',
                '^','_',
                'a','b','c','d','e','f','g','h','i','j','k','l','m',
                'n','o','p','q','r','s','t','u','v','w','x','y','z',
                '|','~'};
            for (int i = 0; i < len; ++i) {
                final char c = s.charAt(i);
                if (bcsrch(attr_char, c))
                    sb.append(c);
                else {
                    final char[] encoded = {'%', 0, 0};
                    encoded[1] = digits[0x0f & (c >>> 4)];
                    encoded[2] = digits[c & 0x0f];
                    sb.append(encoded);
                }
            }
            return sb.toString();
        }

    Update: Here is a screenshot of the download dialog I get for a file with Chinese characters and spaces, as mentioned in my comment.
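
    For reference, a minimal sketch of how an encoder like the one above is typically wired into the download response (the servlet response variable and the plain-ASCII fallback filename are assumptions, not part of the original code):

        // Hypothetical servlet context: offer an ASCII fallback for old browsers and
        // the RFC 5987 filename* parameter for the rest; spaces become %20 here.
        String encoded = rfc5987_encode(utf8Filename);
        response.setHeader("Content-Disposition",
            "attachment; filename=\"fallback.txt\"; filename*=UTF-8''" + encoded);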

    Read the article

  • Requires a valid Date or x-amz-date header?

    - by Jordan Messina
    I'm getting the following error when attempting to upload a file to S3:

        S3StorageError: <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>AccessDenied</Code>
        <Message>AWS authentication requires a valid Date or x-amz-date header</Message>
        <RequestId>7910FF83F3FE17E2</RequestId>
        <HostId>EjycXTgSwUkx19YNkpAoY2UDDur/0d5SMvGJUicpN6qCZFa2OuqcpibIR3NJ2WKB</HostId></Error>

    I'm using Django with Django-Storages and Imagekit. My S3 settings in settings.py look as follows:

        locale.setlocale(locale.LC_TIME, 'en_US')

        DEFAULT_FILE_STORAGE = 'backends.s3.S3Storage'
        AWS_ACCESS_KEY_ID = '************************'
        AWS_SECRET_ACCESS_KEY = '*****************************'
        AWS_STORAGE_BUCKET_NAME = 'static.blabla.com'
        AWS_HEADERS = {
            'x-amz-date': datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT'),
            'Expires': 'Thu, 15 Apr 2200 20:00:00 GMT',
        }

        from S3 import CallingFormat
        AWS_CALLING_FORMAT = CallingFormat.SUBDOMAIN

    Thanks for any help you can give!
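
    One observation worth making (a likely culprit, not a confirmed diagnosis): settings.py is evaluated once when the process starts, so the x-amz-date value above is computed at import time and never changes; once it drifts far enough from the actual request time, S3 rejects the signature. A sketch of the usual arrangement is simply to drop the hand-built date header and let the storage backend date-stamp and sign each request itself:

        # settings.py -- sketch; keys and bucket are the same placeholders as above
        DEFAULT_FILE_STORAGE = 'backends.s3.S3Storage'
        AWS_ACCESS_KEY_ID = '...'
        AWS_SECRET_ACCESS_KEY = '...'
        AWS_STORAGE_BUCKET_NAME = 'static.blabla.com'

        # No 'x-amz-date' entry: a date frozen at import time is exactly what
        # produces "requires a valid Date or x-amz-date header" later on.
        AWS_HEADERS = {
            'Expires': 'Thu, 15 Apr 2200 20:00:00 GMT',
        }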

    Read the article

  • .NET: Preserving some, but not all query params during redirect

    - by kasper pedersen
    Hi all, could someone tell me if the code below would achieve what I want, which is: check if the query parameters 'return_path' and/or 'user_state' are present in the query string, and if so, append them to the query string of the redirect URI. As I'm not a .NET dev and don't have a server to test this on, I was hoping someone could give me some feedback.

        ArrayList vars = new ArrayList();
        vars.Add("return_path");
        vars.Add("user_state");

        string newUrl = "/new/request/uri" + "?";

        ArrayList params = new ArrayList();
        foreach (string key in Request.QueryString)
        {
            if (vars.contains(key))
            {
                params.Add(key + "=" + HttpUtility.URLPathEncode(Request.QueryString[key]));
            }
        }

        String[] paramArr = (String[]) params.ToArray(typeof(string));
        String queryString = String.join("&", paramArr);

        Response.Redirect(newUrl);

    Thank you :)
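
    As written, the snippet would not compile or quite do what's intended: params is a reserved word in C#, contains and join need their C# casing (Contains, String.Join), the encoder is HttpUtility.UrlPathEncode (UrlEncode is the usual choice for query-string values), and the assembled query string is never appended to newUrl before the redirect. A corrected sketch of the same idea (assuming System.Web and System.Collections.Generic are imported):

        string[] vars = { "return_path", "user_state" };

        var kept = new List<string>();
        foreach (string key in Request.QueryString)
        {
            if (Array.IndexOf(vars, key) >= 0)
            {
                // Re-encode the value so it is safe to put back into a query string
                kept.Add(key + "=" + HttpUtility.UrlEncode(Request.QueryString[key]));
            }
        }

        string newUrl = "/new/request/uri";
        if (kept.Count > 0)
        {
            newUrl += "?" + String.Join("&", kept.ToArray());
        }

        Response.Redirect(newUrl);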

    Read the article

  • How do I use NTLM authentication with Active Directory

    - by Jon Works
    I am trying to implement NTLM authentication on one of our internal sites and everything is working. The one piece of the puzzle I do not have is how to take the information from NTLM and authenticate against Active Directory. There is a good description of NTLM and the encryption used for the passwords, which I used to implement this, but I am not sure how to verify whether the user's password is valid. I am using ColdFusion, but a solution to this problem can be in any language (Java, Python, PHP, etc). Edit: I am using ColdFusion on Red Hat Enterprise Linux. Unfortunately we cannot use IIS to manage this and instead have to write or use a third-party tool for it.

    Read the article

  • httpUnit class not found

    - by josh
    I am trying HttpUnit for the first time and am just trying to get a response back from google.com. However, I keep getting the following error: com.meterware.httpunit.dom.HTMLDocumentImpl not found. I have placed httpUnit.jar in the libraries folder of my NetBeans project, though, and can actually see that the class file is there. Any experiences with this?

    Read the article

  • The remote server returned an error: (404) Not Found.

    - by John
    I am running this piece of code to get the source of my web page. The problem is: why does this function return a 404 error?

        Private Function getPageSource(ByVal URL As String) As String
            Dim webClient As New System.Net.WebClient()
            Dim strSource As String = webClient.DownloadString(URL)
            webClient.Dispose()
            Return strSource
        End Function
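
    The 404 is whatever the remote server sent back, so a useful first step is to inspect the failed response itself. A small diagnostic sketch (the User-Agent value is an arbitrary example; some sites answer 404 or 403 to requests that have no browser-like agent at all):

        Private Function getPageSource(ByVal URL As String) As String
            Dim webClient As New System.Net.WebClient()
            Try
                ' Some servers reject requests without a User-Agent header
                webClient.Headers.Add("User-Agent", "Mozilla/5.0 (compatible; MyApp/1.0)")
                Return webClient.DownloadString(URL)
            Catch ex As System.Net.WebException
                ' Look at the actual response that carried the 404
                Dim resp As System.Net.HttpWebResponse = CType(ex.Response, System.Net.HttpWebResponse)
                If resp IsNot Nothing Then
                    Dim details As String = CInt(resp.StatusCode).ToString() & " from " & resp.ResponseUri.ToString()
                    ' Log or inspect `details` to see which URL actually failed
                End If
                Throw
            Finally
                webClient.Dispose()
            End Try
        End Function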

    Read the article

  • Facebook links to my site resolve as 403 forbidden

    - by filip
    Hi, I'm experiencing a super weird problem. Whenever I post links to my website on Facebook, they come up as Forbidden. The site itself works great and I have not seen this when linking on other sites. Could this be a server misconfiguration? Any thoughts on where to look? Here's some info: I have a dedicated server running WHM 11.25.0, with 2 sites hosted on it using cPanel 11.25.0. The error message:

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an
        ErrorDocument to handle the request.
        Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1
        mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at ---- Port 80

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1M pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I tried downloading a sample of 8000 pages. Using the code above, I get 1000 of them in 2 minutes. When I write all the URLs into a file and run, in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried: Curl::Multi - it is somewhat faster, but misses 50-90% of the files (it does not download them and gives no reason/code); multiple threads with Curl::Easy - around the same speed as single-threaded. Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than handle downloading differently for this case. Before this, I was calling command-line wget, which I provided with a file containing the list of URLs. However, not all errors were handled, and it was not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly from Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.
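
    Not a confirmed diagnosis, but one visible difference from the shell pipeline is that the loop above buffers each whole response in body_str and only then writes it to disk. The same curb gem can stream straight to the file, which is worth benchmarking against the loop (the batch_urls structure is reused from the question):

        require 'curb'

        batch_urls.each do |url_info|
          # Curl::Easy.download writes the body to the file as it arrives,
          # instead of accumulating it in memory first.
          Curl::Easy.download(url_info[:url], url_info[:file])
          # ... some other stuff
        end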

    Read the article

  • Rails 3 get raw post data and write it to tmp file

    - by Andrew
    I'm working on implementing Ajax-Upload for uploading photos in my Rails 3 app. The documentation says: "For IE6-8, Opera, and older versions of other browsers you get the file as you normally do with regular form-based uploads. For browsers which upload the file with a progress bar, you will need to get the raw post data and write it to the file." So, how can I receive the raw post data in my controller and write it to a tmp file so my controller can then process it? (In my case the controller is doing some image manipulation and saving to S3.) Some additional info: as I'm configured right now, the post passes these parameters:

        Parameters: {"authenticity_token"=>"...", "qqfile"=>"IMG_0064.jpg"}

    ... and the create action looks like this:

        def create
          @attachment = Attachment.new
          @attachment.user = current_user
          @attachment.file = params[:qqfile]
          if @attachment.save!
            respond_to do |format|
              format.js { render :text => '{"success":true}' }
            end
          end
        end

    ... but I get this error:

        ActiveRecord::RecordInvalid (Validation failed: File file name must be set.):
          app/controllers/attachments_controller.rb:7:in `create'
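
    A minimal sketch of one way to cover both upload paths (the qqfile parameter comes from the question; the Tempfile handling and the way the model accepts it are assumptions that depend on the attachment library in use):

        def create
          @attachment = Attachment.new
          @attachment.user = current_user

          if params[:qqfile].respond_to?(:original_filename)
            # Older browsers: a normal multipart upload arrives as an UploadedFile
            @attachment.file = params[:qqfile]
          else
            # XHR upload: qqfile is only the filename, the bytes are the raw request body
            tmp = Tempfile.new('qqfile')
            tmp.binmode
            tmp.write(request.raw_post)      # or request.body.read
            tmp.rewind
            # Depending on the uploader (Paperclip, CarrierWave, ...) you may need to
            # wrap tmp together with params[:qqfile] so the original filename is set.
            @attachment.file = tmp
          end

          if @attachment.save
            render :text => '{"success":true}'
          end
        end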

    Read the article

  • http_post_data adding extra characters in response

    - by Siyam
    Hey guys, I am getting some extra characters like '5ae' and '45c' interspersed along with valid data when using http_post_data. The data I am sending is XML, and so is the response. The response contains these weird characters, which makes the XML invalid. If I use fsockopen I do not have this issue. Would really like some input on this.
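
    Strings like 5ae and 45c look like hexadecimal chunk-size markers, which suggests (though the question doesn't confirm it) that the response arrives with Transfer-Encoding: chunked and is being read without being de-chunked. The same pecl_http extension that provides http_post_data() also ships http_chunked_decode(), so a minimal sketch would be:

        <?php
        // $raw is the response body as currently received, with the stray "5ae"/"45c" markers
        $decoded = http_chunked_decode($raw);
        if ($decoded !== false) {
            $xml = simplexml_load_string($decoded);   // should now parse as valid XML
        }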

    Read the article

  • Best Approach to process images in Django

    - by primalpop
    I have an application with an Android front end and Django as the back end. As part of the answers here, I'm confused over which approach I should take to send images to the Django server. I have 2 options at my disposal, as Piro pointed out there: 1) sending the image as a multipart entity, or 2) sending the image as a string after encoding it using Base64. So I am considering the approach that would make it easiest to process in Django. The images are small in size (<200 KB) and number (<10). Any suggestions or pointers are most welcome.
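
    For comparison, a minimal sketch of what each option looks like on the Django side (view names, field names, and the destination path are placeholders, not part of the original question):

        # views.py -- sketch only; URL wiring, validation, and error handling omitted
        import base64
        from django.http import HttpResponse

        def upload_multipart(request):
            # Option 1: a multipart upload shows up directly in request.FILES
            photo = request.FILES['photo']           # an UploadedFile, ready to hand to a model/S3
            return HttpResponse('ok')

        def upload_base64(request):
            # Option 2: the Base64 string has to be decoded by hand first
            raw = base64.b64decode(request.POST['photo_b64'])
            with open('/tmp/photo.jpg', 'wb') as f:  # placeholder destination
                f.write(raw)
            return HttpResponse('ok')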

    Read the article

  • Tomcat fails on first request in combination with jsvc

    - by Roalt
    I have a web application where the first request may take a few seconds because some singletons are initialised. I've used the mod_proxy and jsvc construction mentioned in this question and described on this page to connect Apache with Tomcat (data is served via SSL). For the sample Tomcat application, everything works as it should. However, when using my application I get the following error in my Apache log:

        [Wed Feb 10 09:48:29 2010] [error] [client 130.12.1.26] (70014)End of file found: proxy: error reading status line from remote server localhost
        [Wed Feb 10 09:48:29 2010] [error] [client 130.12.1.26] proxy: Error reading from remote server returned by /MyWebApp/MyWebApp.faces

    and the following error in my Tomcat output:

        10/02/2010 09:48:29 9947 jsvc.exec error: Service exit with a return value of 1

    I'm not an expert on this, so I would like to know what the cause of the problem is and where I should look for an answer.
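
    If the slow first request is simply outliving mod_proxy's patience, the proxy gives up and logs exactly this kind of "error reading status line" message (the jsvc exit, though, points at Tomcat itself dying, so checking catalina.out is worthwhile too). A sketch of the Apache-side timeout knobs worth raising; the 600-second value and the backend URL are arbitrary examples, not values taken from the question:

        # httpd.conf / virtual host -- sketch only
        ProxyTimeout 600
        ProxyPass        /MyWebApp http://localhost:8080/MyWebApp timeout=600 retry=0
        ProxyPassReverse /MyWebApp http://localhost:8080/MyWebApp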

    Read the article
