Search Results

Search found 15035 results on 602 pages for 'request'.


  • Call java program on server through web request

    - by Rossella
    I need to develop an application where a Java client calls a Java application located on the server with certain parameters; the Java application on the server computes data and sends it back to the client. Everything needs to go through port 80, on which I have an IIS web server listening; I cannot open any other port on the server. Is there any way I can do this? The server can write files to a directory that the client can read, but I am not sure how to handle the synchronization, and I don't know how to call a script on the server from the client through port 80. Every suggestion would be highly appreciated. Thanks a lot in advance; this problem is driving me crazy.
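
    If the server-side Java application can be exposed behind IIS at some URL (for example via a handler mapping or a reverse proxy; that part depends on the IIS setup), the client side of the call is just a plain HTTP POST over port 80. A minimal hedged sketch of that client; the URL and parameter names are hypothetical:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class Port80Client {
            public static void main(String[] args) throws IOException {
                // Hypothetical endpoint exposed by the server-side Java application behind IIS
                URL url = new URL("http://example.com/compute");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);

                // Send the input parameters as a form-encoded body over port 80
                String body = "param1=" + URLEncoder.encode("value1", "UTF-8");
                OutputStream out = conn.getOutputStream();
                out.write(body.getBytes("UTF-8"));
                out.close();

                // Read the computed result straight from the response body, which avoids
                // the shared-directory/synchronization workaround entirely
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
                in.close();
            }
        }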

    Read the article

  • Rails - set POST request limit (file upload)

    - by Fabiano PS
    I am building a file uploader for Rails using CarrierWave. I am pretty happy with its API, except that I don't seem to be able to cut off file uploads that exceed a limit on the fly. I found this plugin for validation, but the problem is that it runs after the upload is completed. That is completely unacceptable in my case, as any user could take the site down by uploading a huge file. So I figure the way to go would be some Rack configuration or middleware that limits the POST body size as it is received. I am hosting on Heroku, for context. I am aware of https://github.com/dwilkie/carrierwave_direct, but it doesn't solve my issue, as I have to resize first and discard the original large image.
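
    A minimal sketch of the Rack-middleware idea mentioned above (the class name and limit are hypothetical); it rejects a POST whose declared Content-Length exceeds the limit before the body is ever read. Note that on Heroku the routing layer still receives the upload, so this protects the application itself rather than the bandwidth in front of it:

        # Hypothetical middleware; insert it early in the stack, e.g.
        #   config.middleware.insert_before Rack::Runtime, PostSizeLimiter
        class PostSizeLimiter
          MAX_BYTES = 5 * 1024 * 1024 # assumed 5 MB limit

          def initialize(app)
            @app = app
          end

          def call(env)
            length = env['CONTENT_LENGTH'].to_i
            if env['REQUEST_METHOD'] == 'POST' && length > MAX_BYTES
              # Refuse the upload without reading the request body
              [413, { 'Content-Type' => 'text/plain' }, ['Request Entity Too Large']]
            else
              @app.call(env)
            end
          end
        end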

    Read the article

  • request: compile c++ source for 64bit windows

    - by Kellyh
    Hi, first of all I hope I'm posting in the right section. I came across this great open-source media converter. It works fine, and the most convenient thing about it is the context menu; however, the context menu isn't working on Windows 7 64-bit. Since I have no knowledge of C++, can anyone here generously help me compile it into a 64-bit version? Below is the software I am talking about. Homepage: http://www.oxelon.com/media_converter.html Source: http://sourceforge.net/projects/oxelonmediaconv/files/oxelonmediaconv/oxelon_source_code/omc_src.zip/download

    Read the article

  • ajax request internal server error

    - by joe
    Everything works fine locally, but when I try the same code in production I get a 500 (Internal Server Error).
    entries controller:

        def set_spam
          @entry = Entry.find(params[:entry_id])
          @entry.spam = params[:what] == "spam" ? true : false
          @entry.save
          respond_to do |format|
            format.js
          end
        end

    application.js:

        $(".entry-actions .spams img").click(function () {
          $.post("/set-spam", {
            entry_id: $(this).attr("entry_id"),
            what: $(this).attr("class")
          });
          return false;
        });

    view:

        <div class="spams">
          <img title="spam" class="spam" src="/images/pixel.gif" entry_id="<%= entry.id %>" />
        </div>

    route:

        post "/set-spam" => "entries#set_spam"
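
    One hedged possibility, assuming CSRF protection behaves differently between the two environments: the $.post call sends no authenticity token, and in production the resulting ActionController::InvalidAuthenticityToken surfaces as a 500. A common fix is to send the token from the csrf meta tag with every jQuery request:

        // Assumes the layout renders <%= csrf_meta_tag %> in <head>
        $.ajaxSetup({
          beforeSend: function (xhr) {
            xhr.setRequestHeader('X-CSRF-Token', $('meta[name="csrf-token"]').attr('content'));
          }
        });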

    Read the article

  • Google Chrome Speed Tracer what does Request Timing and Response Timing actually measure?

    - by Bryce Thomas
    I'm testing out the Google Chrome Speed Tracer on a few common web pages and looking through the results. One thing I'm not sure I understand is what the "Request Timing" and "Response Timing" properties of resources actually measure. Initially I thought Request Timing must measure the time from when a request for a resource is sent to when that request arrives at the server. However, I then wondered how Speed Tracer would actually have any way of measuring this. Furthermore, the Response Timing I'm getting for resources tends to be far less than the Request Timing (e.g. 500 ms request, 1 ms response), which is a little suspicious. So, can anyone explain exactly what Request Timing and Response Timing are measuring?

    Read the article

  • Different url scheme for Zend Framework

    - by ChrisRamakers
    For our CMS we have a site manager that defines the site's tree structure (sitemap, if you want to call it that). A possible URL is www.example.com/our-team/developers/chris/, which would map in the tree structure to the node chris, a child of developers, which is in turn a child of our-team. All this is in place and working thanks to the wonderfully implemented Nested Set behaviour in Doctrine. The only thing is that I'm struggling to get it working in the front end of our website. By default, Zend Framework's request object expects a controller/action/key/value/key/value/... URI scheme, but that doesn't quite fit my needs; I would like to skip the controller, action and key parts altogether and use only values, something like value1/value2/value3/value4/... Does anyone have an idea how to accomplish this?
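
    A hedged sketch of one way to do this in Zend Framework 1: register a catch-all regex route that hands the whole request path to a single controller action, which can then resolve the path against the nested-set tree. The route name, controller and parameter names below are hypothetical:

        // In the bootstrap; assumes the standard front controller / router setup
        $router = Zend_Controller_Front::getInstance()->getRouter();
        $router->addRoute('cmsPage', new Zend_Controller_Router_Route_Regex(
            '(.*)',                                          // matches e.g. our-team/developers/chris
            array('controller' => 'page', 'action' => 'show'),
            array(1 => 'path')                               // expose the match as the "path" parameter
        ));

        // In PageController::showAction():
        //   $path = $this->_getParam('path');   // "our-team/developers/chris"
        //   ... look $path up in the Doctrine nested-set tree ...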

    Read the article

  • How to measure the time HTTP requests spend sitting in the accept-queue?

    - by David Jones
    I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests. During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued. Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog. Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain. Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue? The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that. thanks in advance, David Jones

    Read the article

  • Communicate between content script and options page

    - by Gaurang Tandon
    I have seen many questions on this already, but they are all about background-page-to-content-script communication. Summary: my extension has an options page and a content script. The content script handles the storage functionality (chrome.storage manipulation). Whenever a user changes a setting on the options page, I want to send a message to the content script to store the new data. My code:
    options.js:

        var data = "abcd"; // example data
        chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
            chrome.tabs.sendMessage(tabs[0].id, "storeData:" + data, function (response) {
                console.log(response); // gives undefined :(
            });
        });

    content script js:

        chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
            // not working
        });

    My questions: why isn't this approach working? Is there another (better) approach for this?
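
    A hedged alternative, assuming the data only needs to end up in chrome.storage: write it directly from the options page and let the content script react to the change via chrome.storage.onChanged. This sidesteps tab messaging entirely, which matters because while the options page has focus, the "active tab" query above points at the options page itself rather than at a page the content script runs in. The key name below is hypothetical:

        // options.js - store the new value directly
        chrome.storage.local.set({ myData: data });

        // content script - runs whenever the stored value changes
        chrome.storage.onChanged.addListener(function (changes, areaName) {
          if (areaName === 'local' && changes.myData) {
            console.log('new value:', changes.myData.newValue);
          }
        });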

    Read the article

  • php $_REQUEST data is only half-decoded

    - by hackmaster.a
    I am retrieving a URL via the query string, and I need to pass it on again to the next page. When I retrieve it the first time, using $_REQUEST['url'], only the slashes are decoded, e.g.: http://example.com/search~S10?/Xllamas&searchscope=10&SORT=D/Xllamas&searchscope=10&SORT=D&SUBKEY=llamas/51%2C64%2C64%2CB/browse The PHP docs page for urldecode advises against decoding request data and says it will already be decoded. I need the value either completely decoded, so I can encode it again without double-encoding some parts, or not decoded at all. I'm not sure why my experience of this data is incongruous with the PHP docs. I'd appreciate any help or pointers!
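
    A small hedged sketch of the "completely decode, then re-encode" option described above: decode whatever percent-encoding is left, then re-encode the whole value exactly once when building the link to the next page. The file and parameter names are hypothetical:

        <?php
        $raw = $_REQUEST['url'];

        // Decode any remaining %XX sequences, then re-encode the value in one pass
        $clean = rawurlencode(rawurldecode($raw));

        // Pass it along without double-encoding the parts that were already decoded
        $next = 'next.php?url=' . $clean;
        echo '<a href="' . htmlspecialchars($next) . '">next page</a>';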

    Read the article

  • How do I set default values in Django for an HttpRequest.GET?

    - by Mike
    I have a webpage that displays data based on a default date. The user can then change their view of the data by selecting a date with a date picker and clicking a submit button. I already have a variable set so that if no date is chosen, a default date is used... so what's the problem? The problem comes if the user tries to type in the URL without a parameter, like so: http://mywebpage/viewdata (example A) instead of http://mywebpage/viewdata?date= (example B). I tried using: if request.method == 'GET': but apparently even example A still returns true. I'm sure I'm making some obvious beginner's mistake, but I'll ask anyway... Is there a simpler way to handle example A other than converting the URL to a string and checking the string for "?date="?
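
    A minimal hedged sketch of the usual Django idiom for this: query-string parameters live in request.GET, and a dict-style .get() with a fallback covers both example A (parameter absent) and example B (parameter present but empty). The view name and default value below are hypothetical:

        # views.py
        from django.shortcuts import render

        DEFAULT_DATE = "2010-01-01"  # assumed default

        def viewdata(request):
            # request.GET.get() returns None when the parameter is missing (example A)
            # and "" when it is present but empty (example B); `or` covers both cases.
            date = request.GET.get('date') or DEFAULT_DATE
            return render(request, 'viewdata.html', {'date': date})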

    Read the article

  • How can I retrieve the values of controls in the form that posted?

    - by Mike at KBS
    I know this has got to be the simplest-sounding question ever asked about ASP.Net but I'm baffled. I have a form wherein my visitor will enter name, address, etc. Then I am POSTing that form via the PostBackUrl property of my Submit button to another page, where the fields are supposed to be all re-formed into new hidden fields, then POSTed again to Paypal. My problem is I cannot get at the values entered by the visitor in the original page. Any time I put in "runat='server'", ASP.Net completely changes the ID of the control, making it impossible to figure out how to access. In the POSTed form I tried Request.Form["_txtFirstName"] and that turned up null. Then I tried ((TextBox)PreviousPage.FindControl("_txtFirstName")).Text and that was null, too. I've tried variations on those. I cannot figure out how I'm supposed to get at these controls. Why does this stuff need to be so difficult?

    Read the article

  • How to transfer a post request in curl into a ruby script?

    - by 0x90
    I have this POST request:

        curl -i -X POST \
          -H "Accept:application/json" \
          -H "content-type:application/x-www-form-urlencoded" \
          -d "disambiguator=Document&confidence=-1&support=-1&text=President%20Obama%20called%20Wednesday%20on%20Congress%20to%20extend%20a%20tax%20break%20for%20students%20included%20in%20last%20year%27s%20economic%20stimulus%20package" \
          http://spotlight.dbpedia.org/dev/rest/annotate/

    How can I write it in Ruby? I tried this, as Kyle told me:

        require 'rubygems'
        require 'net/http'
        require 'uri'

        uri = URI.parse('http://spotlight.dbpedia.org/rest/annotate')
        http = Net::HTTP.new(uri.host, uri.port)
        request = Net::HTTP::Post.new(uri.request_uri)
        request.set_form_data({
          "disambiguator" => "Document",
          "confidence" => "0.3",
          "support" => "0",
          "text" => "President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package"
        })
        request.add_field("Accept", "application/json")
        request.add_field("Content-Type", "application/x-www-form-urlencoded")
        response = http.request(request)
        puts response.inspect

    but got this error:

        #<Net::HTTPInternalServerError 500 Internal Error readbody=true>

    Read the article

  • How do I get the response time from a jQuery ajax call?

    - by Dumpen
    So I am working on a tool that can show how long a request to a page takes. I am doing this using jQuery Ajax (http://api.jquery.com/jQuery.ajax/), and I want to figure out the best way to get the response time. I found a thread (http://forum.jquery.com/topic/jquery-get-time-of-ajax-post) which describes using "Date" in JavaScript, but is this method really reliable? An example of my code is below:

        $.ajax({
          type: "POST",
          url: "some.php"
        }).done(function () {
          // Here I want to get how long it took to load some.php and use it further
        });
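
    A hedged sketch of the Date-based approach from the linked thread: record the time just before firing the request and subtract it in the done callback. This measures wall-clock time as seen by the browser (network plus server time), which is usually adequate; where higher resolution matters, performance.now() can be swapped in on browsers that support it:

        var start = new Date().getTime();

        $.ajax({
          type: "POST",
          url: "some.php"
        }).done(function (data) {
          var elapsedMs = new Date().getTime() - start;
          console.log("some.php took " + elapsedMs + " ms");
        });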

    Read the article

  • How can I catch connection requests in my framework?

    - by Falx
    I'm building a framework (OSGi-like) for which other parties can write bundles. I want my framework to manage the QoS of the connection requests those other parties make. The easy solution would be to ask them to use (or force them to use, although I don't know how) a specific ConnectionRequest bundle of the framework. The problem with this approach is that they wouldn't be able to use any of their own preferred libraries that rely on the standard Java libraries to make a connection (request). So I wondered: is there a way in Java to catch all the requested connections, so I can add my QoS-handling code before the request is sent off to the underlying layer?
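
    One hedged possibility, assuming the bundles open connections through the standard java.net URL/HttpURLConnection machinery: install a process-wide ProxySelector, which the built-in protocol handlers consult before every connection attempt, and put the QoS hook there. It does not intercept raw Socket usage, so it is only a partial answer:

        import java.io.IOException;
        import java.net.*;
        import java.util.Collections;
        import java.util.List;

        public class QosProxySelector extends ProxySelector {
            private final ProxySelector fallback = ProxySelector.getDefault();

            @Override
            public List<Proxy> select(URI uri) {
                // Called by URL protocol handlers before each connection is made
                System.out.println("connection requested: " + uri); // QoS hook goes here
                return fallback != null ? fallback.select(uri)
                                        : Collections.singletonList(Proxy.NO_PROXY);
            }

            @Override
            public void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
                if (fallback != null) {
                    fallback.connectFailed(uri, sa, ioe);
                }
            }

            public static void install() {
                ProxySelector.setDefault(new QosProxySelector());
            }
        }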

    Read the article

  • ExpressJS: What is the difference between app.locals and res.locals?

    - by aeyang
    I'm trying to learn Express, and in my app I have middleware that passes the session object from the Request object to my Response object so that I can access it in my views:

        app.use((req, res, next) ->
          res.locals.session = req.session
          next()
        )

    But app.locals is available to the view as well, right? So would it be the same if I did app.locals.session = req.session? Is there a convention for the kinds of things app.locals and res.locals are used for? I'm also confused about the difference between res.render() and res.redirect(): when should each be used? Thanks for reading. Any help related to Express is appreciated!
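
    A hedged sketch of the distinction, in plain JavaScript for clarity: app.locals is set once and shared by every request for the lifetime of the app, while res.locals is created fresh for each request, which is why per-user data such as req.session belongs on res.locals rather than app.locals:

        // App-wide values: the same for every request, typically set at startup
        app.locals.siteTitle = 'My App';

        // Per-request values: recomputed in middleware for each incoming request
        app.use(function (req, res, next) {
          res.locals.session = req.session; // visible only to views rendered for this request
          next();
        });

    As for the last question: res.render() builds a response body from a view template, while res.redirect() sends an HTTP redirect telling the browser to request a different URL.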

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into the  problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sound like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. Http Authentication is based on a challenge response mechanism typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in  Fiddler or some other HTTP client Proxy that captures requests you’ll see that each request re-authenticates: Here are two requests fired back to back: and you can see the 401 challenge, the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present. 
WebRequest.PreAuthenticate And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends it on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence: where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – bascially it saves one auth request request for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way but one downside to this is that there’s no way to log out the client. Since the client always sends the credentials once authenticated only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (ie. re-challenging with a forced 401 request). Forcing Basic Authentication Credentials on the first Request On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly but there it is. Now the following works only with Basic Authentication because it’s pretty straight forward to create the Basic Authorization ‘token’ in code since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally unsecure and should only be used when using HTTPS/SSL connections (i’m not in this example so I can capture the Fiddler trace and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth). 
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
One easy way is to implement a partial class that allows you add headers with something like this: public partial class TaxService { protected NameValueCollection Headers = new NameValueCollection(); public void AddHttpHeader(string key, string value) { this.Headers.Add(key,value); } public void ClearHttpHeaders() { this.Headers.Clear(); } protected override WebRequest GetWebRequest(Uri uri) { HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri); request.Headers.Add(this.Headers); return request; } } where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen Not a Common Problem, but when it happens… This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  Web Services  

    Read the article

  • Why is ValidateInput(False) not working?

    - by xenosyde
    I am converting an application I created using Web Forms to the ASP.NET MVC framework, using VB.NET. I have a problem with one of my views: I get the yellow screen of death saying "A potentially dangerous Request.Form value was detected from the client" when I submit my form. I am using TinyMCE as my RTE. I have set ValidateRequest="false" on the view itself; I know that MVC doesn't respect it on the view, from what I've read so far, so I put it on the controller action as well. I have tried different setups: <ValidateInput(False), AcceptVerbs(HttpVerbs.Post)> _ ...and... <AcceptVerbs(HttpVerbs.Post), ValidateInput(False)> _ ...and like this as well... <ValidateInput(False)> _ <AcceptVerbs(HttpVerbs.Post)> _ just to see if it made a difference, yet I still get the yellow screen of death. I only want to set it for this view and the specific action in my controller that the post pertains to. Am I missing something?
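
    One hedged possibility, assuming the project targets .NET 4: request validation was moved earlier in the pipeline in .NET 4, so ValidateInput(False) on the action has no effect until validation is switched back to 2.0 mode in web.config:

        <!-- web.config: needed on .NET 4 for ValidateInput(False) to take effect -->
        <system.web>
          <httpRuntime requestValidationMode="2.0" />
        </system.web>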

    Read the article

  • Android HttpClient and HTTPS

    - by user309769
    Hi all, I'm new to implementing HTTPS connections in Android. Essentially, I'm trying to connect to a server using org.apache.http.client.HttpClient. I believe that at some point I'll need to access the application's keystore in order to authorize my client with a private key, but for the moment I'm just trying to connect and see what happens; I keep getting an HTTP/1.1 400 Bad Request error. I can't seem to make heads or tails of this despite many examples (none of which seem to work for me). My code looks like this (the BODY constant is XML-RPC):

        private void connect() throws IOException, URISyntaxException {
            HttpPost post = new HttpPost(new URI(PROD_URL));
            HttpClient client = new DefaultHttpClient();
            post.setEntity(new StringEntity(BODY));
            HttpResponse result = client.execute(post);
            Log.d("MainActivity", result.getStatusLine().toString());
        }

    So, pretty simple. Let me know if anyone out there has any advice. Thanks!

    Read the article

  • cancelPreviousPerformRequestWithTarget is not canceling my previously delayed thread started with pe

    - by jmurphy
    Hello, I've scheduled a delayed call using performSelector, but the user still has the ability to hit the back button on the current view, causing dealloc to be called. When this happens, my delayed selector still seems to fire, which causes my app to crash because the properties it writes to have been released. To solve this I am trying to call cancelPreviousPerformRequestsWithTarget to cancel the pending request, but it doesn't seem to be working. Below are some code snippets:

        - (void)viewDidLoad {
            [self performSelector:@selector(myStopUpdatingLocation) withObject:nil afterDelay:6];
        }

        - (void)viewWillDisappear:(BOOL)animated {
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(myStopUpdatingLocation) object:nil];
        }

    Am I doing something incorrect here? The method myStopUpdatingLocation is defined in the same class from which I'm calling the perform requests. A little more background: the feature I'm trying to implement finds the user's location, searches Google for some locations around it, and displays several annotations on the map. In viewDidLoad I start updating the location with CLLocationManager. I've built in a timeout after 6 seconds, in case I don't get my desired accuracy within that time, and I'm using performSelector to do this. What can happen is that the user clicks the back button in the view and this delayed call still executes even though all my properties have been released, causing a crash. Thanks in advance! James

    Read the article

  • Javascript + PHP $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (JavaScript) to the server, I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug: a=1&q=151a45a150.... But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. I tried both $_POST and $_REQUEST; nothing works. Headers being sent:

        POST /test.php HTTP/1.1
        Host: website.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://website.com/
        Content-Length: 156
        Content-Type: text/plain; charset=UTF-8
        Pragma: no-cache
        Cache-Control: no-cache

    Thank you for any suggestions.
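
    A hedged observation based on the headers above: the request goes out as Content-Type: text/plain, and PHP only populates $_POST for application/x-www-form-urlencoded (or multipart/form-data) bodies, which would explain why php://input contains the data while $_POST stays empty. A minimal sketch of setting the header on the XMLHttpRequest:

        var xmlhttp = new XMLHttpRequest();
        xmlhttp.open("POST", "/test.php", true);
        // Tell PHP the body is form-encoded so it parses it into $_POST
        xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xmlhttp.send("a=1&q=151a45a150");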

    Read the article

  • How to set the request start time with HAProxy?

    - by Tupy
    I would like to measure the time spent in the full request stack. New Relic captures the time spent in the middleware (e.g. Java, Python, Ruby) and the request time (see https://newrelic.com/docs/features/tracking-front-end-time). For this, I need to set the X-Request-Start header as the request passes through the HAProxy load balancer. The haproxy.cfg should look something like:

        backend www
            balance roundrobin
            mode http
            reqadd "X-Request-Start" UNKNOWN_TIME_FUNCTION()
            server servername 192.168.0.1:80 weight 1 check

    Is there a native HAProxy function to replace UNKNOWN_TIME_FUNCTION()?
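
    A hedged sketch of one way to do this on HAProxy 1.5 or later, where http-request set-header accepts log-format variables (%Ts is the request timestamp in seconds and %ms the millisecond part); older versions limited to reqadd cannot interpolate a timestamp like this:

        backend www
            balance roundrobin
            mode http
            http-request set-header X-Request-Start t=%Ts%ms
            server servername 192.168.0.1:80 weight 1 check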

    Read the article

  • Unable to set maxReceivedMessageSize through web.config

    - by Michael Mortensen
    Hello there, I have now investigated the 400 - BadRequest code for the last two hours. A lot of sugestions goes towards ensuring the bindingConfiguration attribute is set correctly, and in my case, it is. Now, I need YOUR help before destroying the building i am in :-) I run a WCF RestFull service (very lightweight, using this resource for inspiration: http://msdn.microsoft.com/en-us/magazine/dd315413.aspx) which (for now) accepts an XmlElement (POX) provided through the POST verb. I am currently ONLY using Fiddler's request builder before implementing a true client (as this is mixed environments). When I do this for XML smaller than 65K, it works fine - larger, it throws this exception: The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element. Here is my web.config file (which I even included the client-tag for (desperate times!)): <system.web> <httpRuntime maxRequestLength="1500000" executionTimeout="180"/> </system.web> <system.serviceModel> <diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <bindings> <webHttpBinding> <binding name="WebHttpBinding" maxReceivedMessageSize="1500000" maxBufferPoolSize="1500000" maxBufferSize="1500000" closeTimeout="00:03:00" openTimeout="00:03:00" receiveTimeout="00:10:00" sendTimeout="00:03:00"> <readerQuotas maxStringContentLength="1500000" maxArrayLength="1500000" maxBytesPerRead="1500000" /> <security mode="None"/> </binding> </webHttpBinding> </bindings> <client> <endpoint address="" binding="webHttpBinding" bindingConfiguration="WebHttpBinding" contract="Commerce.ICatalogue"/> </client> <services> <service behaviorConfiguration="ServiceBehavior" name="Catalogue"> <endpoint address="" behaviorConfiguration="RestFull" binding="webHttpBinding" bindingConfiguration="WebHttpBinding" contract="Commerce.ICatalogue" /> <!-- endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" / --> </service> </services> <behaviors> <endpointBehaviors> <behavior name="RestFull"> <webHttp/> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceDebug httpHelpPageEnabled="true" includeExceptionDetailInFaults="true"/> <serviceMetadata httpGetEnabled="true"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Thanks in advance for any help leading to succesfull call with 65K XML ;-)

    Read the article

  • Streaming binary data to WCF rest service gives Bad Request (400) when content length is greater than 64k

    - by Mikey Cee
    I have a WCF service that takes a stream: [ServiceContract] public class UploadService : BaseService { [OperationContract] [WebInvoke(BodyStyle=WebMessageBodyStyle.Bare, Method=WebRequestMethods.Http.Post)] public void Upload(Stream data) { // etc. } } This method is to allow my Silverlight application to upload large binary files, the easiest way being to craft the HTTP request by hand from the client. Here is the code in the Silverlight client that does this: const int contentLength = 64 * 1024; // 64 Kb var request = (HttpWebRequest)WebRequest.Create("http://localhost:8732/UploadService/"); request.AllowWriteStreamBuffering = false; request.Method = WebRequestMethods.Http.Post; request.ContentType = "application/octet-stream"; request.ContentLength = contentLength; using (var outputStream = request.GetRequestStream()) { outputStream.Write(new byte[contentLength], 0, contentLength); outputStream.Flush(); using (var response = request.GetResponse()); } Now, in the case above, where I am streaming 64 kB of data (or less), this works OK and if I set a breakpoint in my WCF method, and I can examine the stream and see 64 kB worth of zeros - yay! The problem arises if I send anything more than 64 kB of data, for instance by changing the first line of my client code to the following: const int contentLength = 64 * 1024 + 1; // 64 kB + 1 B This now throws an exception when I call request.GetResponse(): The remote server returned an error: (400) Bad Request. In my WCF configuration I have set maxReceivedMessageSize, maxBufferSize and maxBufferPoolSize to 2147483647, but to no avail. Here are the relevant sections from my service's app.config: <service name="UploadService"> <endpoint address="" binding="webHttpBinding" bindingName="StreamedRequestWebBinding" contract="UploadService" behaviorConfiguration="webBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <host> <baseAddresses> <add baseAddress="http://localhost:8732/UploadService/" /> </baseAddresses> </host> </service> <bindings> <webHttpBinding> <binding name="StreamedRequestWebBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:05:00" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" transferMode="StreamedRequest"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <endpointBehaviors> </behaviors> How do I make my service accept more than 64 kB of streamed post data?

    Read the article

  • Symfony : ajax call cause server to queue next queries

    - by Remiz
    Hello, I have a problem with my application when an Ajax call to the server takes too much time: it queues all the user's other queries until it is done server-side (I realized that cancelling the call client-side has no effect; the user still has to wait). Here is my test case:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="another-page.php">Go to another page on the same server</a>
        <script type="text/javascript">
        url = 'http://localserver/some-very-long-complex-query';
        $.get(url);
        </script>

    So when the GET is fired and I then click on the link, the server finishes serving the first call before bringing me to the other page. I want to avoid this behaviour. I'm on a LAMP server, and I'm looking for a way to inform the server that the user aborted the query, with a function like connection_aborted(); do you think that's the way to go? Also, I know that the longest part of this PHP script is a MySQL query, so even if connection_aborted() can detect that the user cancelled the call, I would still need to check this during the MySQL query... I'm not really sure PHP can handle this kind of "event". So if you have any better idea, I can't wait to hear it. Thank you.
    Update: after further investigation, I found that the problem happens only with the Symfony framework (which I omitted to mention, my bad). It seems that an Ajax call locks out any other future call. It may be related to the controller or the routing system; I'm looking into it. For those interested in the problem, here is my new test case: a new project with Symfony 1.4.3, default configuration (I just created an app and a default module), and jQuery 1.4 for the Ajax query. Here is my actions.class.php (in my single module):

        class defaultActions extends sfActions
        {
          public function executeIndex(sfWebRequest $request)
          {
            // Do nothing
          }

          public function executeNewpage()
          {
            // Do also nothing
          }

          public function executeWaitingaction()
          {
            // Wait
            sleep(30);
            return false;
          }
        }

    Here is my indexSuccess.php template file:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="<?php echo url_for('default/newpage');?>">Go to another symfony action</a>
        <script type="text/javascript">
        url = '<?php echo url_for('default/waitingaction');?>';
        $.get(url);
        </script>

    The new page template is not very relevant, but with this I am able to reproduce the lock problem I have in my real application. Is anybody else having the same issue? Thanks.
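
    One hedged possibility, assuming the default file-based session storage: PHP (and Symfony's session storage on top of it) holds a lock on the session for the duration of each request, so the 30-second action above keeps the lock and every later request from the same browser queues behind it. A sketch of releasing the session early in the long-running action, using the plain PHP call (Symfony's sfSessionStorage wraps the same mechanism):

        public function executeWaitingaction()
        {
          // Release the session lock before the slow work so that other
          // requests from the same user are not queued behind this one.
          session_write_close();

          sleep(30);
          return false;
        }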

    Read the article

  • HTTP DOM: request.use? Usage?

    - by Jim G.
    I'm looking at the following code block in javascript: var request = new Request(); if(request.Use()) // What exactly does this do? { // ...do stuff } else { // no ajax support? } I've never seen anyone invoke the request.Use() method. My Question: What exactly does request.Use() check? Does it in fact check for AJAX support? Can anyone redirect me to an online API reference?

    Read the article
