Search Results

Search found 58546 results on 2342 pages for 'bpel http basic authentic'.


  • Getting an internal server error using rest-client in Ruby to make an HTTP POST

    - by Angela
    Hi, this is my code and I don't know how to debug it, because I just get an "internal server error". I am trying to HTTP POST to an external ASPX page:

        def upload
          # PostalMethods URI:
          #   https://api.postalmethods.com/2009-02-26/PostalWS.asmx?op=UploadFile
          #   http://www.postalmethods.com/method/2009-02-26/UploadFile
          uri = 'https://api.postalmethods.com/2009-02-26/PostalWS.asmx'
          @postalcard = Postalcard.find(:last)
          # Username=string&Password=string&MyFileName=string&FileBinaryData=string&Permissions=string&Description=string&Overwrite=string
          filename = @postalcard.postalimage.original_filename
          filebinarydata = File.open("#{@postalcard.postalimage.path}", 'rb')
          body = "Username=me&Password=sekret&MyFileName=#{filename}&FileBinaryData=#{filebinarydata}"
          @response = RestClient.post(uri,
                                      body, # body as string
                                      { "Content-Type" => 'application/x-www-form-urlencoded',
                                        "Content-Length" => @postalcard.postalimage.size })
        end

    Read the article

  • Track HTTP domain referer

    - by tony noriega
    Can I track the HTTP referrer with JavaScript and append a variable to the URL string to store in a database? Or could I track a cookie that the user gets? (Very layman's terms here, sorry.) If the HTTP referrer is domain.com, add '&referer=google' to the URL, which should stay with them during their session. Or: when a user clicks my Google AdWords ad, they get a cookie with the referring domain in it; try to read that cookie and append the same variable. Any thoughts?

    Read the article

  • HTTP redirect fallback

    - by Ondrej Stastny
    Hi, is there a way to provide a fallback URL for when an HTTP redirect times out? What I'm trying to achieve: when the original URL is hit, I respond with an HTTP redirect (300? 307?) giving the browser the new URL, plus a fallback URL in case the new URL times out. I was also considering doing this client-side, but it just does not seem very effective (in terms of speed and client support): what I would probably do is put a tiny 1x1px image on each server, try loading it, check with JavaScript which server is up, and redirect there. Any other ideas? Thanks

    Read the article

  • HTTP header for sending PDF, problem in Firefox

    - by David
    On Windows, when I save a PDF with the Firefox Adobe Reader plugin, this problem occurs: the saved file is named

        http://www.example.com/opendocument.php_doc=._docs_doc01

    My headers are:

        header('Content-type: application/pdf');
        //header('Content-Disposition: inline; filename=doc01.pdf');
        header("Content-Transfer-Encoding: binary");
        header("Content-Length: ".filesize($pdf));

    The original call is:

        http://www.example.com/opendocument.php?doc=./docs/doc01.pdf

    I'm not interested in an attachment header: the document must open inside the website, not download or open in an external window. Any idea?

    Read the article

  • HTTP Content-type header for cached files

    - by Brian
    Hello, using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-Type is only set correctly the first time I load it; subsequent refreshes are missing Content-Type altogether, and that is creating some problems for me. I can get around this by appending a random query-string value to the end of each filename, e.g. http://www.site.com/script.js?12345. However, I don't want to have to do that, since caching is good and all I want is for the Content-Type to be present. I've tried using a RewriteRule to force the type, but that still didn't solve the problem. Any ideas? Thanks, Brian

    Read the article

  • Jaxer and HTTP proxy requests...

    - by rakhavan
    Thanks to everyone in advance. I'm using Jaxer.Sandbox and making requests just fine. I'd like these requests to go through my HTTP proxy (Squid, for example). Here is the code that is currently working for me:

        window.onload = function() {
            // the url to scrape
            var url = "http://www.cnn.com/";
            // our sandboxed browser
            var sandbox = new Jaxer.Sandbox();
            // open options
            var openOptions = new Jaxer.Sandbox.OpenOptions();
            openOptions.allowJavaScript = false;
            openOptions.allowMetaRedirects = false;
            openOptions.allowSubFrames = false;
            openOptions.onload = function() {
                // do something onload
            };
            // make the call
            sandbox.open(url, null, openOptions);
            // write the response
            Jaxer.response.setContents(sandbox.toHTML());
        };

    How can I send this request through a proxy server? Thanks, Reza.

    Read the article

  • Are rails timers reliable when using Net::HTTP?

    - by Frank
    Hi all. When reading data from a potentially slow website, I want to ensure that get_response cannot hang, so I added a timer to time out after x seconds. So far, so good. I then read http://ph7spot.com/musings/system-timer, which illustrates that in certain situations timer.rb doesn't work due to Ruby's implementation of threads. Does anyone know if this is one of those situations?

        url = URI.parse(someurl)
        begin
          Timeout::timeout(30) do
            response = Net::HTTP.get_response(url)
            @responseValue = CGI.unescape(response.body)
          end
        rescue Exception => e
          dosomething
        end

    Read the article

  • How to display characters in an HTTP GET response correctly with the right encoding

    - by DixieFlatline
    Hello! Does anyone know how to read č, š, ž characters in an HTTP GET response properly? When I make the request in a browser, the browser displays all the characters correctly, but in a Java program with the Apache jars I don't know how to set the encoding right. I tried

        client.getParams().setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, "UTF-8");

    but it's not working. My code:

        HttpClient client = new DefaultHttpClient();
        String getURL = "http://www.google.com";
        HttpGet get = new HttpGet(getURL);
        try {
            HttpResponse responseGet = client.execute(get);
            HttpEntity resEntityGet = responseGet.getEntity();
            if (resEntityGet != null) {
                Log.i("GET RESPONSE", EntityUtils.toString(resEntityGet));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
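
    One possible angle, sketched below under the assumption that the server really sends UTF-8 (the URL and class name are illustrative): HTTP_CONTENT_CHARSET only supplies a default for responses that declare no charset, so reading the raw bytes and decoding them explicitly is often more reliable.

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class Utf8Fetch {
            public static void main(String[] args) throws Exception {
                HttpClient client = new DefaultHttpClient();
                HttpResponse response = client.execute(new HttpGet("http://www.example.com/"));
                HttpEntity entity = response.getEntity();
                if (entity != null) {
                    // Grab the raw bytes and decode them explicitly as UTF-8,
                    // bypassing whatever charset the response headers declare
                    // (or fail to declare).
                    byte[] raw = EntityUtils.toByteArray(entity);
                    String body = new String(raw, "UTF-8");
                    System.out.println(body);
                }
            }
        }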

    Read the article

  • HTTP Data chunks over multiple packets?

    - by myforwik
    What is the correct way for an HTTP server to send data over multiple packets? For example, I want to transfer a file. The first packet I send is:

        HTTP/1.1 200 OK
        Content-Type: application/force-download
        Content-Type: application/download
        Content-Type: application/octet-stream
        Content-Description: File Transfer
        Content-Disposition: attachment; filename=test.dat
        Content-Transfer-Encoding: chunked

        400
        <first 1024 bytes here>
        400
        <next 1024 bytes here>
        400
        <next 1024 bytes here>

    Now I need to make a new packet. If I just send:

        400
        <next 1024 bytes here>

    all the clients close their connections on me and the files are cut short. What headers do I put in a second packet to continue the data stream?
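
    For reference, a sketch of spec-correct chunked framing over a raw socket (the class and method names are made up for illustration). Note that the header is Transfer-Encoding: chunked rather than Content-Transfer-Encoding, every chunk is "hex length, CRLF, data, CRLF", the body ends with a zero-length chunk, and there are no per-packet headers at all, since TCP segment boundaries are invisible to HTTP.

        import java.io.OutputStream;

        public class ChunkedWriter {
            private final OutputStream out;

            public ChunkedWriter(OutputStream out) { this.out = out; }

            // Send the response head; chunked framing replaces Content-Length.
            public void writeHead() throws Exception {
                out.write(("HTTP/1.1 200 OK\r\n"
                        + "Content-Type: application/octet-stream\r\n"
                        + "Content-Disposition: attachment; filename=test.dat\r\n"
                        + "Transfer-Encoding: chunked\r\n"
                        + "\r\n").getBytes("US-ASCII"));
            }

            // One chunk: hex length, CRLF, raw bytes, CRLF.
            public void writeChunk(byte[] data, int len) throws Exception {
                out.write((Integer.toHexString(len) + "\r\n").getBytes("US-ASCII"));
                out.write(data, 0, len);
                out.write("\r\n".getBytes("US-ASCII"));
            }

            // Terminating zero-length chunk tells the client the body is complete.
            public void finish() throws Exception {
                out.write("0\r\n\r\n".getBytes("US-ASCII"));
                out.flush();
            }
        }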

    Read the article

  • Ruby on Rails: reducing the complexity of parameters in a RESTful HTTP POST request (multi-model)

    - by randombits
    I'm using cURL to test a RESTful HTTP web service. The problem is I'm normally submitting a bunch of values, like this:

        curl -d "firstname=bob&lastname=smith&age=25&from=kansas&someothermodelattr=val" \
             -H "Content-Type: application/x-www-form-urlencoded" http://mysite/people.xml -i

    The problem with this is that my controller will then have code like this:

        unless params[:firstname].nil?
        end
        unless params[:lastname].nil?
        end
        # FINALLY
        @person = People.new(params[:firstname], params[:lastname], params[:age], params[:from])

    etc. What's the best way to simplify this? My Person model has all the validations it needs. Is there a way (assuming the request has multi-model parameters) that I can just do

        @person = People.new(params[:person])

    and then the initializer can take care of the rest?

    Read the article

  • Java HTTP Client Request with defined timeout

    - by Maxim Veksler
    Hello, I would like to add BIT (built-in tests) to a number of servers in my cloud. I need the request to fail on a large timeout. How should I do this with Java? Trying something like the below does not seem to work:

        public class TestNodeAliveness {
            public static NodeStatus nodeBIT(String elasticIP) throws ClientProtocolException, IOException {
                HttpClient client = new DefaultHttpClient();
                client.getParams().setIntParameter("http.connection.timeout", 1);
                HttpUriRequest request = new HttpGet("http://192.168.20.43");
                HttpResponse response = client.execute(request);
                System.out.println(response.toString());
                return null;
            }

            public static void main(String[] args) throws ClientProtocolException, IOException {
                nodeBIT("");
            }
        }

    EDIT (clarifying which library is being used): I'm using httpclient from Apache; here is the relevant pom.xml section:

        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.0.1</version>
            <type>jar</type>
        </dependency>
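
    One approach, sketched for the Apache HttpClient 4.0.x that pom pulls in: set both the connect timeout and the socket (read) timeout through HttpConnectionParams. Values are in milliseconds, so 1 means one millisecond, not one second. The host and class name below are illustrative.

        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.params.HttpConnectionParams;
        import org.apache.http.params.HttpParams;

        public class TimeoutFetch {
            public static void main(String[] args) throws Exception {
                HttpClient client = new DefaultHttpClient();
                HttpParams params = client.getParams();
                // Fail if the TCP connection cannot be established within 5 s...
                HttpConnectionParams.setConnectionTimeout(params, 5000);
                // ...or if the socket stalls for more than 10 s while reading.
                HttpConnectionParams.setSoTimeout(params, 10000);
                HttpResponse response = client.execute(new HttpGet("http://192.168.20.43/"));
                System.out.println(response.getStatusLine());
            }
        }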

    Read the article

  • How to communicate/share a session between pages over HTTP and HTTPS

    - by spirytus
    What is common practice for coding web applications where part of the site has to be secured (e.g. the checkout section) and part not necessarily, say the homepage? As far as I know, sharing sessions between the HTTP and HTTPS parts of the site is not easily possible (or is it?). What would be the common approach if I wanted an HTTP page like the homepage to display shopping-cart data (items) that users added on HTTPS pages? How would those two parts of the site communicate if necessary? Also, isn't it a security flaw in popular shopping carts that many of them have only the checkout pages secured (SSL) and the rest not? I'm using PHP if it makes any difference.

    Read the article

  • J2ME's extra annoying HTTP permission prompt

    - by Hans Malherbe
    Some phones only prompt the user for permission the first time a connection is made. Others pop up the permission prompt whenever the MIDlet attempts to make an HTTP connection! What are the options if we want to suppress the prompt? Can we sign the JAR using only one CA (Certificate Authority) and have it work on all devices? Do we have to pay for a signature on every release? Is it an option to create our own CA certificate and tell our customers to install it on their device? Alternatively, it seems that plain socket connections do not suffer the same way. Is there a free implementation of HTTP on top of TCP for J2ME?
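
    As a starting point for the socket route, a minimal sketch using the Generic Connection Framework (CLDC); the class is hypothetical. It speaks HTTP/1.0 to sidestep chunked encoding and ignores redirects, and whether SocketConnection prompts less than HttpConnection still varies by handset and MIDP security domain.

        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.microedition.io.Connector;
        import javax.microedition.io.SocketConnection;

        public class RawHttpGet {
            // Fetch "/" from a host with a bare HTTP/1.0 request over a socket.
            public static String fetch(String host) throws Exception {
                SocketConnection sc =
                    (SocketConnection) Connector.open("socket://" + host + ":80");
                try {
                    OutputStream out = sc.openOutputStream();
                    out.write(("GET / HTTP/1.0\r\nHost: " + host + "\r\n\r\n").getBytes());
                    out.flush();
                    InputStream in = sc.openInputStream();
                    StringBuffer sb = new StringBuffer();
                    int c;
                    while ((c = in.read()) != -1) {
                        sb.append((char) c);
                    }
                    return sb.toString(); // status line + headers + body
                } finally {
                    sc.close();
                }
            }
        }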

    Read the article

  • failed to open stream: HTTP request failed! HTTP/1.1 400 Bad Request

    - by muralikalpana
    I am accessing images from another website. I am getting "failed to open stream: HTTP request failed! HTTP/1.1 400 Bad Request" when copying some (not all) images. Here is my code:

        $img = $_GET['img']; // the other website's image URL

        function getFileextension($file) {
            return end(explode(".", $file));
        }

        $file = $img;
        $fileext = getFileextension($file);
        if ($fileext == 'jpg' || $fileext == 'gif' || $fileext == 'jpeg' || $fileext == 'png' || $fileext == 'x-png' || $fileext == 'pjpeg') {
            if ($img != '') {
                $rand_variable1 = rand(10000, 100000);
                $node_online_name1 = $rand_variable1."image.".$fileext;
                $s = copy($img, "images/".$node_online_name1);
            }
        }

    Read the article

  • Redirected wikipedia request

    - by Le_Coeur
    Hi people, I need to write a program that redirects http://localhost:8080 to en.wikipedia.org. It seems easy, but I have some problems (only with Wikipedia; with other sites it works fine). I make the URL for Wikipedia:

        URL url = new URL("http", "en.wikipedia.org", 80, "/wiki");

    then open a URLConnection and extract the headers, and when I call connection.getInputStream() I receive a 404 Not Found. So I tried a hack for the Host header, because this way the Host header is localhost:8080; I changed the Host header to Wikipedia's and it works. But after requesting http://localhost:8080 in the browser, Wikipedia opens and the URL in the browser changes to en.wikipedia.org, while I want to stay on localhost :)
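
    One way to keep localhost in the address bar, sketched below with a hypothetical relay(...) helper: instead of answering with a 3xx redirect (which makes the browser itself navigate to en.wikipedia.org), fetch the page server-side and stream the bytes back. A real proxy would also need to forward the status line and headers and rewrite absolute links.

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class WikiRelay {
            // Copy en.wikipedia.org's response body to the client instead of
            // redirecting, so the browser's address bar stays on localhost:8080.
            static void relay(String path, OutputStream clientOut) throws Exception {
                URL url = new URL("http", "en.wikipedia.org", 80, path);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // The Host header is derived from this URL, not from our
                // listener, so the server sees en.wikipedia.org as expected.
                conn.setInstanceFollowRedirects(true);
                InputStream in = conn.getInputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    clientOut.write(buf, 0, n);
                }
                in.close();
                conn.disconnect();
            }
        }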

    Read the article

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (such as http://0.0.0.0/foo.htm) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this; that shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.

    Read the article

  • After HTTP GET request, the resulting string is cut-off (incomplete)

    - by Jayomat
    Hi all, I'm making an HTTP GET request like this:

        try {
            HttpClient client = new DefaultHttpClient();
            String getURL = "http://busspur02.aseag.de/bs.exe?SID=5FC39&ScreenX=1440&ScreenY=900&CMD=CR&Karten=true&DatumT="+day+"&DatumM="+month+"&DatumJ="+year+"&ZeitH="+hour+"&ZeitM="+min+"&Intervall=60&Suchen=(S)uchen&GT0=Aachen&T0=H&HT0="+start_from+"&GT1=Aachen&T0=H&HT1="+destination+"";
            HttpGet get = new HttpGet(getURL);
            HttpResponse responseGet = client.execute(get);
            HttpEntity resEntityGet = responseGet.getEntity();
            if (resEntityGet != null) {
                // do something with the response
                Log.i("GET RESPONSE", EntityUtils.toString(resEntityGet));
            }
            ...

    It all works well; the only problem is that the output from Log.i is cut off (it's not the complete HTML page). If I make the same request in a browser, I get about three times as much output as when making the request in the emulator with the code above. What's wrong?
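
    Before blaming the request itself, note that Android's logcat truncates long messages (at roughly 4 kB), so EntityUtils.toString may well be returning the whole page even though the log looks short. A small sketch (hypothetical helper) that logs the response in slices makes this easy to check:

        import android.util.Log;

        public class LogChunks {
            // logcat truncates very long messages, so a complete response can
            // still look cut off in the log. Log it in slices instead.
            static void logLong(String tag, String msg) {
                final int max = 3000; // stay safely below the logcat limit
                for (int i = 0; i < msg.length(); i += max) {
                    Log.i(tag, msg.substring(i, Math.min(msg.length(), i + max)));
                }
            }
        }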

    Read the article

  • Problem with multiple AJAX HTTP GET requests with different input variables using jQuery

    - by Thanasis
    I want to make asynchronous GET requests and get different results based on the input that I provide to each one. Here is my code:

        param = 1;
        $.get('http://localhost/my_page_1.php', param, function(data) {
            alert("id = " + param);
            $('.result').html(data);
        });

        param = 2;
        $.get('http://localhost/my_page_2.php', param, function(data) {
            alert("id = " + param);
            $('.result').html(data);
        });

    The result for both requests is "id = 2". I want the results to be "id = 1" for the first request and "id = 2" for the second one. I want to do this for many requests in one HTML file and integrate the results into the HTML as soon as they are ready. Can anyone help me solve this problem? Thank you in advance, Thanasis

    Read the article

  • Two-way communication over HTTP between a .NET service and a Windows Forms client

    - by user1802969
    I am looking to accomplish two-way communication over HTTP between a .NET service (WCF SOAP or REST; both options are open) hosted on IIS 7 and a Windows Forms client running on Windows 7. WebSockets are not supported on IIS 7, and all the other Comet techniques only allow the web server to push data to the client, not the other way around. The client will be very chatty, and there are thousands of clients, so I want to avoid creating a new HTTP request for each message to the server, though that is the last option. Is there any way to do this?

    Read the article

  • Checksum error when building an HTTP packet (but plain TCP, like SYN/ACK, is OK)

    - by Hila
    I am building a NAT program. I change each packet that comes from our internal subnet, rewriting its source IP address with libnet functions (I catch the packet with libpcap, put it into the sniff structures, and build the new packet with libnet). I am trying to build an HTTP packet. When I look in Wireshark, I see that the new packet I have built is exactly like the original packet (the only difference is that I changed the source port and IP), but there is a checksum error, so the server does nothing with the packet I sent, because the checksum field is wrong. When I send a plain TCP packet (like SYN or ACK), the checksum is OK and the server responds. Does anyone know what can cause this problem? The new checksum in the other packets is calculated as it should be, but in the HTTP packet it isn't.
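
    For reference, a sketch of the TCP checksum calculation (RFC 793): a ones'-complement sum over the IPv4 pseudo-header, the TCP header, and the payload. With payload-carrying segments like HTTP, forgetting to include the payload bytes, or mishandling the padding of an odd-length payload, produces exactly this symptom while empty SYN/ACK segments still check out.

        public class TcpChecksum {
            // Ones'-complement checksum over the IPv4 pseudo-header, TCP header,
            // and payload. srcIp/dstIp are 4 bytes each; tcp is header + payload
            // with its checksum field (bytes 16-17) zeroed out.
            static int checksum(byte[] srcIp, byte[] dstIp, byte[] tcp) {
                long sum = 0;
                // Pseudo-header: src IP, dst IP, zero + protocol (6), TCP length.
                sum += word(srcIp, 0) + word(srcIp, 2);
                sum += word(dstIp, 0) + word(dstIp, 2);
                sum += 6;            // protocol number for TCP
                sum += tcp.length;   // TCP header + payload length
                // TCP header and payload, padded with a zero byte if odd length.
                for (int i = 0; i + 1 < tcp.length; i += 2) sum += word(tcp, i);
                if (tcp.length % 2 == 1) sum += (tcp[tcp.length - 1] & 0xFF) << 8;
                // Fold the carries and take the ones' complement.
                while ((sum >> 16) != 0) sum = (sum & 0xFFFF) + (sum >> 16);
                return (int) (~sum & 0xFFFF);
            }

            private static int word(byte[] b, int i) {
                return ((b[i] & 0xFF) << 8) | (b[i + 1] & 0xFF);
            }
        }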

    Read the article

  • Java: stopping long HTTP operations

    - by kilonet
    I'm using the Apache Commons library for HTTP operations:

        HttpClient client = getClient();
        PutMethod put = new PutMethod(url);
        FileRequestEntity countingFileRequestEntity = new FileRequestEntity(file, "application/octet-stream");
        put.setRequestEntity(countingFileRequestEntity);
        client.executeMethod(put);
        put.releaseConnection();

    I wonder how I can safely interrupt a long HTTP operation. Running it in a new thread and stopping the thread seems like the wrong way. HttpMethodBase has an abort() method, but I can't understand how to use it, because client.executeMethod blocks execution until it completes.
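
    One pattern that fits abort()'s design, sketched for Commons HttpClient 3.x with a made-up wrapper class: let a watchdog thread call put.abort() while the worker thread is blocked inside executeMethod; the blocked call then terminates with an exception instead of running to completion.

        import java.util.Timer;
        import java.util.TimerTask;
        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.PutMethod;

        public class AbortableUpload {
            public static void run(final HttpClient client, final PutMethod put,
                                   long limitMs) {
                // Watchdog: abort() may be called from another thread and makes
                // the blocked executeMethod call fail instead of finishing.
                Timer watchdog = new Timer(true);
                watchdog.schedule(new TimerTask() {
                    public void run() { put.abort(); }
                }, limitMs);
                try {
                    client.executeMethod(put);
                } catch (Exception e) {
                    // Aborted (or failed); treat as cancelled.
                } finally {
                    watchdog.cancel();
                    put.releaseConnection();
                }
            }
        }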

    Read the article

  • Synchronous HTTP Client with .NET sockets

    - by Ray Wits
    Does anyone know of any open-source C# projects or sample code that implement a synchronous HTTP client using sockets? I'm working on a project where I need an HTTP client built on raw sockets. It can't use WebRequest or WebClient, nor can it use asynchronous sockets; don't ask. Also it would ideally be on .NET 2.0 (yeah, very cutting edge here). I figured the web would have tons of samples for this, but surprisingly I couldn't find any, probably because everyone is fortunate enough to use the built-in APIs. If I don't find something I'll have to write it myself, and I don't really want to reinvent that wheel.

    Read the article
