Search Results

Search found 65593 results on 2624 pages for 'http put'.

  • Suggested HTTP REST status code for 'request limit reached'

    - by Andras Zoltan
    I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm looking through the HTTP 1.1 spec and trying to decide how to communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that 403 Forbidden was the one, but this line from the spec bothered me: "Authorization will not help and the request SHOULD NOT be repeated." It actually appears that 503 Service Unavailable is a better one to use, since it allows a retry time to be communicated through the Retry-After header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if 402 Payment Required had been finalized!), but I figure that this could equally be squeezed into a 503 response. Which do you think I should use? Or is there another I've not considered?
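
    A minimal sketch of the 503 approach, using only Python's standard library; the window length, limit, and bookkeeping are illustrative, not part of any spec:

        # Throttling sketch: answer 503 + Retry-After once a client exceeds its quota.
        import time
        from http.server import BaseHTTPRequestHandler, HTTPServer

        RATE_LIMIT = 100          # requests allowed per window (illustrative)
        WINDOW_SECONDS = 3600     # length of the throttling window (illustrative)

        hits = {}                 # client address -> (window start, request count)

        class ThrottledHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                client = self.client_address[0]
                start, count = hits.get(client, (time.time(), 0))
                if time.time() - start > WINDOW_SECONDS:
                    start, count = time.time(), 0   # new window
                hits[client] = (start, count + 1)
                if count + 1 > RATE_LIMIT:
                    # Tell the client when to come back instead of a bare 403.
                    self.send_response(503)
                    retry = max(1, int(start + WINDOW_SECONDS - time.time()))
                    self.send_header("Retry-After", str(retry))
                    self.end_headers()
                    return
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"ok")

        if __name__ == "__main__":
            HTTPServer(("", 8080), ThrottledHandler).serve_forever()

    (Worth noting: RFC 6585 later standardized 429 Too Many Requests for exactly this situation, and it may also carry Retry-After.)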

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

        curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

        curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *   start date: 2010-01-26 07:06:33 GMT
        *   expire date: 2011-01-28 11:22:07 GMT
        *   common name: *.example.com (matched)
        *   issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue
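
    The trace shows why the two commands differ: with -T - curl cannot know the size of stdin, so it sends Transfer-Encoding: chunked plus Expect: 100-continue, and some servers stall on chunked PUT bodies. One way to sidestep that is to buffer stdin and send an explicit Content-Length; a sketch with Python's third-party requests library (the password is a placeholder):

        import sys
        import requests  # third-party; pip install requests
        from requests.auth import HTTPDigestAuth

        # Buffer stdin fully so the request carries a Content-Length header
        # instead of a chunked body (which curl's "-T -" form uses).
        body = sys.stdin.buffer.read()

        resp = requests.put(
            "https://example.com/file.txt",          # URL from the question
            data=body,
            auth=HTTPDigestAuth("user", "secret"),   # password is a placeholder
        )
        print(resp.status_code)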

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language, and perhaps later ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as the time zone, screen resolution, and the complete list of fonts and plugins available. My first impression is: is all this information really usable by, or used on, a majority of websites? For example, how many sites really send different content types depending on the HTTP Accept header, or vary by which fonts are available (I thought CSS had taken care of this)? Say some of these headers or pieces of JS functionality were gone one day. Which ones would:

    - never be noticed to be gone?
    - impact user experience?
    - impact server performance?
    - be immediately reimplemented because the Internet cannot work without them?

    Extra credit for differentiating between what can be done, what should be done, and what is done in most situations.
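
    For reference, the "identifying bits" figure Panopticlick reports is just surprisal, the negative log2 of the fraction of browsers sharing a value; a quick sketch of the arithmetic (the fraction below is made up):

        import math

        # Surprisal of an attribute value: bits = -log2(P(value)).
        # If only 1 in 4096 browsers shares your exact font list, that
        # list alone contributes 12 bits of identifying information.
        def identifying_bits(fraction_of_browsers: float) -> float:
            return -math.log2(fraction_of_browsers)

        print(identifying_bits(1 / 4096))  # 12.0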

  • What is the HTTP_PROFILE browser header and how is it used?

    - by Tom
    I've just come across the HTTP_PROFILE header, which seems to be used by mobile browsers to point to an .xml document describing the device's capabilities. A Google search doesn't turn up any definitive resources on what this is and how it should be used. Can anyone point me to something along the lines of a spec or W3C standard?

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong.

        server {
          listen *:80;
          server_name <%= @fqdn %>;
          #root /nowhere;
          #rewrite ^ https://$server_name$request_uri? permanent;
          #rewrite ^ https://$server_name$request_uri permanent;
          #return 301 https://$server_name$request_uri;
          #return 301 http://$server_name$request_uri;
          #return 301 http://192.168.33.10$request_uri;
          return 301 http://$host$request_uri;
        }

        server {
          listen *:443 ssl default_server;
          server_name <%= @fqdn %>;
          server_tokens off;
          root <%= @git_home %>/gitlab/public;
          ssl on;
          ssl_certificate <%= @gitlab_ssl_cert %>;
          ssl_certificate_key <%= @gitlab_ssl_key %>;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file, which is not found in the root folder is requested,
          # then the proxy pass the request to the upsteam (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            etc....

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https?

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:

    - http://wiki.nginx.org/Pitfalls
    - How to make nginx redirect
    - How to force or redirect to SSL in nginx?
    - nginx ssl redirect
    - Nginx & Https Redirection
    - https://www.tinywp.in/301-redirect-wordpress/
    - How to force or redirect to SSL in nginx?

  • HTTP Content-type header for cached files

    - by Brian
    Hello, using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-Type is only set correctly the first time I load it; subsequent refreshes are missing Content-Type altogether, and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query-string value to the end of each filename, e.g. http://www.site.com/script.js?12345. However, I don't want to have to do that, since caching is good and all I want is for the Content-Type to be present. I've tried using a RewriteRule to force the type, but that didn't solve the problem. Any ideas? Thanks, Brian

    More details. HTTP headers WITHOUT random query string value (http://localhost/script.js):

        GET /script.js HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://localhost/
        Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;
        If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT
        If-None-Match: "3440e9-119ed-485621404f100"
        Cache-Control: max-age=0

        HTTP/1.1 304 Not Modified
        Date: Thu, 29 Apr 2010 20:19:44 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
        Connection: Keep-Alive
        Keep-Alive: timeout=5, max=100
        Etag: "3440e9-119ed-485621404f100"
        Vary: Accept-Encoding
        X-Pad: avoid browser bug

    HTTP headers WITH random query string value (http://localhost/script.js?c947344de8278053f6edbb4365550b25):

        GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://localhost/
        Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 20:14:40 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
        Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT
        Etag: "3440e9-119ed-485621404f100"
        Accept-Ranges: bytes
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 24605
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: application/javascript
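
    The pattern in the first trace is a conditional GET: the browser sends the If-Modified-Since/If-None-Match validators, Apache answers 304 Not Modified with no body, and a bodiless response carries no Content-Type or Content-Encoding. A quick way to reproduce the behavior, as a sketch with the third-party requests library (URL from the question):

        import requests  # third-party; pip install requests

        url = "http://localhost/script.js"  # URL from the question

        first = requests.get(url)
        etag = first.headers["ETag"]

        # Revalidate: send the validator back and expect a 304 with no body.
        second = requests.get(url, headers={"If-None-Match": etag})
        print(second.status_code)                   # 304 if the cached copy is still valid
        print(second.headers.get("Content-Type"))   # typically absent on a 304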

  • Nginx fails upon proxying PUT requests

    - by PartlyCloudy
    Hi. I have an arbitrary web server that supports the full range of HTTP methods, including PUT for uploads. The server runs fine in all tests with different clients. I now wanted to set this server behind an nginx reverse proxy. However, each PUT request fails: the entity body is not forwarded to the backend web server. The header fields are sent, but not the body. I searched the nginx proxy documentation and found several hints that PUT might not be supported. But I also found people running svn/WebDAV stuff behind nginx, so it should work. Any ideas? Here is my config:

        server {
            listen 80;
            server_name my.domain.name;
            location / {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:8000;
            }
        }

        Client == HTTP PUT ==> Nginx == HTTP Proxy ==> Backend Server

    The error.log shows no entries concerning this behaviour. Thanks in advance!
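
    One way to narrow this down is a throwaway backend that just reports how many body bytes arrive, run in place of the real server on 127.0.0.1:8000; a sketch using only Python's standard library:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class EchoLengthHandler(BaseHTTPRequestHandler):
            def do_PUT(self):
                # Read exactly as many bytes as the request declared.
                length = int(self.headers.get("Content-Length", 0))
                body = self.rfile.read(length)
                print(f"PUT {self.path}: Content-Length={length}, received={len(body)} bytes")
                self.send_response(204)
                self.end_headers()

        # Same address as the proxy_pass target in the config above.
        HTTPServer(("127.0.0.1", 8000), EchoLengthHandler).serve_forever()

    If this backend sees the body when contacted directly but not through nginx, the proxy really is dropping it; if the body never has a Content-Length, the client is sending it chunked, which is a separate thing to check.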

  • No exception, no error, but still I don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code:

        final Thread t = new Thread() {
            public void run() {
                Looper.prepare();
                HttpClient client = new DefaultHttpClient();
                HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
                HttpResponse response;
                JSONObject obj = new JSONObject();
                try {
                    HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");
                    obj.put("Model", ReadIn1);
                    obj.put("Product", ReadIn2);
                    obj.put("Manufacturer", ReadIn3);
                    obj.put("RELEASE", ReadIn4);
                    obj.put("SERIAL", ReadIn5);
                    obj.put("ID", ReadIn6);
                    obj.put("ANDROID_ID", ReadIn7);
                    obj.put("Language", ReadIn8);
                    obj.put("BOARD", ReadIn9);
                    obj.put("BOOTLOADER", ReadIn10);
                    obj.put("BRAND", ReadIn11);
                    obj.put("CPU_API", ReadIn12);
                    obj.put("DISPLAY", ReadIn13);
                    obj.put("FINGERPRINT", ReadIn14);
                    obj.put("HARDWARE", ReadIn15);
                    obj.put("UUID", ReadIn16);
                    StringEntity se = new StringEntity(obj.toString());
                    se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                    post.setEntity(se);
                    post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp");
                    response = client.execute(post);
                    if (response != null) {
                        InputStream in = response.getEntity().getContent();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Looper.loop();
            }
        };
        t.start();

    I want to send a JSON object to a website. As far as I can see, I set the header, but I still get this exception; can someone help me? (I'm using Android Studio.)

    Edit: I don't get any exceptions anymore, but I still don't receive the JSON packet. When I call the website manually, I get a log-file entry. Does anyone know what's wrong?

    Edit 2: When I debug, I get "HTTP/1.1 400 bad request" as the response. I'm sure it's not a permission problem. Any ideas?
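
    For comparison, the same POST is easy to reproduce outside the app; a sketch with Python's third-party requests library (the payload is an illustrative subset of the fields). One difference from the Java above: requests derives the Host header from the URL, whereas the code above sets "host" to a full URL, a malformed value that many servers reject with 400 Bad Request.

        import json
        import requests  # third-party; pip install requests

        payload = {"Model": "X", "Product": "Y"}  # illustrative subset of the fields

        resp = requests.post(
            "http://pc.dyndns-office.com/mobile.asp",
            data=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            # Note: no explicit Host header; requests sends "pc.dyndns-office.com",
            # taken from the URL, as HTTP requires.
        )
        print(resp.status_code, resp.text[:200])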

  • WinSCP putting multiple files on sftp site

    - by NewToWinSCP
    WinSCP 5.2. I wanted to put multiple files with the .pgp extension on an SFTP site. When I tested my original command line (see below), it only placed the first alphabetical *.pgp file (D:\a.csv.pgp) on the SFTP site. I tried specifying *.PGP and *.pgp without any change: only one file (D:\a.csv.pgp) would be copied each time. I got it to work for all files only by specifying a put command for each .pgp file. Any ideas on how to put all *.pgp files on the SFTP site?

    Original command line, which does not work:

        d:\winscp\winscp /command "option echo off" "option batch on" "option confirm off" "open sftp" "put D:\*.pgp" "close" "exit"

    Works:

        d:\winscp\winscp /command "option echo off" "option batch on" "option confirm off" "open sftp" "put D:\a.csv.pgp" "put D:\b.csv.pgp" "put D:\c.csv.pgp" "put D:\d.csv.pgp" "put D:\e.csv.pgp" "put D:\f.csv.pgp" "put D:\g.csv.pgp" "put D:\h.csv.pgp" "put D:\i.csv.pgp" "close" "exit"

  • Http handler for classic ASP application for introducing a layer between client and server

    - by JPReddy
    I've a huge classic ASP application in which thousands of users manage their company/business data. Currently this is not multi-user, in the sense that application users cannot create users of their own and authorize them to access certain areas in the system. I'm thinking of writing a handler which will act as a middleman between client and server, go through every request, and find out who the user is and whether he is authorized to access the data he is trying to reach. For the moment, ignore the part about how I'm going to check the authorization and all that stuff. I just want to know whether I can implement an ASP.NET handler and use it as a middleman for the requests coming to a classic ASP website. I just want to read the URL, see which page the user is trying to access, and what parameters he is passing in the URL and the posted data. Is this possible? I've read that an ASP.NET handler cannot be used with a classic ASP website and that I need an ISAPI filter or extension for that, which can be developed only in C/C++. Can anybody throw some light on this and tell me whether I'm going in the right direction?

  • http request terminating early

    - by spiderplant0
    I noticed that on some of my sites, images were occasionally not being downloaded fully. After a bit of investigation it appears that the problem is not restricted to images: .css, .js, and other files were also occasionally terminating early. The faults appear to be random. The debug/proxy tool Fiddler2 reports that fewer bytes were received than were requested, and Firebug reports "Image corrupt or truncated". Obviously this is mainly a concern between me and my hosting company, but despite many emails they have not been able to get to the bottom of it. Transferring to another hosting company is an option, but really a last resort. Has anyone seen this kind of thing before, or can anyone suggest what might be causing it, or any Apache setting I can ask them to check? Will Apache log this kind of error? They haven't been able to provide me with any logs, but if I know exactly where these things are logged, maybe I can prompt them into action.

  • HTTP PHP Authentication and Android

    - by edc598
    I am working on a website which I hope to have an application for as well. Because of this, I am creating PHP APIs which will go into my database and serve specific data based on the method/function called. I want to protect these APIs from misuse, and I plan on implementing digest authentication to do so. However, one of the OSes I want to support is Android, and I know that a malicious user would be able to reverse-engineer the Android app and figure out my authentication scheme. I am left wondering:

    1. Is there a better way to protect these APIs from misuse?
    2. Is there a way to prevent a malicious user from reverse-engineering the app and potentially seeing its source code, enabling them to see my authentication scheme?
    3. If neither is preventable, is my only option to have a username/password credential specifically for the Android app and, when it is eventually hacked, change the credentials and issue an update for the app?

    I apologize if this is not the place to post such a question; I'm still pretty new to Stack Overflow. Thanks in advance for any insight, it would be quite helpful.

  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of only 20+ of them back. I've generated a few files of 150 kB each to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output:

        Requesting ID 1 ...
        Requesting ID 2 ...
        Requesting ID 3 ...
        Requesting ID 4 ...
        Requesting ID 5 ...
        Requesting ID 6 ...
        Requesting ID 7 ...
        Requesting ID 8 ...
        Requesting ID 9 ...
        Requesting ID 10 ...
        Requesting ID 11 ...
        Requesting ID 12 ...
        Requesting ID 13 ...
        Requesting ID 14 ...
        Requesting ID 15 ...
        Requesting ID 16 ...
        Requesting ID 17 ...
        Requesting ID 18 ...
        Requesting ID 19 ...
        Requesting ID 20 ...
        Requesting ID 21 ...
        Requesting ID 22 ...
        Requesting ID 23 ...
        Requesting ID 24 ...
        Requesting ID 25 ...
        Requesting ID 26 ...
        Requesting ID 27 ...
        Requesting ID 28 ...
        Requesting ID 29 ...
        Requesting ID 30 ...
        Requesting ID 31 ...
        Requesting ID 32 ...
        Requesting ID 33 ...
        Requesting ID 34 ...
        Requesting ID 35 ...
        Requesting ID 36 ...
        Requesting ID 37 ...
        Requesting ID 38 ...
        Requesting ID 39 ...
        Requesting ID 40 ...
        Requesting ID 41 ...
        Requesting ID 42 ...
        Requesting ID 43 ...
        Requesting ID 44 ...
        Requesting ID 45 ...
        Requesting ID 46 ...
        Requesting ID 47 ...
        Requesting ID 48 ...
        Requesting ID 49 ...
        Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive message only from worker with specific reference
            receive
                {Ref, done} -> done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
                    %% send message that work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat request if there was error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch entire files (each file is 150,000 bytes); the crawler fetches some files only partially, see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(

  • Send files ending in .mp4 in Apache with HTTP 206 Partial Content

    - by Pacha
    I am using Apache as my web server, and the return code is always HTTP/1.1 200. I want to set up some kind of handler, or use a mod, to return HTTP/1.1 206 when the extension of the requested file is .mp4, so that video seeking works. My web server is already returning some headers for seeking, but it doesn't work. Is this possible? The HTTP headers:

        http://*hidden*/media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4

        GET /media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4 HTTP/1.1
        Host: *hidden*
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Referer: http://vjs.zencdn.net/4.6/video-js.swf
        Cookie: csrftoken=zXngwwS1S827g7aAJYbHJS3ajn5BGq9M; sessionid=uj1hlj00c85aoehw0n5fye8waggb7uod
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 21 Aug 2014 15:04:46 GMT
        Server: Apache/2.2.22 (Debian)
        X-Mod-H264-Streaming: version=2.2.7
        Content-Length: 2148905782
        Last-Modified: Wed, 13 Aug 2014 11:36:46 GMT
        Etag: "8e002a-8015b345-5008133ff23c4;-2146061514"
        Accept-Ranges: bytes
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: video/mp4
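
    Note that a server only answers 206 to a request that carries a Range header; for a plain GET of the whole file, as in the trace above, 200 is the correct status even when seeking is supported, and the Accept-Ranges: bytes header already advertises range support. A quick way to test whether ranges are honored, as a sketch with the third-party requests library (the URL is the anonymized one from the question):

        import requests  # third-party; pip install requests

        url = "http://hidden/media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4"

        # stream=True so a fallback 200 answer doesn't download the whole 2 GB file.
        resp = requests.get(url, headers={"Range": "bytes=0-1023"}, stream=True)
        print(resp.status_code)                    # 206 if range requests are honored
        print(resp.headers.get("Content-Range"))   # e.g. "bytes 0-1023/2148905782"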

  • Problems with Update Manager

    - by user65965
    Whenever I try to update with Update Manager I get the following errors:

        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-updates/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-backports/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-security/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/source/Sources 404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Any help would be much appreciated, thank you.

    Thank you very much, Eliah. I'm still pretty new to Ubuntu. Here's the output I got from the terminal:

        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu precise main restricted commercial
        deb-src http://archive.ubuntu.com/ubuntu precise restricted main commercial multiverse universe #Added by software-properties
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted commercial
        deb-src http://archive.ubuntu.com/ubuntu precise-updates restricted main commercial multiverse universe #Added by software-properties
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise universe
        deb http://archive.ubuntu.com/ubuntu precise-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu precise multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse commercial
        deb-src http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse commercial #Added by software-properties
        deb http://archive.ubuntu.com/ubuntu precise-security main restricted commercial
        deb-src http://archive.ubuntu.com/ubuntu precise-security restricted main commercial multiverse universe #Added by software-properties
        deb http://archive.ubuntu.com/ubuntu precise-security universe
        deb http://archive.ubuntu.com/ubuntu precise-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu oneiric partner
        deb-src http://archive.canonical.com/ubuntu precise partner
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu precise main
        deb-src http://extras.ubuntu.com/ubuntu precise main
        ## This is a 3rd party script to install and update Oracle Java
        deb http://www.duinsoft.nl/pkg debs all
        ## Sun-Java6-JRE
        deb http://security.ubuntu.com/ubuntu hardy-security main multiverse

        ** /etc/apt/sources.list.d/askubuntu-tools-ppa-precise.list:
        deb http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/askubuntu-tools-ppa-precise.list.save:
        deb http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list:
        deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list.distUpgrade:
        deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu oneiric main
        deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu oneiric main

        ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list.save:
        deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/getdeb.list:
        # deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/getdeb.list.distUpgrade:
        deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps

        ** /etc/apt/sources.list.d/getdeb.list.save:
        # deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list:
        deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list.distUpgrade:
        deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu oneiric main
        deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu oneiric main

        ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list.save:
        deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/iefremov-ppa-precise.list:
        deb http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/iefremov-ppa-precise.list.save:
        deb http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/jockey.list:
        deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/jockey.list.distUpgrade:
        deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree

        ** /etc/apt/sources.list.d/jockey.list.save:
        deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/plexydesk-plexydesk-dailybuild-precise.list:
        deb http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main
        deb-src http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main

        ** /etc/apt/sources.list.d/plexydesk-plexydesk-dailybuild-precise.list.save:
        deb http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main
        deb-src http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main

        ** /etc/apt/sources.list.d/precise-partner.list:
        deb http://archive.canonical.com/ubuntu precise partner #Added by software-center

        ** /etc/apt/sources.list.d/precise-partner.list.save:
        deb http://archive.canonical.com/ubuntu precise partner #Added by software-center

        ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list:
        # deb https://justin-dormandy:[email protected]/commercial-ppa-uploaders/crossover-pro/ubuntu precise main #Added by software-center disabled on upgrade to precise

        ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.distUpgrade:
        cat: /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.distUpgrade: Permission denied

        ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.save:
        cat: /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.save: Permission denied

        ** /etc/apt/sources.list.d/screenlets-ppa-precise.list:
        deb http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/screenlets-ppa-precise.list.save:
        deb http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/webupd8team-java-precise.list:
        deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main
        deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main

        ** /etc/apt/sources.list.d/webupd8team-java-precise.list.save:
        deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main
        deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main

  • REST and redirecting the response

    - by Duane Gran
    I'm developing a RESTful service. Here is a map of the current feature set:

        POST   /api/document/file.jpg   (creates the resource)
        GET    /api/document/file.jpg   (retrieves the resource)
        DELETE /api/document/file.jpg   (removes the resource)

    So far, it does everything you might expect. I have a particular use case where I need to set up the browser to send a POST request using the multipart/form-data encoding for the document upload, but when it is completed I want to redirect them back to the form. I know how to do a redirect, but I'm not certain about how the client and server should negotiate this behavior. Two approaches I'm considering:

    1. On the server, check for the multipart/form-data encoding and, if present, redirect to the referrer when the request is complete.
    2. Add a service URI of /api/document/file.jpg/redirect to redirect to the referrer when the request is complete.

    I looked into setting an X header (X-myapp-redirect), but you can't tell the browser which headers to use like this. I manage the code for both the client and the server side, so I'm flexible on solutions here. Is there a best practice to follow here?
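
    For reference, the redirect mechanics are the same under either approach; a minimal framework-free sketch of a handler that answers a form upload with 303 See Other back to the Referer (names and the parsing shortcut are illustrative):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class UploadHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Consume the multipart body (actual parsing omitted for brevity).
                length = int(self.headers.get("Content-Length", 0))
                self.rfile.read(length)
                # 303 tells the browser to follow up with a GET, which is the
                # standard way to send a form poster back to the form page.
                self.send_response(303)
                self.send_header("Location", self.headers.get("Referer", "/"))
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8080), UploadHandler).serve_forever()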

  • HTTP events? Is there a standard / precedent for this?

    - by user619818
    Our architecture is HTTP servers (custom written) whereby custom clients send an HTTP request for some information, and the information is returned, just as HTTP normally works. But we need a special custom 'extension': a request which is a subscription for receiving asynchronous 'events' on a resource. For example, the client sends an HTTP request subscribing to events on some entity. As the entity generates events, they are passed to the HTTP server, and the HTTP server must then look up the subscriptions for that entity and send the event message to all subscribed clients. Hope that makes sense. So my questions are:

    1. Has this been done before, or is there a standard I should be looking at?
    2. If there is no standard, any suggestions on how to implement it?
    3. How does an HTTP server send an unsolicited 'message' to a client?
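
    One common shape for question 3 is long polling, where the client's GET simply doesn't complete until an event arrives; a sketch (the queue stands in for whatever generates entity events, and the wiring is illustrative, not a recommendation for the custom servers):

        import queue
        from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

        events = queue.Queue()  # filled by whatever produces events for the entity

        class SubscribeHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Long poll: block until an event exists, then complete the request.
                # The client immediately re-issues the GET to keep listening.
                event = events.get()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(event.encode())

        if __name__ == "__main__":
            ThreadingHTTPServer(("", 8080), SubscribeHandler).serve_forever()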

  • What is best practice for search engines when a website is under maintenance?

    - by jamescridland
    I need around a week to transition a heavily data-driven website from one back end to another. During that time I do plan to attempt to keep some pages live, but they won't all work well or look brilliant, and some pages won't work at all. What is the best way to ensure I don't scare Google? Should I block everything in robots.txt, or mark everything that doesn't work as 503, or are there other things I should be considering?

  • Verifying that a user comes from a 'partner' site?

    - by matt_tm
    We're building a Drupal module that is going to be given to trusted 'corporate partners'. When a user clicks on a link, he should be redirected to our site as if he's a logged-in user. How should I verify that the user is indeed coming from that site? It doesn't look like HTTP_REFERER is enough, because it appears it can be faked. We are providing these partner sites with API keys. If I receive the API key as a POST value, sent over HTTPS, would that be a sufficient indicator that the user is a genuine partner-site user?
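
    A sketch of one common hardening step (purely illustrative, not something the module necessarily supports): rather than posting the raw API key, the partner site signs the hand-off with it, so the key itself never leaves either server:

        import hashlib
        import hmac
        import time

        API_KEY = b"partner-secret"  # the key issued to the partner site (placeholder)

        def make_token(user_id: str) -> str:
            # Partner side: sign the user id plus a timestamp with the shared key.
            ts = str(int(time.time()))
            sig = hmac.new(API_KEY, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
            return f"{user_id}:{ts}:{sig}"

        def verify_token(token: str, max_age: int = 300) -> bool:
            # Receiving side: recompute the signature and check freshness.
            user_id, ts, sig = token.split(":")
            expected = hmac.new(API_KEY, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected) and time.time() - int(ts) < max_age

        token = make_token("alice")
        print(verify_token(token))  # True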

  • HaProxy - Http and SSL pass through config

    - by Bill
    I've currently got an HAProxy LB solution in place and everything is working fine. However, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL): they can browse our site over HTTP, but as soon as they click on an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HAProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with cURL:

        foreach ($urls as $url) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt($ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt($ch, CURLOPT_HEADER, true);
            curl_setopt($ch, CURLOPT_NOBODY, true);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302, or 307, which I consider good as well. But other times I receive weird statuses such as 406, 500, or 504, which should indicate an invalid URL, yet when I open the same URL in the browser it is fine. For example, the script returns:

        http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable

    while wget returns:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26-- http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or adding in excess?

  • Why Wireshark does not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl using the CGI module, and it specifies only the most basic headers:

        print $q->header(
            -type           => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP; it's marked as TCP. Here are the request and the response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No. Time     Source      srcp Destination dstp  Protocol Info                               tcp.stream abstime
        5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now when I see this in Wireshark:

    - there's the usual TCP handshake
    - then the GET request, shown as HTTP with a preview
    - then the next packet contains the response, but it is not marked as an HTTP response, just a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter

    Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?

  • Uploading to another domain gives HTTP code 405

    - by dragon112
    I'm trying to upload a file (which can be quite large) from the website of one server to the backend of another server using plupload. Let's say:

        domain 1 = http://www.websitedomain.com/uploadform
        domain 2 = http://www.backenddomain.com/uploadhandler

    Trying to upload, I send the following:

        OPTIONS /main/uploadnetwork.php HTTP/1.1
        Host: backenddomain.com
        Connection: keep-alive
        Access-Control-Request-Method: POST
        Origin: http://www.websitedomain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Access-Control-Request-Headers: origin, content-type
        Accept: */*
        Referer: http://www.websitedomain.com/uploadform
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        DNT: 1

    But when I try to start the upload, the server returns the following:

        HTTP/1.1 405 Method Not Allowed
        Allow: GET, HEAD, OPTIONS, TRACE
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        X-Powered-By-Plesk: PleskWin
        Date: Mon, 01 Oct 2012 12:41:57 GMT
        Content-Length: 999

    After doing some research I found out that a browser does this to check whether the server will accept the intended message. It looks like my server doesn't feel like accepting a simple POST call, even though I use POST all the time. The Google Chrome console gives the following error:

        XMLHttpRequest cannot load http://www.backenddomain.com/uploadhandler.
        Origin http://www.websitedomain.com is not allowed by Access-Control-Allow-Origin.

    Does anyone know how to stop the browser from checking, or how I can tell my server to just accept the POST?
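
    The OPTIONS request above is a CORS preflight: the browser asks the backend domain for permission before sending the cross-origin POST, and nothing on the backend is answering it. A sketch of the kind of response the preflight needs (Python standard library; the allowed origin and headers are taken from the question's trace, the rest is illustrative):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        ORIGIN = "http://www.websitedomain.com"  # the uploading site from the question

        class CorsHandler(BaseHTTPRequestHandler):
            def do_OPTIONS(self):
                # Answer the preflight so the browser will go on to send the POST.
                self.send_response(200)
                self.send_header("Access-Control-Allow-Origin", ORIGIN)
                self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
                self.send_header("Access-Control-Allow-Headers", "Content-Type")
                self.end_headers()

            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                self.rfile.read(length)  # upload handling omitted
                # The actual response must also name the origin, or the browser
                # will still hide the result from the page's script.
                self.send_response(200)
                self.send_header("Access-Control-Allow-Origin", ORIGIN)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8080), CorsHandler).serve_forever()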

  • Is it possible to implement X-HTTP-Method-Override in ASP.NET MVC?

    - by Greg Beech
    I'm implementing a prototype of a RESTful API using ASP.NET MVC, and apart from the odd bug here and there I've achieved all the requirements I set out at the start, except for callers being able to use the X-HTTP-Method-Override custom header to override the HTTP method. What I'd like is that the following request...

        GET /someresource/123 HTTP/1.1
        X-HTTP-Method-Override: DELETE

    ...would be dispatched to my controller method that implements the DELETE functionality rather than the GET functionality for that action (assuming that there are multiple methods implementing the action, and that they are marked with different [AcceptVerbs] attributes). So, given the following two methods, I would like the above request to be dispatched to the second one:

        [ActionName("someresource")]
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GetSomeResource(int id) { /* ... */ }

        [ActionName("someresource")]
        [AcceptVerbs(HttpVerbs.Delete)]
        public ActionResult DeleteSomeResource(int id) { /* ... */ }

    Does anybody know if this is possible? And how much work would it be to do so...?
