Search Results

Search found 51988 results on 2080 pages for 'http headers'.

Page 7/2080 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • HTTP status code for "success with errors"?

    - by Richard Levasseur
    I've poked around a bit, but I don't see an HTTP status code for when a request succeeds but there is an error after the "point of no return". E.g., say you process a request and it's committed to the database, but while returning the result you run out of memory, or encounter an NPE, or what have you. It would have been a 200 response, but now, internally, you aren't able to return the proper, well-formed response. 202 Accepted doesn't seem to fit since we've already processed the request. What status code means "Success, but errors"? Does one even exist?
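
    A minimal, self-contained Python sketch of the situation being described (all names hypothetical): the database commit succeeds, and the failure only happens while the response body is being produced, so the status code has to be chosen after the point of no return.

        # Sketch only: the "commit" is durable, the failure is in rendering.
        def commit_to_database(store, payload):
            store.append(payload)            # stand-in for the durable write
            return len(store) - 1            # record id

        def render_response(record_id):
            raise MemoryError("ran out of memory while serialising the response")

        def handle_request(store, payload):
            record_id = commit_to_database(store, payload)   # point of no return
            try:
                body = render_response(record_id)
            except Exception as exc:
                # The work is done, but a well-formed success body cannot be
                # produced; which status code belongs here is exactly the question.
                return 500, "stored as record %d, but: %s" % (record_id, exc)
            return 200, body

        print(handle_request([], {"name": "example"}))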

    Read the article

  • Choosing http status code for unknown command reply

    - by w0rldart
    So, I'm writing a small test that I have been required to complete, and I just want to give it some final touches by adding some header status code responses and some other stuff. Right now, my dilemma is which HTTP status code to choose for my "Unknown command" response after $_GET['cmd'] has been compared to the existing commands list. case 404: $text = 'Not Found'; break; case 405: $text = 'Method Not Allowed'; break; case 406: $text = 'Not Acceptable'; break; Which of the above should I go for? And if none of them, which other?

    Read the article

  • How to set RequestBody for Http Delete method.

    - by Tushar Tarkas
    I am writing client code for a server's Delete API. The API specification requires data to be sent. I am using the HttpComponents v3.1 library for writing the client code. Using the HttpDelete class I could not find a way to add request data to it. Is there a way to do so? Below is the code snippet. HttpDelete deleteReq = new HttpDelete(uriBuilder.toString()); List<NameValuePair> postParams = new ArrayList<NameValuePair>(); postParams.add(new BasicNameValuePair(RestConstants.POST_DATA_PARAM_NAME, postData.toString())); try { UrlEncodedFormEntity entity = new UrlEncodedFormEntity(postParams); entity.setContentEncoding(HTTP.UTF_8); //deleteReq.setEntity(entity); // There is no method setEntity() deleteReq.setHeader(RestConstants.CONTENT_TYPE_HEADER, RestConstants.CONTENT_TYPE_HEADER_VAL); } catch (UnsupportedEncodingException e) { logger.error("UnsupportedEncodingException: " + e); } Thanks in advance.
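
    The question is about Apache HttpComponents in Java, but as a hedged illustration of the same idea (a DELETE request carrying a form-encoded body), here is a standard-library Python sketch. The URL and field names are placeholders, and whether a server honours a body on DELETE is entirely up to that server.

        # Sketch: a DELETE request with a form-encoded body, stdlib only.
        import urllib.parse
        import urllib.request

        url = "http://example.com/resource/42"              # placeholder URL
        body = urllib.parse.urlencode({"data": "payload"}).encode("utf-8")

        req = urllib.request.Request(url, data=body, method="DELETE")
        req.add_header("Content-Type", "application/x-www-form-urlencoded")

        # with urllib.request.urlopen(req) as resp:          # uncomment to send
        #     print(resp.status, resp.read())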

    Read the article

  • Are your redirects HTTP compliant (absolute URI)?

    - by webbiedave
    In the Hypertext Transfer Protocol 1.1 spec, it states for the Location header: The field value consists of a single absolute URI. Location = "Location" ":" absoluteURI An example is: Location: http://www.w3.org/pub/WWW/People.html I have coded many an app that doesn't include the scheme and domain (i.e., /thankyou/) and every browser I've ever tested with redirects correctly. I'm wondering if it's imperative to go back and change all my redirect code or just ignore this part of the spec. Have many of you produced code that doesn't comply and will you go back and change it? Or will you just trust that clients will continue to resolve them well into the future? (going forward I will certainly adhere to this)
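
    As a hedged sketch of one way to satisfy the letter of the spec without rewriting every call site, a relative redirect target can be turned into an absolute URI from the request's scheme and Host header just before the Location header is written (Python purely for illustration; note also that later HTTP revisions such as RFC 7231 do permit relative references in Location):

        # Sketch: build an absolute Location value from scheme + Host + target.
        from urllib.parse import urljoin

        def absolute_location(scheme, host, target):
            # ("http", "www.example.org", "/thankyou/") -> "http://www.example.org/thankyou/"
            return urljoin(f"{scheme}://{host}/", target)

        print(absolute_location("http", "www.example.org", "/thankyou/"))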

    Read the article

  • yum update works but yum --security update fails to work in Fedora 12

    - by bobo
    I had already installed the yum-security before. And I was going to do an update by entering the following command: [root@localhost /]# yum update Loaded plugins: presto, priorities, refresh-packagekit, security Skipping security plugin, no data Setting up Update Process Resolving Dependencies Skipping security plugin, no data --> Running transaction check ---> Package eject.i686 0:2.1.5-17.fc12 set to be updated ---> Package glibc.i686 0:2.11.1-4 set to be updated ---> Package glibc-common.i686 0:2.11.1-4 set to be updated ---> Package glibc-devel.i686 0:2.11.1-4 set to be updated ---> Package glibc-headers.i686 0:2.11.1-4 set to be updated ---> Package gnome-themes.noarch 0:2.28.1-3.fc12 set to be updated ---> Package gtk2.i686 0:2.18.9-3.fc12 set to be updated ---> Package gtk2-immodule-xim.i686 0:2.18.9-3.fc12 set to be updated ---> Package kernel-PAE.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-PAE-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-PAEdebug-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-debug-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-firmware.noarch 0:2.6.32.11-99.fc12 set to be updated ---> Package kernel-headers.i686 0:2.6.32.11-99.fc12 set to be updated ---> Package libnetfilter_conntrack.i686 0:0.0.101-1.fc12 set to be updated ---> Package media-player-info.noarch 0:5-1.fc12 set to be updated ---> Package nscd.i686 0:2.11.1-4 set to be updated ---> Package perf.noarch 0:2.6.32.11-99.fc12 set to be updated ---> Package rhythmbox.i686 0:0.12.6-5.fc12 set to be updated ---> Package sysvinit-tools.i686 0:2.87-3.dsf.fc12 set to be updated --> Finished Dependency Resolution --> Running transaction check ---> Package kernel-PAE.i686 0:2.6.31.12-174.2.3.fc12 set to be erased --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: kernel-PAE i686 2.6.32.11-99.fc12 updates 20 M kernel-PAE-devel i686 2.6.32.11-99.fc12 updates 6.2 M kernel-PAEdebug-devel i686 2.6.32.11-99.fc12 updates 6.2 M kernel-debug-devel i686 2.6.32.11-99.fc12 updates 6.2 M kernel-devel i686 2.6.32.11-99.fc12 updates 6.1 M Updating: eject i686 2.1.5-17.fc12 updates 49 k glibc i686 2.11.1-4 updates 4.2 M glibc-common i686 2.11.1-4 updates 14 M glibc-devel i686 2.11.1-4 updates 953 k glibc-headers i686 2.11.1-4 updates 590 k gnome-themes noarch 2.28.1-3.fc12 updates 1.5 M gtk2 i686 2.18.9-3.fc12 updates 3.2 M gtk2-immodule-xim i686 2.18.9-3.fc12 updates 60 k kernel-firmware noarch 2.6.32.11-99.fc12 updates 968 k kernel-headers i686 2.6.32.11-99.fc12 updates 749 k libnetfilter_conntrack i686 0.0.101-1.fc12 updates 37 k media-player-info noarch 5-1.fc12 updates 32 k nscd i686 2.11.1-4 updates 189 k perf noarch 2.6.32.11-99.fc12 updates 79 k rhythmbox i686 0.12.6-5.fc12 updates 4.0 M sysvinit-tools i686 2.87-3.dsf.fc12 updates 58 k Removing: kernel-PAE i686 2.6.31.12-174.2.3.fc12 @updates 72 M Transaction Summary ================================================================================ Install 5 Package(s) Upgrade 16 Package(s) Remove 1 Package(s) Reinstall 0 Package(s) Downgrade 0 Package(s) Total download size: 75 M Is this ok [y/N]: But then I changed my mind, I decided to do a security-only update instead of a full update, so 
I entered the following command: [root@localhost /]# yum --security update Loaded plugins: presto, priorities, refresh-packagekit, security Setting up Update Process Resolving Dependencies Limiting packages to security relevant ones http://download.fedoraproject.org/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. ftp://ftp.chu.edu.tw/linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno -1] Metadata file does not match checksum Trying other mirror. http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.kddilabs.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://www.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. 
http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.twaren.net/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://c147.twaren.net/pub/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. Error: failure: repodata/updateinfo.xml.gz from updates: [Errno 256] No more mirrors to try. You could try using --skip-broken to work around the problem ^C[root@localhost /]#

As can be seen in the output, when I run the yum --security update command it does show the "Limiting packages to security relevant ones" message, so it is aware of the option. But I don't know why it keeps reporting HTTP error 416. I searched Google and found the following description of the error, but it doesn't seem to help much: "HTTP ERROR 416 - Requested Range Not Satisfiable. A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long." The output suggests using the --skip-broken option; I tried it and the output is the same. I have already tested many times, and it just doesn't work when the --security option is used. What could be the possible cause of this problem?

    Read the article

  • estimating size of reply HTTP headers

    - by Guanidene
    I am trying to download a file via a basic socket connection, using an HTTP GET request. So I have to specify how many bytes of incoming data I have to read from the socket. However, I am having trouble determining the amount of data (in bytes) the server will reply with. I know that there is a "Content-Length" field in the reply sent by the server, but that gives me the size of the actual data (without the HTTP headers). Is there a way to get the exact size of the HTTP headers sent by the server, or is an estimate required? (I am doing this for downloading on a mobile network, where every bit of data matters in terms of time and money, so I don't wish to make an unnecessarily large estimate of the header size.)
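
    The header size generally can't be known ahead of time, but it doesn't have to be estimated either: the headers can be read byte by byte up to the blank line that terminates them, and only then does Content-Length govern how much body to read. A hedged, stdlib-only Python sketch of that approach (host and path are placeholders; chunked responses are ignored for brevity):

        # Sketch: measure the exact header size, then read Content-Length body bytes.
        import socket

        def fetch(host, path="/", port=80):
            sock = socket.create_connection((host, port))
            request = "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % (path, host)
            sock.sendall(request.encode("ascii"))

            header_bytes = b""
            while not header_bytes.endswith(b"\r\n\r\n"):
                chunk = sock.recv(1)              # one byte at a time keeps it simple
                if not chunk:
                    break
                header_bytes += chunk

            content_length = 0
            for line in header_bytes.decode("iso-8859-1").split("\r\n"):
                if line.lower().startswith("content-length:"):
                    content_length = int(line.split(":", 1)[1].strip())

            body = b""
            while len(body) < content_length:
                chunk = sock.recv(content_length - len(body))
                if not chunk:
                    break
                body += chunk
            sock.close()
            return len(header_bytes), body

        # header_size, body = fetch("example.com")   # uncomment to try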

    Read the article

  • What's the easiest way to create an HTTP proxy which adds basic authentication to requests?

    - by joshdoe
    I am trying to use a service provided by a server which requires basic HTTP authentication; however, the application I am using does not support authentication. What I'd like to do is create a proxy that will enable my auth-less application to connect via the proxy (which will add the authentication information) to the server requiring authentication. I'm sure this can be done; however, I'm overwhelmed by the number of proxies out there and couldn't find an answer on how to do this. Basically, it seems all I want to do is have a proxy serve this URL: http://username:password@remoteserver/path as this URL: http://proxyserver/path I can run it on Linux, but it's a plus if I can run it on Windows as well. Open source or at least free is a must. A big plus is if it's fairly straightforward to set up.
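
    For what it's worth, the forwarding itself is small enough to sketch. Below is a hedged, stdlib-only Python illustration of a local proxy that forwards GET requests to a fixed upstream and injects the Authorization header on the way through (upstream URL, credentials and port are placeholders; only GET is handled):

        # Sketch: a local HTTP server that adds Basic auth and forwards to one upstream.
        import base64
        import urllib.error
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        UPSTREAM = "http://remoteserver"                  # placeholder upstream
        CREDENTIALS = base64.b64encode(b"username:password").decode("ascii")

        class AuthProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                req = urllib.request.Request(UPSTREAM + self.path)
                req.add_header("Authorization", "Basic " + CREDENTIALS)
                try:
                    with urllib.request.urlopen(req) as upstream:
                        body = upstream.read()
                        status = upstream.status
                except urllib.error.HTTPError as err:
                    self.send_error(err.code)
                    return
                self.send_response(status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        # HTTPServer(("127.0.0.1", 8080), AuthProxy).serve_forever()  # uncomment to run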

    Read the article

  • gcc precompiled headers weird behaviour with -c option

    - by pachanga
    Folks, I'm using gcc-4.4.1 on Linux, and before trying precompiled headers in a really large project I decided to test them on a simple program. They "kinda work", but I'm not happy with the results and I'm sure there is something wrong with my setup. First of all, I wrote a simple program (main.cpp) to test if they work at all: #include <boost/bind.hpp> #include <boost/function.hpp> #include <boost/type_traits.hpp> int main() { return 0; } Then I created the precompiled headers file pre.h (in the same directory) as follows: #include <boost/bind.hpp> #include <boost/function.hpp> #include <boost/type_traits.hpp> ...and compiled it: $ g++ -I. pre.h (pre.h.gch was created) After that I measured compile time with and without precompiled headers: with pch $ time g++ -I. -include pre.h main.cpp real 0m0.128s user 0m0.088s sys 0m0.048s without pch $ time g++ -I. main.cpp real 0m0.838s user 0m0.784s sys 0m0.056s So far so good! Almost 7 times faster, that's impressive! Now let's try something more realistic. All my sources are built with the -c option, and for some reason I can't make pch play nicely with it. You can reproduce this with the steps below... I created the test module foo.cpp as follows: #include <boost/bind.hpp> #include <boost/function.hpp> #include <boost/type_traits.hpp> int whatever() { return 0; } Here are the timings of my attempts to build the module foo.cpp with and without pch: with pch $ time g++ -I. -include pre.h -c foo.cpp real 0m0.357s user 0m0.348s sys 0m0.012s without pch $ time g++ -I. -c foo.cpp real 0m0.330s user 0m0.292s sys 0m0.044s That's quite strange; it looks like there is no speedup at all! (I ran the timings several times.) It turned out precompiled headers were not used at all in this case; I checked it with the -H option (the output of "g++ -I. -include pre.h -c foo.cpp -H" didn't list pre.h.gch at all). What am I doing wrong?

    Read the article

  • Accessing inbound HTTP headers in meteor?

    - by Alex K
    I'm working on an application that relies on data that the browser sends within the HTTP headers (and there's no way around this). This also happens to be my first time working with something node.js based, so it's very likely I'm completely missing something simple! Basically what I want to be able to do is call a method on the server from the client, and in that method read the HTTP headers that the client sent.

    Read the article

  • Reading HTTP headers from JAX-WS Web Service

    - by Anonimo
    Hi all, I currently have a JAX-WS Web Service that receives some credentials in the HTTP header. These are used for BASIC authentication. There is a filter that performs authentication by reading the HTTP headers and checking against the database. Still, I need the username from within the Web Service in order to perform other service logic related stuff. Is there a way of accessing the HTTP headers from within the Web Service? Thanks.

    Read the article

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP Servers / Clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions, which is a list of objects which each represent a unique session. I don't use username and password to authenticate users, but I rather use a unique API key which is issued to a client, and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: http://localhost:1234/login?APIKey=abcdefghij. The server checks this API Key against the database, and if it's valid, it creates a new session in the bucket, issues a new cookie (unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij. This is where I have the question. Assuming the client end has its own method of cookie management, the client will receive this login response back from the server and automatically save the cookies as necessary. Any future request from the client to the server shall automatically send along these cookies, and the client side doesn't have to necessarily worry about setting these cookies when sending requests to the server. Right? PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it fit to ask on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know.
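
    The question is about Indy/Delphi, but the cookie flow itself is client-agnostic, so here is a hedged illustration using Python's third-party requests library (the URLs and API key are the placeholders from the question): once the client has its own cookie management, the cookie issued by the login response is replayed on every later request without the caller doing anything cookie-specific, which matches the understanding described above.

        # Illustration: a client-side cookie jar replays the login cookie automatically.
        import requests

        session = requests.Session()                     # holds the cookie jar

        # Log in once; the server's Set-Cookie header lands in session.cookies.
        session.get("http://localhost:1234/login", params={"APIKey": "abcdefghij"})

        # Any later request through the same session sends those cookies back
        # without the caller setting anything by hand.
        response = session.get("http://localhost:1234/some/other/command")
        print(session.cookies.get_dict())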

    Read the article

  • https:// search results appearing on Google for purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I decided to do a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix - I will try to contact them to get them to correct this. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them and surely, without a certificate, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there): RewriteEngine On RewriteCond %{HTTPS} on RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L] replacing <my site> with my domain name. My big question is this, though - I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • How to indicate to a web server the language of a resource

    - by Nik M
    I'm writing an HTTP API to a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource which has Korean, Japanese and Trad. Chinese representations, and sends Accept-Language: en, ja;q=0.7, should get the Japanese. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations. But when each translator comes to provide these alternate language representations to the server, what's the correct way to instruct the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find out how to do this elegantly. Content-Language is a response header, and none of the request headers seem to fit the bill. It seems my options are: (1) invent a new request header; (2) supply additional metadata in a multipart/related document; or (3) provide language as a parameter to the Content-Type of the request, like Content-Type: text/html;language=en. I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way that I can think of, but even then it's decidedly non-standard to put my own specific parameters on a very well-established content type. Is there any by-the-book way of achieving this?

    Read the article

  • Calling Web Services with HTTP Basic Authentication from BPEL 10.1.3.4

    - by Ramkumar Menon
    Are you using BPEL 10.1.3.4 and hunting for the property names in the partnerlinkBindings that will work for outbound HTTP Basic Authentication? Here's the answer: <partnerLinkBinding ...> <property name="basicHeaders">credentials</property> <property name="basicUsername">WhoAmI</property> <property name="basicPassword">thatsASecret</property> </partnerLinkBinding> The drop-down options in JDeveloper don't seem to work.

    Read the article

  • google chrome : http authentication issue on iframe

    - by Daniel Dzussa
    I have an HTML file with 2 links (http://versionplus.in/pass/new.html); both links are in an iframe so that the contents load inside the iframe. I have two protected directories, one on the same server and the other on another server. If you click on either link it will pop up the login box, the same for both links in all browsers except Google Chrome. Google Chrome doesn't show the login box for the protected folder on the other server. How can I fix this?

    Read the article

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've experienced several interactions in the past couple months of people who still open Internet Explorer and try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in Cpanel hoping I could simply drop in an .htaccess file with the error page, but I got this error: ftp.mydomain.com domainadmin-domainexistsglobal I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow to help. Thanks!

    Read the article

  • Parse an HTTP request Authorization header with Python

    - by Kris Walker
    I need to take a header like this: Authorization: Digest qop="chap", realm="[email protected]", username="Foobear", response="6629fae49393a05397450978507c4ef1", cnonce="5ccc069c403ebaf9f0171e9517f40e41" And parse it into this using Python: {'protocol':'Digest', 'qop':'chap', 'realm':'[email protected]', 'username':'Foobear', 'response':'6629fae49393a05397450978507c4ef1', 'cnonce':'5ccc069c403ebaf9f0171e9517f40e41'} Is there a library to do this, or something I could look at for inspiration? I'm doing this on Google App Engine, and I'm not sure if the Pyparsing library is available, but maybe I could include it with my app if it is the best solution. Currently I'm creating my own MyHeaderParser object and using it with reduce() on the header string. It's working, but very fragile. Brilliant solution by nadia below: import re reg = re.compile('(\w+)[=] ?"?(\w+)"?') s = """Digest realm="stackoverflow.com", username="kixx" """ print str(dict(reg.findall(s)))
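
    One note on the regex in the excerpt: \w+ stops at characters like "." or "@", so a realm containing a hostname or e-mail-style value would be truncated. As a hedged sketch, the standard library's long-standing (if lightly documented) helpers urllib.request.parse_http_list and parse_keqv_list already handle the comma-separated, quoted key="value" form; the header value below is a placeholder, and the scheme name is split off first:

        # Sketch: parse a Digest Authorization header with stdlib helpers.
        from urllib.request import parse_http_list, parse_keqv_list

        header = ('Digest qop="chap", realm="testrealm@example.com", '
                  'username="Foobear", response="6629fae49393a05397450978507c4ef1", '
                  'cnonce="5ccc069c403ebaf9f0171e9517f40e41"')

        scheme, _, params = header.partition(" ")
        fields = parse_keqv_list(parse_http_list(params))   # splits on commas, unquotes
        fields["protocol"] = scheme
        print(fields)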

    Read the article

  • Finding an HTTP proxy that will intercept static resource requests

    - by pkh
    Background: I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource.

    Problem: I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device. Optimally, I want a software package that will do this for me (for Windows or buildable on cygwin). I can deal with forcing apache to do it with PHP, but I'm unsure how to configure it to make it happen. I've looked at squid and privoxy, but neither of them seems to do what I want. Any ideas? I'd rather not have to roll my own.
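
    In case it helps to see how little logic the inverted behaviour needs, here is a hedged, stdlib-only Python sketch (static directory, device address and port are placeholders; only GET is handled and there is no error handling): serve the file if it exists locally, otherwise forward the request to the embedded device.

        # Sketch: serve from ./static when the file exists, otherwise proxy it.
        import os
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        STATIC_ROOT = "static"                   # hypothetical local docs dir
        DEVICE = "http://192.168.0.50"           # hypothetical embedded device

        class StaticOrProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                local = os.path.join(STATIC_ROOT, self.path.lstrip("/"))
                if os.path.isfile(local):
                    with open(local, "rb") as f:
                        body = f.read()
                    status = 200
                else:
                    with urllib.request.urlopen(DEVICE + self.path) as upstream:
                        body = upstream.read()
                        status = upstream.status
                self.send_response(status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        # HTTPServer(("127.0.0.1", 8000), StaticOrProxy).serve_forever()  # uncomment to run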

    Read the article

  • What headers do I want to send together with a 304 response?

    - by Willem
    When I send a 304 response, how will the browser interpret other headers that I send together with the 304? E.g. header("HTTP/1.1 304 Not Modified"); header("Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT"); Will this make sure the browser will not send another conditional GET request (nor any request) until the $offset time has "run out"? Also, what about other headers? Should I send headers like this together with the 304: header('Content-Type: text/html'); Do I have to send: header("Last-Modified:" . $modified); header('Etag: ' . $etag); to make sure the browser sends a conditional GET request the next time the $offset has "run out", or does it simply keep the old Last-Modified and ETag values? Are there other things I should be aware of when sending a 304 response?
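
    The question is asked in PHP; as a hedged, stdlib-only Python illustration of the same idea (the ETag value and max-age below are placeholders), RFC 7232 says a 304 should repeat the validator and any fresh caching headers (Cache-Control, ETag, Expires, Date, Vary) that the equivalent 200 would have carried, so the browser can extend the lifetime of its cached copy:

        # Sketch: answer conditional GETs, re-sending caching headers with the 304.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        ETAG = '"abc123"'        # placeholder validator
        MAX_AGE = 3600           # plays the role of $offset (seconds)

        class ConditionalHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b"<html>hello</html>"
                not_modified = self.headers.get("If-None-Match") == ETAG
                self.send_response(304 if not_modified else 200)
                # Repeat the validator and caching headers on both 200 and 304.
                self.send_header("ETag", ETAG)
                self.send_header("Cache-Control", "max-age=%d" % MAX_AGE)
                if not not_modified:
                    self.send_header("Content-Type", "text/html")
                    self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                if not not_modified:
                    self.wfile.write(body)    # a 304 carries no body

        # HTTPServer(("127.0.0.1", 8000), ConditionalHandler).serve_forever()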

    Read the article

  • How should clients handle HTTP 401 with unknown authentication schemes?

    - by user113215
    What is the proper behavior for an HTTP client receiving a 401 Unauthorized response that specifies only unrecognized authentication schemes? My server supports Kerberos authentication using WWW-Authenticate: Negotiate. On the first request, the server sends a 401 Unauthorized response with a body containing an HTML document. The behavior that I expect is for clients that support Kerberos to perform that authentication and for other clients to simply display the HTML document (a login form). It seems that most of the "other clients" I've encountered do work this way, but a few do not. I haven't found anything that mandates any particular behavior in this situation. There's a brief mention in RFC 2617 (HTTP Authentication: Basic and Digest Access Authentication), but is there anything more concrete? "It is possible that a server may want to require Digest as its authentication method, even if the server does not know that the client supports it. A client is encouraged to fail gracefully if the server specifies only authentication schemes it cannot handle."
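
    A hedged sketch of the client behaviour described above: inspect the WWW-Authenticate challenges and, if no offered scheme is recognised, fall back to rendering the 401 body (the HTML login form) instead of failing outright. Python stdlib only; the URL and supported-scheme set are placeholders.

        # Sketch: fall back to the 401 body when only unknown schemes are offered.
        import urllib.error
        import urllib.request

        SUPPORTED_SCHEMES = {"basic"}       # this hypothetical client knows only Basic

        def fetch(url):
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.read()
            except urllib.error.HTTPError as err:
                if err.code == 401:
                    challenges = err.headers.get_all("WWW-Authenticate") or []
                    schemes = {c.split()[0].lower() for c in challenges if c.strip()}
                    if not schemes & SUPPORTED_SCHEMES:
                        # e.g. only "Negotiate" was offered: show the login page body.
                        return err.read()
                raise

        # html = fetch("http://example.com/protected")   # uncomment to try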

    Read the article

  • http requests, using sprites and file sizes -

    - by crazy sarah
    Hi all, I'm in the process of finding out all about sprites and how they can speed up your pages. I've used spriteMe to create an overall sprite image which is 130 KB; this is made up of 14 images with a combined total size of about 65 KB. So is it better to have one HTTP request and a file size of 130 KB, or 14 requests for a total of 65 KB? Also, there is a detailed image which has been put into the sprite, which caused its size to go up by about 60 KB; this used to be a separate JPG image which was only 30 KB. Would I be better off keeping it separate and suffering the additional request?

    Read the article
