Search Results

Search found 76777 results on 3072 pages for 'http method'.


  • alias_attribute and creating a method with the original attribute name causes a loop

    - by Tiago
    I'm trying to dynamically create a method chain on one attribute in my model. So far I have this function:

        def create_filtered_attribute(attribute_name)
          alias_attribute "#{attribute_name}_without_filter", attribute_name
          define_method "#{attribute_name}" do
            filter_words(self.send("#{attribute_name}_without_filter"))
          end
        end

    I receive a string with the attribute name, alias it to "#{attribute_name}_without_filter" (alias_method or alias_method_chain fails here, because the attribute isn't there when the class is created), and then I define a new method with the attribute name, where I filter its contents. But somehow, when I call "#{attribute_name}_without_filter", it calls my new method (I think because of the alias_attribute), and the program goes into infinite recursion. Can someone please enlighten me on this?
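
    The loop is consistent with alias_attribute generating a delegating alias: the generated "#{attribute_name}_without_filter" calls whatever method currently answers to the original name, which by then is the filtered override. Below is a minimal sketch of the same trap in plain Python (not Rails), showing the usual escape: bind the original method object before redefining it.

        # Minimal sketch (plain Python, not Rails) of the delegating-alias trap
        # and one escape: bind the original method object before redefining.
        def filter_words(text):            # hypothetical stand-in filter
            return text.replace("bad", "***")

        class Record:
            def __init__(self, value):
                self._value = value

            def value(self):
                return self._value

        _original = Record.value                 # a true copy of the method object
        Record.value_without_filter = _original  # alias bound to the copy
        Record.value = lambda self: filter_words(_original(self))

        r = Record("a bad word")
        print(r.value())                  # -> a *** word
        print(r.value_without_filter())   # -> a bad word
        # Had the alias been defined as "return self.value()" (a delegate, which
        # is what alias_attribute produces), the override would recurse forever.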

    Read the article

  • How to set a request body for the HTTP DELETE method

    - by Tushar Tarkas
    I am writing client code for a server's Delete API. The API specification requires data to be sent. I am using the HttpComponents v3.1 library for the client code. Using the HttpDelete class, I could not find a way to add request data to it. Is there a way to do so? Below is the code snippet:

        HttpDelete deleteReq = new HttpDelete(uriBuilder.toString());
        List<NameValuePair> postParams = new ArrayList<NameValuePair>();
        postParams.add(new BasicNameValuePair(RestConstants.POST_DATA_PARAM_NAME,
                postData.toString()));
        try {
            UrlEncodedFormEntity entity = new UrlEncodedFormEntity(postParams);
            entity.setContentEncoding(HTTP.UTF_8);
            //deleteReq.setEntity(entity); // There is no method setEntity()
            deleteReq.setHeader(RestConstants.CONTENT_TYPE_HEADER,
                    RestConstants.CONTENT_TYPE_HEADER_VAL);
        } catch (UnsupportedEncodingException e) {
            logger.error("UnsupportedEncodingException: " + e);
        }

    Thanks in advance.
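
    Two notes that may help frame answers: the HTTP protocol itself does not forbid an entity body on DELETE, and in HttpComponents 4.x the commonly cited workaround is to subclass HttpEntityEnclosingRequestBase so the request type gains setEntity(). As a language-neutral illustration of the protocol point only, here is a Python standard-library sketch (host, path and payload are hypothetical):

        # Protocol-level sketch only: an HTTP DELETE request carrying a body.
        # Host, path and payload are hypothetical placeholders.
        import http.client

        conn = http.client.HTTPConnection("api.example.com")
        conn.request("DELETE", "/resource/42",
                     body="postData=...",
                     headers={"Content-Type": "application/x-www-form-urlencoded"})
        print(conn.getresponse().status)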

    Read the article

  • Are your redirects HTTP compliant (absolute URI)?

    - by webbiedave
    In the Hypertext Transfer Protocol 1.1 spec, it states for the Location header:

        The field value consists of a single absolute URI.
        Location = "Location" ":" absoluteURI

    An example is:

        Location: http://www.w3.org/pub/WWW/People.html

    I have coded many an app that doesn't include the scheme and domain (i.e., /thankyou/), and every browser I've ever tested with redirects correctly. I'm wondering whether it's imperative to go back and change all my redirect code, or whether to just ignore this part of the spec. Have many of you produced code that doesn't comply, and will you go back and change it? Or will you just trust that clients will continue to resolve such URIs well into the future? (Going forward, I will certainly adhere to this.)
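
    Two things may help here. First, the requirement was later relaxed: RFC 7231 allows Location to carry a relative reference that the client resolves. Second, emitting an absolute URI is cheap if the redirect code resolves the target against the request's own scheme and host; a sketch, with a hypothetical helper name:

        # Sketch: resolve a relative redirect target into an absolute URI
        # before writing it into Location. Function name is hypothetical.
        from urllib.parse import urljoin

        def absolute_location(scheme, host, target):
            return urljoin(f"{scheme}://{host}/", target)

        print(absolute_location("http", "www.example.com", "/thankyou/"))
        # -> http://www.example.com/thankyou/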

    Read the article

  • yum update works but yum --security update fails to work in Fedora 12

    - by bobo
    I had already installed yum-security, and I was going to do an update by entering the following command:

        [root@localhost /]# yum update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Skipping security plugin, no data
        Setting up Update Process
        Resolving Dependencies
        Skipping security plugin, no data
        --> Running transaction check
        ---> Package eject.i686 0:2.1.5-17.fc12 set to be updated
        ---> Package glibc.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-common.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-devel.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-headers.i686 0:2.11.1-4 set to be updated
        ---> Package gnome-themes.noarch 0:2.28.1-3.fc12 set to be updated
        ---> Package gtk2.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package gtk2-immodule-xim.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package kernel-PAE.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAE-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAEdebug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-debug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-firmware.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package kernel-headers.i686 0:2.6.32.11-99.fc12 set to be updated
        ---> Package libnetfilter_conntrack.i686 0:0.0.101-1.fc12 set to be updated
        ---> Package media-player-info.noarch 0:5-1.fc12 set to be updated
        ---> Package nscd.i686 0:2.11.1-4 set to be updated
        ---> Package perf.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package rhythmbox.i686 0:0.12.6-5.fc12 set to be updated
        ---> Package sysvinit-tools.i686 0:2.87-3.dsf.fc12 set to be updated
        --> Finished Dependency Resolution
        --> Running transaction check
        ---> Package kernel-PAE.i686 0:2.6.31.12-174.2.3.fc12 set to be erased
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package                 Arch    Version                 Repository  Size
        ================================================================================
        Installing:
         kernel-PAE              i686    2.6.32.11-99.fc12       updates     20 M
         kernel-PAE-devel        i686    2.6.32.11-99.fc12       updates     6.2 M
         kernel-PAEdebug-devel   i686    2.6.32.11-99.fc12       updates     6.2 M
         kernel-debug-devel      i686    2.6.32.11-99.fc12       updates     6.2 M
         kernel-devel            i686    2.6.32.11-99.fc12       updates     6.1 M
        Updating:
         eject                   i686    2.1.5-17.fc12           updates     49 k
         glibc                   i686    2.11.1-4                updates     4.2 M
         glibc-common            i686    2.11.1-4                updates     14 M
         glibc-devel             i686    2.11.1-4                updates     953 k
         glibc-headers           i686    2.11.1-4                updates     590 k
         gnome-themes            noarch  2.28.1-3.fc12           updates     1.5 M
         gtk2                    i686    2.18.9-3.fc12           updates     3.2 M
         gtk2-immodule-xim       i686    2.18.9-3.fc12           updates     60 k
         kernel-firmware         noarch  2.6.32.11-99.fc12       updates     968 k
         kernel-headers          i686    2.6.32.11-99.fc12       updates     749 k
         libnetfilter_conntrack  i686    0.0.101-1.fc12          updates     37 k
         media-player-info       noarch  5-1.fc12                updates     32 k
         nscd                    i686    2.11.1-4                updates     189 k
         perf                    noarch  2.6.32.11-99.fc12       updates     79 k
         rhythmbox               i686    0.12.6-5.fc12           updates     4.0 M
         sysvinit-tools          i686    2.87-3.dsf.fc12         updates     58 k
        Removing:
         kernel-PAE              i686    2.6.31.12-174.2.3.fc12  @updates    72 M

        Transaction Summary
        ================================================================================
        Install       5 Package(s)
        Upgrade      16 Package(s)
        Remove        1 Package(s)
        Reinstall     0 Package(s)
        Downgrade     0 Package(s)

        Total download size: 75 M
        Is this ok [y/N]:

    But then I changed my mind: I decided to do a security-only update instead of a full update, so I entered the following command:

        [root@localhost /]# yum --security update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Setting up Update Process
        Resolving Dependencies
        Limiting packages to security relevant ones
        http://download.fedoraproject.org/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        ftp://ftp.chu.edu.tw/linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.kddilabs.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://www.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.twaren.net/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://c147.twaren.net/pub/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        Error: failure: repodata/updateinfo.xml.gz from updates: [Errno 256] No more mirrors to try. You could try using --skip-broken to work around the problem
        ^C[root@localhost /]#

    As can be seen in the output, when I run the yum --security update command it does show the "Limiting packages to security relevant ones" message, so it is aware of the option. But I don't know why it keeps reporting HTTP error 416. I searched Google and found the following description of the error, but it doesn't seem to help much:

        HTTP ERROR 416 - Requested Range Not Satisfiable
        A 416 status code indicates that the server was unable to fulfill the
        request. This may be, for example, because the client asked for the
        800th-900th bytes of a document, but the document was only 200 bytes long.

    The output suggests using the --skip-broken option; I tried it and the result is the same. I have already tested many times, and it just doesn't work whenever the --security option is used. What could be the possible cause of this problem?
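
    For context on the quoted description: a common cause of 416s from yum is a stale or partial cached copy of updateinfo.xml.gz, which makes the downloader send a Range request the mirror cannot satisfy; clearing the cache with yum clean metadata is often the first remedy suggested. The mechanism itself can be reproduced with any HTTP client; a minimal Python sketch (URL hypothetical):

        # Sketch of the 416 mechanism the quoted description explains: request
        # a byte range past the end of a resource and a compliant server
        # answers 416 Requested Range Not Satisfiable. URL is hypothetical.
        import urllib.error
        import urllib.request

        req = urllib.request.Request(
            "http://mirror.example/repodata/updateinfo.xml.gz",
            headers={"Range": "bytes=800-900"})
        try:
            urllib.request.urlopen(req)
        except urllib.error.HTTPError as e:
            print(e.code)  # 416 if the file is shorter than 800 bytes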

    Read the article

  • What's the easiest way to create an HTTP proxy which adds basic authentication to requests?

    - by joshdoe
    I am trying to use a service provided by a server which requires basic HTTP authentication; however, the application I am using does not support authentication. What I'd like to do is create a proxy that will let my auth-less application connect through the proxy (which adds the authentication information) to the server that requires authentication. I'm sure this can be done, but I'm overwhelmed by the number of proxies out there and couldn't find an answer on how to do this. Basically, it seems all I want is a proxy that serves this URL:

        http://username:password@remoteserver/path

    as this URL:

        http://proxyserver/path

    I can run it on Linux, and it's a plus if I can run it on Windows as well. Open source, or at least free, is a must. A big plus is if it's fairly straightforward to set up.
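
    For illustration, a minimal sketch of such a header-injecting proxy using only the Python standard library; the upstream address and credentials are placeholders, and a production setup would also need POST support, HTTPS, and error handling:

        # Minimal sketch: a local proxy that adds HTTP Basic credentials to
        # every GET it forwards. Upstream host and credentials are hypothetical.
        import base64
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import Request, urlopen

        UPSTREAM = "http://remoteserver"  # the server that demands auth
        TOKEN = base64.b64encode(b"username:password").decode()

        class AuthProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                req = Request(UPSTREAM + self.path,
                              headers={"Authorization": "Basic " + TOKEN})
                with urlopen(req) as upstream:
                    body = upstream.read()
                    self.send_response(upstream.status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8080), AuthProxy).serve_forever()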

    Read the article

  • Does the SPDY protocol eliminate the need for cookieless domains?

    - by Clint Pachl
    With plain HTTP, cookieless domains were an optimization to avoid unnecessarily sending cookie headers for page resources. However, the SPDY protocol compresses HTTP headers and in some cases eliminates unnecessary headers. My question then is, does SPDY make cookieless domains irrelevant? Furthermore, should the page source and all of its resources be hosted at the same domain in order to optimize a SPDY implementation?

    Read the article

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions, which is a list of objects that each represent a unique session. I don't use username and password to authenticate users; rather, I use a unique API key which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: http://localhost:1234/login?APIKey=abcdefghij. The server checks this API key against the database and, if it's valid, creates a new session in the bucket, issues a new cookie (a unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij. This is where my question comes in. Assuming the client end has its own method of cookie management, the client will receive this login response from the server and automatically save the cookies as necessary. Any future request from the client to the server will automatically send along these cookies, and the client side doesn't necessarily have to worry about setting these cookies when sending requests to the server. Right? PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it fit to ask on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know.
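
    That is indeed the standard contract of a cookie-aware HTTP client. Since the question's code is Delphi/Indy, here is the same round trip sketched neutrally with Python's standard cookie jar, reusing the login URL from the question (the second path is hypothetical):

        # Sketch of automatic cookie round-tripping: the jar stores Set-Cookie
        # from the login response and replays it on every later request.
        import http.cookiejar
        import urllib.request

        jar = http.cookiejar.CookieJar()
        opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(jar))

        # The server sets the session cookie in its login response:
        opener.open("http://localhost:1234/login?APIKey=abcdefghij")
        # ...and the cookie is attached automatically from here on:
        opener.open("http://localhost:1234/some/other/command")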

    Read the article

  • https:// search results appearing on Google for a purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine whether there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I did a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix; I will try to contact them to get them to correct this. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    replacing <my site> with my domain name. My big question, though, is this: I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// version of each relevant page and not the valid http:// version? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • How to indicate to a web server the language of a resource

    - by Nik M
    I'm writing an HTTP API for a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource that has Korean, Japanese, and Traditional Chinese representations, and sends Accept-Language: en, ja;q=0.7, should get the Japanese one. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations. But when each translator comes to provide these alternate language representations to the server, what's the correct way to tell the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find a way to do this elegantly. Content-Language is a response header, and none of the request headers seem to fit the bill. It seems my options are:

    1. Invent a new request header.
    2. Supply additional metadata in a multipart/related document.
    3. Provide the language as a parameter to the Content-Type of the request, like Content-Type: text/html;language=en.

    I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way I can think of, but even then it's decidedly non-standard to put my own parameters on a very well-established content type. Is there any by-the-book way of achieving this?
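
    For concreteness, here is what option 3 from the list above looks like on the wire, sketched with Python's standard library (URL and payload are hypothetical). It may also be worth noting that HTTP defines Content-Language as an entity header, so arguably it can accompany a request body just as Content-Type does:

        # Sketch of option 3: the translator PUTs the representation with the
        # language carried as a Content-Type parameter. URL is hypothetical.
        import urllib.request

        req = urllib.request.Request(
            "http://publish.example/articles/42",
            data="<html>...</html>".encode("utf-8"),
            method="PUT",
            headers={"Content-Type": "text/html;language=en"})
        urllib.request.urlopen(req)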

    Read the article

  • Calling Web Services with HTTP Basic Authentication from BPEL 10.1.3.4

    - by Ramkumar Menon
    Are you using BPEL 10.1.3.4 and hunting for the property names in the partnerLinkBindings that will work for outbound HTTP Basic Authentication? Here's the answer:

        <partnerLinkBinding ...>
          <property name="basicHeaders">credentials</property>
          <property name="basicUsername">WhoAmI</property>
          <property name="basicPassword">thatsASecret</property>
        </partnerLinkBinding>

    The drop-down options in JDeveloper don't seem to work.

    Read the article

  • Google Chrome: HTTP authentication issue in an iframe

    - by Daniel Dzussa
    I have an HTML file with two links (http://versionplus.in/pass/new.html); both links load their content inside an iframe. I have two protected directories: one on the same server and one on another server. Clicking either link should pop up the login box, and that's what happens for both links in all browsers except Google Chrome. Chrome doesn't show the login box for the protected folder on the other server. How can I fix this?

    Read the article

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've had several interactions in the past couple of months with people who still open Internet Explorer, try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a subdomain in cPanel, hoping I could simply drop in an .htaccess file with the error page, but I got this error:

        ftp.mydomain.com domainadmin-domainexistsglobal

    I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of Stack Overflow. Thanks!

    Read the article

  • Parse an HTTP request Authorization header with Python

    - by Kris Walker
    I need to take a header like this:

        Authorization: Digest qop="chap", realm="[email protected]", username="Foobear", response="6629fae49393a05397450978507c4ef1", cnonce="5ccc069c403ebaf9f0171e9517f40e41"

    and parse it into this using Python:

        {'protocol': 'Digest', 'qop': 'chap', 'realm': '[email protected]', 'username': 'Foobear', 'response': '6629fae49393a05397450978507c4ef1', 'cnonce': '5ccc069c403ebaf9f0171e9517f40e41'}

    Is there a library to do this, or something I could look at for inspiration? I'm doing this on Google App Engine, and I'm not sure if the Pyparsing library is available, but maybe I could include it with my app if it is the best solution. Currently I'm creating my own MyHeaderParser object and using it with reduce() on the header string. It's working, but very fragile. Brilliant solution by nadia below:

        import re
        reg = re.compile('(\w+)[=] ?"?(\w+)"?')
        s = """Digest realm="stackoverflow.com", username="kixx" """
        print str(dict(reg.findall(s)))
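
    As a footnote on the fragility point: the standard library ships two helpers, used internally for digest auth, that cope with quoted, comma-separated key=value pairs. A Python 3 sketch (the realm value here is a made-up example, not the one from the question):

        # Sturdier sketch using stdlib helpers for quoted key=value lists.
        from urllib.request import parse_http_list, parse_keqv_list

        header = ('Digest qop="chap", realm="testrealm@example.com", '
                  'username="Foobear", response="6629fae49393a05397450978507c4ef1"')
        scheme, _, rest = header.partition(" ")
        params = parse_keqv_list(parse_http_list(rest))  # handles quotes/commas
        params["protocol"] = scheme
        print(params)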

    Read the article

  • Finding an HTTP proxy that will intercept static resource requests

    - by pkh
    Background

    I develop a web application that lives on an embedded device. In order to keep dev times sane, frontend development is done using Apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource.

    Problem

    I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device. Ideally, I want a software package that will do this for me (for Windows, or buildable on Cygwin). I can deal with forcing Apache to do it with PHP, but I'm unsure how to configure that. I've looked at Squid and Privoxy, but neither of them seems to do what I want. Any ideas? I'd rather not have to roll my own.
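
    If rolling your own did turn out to be acceptable, the inverted logic is small enough to sketch with Python's standard library alone; the document root and device address are placeholders, and error paths are elided:

        # Sketch: serve a local static file when it exists, otherwise forward
        # the request to the embedded device. Paths/addresses are hypothetical.
        import os
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        DOCROOT = "./static"
        DEVICE = "http://192.168.0.50"  # the embedded device

        class FallbackProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                local = os.path.join(DOCROOT, self.path.lstrip("/"))
                if os.path.isfile(local):
                    with open(local, "rb") as f:   # static hit: serve the file
                        body = f.read()
                    status = 200
                else:                              # miss: ask the device
                    with urlopen(DEVICE + self.path) as upstream:
                        body = upstream.read()
                        status = upstream.status
                self.send_response(status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8000), FallbackProxy).serve_forever()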

    Read the article

  • Ideal HTTP cache control headers for different types of resources

    - by chris_l
    I want to find a minimal set of headers that work with "all" caches and browsers (also when using HTTPS!). On my (GWT-based) web site, I'll have three kinds of resources:

    1. Forever cacheable (public / equal for all users): These files never change, and they get a filename based on the MD5 of their contents (this is GWT's approach). They should get cached as much as possible, even when using HTTPS (so I assume I should set Cache-Control: public, especially for Firefox?).

    2. Changing for every new version of the site (public / equal for all users): These files can be cached, but probably need to be revalidated every time.

    3. Individual for each request (private / user-specific): These resources (e.g. JSON responses) should never be cached unencrypted to disk under any circumstances. (Maybe I'll have a few specific requests that could be cached.)

    I have a general idea of which headers I would probably use for each type, but there's always something I could be missing.
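
    For discussion's sake, one plausible starting point for the three classes, written out as a small mapping; the exact directives are assumptions to test against real caches and browsers, not a guaranteed-safe set:

        # Hedged sketch: one plausible Cache-Control value per resource class.
        CACHE_HEADERS = {
            "immutable": "public, max-age=31536000",            # 1. MD5-named files
            "versioned": "public, max-age=0, must-revalidate",  # 2. per-release files
            "private":   "private, no-store",                   # 3. user-specific JSON
        }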

    Read the article

  • How should clients handle HTTP 401 with unknown authentication schemes?

    - by user113215
    What is the proper behavior for an HTTP client receiving a 401 Unauthorized response that specifies only unrecognized authentication schemes? My server supports Kerberos authentication using WWW-Authenticate: Negotiate. On the first request, the server sends a 401 Unauthorized response with a body containing an HTML document. The behavior that I expect is for clients that support Kerberos to perform that authentication, and for other clients to simply display the HTML document (a login form). It seems that most of the "other clients" I've encountered do work this way, but a few do not. I haven't found anything that mandates any particular behavior in this situation. There's a brief mention in RFC 2617: HTTP Authentication: Basic and Digest Access Authentication, but is there anything more concrete?

        It is possible that a server may want to require Digest as its
        authentication method, even if the server does not know that the
        client supports it. A client is encouraged to fail gracefully if
        the server specifies only authentication schemes it cannot handle.
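
    A sketch of what that "fail gracefully" clause can look like in client code, where an unrecognized scheme means falling back to rendering the body (here, the server's login form); the URL and the supported-scheme set are hypothetical:

        # Sketch: fall back to the response body when the offered
        # authentication scheme is unrecognised. Values are hypothetical.
        import urllib.error
        import urllib.request

        SUPPORTED = {"basic", "digest"}
        try:
            urllib.request.urlopen("http://intranet.example/app")
        except urllib.error.HTTPError as e:
            if e.code == 401:
                scheme = e.headers.get("WWW-Authenticate", "").split(" ", 1)[0]
                if scheme.lower() not in SUPPORTED:   # e.g. "Negotiate"
                    login_form = e.read()             # display this instead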

    Read the article

  • HTTP requests, using sprites and file sizes

    - by crazy sarah
    Hi all, I'm in the process of finding out all about sprites and how they can speed up your pages. I've used SpriteMe to create an overall sprite image which is 130 KB; this is made up of 14 images with a combined total size of about 65 KB. So is it better to have one HTTP request and a file size of 130 KB, or 14 requests for a total of 65 KB? Also, there is a detailed image which has been put into the sprite and caused its size to go up by about 60 KB; as a separate JPG image it was only 30 KB. Would I be better off keeping it separate and suffering the additional request?
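
    As a rough way to reason about the trade-off, a back-of-envelope sketch under assumed numbers (100 ms of per-request overhead, 100 KB/s of throughput, requests fetched one after another; real browsers parallelise requests, so treat this as an upper bound on the request cost):

        # Back-of-envelope sketch; rtt and rate are assumptions, not measurements.
        rtt, rate = 0.100, 100.0        # seconds per request, KB per second
        sprite = 1 * rtt + 130 / rate   # one 130 KB request
        single = 14 * rtt + 65 / rate   # fourteen requests, 65 KB in total
        print(f"sprite: {sprite:.2f}s, separate: {single:.2f}s")
        # -> sprite: 1.40s, separate: 2.05s  (per-request overhead dominates)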

    Read the article

  • How can I use a Windows 2003 server as an HTTP proxy?

    - by Will
    I'd like to set up an HTTP proxy on a Windows 2003 server so that I can access blocked websites such as YouTube from behind a corporate firewall (DAMN THE MAN!). I've never done this before, so I'm not even sure if the picture I have in my head is valid or possible. I'm stuck behind a firewall that blocks sites I need to access occasionally, blocked because of abuse by slackers. I've got a Windows 2003 server hosted out on the internet (i.e., outside of this odious firewall). I know I can configure my browser to use a proxy for my HTTP traffic, so why not use my server? What I'd like to know is:

    1. Is my concept valid? Can this be done, and will it work?
    2. How do I configure my server to act as a proxy? What applications might I have to install? Free is fine, but don't leave out commercial software.

    TIA

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these files are already used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a .js file that is used very frequently. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
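
    The exchange being described is close to what HTTP's conditional requests already provide, with an opaque validator (an ETag) standing in for the hash. A sketch of the round trip (URL and token are hypothetical):

        # Sketch of a conditional GET: send the validator of the cached copy;
        # 304 means "unchanged, reuse it". URL and ETag are hypothetical.
        import urllib.error
        import urllib.request

        req = urllib.request.Request(
            "http://cdn.example/jquery.js",
            headers={"If-None-Match": '"abc123"'})  # validator of cached copy
        try:
            body = urllib.request.urlopen(req).read()  # 200: changed, re-download
        except urllib.error.HTTPError as e:
            if e.code == 304:
                pass  # Not Modified: serve the cached copy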

    Read the article

  • Should HTTP Verbs Be Used Semantically?

    - by Xophmeister
    If I'm making a web application which integrates with a server-side backend, would it be considered best practice to use HTTP methods semantically? That is, for example, if I'm fetching data (e.g., to populate a menu, etc.), I would use GET, but to update data (e.g., save a record), I would use POST. (I realise there are other methods that may be even more appropriate, but we need to consider browser support.) I can see the benefits of this in the sense that it's effectively a RESTful API, but at a slightly increased development cost. In my previous projects, I've POST'd everything: Is it worth switching to a RESTful mindset simply for the sake of best practice?

    Read the article

  • Parser: send an argument, receive XML (receive already done, send not)

    - by bruno
    public List<Afood> getFoodFromCat(String cat) {
            String resultado = "";
            List<Afood> list = new ArrayList<Afood>();
            try {
                // Open the connection and fetch the XML from the web service
                URL xpto = new URL("http://10.0.2.2/webservice/nutrituga/get_food_by_cat.php");
                HttpURLConnection conn = (HttpURLConnection) xpto.openConnection();
                conn.setDoInput(true);
                conn.connect();
                InputStream is = conn.getInputStream();
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                try {
                    DocumentBuilder db = dbf.newDocumentBuilder();
                    Document doc = db.parse(is);
                    NodeList nl = doc.getElementsByTagName("item");
                    // resultado = String.valueOf(nl.getLength());
                    // Collect the NAME_FOOD value of every <item> element
                    for (int i = 0; i < nl.getLength(); i++) {
                        Node n = nl.item(i);
                        Node childNode = n.getFirstChild();
                        while (childNode != null) {
                            if (childNode.getNodeType() == Node.ELEMENT_NODE) {
                                if (childNode.getNodeName().equalsIgnoreCase("NAME_FOOD")) {
                                    Node valor = childNode.getFirstChild();
                                    // resultado = resultado + valor.getNodeValue();
                                    list.add(new Afood(valor.getNodeValue(), "",
                                            (int) Math.round(Math.random()), 1, 1, 1, 1, 1, 1));
                                }
                            }
                            childNode = childNode.getNextSibling();
                        }
                    }
                    return list;
                } catch (ParserConfigurationException e1) {
                    e1.printStackTrace();
                } catch (SAXException e1) {
                    e1.printStackTrace();
                } catch (IOException e1) {
                    e1.printStackTrace();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            return list;
        }

    I have this function that receives XML and copies it into the list; that part is well implemented. What I want to know is how to send a category (which I receive as an argument of the function) and receive only the food from that category. The server is ready to receive the category and to send back the food for that category. What do I have to do to send the category and receive the correct XML?
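
    At the HTTP level, the category usually travels as a query-string parameter appended to the URL the code already opens; sketched here in Python for brevity, the same one-line change applies to the Java URL above. The parameter name "cat" is hypothetical and must match what get_food_by_cat.php expects:

        # Sketch: pass the category as a query parameter (name hypothetical).
        from urllib.parse import urlencode
        from urllib.request import urlopen

        category = "fruit"  # the value the function receives as its argument
        url = ("http://10.0.2.2/webservice/nutrituga/get_food_by_cat.php?"
               + urlencode({"cat": category}))
        xml_bytes = urlopen(url).read()  # then parse exactly as before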

    Read the article

  • When to use HTTP status code 404

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour, I decided to see what people on Stack Exchange might say. We're writing an API for a system; there is a query that should return a tree of Organizations or a tree of Goals. The Organization tree is the organization in which the user is present; in other words, this tree should always exist. In the organization, a Goals tree should always be present. (That's where the argument started.) In the case where the tree doesn't exist, my co-worker decided that it would be right to answer with status code 200, and then started asking me to fix my code because the application was falling apart when there is no tree. I'll try to spare the flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing was found. It sounds totally fair to mark the response as a 404. And then war started, and I got the message that I didn't understand the HTTP status code schema. So I'm here asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200." I believe that it's wrong, since the tree should always be present. If we found nothing and we are expecting something, it should be a 404.

    Extra: I also believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present.

    Read the article

  • Suggested HTTP REST status code for 'request limit reached'

    - by Andras Zoltan
    I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code 403 - Forbidden was the one, but this, from the spec, bothered me: "Authorization will not help and the request SHOULD NOT be repeated." It actually appears that 503 - Service Unavailable is a better one to use, since it allows for the communication of a retry time through the Retry-After header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if client code 402 - Payment Required had been finalized!), but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered?
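
    For what it's worth, RFC 6585 has since defined 429 Too Many Requests for exactly this situation, and it also carries Retry-After. Where 429 isn't an option, the 503 route discussed above is small to implement; a minimal sketch (handler name and retry value are hypothetical):

        # Minimal sketch of the 503 + Retry-After option (values hypothetical).
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class ThrottledHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(503)                 # or 429, per RFC 6585
                self.send_header("Retry-After", "120")  # seconds until reset
                self.end_headers()

        HTTPServer(("127.0.0.1", 8080), ThrottledHandler).serve_forever()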

    Read the article
