Search Results

Search found 127829 results on 5114 pages for 'http status code 403'.

Page 37/5114 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Connecting to a web server over HTTP, code snippet

    - by Emanuil
    I've got the following piece of code:

        try {
            HttpClient httpClient = new DefaultHttpClient();
            HttpPost httpPost = new HttpPost("http://www.flashstall.com/json.txt");
            HttpResponse httpResponse = httpClient.execute(httpPost);
        } catch (Exception e) {
            Log.e("m40", "Error in http connection " + e.toString());
        }

    When I run it, it logs "Error in http connection java.net.UnknownHostException: www.flashstall.com". What am I doing wrong?
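    Since Log.e suggests this is Android code, one likely cause (an assumption, since the question doesn't say) is a missing INTERNET permission, which commonly surfaces as UnknownHostException. The manifest entry to check:

        <!-- AndroidManifest.xml: required for any network access -->
        <uses-permission android:name="android.permission.INTERNET" />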

    Read the article

  • Should accessible members of an internal class be internal too?

    - by Jeff Mercado
    I'm designing a set of APIs for some applications I'm working on. I want to keep the code style consistent in all the classes I write, but I've found that there are a few inconsistencies I'm introducing, and I don't know the best way to resolve them. My example here is specific to C#, but this applies to any language with similar mechanisms.

    There are a few classes that I need for implementation purposes that I don't necessarily want to expose in the API, so I make them internal wherever needed. Generally I design the class as I normally would (e.g., make members public/protected/private where necessary) and change the visibility level of the class itself to internal. So I might have a few classes that look like this:

        internal interface IMyItem
        {
            ItemSet AddTo(ItemSet set);
        }

        internal class _SmallItem : IMyItem
        {
            private readonly /* parameters */;
            public _SmallItem(/* small item parameters */) { /* ... */ }
            public ItemSet AddTo(ItemSet set) { /* ... */ }
        }

        internal abstract class _CompositeItem : IMyItem
        {
            private readonly /* parameters */;
            public _CompositeItem(/* composite item parameters */) { /* ... */ }
            public abstract object UsefulInformation { get; }
            protected void HelperMethod(/* parameters */) { /* ... */ }
        }

        internal class _BigItem : _CompositeItem
        {
            private readonly /* parameters */;
            public _BigItem(/* big item parameters */) { /* ... */ }
            public override object UsefulInformation { get { /* ... */ } }
            public ItemSet AddTo(ItemSet set) { /* ... */ }
        }

    In another generated class (part of a parser/scanner), there is a structure that contains fields for all possible values it can represent. The generated class is internal too, but I have control over the visibility of the members and decided to make them internal as well:

        internal partial struct ValueType
        {
            internal string String;
            internal ItemSet ItemSet;
            internal IMyItem MyItem;
        }

        internal class TokenValue
        {
            internal static int EQ(ItemSetScanner scanner) { /* ... */ }
            internal static int NAME(ItemSetScanner scanner, string value) { /* ... */ }
            internal static int VALUE(ItemSetScanner scanner, string value) { /* ... */ }
            //...
        }

    To me this feels odd, because in the first set of classes I didn't necessarily have to make some members public; they could very well have been made internal. Internal members of an internal type can only be accessed internally anyway, so why make them public? I just don't like the idea that the way I write my classes has to change drastically (i.e., change all uses of public to internal) just because the class is internal. Any thoughts on what I should do here? It makes sense to me that I might want to make some members of a public class internal, but it's less clear to me when the class itself is declared internal.
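    One concrete constraint worth noting about the posted code (a small sketch I'm adding; _OtherItem is a hypothetical name for contrast): C# forces some of these choices anyway. An implicit interface implementation must be declared public even when the interface itself is internal, so _SmallItem.AddTo cannot simply be switched to internal; the alternative is an explicit implementation:

        internal class _SmallItem : IMyItem
        {
            // Implicit implementation: C# requires this to be public, even
            // though both _SmallItem and IMyItem are internal.
            public ItemSet AddTo(ItemSet set) { return set; }
        }

        internal class _OtherItem : IMyItem
        {
            // Explicit implementation: no access modifier at all; callable
            // only through an IMyItem reference.
            ItemSet IMyItem.AddTo(ItemSet set) { return set; }
        }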

    Read the article

  • How-To: AutoVue Bug Status Tracking & Email Notifications

    - by Graham McKendry
    I’ve posted a number of different Support process-related and tool-related blog entries over the past few years, and one common question I’ve received back from various customers and partners is “How can I easily track AutoVue bugs & enhancements for status updates?” The capability to track bug status through the My Oracle Support (MOS) portal has existed in different forms for a while, although it hasn’t necessarily been easy to find without going through specific segments of the extensive MOS training. Recently, the instructions were consolidated into the following highly recommended knowledge base article: KM Note 1298390.1 - How to Monitor a Bug from My Oracle Support. The note covers various capabilities, including:

    - How to add the new ‘Bug Tracker’ widget to your MOS dashboard
    - How to add and manage bugs within the Bug Tracker
    - and, probably most interesting to MOS users, how to enable email notifications for bug status updates

    Make sure to pass this KM Note along to your MOS users in case they haven’t already configured this valuable feature.

    Read the article

  • Erlang: HTTP Accept Header with Inets

    - by Ted Karmel
    I am trying to do the equivalent of the following curl command:

        curl -H "Accept: text/plain" http://127.0.0.1:8033/stats

    I tried it with a simple Inets HTTP request, but it isn't processed the same way. How can I specify the Accept header requirement in Inets (or in some other Erlang HTTP client, for that matter)?
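    A minimal sketch of one way to do this with the httpc module that ships with Inets (older OTP releases exposed the same call from the http module):

        %% Start inets, then pass the Accept header in the request tuple.
        inets:start(),
        {ok, {{_Version, 200, _Reason}, _Headers, Body}} =
            httpc:request(get,
                          {"http://127.0.0.1:8033/stats",
                           [{"accept", "text/plain"}]},
                          [], []).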

    Read the article

  • Translating from cURL to straight HTTP requests

    - by Joshua
    What would the following cURL command look like as a generic (without cURL) HTTP request?

        feedUri="https://www.someservice.com/feeds\
        ?prettyprint=true"

        curl $feedUri --silent \
            --header "GData-Version: 2"

    For example, how could such an HTTP request be expressed in the browser address bar? In particular, how do I express the --header information if I were to just type out the plain HTTP request?
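    For reference, the raw request that curl command produces looks roughly like this (a sketch; the exact header set varies by curl version). Note that a browser address bar can only supply the URL, not custom headers like GData-Version:

        GET /feeds?prettyprint=true HTTP/1.1
        Host: www.someservice.com
        GData-Version: 2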

    Read the article

  • Submit PDF form fields via an HTTP POST request

    - by Josjojo
    I've made a PDF form in Adobe Acrobat. Now I want to make a button that submits the form via an HTTP POST request. I have searched for about 4 hours but have not found an example of how to do this. Here I read that it is possible to send the PDF form fields with an HTTP submission, but there's no example given there either: http://acrobatusers.com/tutorials/form-submit-e-mail-demystified. I'm looking for a JavaScript example that I can link to the submit button.
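    A minimal sketch using Acrobat's JavaScript API, wired to the button's Mouse Up action (the URL is a placeholder to replace with your endpoint):

        // Submit the field values as a standard urlencoded HTTP POST,
        // the same format an HTML form would send.
        this.submitForm({
            cURL: "http://example.com/receive-form",  // hypothetical endpoint
            cSubmitAs: "HTML"
        });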

    Read the article

  • Reading HTTP headers from JAX-WS Web Service

    - by Anonimo
    Hi all, I currently have a JAX-WS web service that receives some credentials in the HTTP header. These are used for BASIC authentication. There is a filter that performs authentication by reading the HTTP headers and checking against the database. Still, I need the username from within the web service in order to perform other service logic. Is there a way of accessing the HTTP headers from within the web service? Thanks.
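    A sketch of one common approach: inject a WebServiceContext and read the request headers out of the MessageContext (class and method names here are illustrative):

        import java.util.List;
        import java.util.Map;
        import javax.annotation.Resource;
        import javax.jws.WebService;
        import javax.xml.ws.WebServiceContext;
        import javax.xml.ws.handler.MessageContext;

        @WebService
        public class AccountService {
            @Resource
            private WebServiceContext wsContext;

            public String currentAuthHeader() {
                MessageContext mc = wsContext.getMessageContext();
                @SuppressWarnings("unchecked")
                Map<String, List<String>> headers = (Map<String, List<String>>)
                        mc.get(MessageContext.HTTP_REQUEST_HEADERS);
                List<String> auth = headers.get("Authorization");
                // Decode the Basic credentials from here to recover the username.
                return (auth == null || auth.isEmpty()) ? null : auth.get(0);
            }
        }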

    Read the article

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have two applications (APP1 and APP2) working on two subdomains:

        app1.domain.local
        app2.domain.local

    and I am trying to run two consecutive HTTP GET requests from APP1 to APP2, like this:

    Code in APP1 (request):

        response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") )
        response2 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=second&id=1") )

    Code in APP2 (response):

        respond_to do |format|
          if <model_name>.find(params[:id]).<field_name> == "first"
            <model_name>.find(params[:id]).update_attribute( <field_name>, <field_value> )
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          elsif <model_name>.find(params[:id]).<field_name> == "second"
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          end
        end

    After the first request I get the correct XML (response1 is what I expect), but on the second I don't (response2 isn't what I expect). Doing some tests, I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching of <model_name>.find(params[:id]).<field_name>?

    P.S.: I read about ETag and Conditional GET, but I am not sure I need that approach. I would like to keep things simple.
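    A sketch of one way to rule out repeated-query effects: fetch the record once, mutate it, and render from the same object (model name, field name, and the "second" value are placeholders for the redacted ones):

        respond_to do |format|
          record = ModelName.find(params[:id])   # query once, reuse the object
          if record.status == "first"
            record.update_attribute(:status, "second")
            format.xml { render :xml => record.status }
          elsif record.status == "second"
            format.xml { render :xml => record.status }
          end
        end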

    Read the article

  • What is the proper way to handle a fully qualified domain in a GET request?

    - by Mark P Neyer
    I'm writing a proxy server. When I use curl to fetch a page, say http://www.foo.com/pants, curl makes the following request:

        GET /pants HTTP/1.1

    When I have curl send that request through my local proxy, curl changes the GET request to:

        GET http://www.foo.com/pants HTTP/1.1

    This change causes the foo.com server to return a 404. Is foo.com broken? Or is the fully qualified domain name only meaningful to proxy servers? Should I always strip http://domain from the requests I send out? Thanks!
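    For what it's worth, HTTP/1.1 (RFC 2616 §5.1.2) reserves the absolute-URI form for requests sent to proxies, and a proxy forwarding to the origin server conventionally rewrites it to the path-only form plus a Host header, roughly:

        # received from the client (proxy form)
        GET http://www.foo.com/pants HTTP/1.1

        # forwarded to the origin server (origin form)
        GET /pants HTTP/1.1
        Host: www.foo.com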

    Read the article

  • Make HTTP/1.1 request with PHP

    - by ejunker
    My code is using file_get_contents() to make GET requests to an API endpoint. It looks like it is using HTTP/1.0, and my sysadmin says I need to use HTTP/1.1. How can I make an HTTP/1.1 request? Do I need to use cURL, or is there a better/easier way?
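    A sketch of one way to stay with file_get_contents(): pass a stream context that forces HTTP/1.1 (the Connection: close header works around older PHP versions hanging on keep-alive connections; the URL is a placeholder):

        <?php
        $context = stream_context_create(array(
            'http' => array(
                'method'           => 'GET',
                'protocol_version' => 1.1,
                'header'           => "Connection: close\r\n",
            ),
        ));
        $response = file_get_contents('http://api.example.com/endpoint', false, $context);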

    Read the article

  • 403 Forbidden for web root on Apache on Mac OS X v10.7, but can access user directories

    - by philosophistry
    When I access http://localhost/ I get 403 Forbidden, but if I access http://localhost/~username it serves up pages. Things I've tried:

    - Checking error logs
    - Swapping out with original httpd conf files
    - Changing DocumentRoot to my user directory (after all, that should work if I can access ~username)

    I've seen 30-plus Q&A sites that all point to people having trouble with user directories being forbidden. I have the opposite problem, and so I'm tearing my hair out here.
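    One common culprit worth checking (an assumption, since the error logs aren't quoted): the Directory block for the document root denies access, or there is no index file and directory listings are disabled. On 10.7 the relevant stanza in /etc/apache2/httpd.conf looks like this (Apache 2.2 syntax):

        <Directory "/Library/WebServer/Documents">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>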

    Read the article

  • Help with HTTP Intercepting Proxy in Ruby?

    - by Philip
    I have the beginnings of an HTTP intercepting proxy written in Ruby:

        require 'socket'                  # Get sockets from stdlib

        server = TCPServer.open(8080)     # Socket to listen on port 8080
        loop {                            # Servers run forever
          Thread.start(server.accept) do |client|
            puts "** Got connection!"
            @output = ""
            @host = ""
            @port = 80
            while line = client.gets
              line.chomp!
              if (line =~ /^(GET|CONNECT) .*(\.com|\.net):(.*) (HTTP\/1.1|HTTP\/1.0)$/)
                @port = $3
              elsif (line =~ /^Host: (.*)$/ && @host == "")
                @host = $1
              end
              print line + "\n"
              @output += line + "\n"
              # This *may* cause problems with not getting full requests,
              # but without this, the loop never returns.
              break if line == ""
            end
            if (@host != "")
              puts "** Got host! (#{@host}:#{@port})"
              out = TCPSocket.open(@host, @port)
              puts "** Got destination!"
              out.print(@output)
              while line = out.gets
                line.chomp!
                if (line =~ /^<proxyinfo>.*<\/proxyinfo>$/)
                  # Logic is done here.
                end
                print line + "\n"
                client.print(line + "\n")
              end
              out.close
            end
            client.close
          end
        }

    This simple proxy parses the destination out of the HTTP request, then reads the HTTP response and performs logic based on special HTML tags. The proxy works for the most part, but seems to have trouble dealing with binary data and HTTPS connections. How can I fix these problems?
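    For the binary and HTTPS cases, the line-oriented gets loop is the likely obstacle: response bodies and CONNECT tunnels are raw byte streams, not lines. A minimal sketch of a CONNECT tunnel (names are illustrative; error handling trimmed):

        # After parsing "CONNECT host:port HTTP/1.1" from the client:
        def tunnel(client, host, port)
          upstream = TCPSocket.open(host, port.to_i)
          client.write("HTTP/1.1 200 Connection established\r\n\r\n")
          loop do
            ready, = IO.select([client, upstream])
            ready.each do |sock|
              data = sock.readpartial(4096)   # raw bytes, never lines
              (sock == client ? upstream : client).write(data)
            end
          end
        rescue EOFError, Errno::ECONNRESET
          client.close
          upstream.close if upstream
        end

    Note that once the tunnel is up the traffic is TLS-encrypted, so the <proxyinfo> tag inspection can't apply to HTTPS without acting as a TLS man-in-the-middle.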

    Read the article

  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview Report. The objective is to show test status for the current version while including user story status of the current and prior versions. Why? Because we don’t copy completed user stories into the next release. We only want one instance of a user story for the product, because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here’s how you do it:

    1. Download a copy of the report RDL file as a backup.

    2. Open the report by clicking the edit down arrow and selecting “Edit in Report Builder”.

    3. Right-click on the dsOverview dataset and select Dataset Properties.

    4. Update the following SQL per the comments in the code.

    Customization 1 of 3:

        ...
        -- Get the list of deliverable work items that have Test Cases linked
        DECLARE @TestCases Table (DeliverableID int, TestCaseID int);
        INSERT @TestCases
            SELECT h.ID, flh.TargetWorkItemID
            FROM @Hierarchy h
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = h.ID
                        AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK
                        AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                        AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi ON flh.TargetWorkItemID = wi.[System_ID]
                    AND wi.[System_WorkItemType] = @TestCase
                    AND wi.ProjectNodeGUID = @ProjectGuid
                    -- Customization 1 of 3: only include test status information when
                    -- test case area path = root. Added the following 2 statements.
                    AND wi.AreaPath = '{the root area path of the team project}'
        ...

    Customization 2 of 3:

        ...
        -- Get the Bugs linked to the deliverable work items directly
        DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int)
        INSERT @Bugs
            SELECT h.ID,
                SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
                SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
                SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
                SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
            FROM @Hierarchy h
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = h.ID
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi
                    ON wi.[System_WorkItemType] = @Bug
                    AND wi.[System_Id] = flh.TargetWorkItemID
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND wi.[ProjectNodeGUID] = @ProjectGuid
                    -- Customization 2 of 3: only include test status information when
                    -- test case area path = root. Added the following statement.
                    AND wi.AreaPath = '{the root area path of the team project}'
            GROUP BY h.ID
        ...

    Customization 3 of 3:

        ...
        -- Add the Bugs linked to the Test Cases which are linked to the deliverable
        -- work items. Walks the links from the user stories to test cases (via the
        -- tested-by link), and then to bugs that are linked to the test case. We
        -- don't need to join to the test case in the work item history view.
        --
        --    [WIT:User Story/Requirement] --> [Link:Tested By] --> [Link:any type] --> [WIT:Bug]
        INSERT @Bugs
            SELECT tc.DeliverableID,
                SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
                SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
                SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
                SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
            FROM @TestCases tc
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = tc.TestCaseID
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi
                    ON wi.[System_Id] = flh.TargetWorkItemID
                    AND wi.[System_WorkItemType] = @Bug
                    AND wi.[ProjectNodeGUID] = @ProjectGuid
                    -- Customization 3 of 3: only include test status information when
                    -- test case area path = root. Added the following statement.
                    AND wi.AreaPath = '{the root area path of the team project}'
            GROUP BY tc.DeliverableID
        ...

    5. Save the report and you’re all set.

    Note: you may need to re-apply custom parameter changes like pre-selected sprints.

    Read the article

  • PHP: Cookie only sent to http://www.xxx.com and NOT http://xxx.com

    - by Axel
    Hi, I have a PHP login which sets 2 cookies once someone logs in. The problem is that if you log in from http://www.mydomain.com and then go to http://mydomain.com you will find yourself not logged in. I think that's because the browser only sends the cookies to the first form of the address. It's only one domain; the difference is the www. before the domain name. So how do I set cookies for the whole domain, whether there is a www. or not? Thanks
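    A sketch of the usual fix: pass an explicit domain with a leading dot when setting the cookie, so the browser sends it to the bare domain and every subdomain (cookie name and lifetime here are illustrative):

        <?php
        // path '/' and domain '.mydomain.com' make the cookie site-wide
        setcookie('session_token', $value, time() + 86400, '/', '.mydomain.com');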

    Read the article

  • How to get HTTP status message in (py)curl?

    - by mykhal
    Having spent some time studying the pycurl and libcurl documentation, I still can't find a (simple) way to get the HTTP status message (reason phrase) in pycurl. The status code is easy:

        import pycurl
        import cStringIO

        curl = pycurl.Curl()
        buff = cStringIO.StringIO()
        curl.setopt(pycurl.URL, 'http://example.org')
        curl.setopt(pycurl.WRITEFUNCTION, buff.write)
        curl.perform()

        print "status code: %s" % curl.getinfo(pycurl.HTTP_CODE)  # -> 200
        # print "status message: %s" % ???                        # -> "OK"
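    One workaround (a sketch; libcurl doesn't expose the reason phrase as a getinfo value): capture the raw status line with HEADERFUNCTION and parse the phrase out of it:

        import pycurl
        import cStringIO

        status_line = []

        def header_cb(line):
            # The first header libcurl hands back is the status line,
            # e.g. "HTTP/1.1 200 OK"; keep only the first one seen.
            if line.startswith('HTTP/') and not status_line:
                status_line.append(line.strip())

        buff = cStringIO.StringIO()
        curl = pycurl.Curl()
        curl.setopt(pycurl.URL, 'http://example.org')
        curl.setopt(pycurl.WRITEFUNCTION, buff.write)
        curl.setopt(pycurl.HEADERFUNCTION, header_cb)
        curl.perform()

        # "HTTP/1.1 200 OK" -> everything after the status code
        print "status message: %s" % status_line[0].split(' ', 2)[2]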

    Read the article

  • PHP cURL HTTP 405 error with Mind Body Online API

    - by K_G
    I am trying to connect to the Mind Body Online API (their forum is down currently, so I came here). Below is the code I am using. I am receiving this:

        SERVER ERROR 405 - HTTP verb used to access this page is not allowed.
        The page you are looking for cannot be displayed because an invalid
        method (HTTP verb) was used to attempt to access.

    Code:

        //Data, connection, auth
        $soapUrl = "http://clients.mindbodyonline.com/api/0_5";

        // xml post structure
        $xml_post_string = '<?xml version="1.0" encoding="utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="http://clients.mindbodyonline.com/api/0_5">
           <soapenv:Header/>
           <soapenv:Body>
              <GetClasses>
                 <Request>
                    <SourceCredentials>
                       <SourceName>username</SourceName>
                       <Password>password</Password>
                       <SiteIDs>
                          <int>site id</int>
                       </SiteIDs>
                    </SourceCredentials>
                    <UserCredentials>
                       <Username>username</Username>
                       <Password>password</Password>
                       <SiteIDs>
                          <int></int>
                       </SiteIDs>
                    </UserCredentials>
                    <Fields>
                       <string>Classes.Resource</string>
                    </Fields>
                    <XMLDetail>Basic</XMLDetail>
                    <PageSize>10</PageSize>
                    <CurrentPageIndex>0</CurrentPageIndex>
                 </Request>
              </GetClasses>
           </soapenv:Body>
        </soapenv:Envelope>';

        $headers = array(
            "Content-type: text/xml;charset=\"utf-8\"",
            "Accept: text/xml",
            "Cache-Control: no-cache",
            "Pragma: no-cache",
            "SOAPAction: http://clients.mindbodyonline.com/api/0_5",
            "Content-length: ".strlen($xml_post_string),
        ); //SOAPAction: your op URL

        $url = $soapUrl;

        // PHP cURL for https connection with auth
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xml_post_string); // the SOAP request
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);

        $response = curl_exec($ch);
        curl_close($ch);
        var_dump($response);

        // converting
        //$response1 = str_replace("<soap:Body>","",$response);
        //$response2 = str_replace("</soap:Body>","",$response1);

        // converting to XML
        //$parser = simplexml_load_string($response2);
        // use $parser to get your data out of the XML response and display it.

    Any help would be great. This is my first time working with their API, so I'd love to hear from anyone who has experience with it. ;) Here is the post on Stack Overflow I am going off of: SOAP request in PHP
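    A 405 from an ASMX-style SOAP service typically means the POST hit a URL that doesn't accept POST (a documentation page, for example) rather than the service endpoint itself. Two things worth trying, with both values being assumptions to verify against the current Mind Body API docs and WSDL rather than confirmed endpoints:

        <?php
        // assumed service-specific .asmx endpoint, not the API root
        $soapUrl = "https://api.mindbodyonline.com/0_5/ClassService.asmx";

        // assumed operation-specific SOAPAction for GetClasses
        $soapAction = 'SOAPAction: "http://clients.mindbodyonline.com/api/0_5/GetClasses"';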

    Read the article

  • Uploading to YouTube via HTTP POST

    - by sajid.nizami
    I am following the steps provided at this link: http://code.google.com/apis/youtube/2.0/developers_guide_dotnet.html#Browser_based_Upload

    Whenever I try to upload anything using this method, I get an HTTP 400 error saying that the next_url is not provided. The code is pretty simple and is a copy of Google's own code.

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="BrowserUpload.aspx.cs" Inherits="BrowserUpload" %>
        <%@ Import Namespace="Google.YouTube" %>
        <%@ Import Namespace="Google.GData.Extensions.MediaRss" %>
        <%@ Import Namespace="Google.GData" %>
        <%@ Import Namespace="Google.GData.YouTube" %>
        <%@ Import Namespace="Google.GData.Client" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <script type="text/javascript">
                function checkForFile() {
                    if (document.getElementById('file').value) {
                        return true;
                    }
                    document.getElementById('errMsg').style.display = '';
                    return false;
                }
            </script>
        </head>
        <body>
            <%
                YouTubeRequestSettings settings = new YouTubeRequestSettings("Danat", "API-KEY", "loginid", "password");
                YouTubeRequest request = new YouTubeRequest(settings);
                Video newVideo = new Video();
                newVideo.Title = "My Test Movie";
                newVideo.Tags.Add(new MediaCategory("Autos", YouTubeNameTable.CategorySchema));
                newVideo.Keywords = "cars, funny";
                newVideo.Description = "My description";
                newVideo.YouTubeEntry.Private = false;
                newVideo.Tags.Add(new MediaCategory("mydevtag, anotherdevtag", YouTubeNameTable.DeveloperTagSchema));
                FormUploadToken token = request.CreateFormUploadToken(newVideo);
            %>
            <form action="<%= token.Url %>?next_url=<%= Server.UrlEncode("http://www.danatev.com") %>"
                  name="PostToYoutube" method="post" enctype="multipart/form-data"
                  onsubmit="return checkForFile();">
                <input id="file" type="file" name="file" />
                <div id="errMsg" style="display: none; color: red">
                    You need to specify a file.
                </div>
                <input type="hidden" name="token" value="<%= token.Token %>" />
                <input type="submit" value="go" />
            </form>
        </body>
        </html>
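    One thing worth checking (an assumption based on the browser-based upload documentation, not verified against this exact error): the query parameter on the form's action URL is spelled nexturl, without an underscore:

        <form action="<%= token.Url %>?nexturl=<%= Server.UrlEncode("http://www.danatev.com") %>"
              name="PostToYoutube" method="post" enctype="multipart/form-data"
              onsubmit="return checkForFile();">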

    Read the article

  • ActiveMQ 5.2.0 + REST + HTTP POST = java.lang.OutOfMemoryError

    - by Bruce Loth
    First off, I am a newbie when it comes to JMS & ActiveMQ. I have been looking into a messaging solution to serve as middleware for a message producer that will insert XML messages into a queue via HTTP POST. The producer is an existing system written in C++ that cannot be modified (so Java and the C++ API are out). Using the "demo" examples and some trial and error, I have cobbled together a working example of what I want to do (on a Windows box). The web.xml I configured in a test directory under "webapps" specifies that the HTTP POST messages received from the producer are to be handled by the MessageServlet. I added a line for the test app in "activemq.xml" ('ow' is the test app dir):

    I created a test script to "insert" messages into the queue, which works well. The problem I am running into is that as I continue to insert messages via REST/HTTP POST, the memory consumption and thread count used by ActiveMQ continue to rise (this happens whether I have timely consumers or slow/non-existent ones). When memory consumption gets to around 250 MB and the thread count exceeds 5000 (as shown in Windows Task Manager), ActiveMQ crashes and I see this in the log:

        Exception in thread "ActiveMQ Transport Initiator: vm://localhost#3564"
        java.lang.OutOfMemoryError: unable to create new native thread

    It is as if Jetty is spawning a new thread to handle each HTTP POST and the thread never dies. I did look at this page: http://activemq.apache.org/javalangoutofmemory.html and tried the suggestions, but that didn't fix the problem (although I didn't fully understand the implications of the changes either). Does anyone have any ideas? Thanks!

    Bruce Loth

    PS - I included the "test message producer" Python script below for what it is worth. I created batches of 100 messages and continued to run the script manually from the command line while watching the memory consumption and thread count of ActiveMQ in Task Manager.

        def foo():
            import httplib, urllib
            body = "<?xml version='1.0' encoding='UTF-8'?>\n \
                    <ROOT>\n \
                    [snip: xml deleted to save space] </ROOT>"
            headers = {"content-type": "text/xml",
                       "content-length": str(len(body))}
            conn = httplib.HTTPConnection("127.0.0.1:8161")
            conn.request("POST", "/ow/message/RDRCP_Inbox?type=queue", body, headers)
            response = conn.getresponse()
            print response.status, response.reason
            data = response.read()
            conn.close()
        ## end method definition

        ## Begin test code
        count = 0
        while (count < 100):  # Test with batches of 100 msgs
            count += 1
            foo()
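    For what it's worth, the usual suggestions for "unable to create new native thread" on ActiveMQ (assumptions to test, not a confirmed fix for this setup) are to switch from a dedicated thread per task to a pooled task runner and to shrink the per-thread stack, via the broker's JVM options:

        # illustrative values; tune for the actual host
        ACTIVEMQ_OPTS="-Xmx512m -Xss256k -Dorg.apache.activemq.UseDedicatedTaskRunner=false"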

    Read the article

  • HTTP caching confusion

    - by Keith
    I'm not sure whether this is a server issue, or whether I'm failing to understand how HTTP caching really works. I have an ASP.NET MVC application running on IIS7. There's a lot of static content as part of the site, including lots of CSS, JavaScript and image files. For these files I want the browser to cache them for at least a day - our .css, .js, .gif and .png files rarely change. My web.config goes like this:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
          </staticContent>
        </system.webServer>

    The problem I'm getting is that the browser (tested Chrome, IE8 and Firefox) doesn't seem to be caching the files as I'd expect. I've got the default settings ("check for newer pages automatically" in IE). On first visit the content downloads as expected:

        HTTP/1.1 200 OK
        Cache-Control: max-age=86400
        Content-Type: image/gif
        Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT
        Accept-Ranges: bytes
        ETag: "3efeb2294517ca1:0"
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET
        Date: Mon, 07 Jun 2010 14:29:16 GMT
        Content-Length: 918

        <content>

    I think that the Cache-Control: max-age=86400 should tell the browser not to request the page again for a day. OK, so now the page is reloaded and the browser requests the image again. This time it gets an empty response with these headers:

        HTTP/1.1 304 Not Modified
        Cache-Control: max-age=86400
        Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT
        Accept-Ranges: bytes
        ETag: "3efeb2294517ca1:0"
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET
        Date: Mon, 07 Jun 2010 14:30:32 GMT

    So it looks like the browser has sent the ETag back (as a unique id for the resource), and the server's come back with a 304 Not Modified, telling the browser that it can use the previously downloaded file. It seems to me that would be correct for many caching situations, but here I don't want the extra round trip. I don't care if the image gets out of date when the file on the server changes. There are a lot of these files (even with sprite maps and the like) and many of our clients have very slow networks. Each round trip to ping for that 304 status takes about a tenth to a fifth of a second. Many clients also have IE6, which only allows 2 HTTP connections at a time. The net result is that our application appears to be very slow for these clients, with every page taking an extra couple of seconds to check that the static content hasn't changed. What response header am I missing that would cause the browser to aggressively cache the files? How would I set this in a .NET web.config for IIS7? Am I misunderstanding how HTTP caching works in the first place?
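    One detail that may explain the traces above: an explicit browser reload (F5) always revalidates cached entries regardless of max-age; it is ordinary navigation that honours the one-day lifetime. That said, a sketch of an alternative IIS7 configuration that sends an absolute far-future Expires header instead (the date is illustrative):

        <system.webServer>
          <staticContent>
            <!-- UseExpires sends a fixed Expires header in place of max-age -->
            <clientCache cacheControlMode="UseExpires"
                         httpExpires="Tue, 19 Jan 2038 03:14:07 GMT" />
          </staticContent>
        </system.webServer>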

    Read the article
