Search Results

Search found 13160 results on 527 pages for 'response redirect'.

Page 123/527

  • GridView to Excel

    - by rahulchandran
    Hi. I am trying to convert the contents of a GridView to an Excel file, and I am doing it using this code:

        string attachment = "attachment; filename= " + FileName;
        Response.ClearContent();
        Response.AddHeader("content-disposition", attachment);
        Response.ContentType = "application/excel";
        StringWriter sw = new StringWriter();
        HtmlTextWriter htw = new HtmlTextWriter(sw);
        gv.RenderControl(htw);
        Response.Write(sw.ToString());
        Response.End();

    The problem is that I am getting some sort of HTML in an Excel-style format: there's JavaScript, the page links, etc. What I want is to turn the results of my query into a comma-separated file. Is that doable for free, or do I have to run the query myself, get the data, and write out a CSV stream? Thanks
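
    A minimal sketch of that direct-CSV route, assuming the query results are already in a DataTable named table, that FileName is the same variable as above, and that .NET 3.5 or later is available for the LINQ calls (the names here are illustrative, not from the original question):

        StringBuilder sb = new StringBuilder();
        // header row: one column name per field, comma separated
        sb.AppendLine(string.Join(",", table.Columns.Cast<DataColumn>()
                                            .Select(c => c.ColumnName)
                                            .ToArray()));
        foreach (DataRow row in table.Rows)
        {
            // quote every field and double any embedded quotes so commas in data survive
            string[] fields = row.ItemArray
                                 .Select(f => "\"" + f.ToString().Replace("\"", "\"\"") + "\"")
                                 .ToArray();
            sb.AppendLine(string.Join(",", fields));
        }
        Response.ClearContent();
        Response.AddHeader("content-disposition", "attachment; filename=" + FileName);
        Response.ContentType = "text/csv";
        Response.Write(sb.ToString());
        Response.End();

    This skips rendering the GridView entirely, so none of the page's markup or script leaks into the file.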

  • Scrapy Not Returning Additional Info from Scraped Link in Item via Request Callback

    - by zoonosis
    Basically, the code below scrapes the first 5 items of a table. One of the fields is another href, and clicking on that href provides more info, which I want to collect and add to the original item. So parse is supposed to pass the semi-populated item to parse_next_page, which then scrapes the next bit and should return the completed item back to parse. Running the code below only returns the info collected in parse. If I change the return items to return request, I get a completed item with all 3 "things", but I only get 1 of the rows, not all 5. I'm sure it's something simple; I just can't see it.

        class ThingSpider(BaseSpider):
            name = "thing"
            allowed_domains = ["somepage.com"]
            start_urls = ["http://www.somepage.com"]

            def parse(self, response):
                hxs = HtmlXPathSelector(response)
                items = []
                for x in range(1, 6):
                    item = ScrapyItem()
                    str_selector = '//tr[@name="row{0}"]'.format(x)
                    item['thing1'] = hxs.select(str_selector + '/a/text()').extract()
                    item['thing2'] = hxs.select(str_selector + '/a/@href').extract()
                    print 'hello'
                    # this Request is created but never returned/yielded to the
                    # engine from this code path, so parse_next_page never runs here
                    request = Request("www.nextpage.com", callback=self.parse_next_page, meta={'item': item})
                    print 'hello2'
                    request.meta['item'] = item
                    items.append(item)
                return items

            def parse_next_page(self, response):
                print 'stuff'
                hxs = HtmlXPathSelector(response)
                item = response.meta['item']
                item['thing3'] = hxs.select('//div/ul/li[1]/span[2]/text()').extract()
                return item

  • Preventing Duplicates on Google

    - by abel
    I am currently using a rewrite rule to enable access to .php pages without using the .php extension. However, to prevent old links from breaking, the pages can still be accessed via links containing the .php extension too. For example, domain.com/page.php can now be accessed at domain.com/page. All the links on the website now use domain.com/page-style links within the site. However, older incoming links will still point to the .php pages, meaning Google will index both pages and mark them as duplicates. I have two plans to remedy the situation:

    1. Use a PHP 301 redirect: when a page is accessed with the .php extension, redirect each page individually using a 301 redirect in PHP.
    2. Use a canonical tag: place a canonical tag on each page, pointing to the ".php"-less version.

    My question: are both methods equally efficacious in preventing Google from indexing my ".php" pages? Which method should be preferred, by convention or otherwise?

  • Tracking Redirects Leading to your site

    - by Bill
    Is there a way in which I can find out whether a user arrived at my site via a redirect? Here's an example: there are two sites, first.com and second.com. Any request to first.com does a 302 redirect to second.com. When the request arrives at second.com, is there any way to know it was redirected from first.com? Note that in this example you have no control over first.com. (In fact, it could be something bad, like kiddieporn.com.) Also note that, because it is a redirect, it will not be in the HTTP Referer header.

  • Exporting data from a gridview to different excel worksheets

    - by Alex
    I am binding data from a DataSet to a grid and exporting data from the grid to Excel. If the number of items in the grid is greater than 50,000, an error message is displayed, so I want to split the data and display it in different worksheets in Excel. (I am working in a web application.) I am using this code for exporting to Excel:

        gvExcel.DataSource = DTS;
        gvExcel.DataBind();
        Response.AddHeader("content-disposition", "attachment; filename= filename.xls");
        Response.ContentType = "application/excel";
        StringWriter sw = new StringWriter();
        HtmlTextWriter htw = new HtmlTextWriter(sw);
        gvExcel.RenderControl(htw);
        // Style is added dynamically
        Response.Write(style);
        Response.Write(sw.ToString());
        Response.End();

    Can anyone help me with this?
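
    A hedged sketch of the splitting step, assuming DTS is the DataTable bound to the grid, keeping the 50,000-row limit from the question, and using DataTable.AsEnumerable (which needs a reference to System.Data.DataSetExtensions, .NET 3.5+). Actually emitting multiple worksheets then needs a real Excel writer library, since the rendered-GridView trick only ever produces a single sheet:

        const int chunkSize = 50000;                    // limit taken from the question
        List<DataTable> sheets = new List<DataTable>();
        for (int i = 0; i < DTS.Rows.Count; i += chunkSize)
        {
            DataTable sheet = DTS.Clone();              // same schema, no rows
            foreach (DataRow row in DTS.AsEnumerable().Skip(i).Take(chunkSize))
                sheet.ImportRow(row);
            sheets.Add(sheet);                          // one DataTable per intended worksheet
        }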

  • javascript table - update on data request

    - by flyingcrab
    Hi, I am trying to update a table based on a JSON request. The first update/draw works fine, but any subsequent changes to the variables (the start and end date) do not show up, even though the JSON pulled from the server seems to be correct (according to Firebug). As far as I know, the code below should re-initialize everything; not sure what is going on. (I'm using the Google Visualization API.)

        function handleQueryResponse(response) {
          if (response.isError()) {
            //alert('Error in query: ' + response.getMessage() + ' ' + response.getDetailedMessage());
            return;
          }
          visualization = new google.visualization.Table(document.getElementById('visualization'));
          visualization.draw(response.getDataTable(), null);
        }

    One more thing: I'm working on a page that displays text-based tables and am currently trying to decide between the Google table (Visualization API) and a jQuery alternative I came across, jqGrid. Any good ones I am missing?

  • Download and write .tar.gz files without corruption.

    - by arbales
    I've tried numerous ways of downloading files, specifically .zip and .tar.gz, with Ruby and writing them to disk. I've found that the file appears to be the same as the reference (in size), but the archives refuse to extract. What I'm attempting now is:

        def download_request(url, filePath:path, progressIndicator:progressBar)
          file = File.open(path, "w+")   # note: "w+" is text mode; binary downloads generally want "wb"
          begin
            Net::HTTP.get_response URI.parse(url) do |response|
              if response['Location'] != nil
                puts 'Direct to: ' + response['Location']
                return download_request(response['Location'], filePath:path, progressIndicator:progressBar)
              end
              # some stuff
              response.read_body do |segment|
                file.write(segment)
                # some progress stuff.
              end
            end
          ensure
            file.close
          end
        end

        download_request("http://github.com/jashkenas/coffee-script/tarball/master", filePath:"tarball.tar.gz", progressIndicator:nil)

    Thanks!

  • How do I parse an XML file that's on a different web server?

    - by Tim
    I have a list of training dates saved in an XML file, and I have a little JavaScript file that parses all of the training dates and spits them out onto a neatly formatted page. This solution was fine until we decided that we wanted another web page on another server to access the same XML file. Since I cannot use JavaScript to parse an XML file that's located on another server, I figured I'd just use an ASP script. However, when I run the following, I get a response that there are 0 nodes matching a value that should have several:

        <%
        Dim URL, objXML
        URL = "http://www.site.com/feed.xml"
        Set objXML = Server.CreateObject("MSXML2.DOMDocument.3.0")
        objXML.setProperty "ServerHTTPRequest", True
        objXML.async = False
        objXML.Load(URL)
        If objXML.parseError.errorCode <> 0 Then
            Response.Write(objXML.parseError.reason)
            Response.Write(objXML.parseError.errorCode)
        End If
        Response.Write(objXML.getElementsByTagName("era").length)
        %>

    My question is two-fold: Is there a way I can use JavaScript to parse a remote XML file? If not, why doesn't my code give me the proper response?

  • HTTPS in Java ends up with strange results

    - by Senne
    I'm trying to illustrate to students how HTTPS is used in Java, but I have the feeling my example is not really the best out there... The code works well on my Windows 7 machine: I start the server, go to https://localhost:8080/somefile.txt, I get asked to trust the certificate, and all goes well. When I try over HTTP (before or after accepting the certificate) I just get a blank page, which is OK for me. BUT when I try the exact same thing on my Windows XP machine: same thing, all goes well. But then (after accepting the certificate first), I'm also able to get all the files through HTTP! (If I first try HTTP before HTTPS, followed by accepting the certificate, I get no answer.) I tried refreshing and hard-refreshing a million times, but this should not be working, right? Is there something wrong in my code? I'm not sure I use the right approach to implement HTTPS here...

        package Security;

        import java.io.*;
        import java.net.*;
        import java.util.*;
        import java.util.concurrent.Executors;
        import java.security.*;
        import javax.net.ssl.*;
        import com.sun.net.httpserver.*;

        public class HTTPSServer {
            public static void main(String[] args) throws IOException {
                InetSocketAddress addr = new InetSocketAddress(8080);
                HttpsServer server = HttpsServer.create(addr, 0);
                try {
                    System.out.println("\nInitializing context ...\n");
                    KeyStore ks = KeyStore.getInstance("JKS");
                    char[] password = "vwpolo".toCharArray();
                    ks.load(new FileInputStream("myKeys"), password);
                    KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
                    kmf.init(ks, password);
                    SSLContext sslContext = SSLContext.getInstance("TLS");
                    sslContext.init(kmf.getKeyManagers(), null, null);
                    // an HTTPS server must have a configurator for the SSL connections.
                    server.setHttpsConfigurator(new HttpsConfigurator(sslContext) {
                        // override configure to change the default configuration.
                        public void configure(HttpsParameters params) {
                            try {
                                // get the SSL context for this configurator
                                SSLContext c = getSSLContext();
                                // get the default settings for this SSL context
                                SSLParameters sslparams = c.getDefaultSSLParameters();
                                // set parameters for the HTTPS connection.
                                params.setNeedClientAuth(true);
                                params.setSSLParameters(sslparams);
                                System.out.println("SSL context created ...\n");
                            } catch (Exception e2) {
                                System.out.println("Invalid parameter ...\n");
                                e2.printStackTrace();
                            }
                        }
                    });
                } catch (Exception e1) {
                    e1.printStackTrace();
                }
                server.createContext("/", new MyHandler1()); // note: the handler class shown below is named MyHandler
                server.setExecutor(Executors.newCachedThreadPool());
                server.start();
                System.out.println("Server is listening on port 8080 ...\n");
            }
        }

        class MyHandler implements HttpHandler {
            public void handle(HttpExchange exchange) throws IOException {
                String requestMethod = exchange.getRequestMethod();
                if (requestMethod.equalsIgnoreCase("GET")) {
                    Headers responseHeaders = exchange.getResponseHeaders();
                    responseHeaders.set("Content-Type", "text/plain");
                    exchange.sendResponseHeaders(200, 0);
                    OutputStream responseBody = exchange.getResponseBody();
                    String response = "HTTP headers included in your request:\n\n";
                    responseBody.write(response.getBytes());
                    Headers requestHeaders = exchange.getRequestHeaders();
                    Set<String> keySet = requestHeaders.keySet();
                    Iterator<String> iter = keySet.iterator();
                    while (iter.hasNext()) {
                        String key = iter.next();
                        List values = requestHeaders.get(key);
                        response = key + " = " + values.toString() + "\n";
                        responseBody.write(response.getBytes());
                        System.out.print(response);
                    }
                    response = "\nHTTP request body: ";
                    responseBody.write(response.getBytes());
                    InputStream requestBody = exchange.getRequestBody();
                    byte[] buffer = new byte[256];
                    if (requestBody.read(buffer) > 0) {
                        responseBody.write(buffer);
                    } else {
                        responseBody.write("empty.".getBytes());
                    }
                    URI requestURI = exchange.getRequestURI();
                    String file = requestURI.getPath().substring(1);
                    response = "\n\nFile requested = " + file + "\n\n";
                    responseBody.write(response.getBytes());
                    responseBody.flush();
                    System.out.print(response);
                    Scanner source = new Scanner(new File(file));
                    String text;
                    while (source.hasNext()) {
                        text = source.nextLine() + "\n";
                        responseBody.write(text.getBytes());
                    }
                    source.close();
                    responseBody.close();
                    exchange.close();
                }
            }
        }

  • Weird issue with iptables redirection

    - by skypemesm
    I am trying to redirect all incoming traffic on UDP port 5060 to port 56790, and all outgoing traffic from 5060 to port 56789. I used these iptables rules:

        iptables -t nat -I PREROUTING -p udp ! -s localhost --dport 5060 -j REDIRECT --to-port 56790
        iptables -t nat -I OUTPUT -p udp ! -s localhost --sport 5060 -j REDIRECT --to-port 56789

    I listen on both ports using raw sockets, after setting the interface to promiscuous mode using ioctl. I see packets ONLY on 56789 (the sending side), and I do not see any packets on 56790, while Wireshark shows that many packets are delivered to port 5060. Why would this happen? Any ideas? Do you think it's a problem with the iptables rules, or something to do with raw sockets? (This is Ubuntu 10.04 and iptables v1.4.4.)

  • Meta Refresh for change of page name and content

    - by user3507399
    Hopefully just a quick one. I've got a client that is changing the name of a workshop they run. This means a change of URL and page title for keywords on which they have first-page ranking. The keywords are still relevant, so what I want to avoid is a 301 redirect to a page that has different keywords from the previous page. Is the best option to keep the old page live, with its URL and title, and use a meta refresh to redirect after a period of time (not instantly)? That way the SEO ranking is retained for the previous workshop name while they work on the ranking for the name change. Would a 301 redirect have an adverse effect? Thanks!

  • JavaScript: Is passing large objects or strings between functions a bad practice?

    - by Mr. Smee
    Is it considered a bad practice to pass around a large string or object (let's say from an AJAX response) between functions? Would it be beneficial in any way to save the response in a variable and keep reusing that variable? So in the code it would be something like this:

        var response;
        $.post(url, function(resp){
            response = resp;
        });

        function doSomething() {
            // do something with the response here
        }

    vs.

        $.post(url, function(resp){
            doSomething(resp);
        });

        function doSomething(resp) {
            // do something with the resp here
        }

    Assume resp is a large object or string and it can be passed around between multiple functions.

  • Is it possible to track redirects to external sites from our subdomains?

    - by ChaBuku
    I have a handful of subdomains set up as redirects, because we are using them for QR codes. I want to be able to track the QR-code redirects (which are already set up and printed, so no changing them at this point) and see the effectiveness of each. Here are two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later this year they may forward to Google Play or a specific landing page). Is there any way, on my server, to track the redirect from the subdomain to iTunes and see where traffic is coming from first? I presently have the redirects set up through cPanel, using subdomains. Edit: from the research I've seen, I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to register a hit?
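
    Platform aside (the site above is on cPanel, so a PHP or Apache equivalent would be the natural fit), the shape of a server-tracked redirect is always the same: log first, then send the 302. A minimal ASP.NET-flavoured sketch, with the log path, target URL, and handler name all illustrative:

        public class TrackedRedirect : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // record whatever is useful before sending the visitor on;
                // a real implementation would want locking or a logging library
                File.AppendAllText(context.Server.MapPath("~/App_Data/qr.log"),
                    DateTime.UtcNow + " " + context.Request.Url.Host + " " +
                    context.Request.UserAgent + Environment.NewLine);
                context.Response.Redirect("https://itunes.apple.com/...", true); // placeholder target
            }
            public bool IsReusable { get { return true; } }
        }

    Because the logging happens server-side before the redirect is issued, there is no client-side tracking script to wait for.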

  • How to select the most recent set of dated records from a mysql table

    - by Ken
    I am storing the response to various RPC calls in a MySQL table with the following fields:

        Table: rpc_responses
        timestamp (date)
        method    (varchar)
        id        (varchar)
        response  (mediumtext)
        PRIMARY KEY (timestamp, method, id)

    What is the best method of selecting the most recent responses for all existing combinations of method and id? For each date there can only be one response for a given method/id. Not all call combinations are necessarily present for a given date. There are dozens of methods, thousands of ids, and at least 356 different dates. Sample data:

        timestamp   method   id  response
        2009-01-10  getThud  16  "....."
        2009-01-10  getFoo   12  "....."
        2009-01-10  getBar   12  "....."
        2009-01-11  getFoo   12  "....."
        2009-01-11  getBar   16  "....."

    Desired result:

        2009-01-10  getThud  16  "....."
        2009-01-10  getBar   12  "....."
        2009-01-11  getFoo   12  "....."
        2009-01-11  getBar   16  "....."

    (I don't think this is the same question; it won't give me the most recent response.)
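
    A sketch of the classic "latest row per group" pattern this question is after: the inner query finds the newest timestamp for every (method, id) pair, and the join keeps only the matching rows. Shown here as a C# query string to match the other snippets on this page; the connection and execution code is omitted:

        const string latestResponses = @"
            SELECT r.timestamp, r.method, r.id, r.response
            FROM rpc_responses r
            JOIN (SELECT method, id, MAX(timestamp) AS ts
                  FROM rpc_responses
                  GROUP BY method, id) latest
              ON r.method = latest.method
             AND r.id = latest.id
             AND r.timestamp = latest.ts";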

  • SEO: best way to deal with short lifetime URLs?

    - by Mike Norgate
    I am currently in the process of redesigning a job-advert site and am trying to put a lot more effort into my SEO. My question is how I should deal with the URLs that point to job adverts when the adverts expire. The options I have thought of so far are:

    1. Return a 404 error and redirect to a 404 page. Will it have an effect on ranking if there are a lot of URLs that return 404s after only being up for a few weeks?
    2. Redirect to the job listing page: when the user requests a URL for an advert that has expired, just redirect to the main job listing page.
    3. Show the advert but tell the user it has closed: show the advert page, but with a notification that the advert has closed. The issue I see with this is that the user will visit the page, see that it's closed, and then leave the site again, which would not be good for rankings.
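
    One more option worth weighing: answer expired adverts with HTTP 410 Gone, which tells crawlers the removal is deliberate rather than an error, while still rendering a helpful page for the visitor. A minimal ASP.NET-flavoured sketch (the question itself is platform-neutral, and the model object and label here are illustrative):

        if (advert.HasExpired)   // "advert" and "HasExpired" are hypothetical names
        {
            Response.StatusCode = 410;   // deliberate removal, unlike a generic 404
            lblNotice.Text = "This position has closed. Browse current vacancies below.";
            return;
        }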

  • SEO penalty for landing page redirects

    - by therealsix
    Using eBay as an example, let's say I have a large number of items whose URLs look like this:

        cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841

    I want to give my client the ability to put links to these items on their website EASILY, without knowing or checking my URL. So I created a redirect service that maps their identifier to my URL:

        ebay.com/fake_redirect_service/shared_identifier9918

    would redirect to the link above. This works great: my clients can easily set up these links with information they already have, and the user will see the page as usual. So on to the problem... I'm concerned that this redirect service will have a negative impact on my SEO ranking. Having a landing page redirect you immediately to a different URL seems like something a typical spam site would do. Will this hurt me? Any better solutions?

  • From my friends, know who is already using the app

    - by Toni Michel Caubet
    I've got this working to get all friends of the 'me' user, like this:

        FB.api('/me/friends?fields=id,name,updated_time&date_format=U&<?=$access_token?>',
               {limit: 3},
               function(response){
                   console.log('Friend name: ' + response.data[0].name);
               });

    But I need to know whether each friend is already in the app or not. How can I alter the query to get an extra field in the object, 'is_in_app', true/false?

        FB.api('/me/friends?fields=id,name,updated_time&date_format=U&<?=$access_token?>',
               {limit: 3},
               function(response){
                   var text = 'is not in app';
                   if (response.data[0].is_in_app == true) text = 'is in app!!';
                   console.log('Friend name: ' + response.data[0].name + ' ' + text);
               });

    How can I achieve this?

  • Load a Word document inside the browser window

    - by netNewbi3
    Hi, I have a page that dynamically links to a document that opens in a new page. (The document is stored in a database as binary data and is loaded using the following code.)

        Response.ClearContent()
        Response.ContentType = myReader("MIMEType").ToString()
        Response.AddHeader("Content-Disposition", "inline; filename=" & myReader("Filename"))
        Response.BinaryWrite(myReader("DocBD"))
        Response.End()

    This works OK. However, some documents have restricted access, and before loading the document the user is redirected to a login page. After entering a username and password, the document is loaded. If it is a PDF file, for example, it loads in the same login page, but when it is a Word or Excel document it opens outside the browser window and the login page remains in the background. Is there a way to force a Word or Excel document to open inside the browser window? Many thanks.

  • Data table to Excel conversion leaves my server side button non responsive

    - by Nikhil Vaghela
    We have a web part on which we display some data in a grid. We are exporting the grid's underlying DataTable to Excel and displaying an Open/Save/Cancel dialogue box on click of a server-side button. The following is the code we execute on click of the server-side button:

        this.Page.Response.Clear();
        this.Page.Response.AppendHeader("Content-Disposition", "attachment; filename=MyTasks.xls");
        this.Page.Response.ContentType = "application/ms-excel";
        this.Page.Response.Write("...here goes my well formatted html....");
        this.Page.Response.End();

    The problem is that when I click Cancel in the dialogue box, the box disappears, but all the server-side buttons placed on my web part become non-responsive: on click of any of those buttons, their server-side click event does not get fired!!! Any idea? Thanks.

  • How to get the action from the HttpServlet request to dispatch to multiple pages

    - by JFB
    I am using the Page Controller pattern. How could I use the same controller for two different pages by detecting the request action and then dispatching according to the result? Here is my code:

    account.jsp:

        <form name="input" action="<%=request.getContextPath() %>/edit" method="get">
            <input type="submit" value="Modifier" />
        </form>

    Account servlet:

        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println("received HTTP GET");
            String action = request.getParameter("action");
            if (action == null) {
                // the account page
                dispatch(request, response, "/account");
            } else if (action.equals("/edit")) { // note: Java strings must be compared with equals(), not ==
                // the popup edit page
                dispatch(request, response, "/edit");
            }
        }

        protected void dispatch(HttpServletRequest request, HttpServletResponse response, String page)
                throws javax.servlet.ServletException, java.io.IOException {
            RequestDispatcher dispatcher = getServletContext().getRequestDispatcher(page);
            dispatcher.forward(request, response);
        }

  • Difference between redirecting to a page and coming to the same page after pressing back button

    - by Mac
    Actually, I have a page in which I am not using the cache, by using this code:

        HttpContext.Current.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
        HttpContext.Current.Response.Cache.SetValidUntilExpires(false);
        HttpContext.Current.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
        HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        HttpContext.Current.Response.Cache.SetNoStore();

    Now I want to know: is there any difference between coming to this page using a proper link and coming back using the browser back button, and is there any way to detect this?

  • Forcing A Postback Asp.Net

    - by Nick LaMarca
    Please take a look at the following click event:

        Protected Sub btnDownloadEmpl_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnDownloadEmpl.Click
            Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H()
            Dim d As String = Format(Date.Now, "d")
            Dim ad() As String = d.Split("/")
            Dim fd As String = ad(0) & ad(1)
            Dim fn As String = "E_" & fd & ".csv"
            Response.ContentType = "text/csv"
            Response.AddHeader("Content-Disposition", "attachment; filename=" & fn)
            CreateCSVFile(emplTable, Response.Output)
            Response.Flush()
            Response.End()
            lblEmpl.Visible = True
        End Sub

    This code simply exports data from a DataTable to a CSV file. The problem here is that lblEmpl.Visible = True never gets hit, because this code doesn't cause a postback to the server. Even if I put the line lblEmpl.Visible = True at the top of the click event, the line executes fine, but the page is never updated. How can I fix this?

  • Assigning two strings together causes an access read violation

    - by Jay Bell
    I am trying to pass a string to a class mutator and set the private member to that string. Here is the code that is sending the string:

        void parseTradePairs(Exchange::Currency *curr, std::string *response, int begin, int exit)
        {
            int start;
            int end;
            string temp;
            string dataResponse;
            CURL *tempCurl;
            initializeCurl(tempCurl);
            int location = response->find("marketid", begin);
            if (location <= exit)
            {
                start = location + 11;
                begin = response->find("label", start);
                end = begin - start - 3;
                findStrings(start, end, temp, response);
                getMarketInfo(tempCurl, temp, dataResponse);
                curr->_coin->setExch(temp);   // here is the line of code that is sending the string
                dataResponse >> *(curr->_coin);
                curr->_next = new Exchange::Currency(curr, curr->_position + 1);
                parseTradePairs(curr->_next, response, begin, exit);
            }
        }

    and here is the mutator within the Coin class that is receiving the string and assigning it to _exch:

        void Coin::setExch(string exch)
        {
            _exch = exch;
        }

    I have stepped through it and made sure that exch has the string in it ("105"), but as soon as it hits _exch = exch; I get the read access violation. I tried passing as a pointer as well. I do not believe it should go out of scope, and the string variable in the class is initialized in the default constructor, but again, that shouldn't matter unless I am trying to read from it instead of writing to it.

        /* default constructor */
        Coin::Coin()
        {
            _id = "";
            _label = "";
            _code = "";
            _name = "";
            _marketCoin = "";
            _volume = 0;
            _last = 0;
            _exch = "";
        }

        Exchange::Exchange(std::string str)
        {
            _exch = str;
            _currencies = new Currency;
            std::string pair;
            std::string response;
            CURL *curl;
            initializeCurl(curl);
            getTradePairs(curl, response);
            int exit = response.find_last_of("marketid");
            parseTradePairs(_currencies, &response, 0, exit);
        }

        int main(void)
        {
            CURL *curl;
            string str;
            string id;
            Coin coin1;
            initializeCurl(curl);
            Exchange ex("cryptsy");
            curl_easy_cleanup(curl);
            system("pause");
            return 0;
        }

        class Exchange
        {
        public:
            typedef struct Currency
            {
                Currency(Coin *coin, Currency *next, Currency *prev, int position)
                    : _coin(coin), _next(next), _prev(prev), _position(position) {}
                Currency(Currency *prev, int position)
                    : _prev(prev), _position(position), _next(NULL),
                      _coin(&Coin()) {}   // note: this stores the address of a temporary Coin
                Currency() : _next(NULL), _prev(NULL), _position(0) {}
                Coin *_coin;
                Currency *_next;
                Currency *_prev;
                int _position;
            };

            /* constructor and destructor */
            Exchange();
            Exchange(std::string str);
            ~Exchange();

            /* Assignment operator */
            Exchange& operator =(const Exchange& copyExchange);

            /* Parse Cryptsy pairs */
            friend void parseTradePairs(Currency *curr, std::string *response, int begin, int exit);

        private:
            std::string _exch;
            Currency *_currencies;
        };

    Here is what I changed it to, to fix it:

        typedef struct Currency
        {
            Currency(Coin *coin, Currency *next, Currency *prev, int position)
                : _coin(coin), _next(next), _prev(prev), _position(position) {}
            Currency(Currency *prev, int position)
                : _prev(prev), _position(position), _next(NULL), _coin(&Coin()) {}
            Currency()
            {
                _next = NULL;
                _prev = NULL;
                _position = 0;
                _coin = new Coin();   // allocate a real Coin instead of pointing at a temporary
            }
            Coin *_coin;
            Currency *_next;
            Currency *_prev;
            int _position;
        };

  • ASP.Net: Expiring a page when navigating back

    - by K2so
    Basically, all pages on this site I am building cannot be accessed when the user clicks "Back" (or with key control) in the browser, and the page should expire if one tries to navigate back in history. I put this into Global.asax::Application_BeginRequest:

        Response.Cache.SetCacheability(HttpCacheability.NoCache)
        Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1))
        Response.Cache.SetValidUntilExpires(False)
        Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches)
        Response.Cache.SetNoStore()

    This clears out the cache and disallows going back to any pages when the user is logged out, but doesn't do the job while the user is logged in. I saw posts where people suggested using a JavaScript approach, by calling History.Forward(1) on the page, but I would rather not do this, as it requires JavaScript to be enabled (which the user can disable). I'd appreciate any suggestions.

  • Output caching in HTTP Handler and SetValidUntilExpires

    - by mayor
    I'm using output caching in my custom HTTP handler in the following way:

        public void ProcessRequest(HttpContext context)
        {
            TimeSpan freshness = new TimeSpan(0, 0, 0, 60);
            context.Response.Cache.SetExpires(DateTime.Now.Add(freshness));
            context.Response.Cache.SetMaxAge(freshness);
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetValidUntilExpires(true);
            // ...
        }

    It works, but the problem is that refreshing the page with F5 leads to page regeneration (instead of cache usage), despite the last code line:

        context.Response.Cache.SetValidUntilExpires(true);

    Any suggestions?
