Search Results

Search found 15691 results on 628 pages for 'browser caching'.

Page 75 of 628

  • How could I cache images that I'm pulling from a magento database through ajax?

    - by wes
    Here's the script being called through ajax: <?php require_once '../app/Mage.php'; umask(0); /* not Mage::run(); */ Mage::app('default'); $cat_id = ($_POST['cat_id']) ? $_POST['cat_id'] : NULL; try { $category = new Mage_Catalog_Model_Category(); $category->load($cat_id); $collection = $category->getProductCollection(); $output = '<ul>'; foreach ($collection as $product) { $cProduct = Mage::getModel('catalog/product'); $cProduct->load($product->getId()); $output .= '<li><img id="'.$product->getId().'" src="' . (string)Mage::helper('catalog/image')->init($cProduct, 'small_image')->resize(75) . '" class="thumb" /></li>'; } $output .= '</ul>'; echo $output; } catch (Exception $e) { echo 'Caught exception: ', $e->getMessage(), "\n"; } I'm just passing in the category ID, which I've tacked onto the navigation links, then doing some work to eventually pass back all product images in that category. I'm using this in a drag-and-drop build-a-bracelet type of application, and the number of images returned is sometimes in the 500s, so it gets pretty held up during transmission, sometimes 10 seconds or so. I know I'd do well to cache them in some way, I'm just not sure how to go about it. Any help is much appreciated. Thanks. -Wes
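
    One way to cut the 10-second transmissions down is to cache the finished <ul> markup on the server, keyed by category ID, so the 500-odd product loads only happen once per category. A rough sketch, assuming a Magento 1.x install where Mage::app()->getCache() returns a Zend_Cache frontend (the cache key, tag and lifetime below are made up for illustration):

        <?php
        require_once '../app/Mage.php';
        umask(0);
        Mage::app('default');

        $cat_id = isset($_POST['cat_id']) ? (int) $_POST['cat_id'] : 0;
        $cache  = Mage::app()->getCache();          // Zend_Cache frontend in Magento 1.x
        $key    = 'bracelet_thumbs_' . $cat_id;     // hypothetical cache key

        // Serve the markup straight from the cache when we already built it once
        if ($output = $cache->load($key)) {
            echo $output;
            exit;
        }

        // ... build $output exactly as in the original script ...
        $output = '<ul>...</ul>';

        // Keep the generated list for an hour so later AJAX calls skip the product loads
        $cache->save($output, $key, array('bracelet_thumbs'), 3600);
        echo $output;

    The resized images themselves are written to the media cache on disk by the catalog/image helper, so once the <img> tags are in place the browser can cache those files as usual; the expensive part is the per-request product loading, which the markup cache above avoids.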

    Read the article

  • Cache layer for MVC - Model or controller?

    - by Industrial
    Hi everyone, I am having some second thoughts about where to implement the caching part. Where do you think is the most appropriate place to implement it: inside every model, or in the controller? Approach 1 (pseudo-code): // mycontroller.php MyController extends Controller_class { function index () { $data = $this->model->getData(); echo $data; } } // myModel.php MyModel extends Model_Class{ function getData() { $data = memcached->get('data'); if (!$data) { $query->SQL_QUERY("Do query!"); } return $data; } } Approach 2: // mycontroller.php MyController extends Controller_class { function index () { $dataArray = $this->memcached->getMulti('data','data2'); foreach ($dataArray as $key) { if (!$key) { $data = $this->model->getData(); $this->memcached->set($key, $data); } } echo $data; } } // myModel.php MyModel extends Model_Class{ function getData() { $query->SQL_QUERY("Do query!"); return $data; } } Thoughts: Approach 1: no multiget/multiset; if a large number of keys is returned, this causes overhead. Easier to maintain: all database/cache handling is in each model. Approach 2: better performance-wise, since multiset/multiget is used. More code required. Harder to maintain. Tell me what you think!
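
    For what it's worth, approach 1 can stay short if the model talks to memcached directly. A minimal sketch using the PECL Memcached extension (class names follow the pseudo-code above; the key, lifetime and the $this->db call are placeholders for whatever the framework provides):

        <?php
        class MyModel extends Model_Class
        {
            private $memcached;

            public function __construct()
            {
                $this->memcached = new Memcached();
                $this->memcached->addServer('127.0.0.1', 11211);
            }

            public function getData()
            {
                $key  = 'mymodel_data';              // illustrative cache key
                $data = $this->memcached->get($key);

                if ($data === false) {               // cache miss: hit the database
                    $data = $this->db->query('SELECT ...');   // placeholder query call
                    $this->memcached->set($key, $data, 300);  // cache for 5 minutes
                }
                return $data;
            }
        }

    The multi-get advantage of approach 2 can also live in a shared base model class (fetch all keys at once, fall back per key on misses), which keeps the controllers free of cache logic while still batching the memcached round-trips.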

    Read the article

  • Generated images fail to load in browser

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes from 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched. My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options? As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.
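
    Since the graphs only need to be generated once, one common pattern is to write each rendered graph to disk on the first request and serve the file with long-lived cache headers afterwards, so only the very first visitor pays the 450-700 ms per image. A rough sketch, not Kohana-specific; the cache directory and the render_graph() helper are hypothetical stand-ins for the app's own graph code:

        <?php
        // Generate the graph once, then serve it from disk with far-future headers.
        function graph_action($graph_id)
        {
            $file = '/tmp/graph_cache/' . md5($graph_id) . '.png';   // hypothetical cache path

            if (!is_file($file)) {
                $png = render_graph($graph_id);       // expensive part: build the PNG
                file_put_contents($file, $png);
            }

            header('Content-Type: image/png');
            header('Content-Length: ' . filesize($file));
            header('Cache-Control: public, max-age=86400');   // let the browser keep it for a day
            readfile($file);
        }

    Warming this file cache from the same batch process that produces the graph data would make the first real visit as fast as later ones, without relying on a user request to trigger generation.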

    Read the article

  • How to cache code in PHP?

    - by Janis Peisenieks
    I am creating a custom form-building system, which includes various tokens. These tokens are found using regular expressions and, depending on the type of token, parsed. Some require simple replacement, some require loops, and so forth. Now I know that regular expressions are quite resource- and time-consuming, so I would like to parse the code for the form once, generate PHP code from it, and then save that PHP code for subsequent uses. How would I go about doing this? So far I have only seen output caching. Is there a way to cache commands like echo and loops like foreach()? To avoid misunderstandings, I'll give an example. Unparsed template data: Thank You for Your interest, [*Title*] [*Firstname*] [*Lastname*]. Here are the details of Your order! [*KeyValuePairs*] Here is the link to Your request: [*LinkToRequest*]. Parsed template: Thank You for Your interest, <?php echo $data->title;?> <?php echo $data->firstname;?> <?php echo $data->lastname;?>. Here are the details of Your order! <?php foreach($data->values as $key=>$value){ echo $key."-".$value; }?> Here is the link to Your request: <?php echo $data->linkToRequest;?>. I would then save the parsed template, and instead of parsing the template every time, just pass the $data variable to the already parsed one, which would generate the output.
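
    The usual approach here is "compile once, include many times": run the regular-expression pass a single time, write the resulting PHP to a file, and on later requests just include that file. A rough sketch under those assumptions (the file paths and the lowercase token-to-property mapping are only illustrative; adapt them to the real token list):

        <?php
        function compile_template($source_file, $compiled_file)
        {
            $tpl = file_get_contents($source_file);

            // Turn [*Firstname*] into <?php echo $data->firstname; ?> and so on
            $php = preg_replace_callback('/\[\*(\w+)\*\]/', function ($m) {
                if ($m[1] === 'KeyValuePairs') {
                    return '<?php foreach ($data->values as $key => $value) { echo $key . "-" . $value; } ?>';
                }
                return '<?php echo $data->' . strtolower($m[1]) . '; ?>';
            }, $tpl);

            file_put_contents($compiled_file, $php);
        }

        function render_template($source_file, $compiled_file, $data)
        {
            // Recompile only when the source changed or the compiled copy is missing
            if (!is_file($compiled_file) || filemtime($compiled_file) < filemtime($source_file)) {
                compile_template($source_file, $compiled_file);
            }
            include $compiled_file;   // the compiled PHP sees $data directly
        }

    This is essentially what template engines like Smarty or Twig do internally: the expensive parsing happens once, and every later render is just an include of plain PHP.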

    Read the article

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have two applications (APP1 and APP2) working on two subdomains: app1.domain.local and app2.domain.local. I am trying to run two consecutive HTTP GET requests from APP1 to APP2 like this: Code in APP1 (request): response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") ) response2 = Net::HTTP.get( URI.parse("http://app2.domain.local/test=second&id=1") ) Code in APP2 (response): respond_to do |format| if <model_name>.find(params[:id]).<field_name> == "first" <model_name>.find(params[:id]).update_attribute ( <field_name>, <field_value> ) format.xml { render :xml => <model_name>.find(params[:id]).<field_name> } elsif <model_name>.find(params[:id]).<field_name> == "second" format.xml { render :xml => <model_name>.find(params[:id]).<field_name> } end end After the first request I get the correct XML (response1 is what I expect), but on the second I don't (response2 isn't what I expect). Doing some tests I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching of <model_name>.find(params[:id]).<field_name>? P.S.: I read about ETag and conditional GET, but I am not sure that I need that approach. I would like to keep everything simple.

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory-intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to read such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory. This slows down the entire system. My question is: how do I prevent the system cache from hogging physical memory? I attempted to remove file buffering (using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down application performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of WinXP's caching behavior, so any good links explaining it would also be helpful.

    Read the article

  • iOS - is it possible to cache CGContextDrawImage?

    - by woot586
    I used the time profiling tool to identify that 95% of the time is spent calling the function CGContextDrawImage. In my app there are a lot of duplicate images repeatedly being chopped from a sprite map and drawn to the screen. I was wondering if it is possible to cache the output of CGContextDrawImage in an NSMutableDictionary; then, if the same sprite is requested again, it can just be pulled from the cache rather than doing all the work of clipping and rendering it again. This is what I've got, but I have not been too successful: Definitions: if(cache == NULL) cache = [[NSMutableDictionary alloc]init]; //Identifier based on the name of the sprite and location within the sprite. NSString* identifier = [NSString stringWithFormat:@"%@-%d",filename,frame]; Adding to cache: CGRect clippedRect = CGRectMake(0, 0, clipRect.size.width, clipRect.size.height); CGContextClipToRect( context, clippedRect); //create a rect equivalent to the full size of the image //offset the rect by the X and Y we want to start the crop //from in order to cut off anything before them CGRect drawRect = CGRectMake(clipRect.origin.x * -1, clipRect.origin.y * -1, atlas.size.width, atlas.size.height); //draw the image to our clipped context using our offset rect CGContextDrawImage(context, drawRect, atlas.CGImage); [cache setValue:UIGraphicsGetImageFromCurrentImageContext() forKey:identifier]; UIGraphicsEndImageContext(); Rendering a cached sprite: there is probably a better way to render a CGImage, which is my ultimate caching goal, but at the moment I'm just looking to successfully render the cached image out; however, this has not been successful. UIImage* cachedImage = [cache objectForKey:identifier]; if(cachedImage){ NSLog(@"Cached %@",identifier); CGRect imageRect = CGRectMake(0, 0, cachedImage.size.width, cachedImage.size.height); if (NULL != UIGraphicsBeginImageContextWithOptions) UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0); else UIGraphicsBeginImageContext(imageRect.size); //Use draw for now just to see if the image renders out ok CGContextDrawImage(context, imageRect, cachedImage.CGImage); UIGraphicsEndImageContext(); }

    Read the article

  • ASP.NET or PHP: Is Memcached useful for storing user-state information?

    - by hamlin11
    This question may expose my ignorance as a web developer, but that wouldn't exactly be a bad thing for me now, would it? I need to store user-state information. Examples of information that I need to store per user (define user: unauthenticated visitor): user arrived at the site from Google/Bing/Yahoo; user utilized the search feature (true/false); list of previously visited product pages on the current visit. It is my understanding that I could store this in the view state, but that causes a problem with page load from the end-user's perspective, because a significant amount of non-viewable information is being transferred to and from the end-users even though the server is the only side that needs the info. On a similar note, it is my understanding that session state can be used to store such information, but doesn't this also result in the same information being transferred to the user and stored in their cookie? (Not quite as bad as viewstate, but it does not feel ideal.) This leaves me with either a server-only session storage system or a memcaching solution. Is memcached the only good option here?
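
    Memcached works well for this as long as only an opaque ID travels in the cookie and the actual state lives on the server. A rough PHP sketch, assuming the PECL Memcached extension; the key scheme and the 30-minute lifetime are illustrative:

        <?php
        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);

        session_start();                               // only the session ID is sent in the cookie
        $key   = 'visitor_state_' . session_id();      // illustrative key scheme
        $state = $mc->get($key);

        if ($state === false) {
            // First request from this visitor: initialise the tracked fields
            $state = array(
                'referrer'     => isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : null,
                'used_search'  => false,
                'viewed_pages' => array(),
            );
        }

        $state['viewed_pages'][] = $_SERVER['REQUEST_URI'];   // remember the current product page
        $mc->set($key, $state, 1800);                         // keep the state for 30 minutes

    Nothing but the session cookie crosses the wire, so page weight is unaffected, and the state survives across requests without touching viewstate at all.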

    Read the article

  • Jquery - removing an image before the client browser attempts to download it

    - by ajbrun
    Hi there, I wonder if anyone could help me with a problem I've been having. I have a number of large images available, but due to space limitations, I can't create multiple copies of these at various sizes. I have used PHP GD functions to resize the images to the sizes I need and output them to the browser. This works, but obviously takes some processing time, which impacts page load times. I'm fine with this, but I only want to show the image once it's fully loaded, and have a loading gif in its place until that time. I'm using jQuery to do this. The problem I'm having is making the page functional whether the client has JavaScript enabled or not. If JS is not enabled, I want standard img tags to be output; otherwise the images are removed and replaced with a loading gif until they have fully loaded. The link below shows a simple example of what I want to do, although it is not friendly to non-JavaScript users (try turning JS off): http://jqueryfordesigners.com/demo/image-load-demo.php I've been testing the basics using the code below. The attr() function will be replaced with something like remove(). This is just a test to make something happen to the image before the browser tries to load it. $(document).ready(function() { $( "#Thumbnails .thumbnail img" ).attr('src', '#'); }); In IE, this works correctly - the image source is replaced with "#" BEFORE the client browser gets a chance to start downloading the image. In Firefox, however, it downloads the image and THEN changes the source. It seems to me that Firefox is firing the jQuery ready event later than it should. As far as I know, it should be executed before the standard onload event and before anything has started loading. If it helps, I'm testing it with a good number of images on screen (81). Am I doing something wrong?

    Read the article

  • make java plugin use the browser certificates

    - by Shalom938
    We have a Java applet that communicates with a Spring application running on Tomcat, using Spring's HTTP invoker. We want to secure the applet using SSL with client authentication. We have a JSP page for login; after a successful login the applet loads. The JSP page is secured with SSL. When the applet loads, the HTTP invoker inside the applet does a second handshake, apparently not related to the browser handshake. OK, I don't mind that, but I want the Java plugin to use the browser certificates and client certificates, and it does not: I have to load the client certificate into the Java plugin as well, using the Java Control Panel, and if my server's certificate is self-signed then I also have to load the server certificate into the Java Control Panel in addition to the browser. Another thing: when the applet starts loading, the Java plugin pops up a dialog asking for the client keystore password, which I would like to avoid. So to conclude: I would like the Java plugin to use the browser's trusted certificates and client certificates, and to avoid the keystore password dialog that pops up. I have Googled for two days and can't find any clue about how to accomplish this. I would appreciate any help. I'm using JDK 1.6.0u23 and Firefox 3.6.13.

    Read the article

  • Modify url in browser using javascript?

    - by user246114
    Hi, is it possible to change the URL in the user's browser without actually loading a page, using JavaScript? I don't think it is (it could lead to unwanted behavior), but I'm in a situation where this would be convenient: I have a web app which displays reports generated by users. The layout roughly looks like: ----------------------------------------------------------- Column 1 | Column 2 ----------------------------------------------------------- Report A | Report B | Currently selected report contents here. Report C | Right now the user would be looking at a URL like www.mysite.com/user123 to see the above page. When the user clicks the report names in column 1, I load the contents of that report in column 2 using ajax. This is convenient for the user, but the URL in their browser remains unchanged. The users want to copy the URL for a report to share with friends, so I suppose I could provide a button to generate a URL for them, but it would be more convenient for them to have it already as the URL in their browser, something like www.mysite.com/user123/reportb. The alternative is not to load the contents of the report in column 2 using ajax, but rather do a full page refresh. This would at least put a linkable URL in the user's URL bar, but it's not as convenient as using ajax. Thanks

    Read the article

  • Inconsistent canvas drawing in Android browser

    - by user2943466
    In putting together a small canvas app I've stumbled across a weird behavior that only seems to occur in the default browser on Android. When drawing to a canvas that has globalCompositeOperation set to 'destination-out' to act as the 'eraser' tool, the Android browser sometimes acts as expected and sometimes does not update the pixels in the canvas at all. The setup: context.clearRect(0,0, canvas.width, canvas.height); context.drawImage(img, 0, 0, canvas.width, canvas.height); context.globalCompositeOperation = 'destination-out'; Draw a circle to erase pixels from the canvas: context.fillStyle = '#FFFFFF'; context.beginPath(); context.arc(x,y,25,0,TWO_PI,true); context.fill(); context.closePath(); A small demo to illustrate the issue can be seen here: http://gumbojuice.com/files/source-out/ and the JavaScript is here: http://gumbojuice.com/files/source-out/js/main.js This has been tested in multiple desktop and mobile browsers and behaves as expected. In the Android native browser, after refreshing the page it sometimes works and sometimes nothing happens. I've seen other hacks that move the canvas by a pixel in order to force a redraw, but this is not an ideal solution. Thanks all.

    Read the article

  • How do browser cookie domains work?

    - by Vilx-
    Due to weird domain/subdomain cookie issues that I'm getting, I'd like to know how browsers handle cookies. If they do it in different ways, it would also be nice to know the differences. In other words - when a browser receives a cookie, that cookie MAY have a domain and a path attached to it. Or not, in which case the browser probably substitutes some defaults for them. Question 1: what are they? Later, when the browser is about to make a request, it checks its cookies and filters out the ones it should send for that request. It does so by matching them against the request's path and domain. Question 2: what are the matching rules? Added: The reason I'm asking this is because I'm interested in some edge cases. Like: Will a cookie for .example.com be available for www.example.com? Will a cookie for .example.com be available for example.com? Will a cookie for example.com be available for www.example.com? Will a cookie for example.com be available for anotherexample.com? Will www.example.com be able to set a cookie for example.com? Will www.example.com be able to set a cookie for www2.example.com? Will www.example.com be able to set a cookie for .com? Etc. Added 2: Also, could someone suggest how I should set a cookie so that: It can be set by either www.example.com or example.com; It is accessible by both www.example.com and example.com.
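
    For the "Added 2" part, the usual answer is to set the cookie with a Domain attribute of .example.com: a cookie scoped to the parent domain is sent to the bare domain and to every subdomain, and either host may set it. A minimal PHP sketch:

        <?php
        // Set from either example.com or www.example.com; readable by both.
        // RFC 6265 treats the leading dot as optional, but older browsers expected it.
        setcookie(
            'prefs',            // name
            'some-value',       // value
            time() + 86400,     // expires in one day
            '/',                // path: the whole site
            '.example.com'      // domain: example.com and all of its subdomains
        );

    A cookie without an explicit Domain attribute, by contrast, is host-only in modern browsers: set by www.example.com, it will not be sent to example.com.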

    Read the article

  • Basic Ajax Cache Issue

    - by michaelespinosa
    I have a single page for which I occasionally need to asynchronously check the server to see whether the status of the page is current (basically, Live or Offline). You will see I have a function with a var live that is set when the page initially loads. I then do an ajax request to the server to retrieve whether the status of live is true or false. I compare the initial live variable with the newly returned data JSON object. If they're the same I do nothing, but if they're different I apply some CSS classes. I run it repeatedly with setTimeout (is there a better way to do this?). My problem: data.live doesn't change from its initial value on later runs, even when it has changed in the db. I know my MySQL is working because it returns the right value on the initial load. It seems like a caching issue. Any help is greatly appreciated. function checkLive() { var live = <?=$result["live"]?>; $.ajax({ type: 'get', url: '/live/live.php', dataType: 'json', success: function(data) { console.log('checking for updates... current:' + data.live); if (data.live == live) { return; } else { var elems = $('div.player_meta, object, h3.offline_message'); if (data.live == '1') { elems.removeClass('offline').addClass('live'); } else { elems.addClass('live').addClass('offline'); } } } }); setTimeout(function() { checkLive() } ,15000); } checkLive();
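
    The usual culprit here is the browser (or an intermediate proxy) caching the GET response from live.php, so every poll after the first returns the stale body. Two common fixes: pass cache: false in the $.ajax options so jQuery appends a cache-busting timestamp, and/or send no-cache headers from the PHP side. A sketch of the server-side part, assuming live.php looks roughly like this (the get_live_flag_from_db() helper is hypothetical):

        <?php
        // live/live.php - make sure the status response is never served from a cache
        header('Content-Type: application/json');
        header('Cache-Control: no-cache, no-store, must-revalidate');
        header('Pragma: no-cache');   // for older HTTP/1.0 caches
        header('Expires: 0');

        $live = get_live_flag_from_db();   // hypothetical lookup against the real schema
        echo json_encode(array('live' => $live));

    Either measure alone is usually enough; doing both covers stubborn proxies as well as the browser cache.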

    Read the article

  • How do you debug Silverlight applications with Chrome AND hit breakpoints?

    - by cplotts
    I am using Visual Studio 2010 to create a Silverlight 4 application. I set a breakpoint in my code-behind, start the debug session from Visual Studio, and unfortunately my breakpoint never gets hit. So I eventually tried setting my default browser to Internet Explorer ... and lo and behold ... my breakpoint suddenly gets hit. Is Chrome a supported browser for debugging Silverlight applications? If so, what am I missing in order to get this to work? Or is Internet Explorer the only supported browser when it comes to debugging?

    Read the article

  • Cache Refresh in Chrome

    - by gAMBOOKa
    I don't know exactly what it's called; by cache refresh I mean refreshing the page after clearing its cache. I don't want to clear the entire browser cache. I prefer Chrome's dev panel over Firebug... don't ask me why. But I can't seem to cache-refresh my pages. In FF, I know it to be Shift+Refresh. In Chrome, I've tried Ctrl+R, Ctrl+Refresh, Alt+Refresh and Shift+Refresh, but none of them work. EDIT: I got a Notable Question badge for the lamest question I've ever asked. FML.

    Read the article

  • Re-send POST request easily - what tools?

    - by Fabien
    I am looking for an easy way to re-send a POST request to the server within the browser, mainly for debugging purposes. Say you have an XHR request containing POST parameters that is to be sent to the server. After changing the script on the server side, you would like to resend the very same request to analyze the output. What tool could help? I guess it would be a browser extension. I already tried the Tamper Data extension for Firefox, which does the job since you can "Replay in browser". But for my taste it is not straightforward enough, as it takes 3-4 clicks to get the result of the request. Unfortunately, curl would not be suitable for my needs, as my application uses a session cookie.

    Read the article

  • Detect WebKit Version 525 and Below With RegEx

    - by Jay
    I'm no good at regular expressions, really! I would like to specifically detect WebKit browsers below version 525. I have a regular expression [/WebKit\/[\d.]+/.exec(navigator.appVersion)] that correctly returns WebKit/5….…; really, I'd like it to return only the version number, and if the browser isn't WebKit, return null, or better still 0. For example, if the browser is Trident, Presto or Gecko, return null, whereas if the browser is WebKit, return its version number. To clarify, I would like the regular expression to check whether navigator.appVersion contains WebKit; if it does not, return null, and if it does, return the version number. I appreciate all your help! Please let's keep this focused; let's not flirt with jQuery or the like, it's overkill in this scenario.

    Read the article

  • Is there a nice XSL stylesheet for client-side DocBook rendering?

    - by Steven Huwig
    I want the DocBook documents in my SVN repository to look nice if someone looks at them in a web browser. I've started to write a CSS stylesheet, but I think that it will have significant limitations -- particularly ones regarding hyperlinks. There is a large body of DocBook XSL stylesheets at the DocBook site, but they don't seem to be appropriate for browser rendering. I don't want to generate static documents and put them into SVN; I want them to be basically readable for other developers without much hassle. I could write my own browser-appropriate XSL stylesheet to convert DocBook to HTML, but it seems like someone else must have already done this. I just don't know where to find it.

    Read the article

  • Explanation for expires header

    - by sushil bharwani
    I have a Joomla application running on Apache. To improve site performance we have written an .htaccess file at the root of the application that sets a far-future Expires header on all the static content. As desired, the first time the files load fresh with a 200 status code. When I click on the same link again, many of the files are served directly from the cache. I need an explanation for two things. First, when I press F5, a number of files load with a 304 status code; however, I expected them to come directly from the cache without hitting the server for a status check. Second, when I close the browser and come back to the same page, I see the same thing happening: a number of files load with a 304 status code, although I thought they would load directly from the browser cache. I understand that a 304 also serves the file from the browser cache, but I want to avoid the header round-trip to the server, as my static files won't ever change. Also, I want to add that my requests are over an https connection; does that create any issue?
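
    On the first point: a 304 on F5 is expected behaviour, because a manual refresh makes the browser revalidate even fresh cache entries with a conditional GET (If-Modified-Since / If-None-Match); the Expires header is honoured for normal navigation via links or the back button, not for explicit reloads. The https connection can also matter, since some browsers are reluctant to cache https responses to disk unless Cache-Control: public is sent. For anything Joomla serves through PHP rather than directly from Apache, the .htaccess rules won't apply, so the equivalent headers would have to be emitted from PHP; a sketch of what that might look like:

        <?php
        // Far-future caching headers for a static asset delivered through a PHP script
        // (files served directly by Apache should get these from mod_expires instead).
        $lifetime = 60 * 60 * 24 * 365;   // one year, in seconds

        header('Cache-Control: public, max-age=' . $lifetime);
        header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');
        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime(__FILE__)) . ' GMT');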

    Read the article

  • What is cached on a client machine when using https?

    - by TroyP
    I have an application that works on https for everybody and on http for all but two users. The two users get a JavaScript error when trying to "edit" a page while on http but can edit the page on https. The problem occurs in both IE6 and FF3.6 for one of these users. Others have no problem in any browser. I have used Charles Proxy to look at the server responses: no request is made to https when on http, and all browser requests return successfully. I have cleared all caches known to me on the clients (browser, JVM). Are http and https caches stored in different locations on the clients' computers? Could a cached encrypted file be being read on the unencrypted port?

    Read the article

  • javascript open window references

    - by duckofrubber
    Hi, I'm having some issues understanding how to reference new browser windows after opening them. As an example, if I created 3 new windows from a main one (index.html): var one = window.open( 'one.html', 'one',"top=10,left=10,width=100,height=100,location=no,menubar=no,scrollbars=no,status=no,toolbar=no,resizable=no"); var two = window.open( 'two.html', 'two',"top=100,left=10,width=100,height=100,location=no,menubar=no,scrollbars=no,status=no,toolbar=no,resizable=no"); var three = window.open( 'three.html', 'three',"top=200,left=10,width=100,height=100,location=no,menubar=no,scrollbars=no,status=no,toolbar=no,resizable=no"); two.focus(); How could I programmatically focus on (or just refer to) browser "three" if browser "two" is currently in focus?

    Read the article

  • Generate image with Drupal imagecache before using imagecache_create_path & getimagesize

    - by ozke
    Hi guys, I'm using imagecache_create_path() and getimagesize() to get the path of an imagecache-generated image and its dimensions. However, if it's the first time we access the page, that image doesn't exist yet, and imagecache_create_path() doesn't generate it either. Here's the code: // we get the image path from a preset (always returns the path, even if the file doesn't exist) $small_image_path = imagecache_create_path('gallery_image_small', $image["filepath"]); // I get the image dimensions (only if the file exists already) $data_small = list($width, $height, $type, $image_attributes) = @getimagesize($small_image_path); Is there any API method to get the path AND generate the file? In other words, can I generate the image (using a preset) from PHP without showing it in the browser? Thank you in advance
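
    If I remember the Drupal 6 imagecache API correctly, imagecache_create_path() only computes the destination path; the file itself is normally built the first time the /imagecache/... URL is requested. Generation can be forced from code with imagecache_preset_by_name() and imagecache_build_derivative() - treat the sketch below as an assumption to verify against the installed imagecache version:

        <?php
        // Hypothetical helper: return the derivative path, generating the file if needed.
        function mymodule_cached_image_path($preset_name, $filepath) {
          $derivative = imagecache_create_path($preset_name, $filepath);

          if (!file_exists($derivative)) {
            $preset = imagecache_preset_by_name($preset_name);
            // Builds the derivative on disk without sending it to the browser.
            imagecache_build_derivative($preset['actions'], $filepath, $derivative);
          }
          return $derivative;
        }

        // Usage for the small gallery preset: path plus dimensions, even on first access.
        $small_image_path = mymodule_cached_image_path('gallery_image_small', $image["filepath"]);
        $data_small = @getimagesize($small_image_path);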

    Read the article

  • I need to debug my BrowserHelperObject (BHO) (in C++ with Visual Studio 2008) after a internet explo

    - by BHOdevelopper
    Hi, here is the situation: I'm developing a Browser Helper Object (BHO) in C++ with Visual Studio 2008, and I learned that memory isn't managed the same way in Debug mode as in Release mode. When I run my BHO in debug mode, Internet Explorer 8 works just fine and I get no errors at all; the browser stays alive forever. But as soon as I compile it in release mode, I get no errors, no message, nothing, but after 5 minutes I can see through Task Manager that the Internet Explorer instances are just eating memory, and then the browser stops responding every time. Please, I really need some hint on how to get feedback on what the error could be. I heard that this often happens because of memory mismanagement. I need software that just grabs a memory dump or something when iexplore crashes, to help me find the problem. Any help is appreciated; I'll be looking for responses every single day. Thank you.

    Read the article
