Search Results

Search found 38010 results on 1521 pages for 'page curl'.

  • Using Minified Page Specific JS [migrated]

    - by Mike C
    I've been working on a rather large-scale project that makes use of a number of different pages, each with some very specific JavaScript. To lessen load times, I plan to minify it all into one file before deploying. The problem is this: how should I avoid launching page-specific JS on pages that don't require it? So far my best solution has been to wrap each page in an additional container:

        <div id='some_page'> ...everything else... </div>

    and I extended jQuery so I can do something like this:

        // If this element exists when the DOM is ready, execute the function
        $('#some_page').ready(function() { ... });

    Which, while kind of cool, just rubs me the wrong way.

    Read the article

  • How to figure out the recent PageRank of websites or any particular page (homepage)

    - by rajesh.magar
    This question comes up because Google's recently published algorithm changes have affected my website traffic, and I've been wondering whether my homepage PageRank has also dropped from 6 to 4 (it might have; I am not sure). I am not using any paid SEO tools like SEOmoz or Majestic SEO, so it's quite difficult for me to confirm whether the PageRank has really been affected or not. Can anyone please point me to a good resource, tactic, or trick for answering this question? Thanks!

    Read the article

  • Tracking a single page on another domain in Google Analytics

    - by Ross
    I have access to edit a 'mini-site' hosted on our organisation's parent site. I'd like to track this page using Google Analytics; however, I don't have access to the front page, so I can't verify the domain as mine. Using the tracking code for our main site works, but I don't want this data to be confused with similarly named pages on our site (for example, our mini-site is at /radio, and if our main site also had a /radio, the two would be counted as the same). Has anyone been in this situation before? I'd like to just redirect visitors from our mini-site to our main site, seeing as it ranks higher in Google, but I've been told to maintain a separate site with our main features.

    Read the article

  • Joining and compressing all javascript files together - good idea?

    - by Tomáš Zato
    Currently, I avoid loading any unnecessary scripts on individual pages of my site. I have a class that remembers all JavaScript files that were requested during PHP processing and adds them to the HTML. I was just thinking that I could merge the current set of files, save the result in a special directory, and let the browser download just one big file. Since the number of possible combinations is not very high, I would end up with about 10 combined files for different pages. I've never seen that on any site. What are the reasons not to do it? I need very fast page loads.
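
    A minimal sketch of one way to do the merging, assuming a hypothetical helper and cache directory (the names below are illustrative, not from the question): key each bundle by its file list, build it once, and serve the cached copy afterwards.

        <?php
        // Combine the JS files requested during processing into one cached
        // bundle per combination. $files and $cacheDir are hypothetical names.
        function combined_js_url(array $files, $cacheDir = 'cache/js') {
            $key    = md5(implode('|', $files));      // one bundle per combination
            $bundle = $cacheDir . '/' . $key . '.js';
            if (!file_exists($bundle)) {
                $out = '';
                foreach ($files as $file) {
                    // The trailing ';' guards against files that omit their final semicolon.
                    $out .= file_get_contents($file) . ";\n";
                }
                file_put_contents($bundle, $out, LOCK_EX);
            }
            return '/' . $bundle;
        }
        // In the template: echo '<script src="' . combined_js_url($requestedScripts) . '"></script>';

    With far-future cache headers on the bundle, each visitor pays the download cost once per combination, which is why this is a common technique; most asset pipelines do exactly this, usually adding minification before the concatenation.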

    Read the article

  • The Qt tools page has been updated: the best tools and libraries at your disposal

    Hello, the Qt section is being updated these days. After the FAQ, it is the Tools page's turn. What does it list? For example, all the IDEs designed from the start to work with Qt, but also other libraries, based on Qt, that extend its functionality. You will find what you are looking for on this updated page: the best tools and libraries for Qt. What do you think of these libraries? Are some of them truly obsolete and no longer worth listing? Or, on the contrary...

    Read the article

  • WGet a Page that Requires Logging in

    - by Synetech inc.
    I'm trying to figure out a way to use wget or a similar tool so that I can schedule a web page to be downloaded regularly, as a sort of updating log. The problem is that the page requires that I be logged in; otherwise I get a different, generic page. Further, the page does not take login information as GET parameters in the URL: it uses POST to log in on the login page, and cookies to save the login information that's read by the regular page. I'm currently using GNU Wget 1.10.2 for Windows. I've tried using wget's cookie functionality but have had mixed results, usually skewing towards it not working. Can anyone please advise on a way to accomplish this? Thanks a lot.
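
    A minimal sketch of the usual two-step approach, assuming hypothetical form field names (user, pass) and a hypothetical login URL; the real field names must be read from the login form's HTML:

        wget --save-cookies cookies.txt --keep-session-cookies --post-data "user=USERNAME&pass=PASSWORD" http://example.com/login
        wget --load-cookies cookies.txt http://example.com/members/logged-in-page

    The --keep-session-cookies flag is often the missing piece: without it, wget discards session (non-persistent) cookies when writing the cookie file, which would explain cookie handling that only works intermittently.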

    Read the article

  • Why is my Google endpoint always the same?

    - by joetsuihk
    It is always https://www.google.com/accounts/o8/ud. I have WordPress OpenID working OK, so I think it is just the discovery phase that has some problems.

        <?php
        $ch = curl_init();
        $url = 'https://www.google.com/accounts/o8/id';
        $url = $url.'?';
        $url = $url.'openid.mode=checkid_setup';
        $url = $url.'&openid.ns=http://specs.openid.net/auth/2.0';
        $url = $url.'&openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select';
        $url = $url.'&openid.identity=http://specs.openid.net/auth/2.0/identifier_select';
        $url = $url.'&openid.return_to='.site_url().'/user/openid/login_callback';
        $url = $url.'&openid.realm=http://www.example.com/';
        // set url
        curl_setopt($ch, CURLOPT_URL, $url);
        // return the transfer as a string
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array("Accept: */*"));
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        // $xdr contains the output string
        $xdr = curl_exec($ch);
        if (!$xdr) {
            die(curl_error($ch));
        }
        // close curl resource to free up system resources
        curl_close($ch);
        $xml = new SimpleXMLElement($xdr);
        $url = $xml->XRD->Service->URI;
        $request = $connection->begin($url);

    $request is always null...
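
    One likely culprit, sketched under the assumption that the response is a standard XRDS document: its XRD/Service/URI elements live in the XRD namespace, so a bare $xml->XRD->Service->URI returns nothing in SimpleXML. A namespace-aware lookup:

        $xml = new SimpleXMLElement($xdr);
        $xml->registerXPathNamespace('xrd', 'xri://$xrd*($v*2.0)');   // standard XRD 2.0 namespace
        $uris = $xml->xpath('//xrd:Service/xrd:URI');
        $endpoint = isset($uris[0]) ? (string) $uris[0] : null;

    Note also that https://www.google.com/accounts/o8/ud is the endpoint Google's discovery document actually advertises for every user, so identical results at this step are expected; the checkid_setup parameters belong in the authentication request sent to that endpoint, not in the discovery fetch of /o8/id.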

    Read the article

  • Async trigger for an update panel refreshes entire page when triggering too much in too short a time

    - by Matt
    I have a search button tied to an update panel as a trigger, like so:

        <asp:Panel ID="CRM_Search" runat="server">
            <p>Search:&nbsp;<asp:TextBox ID="CRM_Search_Box" CssClass="CRM_Search_Box" runat="server"></asp:TextBox>
            <asp:Button ID="CRM_Search_Button" CssClass="CRM_Search_Button" runat="server" Text="Search" OnClick="SearchLeads" /></p>
        </asp:Panel>
        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="CRM_Search_Button" />
            </Triggers>
            <ContentTemplate>
                /* Content Here */
            </ContentTemplate>
        </asp:UpdatePanel>

    In my JavaScript I use jQuery to grab the search box and tie its keyup to make the search button click:

        $($(".CRM_Search_Box")[0]).keyup(function () {
            $($(".CRM_Search_Button")[0]).click();
        });

    This works perfectly, except when I start typing too fast. As soon as I type too fast (my guess is any faster than the data actually returns), the entire page refreshes (doing a postback?) instead of just the update panel. I've also found that instead of typing, if I just click the button really fast it starts doing the same thing. Is there any way to prevent it from doing this? Possibly prevent second requests until the first has completed? If I'm not on the right track, does anyone have any other ideas? Thanks, Matt

    Read the article

  • Fixing a multi-threaded pycurl crash.

    - by Rook
    If I run pycurl in a single thread everything works great. If I run pycurl in 2 threads, Python will access-violate. The first thing I did was report the problem to pycurl, but the project died about 3 years ago, so I'm not holding my breath. My (hackish) solution is to build a second version of pycurl called "pycurl_thread" which will only be used by the second thread. I downloaded the pycurl module from SourceForge and made a total of 4 line changes. But Python is still crashing. My guess is that even though this is a module with a different name (import pycurl_thread), it's still sharing memory with the original module (import pycurl). How should I solve this problem?

    Changes in pycurl.c:

        initpycurl(void)
    to
        initpycurl_thread(void)
    and
        m = Py_InitModule3("pycurl", curl_methods, module_doc);
    to
        m = Py_InitModule3("pycurl_thread", curl_methods, module_doc);

    Changes in setup.py:

        PACKAGE = "pycurl"
        PY_PACKAGE = "curl"
    to
        PACKAGE = "pycurl_thread"
        PY_PACKAGE = "curl_thread"

    Here is the seg fault I'm getting. It is happening within the C function do_curl_perform().

        *** longjmp causes uninitialized stack frame ***: python2.7 terminated
        ======= Backtrace: =========
        /lib/libc.so.6(__fortify_fail+0x37)[0x7f209421b537]
        /lib/libc.so.6(+0xff4c9)[0x7f209421b4c9]
        /lib/libc.so.6(__longjmp_chk+0x33)[0x7f209421b433]
        /usr/lib/libcurl.so.4(+0xe3a5)[0x7f20931da3a5]
        /lib/libpthread.so.0(+0xfb40)[0x7f209532eb40]
        /lib/libc.so.6(__poll+0x53)[0x7f20941f6203]
        /usr/lib/libcurl.so.4(Curl_socket_ready+0x116)[0x7f2093208876]
        /usr/lib/libcurl.so.4(+0x2faec)[0x7f20931fbaec]
        /usr/local/lib/python2.7/dist-packages/pycurl.so(+0x892b)[0x7f209342c92b]
        python2.7(PyEval_EvalFrameEx+0x58a1)[0x4adf81]
        python2.7(PyEval_EvalCodeEx+0x891)[0x4af7c1]
        python2.7(PyEval_EvalFrameEx+0x538b)[0x4ada6b]
        python2.7(PyEval_EvalFrameEx+0x65f9)[0x4aecd9]

    Read the article

  • Facebook API - delete status

    - by Simon R
    In PHP, I'm using cURL to send a DELETE to the Facebook Graph API, and yet I'm getting the following error:

        {"error":{"type":"GraphMethodException","message":"Unsupported delete request."}}

    The code I'm using is:

        $ch = curl_init("https://graph.facebook.com/" . $status_id . "");
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_TIMEOUT, 120);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $query);
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "DELETE");
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_CAINFO, NULL);
        curl_setopt($ch, CURLOPT_CAPATH, NULL);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
        $result = curl_exec($ch);
        echo $result;

    $query contains the access token.
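
    A hedged sketch of an alternative: the Graph API of this era also accepted deletes as a POST carrying method=delete, which sidesteps combining CURLOPT_POST with CURLOPT_CUSTOMREQUEST (setting both on one handle is a plausible source of the malformed request). $access_token below is a hypothetical name for the token that $query carries:

        $ch = curl_init("https://graph.facebook.com/" . $status_id);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'access_token' => $access_token, // must belong to the app/user that created the post
            'method'       => 'delete',
        )));
        $result = curl_exec($ch);
        curl_close($ch);

    Note that the Graph API only allows deleting statuses that were published through the same application.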

    Read the article

  • Unable to HTTP PUT with libcurl to django-piston

    - by Jesse Beder
    I'm trying to PUT data using libcurl to mimic the command

        curl -u test:test -X PUT --data-binary @data.yaml "http://127.0.0.1:8000/foo/"

    which works correctly. My options look like:

        curl_easy_setopt(handle, CURLOPT_USERPWD, "test:test");
        curl_easy_setopt(handle, CURLOPT_URL, "http://127.0.0.1:8000/foo/");
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1);
        curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
        curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_data);
        curl_easy_setopt(handle, CURLOPT_READDATA, &yaml);
        curl_easy_setopt(handle, CURLOPT_INFILESIZE, yaml.size());
        curl_easy_perform(handle);

    I believe the read_data function works correctly, but if you ask, I'll post that code. I'm using Django with django-piston, and my update function is never called! (It is called when I use the command-line version above.) libcurl's output is:

        * About to connect() to 127.0.0.1 port 8000 (#0)
        *   Trying 127.0.0.1... * connected
        * Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
        * Server auth using Basic with user 'test'
        > PUT /foo/ HTTP/1.1
        Authorization: Basic dGVzdDp0ZXN0
        Host: 127.0.0.1:8000
        Accept: */*
        Content-Length: 244
        Expect: 100-continue

        * Done waiting for 100-continue
        ** this is where my read_data handler confirms: read 244 bytes **
        * HTTP 1.0, assume close after body
        < HTTP/1.0 400 BAD REQUEST
        < Date: Thu, 13 May 2010 08:22:52 GMT
        < Server: WSGIServer/0.1 Python/2.5.1
        < Vary: Authorization
        < Content-Type: text/plain
        <
        Bad Request
        * Closing connection #0

    Read the article

  • Facebook PHP API cURL auto-post to a page's wall (not profile wall)

    - by Ian
    I need to be able to post to the wall of my page. I have granted offline permissions and I got it to post to my profile wall, but I need it to post to my page's wall. Does anyone know how to do this, and where my code needs changing? Thanks.

        <?php
        session_start();
        $fb_page_id = 106502962712016;
        $fb_access_token = '121247121254761|588e45312b074a0ec3dd62c39-1727154049|L0VGSJsCBrsSj5H4w1LwobRGeRc';
        $url = 'https://graph.facebook.com/'.$fb_page_id.'/feed';
        $attachment = array(
            'access_token' => $fb_access_token,
            'message' => 'message text',
            'name' => 'name text',
            'link' => 'http://domain.com/',
            'description' => 'Description Text',
            'picture' => 'http://domain.com/logo.jpg',
        );
        // set the target url
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $attachment);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $go = curl_exec($ch);
        curl_close($ch);
        ?>
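
    A hedged sketch of the step that is usually missing: posting as the page requires the page's own access token, which (with the manage_pages permission) can be fetched from the user's /me/accounts connection and then used as access_token in the feed POST above. $user_access_token is a hypothetical name for the token the user authorized:

        // Fetch the list of pages the user administers, with one token per page.
        $ch = curl_init('https://graph.facebook.com/me/accounts?access_token=' . $user_access_token);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $accounts = json_decode(curl_exec($ch), true);
        curl_close($ch);

        $page_access_token = null;
        foreach ($accounts['data'] as $account) {
            if ($account['id'] == $fb_page_id) {
                $page_access_token = $account['access_token']; // substitute for $fb_access_token
            }
        }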

    Read the article

  • curl_multi_exec stops if one URL is a 404; how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one URL it connects to doesn't work, so a few questions:

    1: Why does it stop? That doesn't make sense to me.
    2: How can I make it continue?

    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();
        while ($resultSet = mysql_fetch_array($SQL)) {
            // load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            // Only load it for five seconds (long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }
        // Create a status variable so we know when exec is done.
        $running = null;
        // execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh, $running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);
        // Loop to remove (close) the regular handles.
        foreach ($handles as $ch) {
            // Remove the current handle.
            curl_multi_remove_handle($mh, $ch);
        }
        // Close the multi handle
        curl_multi_close($mh);
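
    For reference, a hedged sketch of the more conventional multi loop: it waits on the sockets instead of sleeping, and since a 404 is just a completed transfer as far as curl_multi is concerned, a loop written this way keeps running until every handle is done:

        $running = null;
        do {
            $status = curl_multi_exec($mh, $running);
        } while ($status === CURLM_CALL_MULTI_PERFORM);

        while ($running > 0 && $status === CURLM_OK) {
            // Block until any handle has activity (up to 1 second), then drive exec again.
            if (curl_multi_select($mh, 1.0) !== -1) {
                do {
                    $status = curl_multi_exec($mh, $running);
                } while ($status === CURLM_CALL_MULTI_PERFORM);
            }
        }

    If the loop still exits early, checking curl_error() on each easy handle after removal should show whether a transfer failed for a reason other than the HTTP status, since a 404 alone does not abort a transfer.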

    Read the article

  • libcurl (C API) READFUNCTION for HTTP PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library. I am having two problems with a PUT message; I am just trying to send some small content like "hello" via PUT.

    My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return 0 indicating I have finished the content (on OS X). When I return something > 0 this succeeds much faster (< 1 sec). When I run this on my Linux machine (Ubuntu 10.04), the blocking call appears to NEVER return when I return 0; if I change the behavior to return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server.

    My read function is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream) {
            if (stream) {
                postdata *ud = (postdata *)stream;
                if (ud->bytes_remaining) {
                    if (ud->body_size > size * nmemb) {
                        memcpy(ptr, ud->data + ud->bytes_written, size * nmemb);
                        ud->bytes_written += size * nmemb;
                        ud->bytes_remaining = ud->body_size - size * nmemb;
                        return size * nmemb;
                    } else {
                        memcpy(ptr, ud->data + ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        return 0;
                    }
                }
            }
            return 0;
        }

    Read the article

  • Refresh page in browser without resubmitting form

    - by Michael
    I'm an ASP.NET developer, and I usually find myself leaving the webpage that I'm working on open in my browser (Chrome is my browser of choice, but this question is relevant for any browser). My workflow typically goes like this: I write code, I rebuild my project in Visual Studio, and then I flip back to my browser with Alt-Tab and hit F5 to refresh the page. This is fine and dandy if a form hasn't been submitted since the page was opened. But if I've been clicking around on ASP.NET form controls, the page has posted form data a number of times, so hitting F5 causes the browser to (sensibly) pop up a confirmation message, e.g., "Confirm Form Resubmission: The page that you're looking for used information that you entered...". Sometimes I do want to resubmit the form, but more often than not, I just want to start over with the page (rather than resubmit form data). The way I usually get around this is to simply add some query string data to the URL so that the browser sees it as a fresh page request, e.g., page.aspx becomes page.aspx? (or vice versa). My question is: is there a better way to quickly request a fresh version of a webpage (and not submit form data) in any of the major browsers? It seems like a no-brainer to me for web development, but maybe I'm missing something. What I'd love to see is something like the last item in this list:

        F5:      refresh page
        Ctrl-F5: refresh page (and force cache refresh)
        Alt-F5:  request fresh copy of the page without resubmitting the form

    Read the article

  • Android Google cloud messaging - not certain what parameters I should put when creating the push notification

    - by Genadinik
    I am working on a PHP script to send the notification to the GCM server, and I am working from this example:

        public function send_notification($registatoin_ids, $message) {
            // include config
            include_once './config.php';

            // Set POST variables
            $url = 'https://android.googleapis.com/gcm/send';
            $fields = array(
                'registration_ids' => $registatoin_ids,
                'data' => $message,
            );
            $headers = array(
                'Authorization: key=' . GOOGLE_API_KEY,
                'Content-Type: application/json'
            );

            // Open connection
            $ch = curl_init();

            // Set the url, number of POST vars, POST data
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

            // Disabling SSL Certificate support temporarily
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
            curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($fields));

            // Execute post
            $result = curl_exec($ch);
            if ($result === FALSE) {
                die('Curl failed: ' . curl_error($ch));
            }

            // Close connection
            curl_close($ch);
            echo $result;
        }

    But I am not certain what the values should be for the variables CURLOPT_POSTFIELDS, CURLOPT_SSL_VERIFYPEER, CURLOPT_RETURNTRANSFER, CURLOPT_HOST, and CURLOPT_URL. Would anyone happen to know what the values for these should be? Thank you!
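
    For what it's worth, a hedged reading of those options as used above: CURLOPT_URL is the GCM endpoint already in $url, CURLOPT_POSTFIELDS is the JSON-encoded $fields array, CURLOPT_RETURNTRANSFER set to true makes curl_exec() return the response instead of printing it, and CURLOPT_SSL_VERIFYPEER should be true outside of local testing. CURLOPT_HOST is not an option PHP's cURL defines; the host comes from the URL. A usage sketch with hypothetical values (registration IDs are the tokens devices obtain when they register with GCM):

        // Hypothetical caller; assumes this runs where send_notification() is in scope.
        $registration_ids = array('APA91b...device-token-1', 'APA91b...device-token-2');
        $message = array('title' => 'Hello', 'body' => 'Test notification');
        send_notification($registration_ids, $message);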

    Read the article

  • ecommerce - use server-side code for hidden values in an HTML form

    - by bsarmi
    I'm trying to learn how to implement a donation form on a website using VirtualMerchant. The HTML code from their developer manual goes like this:

        <form action="https://www.myvirtualmerchant.com/VirtualMerchant/process.do" method="POST">
            Your Total: $5.00 <br/>
            <input type="hidden" name="ssl_amount" value="5.00"> <br/>
            <input type="hidden" name="ssl_merchant_id" value="my_virtualmerchant_ID">
            <input type="hidden" name="ssl_pin" value="my_PIN">
            <input type="hidden" name="ssl_transaction_type" value="ccsale">
            <input type="hidden" name="ssl_show_form" value="false">
            Credit Card Number: <input type="text" name="ssl_card_number"> <br/>
            Expiration Date (MMYY): <input type="text" name="ssl_exp_date" size="4"> <br/> <br/>
            <input type="submit" value="Continue">
        </form>

    I have that in an HTML file and it works fine, but they suggest that the merchant data (the input type="hidden" values) should be in server-side code. I was looking at cURL, but it's all very new to me, and I spent a couple of hours trying to find a guide or some sample code on how to accomplish that. Any suggestions or help are greatly appreciated. Thanks!
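
    A hedged sketch of the server-side version: the visitor's form posts only the card fields to your own script (a hypothetical process_donation.php), which adds the merchant credentials and forwards everything to the gateway with cURL, so the credentials never appear in the page source. Field names and the gateway URL are taken from the manual excerpt above:

        <?php
        // process_donation.php -- receives only ssl_card_number and ssl_exp_date from the visitor.
        $fields = array(
            'ssl_merchant_id'      => 'my_virtualmerchant_ID', // kept server-side
            'ssl_pin'              => 'my_PIN',                // kept server-side
            'ssl_transaction_type' => 'ccsale',
            'ssl_show_form'        => 'false',
            'ssl_amount'           => '5.00',
            'ssl_card_number'      => $_POST['ssl_card_number'],
            'ssl_exp_date'         => $_POST['ssl_exp_date'],
        );
        $ch = curl_init('https://www.myvirtualmerchant.com/VirtualMerchant/process.do');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);   // gateway's reply; inspect for approval/decline
        curl_close($ch);
        ?>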

    Read the article

  • Loading a remote page into the DOM with JavaScript

    - by scoobydoo
    I am trying to write a web widget which will allow users to display customized information (from my website) in their own web page. The mechanism I want to use (for creating the web widget) is JavaScript. So basically, I want to be able to write some JavaScript code like this (this is what the end user copies into their HTML page to get my widget displayed in their page):

        <script type="text/javascript">
        /* javascript here to fetch page from remote url and insert into DOM */
        </script>

    I have two questions:

    1. How do I write JavaScript code to fetch the page from the remote URL? Ideally this will be PLAIN JavaScript (i.e., not using jQuery etc., since I don't want to force the user to get third-party scripts like jQuery, which may conflict with other scripts on their page).

    2. The page I am fetching contains inline JavaScript, which gets executed in a body.onLoad event, as well as other functions which are used in response to user actions. My questions are: i) will the body.onLoad event be triggered for the retrieved document? ii) If the retrieved page is dumped directly into the DOM, then the document will contain two <body> sections, which is no longer valid (X)HTML; however, I need the body.onLoad event to be triggered for the page to be set up correctly, and I also need the other functions in the retrieved page for it to respond to user interaction.

    Any suggestions/tips on how I can solve these problems?

    Read the article

  • How to open multiple socket connections and do callbacks in PHP

    - by Click Upvote
    I'm writing some code which processes a queue of items. The way it works is this:

    1. Get the next item flagged as needing to be processed from the MySQL database.
    2. Request some info from a Google API using cURL, and wait until the info is returned.
    3. Do the remainder of the processing based on the info returned.
    4. Flag the item as processed in the DB, and move on to the next item.

    The problem is at step 2: Google sometimes takes 10-15 seconds to return the requested info, and during this time my script has to remain halted and wait. I'm wondering if I could change the code to do the following instead:

    1. Get the next 5 items to be processed as usual.
    2. Request info for items 1-5 from Google, one after the other.
    3. When the info for item 1 is returned, a 'callback' should be done which calls a function, or otherwise invokes some code, which then does the remainder of the processing on items 1-5.
    4. The script then starts over until all pending items in the DB are marked processed.

    How can something like this be achieved?
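
    A hedged sketch of the usual PHP approach, using curl_multi to run the five Google requests in parallel and treating curl_multi_info_read as the 'callback' point; $items and process_item() are hypothetical stand-ins for the queue rows and the remainder of the processing:

        $mh = curl_multi_init();
        $handles = array();
        foreach ($items as $id => $url) {                 // the 5 pending items
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_multi_add_handle($mh, $ch);
            $handles[(int) $ch] = $id;                    // map handle back to item
        }
        $running = null;
        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh, 1.0);                  // wait for activity instead of spinning
            // Handle each transfer as soon as it completes.
            while ($info = curl_multi_info_read($mh)) {
                $ch = $info['handle'];
                process_item($handles[(int) $ch], curl_multi_getcontent($ch));
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
        } while ($running > 0);
        curl_multi_close($mh);

    This keeps the Google round-trips overlapping while step 3 runs for whichever item finishes first.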

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the web server. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: what is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes.

    Akamai's response: the reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them.

    So I'm here for some fact checking. Is packet size the end of this reasoning for the 860-byte limit? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?

    Read the article
