Search Results

Search found 37233 results on 1490 pages for 'page flicker'.

Page 21/1490 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Windows Phone 7 Prototype 002: Animated Page Transitions + Writeable Bitmaps

    Motion is a key part of WP7 application development. Without motion, the WP7 UI is just a bunch of text. Not nearly as exciting. To delight users, you can add some transitions between pages. The sample app includes some storyboards to animate between two pages. Other people have noted that you can just use the TransitioningContentControl from the Silverlight Toolkit. Peter Torr also had a nice animating frame control in his MIX demo code (his blog has some other great code samples for WP7 app dev). I took some of those concepts and the code from the TransitioningContentControl to make a new animating frame control. In this prototype, the frame takes a snapshot of the old content and the new content using writeable bitmaps, animates the snapshots, and then replaces them with the actual page; a rough sketch of that idea follows below. The benefit is smoother animation on pages with lots of controls. Otherwise, if you have a large panorama, it might not animate that cleanly. Like the other solutions based on the TransitioningContentControl, you can centralize all the animations in one place and not have to handle them on each individual page. Peter's code also had a nice snippet for choosing the animation based on the navigation direction, so you can have a single forward/backward animation and not have to do anything on each page. You could also probably add more advanced transitions using pixel shaders, or a default no-transition state, if you wanted a specific animation on a page where individual controls transition out differently, like some of the WP7 shell apps. Sample code: 100% guaranteed to work on my emulator.
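    A minimal sketch of the snapshot-and-swap idea described above (not the actual prototype code). It assumes a custom frame control with two hypothetical members defined in XAML: PageHost, a ContentPresenter holding the live page, and TransitionOverlay, an Image stacked on top of it.

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media.Animation;
        using System.Windows.Media.Imaging;

        public partial class AnimatingFrame : UserControl
        {
            private void BeginSnapshotTransition(UIElement oldContent, UIElement newContent)
            {
                // Freeze the outgoing page into a bitmap so the animation stays smooth
                // regardless of how many controls the page contains.
                var snapshot = new WriteableBitmap(oldContent, null);

                TransitionOverlay.Source = snapshot;   // show the snapshot on top
                TransitionOverlay.Opacity = 1.0;
                PageHost.Content = newContent;         // swap the real page in underneath

                var fade = new DoubleAnimation
                {
                    From = 1.0,
                    To = 0.0,
                    Duration = new Duration(TimeSpan.FromMilliseconds(300))
                };
                Storyboard.SetTarget(fade, TransitionOverlay);
                Storyboard.SetTargetProperty(fade, new PropertyPath("Opacity"));

                var storyboard = new Storyboard();
                storyboard.Children.Add(fade);
                storyboard.Completed += (s, e) => TransitionOverlay.Source = null; // release the bitmap
                storyboard.Begin();
            }
        }

    The same overlay could just as easily run a slide or turnstile storyboard; the key point is that only a single Image element is being animated.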

    Read the article

  • Redirecting to a dynamic page

    - by binarydev
    I have a page displaying blog posts (latest_posts.php) and another page that display single blog posts (blog.php) . I intend to link the image title in latest_posts.php so that it redirects to blog.php where it would display the particular post that was clicked. latest_posts.php: <!-- Header --> <h2 class="underline"> <span>What&#039;s new</span> <span></span> </h2> <!-- /Header --> <!-- Posts list --> <ul class="post-list post-list-1"> <?php /* Fetches Date/Time, Post Content and title */ include 'dbconnect.php'; $sql = "SELECT * FROM wp_posts"; $res = mysql_query($sql); while ( $row = mysql_fetch_array($res) ) { ?> <!-- Post #1 --> <li class="clear-fix"> <!-- Date --> <div class="post-list-date"> <div class="post-date-box"> <?php //Timestamp broken down to show accordingly $timestamp = $row['post_date']; $datetime = new DateTime($timestamp); $date = $datetime->format("d"); $month = $datetime->format("M"); ?> <h3> <?php echo $date; ?> </h3> <span> <?php echo $month; ?> </span> </div> </div> <!-- /Date --> <!-- Image + comments count --> <div class="post-list-image"> <!-- Image --> <div class="image image-overlay-url image-fancybox-url"> <a href="post.php" class="preloader-image"> <?php echo '<img src="', $row['image'], '" alt="' , $row['post_title'] , '\'s Blog Image" />'; ?> </a> </div> <!-- /Image --> </div> <!-- /Image + comments count --> <!-- Content --> <div class="post-list-content"> <div> <!-- Header --> <h4> <a href="post.php? . $row['ID'] . "> <?php echo $row['post_title']; ?> </a> </h4> <!-- /Header --> <!-- Excerpt --> <p> <?php echo $row ['post_content']; }?> </p> <!-- /Excerpt --> </div> </div> <!-- /Content --> </li> <!-- /Post #1 --> </ul> <!-- /Posts list --> <a href="blog.php" class="button-browse">Browse All Posts</a> </div> <?php require_once('include/twitter_user_timeline.php'); ?> blog.php: <?php require_once('include/header.php'); ?> <body class="blog"> <?php require_once('include/navigation_bar_blog.php'); ?> <div class="blog"> <div class="main"> <!-- Header --> <h2 class="underline"> <span>What&#039;s new</span> <span></span> </h2> <!-- /Header --> <!-- Layout 66x33 --> <div class="layout-p-66x33 clear-fix"> <!-- Left column --> <!-- <div class="column-left"> --> <!-- Posts list --> <ul class="post-list post-list-2"> <?php /* Fetches Date/Time, Post Content and title with Pagination */ include 'dbconnect.php'; //sets to default page if(empty($_GET['pn'])){ $page=1; } else { $page = $_GET['pn']; } // Index of the page $index = ($page-1)*3; $sql = "SELECT * FROM `wp_posts` ORDER BY `post_date` DESC LIMIT " . $index . 
" ,3"; $res = mysql_query($sql); //Loops through the values while ( $row = mysql_fetch_array($res) ) { ?> <!-- Post #1 --> <li class="clear-fix"> <!-- Date --> <div class="post-list-date"> <div class="post-date-box"> <?php //Timestamp broken down to show accordingly $timestamp = $row['post_date']; $datetime = new DateTime($timestamp); $date = $datetime->format("d"); $month = $datetime->format("M"); ?> <h3> <?php echo $date; ?> </h3> <span> <?php echo $month; ?> </span> </div> </div> <!-- /Date --> <!-- Image + comments count --> <div class="post-list-image"> <!-- Image --> <div class="image image-overlay-url image-fancybox-url"> <a href="post.php" class="preloader-image"> <?php echo '<img src="', $row['image'], '" alt="' , $row['post_title'] , '\'s Blog Image" />'; ?> </a> </div> <!-- /Image --> </div> <!-- /Image + comments count --> <!-- Content --> <div class="post-list-content"> <div> <?php $id = $_GET['ID']; $post = lookup_post_somehow($id); if($post) { // render post } else { echo 'blog post not found..'; } ?> <!-- Header --> <h4> <a href="post.php"> <?php echo $row['post_title']; ?> </a> </h4> <!-- /Header --> <!-- Excerpt --> <p> <?php echo $row ['post_content']; ?> </p> <!-- /Excerpt --> </div> </div> <!-- /Content --> </li> <!-- /Post #1 --> <?php } // close while loop ?> </ul> <!-- /Posts list --> <div><!-- Pagination --> <ul class="blog-pagination clear-fix"> <?php //Count the number of rows $numberofrows = mysql_query("SELECT COUNT(ID) FROM `wp_posts`"); //Do ciel() to round the result according to number of posts $postsperpage = 4; $numOfPages = ceil($numberofrows / $postsperpage); for($i=1; $i < $numOfPages; $i++) { //echos links for each page $paginationDisplay = '<li><a href="blog.php?pn=' . $i . '">' . $i . '</a></li>'; echo $paginationDisplay; } ?> <!-- <li><a href="#" class="selected">1</a></li> <li><a href="#">2</a></li> <li><a href="#">3</a></li> <li><a href="#">4</a></li> --> </ul> </div><!-- /Pagination --> <!-- /div> --> <!-- Left column --> </div> <!-- /Layout 66x33 --> </div> </div> <?php require_once('include/twitter_user_timeline.php'); ?> <?php require_once('include/footer_blog.php'); ?> How do I render?

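    The question's link markup never actually emits the ID (the PHP concatenation sits inside the href text), so blog.php has nothing to look up. A hedged sketch of one way to wire it together, reusing the table, column, and file names from the question and the same mysql_* calls; treat it as an illustration rather than drop-in code:

        <?php // latest_posts.php: pass the clicked post's ID in the query string ?>
        <h4>
          <a href="blog.php?ID=<?php echo (int) $row['ID']; ?>">
            <?php echo $row['post_title']; ?>
          </a>
        </h4>

        <?php
        // blog.php: if an ID was passed, render just that post and skip the
        // paginated listing; otherwise fall through to the existing loop.
        include 'dbconnect.php';
        if (!empty($_GET['ID'])) {
            $id  = (int) $_GET['ID'];   // cast to int so the value is safe to embed in SQL
            $res = mysql_query("SELECT * FROM wp_posts WHERE ID = $id");
            $row = mysql_fetch_array($res);
            if ($row) {
                echo '<h4>' . $row['post_title'] . '</h4>';
                echo '<p>'  . $row['post_content'] . '</p>';
            } else {
                echo 'blog post not found..';
            }
            exit;
        }
        ?>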
    Read the article

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: A Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms – just in a totally different spot. Both the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while all three links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

    Read the article

  • page rank 0 penalty

    - by mark
    I have had a wordpress blog and a www website on the same domain for about one year. Together it is about 170 pages. The page rank is still 0. I understand that page rank 0 is a penalty for duplicate content. The pages are indexed in google but still no page rank. In google webmaster tools there is no indication of any problem. I asked for reconsideration of both blog and website a month ago. Google accepted the reconsideration but it did not change anything. Other sites of similar size and similar audience earn PR 4-6. Is there something I can do in order to get a fair page rank? A coworker told me that it might be the case that a link farm is using the content and I can do nothing about it. Is there a reliable way to check for something like that? I do not like to give up so quickly; is there a chance to fix this by, for example, moving to another domain?

    Read the article

  • URL is generating a /#!/splash-page

    - by user32642
    My site for some reason is generating a shebang - /#!/splash-page on the URL. For example when I type www.modernvintage1005.com, the browser returns www.modernvintage1005.com/#!/splash-page and every subsequent page is /#!/about, /#!/contact, and so forth. There's absolutely nothing on the Google about this. There is a lot of rewrite help to eliminate .index.php from the home page, but that's it. How do I rewrite it to just say domain.com and domain.com/about.html, etc.? Here is my .htaccess file if you need to see it. # Rewrite Rule <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # compress text, html, javascript, css, xml: <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/plain AddOutputFilterByType DEFLATE text/html AddOutputFilterByType DEFLATE text/xml AddOutputFilterByType DEFLATE text/css AddOutputFilterByType DEFLATE application/xml AddOutputFilterByType DEFLATE application/xhtml+xml AddOutputFilterByType DEFLATE application/rss+xml AddOutputFilterByType DEFLATE application/javascript AddOutputFilterByType DEFLATE application/x-javascript AddType x-font/otf .otf AddType x-font/ttf .ttf AddType x-font/eot .eot AddType x-font/woff .woff AddType image/x-icon .ico AddType image/png .png </IfModule> ## EXPIRES CACHING ## <IfModule mod_expires.c> ExpiresActive On ExpiresByType image/jpg "access 1 year" ExpiresByType image/jpeg "access 1 year" ExpiresByType image/gif "access 1 year" ExpiresByType image/png "access 1 year" ExpiresByType text/css "access 1 month" ExpiresByType application/pdf "access 1 month" ExpiresByType text/x-javascript "access 1 month" ExpiresByType application/x-shockwave-flash "access 1 month" ExpiresByType image/x-icon "access 1 year" ExpiresDefault "access 2 days" </IfModule> ## EXPIRES CACHING ##

    Read the article

  • 'Buy the app' landing page implementations

    - by benwad
    My site (using Django) has an app that I'm trying to push - I currently have a piece of middleware that redirects the user to a page advertising the app if they're accessing the page on the iPhone, then setting a cookie so that the user isn't bugged by the message every time they visit the site. This works fine, however checking the page with the mobile Googlebot checker shows that the Googlebot gets stuck in the redirect (since it doesn't store cookies) and therefore won't index the proper content. So, I'm trying to think of an alternative implementation that won't hurt the site's Google ranking and won't have any other adverse effects. I've considered a couple of options: Redirect (the current solution), but don't redirect if the user agent matches the Googlebot's UA string. This would be ideal, however I'm not sure if Google like their bot being treated differently from other users, and I'm afraid the site's ranking may be somehow penalised if I go ahead with this. Use a Javascript popup instead of a redirect. This would make sure the Googlebot finds the content it needs, however I envision this approach causing compatibility issues with the myriad mobile devices/browsers out there, and may affect the page load time. How valid are these options? And is there a better option for implementing this feature out there? I've tried researching this topic but surprisingly can't find any reputable-looking blog posts that explore this topic. EDIT: I posted this on SF because it seemed unsuitable for SO, but if there's another site that would be better for this issue then I'd be happy to move the question elsewhere.
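    For reference, a rough sketch of what option 1 could look like as old-style Django middleware (the question predates the newer MIDDLEWARE setting). The landing URL, cookie name, and user-agent checks are hypothetical, and whether special-casing Googlebot is acceptable is exactly the open question above:

        from django.http import HttpResponseRedirect

        APP_LANDING_URL = '/get-the-app/'      # hypothetical
        SEEN_COOKIE = 'seen_app_landing'       # hypothetical

        class AppLandingMiddleware(object):
            def process_request(self, request):
                ua = request.META.get('HTTP_USER_AGENT', '')
                if 'Googlebot' in ua:
                    return None                # option 1: let the crawler straight through
                if 'iPhone' not in ua:
                    return None
                if request.COOKIES.get(SEEN_COOKIE) or request.path == APP_LANDING_URL:
                    return None
                response = HttpResponseRedirect(APP_LANDING_URL)
                response.set_cookie(SEEN_COOKIE, '1', max_age=60 * 60 * 24 * 30)
                return response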

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized.Whether the result of the compression will fit largely depends on the data being compressed and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per table or even on a per index basis, wouldn't it?This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields: +-----------------+--------------+------+ | Field | Type | Null | +-----------------+--------------+------+ | database_name | varchar(192) | NO | | table_name | varchar(192) | NO | | index_name | varchar(192) | NO | | compress_ops | int(11) | NO | | compress_ops_ok | int(11) | NO | | compress_time | int(11) | NO | | uncompress_ops | int(11) | NO | | uncompress_time | int(11) | NO | +-----------------+--------------+------+ similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size".So a query like SELECT database_name, table_name, index_name, compress_ops - compress_ops_ok AS failures FROM information_schema.innodb_cmp_per_index ORDER BY failures DESC; would reveal the most problematic tables and indexes that have the highest compression failure rate.From there on the way to improving performance would be to try to increase the compressed page size or change the structure of the table/indexes or the data being stored and see if it will have a positive impact on performance.
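    A hedged example of putting the view to work: create a compressed table (assuming the file-per-table and Barracuda settings that compression requires are already in place, and that per-index stats collection is switched on in your 5.6 build), then rank its indexes by failure count. The table and index names are made up:

        CREATE TABLE t1 (
            id INT PRIMARY KEY,
            payload VARCHAR(255),
            KEY idx_payload (payload)
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
          FROM information_schema.innodb_cmp_per_index
         WHERE table_name = 't1'
         ORDER BY failures DESC;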

    Read the article

  • Page URL and database organization.

    - by shurik2533
    I want a page's name to double as its URL. For example, if a page has the heading "Some Page", its address should be http://somesite/some_page/, where "some_page" is a slug the system generates automatically and uses as the page's unique identifier. The problem is that a user may later enter a name which already exists, which would cause a collision; I need a solution that stays efficient for large volumes of data. I have solved it as follows: the page identifier in the database is the page name plus a numeric suffix that defaults to zero. When a page is added, I check whether the name already exists. If it does not, the suffix is 0 and the slug is "some_page"; if it does, I look up the maximum suffix, set suffix = suffix + 1, and the slug becomes "some_page_1". For this I created a compound key on the "suffix" and "pageName" fields: Table Pages suffix|pageName |pageTitle 0 |some_page |Some Page 1 |some_page |Some Page 0 |other_page|Other Page Pages are added through a stored procedure: CREATE PROCEDURE addPage (pageNameVal VARCHAR(100), pageTitleVal VARCHAR(100)) BEGIN DECLARE v INT DEFAULT 0; SELECT MAX(suffix) FROM pages WHERE pageName = pageNameVal INTO v; IF v >= 0 THEN SET v = v + 1; ELSE SET v = 0; END IF; INSERT INTO pages (pageName, suffix, pageTitle) VALUES (pageNameVal, v, pageTitleVal); END; Is there a better solution?

    Read the article

  • Google PageSpeed, optimizing Google's own elements

    - by mowgli
    I'm trying Google's PageSpeed online service. Ironically, it's primarily highlighting Google's own services as things that need improvement on my site: 1) jQuery from Google: blocking. So I moved all JavaScript from <head> to the end of the document before </body>. That helped. 2) Linking to the external Google Font CSS (in <head>): blocking. But the font is critical to the design of the page and should load before much else. 3) Google Analytics: caching is not good (Google has set it internally to a 2-hour expiration). I don't know how to change this (this is also placed at the bottom of the page). The Google Font is highlighted as a big priority to change. How can I fix this? Where/how should I call the font?

    Read the article

  • Country selection, when country is not listed

    - by David Balažic
    While this might not 100% match the intent of this site, it was the closest match from Stackexchange sites. So, if a web site (the "entrance" page) offers a choice (a list) of countries, with the text "Chose your country", but the users country is not listed, what should he do? One example is http://www.samsung.com/countryselection.do Addition: I ask this standing in the users position. I encounter a web site and it gives me the above page. What to do? Another issue: What is "my" country? My current location? My permanent residence? The country of my citizenship? Something else?

    Read the article

  • Playing with aspx page cycle using JustMock

    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps: public enum PageStep { PreInit, Load, PreRender, UnLoad } Once doing so, I first...

    Read the article

  • How to print a web page that contains flash

    - by Richard
    I am using the chromium browser to display the following web page: http://www.primaryworksheets.co.uk/multiws/multi23.html I want to print off this maths worksheet for my son, but all I ever get out of my printer is a blank page. The web page appears to be produced using flash. I have been to the software centre and re-installed the flash plugin, but that did not help. I don't seem to have problems printing anything else. Firefox isn't any better. Can anyone tell me what else I might try? I'm using '11.04'. Thanks, Richard

    Read the article

  • .htaccess Redirect 301 in Wordpress – From Post to Page

    - by elocman
    By default, Wordpress posts are added to the RSS feed. For my website, I want to include Wordpress pages in the RSS feed as well. I know that some plugins could help me; instead, I am trying to use a 301 redirect in the .htaccess file. My question is: will this approach work fine for Google and other search engines? Here's what I did: Published a new page and then a new post with the same title, description, keywords and content (though I know that if there's a 301 redirect Google won't "read" the post but switch to the page). Added the line Redirect 301 etc. to my .htaccess file (an illustrative example is sketched below). Now my post is listed in the RSS feed, and when you click on it you're redirected to the page.
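    Purely as an illustration of the kind of rule meant by "Redirect 301 etc." (the actual paths are not shown in the post, so these slugs are hypothetical), a mod_alias redirect from the duplicate post's permalink to the page looks like this, placed above the standard WordPress rewrite block:

        # hypothetical slugs: send the duplicate post to the page
        Redirect 301 /2013/05/my-duplicate-post/ http://example.com/my-page/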

    Read the article

  • Adding Async=true to the page - no side effects noticed

    - by Michael Freidgeim
    Recently I needed to implement PageAsyncTask in a .NET 4 Web Forms application. According to http://msdn.microsoft.com/en-us/library/system.web.ui.pageasynctask.aspx: "A PageAsyncTask object must be registered to the page through the RegisterAsyncTask method. The page itself does not have to be processed asynchronously to execute asynchronous tasks. You can set the Async attribute to either true (as shown in the following code example) or false on the page directive and the asynchronous tasks will still be processed asynchronously: <%@ Page Async="true" %> When the Async attribute is set to false, the thread that executes the page will be blocked until all asynchronous tasks are complete." I was worried about side effects if I set Async=true on an existing page. The only documented restrictions I found are that @Async is not compatible with the @AspCompat and Transaction attributes (from the @ Page directive MSDN article). In other words, asynchronous pages do not work when the AspCompat attribute is set to true or the Transaction attribute is set to a value other than Disabled in the @ Page directive. From our tests we conclude that adding Async=true to the page is quite safe, even if you don't always run async tasks from the page; a minimal registration example is sketched below.
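    A minimal sketch of the registration pattern described in the MSDN quote, for a .NET 4 Web Forms code-behind; the URL and class name are hypothetical. With the task registered, ASP.NET runs it at the appropriate point in the page lifecycle whether the @ Page directive says Async="true" or not:

        using System;
        using System.Net;
        using System.Web.UI;

        public partial class Demo : Page
        {
            private WebRequest _request;

            protected void Page_Load(object sender, EventArgs e)
            {
                _request = WebRequest.Create("http://example.com/slow-service");
                RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, TimeoutCall, null));
            }

            private IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                return _request.BeginGetResponse(cb, state);
            }

            private void EndCall(IAsyncResult ar)
            {
                using (var response = _request.EndGetResponse(ar))
                {
                    // consume the response here
                }
            }

            private void TimeoutCall(IAsyncResult ar)
            {
                // the task did not complete within the page's AsyncTimeout
            }
        }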

    Read the article

  • Google webmaster tools: parameters that only apply on one page

    - by Imagine digital
    I'm trying to get my e-commerce website on google and still figuring out how it all works. Now, I have seen this feature named URL-parameters, allowing me to set different parameters that affect page content to be indexed (one can also set parameters that do not affect the page, but for me that does not apply..). The question I have about this is whether and how I should add parameters that I only have on some pages of my site. example: The homepage of my site is www.mysite.nl. no parameters at all. But when a user clicks the navigation bar, it links to www.mysite.nl/itemList.php?category=&....subCategory=.... The parameters category and subCategory define whether there is content on my itemList page and what content that is. It gets matching products out of my database based on those 2 variables. The question: How do I make sure that I apply the google URL Parameters function decently for my website?

    Read the article

  • Do the "Contact us" and "Privacy policy" pages affect SEO?

    - by Gkhan14
    Just like the title says, what are the effects of having a "Contact us" and a "Privacy policy" on your site? I've read that it could build up your trust with Google, is this true? I've also read that some people said that you should add a noindex tag to your "Privacy policy" page, would this be a good idea? I say this because many websites have similar privacy policies, and I don't want any duplicate content issues. (For example, many people could be using the same WordPress privacy policy generator). I'm wondering the same things for the "Contact us" page as well.
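    For what it's worth, if you did decide to keep the privacy policy out of the index while still letting its links be followed, the usual form of the tag (a sketch, not a recommendation either way) goes in that page's <head>:

        <meta name="robots" content="noindex, follow">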

    Read the article

  • Canonical url for a home page and trailing slashes

    - by serg
    My home page could potentially be linked as: http://example.com http://example.com/ http://example.com/?ref=1 http://example.com/index.html http://example.com/index.html?ref=2 (the same page is served for all those urls) I am thinking about defining a canonical url to make sure google doesn't consider those urls to be different pages: <link rel="canonical" href="/" /> (relative) <link rel="canonical" href="http://example.com/" /> (trailing slash) <link rel="canonical" href="http://example.com" /> (no trailing slash) Which one should be used? I would just go with "/", but messing with canonical URLs seems like scary business, so I wanted to double-check first. Is it a good idea at all to define a canonical url for a home page?

    Read the article

  • Crawling an ajax based page with both a hash fragment and a meta tag

    - by Christofian
    According to google's documentation on crawling ajax based web pages, if a url contains a hash fragment, or something at the end of an url that looks like #helloworld, and if there is an ! after the #, as in #!helloworld, google will then request the url url?_escaped_fragment_=helloworld. I currently have an ajax based webpage that I want google to be able to crawl. Sometimes, the page uses hash fragments, and for those situations I set up the server so it will return an html snapshot for that page using _escaped_fragment_. However, that webpage often does not load a hash fragment, and when that happens the webpage still loads content using ajax. I couldn't find a good solution to enable ajax crawling for pages that sometimes have a hash fragment and sometimes don't. How can I tell google to use _escaped_fragment_ when there is a hash fragment, and to use something else to get an html snapshot of a page when there isn't a hash fragment?
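    To make the two cases concrete, here is a hedged sketch of my understanding of Google's AJAX crawling scheme; worth verifying against the official documentation. Pages that load content via ajax without a #! in the URL can opt in with a meta tag, after which Google requests the page with an empty _escaped_fragment_ parameter, so one server-side check covers both cases. The helper name is hypothetical:

        <!-- in the <head> of ajax pages that have no hash fragment -->
        <meta name="fragment" content="!">

        <?php
        // Serve the HTML snapshot whenever _escaped_fragment_ is present: it is
        // non-empty for #! URLs and empty when the meta tag triggered the fetch.
        if (isset($_GET['_escaped_fragment_'])) {
            $fragment = $_GET['_escaped_fragment_'];
            render_html_snapshot($fragment);   // hypothetical helper that builds the snapshot
            exit;
        }
        // ...otherwise serve the normal ajax-driven page
        ?>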

    Read the article

  • Setting the Default Wiki Page in a SharePoint Wiki Library

    - by Damon Armstrong
    I’ve seen a number of blog posts about setting the default homepage in a wiki library, and most of them offer ways of accomplishing this task through PowerShell or through SharePoint designer.  Although I have become an ever increasing fan of PowerShell, I still prefer to stay away from it unless I’m trying to do something fairly complicated or I need a script that I can run over and over again.  If all you need to do is set the default homepage in a wiki library, there is an easier way! First, navigate to the wiki page you want to use as the default homepage.  Then click the Page tab in the ribbon.  In the Page Actions group there is a button called Make Homepage.  Click it.  A confirmation displays informing you that you are about to change the homepage.  Click OK and you will have a new homepage for your wiki library.  No PowerShell required.

    Read the article

  • Links to facebook.com/company-page redirect to facebook.com

    - by Teo
    For the last 2 days I've been trying to find the reason why the link to my website's Facebook page doesn't work anymore. The link went to facebook.com/company-page, but now redirects to facebook.com. I assume that I mistakenly changed something in the Facebook developer area, but I can't remember what it was. I guess I saw some redirect in the tab, but I'm not sure since it's changing too fast to facebook.com. The original link in the footer is correct: <a href="http://facebook.com/company-page " target="_blank" class="facebook_ico"></a> Any ideas?

    Read the article

  • How to prevent unnecessary content from loading on the page in Responsive Design

    - by Ícaro Leandro
    In Responsive Design, we hide elements on the page with @media queries and display: none in the CSS. OK, but in my system, for browsers narrower than 800px the layout must not merely hide some content but avoid loading it at all. In other words, on a desktop wider than 800px the page loads fully; on mobile devices, or even on a desktop narrower than 800px, some content should never be loaded. I want to make the page load faster in those browsers. The system is built in PHP and has some JavaScript; a rough sketch of one approach is shown below. Thanks...
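    A rough sketch of one way to do that on the client (the endpoint and element id are hypothetical): leave the heavy markup out of the PHP output entirely and fetch it only when the viewport is wide enough. A server-side alternative is to branch on the user agent in PHP, but the screen width is not reliably known on the server:

        // only fetch the heavy block when the viewport is at least 800px wide
        if (window.matchMedia && window.matchMedia('(min-width: 800px)').matches) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', 'heavy-content.php', true);
            xhr.onload = function () {
                if (xhr.status === 200) {
                    document.getElementById('heavy-content').innerHTML = xhr.responseText;
                }
            };
            xhr.send();
        }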

    Read the article

  • Searching for a page with a Very Unique title doesn't find the intended page... Why?

    - by Sam
    Dear folks, a question about appearing in search results in google: A page of mine has this extremely unique page title: Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände Now, when I search the phrase: Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände then all kinds of other irrelevant pages show up having only one or at best two words from my unique title, although I have searched for the entire phrase! And when I search the phrase in between quotes: "Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände" then it finds 1 result, which is my page. What is going on? Why doesn't the unique result show up without the quotes? Thanks: your ideas and suggestions are welcome and much appreciated.

    Read the article

  • How to add a holding page in front of a domain

    - by Jason Bradberry
    I have set up a holding page to announce a new version of a website coming soon. I wanted people to still be able to access the original site, so my approach was to place the holding page in the root folder on the server, and move the original site to a subfolder and link to it from the holding page. However, on testing this setup it appears to have hurt the SEO placing of the website. Is there a better approach to this? I'm a bit stumped as I want both to share the same URL.

    Read the article

  • Page load speeds effect on crawl rate

    - by Sam Pegler
    We've noticed a big drop in the total pages crawled per day on our site. We have no control over the crawl rate in Google Webmaster Tools, so it's possible this has been changed by Google. However, it's a fairly large site and I wouldn't have thought that the crawl rate would have been decreased. What we have noticed, though, is a sizeable increase in page load times, which in my mind would be the cause. Can anyone confirm whether the crawl rate is directly correlated with page load time? It seems logical: longer page load time, fewer pages crawled. Any decent documentation on this would be appreciated; I don't normally have any input on SEO, so this is new to me.

    Read the article

  • Google's Opinion on Javascript Page Refresh

    - by user35306
    I was wondering if anyone knows Google's view on this. My company has a homepage that features a lot of 3rd parties on it and it needs to inform customers which ones are currently online, which aren't, and which are currently busy. Because this constantly changes, we have the homepage refresh to show the most relevant and up-to-date content to our users. I'm not using a meta refresh element in the http-equiv parameter to do this. Instead I have this js element to refresh the page: window.setTimeout("refreshPage()", 120000); I just want to know whether people think Google might consider this a violation of the content guidelines or not. Or if it's not an outright violation, then at least if Google frowns on this or not. It doesn't redirect the user to a different page or anything, just refreshes the page so that they can see the most relevant content.

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >