Search Results

Search found 9960 results on 399 pages for 'iwork pages'.

Page 5 of 399

  • A Small Blog About Huge Pages

    - by rickramsey
    Video Interview: What Are Linux Huge Pages?, by Ed Whalen, Oracle ACE
    Blog: There's Been a Change In How Huge Pages Are Allocated, by Tanel Poder, Oracle ACE Director
    Blog: Performance Issues with Transparent Huge Pages (thank you, Bjoern Rost!)
    Web: About the Car, by Smart Ridez LLC, of Woodland Hills, California
    - Rick

    Read the article

  • How to set up an A record for GitHub Pages on NearlyFreeSpeech.net

    - by zenstealth
    I own the domain zenstealth.com and I have decided that the easiest way for me to "do" a blog is via GitHub Pages and Jekyll, which is already built into GitHub Pages. I've done that already, and for now I've set up a CNAME record so that my GitHub Pages repo zenstealth.github.com is served at blog.zenstealth.com. What I want to do now is, instead of using a sub-domain for the blog, make it use the top-level domain zenstealth.com. The GitHub Pages instructions say to set an A record to the IP 207.97.227.245. The problem is that NearlyFreeSpeech.NET (let's call it NFSN for short) already sets A records to files which are hosted directly on NFSN, and I have absolutely no idea how to override this.
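
    For reference, the records being described would look roughly like this in standard zone-file notation (a sketch only; NFSN's DNS panel is assumed to accept equivalent A and CNAME entries, and the IP is the one quoted in the question):

        ; apex domain pointed at the GitHub Pages IP quoted above
        zenstealth.com.        IN  A      207.97.227.245
        ; existing sub-domain record for the blog
        blog.zenstealth.com.   IN  CNAME  zenstealth.github.com.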

    Read the article

  • Best way of accessing data on different pages

    - by Gaz83
    I'm looking for a way to load data into properties/variables etc. and have this information accessible to all the pages of my app. I want the information to be loaded via a background thread to keep the UI thread free. Some of the pages will have various properties of their controls bound to these global properties. Here is what I tried. First, I created a static class: all pages could access the data, but they couldn't bind to it. Then I changed the static class to a singleton and used DependencyProperties: all pages could access the data and binding worked fine, but I ran into cross-threading issues when accessing it via background threads. I have read about this subject in various places but haven't really come up with the best method for my situation yet.

    Read the article

  • Prevent pagination pages from appearing higher than real content in SERPs

    - by WordPress Developer
    I have a gallery-type website that has about 20k pages and naturally uses pagination. However, sometimes /page/2 appears higher in search results than /post/201339, for example. I'd like to give emphasis to the actual content (posts, videos, whatever the site is about) and not to pages that merely list this content in a paginated manner. What is the best way to avoid this issue? Maybe a NOINDEX,FOLLOW meta tag on the paginated pages?
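
    If the noindex,follow route is taken, it would amount to something like the following on /page/2 and deeper (a sketch; the individual post pages keep their default, indexable robots settings):

        <!-- on paginated listing pages only: keep links crawlable, but drop the listing page itself from the index -->
        <meta name="robots" content="noindex, follow">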

    Read the article

  • List of events triggered on pages matching regex

    - by Cubius
    Is there a way to get the grouped list of events (such as in Top Events) which were triggered on pages matching a regular expression? I can add the Page secondary dimension in Top Events and apply the regex filter, but that way I won't get a grouped list. I can apply the filter to the Events → Pages report, but that way the events are grouped only within each page, whilst I need global grouping. Any suggestions?

    Read the article

  • Why are new pages not being indexed and old pages stay in the index?

    - by ZakGottlieb
    I currently have a site that was recently restructured, causing much of its content to be reposted, creating new URLs for each page. To avoid duplicates, all of the existing pages were added to the robots file. That said, it has now been over a week - I know Google has recrawled the site - and when I search for term X, it is still the old page that is ranking, with the new one nowhere to be seen. I'm assuming it's a cached version, but why are so many of the old pages still appearing in the index? Furthermore, all "tags" pages (it's a Q&A site, like this one) were also added to the robots file a few months ago, yet I think they are all still appearing in the index. Anyone got any ideas about why this is happening, and how I can get my new pages indexed?

    Read the article

  • Easily Create High Converting Landing Pages For SEO

    The term landing page is nothing new: those of us working in PPC, or pay-per-click, have been carefully crafting highly specific and targeted landing pages to achieve a single desired action for some time. However, some of the principles and techniques that apply to PPC landing pages can also be applied with equal effect to SEO landing pages.

    Read the article

  • Pages still show up in Google search even after being disallowed in robots.txt [duplicate]

    - by Jota Onasys
    This question already has an answer here: With Robots.txt disallow all, why was my site still getting traffic? (5 answers) Why is it that some pages still show up in Google search even though they are disallowed in robots.txt? Is the best solution here to remove the Disallow from robots.txt and just add a noindex, nofollow meta tag to the pages you want blocked? Or should I submit a request to Google directly to remove those pages?
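
    To make the noindex route concrete: the page has to remain crawlable for the tag to be seen, so the Disallow rule is dropped and the tag is added to the page itself (a sketch; the exact paths depend on the site):

        # robots.txt - no Disallow for these pages, so the crawler can fetch them and see the meta tag
        User-agent: *
        Disallow:

        <!-- in the <head> of each page that should stay out of the index -->
        <meta name="robots" content="noindex, nofollow">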

    Read the article

  • How Many Web Pages Should Be Indexed?

    Search engines are crawling websites around the clock for unique web pages and content. Google has always been at the top when it comes to indexing the deep links of any website: Google indexed 26 million pages in 1998, and in the past 10 years Google has indexed over 1 trillion pages. This gives a fair idea of how big this cyber world is.

    Read the article

  • Black and white pages not recognized by printer

    - by user46627
    I have a document which has color on about 25% of its pages. When I print it at the copy shop, the printer is technically supposed to recognize the b/w pages. However, all pages are registered as color, i.e. the pages are color-enabled pages which happen not to have any color on them (but I'm paying for the color-enabled-ness). Regrettably, the staff have to charge me for color because the printer is leased and they have to pay for color pages, so showing them that there's no color doesn't help me. What are possible sources for b/w pages showing up like that?

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will use mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g., a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, and extra pieces of dynamic content relating to other pages). In this new site process I plan to use 301 redirects for all the old URLs, pointing them to the new URLs (see the sketch below). My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings carry over?
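
    A minimal sketch of what those 301 redirects could look like in .htaccess, using the example page from the question (one rule per retired static URL, placed before the CMS's own rewrite rules):

        RewriteEngine On
        # old static page -> new CMS URL, issued as a permanent redirect
        RewriteRule ^about_us\.html$ /about-us/ [R=301,L]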

    Read the article

  • How to Create Custom Cover Pages in Microsoft Word 2010

    - by Zainul Franciscus
    A great cover page draws readers in, and if you know Word, then you are in luck, because Word gives you ready-to-use cover pages. But did you know that Word lets you create your own cover pages? Head over to the "Insert" ribbon and you'll find that Microsoft Office provides some cover pages you can use. Although a cover page normally appears as the first page, Word lets you place the cover page anywhere in the document.

    Read the article

  • Removing existing filtered pages from Google's index: noindex / 301 / canonical to non-filtered page?

    - by Noam
    I've decided to remove some of my site's pages from the Google index to focus more of the indexed pages on higher-quality pages. The pages I'm going to remove are already in the index. These removed pages are filtered pages which will continue to exist; I just don't want them in the Google index, because they add little beyond the same page without any filter selected. In Webmaster Tools I've specified "narrows" for the parameters that set these filters, but it doesn't seem to change anything in how Google handles these pages. So I'm considering three options: adding <meta name="robots" content="noindex" /> to the HTML header of these filtered pages; a 301 to the non-filtered page that contains the most similar information and will remain in the index; or a canonical tag, which I'm not sure is exactly the mainstream use case, as these aren't really the same pages. Which should I use?
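
    For comparison, the canonical option would look roughly like this on each filtered page (a sketch; the URL is an illustrative placeholder for the unfiltered listing):

        <!-- on the filtered page: point search engines at the unfiltered version -->
        <link rel="canonical" href="http://www.example.com/widgets/">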

    Read the article

  • Updating Pages after migration of website

    - by DLackey
    My web site was coded in ColdFusion and over the years has obtained a good ranking. I recently migrated the front end to a WordPress site and want to know the ideal way of notifying Google and the various search engines of the changes. For example, the home page of index.cfm is no longer valid, since it's now index.php. I've submitted an updated sitemap.xml file to Google. I'm sure my site will slip some while the search engines re-index it, but I'd like to minimize this as much as possible with the holidays coming up (my site is a service-oriented site that caters to people who travel during the holidays). Right now, the old .cfm pages are still online but are re-routed to the appropriate WordPress page (for example, about.cfm is now routed to /about/ using a cflocation tag). I'm not sure if I should pull down the .cfm pages altogether or leave them in place until the new pages are picked up by the search engines. Any advice would be helpful.
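
    One thing worth weighing here: a cflocation redirect is normally a temporary (302) redirect, whereas rankings transfer on a permanent (301) one. A minimal .htaccess sketch of 301s for the retired ColdFusion URLs, using the two pages mentioned above (assuming Apache with mod_rewrite, which WordPress already relies on):

        RewriteEngine On
        # retired ColdFusion URLs -> their new WordPress equivalents, as permanent redirects
        RewriteRule ^index\.cfm$ / [R=301,L]
        RewriteRule ^about\.cfm$ /about/ [R=301,L]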

    Read the article

  • Showing All Pages in a SharePoint Wiki Library

    - by Damon Armstrong
    Opening a SharePoint wiki takes you to the wiki homepage, which is what most users want and expect. Administrators, on the other hand, will occasionally need to see a full list of wiki pages in the wiki library. Getting to this view is really easy, but you have to know where to look. The problem is that when viewing a wiki page, SharePoint conveniently removes the Library tab from the ribbon, and the Library tab houses the controls you normally use to switch views. Many an admin has been frustrated by the fact that they cannot get to this functionality. A bit more searching, however, reveals that the Page tab in the ribbon contains a button in the Page Library group called View All Pages. As the name suggests, clicking this button displays a document-library-style view of all the pages in the wiki. It also makes the Library tab available for switching views and gives administrators access to all of the standard Library tab functionality.

    Read the article

  • Potential issues with multiple home pages

    - by Maxim Zaslavsky
    I have a site where I want to have two different home pages: a general description page for anonymous users, and a dashboard page for logged-in users. I am debating between two implementations: either both pages live at /, or the page for anonymous users is located at / and the dashboard is at /dashboard, with automatic redirection between them based on whether a given user is logged in (e.g., if you're logged in and navigate to /, you are redirected to /dashboard; see the sketch below). Is it cleaner to have both pages use the same URL or separate URLs? Also, I imagine the choice will affect the following. Caching: the anonymous page would be completely cached, while the logged-in page would not be cached at all (except for static resources); this could lead to issues with server caching, request speed, and UX (such as one version of the page being cached in a user's browser when the other version should be displayed instead). SEO: how would search engines react to such canonical URLs? Load time (due to redirects, or to the server having to reevaluate which page to display on every request)?
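
    For the second implementation, the redirect can also be done at the web-server level rather than in application code. A minimal mod_rewrite sketch, assuming (hypothetically) that the application sets a logged_in cookie for authenticated sessions:

        RewriteEngine On
        # logged-in visitors requesting the site root are sent to the dashboard;
        # 302 rather than 301, since the destination depends on session state
        RewriteCond %{HTTP_COOKIE} logged_in=1
        RewriteRule ^$ /dashboard [R=302,L]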

    Read the article

  • Redirect pages to fix crawl errors

    - by sarah
    Google is giving me crawl errors for pages that I have removed, like www.mysite.com/mypage.html. I want to redirect these pages to the new page www.mysite.com/mysite/mypage. I tried to do that with .htaccess, but instead of fixing the problem the crawl errors increased, and a new one appeared: www.mysite.com/www.mysite.com. This is my .htaccess file:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>
        # END WordPress

    Should I add the following after the rewrite rule, or should I do something else?

        RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301]
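
    For what it's worth, the usual arrangement is to place the redirect right after "RewriteEngine On" and before the WordPress catch-all, with an L flag so processing stops once it matches. A sketch based on the file above (the doubled-hostname crawl error is often a sign of a redirect target written without its http:// scheme, which Apache then treats as a path):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/
        # redirect for the removed page, handled before WordPress sees the request
        RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301,L]
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>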

    Read the article

  • Google search does not show sub-pages from my website

    - by Chang
    My website appears in Google search, but only the first page. Of course I have sub-pages linked from the first page, but the sub-pages do not show up in Google search. Not in Yahoo, not in Bing. What should I do? It has been three years and the sub-pages still do not show. (I tried searching site:mydomain.com and pressed the 'repeat the search with the omitted results included' link.) What would you suspect the reason is? My website addresses were like xxx.php?yy=zzz, etc., so I changed them to /yy/zzz using mod_rewrite (see the sketch below). I thought it might be (X)HTML standard violations, so I have now fixed those as well. I hope Google will soon have my entire website, but I am a little bit pessimistic. Do you have any thoughts?
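
    The kind of rewrite being described would look roughly like this in .htaccess (a sketch only; xxx.php and the yy parameter are the placeholder names used in the question):

        RewriteEngine On
        # pretty URL /yy/zzz is served internally by the original script xxx.php?yy=zzz
        RewriteRule ^([^/]+)/([^/]+)/?$ xxx.php?$1=$2 [L,QSA]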

    Read the article

  • Google releases PageSpeed Insights 2, an open-source suite of tools for analyzing and optimizing web pages

    Google releases PageSpeed Insights 2.0, an open-source suite of tools for analyzing and optimizing web pages. Google has published version 2.0 of the PageSpeed Insights tool, which brings a good number of new features and improvements. PageSpeed Insights is an open-source set of tools for analyzing the performance of web pages and optimizing them to improve their load time. The analysis tools are available as extensions for Chrome and Firefox, and also as an online service. An analysis API can also be used via JavaScript, .NET, Go, Java and several other languages. Pages and their resources a...

    Read the article

  • Can AdSense crawler view pages that require cookies?

    - by moomoochoo
    Details: I require users to agree to terms and conditions before they can view several pages on my site. Once they have agreed, a cookie is set and they can proceed to the webpage. If a user somehow manages to end up on the webpage without a cookie, they will not be able to access the page's content. My questions: Is the AdSense crawler able to set the cookie and visit these pages? If yes, how will it know to agree to the TOS? Is there some way to allow it access to the pages even if it can't use cookies?

    Read the article
