Search Results

Search found 42887 results on 1716 pages for 'page header'.


  • Canonical URL for a home page and trailing slashes

    - by serg
    My home page could potentially be linked as any of the following (the same page is served for all of these URLs):
        http://example.com
        http://example.com/
        http://example.com/?ref=1
        http://example.com/index.html
        http://example.com/index.html?ref=2
    I am thinking about defining a canonical URL to make sure Google doesn't consider these URLs to be different pages:
        <link rel="canonical" href="/" /> (relative)
        <link rel="canonical" href="http://example.com/" /> (trailing slash)
        <link rel="canonical" href="http://example.com" /> (no trailing slash)
    Which one should be used? I would just slap / on it, but messing with canonicals seems like scary business, so I wanted to double-check first. Is it a good idea at all to define a canonical URL for a home page?
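    For what it's worth, a minimal sketch of emitting a single absolute canonical on the home page, assuming a PHP-templated site (Google's general guidance is to use absolute URLs in rel="canonical", and the root of a host is always served at the trailing-slash path /). The example.com domain is the question's own placeholder:

        <?php
        // Sketch only: one absolute canonical for the home page, regardless of
        // whether it was reached as /, /?ref=1 or /index.html.
        $canonical = 'http://example.com/';   // placeholder domain from the question
        ?>
        <head>
            <title>Example home page</title>
            <link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />
        </head>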

    Read the article

  • Do the "Contact us" and "Privacy policy" pages affect SEO?

    - by Gkhan14
    Just like the title says, what are the effects of having a "Contact us" and a "Privacy policy" page on your site? I've read that they can build up your trust with Google; is this true? I've also read that some people say you should add a noindex tag to your "Privacy policy" page; would this be a good idea? I ask because many websites have similar privacy policies, and I don't want any duplicate-content issues. (For example, many people could be using the same WordPress privacy policy generator.) I'm wondering the same things about the "Contact us" page as well.
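    On the noindex idea mentioned above, a small sketch of how it is commonly applied (an illustration with hypothetical page names, not a recommendation from the post): a robots meta tag that keeps the boilerplate page out of the index while still letting crawlers follow its links.

        <?php
        // Sketch: noindex boilerplate pages (e.g. a generated privacy policy)
        // while still letting crawlers follow the links on them.
        $noindexPages = array('privacy-policy');                 // hypothetical list
        $currentPage  = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
        ?>
        <head>
        <?php if (in_array($currentPage, $noindexPages)): ?>
            <meta name="robots" content="noindex,follow" />
        <?php endif; ?>
        </head>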

    Read the article

  • Crawling an AJAX-based page with both a hash fragment and a meta tag

    - by Christofian
    According to Google's documentation on crawling AJAX-based web pages, if a URL contains a hash fragment (something at the end of the URL that looks like #helloworld), and there is an ! after the #, as in #!helloworld, Google will then request the URL url?_escaped_fragment_=helloworld. I currently have an AJAX-based webpage that I want Google to be able to crawl. Sometimes the page uses hash fragments, and for those situations I set up the server to return an HTML snapshot of the page using _escaped_fragment_. However, the page often loads without a hash fragment, and when that happens it still loads its content via AJAX. I couldn't find a good solution for enabling AJAX crawling on pages that sometimes have a hash fragment and sometimes don't. How can I tell Google to use _escaped_fragment_ when there is a hash fragment, and to use something else to get an HTML snapshot of the page when there isn't one?
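    A hedged sketch of one way the two cases can share a server-side path (render_snapshot() is a hypothetical helper): under the AJAX-crawling scheme, #!state maps to ?_escaped_fragment_=state, and a page with no hash fragment can opt in by declaring <meta name="fragment" content="!">, which makes Google request it with an empty _escaped_fragment_ value.

        <?php
        // Sketch: one entry point that serves an HTML snapshot whenever Google
        // sends _escaped_fragment_ (empty or not), and the normal AJAX page
        // otherwise. render_snapshot() is a hypothetical helper.
        if (isset($_GET['_escaped_fragment_'])) {
            $state = $_GET['_escaped_fragment_'];   // '' when the URL had no #! fragment
            echo render_snapshot($state === '' ? 'default' : $state);
            exit;
        }
        ?>
        <!DOCTYPE html>
        <html>
        <head>
            <!-- Opts the fragment-less page into crawling as well, so Google
                 also requests ?_escaped_fragment_= for it. -->
            <meta name="fragment" content="!">
        </head>
        <body>
            <div id="app"><!-- content loaded by AJAX --></div>
        </body>
        </html>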

    Read the article

  • Setting the Default Wiki Page in a SharePoint Wiki Library

    - by Damon Armstrong
    I’ve seen a number of blog posts about setting the default homepage in a wiki library, and most of them offer ways of accomplishing this task through PowerShell or through SharePoint Designer.  Although I have become an ever-increasing fan of PowerShell, I still prefer to stay away from it unless I’m trying to do something fairly complicated or I need a script that I can run over and over again.  If all you need to do is set the default homepage in a wiki library, there is an easier way! First, navigate to the wiki page you want to use as the default homepage.  Then click the Page tab in the ribbon.  In the Page Actions group there is a button called Make Homepage.  Click it.  A confirmation dialog appears informing you that you are about to change the homepage.  Click OK and you will have a new homepage for your wiki library.  No PowerShell required.

    Read the article

  • Links to facebook.com/company-page redirect to facebook.com

    - by Teo
    For the last two days I've been trying to find out why the link to my website's Facebook page doesn't work anymore. The link went to facebook.com/company-page, but now it redirects to facebook.com. I assume I mistakenly changed something in the Facebook developer area, but I can't remember what it was. I think I see a redirect happening in the tab, but I'm not sure, since it changes to facebook.com too fast. The original link in the footer is correct: <a href="http://facebook.com/company-page " target="_blank" class="facebook_ico"></a> Any ideas?

    Read the article

  • How to prevent useless content from loading on the page in responsive design

    - by Ícaro Leandro
    In responsive design we hide elements on the page with @media queries and display: none in the CSS. OK, but in my system, browsers that are less than 800px wide should not just hide some content, they should avoid loading it entirely. I mean: on desktops with more than 800px of screen, the page loads fully; on mobile devices, or even on desktops with less than 800px, some content should not be loaded at all. I want to make the page load faster in these browsers. The system is built in PHP with some JavaScript. Thanks...
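    One possible approach, sketched under assumptions (PHP cannot see the screen width itself, so here a small piece of JavaScript reports it in a cookie once and PHP skips the heavy includes afterwards; the partial names are hypothetical, and user-agent detection with a library such as Mobile_Detect is a common alternative):

        <?php
        // Sketch: skip expensive partials when the client reported a narrow screen.
        // The 'screen_width' cookie is set once by JavaScript on the client:
        //   document.cookie = 'screen_width=' + screen.width + '; path=/';
        $width   = isset($_COOKIE['screen_width']) ? (int) $_COOKIE['screen_width'] : 0;
        $isSmall = ($width > 0 && $width < 800);

        if (!$isSmall) {
            // Only wide screens get the heavy content generated and sent at all.
            include __DIR__ . '/partials/sidebar_widgets.php';    // hypothetical partial
            include __DIR__ . '/partials/related_gallery.php';    // hypothetical partial
        }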

    Read the article

  • Searching for a page with a very unique title doesn't find the intended page. Why?

    - by Sam
    Dear folks, a question about appearing in Google search results. A page of mine has this extremely unique page title: Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände. Now, when I search for that phrase without quotes, all kinds of irrelevant pages show up containing only one or at best two words from my unique title, even though I searched for the entire phrase. When I search for the phrase in quotes: "Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände", it finds exactly one result, which is my page. What is going on? Why doesn't the unique result show up without the quotes? Thanks; your ideas and suggestions are welcome and much appreciated.

    Read the article

  • Last-Modified/ETags: to include or not?

    - by Kae Verens
    Google's PageSpeed plugin suggests that a website should include Last-Modified and ETag headers: Specify a cache validator: "Resources that do not specify a cache validator cannot be refreshed efficiently. Specify a Last-Modified or ETag header to enable cache validation." However, Apache suggests that by not including them at all, we speed up websites by eliminating If-Modified-Since and If-None-Match requests: http://www.askapache.com/htaccess/apache-speed-last-modified.html These are in direct opposition; which should be implemented? I'm leaning towards Apache's suggestion, since when I want a file cached, I don't want it refreshed.
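    For concreteness, a sketch of what the validators do when you send them (my illustration, assuming a PHP-served asset with a hypothetical path): a conditional request can be answered with an empty 304 instead of the full body, which is the round trip the Apache-side advice aims to avoid entirely, typically by relying on long Expires/Cache-Control lifetimes instead.

        <?php
        // Sketch: answer If-Modified-Since / If-None-Match with 304 Not Modified.
        $file  = __DIR__ . '/assets/style.css';        // hypothetical cached resource
        $mtime = filemtime($file);
        $etag  = '"' . md5($file . $mtime) . '"';

        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
        header('ETag: ' . $etag);

        $since = isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
            ? strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) : 0;
        $match = isset($_SERVER['HTTP_IF_NONE_MATCH'])
            ? $_SERVER['HTTP_IF_NONE_MATCH'] : '';

        if ($match === $etag || ($since && $since >= $mtime)) {
            header('HTTP/1.1 304 Not Modified');   // empty response: cached copy is still valid
            exit;
        }

        header('Content-Type: text/css');
        readfile($file);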

    Read the article

  • Page load speed's effect on crawl rate

    - by Sam Pegler
    We've noticed a big drop in the total pages crawled per day on our site. We have no control over the crawl rate in Google Webmaster Tools, so it's possible this has been changed by Google; however, it's a fairly large site and I wouldn't have thought the crawl rate would be decreased. What we have noticed, though, is a sizeable increase in page load times, and in my mind this would be the cause. Can anyone confirm whether the crawl rate is directly correlated with page load time? It seems logical: longer page load time, fewer pages crawled. Any decent documentation on this would be appreciated; I don't normally have any input on SEO, so this is new to me.

    Read the article

  • Google's Opinion on JavaScript Page Refresh

    - by user35306
    I was wondering if anyone knows Google's view on this. My company has a homepage that features a lot of third parties, and it needs to inform customers which ones are currently online, which aren't, and which are currently busy. Because this constantly changes, we have the homepage refresh so it shows the most relevant and up-to-date content to our users. I'm not using a meta refresh element in the http-equiv parameter to do this. Instead I have this JavaScript call to refresh the page: window.setTimeout("refreshPage()", 120000); I just want to know whether people think Google might consider this a violation of the content guidelines, or, if it's not an outright violation, whether Google at least frowns on it. It doesn't redirect the user to a different page or anything, it just refreshes the page so that they can see the most relevant content.

    Read the article

  • How to add a holding page in front of a domain

    - by Jason Bradberry
    I have set up a holding page to announce a new version of a website coming soon. I wanted people to still be able to access the original site, so my approach was to place the holding page in the root folder on the server, move the original site to a subfolder, and link to it from the holding page. However, on testing this setup it appears to have hurt the site's search rankings. Is there a better approach to this? I'm a bit stumped, as I want both to share the same URL.
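    One alternative that keeps a single URL and leaves the already-indexed pages where they are (a sketch under assumptions, with hypothetical file names, not a statement of what was actually done): keep the original site at the root and inject the announcement as an include on top of the existing homepage, rather than swapping the root document for the holding page.

        <?php
        // Sketch: the original homepage stays at "/"; the announcement is just an
        // include at the top of it, so no pages move and no URLs change.
        $showAnnouncement = true;   // hypothetical flag, e.g. read from a config file
        ?>
        <!DOCTYPE html>
        <html>
        <body>
        <?php if ($showAnnouncement): ?>
            <?php include __DIR__ . '/partials/new-site-coming-soon.php'; ?>
        <?php endif; ?>
            <?php include __DIR__ . '/partials/existing-homepage-content.php'; ?>
        </body>
        </html>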

    Read the article

  • Resolving an App-Relative URL without a Page Object Reference

    - by Damon
    If you've worked with ASP.NET before then you've almost certainly seen an application-relative URL like ~/SomeFolder/SomePage.aspx.  The tilde at the beginning is a stand-in for the application path, and it can easily be resolved using the Page object's ResolveUrl method:
        string url = Page.ResolveUrl("~/SomeFolder/SomePage.aspx");
    There are times, however, when you don't have a Page object available and you need to resolve an application-relative URL.  Assuming you have an HttpContext object available, the following method will accomplish just that:
        public static string ResolveAppRelativeUrl(string url)
        {
            return url.Replace("~", System.Web.HttpContext.Current.Request.ApplicationPath);
        }
    It just replaces the tilde with the application path, which is essentially all the ResolveUrl method does.

    Read the article

  • Which prediction model for web page recommendation?

    - by Nilesh
    I am trying to implement web page recommendation, wherein registered users will be given a recommendation of which page to visit based on previous data. After an initial study I decided to cluster the data with rough sets and then move on to finding the sequential patterns using the PrefixSpan algorithm. Now I want a better prediction model in place that can predict the access frequency of pages. I have figured out a Markov model, but some more suggestions would still be valuable. Please also help me with some references for these models. Is it possible to directly predict the next page access from the result of PrefixSpan? If so, how?
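    For illustration only, a sketch of the first-order Markov idea over page visits (hypothetical training data; the rough-set clustering and PrefixSpan stages are not shown): count page-to-page transitions per session and predict the most frequent successor of the page the user is currently on.

        <?php
        // Sketch: first-order Markov chain for next-page prediction.
        // $sessions is hypothetical training data: one array of page IDs per session.
        $sessions = array(
            array('home', 'products', 'cart'),
            array('home', 'products', 'product-42'),
            array('home', 'blog'),
        );

        // Count transitions current -> next.
        $transitions = array();
        foreach ($sessions as $pages) {
            for ($i = 0; $i < count($pages) - 1; $i++) {
                $from = $pages[$i];
                $to   = $pages[$i + 1];
                if (!isset($transitions[$from][$to])) {
                    $transitions[$from][$to] = 0;
                }
                $transitions[$from][$to]++;
            }
        }

        // Predict the most frequent successor of the current page.
        function predict_next(array $transitions, $current) {
            if (empty($transitions[$current])) {
                return null;                      // no data for this page yet
            }
            arsort($transitions[$current]);       // highest count first
            return key($transitions[$current]);
        }

        echo predict_next($transitions, 'home');  // prints "products"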

    Read the article

  • Why is Google Analytics displaying the wrong landing pages?

    - by Salman
    I see all of my pages as landing pages in Google Analytics, which cannot be true, as I did not post those pages anywhere and I don't see any traffic hitting those pages directly. Also, I am using virtual page views on a few buttons, and I see those virtual pages as landing pages too. For example, /click/request-a-quote shows 35,000 views, and 35,000 is too big a number to be ignored. Even if I ignore the virtual page views, I still see a lot of pages reported as landing pages that I am 100% sure visitors (at least not so many of them) are NOT hitting directly. Any advice on how to debug this? PS: I'm using the following code:
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', '<']);
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['setLocalGifPath', '/images/_utm.gif']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview','account/phase1']);

    Read the article

  • How to create an email request form and auto-responder?

    - by mfc
    I'm building a site in CSS and I'm pretty new to any code or script other than HTML and CSS. I'm trying to create a landing page that requires an email address from visitors, and to set up an auto-responder that sends a message to that newly submitted address. This would also serve as a signup for email newsletters. I have some idea of how to create the form and have looked into it a bit, but I don't know how to make it a requirement to get past the landing page and into the actual website, or how to set up the auto-responder. Any help would be much appreciated. Or if someone knows of a source that explains how to do this in particular, that would be wonderful. I tried lynda.com, but everything is so general and I can't seem to find information on exactly how to do this, even though I know it's quite common. Thanks!
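    A minimal sketch of the two pieces, assuming plain PHP and the built-in mail() function (which needs a configured mail server); the file names, the session gate and the addresses are illustrative only, and a newsletter service or PHPMailer would be the more robust choice in practice:

        <?php
        // landing.php (sketch): require a valid email, send an auto-response,
        // then let the visitor through by setting a session flag.
        session_start();

        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            $raw   = isset($_POST['email']) ? $_POST['email'] : '';
            $email = filter_var($raw, FILTER_VALIDATE_EMAIL);
            if ($email !== false) {
                mail($email,
                     'Thanks for signing up',
                     "You're on the list; the newsletter will follow soon.",
                     'From: news@example.com');

                $_SESSION['email_captured'] = true;  // checked by the rest of the site
                header('Location: /index.php');
                exit;
            }
            $error = 'Please enter a valid email address.';
        }
        ?>
        <form method="post" action="">
            <?php if (isset($error)) echo '<p>' . htmlspecialchars($error) . '</p>'; ?>
            <input type="email" name="email" required>
            <button type="submit">Enter site</button>
        </form>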

    Read the article

  • Web Page Execution Internals

    - by octopusgrabbus
    My question is: what is the subject area that covers web page execution/loading? I am looking to purchase a book, by subject area, that covers when things execute or load in a web page, whether it's straight HTML, HTML and JavaScript, or a PHP page. Is that topic covered by a detailed HTML book, or should I expect to find information like that in a JavaScript or PHP book? I understand that PHP and Perl execute on the server and that JavaScript is client-side, and I know there is a lot of online documentation describing <html>, <head>, <body>, and so on. I'm just wondering what subject area a book would be in to cover all that; this isn't a discussion of the best book or someone's favorite book, just the subject area.

    Read the article

  • Sometimes my page can't access a PHP session variable

    - by Anusha
    I am working on an e-commerce web application which has users and permissions for them. According to their permission, for example, I store the variable $chk = 'write' or $chk = 'read' in the session, and my condition is:
        if ($chk == 'write') {
            // some function here to modify the page & its content
            // If true, then display the SAVE button to save all changes made.
        }
    But sometimes my page can't access this variable; the value of $chk is unknown, hence it's not displaying the SAVE button. It shows the button after refreshing the page or visiting again some time later. Can anyone help me solve this? Thanks in advance.
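    The usual suspect for a session value that only appears after a refresh is session_start() not running (or running after output has already been sent) on every page that reads $_SESSION. A hedged sketch of the typical pattern, reusing the asker's own $chk variable:

        <?php
        // Sketch: session_start() must run at the very top of every page that
        // reads or writes $_SESSION, before any HTML or whitespace is output.
        session_start();

        $chk = isset($_SESSION['chk']) ? $_SESSION['chk'] : 'read';   // safe default

        if ($chk === 'write') {
            // User has write permission, so show the SAVE button.
            echo '<button type="submit" name="save">SAVE</button>';
        }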

    Read the article

  • Print Problem: Page Squeezed in Half

    - by iam
    I've just managed to successfully set up my printer (Canon MX320) using the Printing app on Ubuntu 12.04. However, the one remaining problem is that each time I try to print, it only prints the file on the top half of the page: for some reason, the printer "squeezes" the whole content of each page in the file to fit into the top half of the sheet, so the proportions in the print-out are not correct vertically. This happens with every type of file I've tried to print (documents, images, web pages). I checked the Printing settings and properties and couldn't find anything related to this issue, and I've already made sure all the information is correct (paper size, source etc.). The print preview always displays correctly on screen; it's only the actual print-out that shows this problem. I also tried several different types of paper (A4, photos etc.) but the result is always the same: the printer keeps putting the content in the top half of the paper only.

    Read the article

  • Google search results page titles "hijacked" by porn

    - by rfoote
    Sorry, that title probably doesn't make much sense. Over the past couple of weeks, we've noticed that the search results from Google for some of our Drupal-powered sites are having their page titles hijacked somehow. An example would be: free streaming porn - [Actual page title]. There are other variations of the porn prefix; that's one of the more tame ones. I looked in the databases for each of these sites and the titles haven't actually been changed or anything along those lines. When you click on the result to visit the page, everything looks normal (sans the porn stuff). Would anyone be able to point me in the right direction as to the cause of this? Searching Google for the potential problem isn't much help yet. Thanks in advance!

    Read the article

  • Why do Blogger pageview stats and AdSense pageviews differ?

    - by HTML Developer
    I run many blogs for online earnings, but for one blog the pageview counts differ: Blogger reports Total Pageviews 90,085, while Google AdSense reports Total Pageviews 19,347. Why are they different? Is the number reduced for earnings purposes? My Google AdSense code:
        <script type="text/javascript"><!--
        google_ad_width=336;
        google_ad_height=280;
        google_ad_format="336x280_as";
        google_ad_type="text_image";
        google_ad_host_channel="0001+S0011+L0007";
        google_color_border="CCCCCC";
        google_color_bg="FFFFFF";
        google_color_link="000000";
        google_color_url="336699";
        google_color_text="000000";
        //--></script>

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:
    About us / About MyCompany / MyCompany
        - About / About us: description of the company, its mission, and its vision.
        - History: summary of milestones achieved by the company.
        - The team / Management / Board of directors: depending on the size of the company there may be one or more pages describing the people involved, by position.
        - Awards: list of awards received by the company, if any.
        - In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.
    Media
        - Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image on their computer.
        - Logos: company logo in different colors and formats that websites/blogs posting about the company can use for illustration purposes.
        - Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.
    Misc
        - Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
        - Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
        - Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page for the portal restricted to those who already are.
        - FAQ: list of common questions and answers to guide users and reduce the number of support requests.
    Follow us / Community
        - Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.
    Legal
        - Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
        - Privacy / Privacy Statement: explanation of how the company deals with users' personal data and what users can do about it (request information to be deleted...).
        - Cookies: page that is starting to appear on more and more websites due to new regulation (notably in the EU) imposing more transparency and control for users regarding cookies (e.g. the BBC cookie page).
    Any input is welcome. PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • How to approach a multi-page form with just one save option

    - by Dano007
    The screen shot shows the Magento product upload page. The left nav allows you to switch to different options for the product; basically, each option in the left nav appears as a different page. However, when you save and close, it saves all the updates made on each page. Using Foundation 4, HTML, CSS and JS, what would be the best approach to replicating something similar? Say I want 3 pages and one save button. Using http://foundation.zurb.com/docs/components/section.html#panel2 and having the save button at the top, at the form level, seems a possible option.

    Read the article

  • Which tags to use for good SEO on the page

    - by Aaditi Sharma
    I have an event page with the following items: the event name; the venue name(s) (in some cases up to 5 or more venues); the event info (genre(s), language, type(s)); the date(s) on which the event takes place; and the event description. Since the event name is unique and present in the title, I am assigning <H1> to it. However, there are multiple venue names, and the same venue may be repeated across the page, along with dates. (Each) event info is used a single time on the page. The dates are presented in a styled manner using multiple spans; however, I am going to use a title attribute on them. The event description is in a <p> tag. So my question is: which heading tags should I use for a good semantic description and SEO? Also, for the title on the dates, which format should I keep the date in (dd/mm/yyyy)?
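    One possible outline, sketched as a PHP template with hypothetical variable names (common practice rather than a definitive SEO ruling): keep the unique event name as the only <h1>, put the repeated structure (venues) under <h2>/<h3>, and use <time datetime="..."> with an ISO yyyy-mm-dd value so the displayed date format can stay whatever reads best.

        <?php /* Sketch of one event page outline; all variable names are hypothetical. */ ?>
        <h1><?php echo htmlspecialchars($eventName); ?></h1>
        <p class="event-info"><?php echo htmlspecialchars("$genre / $language / $type"); ?></p>

        <h2>Venues</h2>
        <?php foreach ($venues as $venue): ?>
            <h3><?php echo htmlspecialchars($venue['name']); ?></h3>
            <p>
                <!-- Machine-readable ISO date; the visible text keeps the styled spans. -->
                <time datetime="<?php echo $venue['date']; /* e.g. 2013-06-21 */ ?>">
                    <?php echo htmlspecialchars($venue['display_date']); ?>
                </time>
            </p>
        <?php endforeach; ?>

        <h2>About the event</h2>
        <p><?php echo $eventDescription; ?></p>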

    Read the article

  • Dynamically change page content based on URL parameter?

    - by volume one
    The title of my question seems simple, but here is an example of what I want to do: http://www.mayoclinic.com/health/infant-jaundice/DS00107 What happens on that page is that whenever you click a link to go to a section (e.g. "Symptoms") in the article on "Infant Jaundice", it adds a URL parameter like this: http://www.mayoclinic.com/health/infant-jaundice/DS00107/DSECTION=symptoms As the DSECTION parameter changes, you get different content on the same page, DS00107. The content changes, as well as the <meta> keywords. Can someone please tell me how this is achieved? I was thinking it was an if/else situation programmed into the page itself to display different content depending on the URL parameter. Any help or suggestions are very much appreciated, and my thanks to you for reading my question.
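    The guess at the end is essentially how a server-side page would do it. A sketch of that pattern in PHP (the Mayo Clinic site's real implementation is not known; the data and parameter handling here are purely illustrative): read the parameter, look up the matching section, and emit both the body and the matching meta keywords from the same template.

        <?php
        // Sketch: one page, different content and <meta> keywords per URL parameter,
        // e.g. article.php?DSECTION=symptoms. The section data is hypothetical.
        $sections = array(
            'overview' => array('keywords' => 'infant jaundice, overview',
                                'body'     => 'General information about the condition...'),
            'symptoms' => array('keywords' => 'infant jaundice, symptoms, yellow skin',
                                'body'     => 'Yellowing of the skin and eyes...'),
        );

        $key     = isset($_GET['DSECTION']) ? strtolower($_GET['DSECTION']) : 'overview';
        $section = isset($sections[$key]) ? $sections[$key] : $sections['overview'];
        ?>
        <head>
            <meta name="keywords" content="<?php echo htmlspecialchars($section['keywords']); ?>">
        </head>
        <body>
            <div id="article"><?php echo $section['body']; ?></div>
        </body>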

    Read the article

  • Java Team page: new members, new responsibilities, and a new look

    The Java Team page has had a makeover! As a reminder, the purpose of this page is to present the editorial team of the Java section and its sub-sections. Each of these people contributes greatly to the growth of the Java section by writing tutorials, maintaining FAQs, creating tools, and so on. Their work is essential and is among the most popular with readers. As for the changes made to this Team page, note some important changes in who is responsible for which sections. Finally, the Java section is recruiting volunteers: moderators for the f...

    Read the article
