Search Results

Search found 4781 results on 192 pages for 'seo audit'.

Page 129 of 192

  • How Do Search Engine Bots Crawl Forums?

    - by Waleed Eissa
    If I have a forum site with a large number of threads, will the search engine bot crawl the whole site every time? Say I have over 1,000,000 threads on my site; will they all get crawled every time the bot visits, or how does it work? I want my website to be indexed, but I don't want the bot to kill my website! In other words, I don't want the bot to keep crawling the old threads again and again on every visit. Also, what about the pages crawled before? Will the bot request them every time it crawls my website to make sure they are still on the site? I'm asking because I only link to the latest threads, i.e. there's a page that contains a list of all the latest threads, but I don't link to the older threads; they have to be explicitly requested by URL, e.g. http://www.mysite.com/showthread.aspx?threadid=7. Will this work to stop the bot from bringing my site down and consuming all my bandwidth? P.S. The site is still under development, but I want to know now so I can design the site so that search engine bots don't bring it down. Thanks
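
    A crawler does not normally refetch every page on every visit; it schedules revisits per URL based on how often a page appears to change, and it does periodically re-request old pages to confirm they still exist. Hiding old threads from the link structure mostly hurts their indexing rather than protecting the server. A gentler approach is a sitemap with accurate <lastmod> dates plus a crawl-rate hint; a minimal robots.txt sketch (note Google ignores Crawl-delay and takes its rate from Webmaster Tools instead):

        User-agent: *
        Crawl-delay: 10
        Sitemap: http://www.mysite.com/sitemap.xml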


  • 301 redirect vs parking

    - by Pat
    I have several domain names registered, each a slight variant of the others, e.g. fastcar.com, fast-car.com, fastcar.co.uk, fast-car.co.uk, etc. I don't wish to be penalized for duplicate content or spammy links by any of the major search engines. Should I park them all directly on the main domain I wish to promote, 301 redirect them to the main domain, or not use them at all? Thanks
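
    The usual recommendation is a site-wide 301 from every variant to the one domain being promoted: a 301 consolidates link equity, while plain parking can leave several domains serving identical pages. A sketch, assuming Apache and that all the variants resolve to the same server (using the question's example names):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^fastcar\.com$ [NC]
        RewriteRule ^(.*)$ http://fastcar.com/$1 [R=301,L]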


  • Website Sitemaps and <priority>: does it work?

    - by Mike Gleason jr Couturier
    Hi, my "Privacy Policy" page is seen as more important by Google than other, genuinely more important pages on my website. I'm currently writing a script to generate a sitemap; should I bother with the priority? How do you effectively assign priorities to pages? I consider one of my pages important, but it has less content than another page that matters less to my eyes... maybe Googlebot will see it the other way around. If my sense of importance differs from Google's, will I get penalized in the ranking for a particular page? Thank you for sharing your black art with us :P
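
    Per the sitemaps.org protocol, <priority> is a 0.0-1.0 hint about how important a page is relative to other pages on the same site; it does not affect ranking against other sites, and search engines are free to ignore it, so differing from Google's own judgment carries no penalty. A fragment of the kind of output the generator script might emit (hypothetical URLs):

        <url>
          <loc>http://www.example.com/best-sellers</loc>
          <priority>0.9</priority>
        </url>
        <url>
          <loc>http://www.example.com/privacy-policy</loc>
          <priority>0.1</priority>
        </url>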


  • Will rel=canonical break site: queries?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this: http://ourproduct.com/documentation/version/pageid where "version" is the version number the documentation applies to, and "pageid" is a unique string that identifies the page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged between versions 3.0 and 4.0 of our product, it'd be reachable by two different URLs: http://ourproduct.com/documentation/3.0/configuration-best-practices http://ourproduct.com/documentation/4.0/configuration-best-practices This URL scheme allows us to scope Google search results to documentation for a particular product version, like this: configuration site:ourproduct.com/documentation/4.0 But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results; we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL to show when multiple versions are being searched. (Users who do oddball things like searching two versions but not all of them are a corner case, so we don't care which version(s) show up in that case; the primary use-cases we care about are searching one version or searching all versions.) But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site that uses rel=canonical across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
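
    For reference, the tag the 3.0 page would carry in its <head>:

        <link rel="canonical" href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />

    One caution: rel=canonical is a strong hint, and once Google honors it the non-canonical URL is typically dropped from the index, so a search scoped to site:ourproduct.com/documentation/3.0 may stop returning that page at all.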


  • How to return a proper 404 to Google while providing user-friendly content?

    - by Marek
    I am bouncing between posting this here and on Superuser; please excuse me if you feel this does not belong here. I am observing the behavior described here - Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle non-existent content; to cite an answer to the linked question: [google is requesting random urls to] see if your site correctly handles non-existent files (by returning a 404 response header) We have a custom page for non-existent content - a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may hurt the site with Google: they may not interpret the user-friendly page as a 404 Not Found, and may think we are trying to fake something and serve duplicate content. How should I proceed to ensure that Google does not think the site is bogus, while still providing a user-friendly message to users who click dead links by accident?
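
    The standard fix is to keep the friendly page but send it with a 404 status code instead of 200, so users see helpful content while crawlers record "not found". A sketch in PHP (the question does not say what the site runs on, and the filename is a placeholder):

        <?php
        // Send a real 404 status, then render the friendly page.
        header("HTTP/1.0 404 Not Found");
        include "friendly-404.html"; // placeholder for the styled page
        exit;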


  • Is there any way of making JSON data readable by a Google spider?

    - by leeand00
    Is it possible to make JSON data readable by a Google spider? Say, for instance, that I have a JSON feed that contains the data for an e-commerce site, and this JSON data is used to populate a human-readable page in the user's browser. (I.e. the translation from JSON data to human-displayed page is done inside the user's browser; not my choice, just what I've been given to work with - it's an old legacy CGI application, not an actual server-side scripting language.) My concern is that the Google spiders will not be able to pick up and directly link to the item in question when a user clicks on it in Google; users would be presented with an index page full of all the items, rather than being linked directly to the item they clicked on. Is there any way of "informing" the Google spider in the JSON that it should feed the user a different link?
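
    Crawlers of this era do not execute JavaScript or read JSON feeds, so the usual workaround is progressive enhancement: have the CGI application emit real <a href> links in the initial HTML, then let the script intercept clicks and render from the JSON. A sketch with made-up URLs and product names:

        <!-- Present in the served HTML so the spider can follow and index
             each item page; JavaScript enhances these links at runtime. -->
        <ul id="products">
          <li><a href="/item.cgi?id=101">Blue Widget</a></li>
          <li><a href="/item.cgi?id=102">Red Widget</a></li>
        </ul>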


  • Friendly URLs in categories

    - by ntan
    Hi to all, I am trying to use friendly URLs for my categories. Example database:

        cat_id | parent_id | name | url
           1   |     0     | cat1 | cat1
           2   |     1     | cat2 | cat2

    My approach is to pass the parameter cat with the url value, for example show.php?cat=cat1, and in .htaccess rewrite it to /cat1. BUT what about when I want to access cat2? I want to rewrite it as cat1/cat2, so the parameter is show.php?cat=cat1/cat2, and then parse the value to make sure that cat2 belongs to cat1. And so on. I am not using MVC, so I have to do it on my own. If any other solution is better, please advise or suggest some reading. Thanks in advance.
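
    One way to sketch this, assuming Apache and the table above (the table name categories is made up): rewrite everything that is not a real file or directory to show.php, then walk the path segments in PHP, checking that each slug is a child of the previous one.

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)$ show.php?cat=$1 [L,QSA]

        <?php
        // show.php -- validate cat1/cat2/... against the hierarchy.
        $parts  = explode('/', trim($_GET['cat'], '/'));
        $parent = 0;
        foreach ($parts as $slug) {
            $slug = mysql_real_escape_string($slug);
            $res  = mysql_query("SELECT cat_id FROM categories
                                 WHERE url = '$slug' AND parent_id = $parent");
            $row  = mysql_fetch_assoc($res);
            if (!$row) { header("HTTP/1.0 404 Not Found"); exit; }
            $parent = (int)$row['cat_id'];
        }
        // $parent is now the id of the deepest, verified category.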


  • Tool to Verify Site URLs/SiteMap?

    - by LockeCJ
    I'm moving a site from one e-commerce software package to another, and I've created URL Rewriter rules to do 301 redirects from the old URLs to the new ones. I've tested them with a small sample of URLs, but I'm looking for a tool that will let me test as many of the URLs as possible. Does anyone know of a tool that I can feed a list of URLs (or a sitemap.xml)? The tool would attempt to retrieve each URL and then report the status code for each. The result should be a list of URLs with their status codes, something like this:

        www.site.com/oldurlformat1/  301 Moved Permanently
        www.site.com/newurlformat1/  200 OK
        www.site.com/oldurlformat2/  301 Moved Permanently
        www.site.com/newurlformat2/  200 OK

    I can almost do this with wget, but getting the summary/report at the end is where I'm stuck.
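
    A small shell loop over curl does exactly this (a sketch; -w "%{http_code}" prints the status code, and omitting -L means the 301 itself is reported rather than followed):

        while read url; do
          code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
          echo "$url $code"
        done < urls.txt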


  • I want to combine my www and non-www and keep the link juice from both.

    - by John Ray
    My website shows up for some keywords under the www version and for some under the non-www version. Seaquake shows more links to the non-www version; it is a PR2 either way. I would like to combine the link juice of the two versions into the non-www version. Does anyone know the best way to combine the two and keep the link juice of both? Is it as simple as a 301 redirect, and if so, does the 301 need to be handled in any specific way?
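
    Yes, a 301 is the standard mechanism: redirect every www URL to its non-www equivalent and the engines consolidate the two over time (Google Webmaster Tools also offers a "preferred domain" setting). A sketch, assuming Apache:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]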


  • Strange routing

    - by astropanic
    How can I set up my Rails app to respond to URLs such as these?

        http://mydomain.com/white-halogene-lamp
        http://mydomain.com/children-lamps
        http://mydomain.com/contact-form

    The first one should go to my products controller and show the product with this name, the second to my categories controller to show the category with this name, and the third to my sites controller to show the site with this title. All three models (product, category, site) have a to_seo method that yields the URLs above (the part after the slash). I know it's not the RESTful way, but please don't discuss whether the approach is wrong or not; that's not the question. The question is how to accomplish this weird routing. I know we have catch-all routes, but how can I tell Rails to use different controllers for different URLs in the catch-all route? Or do you have a better idea? I would like to avoid redirects to other URLs.
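
    One way is a catch-all route placed last, pointing at a small dispatcher controller that tries each model in turn and renders the matching view, with no redirect. A sketch in Rails 2-era syntax, assuming the to_seo value is also stored in a lookup column (here called slug, an assumption):

        # config/routes.rb -- keep this as the final route
        map.connect ':slug', :controller => 'dispatch', :action => 'show'

        # app/controllers/dispatch_controller.rb -- hypothetical dispatcher
        class DispatchController < ApplicationController
          def show
            slug = params[:slug]
            if @product = Product.find_by_slug(slug)
              render :template => 'products/show'
            elsif @category = Category.find_by_slug(slug)
              render :template => 'categories/show'
            elsif @site = Site.find_by_slug(slug)
              render :template => 'sites/show'
            else
              render :file => "#{RAILS_ROOT}/public/404.html", :status => 404
            end
          end
        end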


  • Does URL Shortening affect Page Ranking?

    - by rc
    Recently there has been a lot of hype about URL shortening. I guess some URL shortening services even offer tracking stats. But doesn't adding one more level of lookup to the original URL affect page ranking in any way? Just curious to know.


  • mod_rewrite and htaccess

    - by chris
    I have set up a few rules based on other questions, but now my CSS breaks. I used to have the URL /eshop/cart.php?products_id=bla and everything worked fine, but now, with my rewritten URL /product/product-title/, the page loses its base directory. Is there an option to fix this, so I don't have to go back and put the full URL in all the img src tags and so on?
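
    Relative paths in the page resolve against the rewritten URL (/product/product-title/), not the real directory, which is why the CSS and images break. Root-relative paths (e.g. /eshop/css/style.css) fix it, or a single <base> tag in the <head>; note the latter affects every relative URL on the page, links included. A sketch with a placeholder host:

        <base href="http://www.example.com/eshop/" />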


  • Will a news ticker using overflow:hidden cause Google to see the site as spam?

    - by molipix
    In the hope of tempting Googlebot with fresh content, I've implemented a homepage news ticker which displays the 20 most recent headlines on our site. The implementation I have chosen is a <ul> with each headline being a <li>. Initially the <li> elements have no style, but JavaScript kicks in on page load and gives all but one of them a style="display:none" attribute. JavaScript then displays each of the other 19 headlines in a loop. So far so good. However, in order to prevent a visually unpleasant page load where the 20 items display and then immediately collapse, I am using overflow:hidden on the <ul> element. Anyone got a view on what Googlebot is likely to make of this? Does the fact that I'm using overflow:hidden make the content look like spam?


  • Cannot see my WordPress website in Google search

    - by ion
    Hi guys, I recently uploaded a site made with WordPress. The site URL is oakabeachvolley.gr. I have set the privacy settings of WordPress so that the site is visible to search engines. However, after almost 45 days the site is invisible on Google, even when I search using the URL itself and very specific keywords. Having made quite a few sites with WordPress, I have never seen this behavior before: sites eventually become visible to Google, sometimes even on the first day. In this case, though, the site does not show anywhere in the first 20 pages. Any help would be greatly appreciated.


  • Do you know the best site on the Net to learn XHTML 1.0 Strict, other than W3Schools.com but with better content?

    - by metal-gear-solid
    Do you know the best site on the Net to learn XHTML, other than W3Schools.com but with better and more current content? I have to link some friends who want to learn HTML. I like the "Try it" editor of W3Schools but not the content; I also need discussion of semantics: what is the element all about, what is its semantic value, and even if it's valid, should we use it or not, etc. Is there any other site focused on semantic, accessible, and valid XHTML, with good content and a "Try it" editor like W3Schools? Or should I now suggest that someone learn HTML5 directly?


  • Can a URL rewrite query the database?

    - by Liam
    I'm trying to understand how URL rewriting works. I have the following link: mysite.com/profile.php?id=23 and I want to rewrite it with the user's first and last name: mysite.com/directory/liam-gallagher From what I've read, you specify the rule for what the URL should be output as, but how do I query my table to get each user's name? Sorry if this is hard to understand; I've confused myself!
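
    The rewrite rule itself never queries the database; it only maps the pretty URL onto a query-string parameter, and the PHP page does the lookup. The usual approach is to store a unique slug such as liam-gallagher in the users table (the slug column here is an assumption) and rewrite like this:

        RewriteRule ^directory/([a-z0-9-]+)/?$ profile.php?slug=$1 [L]

        <?php
        // profile.php -- resolve the slug back to a user row.
        $slug   = mysql_real_escape_string($_GET['slug']);
        $result = mysql_query("SELECT * FROM users WHERE slug = '$slug'");
        $user   = mysql_fetch_assoc($result);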


  • Dynamic 'Twitter-style' URLs with ASP.NET

    - by Desiny
    I am looking to produce an MVC site which has complete control of the URL structure using routing. The specific requirements are:

        www.mysite.com/                    = homepage (home controller)
        www.mysite.com/common/about        = content page (common controller)
        www.mysite.com/common/contact      = content page (common controller)
        www.mysite.com/john                = twitter style user page (dynamic controller)
        www.mysite.com/sarah               = twitter style user page (dynamic controller)
        www.mysite.com/me                  = premium style user page (premium controller)
        www.mysite.com/oldpage.html        = 301 redirect to new page
        www.mysite.com/oldpage.asp?id=3333 = 301 redirect to new page

    My routes look as follows:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapRoute(
            "Common",
            "common/{action}/{id}",
            new { controller = "common", action = "Index", id = "" }
        );
        routes.MapRoute(
            "Home",
            "",
            new { controller = "Home", action = "Index", id = "" }
        );
        routes.MapRoute(
            "Dynamic",
            "{id}",
            new { controller = "dynamic", action = "Index", id = "" }
        );

    In order to handle the 301 redirect, I have a database defining the old pages and their new page URLs, and a stored procedure to handle the lookup. The code (handler) looks like this:

        public class AspxCatchHandler : IHttpHandler, IRequiresSessionState
        {
            #region IHttpHandler Members

            public bool IsReusable
            {
                get { return true; }
            }

            public void ProcessRequest(HttpContext context)
            {
                if (context.Request.Url.AbsolutePath.Contains("aspx") &&
                    !context.Request.Url.AbsolutePath.ToLower().Contains("default.aspx"))
                {
                    string strurl = context.Request.Url.PathAndQuery.ToString();
                    string chrAction = "";
                    string chrDest = "";

                    try
                    {
                        DataTable dtRedirect = SqlFactory.Execute(
                            ConfigurationManager.ConnectionStrings["emptum"].ConnectionString,
                            "spGetRedirectAction",
                            new SqlParameter[] { new SqlParameter("@chrURL", strurl) },
                            true);

                        chrAction = dtRedirect.Rows[0]["chrAction"].ToString();
                        chrDest = dtRedirect.Rows[0]["chrDest"].ToString();
                        chrDest = context.Request.Url.Host.ToString() + "/" + chrDest;
                        chrDest = "http://" + chrDest;

                        if (string.IsNullOrEmpty(strurl))
                            context.Response.Redirect("~/");
                    }
                    catch
                    {
                        chrDest = "/"; // context.Request.Url.Host.ToString();
                    }

                    context.Response.Clear();
                    context.Response.Status = "301 Moved Permanently";
                    context.Response.AddHeader("Location", chrDest);
                    context.Response.End();
                }
                else
                {
                    string originalPath = context.Request.Path;
                    HttpContext.Current.RewritePath("/", false);
                    IHttpHandler httpHandler = new MvcHttpHandler();
                    httpHandler.ProcessRequest(HttpContext.Current);
                    HttpContext.Current.RewritePath(originalPath, false);
                }
            }

            #endregion
        }

    It is very simple to look up a user, and in fact the above code does this. My problem is in the dynamic/premium part. I am trying to do the following:

    1) In the dynamic controller, look up the username.
    2) If the username is in the user list (database), show the Index ActionResult of the Dynamic controller.
    3) If the username is not found, look up the username in the premium list.
    4) If the username is found in the premium list (database), show the Index ActionResult of the Premium controller.
    5) If all else fails, jump to the 404 page (which will ask the user to sign up).

    Is this possible? Is looking up the user twice a bad idea for performance? How do I do this without redirecting?
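
    It is possible, and no redirect is needed: a single dynamic controller can do both lookups and render a different view for each case. Two indexed lookups are cheap, so performance is rarely a concern (they could also be folded into one query or stored procedure). A sketch, where _repository and its methods are hypothetical stand-ins for the data access:

        public class DynamicController : Controller
        {
            public ActionResult Index(string id)
            {
                var user = _repository.FindUser(id);
                if (user != null)
                    return View("~/Views/Dynamic/Index.aspx", user);

                var premium = _repository.FindPremiumUser(id);
                if (premium != null)
                    return View("~/Views/Premium/Index.aspx", premium);

                // Falls through to the custom 404 (sign-up) page.
                throw new HttpException(404, "User not found");
            }
        }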


  • mod_rewrite different rules for different pages

    - by Sophia Gavish
    Hi, I'm trying to understand how mod_rewrite works. I've used it before, but this week I tried to write rules for a new website and it doesn't work. I want to make a rule that turns

        www.example.com/media/?gallery=galleryname&album=albumname&pid=pictureid

    into

        www.example.com/media/galleryname/albumname/pictureid

    The rule is:

        RewriteRule ^([^/])/([^/])/([^/]*)$ /media/?gallery=$1&album=$2&pid=$3 [L]

    and here is the code below:

        Options -Indexes
        Options +FollowSymLinks
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_METHOD} !^(TRACE|TRACK|GET|POST|HEAD)$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^([^/])/([^/])/([^/]*)$ /media/?gallery=$1&album=$2&pid=$3 [L]

    I really want to know what I'm missing, because I tried some examples and it looks fine to me. Maybe the rule is wrong because /media/ is an actual folder? Thanks.
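
    One likely culprit, offered as a guess: [^/] without a quantifier matches exactly one character, so the pattern only matches three single-character segments, and it never accounts for the literal media/ prefix of the pretty URL. A corrected sketch, assuming the rules live in the site root:

        RewriteRule ^media/([^/]+)/([^/]+)/([^/]+)/?$ /media/?gallery=$1&album=$2&pid=$3 [L]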


  • MySQL 5.1 / phpMyAdmin - logging CREATE/ALTER statements

    - by pako
    Is it possible to log CREATE / ALTER statements issued on a MySQL server through phpMyAdmin? I heard that it could be done with a trigger, but I can't seem to find suitable code anywhere. I would like to log these statements to a table, preferably with the timestamp of when they were issued. Can someone provide me with a sample trigger that would enable me to accomplish this? I would like to log these statements so I can easily synchronize the changes with another MySQL server.
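
    MySQL triggers only fire on DML (INSERT/UPDATE/DELETE), not on DDL such as CREATE or ALTER, so the trigger approach will not work. In MySQL 5.1 the general query log can instead be written to a table, which captures every statement with a timestamp (it logs everything, so expect overhead and prune it regularly). A sketch:

        -- Route the general log to the mysql.general_log table.
        SET GLOBAL log_output  = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- Later, pull out just the DDL with timestamps:
        SELECT event_time, argument
        FROM mysql.general_log
        WHERE argument LIKE 'CREATE %' OR argument LIKE 'ALTER %';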


  • Generic async loading method for web page scripts?

    - by boomhauer
    The Google Analytics code moved to an async load model some time back. I've noticed that a lot of the other scripts I use on many sites are causing slow load times - specifically the addthis script and the Facebook Like button. The slow load times of these scripts are causing Googlebot to calculate my page load times as much slower than before. I'd like to know if there is a standard/generic way of causing these scripts to load async as well, or perhaps a pointer to someone who has already done this work. It seems like this would be a popular thing to do, but I've had little luck searching around.
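
    The generic version of the Google Analytics trick is to inject a script element from JavaScript, so it loads without blocking the rest of the page. A sketch with a placeholder URL:

        (function () {
          var s = document.createElement('script');
          s.type = 'text/javascript';
          s.async = true;
          s.src = 'http://example.com/widget.js'; // e.g. the addthis script
          var first = document.getElementsByTagName('script')[0];
          first.parentNode.insertBefore(s, first);
        })();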


  • How to get Google sitelinks for a website

    - by altvali
    Hi all! There are a lot of websites that look professional in Google results. Try searching for 'stackoverflow' and you'll see at the top a result with a title, a description, and a table of 8 links to Stack Overflow categories. That's what I'm interested in producing for future websites. So what must be done? Does it depend on the number of visitors? How long does it take until the results start looking like that?

