Search Results

Search found 20852 results on 835 pages for 'local seo'.


  • Sitemaps on multiple front-end servers using an HTTP handler, is that a good idea?

    - by Rihan Meij
    Question 1: We would like to generate a sitemap for our CMS site. We have multiple front-end servers and approximately a million articles. Environment: multiple MS SQL servers; multiple load-balanced front-end servers; ASP.NET on IIS 6; Windows 2003. Keeping the sitemaps (the sitemap index file and the sitemap files) physically on the front-end servers would be an operations nightmare and error-prone, so we are considering HTTP handlers instead, so that whichever server gets the request can serve the correct XML file. Question 2: If we ping Google each time we publish a new article, will that affect us negatively if it happens more than once an hour? Thanks!
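
    For reference, a minimal sketch in C# of the handler approach described above, assuming a SitemapBuilder helper (hypothetical) that builds the XML from the article database; since the XML is generated on demand, every load-balanced server answers the sitemap URL identically and no physical files are needed:

        using System.Web;

        // Registered in web.config for a URL such as sitemap.ashx.
        public class SitemapHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/xml";
                // Hypothetical helper: queries the articles and returns a
                // <urlset> or sitemap index document as a string.
                context.Response.Write(SitemapBuilder.BuildSitemapXml());
            }
        }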

  • Need to have either example.com/username or username.example.com, but how?

    - by Stefan
    Hey guys, I'm almost finished developing my large project. Currently the user profile pages live at http://example.com/profile/username/USERNAME (I'm using .htaccess to rewrite the GET data into forward slashes, so profile.php is read as just 'profile', and profile.php parses the URL to retrieve the GET data). But it would be so much better if I could have http://www.example.com/USERNAME (preferred) or http://www.USERNAME.example.com instead. Any ideas or resources? Thanks, Stefan
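
    A sketch of the .htaccess side for the preferred form, assuming profile.php can accept a username parameter: existing files and directories are excluded so stylesheets, images and other scripts keep working, and everything else is treated as a profile name:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9_-]+)/?$ profile.php?username=$1 [L,QSA]

    The http://www.USERNAME.example.com form additionally needs wildcard DNS (*.example.com) and a rule keyed on %{HTTP_HOST} rather than on the path.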

  • URL rewriting IIS6

    - by GigaPr
    Hi, I was using an HttpModule to perform extension-less URL rewriting, as explained in Approach 3 at http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx. It works perfectly in IIS 7, but I just published the solution to the live environment and it doesn't work; the reason seems to be that they are running IIS 6. Does anyone know a quick way to make it work in IIS 6 without having to change all the code? Thanks
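
    For reference, the usual IIS 6 obstacle: extension-less requests never reach ASP.NET unless a wildcard application map to aspnet_isapi.dll is added (website Properties, Home Directory, Configuration, Insert), after which the existing HttpModule works unchanged. A sketch of the registration with hypothetical names; on IIS 6 only the classic <httpModules> section applies:

        <!-- web.config: the same module registration that already works on IIS 7 -->
        <system.web>
          <httpModules>
            <add name="UrlRewriter" type="MyApp.UrlRewriterModule, MyApp" />
          </httpModules>
        </system.web>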

  • Algorithm to calculate a page importance based on its views / comments

    - by stacker
    I need an algorithm that lets me determine an appropriate <priority> field for my website's sitemap based on each page's view and comment counts. For those of you unfamiliar with sitemaps, the priority field signals the importance of a page relative to the others on the same website. It must be a decimal number between 0 and 1. The algorithm will accept two parameters, viewCount and commentCount, and will return the priority value. For example:

        GetPriority(100000, 100000); // a lot of views/comments! should return a value very close to 1, e.g. 0.995
        GetPriority(3, 2);           // not many users are interested in this page, so it might return e.g. 0.082
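
    One possible shape for such a function, sketched in C#: weight comments above views (a comment is a stronger signal of interest than a view), then squash the combined score with a logarithm so a handful of huge pages doesn't push everything else toward 0. The 10x weight and the calibration constant are assumptions to tune against real traffic:

        using System;

        public static class SitemapPriority
        {
            public static double GetPriority(int viewCount, int commentCount)
            {
                // Comments count for more than views; 10x is an assumed weight.
                double score = viewCount + 10.0 * commentCount;

                // Assumed calibration constant: roughly the score of the busiest
                // page on the site. Pages at or above it score ~1.0.
                const double maxScore = 2000000.0;

                double priority = Math.Log(1 + score) / Math.Log(1 + maxScore);
                return Math.Round(Math.Min(1.0, priority), 3);
            }
        }

    With these constants, GetPriority(100000, 100000) returns about 0.96 and GetPriority(3, 2) about 0.22; raising maxScore or lowering the comment weight pushes quiet pages further down the scale.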

  • Do I need to use HTTP redirect code 302 or 307?

    - by Iain Fraser
    I am working on a CMS that uses a search facility to output a list of content items. You can use this facility as a search engine, but in this instance I am using it to output the current month's media releases from an archive of all media releases. The default parameters for these "Data Lists", as they are called, don't allow you to specify "current month" or "current year" for publication date, only "last x days" or "from dateA to dateB". The search facility will accept query-string parameters, though, so I intend to code around it like this:
    1. Page loads.
    2. How many days into the current month are we?
    3. Do we have a query string that asks for a list covering this many days?
    4. If no, redirect the client back to this page with the appropriate query string included.
    5. If yes, allow the CMS to process the query.
    Now here's the rub. Suppose the spider from your favourite search engine comes along and tries to index your main media releases page. If you were to use a 301 redirect to the default query page, the spider would assume the main page was defunct and add the query page to its index instead of the main page. Now, I see that 302 and 307 indicate that a page has moved temporarily; if I use one of these, are spiders likely to put the main page into their index like I want them to? Thanks very much in advance for your help and advice. Kind regards, Iain
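
    A sketch of step 4, assuming the CMS pages run on ASP.NET (the page name and the daysIntoMonth variable from step 2 are hypothetical): Response.Redirect emits a plain 302 Found, which is exactly the "temporary" signal wanted here, and is more widely understood by older clients than 307:

        // Step 4: no query string yet, so bounce back with one attached.
        if (Request.QueryString["days"] == null)
        {
            // Sends "302 Found" plus a Location header, then ends the request.
            Response.Redirect("media-releases.aspx?days=" + daysIntoMonth, true);
        }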

  • Robots.txt syntax

    - by Sinan
    I am not an expert on robots.txt, and I have the following in one of my clients' robots.txt:

        User-agent: *
        Disallow:
        Disallow: /backup/
        Disallow: /stylesheets/
        Disallow: /admin/

    I am not sure about the second line. Does that line disallow all spiders?
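
    For reference, the distinction the question turns on, side by side: an empty Disallow value blocks nothing, while a bare slash blocks the whole site:

        User-agent: *
        Disallow:        # empty value: nothing is disallowed, everything may be crawled

        User-agent: *
        Disallow: /      # a single slash: the entire site is off limits

    So the second line in the client's file is redundant but harmless; the file allows everything except /backup/, /stylesheets/ and /admin/.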

  • Google and Mirror Websites

    - by Roberto Aloi
    Which is the best way to manage a website with one or more mirrors so that:
    - Google doesn't consider it "duplicated content"
    - the website is correctly indexed
    - no inconsistencies or duplicated information show up in Google Analytics
    - the Google webmaster guidelines are respected in general
    NOTE: I'm not sure if I should ask this question here or on ServerFault; it sits somewhere between programming and server administration. Let me know if you think ServerFault is the more appropriate place and I'll move it. Thanks.

  • How will a search engine read data from my Ajax-based webapp?

    - by Jack W-H
    OK, not entirely related to programming, so I'm sorry, but I'd like to know about this. I've got a webapp. In one column, a list of results is fetched from the database. When you click one, jQuery fetches the information associated with that result and puts it into the second column, all without a refresh, using Ajax. Is it possible for Google to still read that content? I understand it can follow links... but presumably not JavaScript actions? If that is the case, what do other Ajax-heavy websites do about search engine optimisation? Jack

  • How Can I Deal With Those Dead Links After Revamping My Web Site?

    - by skyflyer
    A couple of months ago, we revamped our web site. We adopted a totally new site structure; specifically, we merged several pages into one. Everything looks charming. However, there are now lots of dead links producing a large number of 404 errors. What can I do about this? If I leave it alone, could it bite back someday, say by eating up my PageRank? The basic option is a 301 redirect, but that seems almost impossible given the number of dead links. Is there any workaround? Thanks for your consideration!
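
    A sketch of the usual workaround, assuming the site runs on Apache: when pages were merged by section, one pattern rule can 301 thousands of old URLs at once, so there is no need to enumerate every dead link. The paths below are hypothetical:

        # A whole year of articles merged into one archive page.
        RedirectMatch 301 ^/articles/2009/.*$ /archive/2009/

        # A section that moved but kept its page names: preserve the tail.
        RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1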

  • URL Generation Technique with PHP

    - by harigm
    I have built a web portal based on the cricket concept, with a custom CMS where I can upload news for the site. Once I upload the news, the URL looks like this: http://cricandcric.com/news/news.php?id=841&An-emotional-moment:-Dhoni.html But I would like the URL to be something like this instead: http://cricandcric.com/news/An-emotional-moment:-Dhoni.html Or similar to Stackoverflow.com. Can anyone please help me with how I can build that? Do I need to rewrite the URL?
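
    Yes, a rewrite is the usual mechanism: the pretty URL is what the browser and Google see, and .htaccess maps it back onto news.php internally. A sketch, assuming the numeric id stays in the URL as its own segment (the Stack Overflow style), which spares a slug-to-id lookup:

        RewriteEngine On
        # /news/841/An-emotional-moment-Dhoni.html  ->  news/news.php?id=841
        RewriteRule ^news/([0-9]+)/[^/]+\.html$ news/news.php?id=$1 [L,QSA]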

  • How long does it take Google to update all links after a 301 redirect?

    - by romant
    I just changed the location of my blog and have put the appropriate redirects in place. Does anyone have knowledge or experience of how long it takes Google to update all the links across its index? The reason I ask: I wish to change the A record, which will eliminate the .htaccess file and thus null and void the redirect. How long must I wait before undertaking this? Thank you.

  • Multilanguage website sitemap

    - by Alex
    My site is i18n, based on the following structure:

        mydomain.com/en
        mydomain.com/en/product/blue-widgets
        mydomain.com/fr
        mydomain.com/fr/product/blue-widgets

    The site is internationalised, not localised; I don't want to geo-target specific locales, just target French- or English-speaking users. When submitting a sitemap to the search engines, should I send one sitemap with links to all the different language versions, or one separate sitemap per language? Is that even possible?
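
    Separate per-language sitemaps are possible: the sitemaps protocol defines a sitemap index file that points at any number of child sitemaps, so one file per language can sit under a single submitted index. A sketch (the child file names are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://mydomain.com/sitemap-en.xml</loc>
          </sitemap>
          <sitemap>
            <loc>http://mydomain.com/sitemap-fr.xml</loc>
          </sitemap>
        </sitemapindex>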

  • How to add a page title to the URL in ASP.NET MVC? (URL generation)

    - by Ante
    How do I dynamically create URLs/links like www.restaurant.com/restaurant/restaurant-name-without-some-characters-like-space-coma-etc/132? What keywords can I use to google articles on this topic (i.e. how to generate and handle this kind of URL inside ASP.NET MVC)? Specific questions: How do I generate the links (store slugs in the DB?)? Should I redirect or not if the slug isn't canonical? Edit: apparently they are called slugs.
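
    A sketch of the common pattern, assuming classic ASP.NET MVC with hypothetical names: generate the slug once when the record is saved and store it in the DB, route on the id (the slug is cosmetic), and permanently redirect to the canonical URL whenever the incoming slug doesn't match the stored one:

        using System.Text.RegularExpressions;

        public static class SlugHelper
        {
            // Route (Global.asax): the id drives the lookup, the slug is decoration.
            //   routes.MapRoute("Restaurant", "restaurant/{slug}/{id}",
            //       new { controller = "Restaurant", action = "Details" });

            public static string GenerateSlug(string title)
            {
                string slug = title.ToLowerInvariant().Trim();
                slug = Regex.Replace(slug, @"[^a-z0-9\s-]", "");  // strip punctuation
                slug = Regex.Replace(slug, @"[\s-]+", "-");       // whitespace -> one hyphen
                return slug.Trim('-');
            }
        }

    In the Details action, compare the request's slug with the stored one and issue a 301 to the canonical URL when they differ, so stale links consolidate instead of duplicating.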

  • How to build a category/subcategory/city/firm-name URL?

    - by kkalgidim
    I am using Ruby on Rails, and I have the models Category, Subcategory, City and Firm. The URLs should nest like this:
    - clicking a category shows its subcategories, with permalink xxx.com/category
    - clicking a subcategory shows firms and city names: xxx.com/category/subcategory
    - clicking a city name filters the firms belonging to that city: xxx.com/category/subcategory/city
    - clicking a firm name shows the firm: xxx.com/category/subcategory/city/firm-name
    Firms may have more than one subcategory. I used permalink_fu, but I could not make the nested subcategory scheme work. The category, subcategory, city and firm tables each have their own permalink field in the DB, but I don't know how to combine them dynamically: I can produce xxx.com/category, but I can't produce xxx.com/category/subcategory. How can I do that? Please help me.

  • Are AJAX sites crawlable by search engines?

    - by frankadelic
    I had always assumed that AJAX-driven content was invisible to search engines (i.e. content inserted into the DOM via XMLHttpRequest). For example, on this site the main content is loaded by the browser via an AJAX request: http://www.trustedsource.org/query/terra.cl If you view that page with JavaScript disabled, the main content area is blank. However, the Google cache shows the full content after the AJAX load: http://74.125.155.132/search?q=cache:JqcT6EVDHBoJ:www.trustedsource.org/query/terra.cl+http://www.trustedsource.org/query/terra.cl&cd=1&hl=en&ct=clnk&gl=us So apparently search engines do index content loaded by AJAX. Questions: Is this a new feature in search engines? Most postings on the web say you have to publish duplicate static HTML for search engines to find the content. Are there any tricks to get AJAX-driven content crawled (besides creating that duplicate static HTML)? Will AJAX-driven content be indexed if it is loaded from a separate subdomain? What about a separate domain?

  • How Do Search Engine Bots Crawl Forums?

    - by Waleed Eissa
    If I have a forum site with a large number of threads, will the search engine bot crawl the whole site every time? Say I have over 1,000,000 threads: will they all get crawled each time the bot visits, or how does it work? I want my website indexed, but I don't want the bot to kill it! In other words, I don't want the bot crawling the old threads again and again on every visit. Also, what about pages crawled before: will the bot request them on every visit to make sure they are still on the site? I ask because I only link to the latest threads; there is a page listing all the latest threads, but older threads have to be requested explicitly by URL, e.g. http://www.mysite.com/showthread.aspx?threadid=7. Will this stop the bot from bringing my site down and consuming all my bandwidth? P.S. The site is still under development, but I want to know now so I can design it so that search engine bots don't bring it down. Thanks
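
    One lever worth knowing about, sketched below: sitemap entries can carry lastmod and changefreq hints, so well-behaved crawlers can skip threads that haven't changed instead of re-fetching the whole archive on every visit. These are hints, not guarantees, and the values here are illustrative; crawl rate itself can also be capped in the search engines' webmaster tools:

        <url>
          <loc>http://www.mysite.com/showthread.aspx?threadid=7</loc>
          <lastmod>2010-04-01</lastmod>
          <changefreq>never</changefreq>
          <priority>0.3</priority>
        </url>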

  • 301 redirect vs parking

    - by Pat
    I have several domain names registered, each a slight variant of the others, e.g.:

        fastcar.com
        fast-car.com
        fastcar.co.uk
        fast-car.co.uk

    I don't wish to be penalized for duplicate content or spammy links by any of the major search engines. Should I park them all directly on the main domain I wish to promote, 301 redirect them to the main domain, or not use them at all? Thanks
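
    If the 301 route is taken, a sketch of a single Apache rule (assuming all the variants resolve to the same site) that funnels every spare domain to the one being promoted:

        RewriteEngine On
        # Any host other than the canonical one gets a permanent redirect.
        RewriteCond %{HTTP_HOST} !^www\.fastcar\.com$ [NC]
        RewriteRule ^(.*)$ http://www.fastcar.com/$1 [R=301,L]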

  • Website Sitemaps and <priority>, is it working?

    - by Mike Gleason jr Couturier
    Hi, my "Privacy Policy" page is seen as more important by Google than other, genuinely more important pages on my website. I'm currently writing a script to generate a sitemap; should I bother with the priority field? How do you effectively assign priorities to pages? I consider one of my pages important, but it has less content than another page that matters less to my eyes... and maybe Googlebot will see it the other way around. If my degree of importance differs from Google's, will I get penalized in the ranking of a particular page? Thank you for sharing your black art with us :P

  • Will rel=canonical break site: queries?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this:

        http://ourproduct.com/documentation/version/pageid

    where "version" is the version number the documentation applies to, and "pageid" is a unique string identifying that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged between versions 3.0 and 4.0 of our product, it is reachable by two different URLs:

        http://ourproduct.com/documentation/3.0/configuration-best-practices
        http://ourproduct.com/documentation/4.0/configuration-best-practices

    This URL scheme lets us scope Google search results to the documentation for a particular product version, like this:

        configuration site:ourproduct.com/documentation/4.0

    But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results; we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL to show when multiple versions are being searched. (Users who do oddball things like searching two versions but not all of them are a corner case, so we don't care which version(s) show up there; the primary use-cases we care about are searching one version or searching all versions.) But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0 but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site which uses rel=canonical across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
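
    For concreteness, the tag in question as it would sit in the head of the 3.0 copy of the page, using the URLs from above:

        <!-- in http://ourproduct.com/documentation/3.0/configuration-best-practices -->
        <link rel="canonical"
              href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />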

  • How to return a proper 404 for Google while providing user-friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser; please excuse me if you feel this does not belong here. I am observing the behavior described here: Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect Google may be probing how we handle non-existent content; to cite an answer to the linked question: "[Google requests random URLs to] see if your site correctly handles non-existent files (by returning a 404 response header)". We have a custom page for non-existent content: a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may hurt the site with Google: they may not interpret the user-friendly page as a 404 Not Found, and may think we are faking something and providing duplicate content. How should I proceed so that Google will not think the site is bogus, while still giving users a friendly message when they hit a dead link by accident?
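
    The standard resolution is to keep the friendly page but serve it with a 404 status instead of 200: users still see the styled content, while crawlers see the correct response header. A sketch, assuming an ASP.NET site (the page name is hypothetical):

        // Wherever unknown URLs are handled: render the friendly page as a real 404.
        Response.StatusCode = 404;
        Response.TrySkipIisCustomErrors = true;  // IIS 7+: keep IIS from swapping in its own body
        Server.Transfer("~/ContentNotFound.aspx");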

  • Is there any way of making JSON data readable by a Google spider?

    - by leeand00
    Is it possible to make JSON data readable by a Google spider? Say I have a JSON feed that contains the data for an e-commerce site, and this JSON is used to populate a human-readable page in the user's browser (i.e. the translation from JSON data to displayed page happens inside the browser; not my choice, just what I've been given to work with: it's an old legacy CGI application, not an actual server-side scripting language). My concern is that the Google spider will not be able to pick up and link directly to the item in question: when a user clicks through from Google, they would be presented with an index page full of all the items rather than the item they clicked on. Is there any way of "informing" the Google spider, in the JSON, that it should feed the user a different link?

  • firefox reading web page from local JS file -- access to restricted URI denied, code: 1012, nsresult

    - by macias
    My problem is this: I have an HTML file which is really a JS program that reads web pages and shows them in a customized manner (i.e. it displays the same content in a different way). Basically, I create an XMLHttpRequest object and then:

        req.open("GET", web_page_address, false);
        req.send("");

    This gives me (in Firefox) an error:

        Error: uncaught exception: [Exception... "Access to restricted URI denied" code: "1012" nsresult: "0x805303f4 (NS_ERROR_DOM_BAD_URI)"

    I have already googled, and looked at SO, but all the other issues differ from mine in two respects: the file I open in Firefox is a local file, opened directly in the browser (I don't have a www server running at localhost), and I don't have any control over the web pages I am reading from. So the solutions I've seen so far (like adding a PHP proxy, or changing the way the external server sends data) cannot be applied here. What else can be done in such a case? Another question entirely: I wonder whether such strict security for a directly opened local file makes any sense. Thank you in advance for tips/links/etc. Have a nice day!

  • Friendly URLs in categories

    - by ntan
    Hi to all, I am trying to use friendly URLs for my categories. Example database:

        cat_id | parent_id | name | url
           1   |     0     | cat1 | cat1
           2   |     1     | cat2 | cat2

    My approach is to pass the parameter cat with the url value, for example show.php?cat=cat1, and in .htaccess rewrite that to /cat1. BUT what about when I want to access cat2? I want to rewrite it as cat1/cat2, so the parameter is show.php?cat=cat1/cat2, and then parse the value to make sure cat2 belongs to cat1, and so on. I am not using MVC, so I have to do it on my own. If a better solution exists, please advise or suggest me some reading. Thanks in advance.
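
    A sketch of the rewrite side, assuming Apache: one rule can capture the whole nested path, however deep, and hand it to show.php as a single cat parameter; the script then explodes it on "/" and checks in the DB that each segment's parent_id matches the preceding segment:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /cat1, /cat1/cat2, /cat1/cat2/... all collapse into one parameter.
        RewriteRule ^([a-z0-9/-]+)/?$ show.php?cat=$1 [L,QSA]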

  • Using a local file path in a StreamWriter object in ASP.NET

    - by Nick LaMarca
    I am trying to create a CSV file of some data. I have written a function that successfully does this:

        Private Sub CreateCSVFile(ByVal dt As DataTable, ByVal strFilePath As String)
            Dim sw As New StreamWriter(strFilePath, False)
            ' First we will write the headers.
            Dim iColCount As Integer = dt.Columns.Count
            For i As Integer = 0 To iColCount - 1
                sw.Write(dt.Columns(i))
                If i < iColCount - 1 Then
                    sw.Write(",")
                End If
            Next
            sw.Write(sw.NewLine)
            ' Now write all the rows.
            For Each dr As DataRow In dt.Rows
                For i As Integer = 0 To iColCount - 1
                    If Not Convert.IsDBNull(dr(i)) Then
                        sw.Write(dr(i).ToString())
                    End If
                    If i < iColCount - 1 Then
                        sw.Write(",")
                    End If
                Next
                sw.Write(sw.NewLine)
            Next
            sw.Close()
        End Sub

    The problem is that I am not using the StreamWriter object correctly for what I am trying to accomplish. Since this is ASP.NET, I need the user to pick a local file path to put the file in; if I pass any path to this function, it will write the file to that directory on the server where the code runs. I would like a prompt to pop up and let the user select a place on their local machine to put the file, the way this code does:

        Dim exData As Byte() = File.ReadAllBytes(Server.MapPath(eio))
        File.Delete(Server.MapPath(eio))
        Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fn))
        Response.ContentType = "application/x-msexcel"
        Response.BinaryWrite(exData)
        Response.Flush()
        Response.End()

    I am calling the first function like this:

        Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H()
        CreateCSVFile(emplTable, "C:\\EmplTable.csv")

    where I don't want to specify the file location (because this puts the file on the server, not on a client machine) but rather let the user select the location on their client machine. Can someone help me put this together? Thanks in advance.
