Search Results

Search found 27327 results on 1094 pages for 'search results'.


  • Handling SEO for infinite pages that require slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site, each of which relies on an external API. Generating a page takes time (about 1 minute). Links within the site point to these pages, and when a user clicks one, it is generated while they wait. Since I cannot pre-create them all, I am trying to figure out the best SEO approach for handling these pages. Options:
    1. Serve really simple pages to web spiders, and let only real users trigger the API calls that generate the full page. I am a little 'afraid' Google will see this as low-quality content, which might also look duplicated.
    2. Put the pages under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt. The problem here is that I don't want users to have to deal with a different URL when sharing the page or making sense of it. I thought about redirecting real users from this URL back to the regular hierarchy and in that way 'fooling' Google into not reaching them; again, I am not sure Google will like me for that.
    3. Let Google crawl these pages. The main problems are that I can't control the rate of the API calls, and my site will look slower than it should from a spider's perspective (if it only crawled the already-generated pages, it would think the site is much faster).
    Which approach would you suggest?

    Read the article

  • Narrowing down my large keyword list for new PPC campaign

    - by gijoemike
    I have a list of 100 keywords that are candidates for a PPC campaign (my list is actually 1000+). What is the best approach to narrowing it down to the top 5-10 keywords I should start with? I'm also wondering whether my top chosen PPC keywords should also be my main keywords for SEO site optimization for organic traffic. I have another question on this site asking how one estimates where a competitor is getting most of their traffic from. Thanks. The website isn't created yet, but will be up in January.

    Read the article

  • Why are Facebook profiles Google-searchable?

    - by Jose
    Facebook has around 1B user profiles. They can be found by searching in Google. However, I don't think these profiles are linked from anywhere, so how could Google discover them? As far as I know, sitemaps are not enough for that (http://webmasters.stackexchange.com/a/5151), as all URLs should be crawlable anyway. I ask the question as I also have a site with user profiles and would like to make them discoverable.

    Read the article

  • Best CMS to handle short snippets of text?

    - by Federico Poloni
    I have to install a CMS to manage a set of mathematical problems, i.e., our main content will be short (~3 lines) snippets of text. We need the ability to add comments and categories/tags, possibly with a powerful search function combining different constraints on the categories. A crucial requirement is the ability to combine the results of a search on the same page to produce a (printable) problem sheet: not many CMSs seem to be able to do this, and it is difficult for me to test every one for this specific function. Do you know of a CMS that is capable of returning formatted search results in this fashion? Thanks in advance!

    Read the article

  • My blog not even ranking for exact title match [on hold]

    - by Akshay Hallur
    I have original, in-depth blog posts related to blogging and SEO. The domain had been dropped (expired) twice before my acquisition; I am the 3rd owner of the domain and have had it for 143 days. Blog posts are not ranking even for exact titles; Google+ or LinkedIn shares show up instead of my content, and some blog posts are not indexed at all. I am getting barely 7 organic visits a day. Example 1: http://www.infoflame.com/offer-pdf-of-blog-posts-for-likes-and-shares/ (title: Offer Readers PDF of Blog Posts for Their Likes and Shares) is not indexed at all. Example 2: http://www.infoflame.com/anchor-text-for-seo/ is indexed but does not come up for the exact title. Suspect: a dropped domain, less likely used for spam (the Wayback Machine shows 3 captures since 2004 across the 2 drops; I don't know whether there was email spam), but there are no manual actions in WMT, so no reconsideration request. What's the reason for this? Should I wait? How can I tell Google that ownership has changed and the domain is now spam-free? Or should I de-index it and start a new blog? Thank you for any advice.

    Read the article

  • Does Google treat AWS IP addresses as related?

    - by ElHaix
    We are hosting several websites on one of our servers and are wondering whether they have somehow been penalized because they are on the same subnet. We are not inter-linking between the websites. However, in an attempt to have everything hosted in AWS, we will have some sites that we do want to interlink. If the sites resided on the same subnet, this could be bad. With AWS, however, we can allocate multiple Elastic IP addresses that do reside on different subnets. How does Google deal with this?

    Read the article

  • What are the most common AI techniques implemented in tower defense games?

    - by the_Dan
    I'm currently in the middle of researching the various types of AI techniques used in tower-defense games. I would appreciate help understanding the different techniques and their associated advantages. Using Google I have already found several: random map traversal, and pathfinding (e.g. cost-based traversal algorithms such as A*). I have already found a great answer to this type of question at the link below, but I feel that answer is tailored to FPS games. If anyone could add to it and make it specific to tower defense games, I would be truly grateful. How is AI most commonly implemented in popular games? Examples of such games: Radiant Defense; Plants vs. Zombies (not truly intelligent, but there must be an AI system used, right?); Fieldrunners. Edit: after further research I found an interesting book that may be useful: http://www.amazon.com/dp/0123747317/?tag=stackoverfl08-20
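    Since A* comes up in almost every discussion of tower-defense pathfinding, here is a minimal sketch of grid-based A* in Python; the grid, the uniform step cost, and the Manhattan heuristic are illustrative assumptions, not taken from any particular game.

        import heapq

        def a_star(grid, start, goal):
            """Shortest path on a 2D grid; grid[y][x] == 1 means blocked (a tower)."""
            def h(p):                                   # Manhattan-distance heuristic
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            g_cost = {start: 0}
            came_from = {}
            open_set = [(h(start), start)]
            while open_set:
                _, current = heapq.heappop(open_set)
                if current == goal:                     # walk the parent chain back to start
                    path = [current]
                    while current in came_from:
                        current = came_from[current]
                        path.append(current)
                    return path[::-1]
                x, y = current
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    nx, ny = nxt
                    if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                        ng = g_cost[current] + 1
                        if ng < g_cost.get(nxt, float('inf')):
                            g_cost[nxt] = ng
                            came_from[nxt] = current
                            heapq.heappush(open_set, (ng + h(nxt), nxt))
            return None                                 # no path: the map has been walled off

        grid = [[0, 0, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 0, 0]]
        print(a_star(grid, (0, 0), (0, 2)))             # creeps route around the wall of towers

    In a tower-defense setting this search is typically re-run (or incrementally repaired) whenever a tower is placed, which is why many such games restrict where towers can be built or use flow-field variants instead.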

    Read the article

  • Tips for switching jobs and moving into web based programming?

    - by JerryC
    I graduated in 2006 with a computer science degree and got solid grades (3.5 overall, 3.8 in my major). For the past 4.5 years I've been working as a software engineer doing primarily rich-client development. Most of my experience is with Java, Swing and C++. I've done a lot of network programming and I have acquired some skill working with and debugging distributed environments. I would like to switch jobs and move into a role where I can get exposure to some new technologies and frameworks, ideally a more web-development-oriented role, but I find my lack of web development experience is hurting me. 90% of the jobs I see advertised are looking for one of two skill sets: 1) the stereotypical server-side Java web developer, with experience in Spring, Hibernate, J2EE, etc.; 2) the stereotypical front-end web developer, with experience in JavaScript, jQuery, HTML5, GWT, CSS, etc. I find most of these companies are looking very specifically for this experience and are not willing to take on good programmers with strong CS fundamentals who lack experience with this stuff. I would love to get a job doing work like this, but have my skills become out of date and unmarketable? Any opinions on ways to sell myself to help get a new position?

    Read the article

  • Creating a Google sitemap.xml: is it okay for the images to be wrapped in url tags?

    - by AzizAG
    I'm using a tool to generate the sitemap.xml file for me. It crawled my website and picked up the pages and all the images, but when exporting it I reviewed the XML (to make sure nothing is wrong) and noticed that the images on my website are wrapped in url tags (I think they should be in image tags). See this: <url><loc>http://mywebsite.com/images/12.jpg</loc><lastmod>2012-05-23T13:39:02+00:00</lastmod><changefreq>weekly</changefreq><priority>0.50</priority></url> Shouldn't it be wrapped in an image tag (just like videos are wrapped in a video tag)? Thanks.
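    For comparison, Google's image sitemap extension nests image entries inside the url element of the page that displays them, rather than giving each image file its own url entry. A minimal sketch (the page path is an assumption for illustration; the image URL is the one from the question):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
          <url>
            <!-- the page that shows the image, not the image file itself (assumed path) -->
            <loc>http://mywebsite.com/some-page.html</loc>
            <image:image>
              <image:loc>http://mywebsite.com/images/12.jpg</image:loc>
            </image:image>
          </url>
        </urlset>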

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: my site is ranking for filthy keywords and I would like to remove them from Google's ranking/keywords. Background: my server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was friendly enough to modify the index.php file so that it generated random sexually oriented keywords whenever the site was fetched as Googlebot. So if you fetched it as Googlebot, or it got indexed, you would get randomly generated keywords like: sex videos, teenager, teen sex, adult sex, preteen, A LINK TO A RANDOM CONTENT OF MY WEBPAGE, anime sex videos; a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had indexed me for the word "sex" and roughly 2000 adult-oriented keywords. I removed all the content, took the site down, replaced index.php with a static HTML page and added an "ERROR 410" title to the website, so that the content is no longer here and is removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there, and, strangely, some of the keyword rankings actually "improve" over time. I have screenshots from Webmaster Tools showing this. Question: how can I remove these filthy keywords and re-rank my website as a "normal" website as quickly as possible? I want to REMOVE the keywords if possible. Please help me or point me in a direction. Thank you.
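    One side note on the 410: putting "ERROR 410" in the page title does not change the HTTP status that Googlebot sees; the removed URLs need to actually return 410 Gone. A minimal sketch, assuming an Apache server with mod_alias and using made-up example paths:

        # .htaccess sketch - tell crawlers these paths are gone for good (paths are hypothetical)
        Redirect gone /old-hacked-page.php
        Redirect gone /another-removed-page.php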

    Read the article

  • Webmaster Tools: root and subdirectories?

    - by nick
    We have all our international sites on our .com domain like this: site.com/uk, site.com/us, etc. When creating the sites in Webmaster Tools I created different sites and submitted sitemaps for each directory so that we can geotarget them appropriately. Is it also recommended to add the root .com with its geotargeting set to international? If so, should I also add all the separate sitemaps (like /us/sitemap.xml) even though they have already been added to the directory-level sites?

    Read the article

  • How to Avoid Duplicate Content in Wordpress Ecommerce Store

    - by Bhanuprakash Moturu
    Hi, I run a WordPress e-commerce store powered by WooCommerce. I have a large inventory of products, and most of the product description is the same for all products (it is mandatory to include it), which creates a lot of duplicate content on the site. Each category has 6 products. I have thought of two solutions; can you suggest which one is good?
    1. noindex, follow the product pages and link them to the category page using the canonical tag.
    2. index, nofollow the product pages and link them to the category page using the canonical tag.
    Which is the better solution, and is it good practice to use the canonical tag to link to the category page?
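    For reference, the two pieces being discussed would look roughly like this in a product page's head. The category URL is a made-up example, and whether combining noindex with a canonical is advisable is exactly what the question is asking:

        <!-- hypothetical example for one product page -->
        <meta name="robots" content="noindex, follow">
        <link rel="canonical" href="http://example.com/product-category/widgets/">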

    Read the article

  • How to remove an old robots.txt from Google when the old file blocks the whole site

    - by KnowledgeSeeker
    I have a website for which Google Webmaster Tools still shows the old robots.txt:

        User-agent: *
        Disallow: /

    which blocks Googlebot entirely. I removed the old file and uploaded a new robots.txt with almost full access yesterday, but Google is still showing me the old version. The latest copy's contents are below:

        User-agent: *
        Disallow: /flipbook/
        Disallow: /SliderImage/
        Disallow: /UserControls/
        Disallow: /Scripts/
        Disallow: /PDF/
        Disallow: /dropdown/

    I submitted a request to remove this file using Google Webmaster Tools, but my request was denied. I would appreciate it if someone could tell me how I can clear it from the Google cache and make Google read the latest version of the robots.txt file.

    Read the article

  • How do I properly register a site with the major search engines? [closed]

    - by Olivier Pons
    I know this may have been asked many times, but here's my question: I'm about to open my website, which I'm more than proud of (I'll talk about its capabilities on my blog). I want it to be registered with all the major search engines and crawled often, because it may grow quickly. A lot of people may have already asked this question, but I didn't find anything relevant. I just want to know where I should register a website with the major search engines when I release it. Maybe this is a wiki question, but I didn't find anything helpful on the subject. Any advice welcome.

    Read the article

  • Robots.txt Disallow command [on hold]

    - by Saahil Sinha
    How do I disallow, through robots.txt, folders that are being crawled due to a wrong URL structure and are therefore causing duplicate-page errors? The URL crawled incorrectly by Google, leading to the duplicate-page error, is: www.abc.com/forum/index.php?option=com_forum However, the actual correct page is: www.abc.com/index.php?option=com_forum Is excluding www.abc.com/forum/index.php?option=com_forum through robots.txt with the command below the correct way? Disallow: /forum/ Will it not also block the site's legitimate 'Forum' component?
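    As a reminder of how the matching works, Disallow rules are prefix matches against the URL path, so a sketch like the following (the User-agent line is added for completeness) only affects URLs whose path begins with /forum/, not the correct URL above, whose path is /index.php:

        # hypothetical robots.txt sketch
        User-agent: *
        # blocks /forum/index.php?option=com_forum and anything else under /forum/,
        # but not /index.php?option=com_forum
        Disallow: /forum/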

    Read the article

  • Linear Search in Python? [closed]

    - by POTUS
    def find_interval(mesh, x):
        '''This function finds the interval containing x according to the following rules;
        mesh is an ordered list with n numbers:
        return 0   if x < mesh[0]
        return n   if mesh[n-1] < x
        return k   if mesh[k-1] <= x < mesh[k]
        return n-1 if mesh[n-2] <= x <= mesh[n-1]
        This function does a Linear search. 08/29/2012
        '''
        for n in range(len(mesh)):
            for k in range(len(mesh)):
                if x == mesh[n]:
                    print "Found x at index:"
                    return n
                elif x < mesh[n]:
                    return 0
                elif mesh[n-1] < x:
                    return n
                elif mesh[n-2] <= x <= mesh[n-1]:
                    return n-1
                elif mesh[k-1] <= x < mesh[k]:
                    return k

    mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
    print mesh
    print find_interval(mesh, -1)
    print find_interval(mesh, 0)
    print find_interval(mesh, 0.1)
    print find_interval(mesh, 0.8)
    print find_interval(mesh, 0.9)
    print find_interval(mesh, 1)
    print find_interval(mesh, 1.01)

    Output:

    [0, 0.100000000000000, 0.250000000000000, 0.500000000000000, 0.600000000000000, 0.750000000000000, 0.900000000000000, 1]
    0
    Found x at index:
    0
    2
    6
    -1
    -1
    0

    I don't think the output is correct. Can anyone help me fix it? Thanks.
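    The nested loops mix up the roles of n and k, so the boundary rules fire with negative indices (hence the -1 results). A minimal corrected sketch that follows the docstring's rules, written in the same Python 2 style as the question (the function name and the test values are kept from the original):

        def find_interval(mesh, x):
            # handle the boundary rules first, then scan the interior intervals once
            n = len(mesh)
            if x < mesh[0]:
                return 0
            if mesh[n-1] < x:
                return n
            if mesh[n-2] <= x <= mesh[n-1]:
                return n-1
            for k in range(1, n):          # linear search: return k if mesh[k-1] <= x < mesh[k]
                if mesh[k-1] <= x < mesh[k]:
                    return k

        mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
        for x in (-1, 0, 0.1, 0.8, 0.9, 1, 1.01):
            print x, find_interval(mesh, x)   # expected: 0, 1, 2, 6, 7, 7, 8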

    Read the article

  • I have done everything correctly on my ASP.NET website regarding SEO; why aren't Google backlinks showing?

    - by Jason Weber
    I recently implemented many SEO techniques for a company on their ASP.NET website; in 6 months, we jumped from a PR1 to a PR3. But I'm having issues with Google backlinking. Here are some of the things I've done: I set up their Google+ page 6 months ago and update it pretty much daily with links, pictures, etc., and I blog about it on my own personal Google+ page and post links there too. They have their own Twitter, Facebook and YouTube accounts, all updated almost daily. Six months ago I listed the site in as many quality, relevant directories as possible and avoided link farms. The site is solid SEO-wise: key-phrase-rich URLs, schema.org and rich snippets, no duplicate content; www vs. non-www 301s, trailing slashes, etc. are all taken care of, along with probably a ton of other things. Basically, the site is all set, SEO-wise. Here's what's confounding: when I do a link:www.example.com in Bing/Yahoo, it shows many backlinks. When I do a link:www.example.com in Google, it shows 0 links; and when I use a site ranker like Web Site Rank Tool, it also shows 0 backlinks from Google. Any suggestions would be appreciated!

    Read the article

  • Why is <my site url> not indexed by search engines? [closed]

    - by Henrik Erlandsson
    The site was indexed fine until about a year ago. The only thing I can think of is that search engines throw up at using h5 before h4, or that some person (fantasizing now) has reported my site as unsafe to every search engine. However, I'm not here to speculate. The site validates, and has an RSS feed on the front page, for Pat Morita's sake! To me, it looks like the kind of site search engines would feast on; it's got more than a dozen blogs on it, if nothing else. Hah. :) I was hoping you could help identify what has basically changed in search engines (currently Google, Yahoo and Bing, which used to work fine) over the last year to make them not find news and blog articles on this site. The site was submitted to Google, oh, way back in 2006. With online crawler tests I get mixed results: some crawlers index fine, some come back blank. I don't really know which ones are reliable and am looking to you for advice on that. Yes, I am prepared to verify my site with Google again and upload a sitemap, but that's not the topic here. I really would first like to know what change on the site in the last year could make search engines not index it. (Yes, the robots.txt is fine; there should be nothing there to discourage bots.) It's a very intriguing problem, one I have yet to find the reason for but would like to. Any and all input appreciated, but I would most enjoy pertinent advice. ;) Edit: some Google searches that don't show up include aca630, all of which are posted in the news and blogs that are on the front page there. These search terms are extremely specific, as the term is almost unique on the web, and ACA630 is also a very qualified search term that can't be confused with mainstream search terms.

    Read the article

  • How can I find files quicker than find or locate?

    - by Chaitanya
    I have been using the find command to find files on my 1 TB hard disk, and it takes a very long time. I then used locate, which proved to be faster when the database is kept current with updatedb. But the limitation of locate is that I cannot find files of a certain size or with a certain modified/created time. Can you suggest any ideas on how to find files faster, or, failing that, how to pipe the output of the locate command so that all the other information like size, time, etc. can be displayed or redirected to a file?
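    For what it's worth, find itself supports size and time filters, and locate output can be fed through xargs to recover the metadata. Two illustrative examples; the paths and thresholds are made up, and -printf assumes GNU find:

        # files over 100 MB modified in the last 7 days, with size and mtime printed (GNU find)
        find /data -type f -size +100M -mtime -7 -printf '%s\t%TY-%Tm-%Td\t%p\n'

        # use the fast locate database first, then let ls add size/time details to a file
        locate '*.log' | xargs -d '\n' ls -lh > log_files.txt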

    Read the article

  • Blog not even ranking for exact title match, after domain has been dropped twice [on hold]

    - by Akshay Hallur
    Consider a blog related to blogging and SEO. The domain had been dropped (expired) twice before acquisition; the current owner is the 3rd owner of the domain and has had it for 5 months. Blog posts are not ranking, even for exact titles; Google+ or other shares show up instead of the content, and some blog posts are not even indexed. Let us take it that the blog gets around 7 organic visits a day. It is a dropped domain, less likely to have been used for spam (the Wayback Machine shows 3 captures since 2004 across the 2 drops; whether there was email spam is unknown), and there are no manual actions in WMT, so no reconsideration request. What could be the reason for this? How can Google be told that ownership has changed and the domain is now spam-free? Is this domain salvageable, or does this only change after relocating to another domain?

    Read the article

  • REL ME tag - trying to figure it out

    - by nekdo
    Regarding http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1229920, in the 'Examples' section, point '1.', the second code line is:

        <a rel="me" href="https://plus.google.com/105240469625818678725/"> <img src="//www.google.com/images/icons/ui/gprofile_button-16.png"></a>

    That page says I have to add this line to the Contact Me page of my own website in order to get a Google Profile button. The exact code to copy and paste is available here: http://www.google.com/webmasters/profilebutton/

    Questions:

    1) As you can see at the second URL, to make the Google Profile button I am told to use the "author" rel value and not the "me" value, while the first URL (the line quoted above) shows that I should use "me", and even without the width="32" height="32". I am already aware that I have to insert my own Google Profile URL (second URL). So do I just MANUALLY change this:

        <a rel="author" href="https://profiles.google.com/109412257237874861202"> <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png" width="32" height="32"></a>

    to this (note: two changes made):

        <a rel="me" href="https://profiles.google.com/109412257237874861202"> <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png"></a>

    Is this correct? I assume that plus.google.com is the same as profiles.google.com (both are Google Profile URLs).

    2) If I was wrong in my first question then the second answer probably won't even be useful, but still: where exactly should I paste the code

        <a rel="me" href="https://profiles.google.com/109412257237874861202"> <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png"></a>

    inside the Author page of my own website? I think it doesn't matter where. Also: will the icon alone be enough, or do I also have to make such an anchor with rel="me" in the form of text (a sentence such as "Look At My Google Profile")? Or is just the icon really enough?

    3) In the same section ('1.') of the same page (the first link above) it says that I first need to use an author tag to link to the Author/Contact Me page of my own website in order to later use the me tag. But I think there is a little mistake in the explanation. Shouldn't it be, instead of:

        <a rel="author" href="http://www.cnet.com/profile/iamjaygreene/">Jay Greene</a>

    this:

        <a rel="author" href="http://www.cnet.com/profile/iamjaygreene.html">Jay Greene</a>

    ?

    Read the article

  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is only 3 days short of being 6 months old. The website is unique in that there is no competitor to this type of site in India: it provides a comparison of payment gateways in India, apart from the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, most recently 3 months back, after which Google Webmaster reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to rank better than this. Is it due to: 1) the large number of outbound links to in-context companies and information, or 2) the 404s reported in Google Webmaster? Another site of mine successfully gets 1,500 unique visitors daily and is up in Google's rankings. I don't know why this one is not!

    Read the article

  • My approach to SEO implementation needs improvement [closed]

    - by Eritrea
    I have always copied and pasted the code below as a template for the meta tags on a project, but I think the tags are not as effective as they could be, so I need to know whether there is any way I can improve them. Suppose I have a site called coop.com for a company called Coop, and our business is import and export in France.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="description" content="coop is a major import and export company" />
        <meta name="keywords" content="coop, coop.com coop company, import export, import export france, " />
        <meta name='REVISIT-AFTER' content='30 DAYS'>
        <title>coop is an import and export company located in France</title>
        </head>

    The reason I am asking is that I want to know whether there are better ways of constructing and structuring SEO tags.

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (á, ü, ...). I believe what we mostly see are URLs where diacritic characters have been converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen with the below URL representing the string Bayern München as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php However, what I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen which is rendered in the address bar as FC_Bayern_München. Therefore I'm considering the following approach for creating URL slugs:
    (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München becomes bayern-muenchen;
    (2) also convert strings to percent encoding: Bayern München becomes bayern_m%C3%BCnchen;
    and create a 301 redirect from version (1) to version (2). Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
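    A small sketch of what steps (1) and (2) could look like in Python 2 (matching the Python style used elsewhere on this page); the transliteration table only covers the German characters mentioned above, so a real implementation would need a per-language mapping or a library such as Unidecode:

        # -*- coding: utf-8 -*-
        import re
        import urllib

        # step (1): transliterate to an ASCII slug (German-specific table, for illustration only)
        TRANSLIT = {u'ä': 'ae', u'ö': 'oe', u'ü': 'ue', u'ß': 'ss'}

        def ascii_slug(text):
            text = text.lower()
            for char, replacement in TRANSLIT.items():
                text = text.replace(char, replacement)
            text = re.sub(r'[^a-z0-9]+', '-', text)       # collapse everything else into hyphens
            return text.strip('-')

        # step (2): keep the original characters but percent-encode them for the URL
        def percent_slug(text):
            text = text.lower().replace(u' ', u'-')
            return urllib.quote(text.encode('utf-8'), safe='-')

        print ascii_slug(u'Bayern München')    # -> bayern-muenchen
        print percent_slug(u'Bayern München')  # -> bayern-m%C3%BCnchen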

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse-proxy migration. Everything appears to be working technically as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our peak season at the moment.

    Read the article
