Search Results

Search found 18077 results on 724 pages for 'search tricks'.


  • What are the most common AI systems implemented in Tower Defense Games

    - by the_Dan
    I'm currently in the middle of researching the various types of AI techniques used in tower-defense games, and I'd be grateful if someone could help me understand the different techniques and their associated advantages. Using Google I have already found several: random map traversal, and pathfinding, e.g. cost-based traversal algorithms such as A*. I have already found a great answer to this type of question at the link below, but I feel that answer is tailored to FPS games. If anyone could add to it and make it specific to tower-defense games, I would be truly grateful: How is AI most commonly implemented in popular games? Examples of such games would be: Radiant Defense, Plants vs. Zombies (not truly intelligent, but there must be an AI system in use, right?), Field Runners. Edit: after further research I found an interesting book that may be useful: http://www.amazon.com/dp/0123747317/?tag=stackoverfl08-20
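
    Since A* is the technique named above, here is a minimal sketch of grid-based A*, the usual fit for tower defense, where creeps route around towers placed on a grid. The grid, coordinates, and unit cost model are hypothetical, not from the original post:

        import heapq

        def a_star(grid, start, goal):
            """A* on a 2D grid; grid[r][c] == 1 is a blocked cell (a tower).
            Returns a list of (row, col) from start to goal, or None."""
            def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

            open_heap = [(h(start), 0, start, None)]  # (f, g, cell, parent)
            came_from, g_score = {}, {start: 0}
            while open_heap:
                _, g, cur, parent = heapq.heappop(open_heap)
                if cur in came_from:          # already expanded via a cheaper path
                    continue
                came_from[cur] = parent
                if cur == goal:               # walk parents back to rebuild the path
                    path = [cur]
                    while came_from[path[-1]] is not None:
                        path.append(came_from[path[-1]])
                    return path[::-1]
                r, c = cur
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and not grid[nxt[0]][nxt[1]]):
                        ng = g + 1
                        if ng < g_score.get(nxt, float("inf")):
                            g_score[nxt] = ng
                            heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cur))
            return None

        grid = [[0, 0, 0, 0],
                [1, 1, 1, 0],   # a wall of towers the creeps must route around
                [0, 0, 0, 0]]
        print(a_star(grid, (0, 0), (2, 0)))

    When the player places a tower, blocked cells change and paths must be recomputed; many tower-defense games instead run one Dijkstra/flood-fill pass from the exit so every creep reads its next step from a shared flow field.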

    Read the article

  • How to register properly with the most famous search engines? [closed]

    - by Olivier Pons
    I know this may have been asked many times, but here's my question: I'm about to open my website, which I'm more than proud of (I'll talk about its capabilities on my blog). I want it to be registered with all the major search engines and to be crawled often, because it may grow quickly. I know a lot of people may have already asked this, but nevertheless I didn't find anything relevant. I just want to know where I should register a website with the major search engines when I release it. Maybe this is a wiki question, but I didn't find anything helpful on the subject. Any advice welcome.

    Read the article

  • How long does it take for Google to re-index pages or update the link titles?

    - by ElHaix
    On one of our classified sites, when doing site:[mysite.com] in Google, the link text is simply [product name] - [mysite.com], whereas it should read [product name] classifieds for sale in... I suspect the sitemap may have been submitted when we just had [product name], and the page titles were updated later. However, it has been a couple of weeks since I confirmed the longer page titles, and they still appear shortened in organic results. How can I get this looking right in Google's organic results?

    Read the article

  • Linear Search in Python? [closed]

    - by POTUS
        def find_interval(mesh, x):
            '''This function finds the interval containing x according to the following rules;
            mesh is an ordered list with n numbers:
                return 0   if x < mesh[0]
                return n   if mesh[n-1] < x
                return k   if mesh[k-1] <= x < mesh[k]
                return n-1 if mesh[n-2] <= x <= mesh[n-1]
            This function does a linear search. 08/29/2012
            '''
            for n in range(len(mesh)):
                for k in range(len(mesh)):
                    if x == mesh[n]:
                        print "Found x at index:"
                        return n
                    elif x < mesh[n]:
                        return 0
                    elif mesh[n-1] < x:
                        return n
                    elif mesh[n-2] <= x <= mesh[n-1]:
                        return n-1
                    elif mesh[k-1] <= x < mesh[k]:
                        return k

        mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
        print mesh
        print find_interval(mesh, -1)
        print find_interval(mesh, 0)
        print find_interval(mesh, 0.1)
        print find_interval(mesh, 0.8)
        print find_interval(mesh, 0.9)
        print find_interval(mesh, 1)
        print find_interval(mesh, 1.01)

    Output:

        [0, 0.100000000000000, 0.250000000000000, 0.500000000000000, 0.600000000000000, 0.750000000000000, 0.900000000000000, 1]
        0
        Found x at index:
        0
        2
        6
        -1
        -1
        0

    I don't think the output is correct. Can anyone help me fix it? Thanks.
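
    For what it's worth, here is a minimal corrected sketch following the docstring's own rules. The nested loops are the main bug: the rules need to be checked in a fixed order, and mixing the n and k indices makes the wrong branch fire first. This runs under both Python 2 and 3:

        def find_interval(mesh, x):
            """Return the interval index of x in the ordered list mesh,
            per the four rules in the docstring above."""
            n = len(mesh)
            if x < mesh[0]:
                return 0
            if mesh[n-1] < x:
                return n
            if mesh[n-2] <= x <= mesh[n-1]:
                return n - 1
            for k in range(1, n):          # the actual linear search
                if mesh[k-1] <= x < mesh[k]:
                    return k

    For the test list above this returns 0, 1, 2, 6, 7, 7, 8 for x = -1, 0, 0.1, 0.8, 0.9, 1, 1.01. Since mesh is ordered, bisect.bisect_right from the standard library could replace the loop with a binary search.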

    Read the article

  • How to Avoid Duplicate Content in a WordPress eCommerce Store

    - by Bhanuprakash Moturu
    Hi, I run a WordPress eCommerce store powered by WooCommerce. I have a large inventory of products, and most of the product description is the same for all products; it's mandatory to include it, so it's creating a lot of duplicate content on the site. Each category has 6 products. I thought of two solutions; can you suggest which one is good? 1. noindex,follow the product pages and link them to the category page using the canonical tag. 2. index,nofollow the product pages and link them to the category page using the canonical tag. Which is the best solution, and is it good practice to use the canonical tag to point to the category page?
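
    For reference, a minimal sketch of what option 1 would look like in a product page's head (the URLs are hypothetical):

        <head>
          <meta name="robots" content="noindex, follow">
          <link rel="canonical" href="http://www.example.com/product-category/widgets/">
        </head>

    Note that rel="canonical" is meant to point at the preferred URL of substantially the same content; pointing product pages at a category page stretches that, which is exactly the judgment call the question is asking about.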

    Read the article

  • Robots.txt Disallow command [on hold]

    - by Saahil Sinha
    How do I disallow folders through robots.txt that are being crawled due to a wrong URL structure and are thus causing duplicate-page errors? The URL being crawled incorrectly by Google, leading to the duplicate-page error, is: www.abc.com/forum/index.php?option=com_forum. The actual correct page, however, is: www.abc.com/index.php?option=com_forum. Is excluding the wrong URLs through robots.txt, with the command below, the correct way to do it?

        Disallow: /forum/

    Won't it also block the site's legitimate 'forum' component folder?
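
    As a sketch of a narrower alternative (assuming all the duplicates live under /forum/index.php): a Disallow rule matches by path prefix, so the rule below blocks only URLs starting with /forum/index.php, query strings included, while a bare Disallow: /forum/ blocks everything under that folder, which is why it would also catch a legitimate forum section:

        User-agent: *
        Disallow: /forum/index.php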

    Read the article

  • How to remove an old robots.txt from Google, as the old file blocks the whole site

    - by KnowledgeSeeker
    I have a website for which Google Webmaster Tools still shows the old robots.txt:

        User-agent: *
        Disallow: /

    which is blocking Googlebot. I removed the old file and uploaded a new robots.txt with almost full access yesterday, but it is still showing me the old version. The latest contents are below:

        User-agent: *
        Disallow: /flipbook/
        Disallow: /SliderImage/
        Disallow: /UserControls/
        Disallow: /Scripts/
        Disallow: /PDF/
        Disallow: /dropdown/

    I submitted a request to remove the file using Google Webmaster Tools, but my request was denied. I would appreciate it if someone could tell me how I can clear it from Google's cache and make Google read the latest version of the robots.txt file.
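
    As a first sanity check, it's worth confirming what the server actually serves right now, independent of Google's cached copy; a stale CDN or server-side cache is a common culprit. A minimal sketch (Python 3, hypothetical domain):

        import urllib.request

        # Prints the robots.txt your server serves today, not what Google cached
        url = "http://www.example.com/robots.txt"
        print(urllib.request.urlopen(url).read().decode("utf-8"))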

    Read the article

  • I have done everything correctly on my ASP.NET website re: SEO; why aren't Google backlinks showing?

    - by Jason Weber
    I recently implemented many SEO techniques for a company on their ASP.NET website; in 6 months, we jumped from a PR1 to a PR3. But I'm having issues with Google backlinks. Here are some of the things I've done: Not only did I set up their own Google+ page 6 months ago, I update it pretty much daily with links, pictures, etc., and I blog about it on my own personal Google+ page and post links there too. They have their own Twitter, Facebook, and YouTube accounts, and all are updated almost daily. I listed the site in as many quality, relevant directories as possible 6 months ago, and avoided link farms. The site is solid SEO-wise: keyphrase-rich URLs, schema.org and rich snippets, no duplicate content, www/non-www 301s, trailing slashes, etc., all taken care of. Probably a ton of other things, but basically, the site is all set SEO-wise. Here's what's confounding: when I do a link:www.example.com in Bing/Yahoo, it shows many backlinks. When I do a link:www.example.com in Google, it shows 0 links. And when I use a site ranker like Web Site Rank Tool, it's showing 0 backlinks from Google. Any suggestions would be appreciated!

    Read the article

  • Unindex Google Code SVN repository content from the Google index

    - by matcheek
    I developed a small website and saved the code to a Google Code repository. Everything had been running smoothly for a while, until results from the Google Code SVN repository started showing up before the results from the actual website. Is there any way I can stop Google from indexing the Google Code repository content, or at least make it rank lower than the website? I'm not talking about sophisticated SEO techniques, but rather some simple settings, if there are any.

    Read the article

  • Why is <my site url> not indexed by search engines? [closed]

    - by Henrik Erlandsson
    The site was indexed fine until about a year ago. The only thing I can think of is that search engines throw up at using h5 before h4, or that some person (fantasizing now) has reported my site as unsafe to every search engine. However, I'm not here to speculate. The site validates, and has an RSS feed on the front page, for Pat Morita's sake! To me, it looks like the kind of site search engines would feast on. It's got more than a dozen blogs on it, if nothing else. Hah. :) I was hoping you could help identify what has changed in search engines (currently Google, Yahoo, and Bing, which all used to work fine) in the last year to make them not find news and blog articles on this site. The site was submitted to Google way back in 2006. With online crawler tests I get mixed results: some crawlers index fine, some come up blank. I don't really know which ones are reliable and am looking to you guys for advice on that. Yes, I am prepared to verify my site with Google again and upload a sitemap, but that's not the topic here. I really would first like to know what change on the site in the last year could make search engines not index it. (Yes, the robots.txt is fine; there should be nothing to discourage bots there.) It's a very intriguing problem, one which I have yet to find the reason for but would like to. Any and all input appreciated, but I would enjoy pertinent advice the most. ;) Edit: Some Google searches that don't show up include aca630. All of these are posted in the news and blogs that are on the front page there. Now, these search terms are extremely specific, as the term is almost unique on the web, and ACA630 is also a very qualified search term that can't be confused with mainstream search terms.

    Read the article

  • How to fix this specific Google "Fetch as Googlebot" error appearing in my Webmaster Tools?

    - by UXdesigner
    Good day. I'm currently trying to find out why I have lost all of my website's rank in Google. I don't even appear in Google results for the domain, but other sites that link to me do appear in the results. I think it's because I left my site alone for two months and came back to 20k comment-spam entries, which I completely deleted and fixed with filters and a new Disqus comment service. The thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

        Date: Wednesday, July 20, 2011 9:43:35 AM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 55

        HTTP/1.1 403 Forbidden
        Date: Wed, 20 Jul 2011 16:43:36 GMT
        Server: Apache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 248
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        403 Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again. I used to have really good Google results for the past three years; now there's nothing. Thanks.
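
    A 403 served only to Googlebot usually means the server (or a security plugin/.htaccess rule) is blocking by user agent or IP. A quick hedged check, reproducing the request with Googlebot's user-agent string against a hypothetical domain (Python 3):

        import urllib.request, urllib.error

        ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        req = urllib.request.Request("http://www.example.com/",
                                     headers={"User-Agent": ua})
        try:
            print(urllib.request.urlopen(req).getcode())  # 200 means this UA is not blocked
        except urllib.error.HTTPError as e:
            print(e.code, e.reason)  # a 403 here points at user-agent based blocking

    If a normal browser gets a 200 while this gets a 403, search the .htaccess and server config for deny rules mentioning Googlebot or crawler user agents.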

    Read the article

  • How can I find files quicker than find or locate?

    - by Chaitanya
    I have been using the find command to find files on my 1 TB hard disk, and it takes very long. Then I used locate, which proved to be faster, with its database regularly updated via updatedb. But the limitation of locate is that I cannot find files by size or modified/created time. Can you suggest any ideas on how to find files faster, or, failing that, how to pipe the output of the locate command so that all the other information, like size, time, etc., can be displayed or redirected to a file?
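
    One way to get the best of both: let locate supply candidate paths fast, then stat only those paths to filter by size and mtime. A minimal sketch (Python 3; the pattern and thresholds are made up for illustration):

        import os
        import subprocess
        import time

        # NUL-separated output survives filenames containing spaces or newlines
        out = subprocess.run(["locate", "-0", "*.iso"],
                             capture_output=True).stdout
        cutoff = time.time() - 7 * 86400            # modified within the last week
        for path in out.split(b"\0"):
            if not path:
                continue
            try:
                st = os.stat(path)
            except OSError:
                continue                             # stale entry in the locate db
            if st.st_size > 100 * 2**20 and st.st_mtime > cutoff:
                print(st.st_size, time.ctime(st.st_mtime), path.decode())

    The same idea works as a shell pipeline through xargs and stat; either way, remember the locate database only knows about files as of the last updatedb run.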

    Read the article

  • My approach to SEO implementation needs improvement [closed]

    - by Eritrea
    I have always copied/pasted the code below as a template for the meta tags on projects, but I think they are not as effective as they could be, so I need to know if there is any way I can improve it. Suppose I have a site called coop.com for a company called Coop, and we do import and export as a business in France.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
        <meta name="rpbots" content="index, follow" />
        <meta name="description" content="ccop is a major import and export company" />
        <meta name="keywords" content="coop, coop.com coop company, import export, import export france, " />
        <meta name='REVISIT-AFTER' content='30 DAYS'>
        <title>coop is an import and export company located in France</title>
        </head>

    The reason I am asking is that I want to know if there are better ways of constructing these SEO tags.
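
    A few things stand out in the template as pasted: the robots meta name is misspelled ("rpbots"), the description misspells the company ("ccop"), and major search engines largely ignore the keywords and REVISIT-AFTER tags. A leaner sketch of the same head (hypothetical wording; a unique title and description per page matters most):

        <head>
          <meta charset="utf-8">
          <title>Coop | Import and Export Company in France</title>
          <meta name="description" content="Coop is a major import and export company based in France.">
          <link rel="canonical" href="http://coop.com/">
        </head>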

    Read the article

  • REL ME tag - trying to figure it out

    - by nekdo
    Regarding http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1229920, in the ''Examples'' section, point 1, the second code line is:

        <a rel="me" href="https://plus.google.com/105240469625818678725/">
        <img src="//www.google.com/images/icons/ui/gprofile_button-16.png"></a>

    The page says that I have to add this line to the Contact Me page of my own website in order to get a Google Profile button. The exact code one should copy and paste is available here: http://www.google.com/webmasters/profilebutton/ Questions: 1) As you can see at the second URL, to make a Google Profile button I need to use the "author" tag and not the "me" tag. But the first URL (the line shown above) says that I have to use the "me" tag, and even without width="32" height="32". I am already aware that I have to enter my own Google Profile URL (second URL). So do I just MANUALLY change this:

        <a rel="author" href="https://profiles.google.com/109412257237874861202">
        <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png" width="32" height="32"></a>

    to this (note: two changes made):

        <a rel="me" href="https://profiles.google.com/109412257237874861202">
        <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png"></a>

    Is this correct? I assume that plus.google.com is the same as profiles.google.com (both are Google Profile URLs). 2) If I was wrong with my first question, then the second answer probably won't even be useful, but still: where exactly should I paste the code above inside the Author page of my own website? I think it doesn't matter where. Also: will the icon alone be enough, or do I also have to make such an anchor with rel="me" in the ''shape'' of text (a sentence such as ''Look At My Google Profile'')? Or is just the icon really enough? 3) In the same section (point 1) of the same page (the first link above), it says I first need to use the author tag to link to the Author/Contact Me page of my own website in order to later use the me tag. But I think there is a little mistake in the explanation. Instead of:

        <a rel="author" href="http://www.cnet.com/profile/iamjaygreene/">Jay Greene</a>

    shouldn't it be:

        <a rel="author" href="http://www.cnet.com/profile/iamjaygreene.html">Jay Greene</a>

    ?

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (á, ü, ...). I believe what we see mostly are URLs where diacritic characters were converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen with the below URL representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php However, what I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen which is rendered with the proper characters in the address bar. Therefore I'm considering the following approach for creating URL slugs: (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München becomes bayern-muenchen; (2) also convert strings to percent encoding: Bayern München becomes bayern-m%C3%BCnchen; then create a 301 redirect from version (1) to version (2). Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it, and I wonder why, apart from the fact that they don't need to market their URLs.)
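
    A minimal sketch of the two slug variants being compared, with a toy transliteration table (a real site would use a per-language library; the mappings below cover only a few German characters):

        import urllib.parse

        TRANSLIT = {"ü": "ue", "ö": "oe", "ä": "ae", "ß": "ss"}

        def ascii_slug(s):
            """Version (1): language-aware ASCII transliteration."""
            s = s.lower()
            for src, dst in TRANSLIT.items():
                s = s.replace(src, dst)
            return "-".join(s.split())

        def percent_slug(s):
            """Version (2): keep the original characters, percent-encoded."""
            return urllib.parse.quote("-".join(s.lower().split()))

        print(ascii_slug("Bayern München"))    # bayern-muenchen
        print(percent_slug("Bayern München"))  # bayern-m%C3%BCnchen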

    Read the article

  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is only 3 days short of being 6 months old. The website is unique in that there is no competitor to this type of site in India: it provides a comparison of payment gateways in India, besides the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, the latest time being 3 months back, at which point Google Webmaster reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to be ranking better than this. Is it due to: 1) the many outbound links to in-context companies and information, or 2) the 404s reported in Google Webmaster? My other site successfully gets 1,500 unique visitors daily and is up in Google's ranking. I don't know why this one is not!

    Read the article

  • Blog not even ranking for exact title match, after domain has been dropped twice [on hold]

    - by Akshay Hallur
    Consider a blog about blogging and SEO. The domain was dropped (expired) twice before acquisition; the current owner is the third owner of the domain and has had it for 5 months. Blog posts are not ranking, even for exact titles; Google+ or other shares show up instead of the content, and some blog posts are not even indexed. Let us say that it gets around 7 organic visits a day. It is a dropped domain, less likely to have been used for spam (the Wayback Machine shows 3 captures since 2004, with 2 re-framed drops; I don't know whether there was email spam), and there are no manual actions in WMT, so there is no reconsideration request to file. What could be the reason for this? How can Google be told that ownership has changed and the domain is now spam-free? Would this domain be salvageable, or does this only change after relocating to another domain?

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images, all of which are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is OK, but the number of 'indexed' images is significantly lower than 'submitted', and it is going DOWN! I'd understand if not all of my images got indexed (though even that is unclear and very frustrating for me), but I cannot understand how the indexing can go in the negative direction. All the images stay in their places, and the pages containing them stay unchanged; at least they are intended to. Any thoughts?

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse-proxy migration. Everything appears to be technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.

    Read the article

  • SH404SEF URLs in Joomla 1.5

    - by Tao Bellamine
    I have two modules to play with URLs: the global configuration module and the sh404SEF module. The global config is set to "SEF URLs: YES" and "mod_rewrite enabled: YES", and sh404SEF is set to "URL optimization: NO". My problem is that even with SEF URLs set in the global config, my URLs still don't seem to be that user-friendly, so I turn on URL optimization using the sh404SEF module and get better, more descriptive URLs. However, the problem I inherit from doing this is that my dynamically populated ChronoForms get messed up (only the ChronoForms; other forms are fine): these forms now show up on the homepage instead of on their own reserved page. Here's an example. Old form "GOOD" URL: http://www.mycraftwork.com/index.php?option=com_content&view=article&id=94 New optimized "BAD" URL: http://www.mycraftwork.com/handthrown-pottery/alladin-teapot/index.php?option=com_content&view=article&id=94 Any help would be GREATLY appreciated! I can even turn sh404SEF on and off if some people are interested in seeing the issue LIVE. Thanks!! Tao Bellamine

    Read the article

  • Does text size and placement on the page have an effect on SEO?

    - by sam
    I was wondering, seeing as Google and others keep trying to get more and more 'human' in terms of rating what's good and what's spam: is it known whether they take into account the size of text? I.e. a heading whose font size is 40px is going to speak a lot more to the user than one whose font size is 14px. Similarly, does placement factor in? I.e. a 300-word article at the bottom of a landing page (not in the footer, but below the useful content) would just be there for SEO purposes. I know they look at whether you're doing things like text-indent:-9999px; and white text on a white background, but what about these more borderline practices that have both legitimate uses and the potential to be spammy?

    Read the article

  • Is there a weight for an HTML link?

    - by Questions
    I would like to know if there is any weight associated with an HTML link (as a backlink) when Google does crawling/indexing. Will 1, 2 and 3 ever be considered backlinks by Google?

        1. <a href="xyz.com">1</a>        // one-character link
        2. <a href="xyz.com"> </a>        // blank-space link
        3. <a href="xyz.com">!</a>        // special-character link
        4. <a href="xyz.com">keyword</a>  // meaningful word link

    I hope my question is understandable and guess this is the right forum. I don't know how to put it in other words. Thanks in advance.

    Read the article

  • Will Google crawl a session-based website?

    - by DonShwep
    I have a website that is split into 3 categories, but using PHP it's an all-in-one kind of style. When a user chooses a category on the home page, a session variable is set; this is then used to set the style and contents of the website. Would Googlebot and other bots still be able to scan my website? If a page is accessed and no session is set, the user is sent back to the home page. I have created special links that set a session but go straight to the contact page; even this page doesn't seem to be showing up. Any ideas whether a sitemap with specially crafted links (to set the session) will help Google?

    Read the article

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images of widely varying sizes. The layout of the page is such that the images are all shown as square tiles, so just resizing will produce distorted images. What I've been doing previously is resizing and cropping images appropriately when users upload them, displaying the new image as the thumbnail, and loading the full image when the user clicks on it. However, I just realized this is an issue for SEO, as Google will crawl the thumbnails and put the thumbnails in Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I could do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance of that would be pretty bad. Any suggestions? Thanks! PS. I saw this (Make google index the actual image not the thumbnail), but in my case I have users continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy.
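
    For reference, a minimal sketch of the CSS-crop idea mentioned above (hypothetical paths and tile size): the markup serves the full image, which is what crawlers then see, and the div simply clips it on screen:

        <div style="width: 200px; height: 200px; overflow: hidden;">
          <img src="/images/full/photo123.jpg" alt="product photo">
        </div>

    The performance worry is real: every tile downloads the full-size file, so this trades bandwidth for indexability.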

    Read the article

  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. I am mostly interested in what the best hash algorithm would be. For example, I was thinking of having it get the hash of each file's contents and then group the hashes that are the same. Also, should there be a limit set on the maximum file size, or is there a hash that is suitable for large files?
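
    One common approach, sketched below: group files by size first (cheap), and hash only the files that share a size, reading in chunks so arbitrarily large files never have to fit in memory. The hash choice matters less than the chunking; any cryptographic hash (here SHA-256) makes accidental collisions a non-issue:

        import hashlib
        import os
        from collections import defaultdict

        def file_digest(path, chunk=1 << 20):
            """Hash a file of any size in 1 MiB chunks."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        def find_duplicates(root):
            by_size = defaultdict(list)
            for dirpath, _, names in os.walk(root):
                for name in names:
                    p = os.path.join(dirpath, name)
                    if os.path.isfile(p):
                        by_size[os.path.getsize(p)].append(p)
            by_hash = defaultdict(list)
            for size, paths in by_size.items():
                if len(paths) > 1:        # a unique size cannot be a duplicate
                    for p in paths:
                        by_hash[file_digest(p)].append(p)
            return [g for g in by_hash.values() if len(g) > 1]

        print(find_duplicates("."))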

    Read the article
