Search Results

Search found 247 results on 10 pages for 'bots'.

Page 5/10 | < Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • 3D game engine for networked world simulation / AI sandbox

    - by Martin
    More than five years ago I was playing with DirectSound and Direct3D, and I found it really exciting, although it took a lot of time to get good results with C++. I was a college student then. Now I mostly have enterprise development experience in C# and PHP, and I do it for a living. There is really no chance to earn money with serious game development in our country, and each day I find more and more that I miss it. So I decided to spend an hour or so each day programming for fun.

    My idea is to build a world simulation. I would like to begin with something simple: some human-like creatures that live their lives, like The Sims 3 but much simpler, with just basic needs, basic animations and minimal graphic assets. I guess it won't be a city but just a large house for a start. The idea is to have some kind of server application which stores the world data in a MySQL database, plus some client applications: body-less AI bots which simulate movement and some interactions with the world and each other. But it wouldn't be fun without 3D, so there are also 3D clients, so I can enter that virtual world and see the AI bots living. When a bot enters the visible area it becomes material, loading a mesh and animations so I can see it. When I leave, the bots lose their 3D mesh bodies again, but their virtual life still continues. With time I hope to turn it into an expandable, scriptable sandbox for experimenting with various AI algorithms and so on. But I do not intend to create a full-blown MMORPG :D

    I have looked at many of the things I would need (free and open source) and now I have to make a choice:

    - OGRE3D + enet (or RakNet). Good old C++, but won't it slow me down so much that I won't have fun any more?
    - CrystalSpace. Formally not a game engine, but very close to one. C++ again.
    - MOgre (an OGRE3D wrapper for .NET) + lidgren (a networking library already used in some gaming projects). Good: I like C#, it is well suited to fast programming and can also be used for scripting.
    - XNA seems to be just a framework, not an engine, so I really have doubts about whether I should even look at XNA Game Studio :(
    - Panda3D, a full game engine with positive feedback. I really like the idea of having the whole toolset in one package, and it has good reviews as a beginner-friendly engine... if you know Python. On the C++ side, Panda3D's documentation is almost non-existent. I have zero experience with Python, but I've heard it is easy to learn, and if the project turns out fun and challenging then I guess I would benefit from experience in one more programming language.

    Which of these would you suggest, not for advanced features or good platform support, but mostly for fun, an easy workflow and expandability, so that I can create and integrate all the components I need: the server with the database, the AI bots and a 3D client application?

    Read the article

  • Hide email address with JavaScript

    - by Martin Aleksander
    I read somewhere that hiding an email address behind JavaScript code can reduce spam bots harvesting the address.

        <script language="javascript" type="text/javascript">
        var a = "Red";
        var t = "no";
        var doc = document;
        var b = "ITpro";
        var ad = a;
        ad += "@";
        ad += b;
        ad += ".";
        ad += t;
        var mt = "ma";
        mt += "il";
        mt += "to";
        var text = "";
        if (text == null || text.length == 0)
            text = ad;
        doc.write("<"+"a hr"+"ef=\""+mt+":"+ad+"\">"+text+"</"+"a>");
        </script>

    This will not display the actual email address in the source code of the page, but it will display and work like a normal link for human users. Is there any point in doing this? Will it reduce spam bots, or is it just nonsense that might slow down page performance because of the JavaScript?
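    For what it's worth, the same trick can be written without document.write, which blocks the parser; a minimal modern sketch, assuming the script runs after the DOM loads (the element id "contact" and the address parts below are illustrative placeholders, not anything from the original post):

        <a id="contact" href="#">contact me</a>
        <script type="text/javascript">
        // Assemble "user@example.com" from parts so the full address
        // never appears verbatim in the page source.
        document.addEventListener("DOMContentLoaded", function () {
            var user = "user";
            var host = "example" + "." + "com";
            var address = user + "@" + host;
            var link = document.getElementById("contact");
            link.href = "mailto:" + address;
            link.textContent = address;
        });
        </script>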

    Read the article

  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Google has recently made some changes to Webmaster Tools, which are explained here: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html. One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually). What I don't know is whether the number of errors shown in the Crawl errors report is cumulative over time (i.e. if Google's bots crawl your website on 2 different days and find 1 separate issue on each day, whether they will report 1 error for each day, or 1 for the first day and 2 for the second). Based on the Crawl stats we can see that the number of requests made by Google's bots doesn't increase. I therefore believe the number of errors reported is cumulative, and that an error detected on one day is carried over and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or until you manually mark the error as fixed), because if you don't make more requests to a website, there is no way to check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate them using string matching (e.g. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to keep adding to that CASE statement for every new browser release? Also, how should I deal with less common, weird or unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are:

        WordPress/3.3.1; http://www.facecolony.org
        Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>)

    I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or are sometimes not even available). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar.
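    One common alternative to a growing CASE statement is a table-driven matcher: keep an ordered list of patterns in one place and walk it until something matches, so a new browser release means a new table row rather than new branching logic. A minimal sketch in JavaScript; the families and patterns below are illustrative assumptions, not a complete ruleset:

        // Ordered rules: more specific patterns first, catch-alls last.
        var uaRules = [
            { family: "Firefox 9", pattern: /Firefox\/9\./ },
            { family: "Firefox",   pattern: /Firefox\// },
            { family: "Chrome",    pattern: /Chrome\// },
            { family: "Bot",       pattern: /bot|spider|crawler/i }
        ];

        function classifyUserAgent(ua) {
            for (var i = 0; i < uaRules.length; i++) {
                if (uaRules[i].pattern.test(ua)) {
                    return uaRules[i].family;
                }
            }
            return "Unknown"; // review these periodically for new browsers
        }

        console.log(classifyUserAgent("Mozilla/5.0 Gecko/20100101 Firefox/9.0.1")); // "Firefox 9"

    The same idea works inside the database: store the patterns in a lookup table and match against it in the query, instead of hard-coding them in a CASE statement.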

    Read the article

  • Why is the Yahoo indexing bot considered "evil"?

    - by bigstylee
    After reading and commenting on the question "PHP Library for Keeping your site index by Google, Bing, etc", I was curious to look at Stack Overflow's sitemap. This returned a 404 error, which I am guessing means the page is either protected (by checking whether you are an indexing bot) or simply doesn't exist. This then led me to look at Stack Overflow's robots.txt. I was surprised to see the comment "Yahoo bot is evil" alongside a couple of other indexing bots (Spinn3r and KSCrawler). I am unfamiliar with Spinn3r and KSCrawler, but my question is: why are these bots (particularly Yahoo's) considered evil? Surely any indexing by any search engine is a good thing?

    Read the article

  • Programmatic Bot Detection

    - by matt
    Hi, I need to write some code to analyze whether or not a given user on our site is a bot. If it's a bot, we'll take some specific action. Looking at the user agent only works for friendly bots, since a bot can claim any user agent it wants; I'm after the behaviors of unfriendly bots. Various ideas I've had so far are:

    - no browser ID
    - no session ID
    - unable to write a cookie

    Obviously, there are some cases where a legitimate user will look like a bot, but that's OK. Are there other programmatic ways to detect a bot, or to detect something that merely looks like a bot? Thanks!
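    As a rough illustration of how such signals might be combined, here is a sketch of a hypothetical Node.js/Express middleware; the header names are standard HTTP, but the weights, the threshold and the overall scoring scheme are invented for the example:

        var express = require("express");
        var app = express();

        // Score a request on bot-like signals; higher means more suspicious.
        function botScore(req) {
            var ua = req.headers["user-agent"] || "";
            var score = 0;
            if (!ua) score += 2;                             // no user agent at all
            if (!req.headers["accept-language"]) score += 1; // real browsers send this
            if (!req.headers.cookie) score += 1;             // never returned our cookie
            if (/bot|spider|crawl/i.test(ua)) score += 3;    // self-declared bots
            return score;
        }

        app.use(function (req, res, next) {
            // Flag the request so downstream handlers can take specific action.
            req.isProbablyBot = botScore(req) >= 3;
            next();
        });

        app.listen(3000);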

    Read the article

  • Security programming jobs

    - by Mike Smith
    I am a student, about to finish my undergraduate in Computer Science in about a year. I am very interested in computer/network security, but I also love programming. Is there a job or subfield that is a fusion of both? I have programmed everything from games to barcode readers to web bots, and I know for sure that I want to do some kind of programming, but ideally I would like to do some kind of software development involving computer security. Any advice would be appreciated.

    Read the article

  • getting 500 internal error when setting 301 redirect using .htaccess

    - by sam
    I'm trying to use a 301 redirect to send users and bots to my new site, but when I put the .htaccess live I keep getting a 500 internal error. The site is actually a subdomain which I want to redirect to another subdomain on another site (I'm not sure if that's relevant, but I thought I should include it). The site is hosted on an Apache server. The 301 .htaccess code I'm using is:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule (.*) http://www.blog.mysite.co.uk/$1 [R=301,L]

    Any idea what might be wrong with this?

    Read the article

  • Is dynamic HTML layout good from an SEO perspective?

    - by sll
    Just wondering whether a dynamically built HTML layout is fine from an SEO perspective. Let's assume an e-commerce engine and its most popular page, the product catalog, where 90% of the page is built on the client side using AJAX and the MVVM library knockoutjs, which generates the HTML on the fly. How would search bots parse such content? Will it be indexed properly, and will it be as effective for SEO as server-side-built HTML pages?

    Read the article

  • Beginning programming for real clients, what copyright should I put in the code?

    - by Igor Marvinsky
    Hello. So far I've been writing projects for my friends and friends of my friends, which required no legal stuff. Now I've moved on to freelance programming on websites like vworker.com, and I'm wondering what I should put in the comments at the top of the code. I'm not doing big, serious projects, just frontends and scrapers/bots for what I gather is personal use. Would my usual // Written by Igor Marvinsky, 2011 be enough?

    Read the article

  • How do I make my hosting detect _escaped_fragment_ and fetch the corresponding HTML? [on hold]

    - by Eric
    I have an AJAX site and I'm using hashbangs (#!) in my URLs, with the intention of serving the correct HTML versions when Google's bots replace the #! with _escaped_fragment_. How do I go about routing/proxying/redirecting URLs containing _escaped_fragment_ to the corresponding HTML pages? I can't find documentation on this part of the process specifically. My first thought was that I should use a 301 or 302 redirect, but I was told that wasn't the case, albeit without being given any more information.
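    For what it's worth, here is a minimal sketch of the routing idea as a hypothetical Node.js/Express handler. The _escaped_fragment_ query parameter is the real one from Google's AJAX crawling scheme; the snapshots/ directory and file naming below are assumptions for illustration:

        var express = require("express");
        var path = require("path");
        var app = express();

        // A crawler requesting /?_escaped_fragment_=products gets the
        // prerendered snapshot; normal visitors fall through to the AJAX shell.
        app.get("/", function (req, res, next) {
            var fragment = req.query._escaped_fragment_;
            if (fragment === undefined) return next();
            // A real version must sanitize `fragment` against path traversal.
            var file = fragment === "" ? "index.html" : fragment + ".html";
            res.sendFile(path.join(__dirname, "snapshots", file));
        });

        app.use(express.static(path.join(__dirname, "public")));
        app.listen(3000);

    Note that this serves the snapshot directly with a 200 rather than redirecting, which is consistent with the scheme expecting the snapshot content itself.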

    Read the article

  • Using CSS3 is a bad practice? [closed]

    - by Qmal
    Possible Duplicate: Should I use HTML5 and/or CSS3 to build my website?

    I just want to know whether it's considered "bad practice" to use things like rounded corners, gradients and so on. I understand that there are bots and crawlers that do not process CSS, but they don't need to, and nowadays most people use browsers that handle CSS3 with no problem. So should I make my buttons, shadows and the like look pretty with CSS3, or with images?

    Read the article

  • Google updates reCAPTCHA and tests whether you are human before, during and after you interact with the CAPTCHA

    Google updates reCAPTCHA and tests whether you are human before, during and after you interact with the CAPTCHA. A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a security measure on the web for telling humans apart from robots and spammers by generating on-screen characters that are meant to be hard for bots to decipher. Many CAPTCHAs frequently irritate users, as the generated words...

    Read the article

  • Should I disallow (robots.txt) archive/author pages whose links are already available on the front page? [on hold]

    - by WPRookie82
    I am working on a simple WordPress blog where, when an article is published, it appears on ALL of these pages:

    - Homepage - headline (clickable) + 3-line summary
    - Parent category page - headline (clickable) + 3-line summary
    - Child category page - headline (clickable) + 3-line summary
    - Author page - headline (clickable)
    - sitemap.xml

    I've been told that I should add all author pages to my robots.txt under Disallow, so that search engine bots do not spider /author/*, since all the links on those pages are available elsewhere. Is this a good approach, or is rel=nofollow better, or should I not worry about this at all?
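    For reference, the Disallow rule being discussed would look something like the following; robots.txt matching is prefix-based, so /author/ already covers everything beneath it without needing a wildcard:

        User-agent: *
        Disallow: /author/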

    Read the article

  • The Power of a Sitemap

    You have a website and would like search engine bots to index it, because you would like to raise your standing in search engine results. The only thing that seems reasonable to do is to create back links, either in the pages themselves or via a script, to improve the pages' indexing.

    Read the article

  • Good SEO - How to Achieve It?

    There are many ways to describe exactly what SEO, or Search Engine Optimization, is, but basically it is the practice of finding and promoting certain markets via internet search, and of gaining a high rank for your web page in search engine results. Good Search Engine Optimization involves a couple of steps. First, you need to create a web page that can be reviewed and indexed quickly by the spiders and bots from Google and from other search engines like Yahoo and Bing.

    Read the article

  • Selectively allow unsafe html tags in Plone

    - by dhill
    I'm searching for a way to put widgets from several services (PicasaWeb, Yahoo Pipes, Delicious bookmarks, etc.) on the community site I host on Plone (currently 3.2.1); in other words, a way to allow a group of users to use dangerous HTML tags. I see a few possible approaches, but I don't know how to implement them. One would be changing safe_html for the pages those editors own (1). Another would be to allow those tags in some subtree of the site (2). Yet another would be finding an equivalent of the "static text portlet" that displays in the middle panel (3); we could then use one of the composite products (I stumbled upon Collage and CMFContentPanels) to include the unsafe content on other pages. My site has been overrun by advert bots, so I don't want to remove the filtering altogether, and I don't have an easy (no-false-positives) way of checking which users are bots, so deploying a captcha wouldn't help either. The question is: how do I implement any of these solutions? (I already asked this on the Plone mailing list without an answer, so I thought I would give it another try here.)

    Read the article

  • What's with all the mailing archives?

    - by Yuval
    When I google certain questions or problems I run into, I sometimes land on 'archive' sites: sites that contain forum questions or information copied from other sites, extremely poorly formatted compared with the well-formatted original posts on the various forums. One example is mail-archive.com, and there are various similar sites. Can anybody explain to me why these sites exist, and how come they don't get banned by Google (since all they have is copied content that has been badly formatted by bots)? Thanks!

    Read the article

  • Strange Ubuntu linux boot behaviour

    - by Slartibartfast
    I've recently installed Ubuntu 9.10 on a desktop machine as its only OS. If put into hibernation it wakes up normally, but if turned off completely, then after turning it on there is no "beep" from the BIOS and the HD lamp blinks for a while, then stops. When I hit reset in that state, it boots normally. What is going on, and how can I fix it?

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes thousands per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming cleverer at looking legit), and it is going to take me a while to find an acceptable solution. In the meantime I would like to reduce the performance impact of serving those 404 pages. We're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), and that isn't ideal. I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / {
            }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why the numbers are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)

        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is about 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not found" string from nginx. I have tried many things so far, but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that with error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx (without hacking the source, which may be my next step)?
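    For what it's worth, stock nginx can emit a short in-memory response body via the return directive (supported since 0.8.42), with no disk access and no third-party echo module; a minimal sketch, assuming a plain-text body is acceptable:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            return 404 "404 - Page not found";
        }

    Unlike echo, return sets the status code explicitly, so the response goes out as a 404 rather than a 200.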

    Read the article
