Search Results

Search found 437 results on 18 pages for 'bot'.

Page 6/18

  • Is it possible to programmatically prevent a game from pausing when its window loses focus?

    - by user836045
    I'm playing Skyrim in windowed mode and am trying to create a bot for this game for personal use. I would like to have the bot play the game in the background while I do other things; the only problem is that the game window pauses when it loses focus. Is there a way to make the Skyrim process think that it still has focus, so it continues to run while I work in another window? I'm not a Windows programming expert, but would this be possible if I could somehow intercept the message that tells the process it is unfocused or minimized, and thus let the process think it's still focused? I think Skyrim uses DirectX, so is it possible to come up with a solution from that end?

    Read the article

  • When does Google give up recrawling a 301 that led to a 404?

    - by Easy Life
    I transferred a domain and made a mistake in the redirects (the URL structure is identical). Even though the redirects went to the new domain, the error caused a 404 when crawled by Googlebot. Ten days later I noticed and corrected my redirect mistake, and now the site should (hopefully) redirect to the proper pages. Q1: The URLs of the 404 pages in Webmaster Tools all bear the mistake and will never be available at the new site. I marked them as fixed in the tools. Do I need to do something more about that, like 301-rewriting them with a condition that fixes the error? Q2: Does Googlebot attempt to recrawl 301 pages that pointed to a 404?
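    For reference, one way to sanity-check the corrected redirects is to request the old URLs and confirm they now return a 301 whose target resolves; a minimal Python sketch (using the third-party requests library; the URLs are placeholders):

        # Minimal sketch: confirm old URLs now 301 to live targets.
        # Third-party dep: requests. The URLs below are placeholders.
        import requests

        old_urls = [
            "http://example.com/old-page-1",
            "http://example.com/old-page-2",
        ]

        for url in old_urls:
            r = requests.head(url, allow_redirects=False)
            target = r.headers.get("Location", "-")
            print(url, r.status_code, "->", target)
            if r.status_code in (301, 302) and target != "-":
                final = requests.head(target, allow_redirects=True)
                print("  final status:", final.status_code)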

    Read the article

  • C program runs in Cygwin but not Linux (Malloc)

    - by Shawn
    I have a heap allocation error that I can't spot in my code; it is picked up by valgrind/gdb on Linux but the program runs perfectly in a Windows Cygwin environment. I understand that Linux could be tighter with its heap allocation than Windows, but I would really like a response that discovers the issue/possible fix. I'm also aware that I shouldn't typecast malloc in C, but it's a force of habit and doesn't change my problem. My program actually compiles without error on both Linux and Windows, but when I run it in Linux I get a scary looking result:

        malloc.c:3074: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed.
        Aborted

    Attached is the snippet from my code that is being pointed to as the error, for review:

        /* Main */
        int main(int argc, char * argv[]) {
            FILE *pFile;
            unsigned char *buffer;
            long int lSize;

            pFile = fopen ( argv[1] , "r" );
            if (pFile==NULL) {fputs ("File error on arg[1]",stderr); return 1;}
            fseek (pFile , 0 , SEEK_END);
            lSize = ftell (pFile);
            rewind (pFile);
            buffer = (char*) malloc(sizeof(char) * lSize+1);
            if (buffer == NULL) {fputs ("Memory error",stderr); return 2;}
            bitpair * ppairs = (bitpair *) malloc(sizeof(bitpair) * (lSize+1)); //line 51 below
            calcpair(ppairs, (lSize+1));
            /* irrelevant stuff */
            fclose(pFile);
            free(buffer);
            free(ppairs);
        }

        typedef struct {
            long unsigned int a; //not actual variable names... Yes I need them to be long unsigned
            long unsigned int b;
            long unsigned int c;
            long unsigned int d;
            long unsigned int e;
        } bitpair;

        void calcpair(bitpair * ppairs, long int bits);

        void calcPairs(bitpair * ppairs, long int bits) {
            long int i, top, bot, var_1, var_2;
            int count = 0;
            for(i = 0; i < cs; i++) {
                top = 0;
                ppairs[top].e = 1;
                do {
                    bot = count;
                    count++;
                } while(ppairs[bot].e != 0);
                ppairs[bot].e = 1;
                var_1 = bot;
                var_2 = top;
                bitpair * bp = &ppairs[var_2];
                bp->a = var_2;
                bp->b = var_1;
                bp->c = i;
                bp = &ppairs[var_1];
                bp->a = var_2;
                bp->b = var_1;
                bp->c = i;
            }
            return;
        }

    gdb reports:

        free(): invalid pointer: 0x0000000000603290 *

    valgrind reports the following message 5 times before exiting due to "VALGRIND INTERNAL ERROR" signal 11 (SIGSEGV):

        Invalid read of size 8
        ==2727==    at 0x401043: calcPairs (in /home/user/Documents/5-3/ubuntu test/main)
        ==2727==    by 0x400C9A: main (main.c:51)
        ==2727==  Address 0x5a607a0 is not stack'd, malloc'd or (recently) free'd

    Read the article

  • Comparing two large sets of attributes

    - by andyashton
    Suppose you have a Django view that has two functions. The first function renders some XML using an XSLT stylesheet and produces a div with 1000 subelements like this:

        <div id="myText">
          <p id="p1"><a class="note-p1" href="#" style="display:none" target="bot">?</a></strong>Lorem ipsum</p>
          <p id="p2"><a class="note-p2" href="#" style="display:none" target="bot">?</a></strong>Foo bar</p>
          <p id="p3"><a class="note-p3" href="#" style="display:none" target="bot">?</a></strong>Chocolate peanut butter</p>
          (etc for 1000 lines)
          <p id="p1000"><a class="note-p1000" href="#" style="display:none" target="bot">?</a></strong>Go Yankees!</p>
        </div>

    The second function renders another XML document using another stylesheet to produce a div like this:

        <div id="myNotes">
          <p id="n1"><cite class="note-p1"><sup>1</sup><span>Trololo</span></cite></p>
          <p id="n2"><cite class="note-p1"><sup>2</sup><span>Trololo</span></cite></p>
          <p id="n3"><cite class="note-p2"><sup>3</sup><span>lololo</span></cite></p>
          (etc for n lines)
          <p id="n"><cite class="note-p885"><sup>n</sup><span>lololo</span></cite></p>
        </div>

    I need to see which elements in #myText have classes that match elements in #myNotes, and display them. I can do this using the following jQuery:

        $('#myText').find('a').each(function() {
            var $anchor = $(this);
            $('#myNotes').find('cite').each(function() {
                if ($(this).attr('class') == $anchor.attr('class')) {
                    $anchor.show();
                }
            });
        });

    However, this is incredibly slow and inefficient for a large number of comparisons. What is the fastest/most efficient way to do this - is there a jQuery/js method that is reasonable for a large number of items? Or do I need to reengineer the Django code to do the work before passing it to the template?

    Read the article

  • Managed C++ cross-threading

    - by Nitroglycerin
    I have a thread:

        static void TestThread(System::Object ^obj) {
            Bot ^ob = (Bot^) obj;
            while( ob->Threads[0]->IsAlive ) {
                ob->textBox->text = "test"; // Cross threading error...
                Thread::Sleep(100);
            }
        }

    I don't know what to do. I read about InvokeRequired and Invoke but didn't understand them. Please help.

    Read the article

  • How do I detect bots programmatically?

    - by Tom
    We have a situation where we log visits and visitors on page hits, and bots are clogging up our database. We can't use CAPTCHA or other techniques like that because this happens before we even ask for human input; basically we are logging page hits, and we would like to log only page hits by humans. Is there a list of known bot IPs out there? Does checking known bot user-agents work?
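    Checking known bot user-agents does catch the well-behaved crawlers (a malicious bot can send any user-agent it likes). As a rough illustration, a server-side filter might look like this minimal Python sketch; the substring list is illustrative, not an authoritative registry:

        # Minimal sketch: skip logging for hits whose user-agent looks like
        # a known crawler. The fragment list is illustrative, not exhaustive.
        KNOWN_BOT_FRAGMENTS = ("googlebot", "bingbot", "slurp", "crawler", "spider", "bot")

        def is_probably_bot(user_agent):
            ua = (user_agent or "").lower()
            return any(fragment in ua for fragment in KNOWN_BOT_FRAGMENTS)

        def log_page_hit(log, url, user_agent):
            if not is_probably_bot(user_agent):
                log.append((url, user_agent))  # record likely-human hits only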

    Read the article

  • Developing Bots - Where can I start?

    - by user947659
    I have a pet programming goal: to develop a bot for a game. Now, this won't be anything malicious; I just want to do this to further my knowledge of programming. Can anyone help me out by pointing to where I can start learning how to develop a bot? The types of bots I want to make are video game bots (online multiplayer, first-person shooters, and offline games like solitaire and such). Thanks!

    Read the article

  • Is F# a good language for card game AI?

    - by Anthony Brien
    I'm writing a Mahjong game in C# (the Chinese traditional game, not the solitaire kind). While writing the code for the bot player's AI, I'm wondering whether a functional language like F# would be a more suitable language than what I currently use, which is C# with a lot of Linq. I don't know much about F#, which is why I ask here. To illustrate what I am trying to solve, here's a quick summary of Mahjong: Mahjong plays a bit like Gin Rummy. You have 13 tiles in your hand, and each turn you draw a tile and discard another one, trying to improve your hand towards a winning Mahjong hand, which consists of 4 sets and a pair. Sets can be a 3 of a kind (pungs), a 4 of a kind (kongs) or a sequence of 3 consecutive tiles (chows). You can also steal another player's discard if it can complete one of your sets. The code I had to write to detect whether the bot can declare a 3-consecutive-tiles set (chow) is pretty tedious. I have to find all the unique tiles in the hand, and then start checking if there's a sequence of 3 tiles that contains that one in the hand. Detecting whether the bot can go Mahjong is even more complicated, since it's a combination of detecting whether there are 4 sets and a pair in its hand. And that's just a standard Mahjong hand. There are also numerous "special" hands that break those rules but are still a Mahjong hand. For example, "13 unique wonders" consists of 13 specific tiles, "Jade Empire" consists of only tiles colored green, etc. In a perfect world, I'd love to be able to just state the 'rules' of Mahjong, and have the language be able to match a set of 13 tiles against those rules to retrieve which rules it fulfills, for example, checking if it's a Mahjong hand or if it includes a 4 of a kind. Is this something F#'s pattern matching feature can help solve?
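    Not F#, but for a sense of how compactly the standard-hand rule can be stated, here is a rough Python sketch of a 4-sets-plus-a-pair checker via backtracking; the tile encoding (suit string plus numeric rank) is invented for the example and ignores honors and the special hands:

        # Rough sketch: check the standard "4 sets + a pair" rule.
        # Tile encoding is invented for this example: ("bamboo", 3) etc.,
        # with numeric ranks only (honors and special hands not handled).
        from collections import Counter

        def can_split(counts, sets_left, pair_used):
            counts = +counts                      # drop zero/negative entries
            if not counts:
                return sets_left == 0 and pair_used
            tile = min(counts)                    # lowest remaining tile
            suit, rank = tile
            # pung: three of a kind
            if counts[tile] >= 3 and sets_left:
                c = counts.copy(); c[tile] -= 3
                if can_split(c, sets_left - 1, pair_used):
                    return True
            # chow: run of three in one suit
            if counts[(suit, rank + 1)] and counts[(suit, rank + 2)] and sets_left:
                c = counts.copy()
                for t in (tile, (suit, rank + 1), (suit, rank + 2)):
                    c[t] -= 1
                if can_split(c, sets_left - 1, pair_used):
                    return True
            # the pair
            if counts[tile] >= 2 and not pair_used:
                c = counts.copy(); c[tile] -= 2
                if can_split(c, sets_left, True):
                    return True
            return False

        def is_standard_hand(tiles):              # tiles: list of 14 (suit, rank)
            return len(tiles) == 14 and can_split(Counter(tiles), 4, False)

    Each special hand would then just be another predicate over the same tile list, which is close in spirit to the "state the rules" approach the question describes.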

    Read the article

  • How do I fork correctly in a Perl module for ZNC?

    - by rarbox
    I'm currently writing an IRC bot. The scripts are loaded as Perl modules in ZNC, but the bot gets disconnected with an Input/Output error if I create a forked process. This is a working example script without fork, but it causes the bot to freeze until the script finishes its task.

        package imdb;
        use warnings;
        use strict;

        sub new {
            my ($class) = @_;
            my $self = {};
            bless( $self, $class );
            return( $self );
        }

        sub OnChanMsg {
            my ($self, $nick, $channel, $text) = @_;
            #unless (my $pid = fork()) {
                my $result = a_slow_process($text);
                ZNC::PutIRC( "PRIVMSG $channel :$result" );
            #    exit;
            #}
            return( ZNC::CONTINUE );
        }

        sub OnShutdown {
            my ( $me ) = @_;
        }

        sub a_slow_process {
            my $input = shift;
            sleep 10;
            return "You said $input.";
        }

        1;

    The fork code that is causing the error is commented out. How do I fix this? Edited to add: I was told that ZNC::PutIRC should not be put in the child process.

    Read the article

  • Googlebot can't access my site; Webmaster Tools replies "Unreachable robots.txt"

    - by Ahmad Ahmadi
    When I try to fetch my site as Googlebot in Webmaster Tools, it returns "Unreachable robots.txt". After investigating, I understand that Googlebot can see my server:

        tcpdump | grep google

    shows that Google can access my server from IP 66.249.81.172 or 66.249.75.111, but there is nothing in the access log, error log, or other Apache logs:

        cat access_log | grep google
        cat error_log | grep 66.249.81.172

    Other bots (Bing, ...) can access Apache, but Google can't. There is no problem with my robots.txt or its permissions; since, as you know, robots.txt is not required anyway, I deleted it, but Webmaster Tools again returned "Unreachable robots.txt", not "404 Not Found"! Information about the server:

        Server OS  : CentOS 6
        Web Server : Apache 2.x
        Firewall   : iptables is stopped
        SELinux    : disabled

    There is nothing else for security on my server. How can I investigate the problem, and is there any other command that can help me find it?

    Read the article

  • Finding a private (NAT) host's IP using historic destination data

    - by l0c0b0x
    The issue: An unknown private (NAT) client is infected with malware and is trying to reach a bot server at random times/dates.

    How we know about this: We receive bot traffic notices/alerts from REN-ISAC. Unfortunately, we don't receive those until the next day, after it has happened. What they provide to us is:

        - The source address (of the firewall)
        - The destination addresses (these vary, but they belong to a network subnet allocated to a German ISP)
        - The source port (which varies - dynamic ports)

    Question: What would be the best approach to finding this internal host (historically) with a Cisco ASA as the firewall? I'm guessing that blocking anything to the destination address(es) and logging that type of traffic/access might allow me to find the source host, but I'm not sure which tool/command would be the most useful. I've seen NetFlow thrown into a few responses when it comes to logging, but I'm confused by its association with logging, NAL, and NBAR, and how they relate to NetFlow.
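    If an ACL-based trap is acceptable, one rough sketch on the ASA (the ACL name, interface name, and subnet below are placeholders, not the real values) is an explicit deny with logging ahead of the usual permit, so every inside host that attempts the connection generates a syslog line naming its private IP:

        ! Sketch only - ACL name, interface, and subnet are placeholders.
        ! The "log" keyword emits a syslog (106100) naming the inside source.
        access-list INSIDE_OUT extended deny ip any 203.0.113.0 255.255.255.0 log
        access-list INSIDE_OUT extended permit ip any any
        access-group INSIDE_OUT in interface inside
        logging enable
        logging buffered informational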

    Read the article

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'server A') IRC 'network', and thanks to the generosity of some friends, I have been given a second server ('server B') on which I can run an IRCd in order to provide redundancy in case server A crashes. This is fine; I can set up a round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to 'fail over' in case of a server failure? E.g., server A starts off running the services but suddenly crashes; server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services on server A). One solution that comes to mind is to write a bot that each server runs, which sits in a channel periodically checking whether the bot from the other server is in the channel. If it is, then all is well. If not, then fail over. I would prefer not to have to code this myself, though. We are currently using UnrealIRCd and Anope services on Linux.

    Read the article

  • How can I use target mode in Linux with USB?

    - by dash17291
    Kernel 3.5 introduces: "This release includes a driver for using an IEEE-1394 connection as a SCSI transport. This enables exposing SCSI devices to other nodes on the FireWire bus, for example hard disk drives. It is similar in functionality to the FireWire Target Disk Mode on many Apple computers. This release also adds a usb-gadget driver that does the same with USB. Two USB protocols are supported: BBB or BOT (Bulk Only Transport) and UAS (USB Attached SCSI). BOT is advertised on alternative interface 0 (primary) and UAS on alternative interface 1. Both protocols can work on USB 2.0 and USB 3.0. UAS utilizes the USB 3.0 feature called streams support." http://kernelnewbies.org/Linux_3.5 I have an Arch Linux system with kernel 3.5.3-1 and want to try out this feature.

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: My site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords.

    Background: My server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was so friendly as to modify the index.php file in such a fashion that it generated random sexually oriented keywords if the website was fetched as Googlebot. So if you fetched it as Googlebot (i.e., as it got indexed), you would get randomly generated keywords like:

        sex videos teenager teen sex adult sex preteen
        A LINK TO A RANDOM CONTENT OF MY WEBPAGE
        anime sex videos

    (a rough list; something similar to that, about 180-200 per page). I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2000 of them. I removed all the content, took the site down, replaced index.php with a static HTML page and added an "ERROR 410" title to the website, so that the content is no longer here and is removed permanently. I also applied for a manual review of my website, about 1.5 months ago, but the keywords are still there, and, very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools:

    Question: How can I remove these filthy keywords and re-rank my website as a "normal" website in the fastest way? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you

    Read the article

  • Language redirect affecting pagerank and search listing?

    - by Janoszen
    Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised and detected that the Google+ integration no longer shows up in the search results AND the PageRank has gone from 2 to 0.

    How the redirect works:

        - If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
        - If the UA is a bot (.*bot.*), redirect to /en.
        - If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
        - If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
        - Otherwise, redirect to /en.

    We do of course have the proper markup on all pages to indicate the alternate language:

        <link hreflang="de" href="/de" rel="alternate" />

    As far as we can tell, we follow all publicly available guidelines from Google, so we are a bit at odds over whether this is a bug in Google or we have done something wrong.

    Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if yes, how does one implement a proper language redirection?
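    For concreteness, the decision chain above condenses to a few lines; a minimal Python sketch (the function, parameter names, and the supported-language set are invented for the illustration):

        # Minimal sketch of the redirect chain described above.
        # Names and the SUPPORTED set are invented for the example.
        import re

        SUPPORTED = {"en", "de", "fr"}

        def pick_language(cookie_lang, user_agent, accept_language, geoip_lang):
            if cookie_lang in SUPPORTED:                      # 1. cookie wins
                return cookie_lang
            if re.search(r"bot", user_agent or "", re.I):     # 2. bots get /en
                return "en"
            for part in (accept_language or "").split(","):   # 3. Accept-Language,
                code = part.split(";")[0].strip()[:2].lower() #    skipping English
                if code in SUPPORTED and code != "en":
                    return code
            if geoip_lang in SUPPORTED:                       # 4. GeoIP fallback
                return geoip_lang
            return "en"                                       # 5. default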

    Read the article

  • Security in Robots and Automated Systems

    - by Roger Brinkley
    Alex Dropplinger posted a Freescale blog on Securing Robotics and Automated Systems where she asks the question, "How should we secure robotics and automated systems?". My first thought on this was duh, make sure your robot is running Java. Java's built-in services for authentication, authorization, encryption/confidentiality, and the like can be leveraged and benefit robotic or autonomous implementations. Leveraging these built-in services and the pluggable encryption models of Java makes adding security to an existing bot implementation much easier. But then I thought I should ask an expert on robotics, so I fired the question off to Paul Perrone of Perrone Robotics. Paul builds automated vehicles and other forms of embedded devices, like automated monitoring of commercial vehicles on highways. He says that most of the work that robots do now is autonomous, so it isn't a problem in the short term. But long-term projects, like collision avoidance technology in automobiles, are going to require it. Some of the work he's doing with his Java-based MAX - a set of software building blocks containing a wide range of low-level and higher-level software modules that developers can use to build simple to complex robot and automation applications faster and cheaper - already provides some support for JAUS compliance and, because it's based on Java, access to standards-based security APIs. But, as Paul explained to me, "the bottom line is…it depends on the criticality level of the bot, its network connectivity, and whether or not standards compliance is required."

    Read the article

  • What are the requirements to test a website using jquery.get() ? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get(), for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie

    PS: Here's the JS/jQuery code for downloading the files into an array:

        g_lists = new Array();

        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here are some examples it produces, and the directory tree of the site. Maybe it will help; can't hurt.

        .
        +-- include
        |   +-- jquery.js
        |   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt

    Examples of path:

        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: after asking for help on an IRC chat channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.
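    For what it's worth, a lighter-weight alternative to a full Apache install is Python's built-in HTTP server; run from the site root, this minimal sketch (Python 3; the port is arbitrary) serves the files so $.get() goes over http:// instead of file://:

        # Serve the current directory over HTTP for local testing.
        # Run from the site root, then browse to http://localhost:8000/
        import http.server
        import socketserver

        PORT = 8000  # arbitrary local port

        with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
            print("Serving on port", PORT)
            httpd.serve_forever()

    The same thing is available as a one-liner: python3 -m http.server (or python -m SimpleHTTPServer on the Python 2 installs of that era).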

    Read the article

  • Webmaster Tools is throwing out 404 errors for a link that is not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors where pages on the site supposedly refer to another, incorrect URL. For example: URL not found: www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have obviously checked the source and cannot find any references to this link on any page. The only consistent pattern is that it only seems to report this error on pages with two path segments, i.e. www.plantify.co.uk/shop does not report any error, whilst pages like www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not report this error. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating those spambots later; right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps by my ineptness at inputting the correct search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site."

    How do forum spambots work? How do they find the 'new user registration' page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or even other methods invisible to the URL bar.) How do they know what to enter into each 'new user registration' field? How do they determine what's a page they can spam / enter data into and what is not? Do they even 'view' this page at all? If not, then I'd assume they're communicating with the server directly - how is this possible? How do they do it?

    Can forum spambots break CAPTCHAs? Can they solve logic questions (how?)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid to prevent them?

    Where do spambots come from? Is someone sitting behind the computer snickering as they watch their bot destroy site after site? Or are they snickering as they simply 'release' it onto the internet somehow? Are spambots 'run' by an infected computer somewhere? Do they replicate themselves? Etc.
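    On the mechanics question: most form spambots never render the page at all. They fetch the HTML, enumerate the form's input fields, and POST values straight back, which is why obscure registration URLs alone don't stop them. A minimal Python sketch of that pattern (using the third-party requests and beautifulsoup4 packages; the URL is a placeholder):

        # Minimal sketch of how a form bot operates: fetch, parse fields, POST.
        # Third-party deps: requests, beautifulsoup4. The URL is a placeholder.
        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin

        url = "http://forum.example/register"
        page = requests.get(url)
        form = BeautifulSoup(page.text, "html.parser").find("form")

        data = {}
        for field in form.find_all("input"):
            name = field.get("name")
            if name:
                # keep server-supplied defaults (hidden tokens and the like),
                # otherwise stuff in canned text
                data[name] = field.get("value") or "spam text"

        action = urljoin(url, form.get("action") or "")
        requests.post(action, data=data)  # the page is never "viewed"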

    Read the article

  • How do web crawlers affect site statistics?

    - by LM
    What are ways in which web crawlers (both from search engines and non-search engines) could affect site statistics (e.g., when A/B-testing different page variations)? And what are ways to take care of these problems? For example: do people writing web crawlers often delete their cookies and mask their IPs, so that the crawlers show up as different users each time they crawl the site? What heuristics can be used to recognize that something is a bot? (I'm guessing any sufficiently sophisticated bot can be indistinguishable from a real user if it wants to - is this correct?)

    Read the article

  • Robots Crawling Across Namespace?

    - by Codex73
    I migrated a site from one domain to another and placed a permanent redirect on the old account. My stats logs are capturing this:

        Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
        /libro_metaboforte_chap5.php/members/members/file_chap6.php

    I put this robots.txt in place, which wasn't present at the time of migration:

        User-agent: *
        Allow: /
        Disallow: /members/
        Disallow: /includes/

    .htaccess file contents:

        DirectoryIndex index.php index.html
        Options +FollowSymlinks

        # Turn on the rewriting engine
        RewriteEngine On
        RewriteBase /

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !^/store/?$
        RewriteCond %{QUERY_STRING} !.
        RewriteRule ^.+/?$ index.php [QSA,L]

        RewriteCond %{QUERY_STRING} ^curlang=([a-z]*)$
        RewriteRule ^.+/?$ index.php? [QSA,L]

    I will continue to log incoming bot captures. My .htaccess does rewrite; I just added the robots file. The funny part is that the bot is stepping into doubled directories. I don't know if the problem was not having robots.txt in place, or the in-place .htaccess doing rewrites.

    Read the article

  • How to click on a button in Python

    - by Ciobanu Alexandru
    I am trying to build a bot that clicks the "skip-ad" button on a page. So far, I have managed to use Mechanize to drive a browser-like session and connect to the page, but the Mechanize module does not support JS directly, so now I need something like Selenium, if I understand correctly. I am also a beginner in programming, so please be specific. How can I use Selenium, or if there is any other solution, please explain the details. This is the inner HTML code for the button:

        <a id="skip-ad" class="btn btn-inverse"
           onclick="open_url('http://imgur.com/gallery/tDK9V68', 'go'); return false;"
           style="font-weight: bold; " target="_blank"
           href="http://imgur.com/gallery/tDK9V68"> … </a>

    And this is my source so far:

        #!/usr/bin/python
        # FILENAME: test.py
        import mechanize
        import os, time
        from random import choice, randrange

        prox_list = []

        #list of common UAS to apply to each connection attempt to impersonate browsers
        user_agent_strings = [
            'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36',
            'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1',
            'Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14',
            'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52',
            'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:23.0) Gecko/20131011 Firefox/23.0',
            'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; Media Center PC 6.0; InfoPath.3; MS-RTC LM 8; Zune 4.7',
            'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)',
            'Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0',
            'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0; chromeframe/11.0.696.57)',
            'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; InfoPath.1; SV1; .NET CLR 3.8.36217; WOW64; en-US)',
            'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; GTB7.4; InfoPath.2; SV1; .NET CLR 3.3.69573; WOW64; en-US)'
        ]

        def load_proxy_list(target):
            #loads and parses the proxy list
            file = open(target, 'r')
            count = 0
            for line in file:
                prox_list.append(line)
                count += 1
            print "Loaded " + str(count) + " proxies!"

        load_proxy_list('proxies.txt')

        #for i in range(1,(len(prox_list) - 1)): # depreceated for overloading
        for i in range(1,30):
            br = mechanize.Browser()
            #pick a random UAS to add some extra cover to the bot
            br.addheaders = [('User-agent', choice(user_agent_strings))]
            print "----------------------------------------------------"
            #This is bad internet ethics
            br.set_handle_robots(False)
            #choose a proxy
            proxy = choice(prox_list)
            br.set_proxies({"http": proxy})
            br.set_debug_http(True)
            try:
                print "Trying connection with: " + str(proxy)
                #currently using: BTC CoinURL - Grooveshark Broadcast
                br.open("http://cur.lv/4czwj")
                print "Opened successfully!"
                #act like a nice little drone and view the ads
                sleep_time_on_link = randrange(17.0,34.0)
                time.sleep(sleep_time_on_link)
            except mechanize.HTTPError, e:
                print "Oops Request threw " + str(e.code)
                #future versions will handle codes properly, 404 most likely means
                # the ad-linker has noticed bot-traffic and removed the link
                # or the used proxy is terrible. We will either geo-locate
                # proxies beforehand and pick good hosts, or ditch the link
                # which is worse case scenario, account is closed because of botting
            except mechanize.URLError, e:
                print "Oops! Request was refused, blacklisting proxy!" + str(e)
                prox_list.remove(proxy)
            del br #close browser entirely
            #wait between 5-30 seconds like a good little human
            sleep_time = randrange(5.0, 30.0)
            print "Waiting for %.1f seconds like a good bot." % (sleep_time)
            time.sleep(sleep_time)
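    Since the button has an id, the Selenium part itself is small; a minimal sketch with Selenium's Python bindings (Firefox is used here but any driver works, the URL is taken from the script above, and this covers only the click, not the proxy/user-agent rotation):

        # Minimal sketch: click the "skip-ad" button with Selenium's Python
        # bindings. Firefox is arbitrary; the wait covers any ad countdown.
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        driver = webdriver.Firefox()
        driver.get("http://cur.lv/4czwj")

        # wait (up to 30 s) until the button becomes clickable, then click it
        button = WebDriverWait(driver, 30).until(
            EC.element_to_be_clickable((By.ID, "skip-ad"))
        )
        button.click()
        driver.quit()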

    Read the article

  • Chatbot using Twisted and wokkel

    - by dmitriy k.
    I am writing a chatbot using Twisted and wokkel, and everything seems to be working except that the bot periodically logs off. To temporarily fix that, I set presence to available on every connection initialized. Does anyone know how to prevent it from going offline? (I assume that if I keep sending available presence every minute or so the bot won't go offline, but that just seems too wasteful.) Suggestions, anyone? Here is the presence code:

        class BotPresenceClientProtocol(PresenceClientProtocol):

            def connectionInitialized(self):
                PresenceClientProtocol.connectionInitialized(self)
                self.available(statuses={None: 'Here'})

            def subscribeReceived(self, entity):
                self.subscribed(entity)
                self.available(statuses={None: 'Here'})

            def unsubscribeReceived(self, entity):
                self.unsubscribed(entity)

    Thanks in advance.
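    If periodic presence does turn out to be the pragmatic fix, Twisted's LoopingCall keeps it contained; a minimal sketch layered on the class above (the 60-second interval is an arbitrary choice, and whether re-sending presence is even needed depends on why the server drops the bot):

        # Minimal sketch: re-send presence on a timer via Twisted's LoopingCall.
        # The 60-second interval is arbitrary; tune to taste.
        from twisted.internet.task import LoopingCall

        class KeepAlivePresenceProtocol(BotPresenceClientProtocol):

            def connectionInitialized(self):
                BotPresenceClientProtocol.connectionInitialized(self)
                self._heartbeat = LoopingCall(self.available, statuses={None: 'Here'})
                self._heartbeat.start(60, now=False)

            def connectionLost(self, reason):
                if self._heartbeat.running:
                    self._heartbeat.stop()  # stop the timer when the stream dies
                BotPresenceClientProtocol.connectionLost(self, reason)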

    Read the article

  • "import as" leads to unresolved import error, "from .. import" does not

    - by Markus R.
    I'm trying to write some plugins for the IRC bot Supybot with Eclipse/PyDev. PyDev gives me errors about unresolved imports on Supybot modules/packages (e.g. import supybot.utils as utils), but is OK with e.g. "from supybot.commands import *". So I guess I set up PyDev correctly, as it finds the wanted modules. The problem must be in PyDev/Eclipse, as the bot works correctly, and in eric5 I get no errors about this either. Removing the interpreter and setting it up again didn't help. Any other ideas on how to fix this? System: Arch Linux, Eclipse Juno, PyDev 2.7.1; the wanted (and set up) Python interpreter is 2.7; Supybot is installed in site-packages for Python 2.7. Edit: Just noticed: PyDev doesn't mark the "from ... import *" lines as errors, but if I use a function imported from there, I get an error on that function.
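    A common workaround for this class of PyDev complaint, for what it's worth, is to switch the aliased form to a from-import, which the analyzer in the question already accepts; a tiny sketch:

        # PyDev flags this form as an unresolved import:
        #   import supybot.utils as utils
        # ...while the equivalent from-import typically passes its analysis:
        from supybot import utils
        from supybot.commands import *  # already accepted, per the question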

    Read the article

  • Add Recaptcha and GridView to an ASP.NET 3.5 Guestbook using MS SQL Server and VB.NET

    This is the conclusion to a four-part ASP.NET 3.5 guest book application tutorial series. In this last part you will learn how to integrate Recaptcha, which is used to prevent automated spam bot submissions. Also discussed is how to add a GridView web control, which is used to display all guest book comments retrieved from the database.

    Read the article
