Is it worthwhile to block malicious crawlers via iptables?
Posted by EarthMind on Server Fault, 2010-05-11
I periodically check my server logs and notice a lot of crawlers searching for the locations of phpMyAdmin, Zen Cart, Roundcube, administrator sections and other sensitive areas. There are also crawlers identifying themselves as "Morfeus Fucking Scanner" or "Morfeus Strikes Again" that probe for vulnerabilities in my PHP scripts, and crawlers that perform strange (XSS?) GET requests such as:
GET /static/)self.html(selector?jQuery(
GET /static/]||!jQuery.support.htmlSerialize&&[1,
GET /static/);display=elem.css(
GET /static/.*.
GET /static/);jQuery.removeData(elem,
Until now I've been collecting these IPs manually and blocking them with iptables. But since any given IP only makes a handful of these requests before moving on, I have my doubts that blocking them provides any real security advantage.
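For the record, what I do per offending IP is something like this (192.0.2.15 is a placeholder address):

# Block a single offending IP:
iptables -A INPUT -s 192.0.2.15 -j DROP

# With many entries, one set-based rule scales better than
# hundreds of individual rules:
ipset create blacklist hash:ip
ipset add blacklist 192.0.2.15
iptables -A INPUT -m set --match-set blacklist src -j DROP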
I'd like to know whether blocking these crawlers in the firewall does any good, and if so, whether there's a (not too complex) way of doing it automatically. And if it's wasted effort, perhaps because these requests come from new IPs after a while, I'd appreciate it if anyone could elaborate on this and suggest more efficient ways of denying or restricting access to malicious crawlers.
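To illustrate the kind of automation I have in mind, here's a rough sketch that pulls offending IPs out of the access log and feeds them into the "blacklist" ipset above (the log path and signature patterns are assumptions for my setup; a tool like fail2ban does this sort of thing properly, with timed unbans):

#!/bin/sh
# Sketch: extract source IPs of requests matching known scanner
# signatures and add them to the "blacklist" ipset created earlier.
# Log path and patterns are assumptions; adjust for your setup.
LOG=/var/log/apache2/access.log
grep -E 'Morfeus|phpmyadmin|w00tw00t' "$LOG" \
  | awk '{print $1}' \
  | sort -u \
  | while read ip; do
      # -exist makes re-adding an already-blocked IP a no-op
      ipset -exist add blacklist "$ip"
    done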
FYI: I'm also already blocking w00tw00t.at.ISC.SANS.DFind:) crawls, using the instructions at http://spamcleaner.org/en/misc/w00tw00t.html
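As I understand those instructions, the core of it is an iptables string match along these lines (a sketch of the idea, not their exact rule):

# Drop packets whose payload starts with the w00tw00t probe request,
# checking only the first 100 bytes to keep matching cheap:
iptables -I INPUT -p tcp --dport 80 \
  -m string --algo bm --to 100 --string 'GET /w00tw00t.at.ISC.SANS' \
  -j DROP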