Search Results

Search found 6460 results on 259 pages for 'spam filter'.

Page 23/259 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • manage messages reported as spam

    - by parm.95
    I want to have an admin page to manage user messages reported as 'spam'. I know how to use MySQL and PHP to list the reported messages, but I don't know what the safest way to give access to this page is: local only, HTTPS, ...? What strategy do big websites such as Facebook or MySpace use to verify that a message really is spam and to delete it?
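
    A minimal sketch of what such a moderation page could look like, assuming a hypothetical messages table, a spam_reports table and a session flag for moderators (none of these names come from the question); the access question is usually answered with HTTPS plus ordinary authentication rather than anything exotic:

    <?php
    // Hedged sketch of a moderation page: require HTTPS plus an authenticated
    // moderator, then list the messages users have flagged as spam.
    // Table and column names (messages, spam_reports, is_moderator) are illustrative.
    session_start();
    if (empty($_SERVER['HTTPS']) || empty($_SESSION['is_moderator'])) {
        header('HTTP/1.1 403 Forbidden');
        exit('Moderators only, over HTTPS.');
    }

    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');
    $rows = $pdo->query(
        'SELECT m.id, m.body, COUNT(r.id) AS reports
           FROM messages m
           JOIN spam_reports r ON r.message_id = m.id
          GROUP BY m.id, m.body
          ORDER BY reports DESC'
    );

    foreach ($rows as $row) {
        // The real page would render each flagged message with delete / dismiss actions.
        printf("#%d (%d reports): %s\n", $row['id'], $row['reports'], htmlspecialchars($row['body']));
    }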

    Read the article

  • blocking bad bots with robots.txt in 2012 [closed]

    - by Rachel Sparks
    does it still work good? I have this: # Generated using http://solidshellsecurity.com services # Begin block Bad-Robots from robots.txt User-agent: asterias Disallow:/ User-agent: BackDoorBot/1.0 Disallow:/ User-agent: Black Hole Disallow:/ User-agent: BlowFish/1.0 Disallow:/ User-agent: BotALot Disallow:/ User-agent: BuiltBotTough Disallow:/ User-agent: Bullseye/1.0 Disallow:/ User-agent: BunnySlippers Disallow:/ User-agent: Cegbfeieh Disallow:/ User-agent: CheeseBot Disallow:/ User-agent: CherryPicker Disallow:/ User-agent: CherryPickerElite/1.0 Disallow:/ User-agent: CherryPickerSE/1.0 Disallow:/ User-agent: CopyRightCheck Disallow:/ User-agent: cosmos Disallow:/ User-agent: Crescent Disallow:/ User-agent: Crescent Internet ToolPak HTTP OLE Control v.1.0 Disallow:/ User-agent: DittoSpyder Disallow:/ User-agent: EmailCollector Disallow:/ User-agent: EmailSiphon Disallow:/ User-agent: EmailWolf Disallow:/ User-agent: EroCrawler Disallow:/ User-agent: ExtractorPro Disallow:/ User-agent: Foobot Disallow:/ User-agent: Harvest/1.5 Disallow:/ User-agent: hloader Disallow:/ User-agent: httplib Disallow:/ User-agent: humanlinks Disallow:/ User-agent: InfoNaviRobot Disallow:/ User-agent: JennyBot Disallow:/ User-agent: Kenjin Spider Disallow:/ User-agent: Keyword Density/0.9 Disallow:/ User-agent: LexiBot Disallow:/ User-agent: libWeb/clsHTTP Disallow:/ User-agent: LinkextractorPro Disallow:/ User-agent: LinkScan/8.1a Unix Disallow:/ User-agent: LinkWalker Disallow:/ User-agent: LNSpiderguy Disallow:/ User-agent: lwp-trivial Disallow:/ User-agent: lwp-trivial/1.34 Disallow:/ User-agent: Mata Hari Disallow:/ User-agent: Microsoft URL Control - 5.01.4511 Disallow:/ User-agent: Microsoft URL Control - 6.00.8169 Disallow:/ User-agent: MIIxpc Disallow:/ User-agent: MIIxpc/4.2 Disallow:/ User-agent: Mister PiX Disallow:/ User-agent: moget Disallow:/ User-agent: moget/2.1 Disallow:/ User-agent: mozilla/4 Disallow:/ User-agent: Mozilla/4.0 (compatible; BullsEye; Windows 95) Disallow:/ User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 95) Disallow:/ User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 98) Disallow:/ User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows NT) Disallow:/ User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows XP) Disallow:/ User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 2000) Disallow:/ User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows ME) Disallow:/ User-agent: mozilla/5 Disallow:/ User-agent: NetAnts Disallow:/ User-agent: NICErsPRO Disallow:/ User-agent: Offline Explorer Disallow:/ User-agent: Openfind Disallow:/ User-agent: Openfind data gathere Disallow:/ User-agent: ProPowerBot/2.14 Disallow:/ User-agent: ProWebWalker Disallow:/ User-agent: QueryN Metasearch Disallow:/ User-agent: RepoMonkey Disallow:/ User-agent: RepoMonkey Bait & Tackle/v1.01 Disallow:/ User-agent: RMA Disallow:/ User-agent: SiteSnagger Disallow:/ User-agent: SpankBot Disallow:/ User-agent: spanner Disallow:/ User-agent: suzuran Disallow:/ User-agent: Szukacz/1.4 Disallow:/ User-agent: Teleport Disallow:/ User-agent: TeleportPro Disallow:/ User-agent: Telesoft Disallow:/ User-agent: The Intraformant Disallow:/ User-agent: TheNomad Disallow:/ User-agent: TightTwatBot Disallow:/ User-agent: Titan Disallow:/ User-agent: toCrawl/UrlDispatcher Disallow:/ User-agent: True_Robot Disallow:/ User-agent: True_Robot/1.0 Disallow:/ User-agent: turingos Disallow:/ User-agent: URLy Warning Disallow:/ User-agent: VCI Disallow:/ User-agent: VCI WebViewer VCI WebViewer Win32 Disallow:/ 
User-agent: Web Image Collector Disallow:/ User-agent: WebAuto Disallow:/ User-agent: WebBandit Disallow:/ User-agent: WebBandit/3.50 Disallow:/ User-agent: WebCopier Disallow:/ User-agent: WebEnhancer Disallow:/ User-agent: WebmasterWorldForumBot Disallow:/ User-agent: WebSauger Disallow:/ User-agent: Website Quester Disallow:/ User-agent: Webster Pro Disallow:/ User-agent: WebStripper Disallow:/ User-agent: WebZip Disallow:/ User-agent: WebZip/4.0 Disallow:/ User-agent: Wget Disallow:/ User-agent: Wget/1.5.3 Disallow:/ User-agent: Wget/1.6 Disallow:/ User-agent: WWW-Collector-E Disallow:/ User-agent: Xenu's Disallow:/ User-agent: Xenu's Link Sleuth 1.1c Disallow:/ User-agent: Zeus Disallow:/ User-agent: Zeus 32297 Webster Pro V2.9 Win32 Disallow:/
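
    Worth keeping in mind: robots.txt is purely advisory, so the bots listed above only stay away if they choose to read and honour the file. As a hedged fallback sketch (the names below are picked from the list above, not a complete denylist), the same user agents can also be refused at the application level, for example in PHP:

    <?php
    // Refuse requests whose User-Agent matches a denylist; robots.txt alone cannot
    // enforce this, since badly behaved bots simply ignore it. User agents are also
    // trivially spoofed, so treat this as a speed bump, not a guarantee.
    $badAgents = array('EmailCollector', 'EmailSiphon', 'SiteSnagger', 'WebZip', 'Wget');

    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    foreach ($badAgents as $bad) {
        if (stripos($ua, $bad) !== false) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
    }
    // ...normal request handling continues here...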

    Read the article

  • How can I block abusive bots from accessing my Heroku app?

    - by aem
    My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various reports from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user-agents, serve a 400 or similar to all requests from those user-agents, and add GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird to have an abusive scraper with a distinctive user-agent.)

    Read the article

  • Issue with Godaddy DNS manager

    - by Fischer
    I'm using domains.live.com to set up email for a domain registered with GoDaddy. The domains.live.com configuration page says GoDaddy's DNS manager isn't accepting this string value: v=spf1 include:hotmail.com ~all. It gives an error; something is wrong, either with the string or with the DNS manager, and I would like to know how to fix it. Notes: the 'more information' link is dead, GoDaddy no longer gives support by email, and there is no Microsoft support.

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the number of tools that hosting companies provide to track and quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every website known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of ease of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and ban the IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web and DB servers) and proactively deal with these sources of mayhem?
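
    As a rough illustration of automating the 'obvious patterns of abuse' check described above, here is a hedged PHP sketch that scans an access log for injection-looking request URLs and counts offenders per IP; the log path and the patterns are assumptions, not details from the question:

    <?php
    // Count, per client IP, the requests whose URL looks like a SQL injection probe.
    // Assumes a combined-format access log at a hypothetical path.
    $logFile = '/var/log/apache2/access.log';
    $suspect = '/union\s+select|information_schema|sleep\(|benchmark\(|%27/i';

    $offenders = array();
    foreach (file($logFile) as $line) {
        // Combined log format: the client IP is the first field, the request line is quoted.
        if (!preg_match('/^(\S+).*?"(?:GET|POST) (\S+)/', $line, $m)) {
            continue;
        }
        if (preg_match($suspect, urldecode($m[2]))) {
            $offenders[$m[1]] = isset($offenders[$m[1]]) ? $offenders[$m[1]] + 1 : 1;
        }
    }
    arsort($offenders);
    foreach ($offenders as $ip => $hits) {
        echo "$ip\t$hits suspicious requests\n";   // candidates for a firewall or fail2ban rule
    }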

    Read the article

  • Bingbot requests from Google IP address

    - by JITHIN JOSE
    We have some suspicious requests to our server: 74.125.186.46 - - [24/Aug/2014:23:24:11 -0500] "GET <url> HTTP/1.1" 200 16912 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" 74.125.187.193 - - [24/Aug/2014:23:24:12 -0500] "GET <url> HTTP/1.1" 200 20119 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" As the log shows, the user-agent claims to be bingbot, but whois data for the IP addresses (74.125.186.46 and 74.125.187.193) shows they belong to Google's servers. So is it Google, Bing, or some other content scraper?
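
    Both Bing and Google document the same verification method: reverse-resolve the IP, check the hostname suffix (genuine bingbot hosts end in .search.msn.com, Googlebot hosts in .googlebot.com or .google.com), then resolve that hostname forward again and confirm it maps back to the original IP. A hedged PHP sketch of that check, using the two IPs from the log lines above:

    <?php
    // Forward-confirmed reverse DNS: the documented way to verify bingbot/Googlebot claims.
    function isGenuineCrawler($ip, array $allowedSuffixes) {
        $host = gethostbyaddr($ip);          // reverse lookup; returns the IP unchanged on failure
        if ($host === false || $host === $ip) {
            return false;
        }
        $suffixOk = false;
        foreach ($allowedSuffixes as $suffix) {
            if (substr($host, -strlen($suffix)) === $suffix) {
                $suffixOk = true;
                break;
            }
        }
        // Forward-confirm: the claimed hostname must resolve back to the same IP.
        return $suffixOk && gethostbyname($host) === $ip;
    }

    var_dump(isGenuineCrawler('74.125.186.46', array('.search.msn.com')));                // bingbot?
    var_dump(isGenuineCrawler('74.125.187.193', array('.googlebot.com', '.google.com'))); // Googlebot?

    An IP in Google's address space presenting a bingbot user-agent will fail the first check, which usually means the user-agent string is simply forged by a scraper.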

    Read the article

  • Getting a lot of postmaster undeliverable notices for non-existent users

    - by Mike Walsh
    I've had my domain (straightpathsql.com) for a few years now. I host my e-mail with Google Accounts for business and have for a while. All of a sudden, in the past week I have started to get a lot of postmaster delivery-failure notices from various domains, most of them involving bogus e-mail addresses at my domain ([email protected], for example). My assumption here is that someone is trying to relay through some other host (not my hosts, which are secured through Google Apps for Business, I presume) and there isn't much I can do to stop it. But I just want to make sure there isn't something else I need to be looking at here. An example delivery-failure notice is below; I know nothing of those addresses and they look like garbage. (Quick edit: the reason I get these messages is that I set myself up as a catch-all, so no matter what e-mail address you send a note to at my domain, I'll get it if the account isn't set up. All of the failure messages are sent to bogus addresses on my domain.) The following message to <[email protected]> was undeliverable. The reason for the problem: 5.1.0 - Unknown address error 553-'sorry, this recipient is in my badrecipientto list (#5.7.1)' Final-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 (permanent failure) Remote-MTA: dns; [118.82.83.11] Diagnostic-Code: smtp; 5.1.0 - Unknown address error 553-'sorry, this recipient is in my badrecipientto list (#5.7.1)' (delivery attempts: 0) ---------- Forwarded message ---------- From: Howard Blankenship <[email protected]> To: omiivi2922 <[email protected]> Cc: Date: Subject: Hi omiivi2922

    Read the article

  • Do Spambots have access to unlimited IP addresses?

    - by Reg Gordon
    I have been attacked for weeks by the same spambot trying to brute-force the login page. I now have a login security module installed on my Drupal 6 website and it bans an IP after x failed attempts. It's been going on forever and I have banned about 1000 IP addresses. Is there any point in my banning by IP, given that the spambot may have access to unlimited IP addresses, or will they run out of them eventually?

    Read the article

  • Grapeshot crawler ignoring robots.txt

    - by QF_Developer
    Has anyone come across a crawler called Grapeshot? They are hammering the same page repeatedly on our website. I believe they are looking for ad-related keywords, based on previous content ad campaigns. The odd thing is we never ran any such campaigns on the page they are so interested in. We do have a few pages running AdSense; is this what has attracted Grapeshot? I've added the following declaration to my robots.txt, but they don't seem to be honouring it: User-agent: grapeshot Disallow: / Any ideas on how to block this nuisance crawler? I'm starting to think the best way is by setting up IP rules in IIS.

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various reports from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user-agents, serve a 400 or similar to all requests from those user-agents, and add GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird to have an abusive scraper with a distinctive user-agent.)

    Read the article

  • How do you determine whether a website is a scam [closed]

    - by Tom
    What's the best way to determine if a website is a scam. For example, at first sight (no pun intended) the following website seems to be legitimate. But the price of the product is suspiciously low (all the reviews point to an RRP of approximately £1000). http://www.maxiargos.com/index.php/asus-zenbook-ux31e-dh72-13-3-inch-thin-and-light-ultrabook-silver-aluminum.html Another indication is the lack of SSL for the checkout page, and lack of useful information in the WHOIS record. Registration Service Provided By: TMDHOSTING Contact: +1.8665325635 Domain Name: MAXIARGOS.COM Registrant: PrivacyProtect.org Domain Admin ([email protected]) ID#10760, PO Box 16 Note - All Postal Mails Rejected, visit Privacyprotect.org Nobby Beach null,QLD 4218 AU Tel. +45.36946676 Creation Date: 09-Nov-2011 Expiration Date: 09-Nov-2012 Domain servers in listed order: ns1.tmdhosting410.com ns2.tmdhosting410.com Administrative Contact: PrivacyProtect.org Domain Admin ([email protected]) ID#10760, PO Box 16 Note - All Postal Mails Rejected, visit Privacyprotect.org Nobby Beach null,QLD 4218 AU Tel. +45.36946676 Technical Contact: PrivacyProtect.org Domain Admin ([email protected]) ID#10760, PO Box 16 Note - All Postal Mails Rejected, visit Privacyprotect.org Nobby Beach null,QLD 4218 AU Tel. +45.36946676 Billing Contact: PrivacyProtect.org Domain Admin ([email protected]) ID#10760, PO Box 16 Note - All Postal Mails Rejected, visit Privacyprotect.org Nobby Beach null,QLD 4218 AU Tel. +45.36946676
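
    Scripting part of this check is straightforward: the WHOIS data quoted above can be pulled directly over the WHOIS protocol on TCP port 43. A rough sketch, assuming a .com domain and Verisign's registry server:

    <?php
    // Minimal WHOIS lookup over TCP port 43 (Verisign's server answers for .com domains).
    function whoisLookup($domain, $server = 'whois.verisign-grs.com') {
        $fp = fsockopen($server, 43, $errno, $errstr, 10);
        if (!$fp) {
            return "WHOIS connection failed: $errstr";
        }
        fwrite($fp, $domain . "\r\n");
        $response = '';
        while (!feof($fp)) {
            $response .= fgets($fp, 1024);
        }
        fclose($fp);
        return $response;
    }

    echo whoisLookup('maxiargos.com');
    // A very young domain, a privacy-protected registrant and a one-year registration
    // are weak signals individually, but alongside the too-good price and the missing
    // SSL checkout they add up.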

    Read the article

  • Possible automated Bing Ads fraud?

    - by Gary Joynes
    I run a website that generates life insurance leads. The site is very simple: a) there is a form for capturing the user's details, life insurance requirements, etc.; b) a quote comparison feature. We drive traffic to our site using conventional Google AdWords and Bing Ads campaigns. Since the 6th of January we have received 30-40 dodgy leads which have the following in common: all created between 2 and 8 AM; phone number always in the format '123 1234 1234'; name, date of birth, policy details and address all seem valid and are unique across the leads; email addresses from 'disposable' email accounts including dodgit.com, mailinator.com, trashymail.com, pookmail.com; some leads come from the customer form, some via the quote comparison feature; all come from different IP addresses; we get the keyword information passed through from the URLs; all look to be coming from Bing Ads; all come from Internet Explorer v7 and v8. The consistency of the data and the random IP addresses suggest an automated approach, but I'm not sure of the intent. We can handle identifying these leads within our database, but is there any way of stopping this at the ad level, i.e. before the click-through?
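
    A hedged sketch of how such leads could be flagged automatically at submission time, using only the signals listed above (the disposable domains and the phone pattern come from the question; the field names are assumptions):

    <?php
    // Return the reasons a submitted lead matches the observed fraud pattern
    // (an empty array means nothing suspicious was found).
    function leadLooksAutomated(array $lead) {
        $disposableDomains = array('dodgit.com', 'mailinator.com', 'trashymail.com', 'pookmail.com');
        $reasons = array();

        if (preg_match('/^\d{3} \d{4} \d{4}$/', $lead['phone'])) {
            $reasons[] = 'phone matches the "123 1234 1234" pattern';
        }
        $emailDomain = strtolower(substr(strrchr($lead['email'], '@'), 1));
        if (in_array($emailDomain, $disposableDomains, true)) {
            $reasons[] = 'disposable email domain';
        }
        $hour = (int) date('G');
        if ($hour >= 2 && $hour < 8) {
            $reasons[] = 'submitted in the 2-8 AM window';
        }
        return $reasons;
    }

    // Hypothetical lead data as it might arrive from the form handler.
    $flags = leadLooksAutomated(array('phone' => '123 1234 1234', 'email' => 'someone@mailinator.com'));

    Stopping it at the ad level is harder; flagging like this at least keeps the bad leads out of the sales pipeline while the Bing Ads side is investigated.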

    Read the article

  • What are some internet trends that you've noticed over the past ~10 years? [closed]

    - by Michael
    I'll give an example of one that I've noticed: the number of web sites that ask for your email address (GOOG ID, YAHOO! ID, etc.) has skyrocketed. I can come up with no legitimate reason for this other than (1) password reset [other ways to do this], or (2) to remind you that you have an account there, based upon the time of your last visit. Why does a web site need to know your email address (Google ID, etc.) if all you want to do is... download a file (no legit reason whatsoever) play a game (no legit reason whatsoever) take an IQ test or search a database (no legit reason whatsoever) watch a video or view a picture (no legit reason whatsoever) read a forum (no legit reason whatsoever) post on a forum (mildly legit reason: password reset) newsletter (only difference between a newsletter and a blog is that you're more likely to forget about the web site than you are to forget about your email address -- the majority of web sites do not send out newsletters, however, so this can't be the justification) post twitter messages or other instant messaging (mildly legit reason: password reset) buy something (mildly legit reasons: password reset + giving you a copy of a receipt that they can't delete, as receipts stored on their server can be deleted) On the other hand, I can think of plenty of very shady reasons for asking for this information: so the NSA, CIA, FBI, etc. can very easily track what you do by reading your email or asking GOOG, etc. what sites you used your GOOG ID at to use the password that you provide for your account in order to get into your email account (most people use the same password for all of their accounts), find all of your other accounts in your inbox, and then get into all of those accounts sell your email address to spammers These reasons, I believe, are why you are constantly asked to provide your email address. I can come up with no other explanations whatsoever. Question 1: Can anyone think of any legitimate or illegitimate reasons for asking for someone's email address? Question 2: What are some other interesting internet trends of the past ~10 years?

    Read the article

  • Huge surge in direct traffic from one particular town

    - by Jack Lockyer
    Last month I noticed that direct visits to our site have increased by nearly 150% while the bounce rate is also considerably up. After drilling down further I can see that we have had nearly 2000 direct visits from one town in Connecticut called Stamford, with a bounce rate of 100%! I have been scratching around for answers, but I can only find that it may be to do with our uptime monitoring tool, Pingdom. Does anyone know of or have any experience with this kind of issue? Any help is appreciated. I have just noticed that we are receiving identical traffic from a town in England and a town in Scotland... This definitely makes me think it's to do with our uptime monitoring tool.

    Read the article

  • What are some potential issues in blocking all incoming requests from the Amazon cloud?

    - by ElHaix
    Recently I, along with the rest of the world, have seen a significant increase in what appears to be scraping from Amazon AWS-related sources. So, simply put, I blocked all incoming requests from the Amazon cloud for our hosted application. I know that some good services/bots are now hosted on the cloud, and I'm wondering if certain IP addresses should be allowed, as they may gather data that would in the end benefit our site's SEO rankings. -- UPDATE -- I added a feature to block requests from the following hosts: Amazon, Softlayer, ServerDeals, GigAvenue. Since then, I have seen my network traffic decrease (monitored by network-out bytes). Average operation is around 10,000,000 bytes. You can see where last week I was not blocking, then started blocking. I've since removed the blocks and will see what the outcome is.
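
    AWS publishes its current address space as JSON at https://ip-ranges.amazonaws.com/ip-ranges.json, which makes the block/allow decision scriptable rather than guesswork. A hedged PHP sketch (IPv4 only, assumes a 64-bit PHP build):

    <?php
    // Check whether a visitor IP falls inside Amazon's published IPv4 ranges.
    // Assumes 64-bit PHP so the bitwise maths on ip2long() values behaves.
    function ipInCidr($ip, $cidr) {
        list($subnet, $bits) = explode('/', $cidr);
        $mask = $bits == 0 ? 0 : (~0 << (32 - (int) $bits)) & 0xFFFFFFFF;
        return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
    }

    function isAwsIp($ip) {
        $json = file_get_contents('https://ip-ranges.amazonaws.com/ip-ranges.json');
        $data = json_decode($json, true);
        foreach ($data['prefixes'] as $prefix) {
            if (ipInCidr($ip, $prefix['ip_prefix'])) {
                return true;   // $prefix['service'] could also be inspected to spare wanted services
            }
        }
        return false;
    }

    // Example: decide whether to serve a 403 to the current visitor.
    if (isAwsIp($_SERVER['REMOTE_ADDR'])) {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }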

    Read the article

  • Chinese bots in my forum

    - by TdotThomas
    I have a small community forum that doesn't really get posts or any real traffic. The only thing that happens on the regular is bots with Chinese IPs signing up gibberish usernames. Most bots don't make it past the captcha but some do. I try to stay on top of this by banning IPs and ranges of IPs but it doesn't really seem to help. The bots never post anything so what are they doing? Should I be worried? Should I keep banning IPs or is it futile?

    Read the article

  • We've had our content copied under a different URL - why and what do we do?

    - by Shaun
    We have a problem. We've noticed a large amount of traffic showing up in our Google Analytics. Upon further investigation we have found that our content has been copied under a different URL. Our site: http://www.targetis.co.uk The copied site: http://www.target-is.com (it isn't loading in Chrome for us). We don't own this domain. The content is hosted on their server (not served via a proxy). Most of the traffic is coming from a video hosting site. What do we do?

    Read the article

  • How to create filter option in RDLC report column ?

    - by prasanna
    I want to have a filter option (like the one available in an Excel sheet) on an RDLC report column. Is it possible, and how? http://img62.imageshack.us/img62/2223/sample1n.png I am trying this in Visual Studio 2005 with the Microsoft Report Viewer. Kindly let me know the option for this, or a workaround for it. Thanks

    Read the article

  • Adding a Taxonomy Filter to a Custom Post Type

    - by ken
    There is an amazing conversation from about two years ago on the WordPress Answers site where a number of people came up with good solutions for adding a taxonomy filter to the admin screen for your custom post types (see this URL for the screen I'm referring to): http://[yoursite.com]/wp-admin/edit.php?s&post_status=all&post_type=[post-type] Anyway, I loved Michael's awesome contribution but in the end used Somatic's implementation with the hierarchy option from Manny. I wrapped it in a class (because that's how I like to do things) and it ALMOST works. The dropdown appears, but the code looks in $_GET for the slug of the taxonomy you are filtering by, and for some reason I don't get anything there. I looked at the HTML of the dropdown and it appears OK to me. Here's a quick screenshot for some context: you can tell from this that my post type is called "exercise" and that the taxonomy I'm trying to use as a filter is "actions". Here then is the HTML surrounding the dropdown list: <select name="actions" id="actions" class="postform"> <option value="">Show all Actions</option> <option value="ate-dinner">Ate dinner(1)</option> <option value="went-running">Went running(1)</option> </select> I have also confirmed that all of the form elements are inside the form in the DOM. And yet if I choose "Went running" and click on the filter button, the URL query string comes back without ANY reference to what I've picked. More explicitly, the page first loads with the following URL: /wp-admin/edit.php?post_type=exercise and after pressing the filter button while having picked "Went Running" as an option from the actions filter: /wp-admin/edit.php?s&post_status=all&post_type=exercise&action=-1&m=0&actions&paged=1&mode=list&action2=-1 Actually, you can see a reference to an "actions" variable, but it's set to nothing, and as I now look in detail it appears that the moment I hit "filter" on the page it resets the filter dropdown to the default "Show All Actions". Can anyone help me with this?
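
    For reference, a hedged sketch of the commonly used restrict_manage_posts / parse_query pattern for this screen, using the names from the question (post type "exercise", taxonomy "actions"); it is a sketch of that pattern, not a diagnosis of the class-based version described above:

    <?php
    // Add a taxonomy dropdown to the admin list table for the 'exercise' post type.
    function exercise_actions_filter_dropdown() {
        global $typenow;
        if ($typenow !== 'exercise') {
            return;
        }
        wp_dropdown_categories(array(
            'show_option_all' => 'Show all Actions',
            'taxonomy'        => 'actions',
            'name'            => 'actions',
            'selected'        => isset($_GET['actions']) ? $_GET['actions'] : '',
            'hierarchical'    => true,
            'show_count'      => true,
            'hide_empty'      => true,
        ));
    }
    add_action('restrict_manage_posts', 'exercise_actions_filter_dropdown');

    // wp_dropdown_categories() submits term IDs, but the taxonomy query var expects a
    // slug, so convert the value before the main list-table query runs.
    function exercise_actions_filter_query($query) {
        global $pagenow;
        if ($pagenow !== 'edit.php' || empty($query->query_vars['actions']) || !is_numeric($query->query_vars['actions'])) {
            return;
        }
        $term = get_term_by('id', (int) $query->query_vars['actions'], 'actions');
        if ($term) {
            $query->query_vars['actions'] = $term->slug;
        }
    }
    add_action('parse_query', 'exercise_actions_filter_query');

    If the taxonomy was registered with a custom query_var, the dropdown's 'name' argument and the key checked in parse_query both need to match that query_var; otherwise the submitted value is silently ignored, which would produce symptoms much like the ones described.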

    Read the article
