Search Results

Search found 497 results on 20 pages for 'xss prevention'.

Page 9/20 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Penetration testing with Nikto, unknown results found

    - by heldrida
    I've scanned my new web server and I'm surprised to find programs in the results that I never installed. This is a fresh install of Ubuntu 12.04 with PHP 5.3, MySQL, fail2ban, Apache2, git and a few other things. Not sure if it's related, but I've also got WordPress installed - that doesn't have anything to do with myphpnuke, does it? I'd like to understand why I'm getting these results:

    + OSVDB-27071: /phpimageview.php?pic=javascript:alert(8754): PHP Image View 1.0 is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
    + OSVDB-3931: /myphpnuke/links.php?op=search&query=[script]alert('Vulnerable);[/script]?query=: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
    + OSVDB-3931: /myphpnuke/links.php?op=MostPopular&ratenum=[script]alert(document.cookie);[/script]&ratetype=percent: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
    + /modules.php?op=modload&name=FAQ&file=index&myfaq=yes&id_cat=1&categories=%3Cimg%20src=javascript:alert(9456);%3E&parent_id=0: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
    + /modules.php?letter=%22%3E%3Cimg%20src=javascript:alert(document.cookie);%3E&op=modload&name=Members_List&file=index: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
    + OSVDB-4598: /members.asp?SF=%22;}alert('Vulnerable');function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
    + OSVDB-2946: /forum_members.asp?find=%22;}alert(9823);function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.

    Thanks for looking!
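    A note on why findings like these show up: Nikto probes a long list of known-vulnerable paths whether or not the matching software is installed, so hits for myphpnuke, PostNuke or Web Wiz Forums on a box that never ran them are very often false positives, especially if the server answers unknown URLs with something other than a plain 404. A quick way to check is to re-request the flagged paths and look at the status codes; a minimal sketch in Python (standard library only, the hostname is a placeholder, not taken from the original post):

    # Re-request the paths Nikto flagged and see what the server actually returns.
    # A 404 suggests the finding is a false positive triggered by Nikto's
    # signature checks; a 200 deserves a closer look.
    import urllib.request
    import urllib.error

    paths = [
        "/phpimageview.php",
        "/myphpnuke/links.php",
        "/modules.php",
        "/members.asp",
        "/forum_members.asp",
    ]

    for path in paths:
        url = "http://example.com" + path   # placeholder host
        try:
            resp = urllib.request.urlopen(url, timeout=10)
            print(path, "->", resp.getcode())
        except urllib.error.HTTPError as e:
            print(path, "->", e.code)
        except urllib.error.URLError as e:
            print(path, "->", e.reason)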

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverFlow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage, as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

The Microsoft Web Protection Library (AntiXSS)

My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
            "<a href='http://west-wind.com'>West Wind</a> site. " +
            "<img src='http://west-wind.com/images/new.gif' /> ";

and the code to sanitize it with the AntiXSS Sanitizer class:

@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

This produced a not so useful sanitized string:

Here we go. Visit the <a>West Wind</a> site.

While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

Using Html Agility Pack for HTML Parsing

The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack.

Blacklist based HTML Parsing to strip XSS Code

For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude event handler attributes and attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" }, { "iframe" }, { "form" }, { "object" },
            { "embed" }, { "link" }, { "head" }, { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in blacklist
        /// </summary>
        /// <param name="html"></param>
        /// <returns></returns>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all elements that start
        /// with onXXX.
        /// </summary>
        /// <param name="html"></param>
        /// <returns></returns>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);

            //return doc.DocumentNode.WriteTo();

            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }
                writer.Close();
            }
            doc = null;

            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];

                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links from any attribute
                        else if (
                                 //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                 val != null && val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // remove CSS Expressions
                        else if (attr == "style" && val != null &&
                                 (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: Use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
The code above is semi-generic: it allows full-featured HTML fragments and only disallows script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklisted elements and removes those that are not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

What a Pain

To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly from a third party that provides security tools. Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide-open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

Resources:
Code on GitHub
Html Agility Pack
XSS Cheat Sheet
XSS Prevention Cheat Sheet
Microsoft Web Protection Library (AntiXss)
StackOverflow links:
http://stackoverflow.com/questions/341872/html-sanitizer-for-net
http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • How good is Dotfuscator Community Edition? What is "good enough obfuscator"?

    - by zendar
    I plan to release one small, low-priced utility. Since this is more hobby than business, I planned to use the Dotfuscator Community Edition that ships with VS2008. How good is it? I could also use a definition of "good enough obfuscator" - what features are missing from Dotfuscator Community Edition to make it good enough?

    Edit: I checked pricing on a number of commercial obfuscators and they cost a lot. Is it worth it? Are commercial versions that much better at protecting from reverse engineering? I'm not very afraid of my application being cracked (it would be disappointing if the application were so bad that no one is interested in cracking it). It's not heavily protected anyway - a not overly complex serial key and licence checks in a few places in the code. It just bugs me that without obfuscation, somebody can easily get the source code, rebrand it and sell it as their own. Does this happen a lot?

    Edit 2: Can somebody recommend a commercial obfuscator? I found lots of them, all of them are expensive, and some don't even have a price listed on their web site. Feature-wise, all the products seem more or less similar. What is the minimal set of features an obfuscator should have?

    Read the article

  • Spam proof hit counter in Django

    - by Jim Robert
    I already looked at the most popular Django hit counter solutions and none of them seem to solve the issue of spamming the refresh button. Do I really have to log the IP of every visitor to keep them from artificially boosting page view counts by spamming the refresh button (or writing a quick and dirty script to do it for them)?

    More information

    So right now you can inflate your view count with the following few lines of Python code, which is so little that you don't even really need to write a script - you could just type it into an interactive session:

    from urllib import urlopen
    num_of_times_to_hit_page = 100
    url_of_the_page = "http://example.com"
    for x in range(num_of_times_to_hit_page):
        urlopen(url_of_the_page)

    Solution I'll probably use

    To me, it's a pretty rough situation when you need to do a bunch of writes to the database on EVERY page view, but I guess it can't be helped. I'm going to implement IP logging due to several users artificially inflating their view count. It's not that they're bad people or even bad users. See the answer about solving the problem with caching... I'm going to pursue that route first. Will update with results. For what it's worth, it seems Stack Overflow is using cookies (I can't increment my own view count, but it increased when I visited the site in another browser). I think that the benefit is just too much, and this sort of 'cheating' is just too easy right now. Thanks for the help everyone!
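    A minimal sketch of the caching route mentioned above, assuming Django's cache framework and hypothetical model/field names (an article object with a views field): remember each IP per page for a short window and only count the first hit in that window.

    # Count a view only if this IP hasn't been seen for this page recently.
    # Model and field names are illustrative, not from the original question.
    from django.core.cache import cache
    from django.db.models import F

    COUNT_WINDOW = 60 * 60  # seconds to ignore repeat hits from the same IP

    def count_hit(request, article):
        ip = request.META.get("REMOTE_ADDR", "unknown")
        key = "hit:%s:%s" % (article.pk, ip)
        # cache.add() only stores the key if it doesn't already exist, so the
        # same IP increments the counter at most once per window.
        if cache.add(key, True, COUNT_WINDOW):
            # F() pushes the increment into the database and avoids a
            # read-modify-write race between concurrent requests.
            type(article).objects.filter(pk=article.pk).update(views=F("views") + 1)

    This keeps most repeat hits out of the database entirely, though it won't stop a script that rotates IPs, and cookie/session-based counting (what Stack Overflow appears to do) trades accuracy the other way.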

    Read the article

  • jQuery newbie: combine validate with hiding submit button.

    - by Jeffb
    I'm new to jQuery. I have gotten validate to work with my form (MVC 1.0 / C#) with this:

    <script type="text/javascript">
        if (document.forms.length > 0) {
            document.forms[0].id = "PageForm";
            document.forms[0].name = "PageForm";
        }
        $(document).ready(function() {
            $("#PageForm").validate({
                rules: {
                    SigP: { required: true }
                },
                messages: {
                    SigP: "<font color='red'><b>A Sig Value is required. </b></font>"
                }
            });
        });
    </script>

    I also want to hide the Submit button to prevent twitchy mouse syndrome from causing duplicate entry before the controller completes and redirects (I'm using a PRG pattern). The following works for this purpose:

    <script type="text/javascript">
        // prevent double-click on submit
        jQuery('input[type=submit]').click(function() {
            if (jQuery.data(this, 'clicked')) {
                return false;
            } else {
                jQuery.data(this, 'clicked', true);
                return true;
            }
        });
    </script>

    However, I can't get the two to work together. Specifically, if validate fails after the Submit button is clicked (which happens given how the form works), then I can't get the form submitted again unless I do a browser refresh that resets the 'clicked' property. How can I rewrite the second method above to not set the clicked property unless the form validates? Thx.

    Read the article

  • Anyone know of a good open source spam checker in java or c#?

    - by Spines
    I'm creating a site where users can write articles and comment on the articles. I want to automatically check to see if a new article or comment is spam. What are good libraries for doing this? I looked at bayesian classifier libraries, but it seems that I would have to gather a large amount of samples and classify them all as spam or not spam myself... I'm looking for something that can hopefully just tell me right out of the box.
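    One "out of the box" option is a hosted service such as Akismet, which is just an HTTP call and is easy to reach from Java or C#. A rough illustration (shown in Python; the key and blog URL are placeholders, and the exact parameter list should be checked against Akismet's current documentation):

    # Ask a hosted spam-check service whether a piece of content looks like spam.
    # API key and site URL below are placeholders.
    import urllib.parse
    import urllib.request

    API_KEY = "your-akismet-key"  # placeholder
    ENDPOINT = "https://%s.rest.akismet.com/1.1/comment-check" % API_KEY

    def looks_like_spam(ip, user_agent, author, content):
        data = urllib.parse.urlencode({
            "blog": "http://example.com",  # placeholder site URL
            "user_ip": ip,
            "user_agent": user_agent,
            "comment_type": "comment",
            "comment_author": author,
            "comment_content": content,
        }).encode("utf-8")
        with urllib.request.urlopen(ENDPOINT, data=data, timeout=10) as resp:
            return resp.read().strip() == b"true"  # the service answers "true" for spam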

    Read the article

  • Reducing piracy of iPhone applications

    - by Alex Reynolds
    What are accepted methods to reduce iPhone application piracy, which do not violate Apple's evaluation process? If my application "phones home" to provide the unique device ID on which it runs, what other information would I need to collect (e.g., the Apple ID used to purchase the application) to create a valid registration token that authorizes use of the application? Likewise, what code would I use to access that extra data? What seem to be the best available technical approaches to this problem, at the present time? (Please refrain from non-programming answers about how piracy is inevitable, etc.)
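    For the token itself, one common pattern (sketched here in Python and not specific to Apple's APIs - the field names and secret are illustrative) is to have the server derive the registration token as an HMAC over whatever the app phones home with, so a token can't be forged without the server-side secret. Note that apps generally cannot read the purchaser's Apple ID, so schemes like this usually key off the device identifier and/or App Store receipt data instead.

    # Server-side sketch: issue and verify a registration token bound to the
    # identifiers the app reports. The secret never ships inside the app.
    import hashlib
    import hmac

    SERVER_SECRET = b"keep-this-on-the-server-only"  # placeholder

    def issue_token(device_id, purchase_ref):
        msg = ("%s|%s" % (device_id, purchase_ref)).encode("utf-8")
        return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

    def token_is_valid(device_id, purchase_ref, token):
        expected = issue_token(device_id, purchase_ref)
        return hmac.compare_digest(expected, token)  # constant-time comparison

    Verification happens during the phone-home call, so the binary contains no secret to extract; the weak point then shifts to patching the client-side check, which is why this (like all such schemes) remains only a deterrent.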

    Read the article

  • Avoid running of software after copying to next machine?

    - by KoolKabin
    Hi guys, I have developed a small piece of software. I want to distribute it commercially and have it run only on the machines of people who have purchased it from me. If someone copies it from my client's computer and runs it on another computer, I would like the software to stop functioning. What are some ways to prevent this kind of piracy of my software?

    Read the article

  • How do I stop image spam from being uploaded to my (future) site?

    - by Pete Lacey
    I have in mind an idea for a generally accessible site that needs to allow images to be uploaded. But I'm stymied on how to prevent image spam: porn, ads in image form, etc. Assumptions: I'm assuming that the spammers are clever, even human. I'm skeptical of the efficacy of image analysis software. I do not have the resources to approve all uploads manually. I am willing to spend money on the solution -- within reason. This site will be location-aware, if that helps. How do Flickr or imgur do it? Or do they?

    Read the article

  • spam and dirty words comment post filtering in python (django)

    - by sintaloo
    Hi all, my basic question is how to filter spam and dirty words in a comment post system under Python (Django). I have a collection of phrases (approximately 3,000) to be filtered. Question (1): is there any existing open source Python (or Django) package/module/plugin that can handle this job? I know there is one called Akismet, but from what I understand it will not solve my problem: Akismet is a web service and filters against the word dictionary defined by Akismet, whereas I have my own collection of phrases. Please correct me if I am wrong. Question (2): if there is no such open source package I can use, how do I create my own? The only thing I can think of is to join all the phrases with 'or' in one regular expression, but with 3,000 phrases I'm not sure that will perform well when filtering every comment post. Any suggestions on where I should start? Thank you very much for your help and time.
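    For question (2), a fixed list of ~3,000 phrases is usually workable as a single compiled, case-insensitive alternation: escape each phrase, try longer phrases before their substrings, and build the pattern once at startup rather than per comment. A minimal sketch (the phrases are made up; if matching ever becomes a bottleneck, a multi-pattern matcher such as Aho-Corasick, e.g. the pyahocorasick package, scales better than one giant regex):

    # Build one case-insensitive pattern from the whole phrase list at startup.
    import re

    def build_filter(phrases):
        # Escape regex metacharacters; longer phrases first so they win over substrings.
        escaped = sorted((re.escape(p) for p in phrases), key=len, reverse=True)
        return re.compile("|".join(escaped), re.IGNORECASE)

    def find_banned(text, banned_re):
        return banned_re.findall(text)

    banned_re = build_filter(["spam phrase", "dirty word", "buy cheap meds"])  # placeholder phrases
    hits = find_banned("Buy CHEAP meds today!", banned_re)
    if hits:
        print("rejected:", hits)

    Depending on the phrases, you may also want \b word boundaries around each alternative so they don't match inside longer, innocent words.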

    Read the article

  • Query DNSBL or other block lists using PHP

    - by 55skidoo
    Is there any way to use PHP code to query a DNSBL (block list) provider and find out if a submitted IP address is a bad actor? I would like to take an existing IP address out of a registration database, check whether it's a known block-listed IP address by performing a lookup on it, and then, if it's blacklisted, take an action on it (such as deleting the entry from the registration database). Most of the instructions I have seen assume you are trying to query the blocklist via a mail server, which I can't do. I tried querying via web browser by typing in queries such as "58.64.xx.xxx.dnsbl.sorbs.net" but that didn't work.
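    A DNSBL isn't queried over HTTP, which is why pasting the name into a browser does nothing: it's an ordinary DNS A-record lookup, and the IP's octets have to be reversed before appending the list's zone (so 58.64.x.x becomes x.x.64.58.dnsbl.sorbs.net). If the name resolves, the IP is listed; NXDOMAIN means it isn't. In PHP, checkdnsrr() (or gethostbyname(), checking whether it returned an address rather than the unchanged name) on that reversed name does the job. The same lookup, sketched in Python for illustration (the zone is just an example):

    # DNSBL check: reverse the IPv4 octets, append the blocklist zone, resolve.
    import socket

    def is_listed(ip, zone="dnsbl.sorbs.net"):
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # resolves only if the IP is listed
            return True
        except socket.gaierror:
            return False

    print(is_listed("127.0.0.2"))  # most DNSBLs list 127.0.0.2 as a test entry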

    Read the article

  • Prevent Rails link_to_remote multiple submits w Javascript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices - they get prepended/appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()", the Ajax is never run. If I try it for :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page and it is valid to click more than one of them at a time - just not the same one twice. One variable per row declared outside of link_to_remote seems very kludgey... Instead of using Prototype I originally tried plain JavaScript first for this proof of concept - but it fails too:

    <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a>

    just puts up an alert when clicked - the lambda here does nothing? This next one is more like the desired goal and should only alert the first time, but instead it alerts every time:

    <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a>

    All ideas appreciated!

    Read the article

  • Why doesn't my form post when I disable the submit button to prevent double clicking?

    - by John MacIntyre
    Like every other web developer on the planet, I have an issue with users double clicking the submit button on my forms. My understanding is that the conventional way to handle this issue is to disable the button immediately after the first click; however, when I do this, it doesn't post. I did do some research on this - god knows there's enough information - and in other questions, like Disable button on form submission, disabling the button appears to work. The original poster of Disable button after submit appears to have had the same problem as me, but there is no mention of how/if he resolved it. Here's some code to repeat it (tested in IE8 Beta 2, but I had the same problem in IE7).

    My aspx code:

    <%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="_Default" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <script language="javascript" type="text/javascript">
        function btn_onClick() {
            var chk = document.getElementById("chk");
            if (chk.checked) {
                var btn = document.getElementById("btn");
                btn.disabled = true;
            }
        }
    </script>
    <body>
        <form id="form1" runat="server">
            <asp:Literal ID="lit" Text="--:--:--" runat="server" />
            <br />
            <asp:Button ID="btn" Text="Submit" runat="server" />
            <br />
            <input type="checkbox" id="chk" />Disable button on first click
        </form>
    </body>
    </html>

    My cs code:

    using System;

    public partial class _Default : System.Web.UI.Page
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            btn.Click += new EventHandler(btn_Click);
            btn.OnClientClick = "btn_onClick();";
        }

        void btn_Click(object sender, EventArgs e)
        {
            lit.Text = DateTime.Now.ToString("HH:mm:ss");
        }
    }

    Notice that when you click the button, a postback occurs and the time is updated. But when you check the check box, the next time you click the button, the button is disabled (as expected) but it never does the postback. WHAT THE HECK AM I MISSING HERE??? Thanks in advance.

    Read the article

  • Spotting similarities and patterns within a string - Python

    - by RadiantHex
    Hi folks, this is the use case I'm trying to figure this out for: I have a list of spam subscriptions to a service and they are killing conversion rates and other usability studies. The emails inserted look like the following:

    [email protected]
    [email protected]
    [email protected]
    roger[...]_surname[...]@hotmail.com

    What would be your suggestions for spotting these entries with an automated script? It feels a little more complicated than it looks. Help would be very much appreciated!
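    One hedged starting point: addresses generated in bulk usually share a template (name_surname plus a couple of digits at a free-mail domain), so you can group signups by the digit-stripped local part plus domain and flag clusters, as well as flag local parts that match the template outright. A sketch with made-up thresholds that would need tuning against your real data:

    # Heuristics for template-generated signup addresses. The threshold and the
    # template pattern are guesses to be tuned, not established rules.
    import re
    from collections import Counter

    TEMPLATE = re.compile(r"^[a-z]+_[a-z]+\d{1,3}$", re.IGNORECASE)

    def suspicious(emails, cluster_threshold=3):
        flagged = set()
        clusters = Counter()
        for email in emails:
            local, _, domain = email.partition("@")
            stem = re.sub(r"\d+", "", local).lower()
            clusters[(stem, domain.lower())] += 1
            if TEMPLATE.match(local):
                flagged.add(email)
        # Second pass: anything in a big enough cluster of near-identical names.
        for email in emails:
            local, _, domain = email.partition("@")
            stem = re.sub(r"\d+", "", local).lower()
            if clusters[(stem, domain.lower())] >= cluster_threshold:
                flagged.add(email)
        return flagged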

    Read the article

  • Calculating probability that a string has been randomized? - Python

    - by RadiantHex
    Hi folks, this is related to a question I asked earlier (question). I have a list of manually created strings such as:

    lucy87
    gordan_king
    fancy_unicorn77
    joplucky_kanga90
    base_belong_to_narwhals

    and a list of randomized strings:

    johnkdf
    pancake90kgjd
    fancy_jagookfk
    manhattanljg

    What gives away that the last set of strings is randomized is that they contain sequences such as 'kjg', 'jgf', 'lkd', ... . Is there any clever way I could separate these apparently randomized strings from the crowd? I guess that this plays a lot on the fact that certain characters are more likely to be placed next to others (e.g. 'co', 'ka', 'ja', ...). Any ideas on this one? Kylotan mentioned Reverend, but I am not sure if it can be used for such a purpose. Help would be much appreciated!
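    The intuition about unlikely letter sequences can be made concrete with a character-bigram model: learn how often each character follows another in known-good strings (real usernames or a word list), then score a candidate by its average log-probability per transition; mashed strings hit rare pairs and score low. A minimal sketch - the cut-off for "randomized" has to be calibrated on your own data, and the training set below is tiny and only illustrative:

    # Character-bigram scorer: low average log-probability suggests a randomized string.
    import math
    from collections import defaultdict

    def train(good_strings):
        pair_counts = defaultdict(lambda: defaultdict(int))
        for s in good_strings:
            s = "^" + s.lower() + "$"  # start/end markers
            for a, b in zip(s, s[1:]):
                pair_counts[a][b] += 1
        model = {}
        for a, following in pair_counts.items():
            total = sum(following.values())
            model[a] = {b: math.log(c / total) for b, c in following.items()}
        return model

    def score(model, s, unseen_penalty=math.log(1e-6)):
        s = "^" + s.lower() + "$"
        logp = sum(model.get(a, {}).get(b, unseen_penalty) for a, b in zip(s, s[1:]))
        return logp / (len(s) - 1)  # length-normalized average per transition

    model = train(["lucy87", "gordan_king", "fancy_unicorn77", "base_belong_to_narwhals"])
    for name in ["joplucky_kanga90", "johnkdf", "manhattanljg", "pancake90kgjd"]:
        print(name, round(score(model, name), 2))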

    Read the article

  • Malicious crawler blocker for ASP.NET

    - by Marek
    I have just stumbled upon Bad Behavior - a plugin for PHP that promises to detect spam and malicious crawlers by preventing them from accessing the site at all. Does something similar exist for ASP.NET/ASP.NET MVC? I am interested in blocking access to the site altogether, not in detecting spam after it was posted.
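    For reference, Bad Behavior's core trick is rejecting requests whose HTTP fingerprint doesn't look like a real browser or a legitimate crawler before any page is served; the same idea ports to ASP.NET as an IHttpModule or a global MVC action filter that ends the request with a 403. A framework-neutral illustration of the idea (written as Python/WSGI middleware here; the rules are simplified examples, not Bad Behavior's actual rule set):

    # Reject requests with implausible header fingerprints before the app runs.
    def looks_malicious(headers, method):
        ua = headers.get("User-Agent", "")
        if not ua:
            return True  # real browsers always send a User-Agent
        if method == "POST" and not headers.get("Referer"):
            return True  # simplistic: browser form posts normally carry a referrer
        if "Mozilla" in ua and not headers.get("Accept"):
            return True  # claims to be a browser but lacks normal browser headers
        return False

    class BlockerMiddleware:
        """WSGI middleware: drop suspicious requests with a 403."""
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            headers = {
                "User-Agent": environ.get("HTTP_USER_AGENT", ""),
                "Referer": environ.get("HTTP_REFERER", ""),
                "Accept": environ.get("HTTP_ACCEPT", ""),
            }
            if looks_malicious(headers, environ.get("REQUEST_METHOD", "GET")):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return self.app(environ, start_response)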

    Read the article

  • Where can I learn about security and online privacy?

    - by user278457
    I'd really like to start including shopping cart functionality in my projects. At first I'm content relying on PayPal links, but I really want to be learning about specific security threats and how to combat them. Eventually I want to feel comfortable receiving and sending customer credit card details for ecommerce. Obviously this is a common thing on the net, but most tutorials and resources are content to say "it's every web developer's responsibility to consider security, but we're not going to cover that here/today/ever." So, my question is: where is a good place to learn? And once I've learned, how do I stay abreast of new vulnerabilities as the web evolves?

    Read the article

  • Attack from anonymous proxy

    - by mmgn
    We got attacked by some very bored teenagers registering on our forums and posting very explicit material using anonymous proxy websites like http://proxify.com/. Is there a way to check the registration IP against a blacklist database? Has anyone experienced this and had success?

    Read the article
