Search Results

Search found 1430 results on 58 pages for 'spam prevention'.

Page 23/58 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Imposing email limits on a web page

    - by Martin
    To deter spammers, what's a good strategy for imposing limits on users when they send email from our site? A daily count limit per IP address? Per sender address? Per domain? Answers in general terms are fine, but recommended figures would also be helpful. Our users can send emails through our web page. They can register and log in, but they are also allowed to send without logging in, provided they pass a captcha and fill in a field with the sender's email address. Admittedly, every message is prefixed with the header "The user has sent you the following message.", which limits its usefulness to spammers, so perhaps it's not a big problem. Any comments on what I'm doing will be greatly appreciated.
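
    As a minimal sketch of one of the options raised above, here is a per-IP daily cap written in Node-style JavaScript purely for illustration; the limit value, the in-memory store and the function name are assumptions, not figures recommended in the question.

        // Per-IP daily send counter (illustrative only; a real deployment would
        // persist counts in a database or cache shared across web servers).
        const DAILY_LIMIT_PER_IP = 20;      // assumed figure, tune to taste
        const counts = new Map();           // key: "ip|YYYY-MM-DD" -> sends so far today

        function allowSend(ip) {
            const key = ip + "|" + new Date().toISOString().slice(0, 10);
            const used = counts.get(key) || 0;
            if (used >= DAILY_LIMIT_PER_IP) {
                return false;               // over the cap: reject, or escalate to a captcha
            }
            counts.set(key, used + 1);
            return true;
        }

    The same counter keyed on the sender address or the sender's domain covers the other two limits the question asks about.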

    Read the article

  • Microsoft may be behind the takedown of the "Rustock" botnet, the most prolific in the history of spam

    Microsoft may be behind the takedown of the "Rustock" botnet, the most prolific in the history of spam. An anonymous group of security experts has managed to paralyze the spam operations that Rustock, the most prolific botnet in the history of the Internet, had been running for years. Since the afternoon of the 16th, none of Rustock's command-and-control (CnC) servers has been responding, a feat that coincides with a drastic drop in the number of junk emails worldwide. [Image: http://idelways.developpez.com/news/images/rustock.png] Although the operation has not yet been officially claimed, an investigation by the Wall St...

    Read the article

  • Google penalizes the search ranking of sites stuffed with ads that make the content hard to reach; they are treated as spam

    Google penalizes the search ranking of sites stuffed with ads that make the content hard to reach; they are treated as spam. Matt Cutts, head of Google's anti-spam team, came to the PubCon conference with good news and bad news. The search engine now penalizes the ranking of sites so stuffed with ads that reaching the relevant content of their pages becomes difficult. What counts, Cutts explained in his keynote, is "how much content is above the fold [...] If you have ads obscuring your content, you might want to rethink that", suggesting that sites which...

    Read the article

  • How good is Dotfuscator Community Edition? What is a "good enough obfuscator"?

    - by zendar
    I plan to release one small, low-priced utility. Since this is more hobby than business, I planned to use the Dotfuscator Community Edition that ships with VS2008. How good is it? I could also use a definition of "good enough obfuscator": which features missing from Dotfuscator Community Edition would make it good enough? Edit: I checked the pricing of a number of commercial obfuscators and they cost a lot. Are they worth it? Are the commercial versions that much better at protecting against reverse engineering? I'm not very afraid of my application being cracked (it would be disappointing if the application were so bad that nobody was interested in cracking it). It's not heavily protected anyway: a not overly complex serial key and licence checks in a few places in the code. It just bugs me that without obfuscation somebody can easily get the source code, rebrand it and sell it as their own. Does this happen a lot? Edit 2: Can somebody recommend a commercial obfuscator? I found lots of them; all are expensive, and some don't even list a price on their web site. Feature-wise, all the products seem more or less similar. What is the minimal set of features an obfuscator should have?

    Read the article

  • jQuery newbie: combine validate with hiding the submit button.

    - by Jeffb
    I'm new to jQuery. I have gotten validate to work with my form (MVC 1.0 / C#) with this:

        <script type="text/javascript">
            if (document.forms.length > 0) {
                document.forms[0].id = "PageForm";
                document.forms[0].name = "PageForm";
            }
            $(document).ready(function() {
                $("#PageForm").validate({
                    rules: { SigP: { required: true } },
                    messages: { SigP: "<font color='red'><b>A Sig Value is required. </b></font>" }
                });
            });
        </script>

    I also want to hide the Submit button, to prevent twitchy-mouse syndrome from causing a duplicate entry before the controller completes and redirects (I'm using a PRG pattern). The following works for this purpose:

        <script type="text/javascript">
            // prevent double-click on submit
            jQuery('input[type=submit]').click(function() {
                if (jQuery.data(this, 'clicked')) {
                    return false;
                } else {
                    jQuery.data(this, 'clicked', true);
                    return true;
                }
            });
        </script>

    However, I can't get the two to work together. Specifically, if validation fails after the Submit button is clicked (which happens, given how the form works), I can't get the form submitted again unless I do a browser refresh that resets the 'clicked' property. How can I rewrite the second method above so that it does not set the 'clicked' property unless the form validates? Thx.
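
    One way to tie the two pieces together, sketched below on the assumption that the double-submit guard only needs to kick in once validation has passed: the validation plugin's submitHandler callback only fires for a valid form, so the disable can live there instead of in a separate click handler. This is an illustrative sketch, not code from the thread.

        <script type="text/javascript">
            $(document).ready(function() {
                $("#PageForm").validate({
                    rules: { SigP: { required: true } },
                    messages: { SigP: "<font color='red'><b>A Sig Value is required. </b></font>" },
                    // Runs only when the form is valid, so a failed validation
                    // leaves the button enabled for another attempt.
                    submitHandler: function(form) {
                        jQuery(form).find('input[type=submit]').attr('disabled', 'disabled');
                        form.submit();   // native submit; does not re-enter the plugin
                    }
                });
            });
        </script>

    Because the disable happens only on a successful validation pass, a failed attempt never leaves the form in the stuck state described above.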

    Read the article

  • Reducing piracy of iPhone applications

    - by Alex Reynolds
    What are accepted methods to reduce iPhone application piracy, which do not violate Apple's evaluation process? If my application "phones home" to provide the unique device ID on which it runs, what other information would I need to collect (e.g., the Apple ID used to purchase the application) to create a valid registration token that authorizes use of the application? Likewise, what code would I use to access that extra data? What seem to be the best available technical approaches to this problem, at the present time? (Please refrain from non-programming answers about how piracy is inevitable, etc.)
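
    As a purely illustrative sketch of the "phones home" idea, here is what the server side of a registration check could look like, written here in Node-style JavaScript rather than Objective-C. The field names, the shared secret and the pairing of the device ID with a purchase or receipt reference are assumptions made for illustration; this is not an Apple-documented scheme.

        // Hypothetical server-side token issue/verify: the app reports its
        // identifiers once, caches the returned token, and re-presents it later.
        const crypto = require("crypto");
        const SERVER_SECRET = process.env.REG_SECRET;   // assumed: lives only on the server

        function registrationToken(deviceId, purchaseRef) {
            return crypto.createHmac("sha256", SERVER_SECRET)
                         .update(deviceId + ":" + purchaseRef)
                         .digest("hex");
        }

        function tokenIsValid(deviceId, purchaseRef, token) {
            // The server simply recomputes the HMAC and compares.
            return registrationToken(deviceId, purchaseRef) === token;
        }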

    Read the article

  • Avoid software running after it is copied to another machine?

    - by KoolKabin
    Hi guys, I have developed a small piece of software that I want to distribute commercially. It should run only on the machines of customers who have purchased it from me. If someone copies it from a client's computer and runs it on another computer, I would like the software to stop functioning. What are some ways to prevent this kind of piracy of my software?
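
    A common family of answers is to bind each licence to something about the machine it was sold for, so a byte-for-byte copy refuses to run elsewhere. The sketch below is illustrative JavaScript (the question does not say what the product is written in); the fingerprint source, key handling and function names are all assumptions.

        // Illustrative licence binding: the vendor signs the customer's machine
        // fingerprint with a private key; only the public key ships with the product.
        const crypto = require("crypto");

        function issueLicence(machineFingerprint, vendorPrivateKeyPem) {   // vendor side
            const signer = crypto.createSign("RSA-SHA256");
            signer.update(machineFingerprint);
            return signer.sign(vendorPrivateKeyPem, "base64");
        }

        function licenceMatchesMachine(machineFingerprint, licence, vendorPublicKeyPem) {  // client side
            const verifier = crypto.createVerify("RSA-SHA256");
            verifier.update(machineFingerprint);
            return verifier.verify(vendorPublicKeyPem, licence, "base64");
        }

    A determined attacker can still patch the check out of the binary, so schemes like this are usually treated as a deterrent rather than absolute protection.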

    Read the article

  • Prevent Rails link_to_remote multiple submits with JavaScript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices; they get prepended and appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()", the Ajax is never run. If I try it with :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page and it is valid to click more than one of them at a time, just not the same one twice. One variable per row, declared outside of link_to_remote, seems very kludgey... Instead of using Prototype, I originally tried plain JavaScript for this proof of concept, but it fails too: <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a> just puts up an alert when clicked; the lambda here does nothing? This next one is closer to the desired goal and should only alert the first time, but instead it alerts every time: <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a> All ideas appreciated!
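
    A plausible reason both plain-JavaScript attempts fail, with an illustrative fix: inside an inline onclick handler, self is window.self, not the anchor, so the assignments above replace the window's click handler rather than the link's. Using this, which does refer to the element in an inline handler, behaves as intended. The snippet below is a sketch, not code from the thread.

        <!-- 'this' in an inline handler is the <a> element itself, so the second
             click hits the replacement handler and is swallowed. -->
        <a href="#" onclick="alert('bar'); this.onclick = function(){ return false; };">click</a>

    The same substitution should carry over to the Rails helper, e.g. :after => "this.onclick = function(){ return false; }", since :before and :after are spliced into that same inline handler, though that part is untested here.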

    Read the article

  • Why doesn't my form post when I disable the submit button to prevent double clicking?

    - by John MacIntyre
    Like every other web developer on the planet, I have an issue with users double-clicking the submit button on my forms. My understanding is that the conventional way to handle this is to disable the button immediately after the first click, but when I do that, the form doesn't post. I did do some research on this (god knows there's enough information), but in other questions, such as Disable button on form submission, disabling the button appears to work. The original poster of Disable button after submit appears to have had the same problem as me, but there is no mention of how, or whether, he resolved it. Here's some code to reproduce it (tested in IE8 Beta 2, but I had the same problem in IE7). My aspx code:

        <%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <script language="javascript" type="text/javascript">
            function btn_onClick() {
                var chk = document.getElementById("chk");
                if (chk.checked) {
                    var btn = document.getElementById("btn");
                    btn.disabled = true;
                }
            }
        </script>
        <body>
            <form id="form1" runat="server">
                <asp:Literal ID="lit" Text="--:--:--" runat="server" />
                <br />
                <asp:Button ID="btn" Text="Submit" runat="server" />
                <br />
                <input type="checkbox" id="chk" />Disable button on first click
            </form>
        </body>
        </html>

    My cs code:

        using System;

        public partial class _Default : System.Web.UI.Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                btn.Click += new EventHandler(btn_Click);
                btn.OnClientClick = "btn_onClick();";
            }

            void btn_Click(object sender, EventArgs e)
            {
                lit.Text = DateTime.Now.ToString("HH:mm:ss");
            }
        }

    Notice that when you click the button, a postback occurs and the time is updated. But when you check the checkbox, the next time you click the button it is disabled (as expected) but it never does the postback. What the heck am I missing here? Thanks in advance.
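
    A hedged explanation and sketch, not taken from the thread: when a submit button disables itself inside its own click handler, the browser treats it as a non-successful control and drops its name/value from the POST data (and some IE versions suppress the submit altogether), so ASP.NET can no longer tell which button raised the postback and the server-side Click handler never runs. One commonly suggested workaround is to defer the disable until after the click event has finished, as below; setting UseSubmitBehavior="false" on the Button is another frequently mentioned route, since the postback is then issued by script rather than by the button's native submit behaviour.

        <script language="javascript" type="text/javascript">
            function btn_onClick() {
                var chk = document.getElementById("chk");
                if (chk.checked) {
                    // Disable on the next tick, after the form submission
                    // (including the button's name/value) is already under way.
                    window.setTimeout(function () {
                        document.getElementById("btn").disabled = true;
                    }, 0);
                }
            }
        </script>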

    Read the article

  • What am I missing in this ASP.NET XSS Security Helper class?

    - by smartcaveman
    I need a generic method for preventing XSS attacks in ASP.NET. The approach I came up with is a ValidateRequest method that evaluates the HttpRequest for any potential issues and, if issues are found, redirects the user to the same page, but in a way that is not threatening to the application (source code below). While I know this method will prevent most XSS attacks, I am not certain that I am adequately preventing all possible attacks while also minimizing false positives. So, what is the most effective way to adequately prevent all possible attacks while minimizing false positives? Are there changes I should make to the helper class below, or is there an alternative approach or third-party library that offers something more convincing?

        public static class XssSecurity
        {
            public const string PotentialXssAttackExpression =
                "(http(s)*(%3a|:))|(ftp(s)*(%3a|:))|(javascript)|(alert)|(((\\%3C)|<)[^\n]+((\\%3E)|>))";

            private static readonly Regex PotentialXssAttackRegex =
                new Regex(PotentialXssAttackExpression, RegexOptions.IgnoreCase);

            public static bool IsPotentialXssAttack(this HttpRequest request)
            {
                if (request != null)
                {
                    string query = request.QueryString.ToString();
                    if (!string.IsNullOrEmpty(query) && PotentialXssAttackRegex.IsMatch(query))
                        return true;

                    if (request.HttpMethod.Equals("post", StringComparison.InvariantCultureIgnoreCase))
                    {
                        string form = request.Form.ToString();
                        if (!string.IsNullOrEmpty(form) && PotentialXssAttackRegex.IsMatch(form))
                            return true;
                    }

                    if (request.Cookies.Count > 0)
                    {
                        foreach (HttpCookie cookie in request.Cookies)
                        {
                            if (PotentialXssAttackRegex.IsMatch(cookie.Value))
                            {
                                return true;
                            }
                        }
                    }
                }
                return false;
            }

            public static void ValidateRequest(this HttpContext context, string redirectToPath = null)
            {
                if (context == null || !context.Request.IsPotentialXssAttack())
                    return;

                // expire all cookies
                foreach (HttpCookie cookie in context.Request.Cookies)
                {
                    cookie.Expires = DateTime.Now.Subtract(TimeSpan.FromDays(1));
                    context.Response.Cookies.Set(cookie);
                }

                // redirect to safe path
                bool redirected = false;
                if (redirectToPath != null)
                {
                    try
                    {
                        context.Response.Redirect(redirectToPath, true);
                        redirected = true;
                    }
                    catch
                    {
                        redirected = false;
                    }
                }
                if (redirected)
                    return;

                string safeUrl = context.Request.Url.AbsolutePath.Replace(context.Request.Url.Query, string.Empty);
                context.Response.Redirect(safeUrl, true);
            }
        }

    Read the article

  • Do I need to sanitize the callback parameter from a JSONP call?

    - by christian studer
    I would like to offer a web service via JSONP, and I was wondering whether I need to sanitize the value of the callback parameter. My server-side script currently looks like this (more or less; the code is in PHP, but it could be anything really):

        header("Content-type: application/javascript");
        echo $_GET['callback'] . '(' . json_encode($data) . ')';

    This is a classic XSS vulnerability. If I do need to sanitize it, then how? I was unable to find enough information about what counts as an allowed callback string.
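
    A common answer is to whitelist the callback name against a strict identifier pattern before echoing it back. The sketch below shows the idea in Node-style JavaScript rather than the question's PHP; the exact regex (plain or dotted JavaScript identifiers) and the 400 response are assumptions about what "allowed" should mean, not a definitive specification.

        // Accept only dotted JavaScript identifiers as callback names.
        const CALLBACK_RE = /^[A-Za-z_$][A-Za-z0-9_$]*(\.[A-Za-z_$][A-Za-z0-9_$]*)*$/;

        function renderJsonp(res, callback, data) {
            if (typeof callback !== "string" || !CALLBACK_RE.test(callback)) {
                res.statusCode = 400;            // reject anything that is not a plain identifier
                res.end("invalid callback");
                return;
            }
            res.setHeader("Content-Type", "application/javascript");
            res.end(callback + "(" + JSON.stringify(data) + ")");
        }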

    Read the article

  • Where can I learn about security and online privacy?

    - by user278457
    I'd really like to start including shopping-cart functionality in my projects. At first I'm content to rely on PayPal links, but I really want to learn about specific security threats and how to combat them. Eventually I want to feel comfortable receiving and sending customer credit card details for e-commerce. Obviously this is a common thing on the net, but most tutorials and resources are content to say "it's every web developer's responsibility to consider security, but we're not going to cover that here/today/ever." So, my question is: where is a good place to learn? And once I've learned, how do I stay abreast of new vulnerabilities as the web evolves?

    Read the article

  • Microsoft AntiXSS in Medium Trust: Error

    - by aramugam
    I want to include the Microsoft AntiXSS v1.5 library on my live site, which runs in a medium trust setting. However, I got an error along the lines of: "Required permissions cannot be acquired. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Security.Policy.PolicyException: Required permissions cannot be acquired." I tried this in a full trust setting on my development machine and everything works fine. It looks like this will run only in a full trust configuration... Does anybody know a solution or workaround for this?

    Read the article

  • PHP Security checklist (injection, sessions etc)

    - by NoviceCoding
    So what kind of things should a person using PHP and MySQL focus on to maximize security? Things I have done:
    - mysql_real_escape_string all inputs
    - validate all inputs after escaping them
    - placed random alphanumerics before my table names
    - 50-character salt + RIPEMD passwords

    Here's where I think I am slacking: I know nothing about sessions and securing them. How unsafe/safe is it if all you are doing is:

        session_start();
        $_SESSION['login'] = $login;

    and checking it with:

        session_start();
        if (isset($_SESSION['login'])) {

    I heard something about other forms of injection, like cross-site injection and whatnot... and probably many other things I don't know about. Is there a "checklist"/quick tutorial on making PHP secure? I don't even know what I should be worried about. I kind of regret now not building on CakePHP, since I am not a pro.

    Read the article

  • Preventing decompilation of C# application

    - by Kalpak
    Hi, we are planning to develop a client-server application using C# and MySQL. We plan to sell the product off the shelf like any other software utility. We are worried about decompilation of our product, which has an edge over our competitors in terms of usability and bundled functionality. How can we prevent our software from being decompiled, so that the business logic of the product remains intact? We have heard about Reflector and other decompilers that make our code very vulnerable to copying. Our customer base is not corporates but medical practitioners, who themselves may not do it, but our competitors may want to copy the code or disable the licensing, so the value of our software goes down. Any suggestion to prevent this is most welcome. regards.. Obelisk

    Read the article

  • Does Exchange support plussed users (e.g. [email protected]) or a similar mechanism?

    - by Jens Bannmann
    Sendmail supports a feature called 'plussed users'. Once enabled, emails sent to [email protected], [email protected] and [email protected] are automatically delivered just like mails to [email protected]. There is no need to register or set up these 'plus suffixes'. The user can just use them and set up client-side filtering rules on his own. Does Exchange support a similar mechanism? If so, how to enable it? Note that I don't want answers about other means of filtering, e.g. spam/junk filtering, server-side or client-side rules, email aliases/addresses that are configured explicitly and so on.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >