Search Results

Search found 1430 results on 58 pages for 'spam prevention'.


  • Receive anonymous users' input by web upload form or email. Any online service for that?

    - by sja
    Are you aware of any online service or online "platform" allowing users, not previously registered, to upload pairs of picture+comment to a database? It would be a collaborative database of picture+comment pairs. I'm not going with a wiki, Google Group, Picasa or the like because I'd like the user to have the least possible to do to participate, e.g.: take a picture with his phone and email it to an email address. Or go to a web page with an upload form, type in a description, hit OK and that's it. The goal is also for it to be as hassle-free to set up as possible. Yeah I know, it can't program itself to my requirements :) but I'm suspecting there's a tool or tool combination that goes a decent way toward my needs. Thanks for any info/advice! SJA (NB the final goal is a kind of crowd-sourced census of specific urban items. If you have comments about the potential for spam-overload of my idea, other than "you're doomed", you're welcome!)

    Read the article

  • Where can I find SPF record information for Network Solutions hosted email?

    - by Steve Brown
    I have a client whose e-mail is hosted with Network Solutions, and some of the e-mails they send are going to spam. I've searched Google for information on Network Solutions and SPF records, but all the results have to do with "how to set up an SPF record" - which I understand how to do. The problem is that I can't find any information about what SPF records to use for Network Solutions hosted e-mail. I even tried looking up SPF records on networksolutions.com but it appears there are none. Where can I find SPF record information for Network Solutions hosted email?
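
    One way to hunt for this yourself: providers that publish SPF guidance usually expose it as a TXT record on their own mail domains. A quick lookup sketch using the third-party dnspython package - the domain names below are illustrative guesses, not confirmed Network Solutions values:

    import dns.resolver

    # Print any TXT records that look like SPF policies.
    for domain in ('networksolutions.com', 'netsol.com'):
        try:
            for rdata in dns.resolver.query(domain, 'TXT'):
                txt = ''.join(rdata.strings)
                if txt.startswith('v=spf1'):
                    print domain, '->', txt
        except Exception as e:
            print domain, '->', e

    Whatever include: mechanism the provider actually documents would then go into the client's own SPF record.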

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps which contain all of these pages (1m+), because they contain only the file name/size/number of downloads and the download link(s). I'm wary because I've made another website like this in the past and Google banned me after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?!). Does someone have an idea about how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap or wait until Google indexes every page as it finds them? Thanks in advance :)
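
    On the sitemap mechanics: the sitemap protocol caps each file at 50,000 URLs, so a million-plus pages requires a sitemap index that points at many sitemap files. A minimal generator sketch - file names and the base URL are placeholders:

    # Write per-chunk sitemap files plus an index that references them.
    CHUNK = 50000  # protocol limit per sitemap file

    def write_sitemaps(urls, base='http://example.com/sitemaps/'):
        chunks = [urls[i:i + CHUNK] for i in range(0, len(urls), CHUNK)]
        for n, chunk in enumerate(chunks):
            with open('sitemap-%d.xml' % n, 'w') as f:
                f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
                f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
                for u in chunk:
                    f.write('  <url><loc>%s</loc></url>\n' % u)
                f.write('</urlset>\n')
        with open('sitemap-index.xml', 'w') as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for n in range(len(chunks)):
                f.write('  <sitemap><loc>%ssitemap-%d.xml</loc></sitemap>\n' % (base, n))
            f.write('</sitemapindex>\n')

    Whether to submit it at all is a separate judgment call; the index format itself is standard.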

    Read the article

  • Another website is mirroring and ranks above my site in search results

    - by Marlboro Goodluck
    There is a site of ill repute known as thedirty which has completely mirrored my site and now has links appearing on Google at the #1 spot using my content. I checked my log files and noticed that this site has been crawling mine for some time, and it also has 10,000 links from their site to mine. I have blocked user access referred from this site and already reported them to Google as web spam. I also disavowed the domain. How are they getting top links in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminating an issue such as this?

    Read the article

  • Facebook fixes a bug that may have leaked the data of six million users

    Facebook fixes a bug that may have leaked the data of six million users. If you received an email from Facebook this weekend explaining that your account was compromised by a bug, be aware that it is not spam. Last Friday, Facebook played the transparency card, announcing in a blog post that it had been the victim of a software fault that leaked the data of nearly six million users. Download Your Information (DYI) is the tool that caused the malfunction. It is responsible for retrieving users' email addresses and phone numbers as part of the secu...

    Read the article

  • Google to strengthen its anti-cloaking measures, a major change that could impact mobile sites and rich content

    Google to strengthen its anti-cloaking measures. A major change that could impact mobile sites and rich content. As a New Year's resolution, Google has decided to tighten the screws on "cloaking", the practice of presenting search engines with content that differs from what is shown to the user. Often used for spam, the technique is penalized by Google and other search engines. Yet many webmasters find it useful for improving the ranking of sites built with technologies still poorly handled by search engine bots, such as Flash or Rich Medi...

    Read the article

  • Is it okay for an application to check for automatic updates at intervals of less than 20 hours?

    - by FlameStream
    I have a desktop application that has the ability to automatically update itself on the next restart (without the user's consent - but that is another issue altogether). Assuming that the user would never notice anything related to application updating (such as a progress bar, or a pop-up requiring restart), and that our server could support the request load, is there any reason why it should not check for updates at intervals of less than 20 hours? The reason I'm asking is that all applications I know of with auto-update capability check for updates every 20 to 24 hours and at startup. I was just wondering if there was an ethical rule about it, or if it's simply because of the risk of overloading the server.
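
    There is no protocol-level rule below 20 hours; the practical concern is synchronized load on the update server, which randomized jitter largely removes. A sketch of a jittered check loop - the interval values are arbitrary illustrations:

    import random
    import time

    BASE_INTERVAL = 4 * 60 * 60   # check roughly every 4 hours (illustrative)
    JITTER = 30 * 60              # +/- 30 minutes so clients don't all align

    def check_for_updates():
        print "checking for updates..."   # placeholder for the real HTTP check

    while True:
        check_for_updates()
        time.sleep(BASE_INTERVAL + random.uniform(-JITTER, JITTER))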

    Read the article

  • Another website is mirroring my site

    - by Marlboro Goodluck
    Question for you all. There is a site of ill repute known as thedirty which has completely mirrored my site and now has links appearing on Google at the #1 spot using my content. I checked my log file and noticed that this site has been crawling mine for some time, and it also has 10k links from their site to mine. I have blocked user access referred from this site and already reported them to Google as web spam. I also disavowed the domain. How are they getting top links in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminating an issue such as this?

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps, and it works very well with very little development effort involved.

    Then the client sprung another note: Oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

    There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to load, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

    I thought the best approach would be to use an HTML parser - in this case the Html Agility Pack - and then run through all the HTML markup provided and remove any of the blacklisted tags, plus a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, as the vast majority of elements and attributes can be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

    The Microsoft Web Protection Library (AntiXSS)

    My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitizer class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitizer class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
    To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else left intact. Here's the original HTML:

    var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
                "<a href='http://west-wind.com'>West Wind</a> site. " +
                "<img src='http://west-wind.com/images/new.gif' /> ";

    and the code to sanitize it with the AntiXSS Sanitizer class:

    @Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

    This produced a not-so-useful sanitized string:

    Here we go. Visit the <a>West Wind</a> site.

    While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict, this is useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

    Using Html Agility Pack for HTML Parsing

    The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

    Blacklist-based HTML Parsing to strip XSS Code

    For my purposes of HTML sanitation, the process is to walk the HTML document one element at a time and check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration, and in the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude attributes that contain references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
    Here's my HtmlSanitizer implementation:

    using System.Collections.Generic;
    using System.IO;
    using System.Xml;
    using HtmlAgilityPack;

    namespace Westwind.Web.Utilities
    {
        public class HtmlSanitizer
        {
            public HashSet<string> BlackList = new HashSet<string>()
            {
                { "script" }, { "iframe" }, { "form" }, { "object" },
                { "embed" }, { "link" }, { "head" }, { "meta" }
            };

            /// <summary>
            /// Cleans up an HTML string and removes HTML tags in blacklist
            /// </summary>
            public static string SanitizeHtml(string html, params string[] blackList)
            {
                var sanitizer = new HtmlSanitizer();
                if (blackList != null && blackList.Length > 0)
                {
                    sanitizer.BlackList.Clear();
                    foreach (string item in blackList)
                        sanitizer.BlackList.Add(item);
                }
                return sanitizer.Sanitize(html);
            }

            /// <summary>
            /// Cleans up an HTML string by removing elements
            /// on the blacklist and all elements that start
            /// with onXXX.
            /// </summary>
            public string Sanitize(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                SanitizeHtmlNode(doc.DocumentNode);

                //return doc.DocumentNode.WriteTo();

                string output = null;

                // Use an XmlTextWriter to create self-closing tags
                using (StringWriter sw = new StringWriter())
                {
                    XmlWriter writer = new XmlTextWriter(sw);
                    doc.DocumentNode.WriteTo(writer);
                    output = sw.ToString();

                    // strip off XML doc header
                    if (!string.IsNullOrEmpty(output))
                    {
                        int at = output.IndexOf("?>");
                        output = output.Substring(at + 2);
                    }
                    writer.Close();
                }
                doc = null;
                return output;
            }

            private void SanitizeHtmlNode(HtmlNode node)
            {
                if (node.NodeType == HtmlNodeType.Element)
                {
                    // check for blacklist items and remove
                    if (BlackList.Contains(node.Name))
                    {
                        node.Remove();
                        return;
                    }

                    // remove CSS Expressions and embedded script links
                    if (node.Name == "style")
                    {
                        if (string.IsNullOrEmpty(node.InnerText))
                        {
                            if (node.InnerHtml.Contains("expression") ||
                                node.InnerHtml.Contains("javascript:"))
                                node.ParentNode.RemoveChild(node);
                        }
                    }

                    // remove script attributes
                    if (node.HasAttributes)
                    {
                        for (int i = node.Attributes.Count - 1; i >= 0; i--)
                        {
                            HtmlAttribute currentAttribute = node.Attributes[i];

                            var attr = currentAttribute.Name.ToLower();
                            var val = currentAttribute.Value.ToLower();

                            // remove event handlers
                            if (attr.StartsWith("on"))
                                node.Attributes.Remove(currentAttribute);

                            // remove script links
                            else if (
                                //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                val != null && val.Contains("javascript:"))
                                node.Attributes.Remove(currentAttribute);

                            // Remove CSS Expressions
                            else if (attr == "style" && val != null &&
                                     (val.Contains("expression") ||
                                      val.Contains("javascript:") ||
                                      val.Contains("vbscript:")))
                                node.Attributes.Remove(currentAttribute);
                        }
                    }
                }

                // Look through child nodes recursively
                if (node.HasChildNodes)
                {
                    for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                    {
                        SanitizeHtmlNode(node.ChildNodes[i]);
                    }
                }
            }
        }
    }

    Please note: use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow them. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to extend to fit your use cases more specifically. It's also quite easy to make this code work using a whitelist approach if you want to go that route.
    The code above is semi-generic, allowing full-featured HTML fragments that only disallow script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally, the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF, IE?) have been removed in modern browsers.

    What a Pain

    To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools.

    Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide-open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted.

    Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
    So much time wasted building secure apps, so much time wasted by others trying to hack apps... We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

    Resources

    - Code on GitHub
    - Html Agility Pack
    - XSS Cheat Sheet
    - XSS Prevention Cheat Sheet
    - Microsoft Web Protection Library (AntiXss)
    - StackOverflow links:
      http://stackoverflow.com/questions/341872/html-sanitizer-for-net
      http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
      http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • Bayesian filtering for forum posts

    - by Andrew Davey
    Has anyone used a Bayesian filter to let forum members classify posts and so over time only display interesting posts? A Bayesian filter seems to work well for detecting email spam. Is this a viable approach to filter forum posts for users?
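
    The math is the same as email spam filtering, just with "interesting"/"boring" as the classes. A bare-bones sketch of the word-probability bookkeeping (the labels, tokenization and add-one smoothing are simplifications for illustration):

    from collections import defaultdict
    import math

    class NaiveBayes(object):
        def __init__(self):
            self.word_counts = {'interesting': defaultdict(int),
                                'boring': defaultdict(int)}
            self.post_counts = {'interesting': 0, 'boring': 0}

        def train(self, text, label):
            # label is 'interesting' or 'boring', as voted by the member
            self.post_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1

        def score(self, text, label):
            # log P(label) plus sum of log P(word|label), add-one smoothed
            total = sum(self.word_counts[label].values()) + 1
            logp = math.log(self.post_counts[label] + 1)
            for word in text.lower().split():
                logp += math.log((self.word_counts[label][word] + 1.0) / total)
            return logp

        def classify(self, text):
            return max(('interesting', 'boring'),
                       key=lambda label: self.score(text, label))

    nb = NaiveBayes()
    nb.train("great detailed answer with example code", 'interesting')
    nb.train("me too thanks bump", 'boring')
    print nb.classify("thanks for the bump")

    Keeping per-user counts instead of global ones would make the filter personal, which is what the question is really after.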

    Read the article

  • What is a good Fax Server to use?

    - by JoeJoe
    Any recommendations for a fax server to use on Windows? I'm looking for good API support for incoming faxes (lots of them are good for outgoing). I want to programmatically route incoming faxes to email after the server receives the fax, and set a whitelist of phone numbers to filter spam - all in code.

    Read the article

  • Most mature ASP.NET MVC Blog Engine?

    - by Tony_Henrich
    What's the best ASP.NET MVC based blog engine out there which is ready to deploy? I am guessing there are no MVC blog engines comparable with WebForms-based blog engines like dasBlog, BlogEngine and Subtext? I think Oxite is a dead end and Orchard is more like a CMS. I'm looking for an engine which can do these:
    - RSS feed support
    - anti comment spam functionality, like support for Akismet
    - comment post approval support
    - some kind of theming

    Read the article

  • How do I deserialize an object with pyYaml using safe_load?

    - by systempuntoout
    Given a snippet like this:

    import yaml

    class User(object):
        def __init__(self, name, surname):
            self.name = name
            self.surname = surname

    user = User('spam', 'eggs')
    serialized_user = yaml.dump(user)
    deserialized_user = yaml.load(serialized_user)
    print "name: %s, surname: %s" % (deserialized_user.name, deserialized_user.surname)

    The YAML docs say that it is not safe to call yaml.load with any data received from an untrusted source; so what do I need to modify in my snippet/class to use the safe_load method? Is it possible?
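
    One answer, sketched: PyYAML lets a class register itself with the safe loader by subclassing yaml.YAMLObject and setting yaml_loader to yaml.SafeLoader, so safe_load can construct this one tag while everything else stays restricted:

    import yaml

    class User(yaml.YAMLObject):
        yaml_tag = u'!User'
        yaml_loader = yaml.SafeLoader   # makes safe_load accept this tag

        def __init__(self, name, surname):
            self.name = name
            self.surname = surname

    user = User('spam', 'eggs')
    serialized_user = yaml.dump(user)
    deserialized_user = yaml.safe_load(serialized_user)
    print "name: %s, surname: %s" % (deserialized_user.name, deserialized_user.surname)

    Note that safe_load bypasses __init__ and sets attributes directly, which is fine for a plain data holder like this.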

    Read the article

  • How to deserialize an object with pyYaml using safe_load?

    - by systempuntoout
    Given a snippet like this:

    import yaml

    class User(object):
        def __init__(self, name, surname):
            self.name = name
            self.surname = surname

    user = User('spam', 'eggs')
    serialized_user = yaml.dump(user)
    # Network
    deserialized_user = yaml.load(serialized_user)
    print "name: %s, sname: %s" % (deserialized_user.name, deserialized_user.surname)

    The YAML docs say that it is not safe to call yaml.load with any data received from an untrusted source; so what do I need to modify in my snippet/class to use the safe_load method? Is it possible?

    Read the article

  • Ask a DNS server what sites it hosts - and how to possibly prevent misuse

    - by Exit
    I've got a server on which I host my company website as well as some of my clients' sites. I noticed that a domain which I created but never used was being attacked by a poke-and-hope hacker. I imagine the hacker collected the domain by hitting my DNS server and requesting what domains are hosted. So, in the interest of prevention and better server management, how would I ask my own DNS server (Linux CentOS 4) what sites are being hosted on it? Also, is there a way to prevent these types of attacks by hiding this information? I would assume that DNS servers need to keep some information public, but I'm not sure if there is something that most hosts do to help prevent these bandwidth-wasting poke-and-hope attacks. Thanks in advance.
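
    A sketch of one concrete check, using the third-party dnspython package: attempt a zone transfer (AXFR) against your own server for a zone you host. There is no standard DNS query that lists every zone a server hosts, but an open AXFR hands an attacker the full contents of any zone whose name they know, so it is the first thing to lock down (in BIND, via allow-transfer). The IP and zone below are placeholders:

    import dns.query
    import dns.zone

    try:
        zone = dns.zone.from_xfr(dns.query.xfr('203.0.113.10', 'example.com'))
        for name in zone.nodes.keys():
            print zone[name].to_text(name)
        print "AXFR allowed - restrict it in your DNS server config."
    except Exception as e:
        print "AXFR refused or failed (good): %s" % e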

    Read the article

  • Are there any differences between SQL Server and MySQL when it comes to preventing SQL injection?

    - by Derek Adair
    I am used to developing in PHP/MySQL and have no experience developing with SQL Server. I've skimmed over the PHP MSSQL documentation and it looks similar to MySQLi in some of the methods I read about. For example, with MySQL I utilize the function mysql_real_escape_string(). Is there a similar function with PHP/SQL Server? What steps do I need to take in order to protect against SQL injection with SQL Server? What are the differences between SQL Server and MySQL pertaining to SQL injection prevention? Also - is this post accurate? Is the escape character for SQL Server a single quote?
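
    On either platform, the portable answer is parameterized queries rather than escaping. A minimal sketch of the idea using Python's pyodbc against SQL Server, purely for illustration (the connection details are placeholders; PHP's SQL Server drivers offer the same placeholder-style binding):

    import pyodbc

    conn = pyodbc.connect(
        'DRIVER={SQL Server};SERVER=myserver;'      # placeholder connection string
        'DATABASE=mydb;UID=myuser;PWD=mypassword'
    )
    cursor = conn.cursor()

    user_input = "'; DROP TABLE users; --"

    # The driver ships the value separately from the SQL text,
    # so it can never be parsed as SQL.
    cursor.execute("SELECT id, name FROM users WHERE name = ?", user_input)
    for row in cursor.fetchall():
        print row.id, row.name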

    Read the article

  • Website URL whitelists

    - by buggedcom
    I'm building a user content parser and am adding an automatic link parser. I'm adding a dialogue that confirms the user wants to go to the particular site being linked to. This is for two reasons: anti-phishing and spam combating. However, I want to be able to disable both the dialogue and the nofollow additions for commonly used websites, so I'm building a whitelist. Are there any common whitelists around, or should I start building one from scratch?
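
    A minimal sketch of the host check such a whitelist implies (the domains listed are illustrative):

    from urlparse import urlparse   # urllib.parse on Python 3

    WHITELIST = set(['wikipedia.org', 'youtube.com', 'github.com'])

    def is_whitelisted(url):
        host = urlparse(url).netloc.lower().split(':')[0]
        # accept the domain itself or any subdomain of it
        return any(host == d or host.endswith('.' + d) for d in WHITELIST)

    print is_whitelisted('http://en.wikipedia.org/wiki/Spam')   # True
    print is_whitelisted('http://evil.example.com/phish')       # False

    Matching on the registered domain rather than raw substring containment matters; a contains-style check would pass 'wikipedia.org.evil.example.com'.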

    Read the article

  • Postfix: check outgoing email headers

    - by Ke
    Hi, I've set up Postfix, but unfortunately my emails get caught in Hotmail's junk/spam filter. I would like to see the email headers before they are sent to Hotmail. Is there any way I can do this on Linux/Ubuntu? Cheers, Ke
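
    Two low-tech options: set Postfix's always_bcc parameter to copy every outgoing message to a mailbox you can read, or build the message locally and print it before handing it to Postfix. A sketch of the latter with Python's standard library (addresses are placeholders; Postfix and each hop will still prepend their own Received: headers):

    from email.mime.text import MIMEText
    import smtplib

    msg = MIMEText("test body")
    msg['From'] = 'me@example.com'
    msg['To'] = 'someone@hotmail.com'
    msg['Subject'] = 'header test'

    # The headers exactly as they will be handed to the MTA
    print msg.as_string()

    # Uncomment to actually send through the local Postfix:
    # s = smtplib.SMTP('localhost')
    # s.sendmail(msg['From'], [msg['To']], msg.as_string())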

    Read the article

  • Exchange server actively refuses my connection

    - by Roy
    I'm writing an ASP.NET application used within our company. Now I want to send emails to some users via an email account on our Exchange server. I tried to use System.Net.Mail.SmtpClient, with a proprietary account and password given as the credentials. But the code failed with the following exception: "No connection could be made because the target machine actively refused it". Is that due to Exchange Server policies to prevent spam? How can I get it to work?
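
    "Actively refused" usually means nothing is listening on the host/port the client targeted (commonly port 25 is closed while the server accepts submission on 587), rather than an anti-spam policy, which would typically reject after connecting. A quick probe sketch - the host name is a placeholder:

    import smtplib
    import socket

    for port in (25, 587):
        try:
            s = smtplib.SMTP('exchange.example.com', port, timeout=5)
            print port, '->', s.ehlo()[0]   # 250 means an SMTP listener answered
            s.quit()
        except (socket.error, smtplib.SMTPException) as e:
            print port, '-> connection failed:', e

    If 587 answers, pointing SmtpClient at that port (with EnableSsl and the credentials) is the usual fix; Exchange receive connectors also restrict which hosts may relay.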

    Read the article

  • Using the C preprocessor to construct a string literal for scanf?

    - by Brett
    I'm attempting to create an sscanf format-string literal to aid in buffer overrun prevention in C99. The goal is something like:

    #define MAX_ARG_LEN 16
    char arg[MAX_ARG_LEN] = "";
    if (sscanf(arg, "%"(MAX_ARG_LEN-1)"X", &input) > 0)

    The obvious "manual" solution is something like:

    #define MAX_ARG_LEN 16
    #define MAX_ARG_CHARS "15"
    char arg[MAX_ARG_LEN] = "";
    if (sscanf(arg, "%" MAX_ARG_CHARS "X", &input) > 0)

    However, I would prefer something that automatically generates "%15X" given a buffer size of 16. This link almost works for my application: http://stackoverflow.com/questions/240353/convert-a-preprocessor-token-to-a-string but it does not handle the -1. Suggestions?

    Read the article

  • Another answer to the CAPTCHA problem?

    - by Xeoncross
    Most sites at least employ server access log checking and banning, along with some kind of bot prevention measure like a CAPTCHA (those messed-up text images). The problem with CAPTCHAs is that they pose a threat to the user experience. Luckily they now come with user-friendly features like refresh and audio versions. Anyway, like Linux vs. Windows, it isn't worth a spammer's time to customize and/or build a script to handle a custom CAPTCHA that only pertains to one site. Therefore, I was wondering if there might be better ways to handle the whole CAPTCHA thing. In "A Better CAPTCHA" Peter Bromberg mentions that one way would be to convert the image to HTML and display it embedded in the page. On http://shiflett.org/ Chris simply asks users to type his name into an input. Examples like this are ways of simplifying the CAPTCHA experience while decreasing the value for spammers. Does anyone know of more good examples I could use, or see any problem with the embedded image idea?
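
    A sketch of the Shiflett-style "type a known answer" check mentioned above (questions and answers are illustrative; any per-site variation defeats generic bots, though not a targeted script):

    import random

    # Questions a human visitor answers trivially but a generic bot cannot.
    QUESTIONS = {
        "What is the site owner's first name?": "chris",
        "What color is this site's logo?": "blue",
    }

    def pick_question():
        return random.choice(list(QUESTIONS.keys()))

    def check_answer(question, user_answer):
        expected = QUESTIONS.get(question, "")
        return user_answer.strip().lower() == expected

    q = pick_question()
    print q
    print check_answer(q, "  Chris ")   # True when q is the name question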

    Read the article

  • SQL Server 2005 remote connection problem, cannot solve it, help please, thank you

    - by user287745
    Note: if this question does not fit this site, please do not just close it but redirect it to the fitting sister site. Thank you.

    The steps taken and the error are listed below; please help, I am stuck here!

    - Installed SQL Server 2005 Express on both computers.
    - Installed SQL Server Management Studio Express on both computers.
    - Ran each Management Studio and connected to the SQLEXPRESS instance using Windows authentication (one computer's connection, for example, is "A-63A9D4D7E7834\SQLEXPRESS").
    - Created a database named "test1" under Databases.
    - Created a few tables with data, saved and exited.
    - Did everything this site says: "How to configure SQL Server 2005 to allow remote connections" (http://support.microsoft.com/kb/914277/en-us), but I have also just disabled the firewalls completely.

    Then, connecting to A-63A9D4D7E7834: started SQL Server Management Studio Express on computer A-63A9D4D7E7834 with server name "ALL-E425BE6C41D\SQLEXPRESS", authentication "Windows Authentication", and clicked Connect. I get the following error:

    Cannot connect to ALL-E425BE6C41D\SQLEXPRESS.
    ADDITIONAL INFORMATION: Login failed for user 'ALL-E425BE6C41D\Guest'. (Microsoft SQL Server, Error: 18456)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18456&LinkId=20476

    Read the article

  • Are there any differences between MSSQL and MySQL when it comes to preventing SQL injection?

    - by Derek Adair
    I am used to developing in PHP/MySQL and have no experience developing with MSSQL. I've skimmed over the PHP MSSQL documentation and it looks similar to MySQLi in some of the methods I read about. For example, with MySQL I utilize the function mysql_real_escape_string(). Is there a similar function with PHP/MSSQL? What steps do I need to take in order to protect against SQL injection with MSSQL? What are the differences between MSSQL and MySQL pertaining to SQL injection prevention?

    Read the article
