Search Results

Search found 113649 results on 4546 pages for 'how to get web site thumbnail image in asp net'.


  • POP Forums v10 beta posted for ASP.NET MVC 4

    - by Jeff
    Finally got some momentum and replaced the beta formerly known as v9.3. You can get it here, where you’ll find the information below. You can also read my previous post on why I ditched jQuery Mobile. This is the beta for POP Forums v10, with the mobile special sauce. It requires ASP.NET MVC 4 RC, which you can download here. Of course, feel free to submit bugs to the issue tracker. See a live demo here: http://popforums.com/Forums

    What's new?
    - Uses a very light weight CSS and Javascript package to provide a touch-friendly interface for mobile devices.
    - Numbers are formatted (sensitive to culture) when 1,000 or higher.
    - CSS is more integration friendly, and specific to the ForumContainer element.
    - Mail delivery from queue is now parallel, so you can specify a sending interval, and the number of messages to process on each interval.
    - Background "services" refactored, and will only run with a call on app start to PopForumsActivation.StartServices(). This is partly to facilitate future use in Web farms/multiple Web roles.
    - Update to jQuery v1.7.1.
    - Replaced use of .live() with .on() in script, pursuant to jQuery update, which deprecates .live().
    - FIX: Bug in topic repository around caching keys for single-server data layer.
    - FIX: Pager links on recent topics pointed to incorrect route.
    - FIX: Deleting a post didn't update last user/post time.
    - FIX: Ditched attempt at writing to event log with super failures, since almost no one has permission in production.
    - FIX: Bug in grayed-out fields in admin mail setup.
    - FIX: Weird color profiles would break loading of images for resize.
    - FIX: TOS text on account sign-up was double encoded.

    Known issues: none yet, but ditching jQuery Mobile from the previous beta turned out to be a good decision.
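    A minimal sketch of the app-start hook mentioned above. Only the PopForumsActivation.StartServices() call is named in the post; the Global.asax placement and the missing namespace import are assumptions:

        // Global.asax.cs - hypothetical placement of the app-start call.
        using System;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Background "services" (e.g. queued mail delivery) only run once this is called.
                // Add the appropriate using/namespace for the POP Forums assembly.
                PopForumsActivation.StartServices();
            }
        }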

    Read the article

  • 50% off ASP.NET hosting

    - by Fabrice Marguerie
    I haven't blogged for a long time because I'm busy working on an exciting new project. It's too early to tell you more. I'll provide details in a few months. Meanwhile, I wanted to write a quick post to share an excellent offer with you. It's that time of the year when you can get deals on many things, including web hosting. I'd like to remind you about Arvixe, a great web hosting provider for Windows (for ASP.NET) and Linux. For 48 hours, between Thursday November 24th at 00:00 PST (08:00 GMT/UTC) and Friday November 25th, Arvixe will be offering 50% off all of their shared hosting products. This will be for all new accounts, for life (as long as you continue to renew the account)!

    I've been using Arvixe for my websites for more than a year and a half now, and I highly recommend them. Here is an overview of what I get for a very good price:
    - Unlimited diskspace
    - Unlimited data transfer
    - Unlimited domains
    - Unlimited POP3 and IMAP mailboxes
    - Unlimited SQL Server 2008 databases
    - Unlimited MySQL databases
    - .NET 1.1, 2, 3.5 and 4
    - Dedicated application pools
    - Full trust
    - IIS 7
    - Daily backups
    - and more...

    And now, you can get that too for half the price. Just go to Arvixe.com and secure your own hosting account now by using the coupon code "Black Friday" during checkout.

    Disclaimer: the links to Arvixe are affiliate links that may bring me some money home if you sign up. Still, I recommend Arvixe because I use them and I'm very happy with what they offer.

    Read the article

  • Html.ValidationSummary and Multiple Forms

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/html.validationsummary-and-multiple-forms.aspx

    The Html.ValidationSummary helper writes a div with a list of general errors added to the model state while a request is being serviced. There is generally one form per view or partial view, I think, so often there is only one call to Html.ValidationSummary in the page resulting from the assembly of your views. And, consequently, there is no problem with the markup that Html.ValidationSummary spits out as a result.

    What if you want to put multiple forms in one view? Even if you create a view model that’s an aggregate of the view models for each form, the error validation summary is going to contain errors from both forms. Check out this screen shot, which shows a page with multiple forms. Notice how the error validation summary shows up twice. Grrr! Errors for the login form also show up in the registration form.

    Luckily, there is an easy way around this. Pull the errors out of the model state and separate them for each form. You’ll need to identify the appropriate form by setting the key when you make calls to ModelState.AddModelError. Assume in my example that errors for the login form are added to model state using the “LoginForm” key. And, likewise, assume that errors for the registration form are added to model state using the “RegistrationForm” key. An example of that might look like this…

        // If we got this far, something failed, redisplay form
        ModelState.AddModelError("LoginForm", "User name or password is not right...");
        return View(model);

    Over in the code for your View, you can pull each form’s errors from the model state using lambda expressions that look like these…

        var LoginFormErrors = ViewData.ModelState.Where(ms => ms.Key == "LoginForm");
        var RegistrationFormErrors = ViewData.ModelState.Where(ms => ms.Key == "RegistrationForm");

    Now that you have two collections containing errors, you can display only the errors specific to each form. I’m doing that in my code by removing the calls to Html.ValidationSummary and replacing them with enumerators that look like this…

        @if (LoginFormErrors.Count() > 0)
        {
            <div class="cdt-error-list">
                <ul>
                @foreach (var entry in LoginFormErrors)
                {
                    foreach (var error in entry.Value.Errors)
                    {
                        <li>@error.ErrorMessage</li>
                    }
                }
                </ul>
            </div>
        }

    …and for the registration form, the code looks like this…

        @if (RegistrationFormErrors.Count() > 0)
        {
            <div class="cdt-error-list">
                <ul>
                @foreach (var entry in RegistrationFormErrors)
                {
                    foreach (var error in entry.Value.Errors)
                    {
                        <li>@error.ErrorMessage</li>
                    }
                }
                </ul>
            </div>
        }

    The result is a nice clean separation of the list of errors that are specific to each form. And, this is important because each form is submitted separately in my case, so both forms don’t generate errors in the same context. As you’ll see in the screen shot below, errors added to the model state when the login form is submitted do not show up in the registration form’s validation summary.
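    As a follow-up, the per-form loop above could be packaged into a small HtmlHelper extension so each view only needs one call. This is a hedged sketch (the helper name is hypothetical and not from the post; it reuses the same model-state key and CSS class):

        // Hypothetical helper: renders the post's per-key error list as a reusable extension.
        using System.Linq;
        using System.Text;
        using System.Web;
        using System.Web.Mvc;

        public static class ValidationSummaryExtensions
        {
            public static IHtmlString ValidationSummaryForKey(this HtmlHelper html, string key)
            {
                var entries = html.ViewData.ModelState.Where(ms => ms.Key == key).ToList();
                if (!entries.Any(e => e.Value.Errors.Count > 0))
                    return MvcHtmlString.Empty;

                var sb = new StringBuilder();
                sb.Append("<div class=\"cdt-error-list\"><ul>");
                foreach (var entry in entries)
                    foreach (var error in entry.Value.Errors)
                        sb.AppendFormat("<li>{0}</li>", html.Encode(error.ErrorMessage));
                sb.Append("</ul></div>");
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    A view would then call @Html.ValidationSummaryForKey("LoginForm") in place of the first @foreach block.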

    Read the article

  • Microsoft 2003 DNS sometimes can't query for some A pointers when their TTL expires

    - by Bq
    Warning: long question :) We have a Win 2003 server with a DNS server, and every now and then it can't provide us with some A pointers for a specific domain. I have a small script running which asks for SOA, NS and A records for the domain in question, and sometimes when the TTL expires the DNS fails to get the A records again; a clear cache fixes the problem. Have a look.

    Here it worked when the TTL expired:

        Thu Apr 29 15:24:20 METDST 2010
        dig basefarm.net soa
        basefarm.net. 64908 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 64908 IN NS ns01.sth.basefarm.net.
        basefarm.net. 64908 IN NS ns01.osl.basefarm.net.
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig ns01.sth.basefarm.net a
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76

    The TTL expired for ns01.sth.basefarm.net and ns01.osl.basefarm.net but the DNS managed to get the new values (TTL 3600):

        Thu Apr 29 15:29:20 METDST 2010
        dig basefarm.net soa
        basefarm.net. 64608 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 3600 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 64608 IN NS ns01.sth.basefarm.net.
        basefarm.net. 64608 IN NS ns01.osl.basefarm.net.
        ns01.sth.basefarm.net. 3600 IN A 80.76.149.76
        ns01.osl.basefarm.net. 3600 IN A 81.93.160.4
        dig ns01.sth.basefarm.net a
        ns01.sth.basefarm.net. 3600 IN A 80.76.149.76

    But then another time it fails, and we need to clear the DNS cache for it to start working again:

        Thu Apr 29 17:24:23 METDST 2010
        dig basefarm.net soa
        basefarm.net. 57705 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 57705 IN NS ns01.sth.basefarm.net.
        basefarm.net. 57705 IN NS ns01.osl.basefarm.net.
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig ns01.sth.basefarm.net a
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76

    The TTL expires but the DNS can't get the IP addresses for ns01.sth.basefarm.net and ns01.osl.basefarm.net:

        Thu Apr 29 17:29:23 METDST 2010
        dig basefarm.net soa
        basefarm.net. 57405 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 3600 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 57405 IN NS ns01.sth.basefarm.net.
        basefarm.net. 57405 IN NS ns01.osl.basefarm.net.
        dig ns01.sth.basefarm.net a
        Lookup failed

    I'm really lost on this one and have tried asking Google but to no avail.

    Read the article

  • Windows 2012 Server: Enable .NET 3.5

    - by Meengla
    I have a Windows 2012 Server which needs .NET 3.5 installed. For background info/solutions, please see this: http://sqlblog.com/blogs/sergio_govoni/archive/2013/07/13/how-to-install-netfx3-on-windows-server-2012-required-by-sql-server-2012.aspx

    I have tried this but it doesn't work for me. Here is what I am trying: using a Windows 2012 ISO file, I 'mount' it on the 2012 Server as the D: drive and then try both the GUI and the command prompt. In the case of the GUI, I specified the 'alternate source' path to the D drive's sources\sxs folder, but that failed without giving enough info. In the case of the command prompt, here is what's happening:

        dism /online /enable-feature /featurename:NetFx3 /source:d:\sources\sxs

    I get the error: Installed but Parent feature not enabled. So I tried another approach:

        dism /online /enable-feature /featurename:NetFx3 /all /source:d:\sources\sxs

    The above command, per http://blogs.technet.com/b/askcore/archive/2012/05/14/windows-8-and-net-framework-3-5.aspx, is supposed to enable parent elements; but running this I get an error like 'source not found'. Is there some error in my second command? What else could I do? This is Windows 2012 Server Standard edition. Thanks!

    Read the article

  • .NET 3.5 installation comes up with Error 0x800F0906, then 0x800F081F using dism

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's popup thing to download/install .NET 3.5 and always get error code 0x800F0906. Upon further research, I found I would have to pop in my Windows 8 CD and install it with this command, where "E:\" is where my CD is mounted:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess

    This and any derivative of it (e.g., removing /LimitAccess) has not worked for me and has either given me the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, just in case something was going on with the CD drive, only to have the same results. In that case, I used this command line:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess

    I find this surreal because in both cases the files are indeed there but the program thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :)

    EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the commands in the script such as "featurename" or "source".

    Read the article

  • The trust relationship between the primary domain and the trusted domain failed. ASP.NET 2.0

    - by Dasupalouie
    Anyone run into this issue? Any help would be appreciated :)

    Server Error in '/CTCWeb' Application.
    The trust relationship between the primary domain and the trusted domain failed.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.SystemException: The trust relationship between the primary domain and the trusted domain failed.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [SystemException: The trust relationship between the primary domain and the trusted domain failed.]
           System.Security.Principal.NTAccount.TranslateToSids(IdentityReferenceCollection sourceAccounts, Boolean& someFailed) +1185
           System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean& someFailed) +44
           System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) +47
           System.Security.Principal.WindowsPrincipal.IsInRole(String role) +101
           System.Web.Configuration.AuthorizationRule.IsTheUserInAnyRole(StringCollection roles, IPrincipal principal) +123
           System.Web.Configuration.AuthorizationRule.IsUserAllowed(IPrincipal user, String verb) +256
           System.Web.Configuration.AuthorizationRuleCollection.IsUserAllowed(IPrincipal user, String verb) +199
           System.Web.Security.UrlAuthorizationModule.OnEnter(Object source, EventArgs eventArgs) +8771980
           System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75

    Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3053

    Read the article

  • Asp.net mvc 2 .net 4.0 error when View model type is Tuple with more than 4 items

    - by Bojan
    When I create a strongly typed view in ASP.NET MVC 2 on .NET 4.0 with a Tuple model type, I get an error when the Tuple has more than 4 items.

    Example 1: the type of the view is Tuple<string, string, string, string> (4-tuple) and everything works fine.

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<Tuple<string, string, string, string>>" %>

    Controller:

        var tuple = Tuple.Create("a", "b", "c", "d");
        return View(tuple);

    Example 2: the type of the view is Tuple<string, string, string, string, string> (5-tuple) and I get this error:

        Compiler Error Message: CS1003: Syntax error, '>' expected

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<Tuple<string, string, string, string, string>>" %>

    Controller:

        var tuple = Tuple.Create("a", "b", "c", "d", "e");
        return View(tuple);

    Example 3: if my view model is of type dynamic I can use both the 4-tuple and the 5-tuple and there is no error on the page.

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>

    Controller:

        dynamic model = new ExpandoObject();
        model.tuple = Tuple.Create("a", "b", "c", "d");
        return View(model);

    or

        dynamic model = new ExpandoObject();
        model.tuple = Tuple.Create("a", "b", "c", "d", "e");
        return View(model);

    Even if I have something like Tuple<string, Tuple<string, string, string>, string> (a 3-tuple where one of the items is also a tuple) and the sum of items in all tuples is more than 4, I get the same error; Tuple<string, Tuple<string, string>, string> works fine.

    Read the article

  • VB.net (aspx) mysql connection

    - by StealthRT
    Hey all, I am new to ASP.NET and VB.NET code-behind. I have a classic ASP page that connects to the MySQL server with the following code:

        Set oConnection = Server.CreateObject("ADODB.Connection")
        Set oRecordset = Server.CreateObject("ADODB.Recordset")
        oConnection.Open "DRIVER={MySQL ODBC 3.51 Driver}; SERVER=xxx.com; PORT=3306; DATABASE=xxx; USER=xxx; PASSWORD=xxx; OPTION=3;"
        sqltemp = "select * from userinfo WHERE emailAddress = '" & theUN & "'"
        oRecordset.Open sqltemp, oConnection,3,3
        if oRecordset.EOF then
        ...

    However, I am unable to find anything to connect to MySQL in ASP.NET (VB.NET). I have only found this piece of code, which does not seem to work once it gets to the "Dim conn As New OdbcConnection(MyConString)" line:

        Dim MyConString As String = "DRIVER={MySQL ODBC 3.51 Driver};" & _
            "SERVER=xxx.com;" & _
            "DATABASE=xxx;" & _
            "UID=xxx;" & _
            "PASSWORD=xxx;" & _
            "OPTION=3;"
        Dim conn As New OdbcConnection(MyConString)
        MyConnection.Open()
        Dim MyCommand As New OdbcCommand
        MyCommand.Connection = MyConnection
        MyCommand.CommandText = "select * from userinfo WHERE emailAddress = '" & theUN & "'"
        MyCommand.ExecuteNonQuery()
        MyConnection.Close()

    I have these import statements also:

        <%@ Import Namespace=System %>
        <%@ Import Namespace=System.IO %>
        <%@ Import Namespace=System.Web %>
        <%@ Import Namespace=System.ServiceProcess %>
        <%@ Import Namespace=Microsoft.Data.Odbc %>
        <%@ Import Namespace=MySql.Data.MySqlClient %>
        <%@ Import Namespace=MySql.Data %>
        <%@ Import Namespace=System.Data %>

    So any help would be great! :o)
    David

    Read the article

  • Rendering a control generates security exception in .Net 4

    - by Jason Short
    I am having a problem with code that worked fine in .NET 2 giving this error under .NET 4:

        Build (web): Inheritance security rules violated while overriding member: 'Controls.RelatedPosts.RenderControl(System.Web.UI.HtmlTextWriter)'. Security accessibility of the overriding method must match the security accessibility of the method being overridden.

    This is in DotNetBlogEngine. There were several other security demands in the code that .NET 4 didn't seem to like. I followed some of the advice I found on blogs (and here) and got rid of all the other errors. But this one still eludes me. The main BlogEngine core dll is no longer set for security demands and is compiled for .NET 4 as well. This error is in the website side attempting to use the dll. There are controls that call a RenderControl method taking an HtmlTextWriter. Apparently the text writer now has some sort of security attributes set on it. Each of the controls implements a custom interface (public interface ICustomFilter); there are no security permissions present or demands. The site is running full trust on my local dev machine.
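    For context, one workaround often suggested for this class of error under the .NET 4 transparency model is to opt the overriding assembly back into the .NET 2.0 security rules with an assembly-level attribute. This is only a hedged sketch and not the post's confirmed fix; whether it is appropriate depends on the site's trust requirements:

        // AssemblyInfo.cs of the assembly containing the RenderControl override (assumption).
        using System.Security;

        // Level1 reverts to the pre-.NET 4 transparency rules.
        [assembly: SecurityRules(SecurityRuleSet.Level1)]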

    Read the article

  • Quartz.Net scheduler works locally but not on remote host

    - by Glinkot
    Hi. I have a timed Quartz.NET job working fine on my dev machine, but once deployed to a remote server it is not triggering. I believe the job is scheduled OK, because if I postback, it tells me the job already exists (I normally check for postback however). The email code definitely works, as the 'button1_click' event sends emails successfully. I understand I have full or medium trust on the remote server. My host says they don't apply restrictions that they know of which would affect it. Any other things I need to do to get it running?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using Quartz;
        using Quartz.Impl;
        using Quartz.Core;
        using Aspose.Network.Mail;
        using Aspose.Network;
        using Aspose.Network.Mime;
        using System.Text;

        namespace QuartzTestASP
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    if (!Page.IsPostBack)
                    {
                        ISchedulerFactory schedFact = new StdSchedulerFactory();
                        IScheduler sched = schedFact.GetScheduler();
                        JobDetail jobDetail = new JobDetail("testJob2", null, typeof(testJob));
                        //Trigger trigger = TriggerUtils.MakeMinutelyTrigger(1, 3);
                        Trigger trigger = TriggerUtils.MakeSecondlyTrigger(10, 5);
                        trigger.StartTimeUtc = DateTime.UtcNow;
                        trigger.Name = "TriggertheTest";
                        sched.Start();
                        sched.ScheduleJob(jobDetail, trigger);
                    }
                }

                protected void Button1_Click1(object sender, EventArgs e)
                {
                    myutil.sendEmail();
                }
            }

            class testJob : IStatefulJob
            {
                public testJob() { }

                public void Execute(JobExecutionContext context)
                {
                    myutil.sendEmail();
                }
            }

            public static class myutil
            {
                public static void sendEmail()
                {
                    // tested code lives here and works fine when called from elsewhere
                }
            }
        }
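    As an aside, one arrangement sometimes used for this kind of setup is to create and start the scheduler once at application start rather than in a page's Page_Load. This is only a sketch using the same Quartz.NET 1.x API as the question, not a confirmed fix for the hosting issue (the Global.asax placement and shutdown handling are assumptions):

        // Global.asax.cs - hypothetical app-start scheduling; job and trigger mirror the question.
        using System;
        using Quartz;
        using Quartz.Impl;

        namespace QuartzTestASP
        {
            public class Global : System.Web.HttpApplication
            {
                private static IScheduler _scheduler;

                protected void Application_Start(object sender, EventArgs e)
                {
                    ISchedulerFactory schedFact = new StdSchedulerFactory();
                    _scheduler = schedFact.GetScheduler();

                    JobDetail jobDetail = new JobDetail("testJob2", null, typeof(testJob));
                    Trigger trigger = TriggerUtils.MakeSecondlyTrigger(10, 5);
                    trigger.StartTimeUtc = DateTime.UtcNow;
                    trigger.Name = "TriggertheTest";

                    _scheduler.Start();
                    _scheduler.ScheduleJob(jobDetail, trigger);
                }

                protected void Application_End(object sender, EventArgs e)
                {
                    // IIS can recycle the app pool at any time, which also tears down the scheduler.
                    if (_scheduler != null)
                        _scheduler.Shutdown(true);
                }
            }
        }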

    Read the article

  • Converting Python Script to Vb.NET - Involves Post and Input XML String

    - by Jason Shoulders
    I'm trying to convert a Python script to VB.NET. The Python script appears to accept some XML input data, take it to a web URL, and do a POST. I tried some VB.NET code to do it, but I think my approach is off because I got an error back ("BadXmlDataErr"), plus I can't really format my input XML very well - I'm only doing string and value. The input XML is richer than that. Here is an example of what the XML input data looks like in the Python script:

        <obj is="MyOrg:realCommand_v1/" >
          <int name="priority" val="1" />
          <real name="value" val="9.5" />
          <str name="user" val="MyUserName" />
          <reltime name="overrideTime" val="PT60S"/>
        </obj>

    Here's the VB.NET code I attempted to convert that with:

        Dim reqparm As New Specialized.NameValueCollection
        reqparm.Add("priority", "1")
        reqparm.Add("value", "9.5")
        reqparm.Add("user", "MyUserName")
        reqparm.Add("overrideTime", "PT60S")

        Using client As New Net.WebClient
            Dim sTheUrl As String = "[My URL]"
            Dim responsebytes = client.UploadValues(sTheUrl, "POST", MyReqparm)
            Dim responsebody = (New System.Text.UTF8Encoding).GetString(responsebytes)
        End Using

    I feel like I should be doing something else. Can anyone point me in the right direction?

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved.

    Then the client sprung another note: Oh by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

    There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

    I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

    The Microsoft Web Protection Library (AntiXSS)

    My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitizer class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitizer class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
    To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

        var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
                    "<a href='http://west-wind.com'>West Wind</a> site. " +
                    "<img src='http://west-wind.com/images/new.gif' /> ";

    and the code to sanitize it with the AntiXSS Sanitizer class:

        @Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

    This produced a not so useful sanitized string:

        Here we go. Visit the <a>West Wind</a> site.

    While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

    Using Html Agility Pack for HTML Parsing

    The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

    Blacklist based HTML Parsing to strip XSS Code

    For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
    Here's my HtmlSanitizer implementation:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using HtmlAgilityPack;

        namespace Westwind.Web.Utilities
        {
            public class HtmlSanitizer
            {
                public HashSet<string> BlackList = new HashSet<string>()
                {
                    { "script" },
                    { "iframe" },
                    { "form" },
                    { "object" },
                    { "embed" },
                    { "link" },
                    { "head" },
                    { "meta" }
                };

                /// <summary>
                /// Cleans up an HTML string and removes HTML tags in blacklist
                /// </summary>
                /// <param name="html"></param>
                /// <returns></returns>
                public static string SanitizeHtml(string html, params string[] blackList)
                {
                    var sanitizer = new HtmlSanitizer();
                    if (blackList != null && blackList.Length > 0)
                    {
                        sanitizer.BlackList.Clear();
                        foreach (string item in blackList)
                            sanitizer.BlackList.Add(item);
                    }
                    return sanitizer.Sanitize(html);
                }

                /// <summary>
                /// Cleans up an HTML string by removing elements
                /// on the blacklist and all elements that start
                /// with onXXX .
                /// </summary>
                /// <param name="html"></param>
                /// <returns></returns>
                public string Sanitize(string html)
                {
                    var doc = new HtmlDocument();
                    doc.LoadHtml(html);
                    SanitizeHtmlNode(doc.DocumentNode);

                    //return doc.DocumentNode.WriteTo();

                    string output = null;

                    // Use an XmlTextWriter to create self-closing tags
                    using (StringWriter sw = new StringWriter())
                    {
                        XmlWriter writer = new XmlTextWriter(sw);
                        doc.DocumentNode.WriteTo(writer);
                        output = sw.ToString();

                        // strip off XML doc header
                        if (!string.IsNullOrEmpty(output))
                        {
                            int at = output.IndexOf("?>");
                            output = output.Substring(at + 2);
                        }

                        writer.Close();
                    }
                    doc = null;

                    return output;
                }

                private void SanitizeHtmlNode(HtmlNode node)
                {
                    if (node.NodeType == HtmlNodeType.Element)
                    {
                        // check for blacklist items and remove
                        if (BlackList.Contains(node.Name))
                        {
                            node.Remove();
                            return;
                        }

                        // remove CSS Expressions and embedded script links
                        if (node.Name == "style")
                        {
                            if (string.IsNullOrEmpty(node.InnerText))
                            {
                                if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                                    node.ParentNode.RemoveChild(node);
                            }
                        }

                        // remove script attributes
                        if (node.HasAttributes)
                        {
                            for (int i = node.Attributes.Count - 1; i >= 0; i--)
                            {
                                HtmlAttribute currentAttribute = node.Attributes[i];

                                var attr = currentAttribute.Name.ToLower();
                                var val = currentAttribute.Value.ToLower();

                                // remove event handlers
                                if (attr.StartsWith("on"))
                                    node.Attributes.Remove(currentAttribute);

                                // remove script links
                                else if (
                                    //(attr == "href" || attr== "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                    val != null &&
                                    val.Contains("javascript:"))
                                    node.Attributes.Remove(currentAttribute);

                                // Remove CSS Expressions
                                else if (attr == "style" &&
                                         val != null &&
                                         val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:"))
                                    node.Attributes.Remove(currentAttribute);
                            }
                        }
                    }

                    // Look through child nodes recursively
                    if (node.HasChildNodes)
                    {
                        for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                        {
                            SanitizeHtmlNode(node.ChildNodes[i]);
                        }
                    }
                }
            }
        }

    Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
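    In passing, calling the static helper shown in the listing looks roughly like this (variable names are illustrative only):

        // using Westwind.Web.Utilities;
        string rawHtml = model.BodyHtml;                        // untrusted user input
        string safeHtml = HtmlSanitizer.SanitizeHtml(rawHtml);  // default blacklist

        // Or override the blacklist via the params argument:
        string stricter = HtmlSanitizer.SanitizeHtml(rawHtml, "script", "iframe", "object", "embed", "form");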
    The code above is semi-generic, allowing full featured HTML fragments and only disallowing script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

    What a Pain

    To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools.

    Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted.

    Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
    So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

    Resources
    - Code on GitHub
    - Html Agility Pack
    - XSS Cheat Sheet
    - XSS Prevention Cheat Sheet
    - Microsoft Web Protection Library (AntiXss)
    - StackOverflow links:
      - http://stackoverflow.com/questions/341872/html-sanitizer-for-net
      - http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
      - http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript.

    Read the article

  • Adding a hyperlink in a client report definition file (RDLC)

    - by rajbk
    This post shows you how to add a hyperlink to your RDLC report. In a previous post, I showed you how to create an RDLC report. We have been given a new requirement for the report we created earlier, the Northwind Product report: add a column that will contain hyperlinks which are unique per row. The URLs will be RESTful with the ProductID at the end. Clicking on the URL will take the user to a page like http://localhost/products/3, where 3 is the primary key of the product row clicked on.

    To start off, open the RDLC and add a new column to the product table. Add text to the header (Details) and row (Product Website). Right click on the row (not the header) and select "TextBox properties". Select Action - Go to URL. You could hard code a URL here, but what we need is a URL that changes based on the ProductID. Click on the expression button (fx). The expression builder gives you access to several functions and constants, including the fields in your dataset. See this reference for more details: Common Expressions for ReportViewer Reports. Add the following expression:

        = "http://localhost/products/" & Fields!ProductID.Value

    Click OK to exit the Expression Builder. The report will not render because hyperlinks are disabled by default in the ReportViewer control. To enable them, add the following in your page load event (where rvProducts is the ID of your ReportViewer control):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                rvProducts.LocalReport.EnableHyperlinks = true;
            }
        }

    We want our links to open in a new window, so set the HyperlinkTarget property of the ReportViewer control to "_blank".

    We are done adding hyperlinks to our report. Clicking on the links for each product pops open a new window. The URL has the ProductID added at the end. Enjoy!
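    If you prefer to keep both settings together in code rather than splitting them between code and the property grid, the Page_Load could look roughly like this (setting HyperlinkTarget programmatically is an assumption; the post sets it through the designer):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Hyperlink actions are ignored unless explicitly enabled on the local report.
                rvProducts.LocalReport.EnableHyperlinks = true;

                // Equivalent to the designer's HyperlinkTarget setting: open links in a new window.
                rvProducts.HyperlinkTarget = "_blank";
            }
        }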

    Read the article

  • Tulsa SharePoint Interest Group – SharePoint 2010 Mini-Launch Event - Review

    - by dmccollough
    The Tulsa SharePoint Interest Group set a record for attendance last night at our SharePoint 2010 Mini-Launch Event. Approximately 40+ people showed up to listen to SharePoint MVP Eric Shupps, The SharePoint Cowboy, discuss all of the new features for both administrators and developers. All of the Tulsa SharePoint Interest Group officers worked very hard to ensure that this event happened. We hosted our event at our local Dave & Busters and it was a great location with good food and great service. All of the officers of the Tulsa SharePoint Interest Group would like to extend a big Thank You to all of our sponsors that helped us in making our SharePoint 2010 Mini-Launch Event a reality.

    Read the article

  • Which cloud hosting should I use? [closed]

    - by Alyssa Marie Isk
    Possible Duplicate: How to find web hosting that meets my requirements?

    If anyone wants to get some real life karma by giving a tiny non-profit pointers, please advise! I posted a thread about our website with highly variable traffic (www.WorldOceansDay.org). The event is on June 8th, and the traffic goes from 100-400/day in the off-season, to about 200,000 trying to access the site at any one time on June 8th. It's a Wordpress site hosted on GoDaddy shared hosting and predictably crashed horribly.

    From the internet's feedback, we've decided to move to a cloud server to handle the traffic, but I'm a huge newbie and I don't have very reliable mentorship, so I'm turning to crowdsourcing. We're trying to decide between Amazon Web Services and RackSpace Cloud servers. Our sys admin consultant also suggested GoDaddy's new 4GH, but I have had such incredibly bad experiences with GoDaddy thus far that I am hesitant. From what I've read on the internet, RackSpace might be cheaper? Would AWS totally break the bank? We don't have a ton of money to spend on hosting. We'll also be using CloudFlare to cache and serve the pages since they're dynamic.

    I've found a few AWS & RackSpace calculators but I am not 100% on how to find those numbers... GoDaddy? Google Analytics?
    - AWS calc is here: http://calculator.s3.amazonaws.com/calc5.html
    - Rackspace is on the right: http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/?0a313380

    If anyone can help, or through some miracle feels like walking me through this, I would be incredibly appreciative.

    Read the article

  • How to introduce web development to non-programmers?

    - by Gulshan
    Once one of my non-programmer friends asked, "I have a cool website idea that I don't want to share. Rather, I want to develop it on my own. So, I want to learn web development. Tell me what to do?" And many other people have also asked about how to start with web development as a profession. But they are non-programmers or not from a Computer Science background. What should I suggest to them? Learning programming from scratch? Or using CMS-like tools? Or anything else?

    Read the article

  • The perfect DotNetNuke Christmas present

    - by Chris Hammond
    Are you racking your brain trying to come up with a present for that DotNetNuke person in your life? If so, I’ve got just the solution! You can buy them my book, DotNetNuke 5 User’s Guide: Get your website up and running! It’s the perfect item for the DotNetNuke love of your life. If you buy a copy and want it signed, I’ll even offer to sign it if you mail it to me. Please be sure to include postage both ways. You probably won’t be able to get it to me and back in time for Christmas, but the signing can happen...(read more)

    Read the article

  • “Unplugged” LIDNUG online talk with me on Monday (April 16th)

    - by ScottGu
    This coming Monday (April 16th) I’m doing another online LIDNUG session.  The talk will be from 10am to 11:30am (Pacific Time).  I do these talks a few times a year and they tend to be pretty fun.  Attendees can ask any questions they want to me, and listen to me answer them live via LiveMeeting.  We usually end up having some really good discussions on a wide variety of topics.  Any topic or question is fair game. You can learn more and register to attend the online event for free here. I’ll update this post with a download link to a recorded audio version of the talk after the event is over. Hope to get a chance to chat with some of you there! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that if your website uses URL rewriting to strip or add a www to the domain name, the URL you specify for the silent post matches the URL rewriting rule, because Authorize.Net's silent post feature won't resubmit the post request to the URL specified via the redirect response.

    I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule, and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account.

    You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accommodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.)

    Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that we had not shown any recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore or the page that was processing them wasn't doing so correctly.

    I started by verifying that our Authorize.Net account was properly set up to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx. Next, I wanted to determine what information was getting sent to that URL. The application was set up to log the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however, we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire if they logged the HTTP request sent via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to save every incoming HTTP request to the silent post page to a text file on the server. Upon the next recurring payment, I was able to see the HTTP request being received by the page:

        GET /AuthorizeNetProcessingPage.aspx HTTP/1.1
        Connection: Close
        Accept: */*
        Host: www.example.com

    That was it.
    Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along the details of the recurring payment transaction. What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www).

    About a month ago - the same time these recurring payment transaction details were no longer being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic from example.com to www.example.com. Could that be the problem? I contacted Authorize.Net's support again and asked them if their silent post algorithm would follow the 301 HTTP response and repost the recurring payment transaction details. They said, Yes, the silent post would follow redirects. Their reports didn't jive with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments are correctly being processed again!

    If you use Authorize.Net and the silent post feature, and you notice that your processing page is no longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!
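    As an aside, a minimal sketch of the request-logging approach mentioned earlier. Only Request.SaveAs comes from the article; the file naming and Page_Load placement are assumptions:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Save the raw incoming request (including headers) to a file for troubleshooting.
            string fileName = Server.MapPath("~/App_Data/silentpost-" + DateTime.Now.Ticks + ".txt");
            Request.SaveAs(fileName, true);

            // ... existing silent post parsing logic ...
        }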

    Read the article

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that utilizes the mod_aspdotnet module. I have several ASP.NET virtual hosts running and am trying to add another. MVC2 has NO default page (like the first version of MVC had, e.g. default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', set 'AspNet' to 'Virtual'; it will not load the first page, and I always get: '403 Forbidden, You don't have permission to access / on this server.' Below is from my httpd.conf:

        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
        AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo

        <IfModule aspdotnet_module>
            # Mount the ASP.NET /asp application
            #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
            Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"

            <VirtualHost *:80>
                DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
                ServerName www.MyWebSiteName.com
                ServerAlias MyWebSiteName.com
                AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
                # Other directives here

                <Directory "D:/ApacheNET/MyWebSiteName.com">
                    Options FollowSymlinks ExecCGI
                    AspNet All
                    #AspNet Virtual Files Directory
                    Order allow,deny
                    Allow from all
                    DirectoryIndex default.aspx index.aspx index.html   #default the index page to .htm and .aspx
                </Directory>
            </VirtualHost>

            # For all virtual ASP.NET webs, we need the aspnet_client files
            # to serve the client-side helper scripts.
            AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"

            <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
                Options FollowSymlinks
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>

    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!

    Read the article

  • Help A Hacker: Give ‘Em The Windows Source Code

    - by Ken Cox [MVP]
    The announcement of another Windows megapatch reminded me of a WikiLeaks story about Microsoft Windows that hasn’t attracted much attention. Alarmingly, we learn that the hackers have the Windows source code to study and test for vulnerabilities. Chinese hackers used the knowledge to breach Google’s accounts and servers: “In 2003, the CNITSEC signed a Government Security Program (GSP) international agreement with Microsoft that allowed select companies such as TOPSEC access to Microsoft source code in order to secure the Windows platform” “CNITSEC enterprises has recruited Chinese hackers in support of nationally-funded "network attack scientific research projects." From June 2002 to March 2003, TOPSEC employed a known Chinese hacker, Lin Yong (a.k.a. Lion and owner of the Honker Union of China), as senior security service engineer…” Windows is widely seen as unsecurable. It doesn’t help that Chinese government-funded hackers are probing the source code for vulnerabilities. It seems odd that people who didn’t write the code can find vulnerabilities faster than the owners of the code. Perhaps the U.S. government should hire its own hackers to go over the same Windows source code and then tell Microsoft how to secure its product?

    Read the article
