Search Results

Search found 54019 results on 2161 pages for 'asp net weblogs'.


  • Different types of Session state management options available with ASP.NET

    - by Aamir Hasan
    ASP.NET provides In-Process and Out-of-Process session state management. In-Process stores the session in memory on the web server. This requires a "sticky server" (or no load balancing) so that the user is always reconnected to the same web server. Out-of-Process session state management stores data in an external data source, which may be either a SQL Server or a State Server service. Out-of-Process state management requires that all objects stored in session are serializable. Link: http://msdn.microsoft.com/en-us/library/ms178586%28VS.80%29.aspx
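    A minimal sketch of that serializability requirement (the Cart class here is hypothetical, not from the linked article): with either Out-of-Process mode, anything placed in Session must be marked [Serializable] or storage fails at runtime.

        using System;
        using System.Web;

        [Serializable] // required once session state moves to StateServer or SQLServer
        public class Cart
        {
            public int ItemCount { get; set; }
        }

        // Usage is identical in all modes; only the storage location changes:
        // HttpContext.Current.Session["cart"] = new Cart { ItemCount = 2 };
        // var cart = (Cart)HttpContext.Current.Session["cart"];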

    Read the article

  • Membership in ASP.Net applications - part 1

    - by nikolaosk
    So far in all my posts, I have never mentioned anything about how to implement authentication/authorisation mechanisms in a web site. In all our professional web applications we need some sort of mechanism to verify who our users are and what privileges they have in our site. This is the first post in a series investigating how to implement membership (authentication + authorisation) in ASP.Net applications. We will look into the built-in web server security controls. We will look at the built...(read more)
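    As a rough taste of the built-in API this series covers (a hedged sketch; the SignIn wrapper is hypothetical, but Membership.ValidateUser and FormsAuthentication.SetAuthCookie are the standard ASP.NET membership calls):

        using System.Web.Security;

        public static bool SignIn(string userName, string password)
        {
            // Validate credentials against the configured membership provider
            if (!Membership.ValidateUser(userName, password))
                return false;

            // Issue the forms-authentication ticket (non-persistent cookie)
            FormsAuthentication.SetAuthCookie(userName, false);
            return true;
        }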

    Read the article

  • Asp.Net 1 -> Asp.Net 2 upgrade - Machine.Config - unrecognized parameter

    - by Chris
    Hi All, I am working on upgrading a web app from ASP.NET 1 to ASP.NET 2. VS 2008 did its conversion, everything is building successfully, and the project has been converted to a web application via the appropriate menu item in VS 2008. On launching the site using the ASP.NET development server, I am receiving a configuration error on the appSettings line in machine.config: Unrecognized attribute 'restartOnExternalChanges'. The app targets ASP.NET 2 in the project's properties in VS, and the error page indicates the same: Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053. The error message seems to indicate I am trying to run this in an ASP.NET 1 environment, but surely that isn't the case; and if it is, how do I rectify it? Any help would be appreciated. Thanks,

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
    If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access the content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so users must also have their user profile filled.

    Semi-authorized users

    When a user is authenticated through an external identity provider, not all identity providers give us the user name or other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. For example, users authenticated through Windows Live ID by AppFabric ACS have no name specified, while Google's identity provider is able to provide user name and e-mail address if the user agrees to publish this information to you. Both give you the unique ID of the user when the user is successfully authenticated in their service. There is a logical shift between ASP.NET and my site when considering a user as authorized: for ASP.NET MVC a user is authorized when the user has an identity; for my site a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system, and he or she is always identified by this username by other users. My solution is simple: I created my own action filter attribute that checks whether the user has a profile before allowing access to a given method, and if the user has no profile the browser is redirected to the join page.

    Illustrating the problem

    Usually we restrict access to a page using AuthorizeAttribute. The code is something like this:

        [Authorize]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    If this page is only for site users and we have user profiles, then all users – the ones that have a profile and all the others that are just authenticated – can access the information. It is okay because all these users have successfully logged in to some service that is supported by AppFabric ACS. In my site, the users with no profile are in a grey spot: they are halfway to being users because they have no username and profile on my site yet. So we need something that adds a profile-existence condition to user-only content:

        [ProfileRequired]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    Now this attribute will solve our problem as soon as we implement it.

    ProfileRequiredAttribute: profiles are required to be fully authorized

    Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now it is more of a working draft, but you can already play with it.
        public class ProfileRequiredAttribute : AuthorizeAttribute
        {
            private readonly string _redirectUrl;

            public ProfileRequiredAttribute()
            {
                _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];

                if (string.IsNullOrWhiteSpace(_redirectUrl))
                    _redirectUrl = "~/";
            }

            public override void OnAuthorization(AuthorizationContext filterContext)
            {
                base.OnAuthorization(filterContext);

                var httpContext = filterContext.HttpContext;
                var identity = httpContext.User.Identity;

                if (!identity.IsAuthenticated || identity.GetProfile() == null)
                    if (filterContext.Result == null)
                        httpContext.Response.Redirect(_redirectUrl);
            }
        }

    All methods with this attribute work as follows: if the user is not authenticated, he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile, the user is by default redirected to the main page of the site, but if you have an application setting named JoinUrl the user is redirected to this URL instead. The first case is handled by AuthorizeAttribute and the second one by the custom logic in the ProfileRequiredAttribute class.

    GetProfile() extension method

    To get the user profile using less code in places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers out of claims, and based on this information the user profile is read from the database. If you take this code with copy and paste I am sure it doesn't work for you, but you get the idea.

        public static User GetProfile(this IIdentity identity)
        {
            if (identity == null)
                return null;

            var context = HttpContext.Current;
            if (context.Items["UserProfile"] != null)
                return context.Items["UserProfile"] as User;

            var provider = identity.GetIdentityProvider();
            var nameId = identity.GetNameIdentifier();

            var rep = ObjectFactory.GetInstance<IUserRepository>();
            var profile = rep.GetUserByProviderAndNameId(provider, nameId);

            context.Items["UserProfile"] = profile;

            return profile;
        }

    To avoid round trips to the database I cache the user profile in the current request, because the chance that the profile gets changed meanwhile is very minimal. The other reason is maybe more tricky: profile objects come from an Entity Framework context, and that context also has the HTTP request as its lifecycle.

    Conclusion

    This posting gave you some ideas on how to finish the user profile work when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns the user profile based on the user name and caches the profile in HTTP request scope.

    Read the article

  • Google Chrome Loses ASP.NET Sessions - Need FavIcon

    - by nannette
    I had programmed a brilliant web page in ASP.NET 4.0 and, lo and behold, this one page lost its sessions in Google Chrome. I could run it locally in debug and could not reproduce the issue in Chrome. It didn't happen in IE or Firefox, only in Chrome on the published server. I finally found a forum where someone mentioned that Google Chrome requests a favicon, and if it doesn't find one it will throw a 302 redirect and kill the session. http://stackoverflow.com/questions/8247842/session-data-lost-in-chrome...(read more)

    Read the article

  • Spring.NET and ADO.NET Entity Data Model

    - by Jason
    Having defined an ADO.NET Entity Data Model, I can then instantiate it in a Repository class to query against the database.

        using (ApplicationEntities ctx = new ApplicationEntities())
        {
            // query, CRUD, etc
        }

    However, that particular line of code becomes boilerplate in most of the methods in the repository class. Is it possible to just use Spring.NET to inject the Entity Data Model, either in the class or, even better, in an abstract parent class that all the repositories inherit from?
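    One possible shape of the answer, sketched under the assumption that the container can perform property injection (RepositoryBase, ContextFactory and WithContext are hypothetical names, not Spring.NET APIs): inject a factory delegate into an abstract parent class, so each method still gets a fresh, properly disposed context.

        using System;

        public abstract class RepositoryBase
        {
            // The container injects this factory; repositories never new up the context
            public Func<ApplicationEntities> ContextFactory { get; set; }

            // Wraps the using-block boilerplate once for all derived repositories
            protected TResult WithContext<TResult>(Func<ApplicationEntities, TResult> work)
            {
                using (ApplicationEntities ctx = ContextFactory())
                {
                    return work(ctx);
                }
            }
        }

    Injecting a factory rather than a single shared context instance keeps each context's lifetime short, which matters because the context is not thread-safe and is cheap to create per call.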

    Read the article

  • RESTful .NET and protobuf-net

    - by rxm0203
    Is it possible to use protobuf-net in RESTful web services using the WCF REST Starter Kit or OpenRasta? If it is possible, are there any examples or code snippets available? I am creating a .NET web service which will be consumed by a Java client.

    Read the article

  • Default.aspx with IIS 6.0 and .Net 4?

    - by Amitabh
    We have deployed a .NET 4 ASP.NET site on IIS 6.0. Default.aspx is configured as one of the default documents. When we access the site using the URL http://testsite we expect it to render http://testsite/Default.aspx, but instead we get a 404 Not Found error. We did not have this issue when the site was deployed on .NET 2.0. The only thing that has changed on the server is that we now use .NET 4 instead of .NET 2.

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this content injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

    Enter Response.Filter

    However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

    A common example: dynamic GZip encoding

    A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect the GZip capability of the client and compress response output with a single line of code:

        WebUtils.GZipEncodePage();

    which is handled with a few lines of reusable code and a couple of static helper methods:

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter.
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
            }

            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Accept-Encoding");
        }

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

    GZipStream and DeflateStream are streams that are assigned to Response.Filter, and by doing so apply the appropriate compression on the active Response.

    Response.Filter content is chunked

    So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write(), transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() statement into the stream, which makes perfect sense for ASP.NET: streaming data back to IIS in smaller chunks minimizes memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required: one that captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle the stream transformations that are common to Response.Filter implementations.

    Creating a semi-generic Response Filter Stream class

    What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
    The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it's easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:

    CaptureStream, CaptureString – capture the output only and provide either a MemoryStream or string with the final page output. Capture is hooked to the Flush() operation of the stream.

    TransformStream, TransformString – allow you to transform the complete response output with events that receive a MemoryStream or string respectively; you can modify the output and return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first and then write the entire buffer into the response.

    TransformWrite, TransformWriteString – allow you to transform the Response data as it is written, in its original chunk size, in the stream's Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there's no caching involved, but it can cause problems when searched content splits over multiple chunks.

    Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version of the event:

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformStream += filter_TransformStream;
        Response.Filter = filter;

    and the event handler to do the transformation:

        MemoryStream filter_TransformStream(MemoryStream ms)
        {
            Encoding encoding = HttpContext.Current.Response.ContentEncoding;
            string output = encoding.GetString(ms.ToArray());

            output = FixPaths(output);

            ms = new MemoryStream(output.Length);
            byte[] buffer = encoding.GetBytes(output);
            ms.Write(buffer, 0, buffer.Length);

            return ms;
        }

        private string FixPaths(string output)
        {
            string path = HttpContext.Current.Request.ApplicationPath;

            // override root path wonkiness
            if (path == "/")
                path = "";

            output = output.Replace("\"~/", "\"" + path + "/")
                           .Replace("'~/", "'" + path + "/");
            return output;
        }

    The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one, modified, or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformString += filter_TransformString;
        Response.Filter = filter;

    and the event handler to do the transformation, calling the same FixPaths method shown above:

        string filter_TransformString(string output)
        {
            return FixPaths(output);
        }

    The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component and we don't have to create a new stream class or subclass an existing Stream-based class. By the way, the example used here is kind of a cool trick: it transforms "~/" expressions inside of the final generated HTML output – even in plain HTML markup, not just server controls – into the appropriate application-relative path, in the same way that ResolveUrl would do.
    So you can write plain old HTML like this:

        <a href="~/default.aspx">Home</a>

    and have it turned into:

        <a href="/myVirtual/default.aspx">Home</a>

    without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

        <img src="<%= ResolveUrl("~/images/home.gif") %>" />

    in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea: while discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then rewrites it later. Make sure to test for performance anytime you use a Response.Filter hookup and make sure it doesn't end up killing perf on you. You've been warned :-}.

    How does ResponseFilterStream work?

    The big win of this implementation IMHO is that it's a reusable component – for implementation there's no new class, no subclassing – you simply attach an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole, to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass on the Write immediately to the filter's internal stream. The data is cached, and only when the Flush() method is called to finalize the stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here's the implementation of ResponseFilterStream:
        /// <summary>
        /// A semi-generic Stream implementation for Response.Filter with
        /// an event interface for handling content transformations via
        /// Stream or String.
        /// <remarks>
        /// Use with care for large output as this implementation copies
        /// the output into a memory stream and so increases memory usage.
        /// </remarks>
        /// </summary>
        public class ResponseFilterStream : Stream
        {
            /// <summary>
            /// The original stream
            /// </summary>
            Stream _stream;

            /// <summary>
            /// Current position in the original stream
            /// </summary>
            long _position;

            /// <summary>
            /// Stream that original content is read into
            /// and then passed to the TransformStream function
            /// </summary>
            MemoryStream _cacheStream = new MemoryStream(5000);

            /// <summary>
            /// Internal pointer that keeps track of the size of the cacheStream
            /// </summary>
            int _cachePointer = 0;

            public ResponseFilterStream(Stream responseStream)
            {
                _stream = responseStream;
            }

            /// <summary>
            /// Determines whether the stream is captured
            /// </summary>
            private bool IsCaptured
            {
                get
                {
                    if (CaptureStream != null || CaptureString != null ||
                        TransformStream != null || TransformString != null)
                        return true;
                    return false;
                }
            }

            /// <summary>
            /// Determines whether the Write method is outputting data immediately
            /// or delaying output until Flush() is fired.
            /// </summary>
            private bool IsOutputDelayed
            {
                get
                {
                    if (TransformStream != null || TransformString != null)
                        return true;
                    return false;
                }
            }

            /// <summary>
            /// Event that captures Response output and makes it available
            /// as a MemoryStream instance. Output is captured but won't
            /// affect Response output.
            /// </summary>
            public event Action<MemoryStream> CaptureStream;

            /// <summary>
            /// Event that captures Response output and makes it available
            /// as a string. Output is captured but won't affect Response output.
            /// </summary>
            public event Action<string> CaptureString;

            /// <summary>
            /// Event that allows you to transform the stream as each chunk of
            /// the output is written in the Write() operation of the stream.
            /// This means it's possible/likely that the input buffer will not
            /// contain the full response output, but only one of potentially
            /// many chunks.
            ///
            /// This event is called as part of the filter stream's Write()
            /// operation.
            /// </summary>
            public event Func<byte[], byte[]> TransformWrite;

            /// <summary>
            /// Event that allows you to transform the response stream as
            /// each chunk of byte[] output is written during the stream's
            /// Write operation. This means it's possible/likely that the
            /// string passed to the handler only contains a portion of the
            /// full output. Typical buffer chunks are around 16k a piece.
            ///
            /// This event is called as part of the stream's Write operation.
            /// </summary>
            public event Func<string, string> TransformWriteString;

            /// <summary>
            /// This event allows capturing and transformation of the entire
            /// output stream by caching all write operations and delaying final
            /// response output until Flush() is called on the stream.
            /// </summary>
            public event Func<MemoryStream, MemoryStream> TransformStream;

            /// <summary>
            /// Event that can be hooked up to handle Response.Filter
            /// transformation. Passed a string that you can modify and
            /// return back as a return value. The modified content
            /// will become the final output.
            /// </summary>
            public event Func<string, string> TransformString;

            protected virtual void OnCaptureStream(MemoryStream ms)
            {
                if (CaptureStream != null)
                    CaptureStream(ms);
            }

            private void OnCaptureStringInternal(MemoryStream ms)
            {
                if (CaptureString != null)
                {
                    string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                    OnCaptureString(content);
                }
            }

            protected virtual void OnCaptureString(string output)
            {
                if (CaptureString != null)
                    CaptureString(output);
            }

            protected virtual byte[] OnTransformWrite(byte[] buffer)
            {
                if (TransformWrite != null)
                    return TransformWrite(buffer);
                return buffer;
            }

            private byte[] OnTransformWriteStringInternal(byte[] buffer)
            {
                Encoding encoding = HttpContext.Current.Response.ContentEncoding;
                string output = OnTransformWriteString(encoding.GetString(buffer));
                return encoding.GetBytes(output);
            }

            private string OnTransformWriteString(string value)
            {
                if (TransformWriteString != null)
                    return TransformWriteString(value);
                return value;
            }

            protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
            {
                if (TransformStream != null)
                    return TransformStream(ms);
                return ms;
            }

            /// <summary>
            /// Allows transforming of strings.
            ///
            /// Note this handler is internal and not meant to be overridden,
            /// since the TransformString event has to be hooked up in order
            /// for this handler to even fire, to avoid the overhead of string
            /// conversion on every pass through.
            /// </summary>
            private string OnTransformCompleteString(string responseText)
            {
                if (TransformString != null)
                    responseText = TransformString(responseText);  // assign the result (missing in the original draft)
                return responseText;
            }

            /// <summary>
            /// Wrapper method for OnTransformCompleteString that handles
            /// stream-to-string and vice versa conversions
            /// </summary>
            internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
            {
                if (TransformString == null)
                    return ms;

                string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                content = TransformString(content);
                byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);

                ms = new MemoryStream();
                ms.Write(buffer, 0, buffer.Length);

                return ms;
            }

            public override bool CanRead
            {
                get { return true; }
            }

            public override bool CanSeek
            {
                get { return true; }
            }

            public override bool CanWrite
            {
                get { return true; }
            }

            public override long Length
            {
                get { return 0; }
            }

            public override long Position
            {
                get { return _position; }
                set { _position = value; }
            }

            public override long Seek(long offset, System.IO.SeekOrigin direction)
            {
                return _stream.Seek(offset, direction);
            }

            public override void SetLength(long length)
            {
                _stream.SetLength(length);
            }

            public override void Close()
            {
                _stream.Close();
            }

            /// <summary>
            /// Override Flush by writing out the cached stream data
            /// </summary>
            public override void Flush()
            {
                if (IsCaptured && _cacheStream.Length > 0)
                {
                    // Check for transform implementations
                    _cacheStream = OnTransformCompleteStream(_cacheStream);
                    _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

                    OnCaptureStream(_cacheStream);
                    OnCaptureStringInternal(_cacheStream);

                    // Write the stream back out if output was delayed
                    if (IsOutputDelayed)
                        _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

                    // Clear the cache once we've written it out
                    _cacheStream.SetLength(0);
                }

                // Default flush behavior
                _stream.Flush();
            }

            public override int Read(byte[] buffer, int offset, int count)
            {
                return _stream.Read(buffer, offset, count);
            }

            /// <summary>
            /// Overridden to capture output written by ASP.NET into a cached
            /// stream that is written out later when Flush() is called.
            /// </summary>
            public override void Write(byte[] buffer, int offset, int count)
            {
                if (IsCaptured)
                {
                    // Copy to the holding buffer only - we'll write out later
                    _cacheStream.Write(buffer, 0, count);
                    _cachePointer += count;
                }

                // Just transform this buffer
                if (TransformWrite != null)
                    buffer = OnTransformWrite(buffer);
                if (TransformWriteString != null)
                    buffer = OnTransformWriteStringInternal(buffer);

                if (!IsOutputDelayed)
                    _stream.Write(buffer, offset, buffer.Length);
            }
        }

    The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane pass-through stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

    A few things to consider

    Performance: Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

    Response.Filter works everywhere: a few questions came up in comments and discussion as to capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType.

    Response.Filter chaining: originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
    So the following actually works to both compress the output and apply the transformed content:

        WebUtils.GZipEncodePage();

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformString += filter_TransformString;
        Response.Filter = filter;

    However, the following does not work, resulting in invalid content encoding errors:

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformString += filter_TransformString;
        Response.Filter = filter;

        WebUtils.GZipEncodePage();

    In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, or in which order they can be chained. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified. But attaching the compression first works fine, as unintuitive as that may seem.

    Resources: Download example code | Capture Output from ASP.NET Pages

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Running an intern program

    - by dotneteer
    This year I am running an unpaid internship program for high school students. I work for a small company. We have ideas for a few side projects but never have time to do them, so we experiment by making them intern projects. In return, we give these interns guidance to learn, personal attention, and opportunities with real-world projects. A few years ago, I blogged about the idea of teaching kids to write applications with no more than 6 hours of training. This time, I was able to reduce the instruction time to 4 hours and immediately put them into real-world projects. When they encounter problems, I combine direction, pointers to various materials on w3schools, Udacity, Codecademy and YouTube, as well as encouraging them to search for solutions with search engines. Now entering the third week, I am more than encouraged and feel accomplished. Our most senior intern, Christopher Chen, is a recent high school graduate and is heading to UC Berkeley to study computer science after the summer. He previously had only one year of Java experience through the AP computer science course, and no web development experience. Only 12 days into his internship, he has already gained advanced CSS skills, with deeper understanding than more than half of the "senior" developers I have ever worked with. I put him on a project to migrate an existing website to the Orchard content management system (CMS), with which I am new as well. We were able to teach each other and quickly gain advanced Orchard skills such as creating custom themes and modules. I felt very much a relationship similar to those between professors and graduate students. On the other hand, I quite expect that I will lose him next summer to companies like Google, Facebook or Microsoft. As a side note, Christopher and I will do a two-part Orchard presentation together at the next SoCal Code Camp at UC San Diego, July 27-28. The first part, "Creating an Orchard website on Azure in 60 minutes", is an introductory lecture in which we will discuss how to create a website using Orchard without writing code. The second part, "Customizing Orchard websites without limit", is an advanced lecture in which we will discuss custom theme and module development with WebMatrix and Visual Studio.

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers, that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter.
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("gzip"))
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
            }

            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Accept-Encoding");
        }

    The first method checks whether the client sending the request includes the accept-encoding header for either gzip or deflate, and if it does it returns true. The second method uses IsGZipSupported() to decide whether it should encode content, and uses a Response.Filter to do its job. Basically response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

        public ActionResult List(string keyword = null, int category = 0)
        {
            WebUtils.GZipEncodePage();
            …
        }

    to encode my content. And that works just fine.

    The proper way: create an ActionFilterAttribute

    However, in MVC this sort of thing is typically better handled by an ActionFilter, which can be applied with an attribute. So to be all prim and proper I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and looks like this:

        /// <summary>
        /// Attribute that can be added to controller methods to force content
        /// to be GZip encoded if the client supports it
        /// </summary>
        public class CompressContentAttribute : ActionFilterAttribute
        {
            /// <summary>
            /// Override to compress the content that is generated by
            /// an action method.
            /// </summary>
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                GZipEncodePage();
            }

            // IsGZipSupported() and GZipEncodePage() are included in the class
            // verbatim, exactly as shown at the top of this post.
        }

    It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the controller action is fired. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

        [CompressContent]
        public class ClassifiedsController : ClassifiedsBaseController
        {
            …
        }

    or to one of the action methods:

        [CompressContent]
        public ActionResult List(string keyword = null, int category = 0)
        {
            …
        }

    The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact, you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

    Compression caveats

    HTTP compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it – especially if your content might get transformed or redirected inside of ASP.NET. A good example is when an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but do clear the Response headers, which results in some nasty garbage because the compressed content now no longer matches the headers. Another issue is caching, which has to account for all the possible ways the content is served, compressed and uncompressed. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output GZip encoded.
    None of these are show stoppers, but you have to be aware of the issues.

    Related Posts: GZip Compression with ASP.NET Content | ASP.NET GZip Encoding Caveats

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET MVC

    Read the article

  • .NET Reflector Pro Coming…

    The very best software is almost always originally the creation of a single person. Readers of our 'Geek of the Week' will know of a few of them. Even behemoths such as MS Word or Excel started out with one programmer. There comes a time with any software that it starts to grow up, and has to move from this form of close parenting to being developed by a team. This has happened several times within Red-Gate: SQL Refactor, SQL Compare, and SQL Dependency Tracker, not to mention SQL Backup, were all originally the work of a lone coder, who subsequently handed over the development to a structured team of programmers, test engineers and usability designers. Because we loved .NET Reflector when Lutz Roeder wrote and nurtured it, and, like many other .NET developers, used it as a development tool ourselves, .NET Reflector's progress from being the apple of Lutz's eye to being a Red-Gate team-based development seemed natural. Lutz, after all, eventually felt he couldn't afford the time to develop it to the extent it deserved. Why, then, did we want to take on .NET Reflector? Different people may give you different answers, but for us in the .NET team, it just seemed a natural progression. We're always very surprised when anyone suggests that we want to change the nature of the tool, since it seems right just as it is. .NET Reflector will stay very much the tool we all use and appreciate, although the new version will support .NET 4, and will have many improvements in the accuracy of its decompiling. Whilst we've made a lot of improvements to Reflector, the radical addition, which we hope you'll want to try out as well, is '.NET Reflector Pro'. This is an extension to .NET Reflector that allows the debugging of decompiled code using the Visual Studio debugger. It is an add-in, but we'll be charging for it, mainly because we prefer to live indoors with a warm meal, rather than outside in tents, particularly when the winter's been as cold as this one has. We're hoping (we're even pretty confident!) that you'll share our excitement about .NET Reflector Pro. .NET Reflector Pro integrates .NET Reflector into Visual Studio, allowing you to seamlessly debug into third-party code and assemblies, even if you don't have the source code for them. You can now treat decompiled assemblies much like your own code: you can step through them and use all the debugging techniques that you would use on your own code. Try the beta now.

    Read the article

  • How to structure a XML-based order form using ASP.NET

    - by Brendan
    First question here; please help me if I'm doing something wrong. I'm a graphic designer who's trying to teach himself ASP.NET/C#. My server-side background is PHP/WordPress and some ASP Classic, and when I code I've hand-coded just about everything since I started learning HTML. So, as I've started to learn .NET, my code has been very manual and procedural. I'm now trying to create a really basic order form that pulls from an XML file to populate the form; there's an image, a title, a price, and selectable quantities. If I was making this form as a static HTML file, I'd name each field manually, and on postback I could query each field to get the values. But I'm trying to do this dynamically so that I can add/remove items from the form and not have to change the code. In terms of displaying the XML, I rolled my own by loading an XmlDocument and using XmlNodeList and a bunch of foreach loops to get things displayed. Then I learned about <asp:XmlDataSource> and <asp:Repeater>, which made displaying the XML simpler by a large margin. However, I've had a really hard time getting the data that's been submitted on postback (it was implied on SO that there are better ways to get data than nested RepeaterItems). So, since you can do things a bunch of different ways in .NET, I thought it'd be good to ask for answers regarding the best way to use ASP.NET to display an XML document and dynamically capture the data that's submitted. Any help is appreciated! I'm using Notepad++ to code .NET 2.0.
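    One minimal sketch of the dynamic-capture side (the "qty_" naming convention is hypothetical): if each quantity input is rendered with a name built from the XML item id, postback handling can scan Request.Form instead of referencing fields by name. Contains() is used rather than StartsWith() because ASP.NET prefixes control names with container IDs.

        // Inside the page's submit handler:
        foreach (string key in Request.Form.AllKeys)
        {
            if (key == null || !key.Contains("qty_"))
                continue;

            // Recover the item id that was appended to the input name
            string itemId = key.Substring(key.IndexOf("qty_") + "qty_".Length);

            int quantity;
            if (int.TryParse(Request.Form[key], out quantity) && quantity > 0)
            {
                // itemId/quantity pair captured without hard-coding any field names
            }
        }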

    Read the article

  • MvcExtensions - PerRequestTask

    - by kazimanzurrashid
    In the previous post we saw the BootstrapperTask, which executes when the application starts and ends. Similarly, there are times when we need to execute some custom logic when a request starts and ends. Usually, for this kind of scenario we create an HttpModule and hook the begin and end request events. There is nothing wrong with this approach, except that HttpModules are not at all IoC-container friendly, and defining the HttpModule execution order is a bit cumbersome: you either have to modify machine.config or clear the HttpModules and add them again in web.config. Instead, you can use the PerRequestTask, which is very much container friendly and supports execution order. Let's see a few examples of where it can be used.

    Remove www subdomain

    Let's say we want to remove the www subdomain, so that if anybody types http://www.mydomain.com they are automatically redirected to http://mydomain.com.

        public class RemoveWwwSubdomain : PerRequestTask
        {
            public RemoveWwwSubdomain()
            {
                Order = DefaultOrder - 1;
            }

            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                const string Prefix = "http://www.";

                Check.Argument.IsNotNull(executionContext, "executionContext");

                HttpContextBase httpContext = executionContext.HttpContext;
                string url = httpContext.Request.Url.ToString();
                bool startsWith3W = url.StartsWith(Prefix, StringComparison.OrdinalIgnoreCase);
                bool shouldContinue = true;

                if (startsWith3W)
                {
                    string newUrl = "http://" + url.Substring(Prefix.Length);

                    HttpResponseBase response = httpContext.Response;

                    response.StatusCode = (int)HttpStatusCode.MovedPermanently;
                    response.Status = "301 Moved Permanently";
                    response.RedirectLocation = newUrl;
                    response.SuppressContent = true;

                    shouldContinue = false;
                }

                return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
            }
        }

    As you can see, first we set the order so that we do not have to execute the remaining tasks of the chain when we are redirecting. Next, in ExecuteCore, we check whether www is present; if it is, we send a permanently-moved HTTP status code and break the task execution chain, otherwise we continue with the chain.

    Blocking IP addresses

    Let's take another scenario: your application is hosted in a shared hosting environment where you do not have permission to change IIS settings, and you want to block certain IP addresses from visiting your application. Let's say you maintain a list of IP addresses in a database/XML file, and you have an IBannedIPAddressRepository service which is used to match banned IP addresses.

        public class BlockRestrictedIPAddress : PerRequestTask
        {
            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                bool shouldContinue = true;

                HttpContextBase httpContext = executionContext.HttpContext;

                if (!httpContext.Request.IsLocal)
                {
                    string ipAddress = httpContext.Request.UserHostAddress;
                    HttpResponseBase httpResponse = httpContext.Response;

                    if (executionContext.ServiceLocator.GetInstance<IBannedIPAddressRepository>().IsMatching(ipAddress))
                    {
                        httpResponse.StatusCode = (int)HttpStatusCode.Forbidden;
                        httpResponse.StatusDescription = "IPAddress blocked.";

                        shouldContinue = false;
                    }
                }

                return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
            }
        }

    Managing database sessions

    Now let's see how it can be used to manage an NHibernate session, assuming that NHibernate's ISessionFactory is already registered in our container.

        public class ManageNHibernateSession : PerRequestTask
        {
            private ISession session;

            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                ISessionFactory factory = executionContext.ServiceLocator.GetInstance<ISessionFactory>();
                session = factory.OpenSession();

                return TaskContinuation.Continue;
            }

            protected override void DisposeCore()
            {
                session.Close();
                session.Dispose();
            }
        }

    As you can see, PerRequestTask can be used to execute small and precise tasks at begin/end request; certainly, if you want to hook anything other than begin/end request, there is no alternative to an HttpModule. That's it for today. In the next post we will discuss the Action Filters, so stay tuned.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    Ever want to run a service-like, always-on application inside of ASP.NET instead of creating a Windows Service or running a Console application? Need to make sure that your ASP.NET application is always running and comes up immediately after an Application Pool restart even if nobody hits your site? The IIS Application Initialization Module provides this functionality in IIS 7 and later, making it much easier to create always-on ASP.NET applications that can act like a service.
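    To make the scenario concrete, here is a hedged sketch of the kind of service-like worker such an always-on app might host (QueueMonitor is hypothetical; the Application Initialization module itself is enabled through IIS configuration, not code). Started once from Application_Start, it keeps running for as long as the module keeps the application pool warm.

        using System;
        using System.Threading;

        public static class QueueMonitor
        {
            private static Timer _timer;

            // Call from Application_Start; the worker then survives as long as the app stays loaded
            public static void Start()
            {
                _timer = new Timer(_ => ProcessPendingWork(), null,
                                   TimeSpan.Zero, TimeSpan.FromSeconds(30));
            }

            private static void ProcessPendingWork()
            {
                // Poll a queue, ping services, rebuild caches, etc.
            }
        }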

    Read the article

  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you're interested in Silverlight you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN). If you're running an external-facing site then checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit as well as the new Ajax Minifier tool that's available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk, which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data-linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions, since some of the topics covered are pretty bleeding edge right now. The slides and sample code for the talk can be downloaded below: Download Slides and Samples

    Read the article

  • jQuery AJAX Validation Using The Validity Plugin

    - by schnieds
    Input validation is one of those areas that most developers view as a necessary evil. We know that it is necessary and we really do want to ensure that we get good input from our users. But most of us are lazy (me included), and input validation is one of those things that gets done but usually as a quick and dirty implementation. This is partly due to laziness and partly due to input validation being painful. Thanks to the amazing jQuery Validity plugin, input validation can be really slick, easy and robust enough to work in any scenario. I specifically like the Validity plugin because it supports jQuery AJAX input validation. Other input validation implementations that I have worked with require a form post to take place. However, if you are using the jQuery.ajax methods then there isn't a form, and you need to validate the formless input. [Read More] Aaron Schnieder, http://www.churchofficeonline.com

    Read the article

  • Web Platform Installer 2.0 and Visual Studio Web Developer 2010 Express

    - by The Official Microsoft IIS Site
    I was setting up a new machine for presentations and I was getting ready to install Visual Studio 2010 Express, and figured I'd go see if the Web Platform Installer (we call it "Web-P-I") had the new versions of VS2010 ready to go. If you're not familiar, I've blogged about this before. WebPI is a 2meg download that basically sets up your machine for Web Development and downloads whatever you need automatically. It's a cafeteria plan for Microsoft Web Development....(read more)

    Read the article

  • Code refactoring with Visual Studio 2010 Part-2

    - by Jalpesh P. Vadgama
    In the previous post I wrote about the Extract Method code refactoring option. In this post I am going to cover some other code refactoring features of Visual Studio 2010. Renaming variables and methods is one of the more difficult tasks for a developer. Normally we do it like this: first we rename the method or variable, then we find all the references and fix each one up. This becomes difficult if your variable or method is referenced in many files and many places. But once you use the Refactor menu's Rename, it becomes a bit easier. I am going to use the same code I created in my previous post. I am just putting that code here once again for your reference.

        using System;

        namespace CodeRefractoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    Print(firstName, lastName);
                }

                private static void Print(string firstName, string lastName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    Now I want to rename the Print method in this code. To rename the method, select the method name and then choose Refactor -> Rename. Once I select the Print method and click Rename, a dialog box appears, and I rename the Print method to PrintMyName. Once you click OK, a dialog appears with a preview of the code. Once you click Apply, your code is changed like the following.

        using System;

        namespace CodeRefractoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    PrintMyName(firstName, lastName);
                }

                private static void PrintMyName(string firstName, string lastName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    So that's it. This will work across multiple files also. Hope you liked it. Stay tuned for more. Till then, Happy Programming.

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard locking up issue in Visual Studio.  So far it looks like not a lot of people have replied to provide concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices.  I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I’m going to do both, so while I’m waiting for more possible bug reports, I’m going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I’ve chosen to go with this path since given the information in the bug reports, it seems people hit the issue with many different configurations in many different scenarios.  This means that me sitting down without any solid repro steps is likely not going to be a good use of time.  Instead, I’m going to go with a model-based testing approach where I will define a series of actions that a user in VS can do, and then proceed to run my model.  I’ll let you guys know how this works out for isolating bugs :) I’m using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off).  One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure means.

    Read the article
