Search Results

Search found 53998 results on 2160 pages for 'asp net webapi'.

  • ASP.NET 4.0 - Menu control enhancement

    - by Jalpesh P. Vadgama
    Up to ASP.NET 3.5, the ASP.NET Menu control was rendered as an HTML table, and we all know how hard it is to style tables with CSS; for a professional-looking website, CSS is a must. In ASP.NET 4.0 the Menu control is table-less: it renders UL and LI tags, which are far easier to manage through CSS. Another problem with table rendering is that it produces a lot of HTML, which increases your page size and hurts performance, while UL and LI markup is much shorter, so your page size goes down as well. Let's take a simple example and create a Menu control with four menu items:

        <asp:Menu ID="myCustomMenu" runat="server">
            <Items>
                <asp:MenuItem Text="Menu1" Value="Menu1"></asp:MenuItem>
                <asp:MenuItem Text="Menu2" Value="Menu2"></asp:MenuItem>
                <asp:MenuItem Text="Menu3" Value="Menu3"></asp:MenuItem>
                <asp:MenuItem Text="Menu4" Value="Menu4"></asp:MenuItem>
            </Items>
        </asp:Menu>

    In ASP.NET 3.5 the page source shows this menu rendered with nested tables. In ASP.NET 4.0 the same menu is rendered with UL and LI tags, and the page source contains much less HTML than before. Isn't that a great performance enhancement? If you still prefer the old table-based rendering, ASP.NET 4.0 adds a property called 'RenderingMode': set RenderingMode="Table" and the menu control will render with tables; otherwise it renders with UL and LI tags. That's it. Stay tuned for more. Happy programming. Technorati Tags: Menu,ASP.NET 4.0
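    If you prefer to make the switch from code rather than markup, here is a minimal code-behind sketch. It assumes a page class named _Default containing the myCustomMenu control declared above; MenuRenderingMode is the .NET 4 enum behind the new property:

        using System;
        using System.Web.UI.WebControls;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // List (the ASP.NET 4.0 default) emits UL/LI markup;
                // Table falls back to the pre-4.0 table-based rendering.
                myCustomMenu.RenderingMode = MenuRenderingMode.Table;
            }
        }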

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
    Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library. In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build. Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed. It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] (all of which are built into .NET 4) to your model classes in order to enforce that the model properties are valid before they are persisted to a database. You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well. This provides a really easy way to validate property values on your models. I showed some code samples of this in action in my previous post.

    Class-Level Model Validation using IValidatableObject: DataAnnotation attributes provide an easy way to validate individual property values on your model classes. Several people have asked, "Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules that need to span multiple property values?" It does, and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes.

    IValidatableObject.Validate() Method: Below is an example of using the IValidatableObject interface (which is built into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class. The two rules ensure that: new units can't be ordered if the Product is in a discontinued state, and new units can't be ordered if there are already more than 100 units in stock. We enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method (the code appears as a screenshot in the original post; a reconstruction follows at the end of this excerpt). The IValidatableObject.Validate() method can apply validation rules that span multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within the UI).

    Automatic Validation Enforcement: EF Code First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved. You do not need to write any code to cause this to happen; this support is enabled by default. This means that code which violates one of our business rules above will automatically throw an exception (and abort the transaction) when we call the "SaveChanges()" method on our Northwind DbContext. In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors. Starting with CTP5, you can call the "GetValidationErrors()" method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with. GetValidationErrors() will return a list of all validation errors, regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation. The original post shows an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges().

    ASP.NET MVC 3 and IValidatableObject: ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC's model binding infrastructure. ASP.NET MVC 3 goes further and also honors the IValidatableObject interface. This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur. To see this in action, let's consider a simple Create form that allows users to create a new Product. We can implement the Create functionality using a ProductsController class that has two "Create" action methods. The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests and displays the HTML form to fill out. The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests: it takes the posted form data, ensures that it is valid, and if so saves it in the database. If there are validation issues it redisplays the form with the posted values. The Razor view template of our "Create" view (which renders the form) is shown in the original post. One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it. The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether the model is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model's DataAnnotations and IValidatableObject.Validate() method returned. We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: when we hit the "Create" button we get an error message, because we ticked the "Discontinued" checkbox while also entering a value for UnitsOnOrder (and so violated one of our business rules). You might ask: how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox? It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the "UnitsOnOrder" property should be highlighted when that rule was violated, and our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of that property name hint.

    Keeping things DRY: ASP.NET MVC and EF Code First enable you to keep your validation and business rules in one place (within your model layer), and avoid having them creep into your Controllers and Views. Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application. It allows you to quickly change your business rules/validation logic in one single place (within your model layer) and have all controllers/views across your application immediately reflect it. This helps keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future.

    Summary: EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface. This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced any time someone tries to persist changes to the database. ASP.NET MVC 3 now supports both DataAnnotations and IValidatableObject as well, which makes it even easier to use them with your EF Code First model layer, and then have the controllers/views within your web layer automatically honor and support them too. This makes it easy to build clean and highly maintainable applications. You don't have to use DataAnnotations or IValidatableObject to perform your validation/business logic. You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want. But for a lot of applications this built-in support will probably be sufficient, and provide a highly productive way to build solutions. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
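    The code in the original post appears only as screenshots, which did not survive here. As a stand-in, below is a minimal sketch of the Product class and its Validate() method as the post describes them. The property names (Discontinued, UnitsInStock, UnitsOnOrder) follow the Northwind schema the post references, and the exact error strings are assumptions:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class Product : IValidatableObject
        {
            public int ProductID { get; set; }

            [Required]
            public string ProductName { get; set; }

            public bool Discontinued { get; set; }
            public int UnitsInStock { get; set; }
            public int UnitsOnOrder { get; set; }

            // Class-level rules that span multiple property values.
            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                // Rule 1: new units can't be ordered for a discontinued product.
                if (Discontinued && UnitsOnOrder > 0)
                {
                    yield return new ValidationResult(
                        "Units can not be ordered for a discontinued product",
                        new[] { "UnitsOnOrder" });
                }

                // Rule 2: new units can't be ordered if more than 100 are already in stock.
                if (UnitsInStock > 100 && UnitsOnOrder > 0)
                {
                    yield return new ValidationResult(
                        "Can not order new units when more than 100 are already in stock",
                        new[] { "UnitsOnOrder" });
                }
            }
        }

    The second argument to ValidationResult is the property-name hint mentioned above; it is what lets MVC 3 place the message next to the UnitsOnOrder textbox.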

  • jQuery Templates with ASP.NET MVC

    - by hajan
    In my three previous blogs I've shown how to use templates in your ASPX website: Introduction to jQuery Templates; jQuery Templates - tmpl(), template() and tmplItem(); and jQuery Templates - {Supported Tags}. Now I will show one real-world example which you may use in your daily work of developing applications with ASP.NET MVC and jQuery. In the following example I will use the Pubs database and retrieve values from the authors table. To access the data, I'm using Entity Framework. Let's walk through each step of the scenario:

    1. Create a new ASP.NET MVC web application.

    2. Add a new View inside the Home folder (do not select a master page) and add a Controller for your View.

    3. The BODY code in the HTML:

        <body>
            <div>
                <h1>Pubs Authors</h1>
                <div id="authorsList"></div>
            </div>
        </body>

    As you can see, in the body we have only one H1 tag and a div with id authorsList, where we will append the data from the database.

    4. Now, I've created a Pubs model which is connected to the Pubs database, and I've selected only the authors table in my EDMX model. You can use your own database.

    5. Next, let's create one method of JsonResult type which will get the data from the database and serialize it into a JSON string:

        public JsonResult GetAuthors()
        {
            pubsEntities pubs = new pubsEntities();
            var authors = pubs.authors.ToList();
            return Json(authors, JsonRequestBehavior.AllowGet);
        }

    So, I'm creating an object instance of pubsEntities, getting all authors into the authors list, and then returning the authors list serialized to JSON using the Json method. The JsonRequestBehavior.AllowGet parameter allows GET requests from the client. By default in ASP.NET MVC 2, GET is not allowed because of the security issue with JSON hijacking.

    6. Next, let's create a jQuery AJAX function which will call the GetAuthors method. We will use the $.getJSON jQuery method:

        <script type="text/javascript">
            $(function () {
                $.getJSON("GetAuthors", "", function (data) {
                    $("#authorsTemplate").tmpl(data).appendTo("#authorsList");
                });
            });
        </script>

    Once the web page is loaded, the method will be called. The first parameter of $.getJSON() is the URL string, in our case the method name. The second parameter (which in the example is an empty string) is the key/value pairs that will be sent to the server, and the third parameter is the callback function that handles the result returned from the server. Inside the callback function we have code that renders the data with the template which has id #authorsTemplate and appends it to the element with ID #authorsList.

    7. The jQuery template:

        <script id="authorsTemplate" type="text/html">
            <div id="author">
                ${au_lname} ${au_fname}
                <div id="address">${address}, ${city}</div>
                <div id="contractType">
                    {{if contract}}
                        <font color="green">Has contract with the publishing house</font>
                    {{else}}
                        <font color="red">Without contract</font>
                    {{/if}}
                    <br />
                    <em> ${printMessage(state)} </em>
                    <br />
                </div>
            </div>
        </script>

    As you can see, I have tags containing fields (au_lname, au_fname, etc.) that correspond to the table in the EDMX model, which is the same as in the database. One more thing to note here is that I have a printMessage(state) function which is called inside the ${ expression/function/field } tag. The printMessage function:

        <script type="text/javascript">
            function printMessage(s) {
                if (s == "CA") return "The author is from California";
                else return "The author is not from California";
            }
        </script>

    So, if state is "CA" we print "The author is from California", otherwise "The author is not from California".

    HERE IS THE COMPLETE ASPX CODE:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Database Example :: jQuery Templates</title>
            <style type="text/css">
                body
                {
                    font-family:Verdana,Arial,Courier New, Sans-Serif;
                    color:Black;
                    padding:2px, 2px, 2px, 2px;
                    background-color:#FF9640;
                }
                #author
                {
                    display:block;
                    float:left;
                    text-decoration:none;
                    border:1px solid black;
                    background-color:White;
                    padding:20px 20px 20px 20px;
                    margin-top:2px;
                    margin-right:2px;
                    font-family:Verdana;
                    font-size:12px;
                    width:200px;
                    height:70px;
                }
                #address
                {
                    font-style:italic;
                    color:Blue;
                    font-size:12px;
                    font-family:Verdana;
                }
                .author_hover {background-color:Yellow;}
            </style>
            <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>
            <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>
            <script type="text/javascript">
                function printMessage(s) {
                    if (s == "CA") return "The author is from California";
                    else return "The author is not from California";
                }
            </script>
            <script id="authorsTemplate" type="text/html">
                <div id="author">
                    ${au_lname} ${au_fname}
                    <div id="address">${address}, ${city}</div>
                    <div id="contractType">
                        {{if contract}}
                            <font color="green">Has contract with the publishing house</font>
                        {{else}}
                            <font color="red">Without contract</font>
                        {{/if}}
                        <br />
                        <em> ${printMessage(state)} </em>
                        <br />
                    </div>
                </div>
            </script>
            <script type="text/javascript">
                $(function () {
                    $.getJSON("GetAuthors", "", function (data) {
                        $("#authorsTemplate").tmpl(data).appendTo("#authorsList");
                    });
                });
            </script>
        </head>
        <body>
            <div id="title">Pubs Authors</div>
            <div id="authorsList"></div>
        </body>
        </html>

    So, in the complete example you also have the CSS style I'm using to style the output of my page (the original post includes a screenshot of the end result displayed on the web page). You can download the complete source code, including the examples shown in my previous blog posts about jQuery templates and the PPT presentation from my last session at the local .NET UG meeting, at the following DOWNLOAD LINK. Do let me know your feedback. Regards, Hajan

  • Include weather information in an ASP.NET site from weather.com services

    - by sreejukg
    In this article, I am going to demonstrate how you can use the XML data feed services (referred to as XOAP from here onwards) provided by weather.com to display weather information on your website. The XOAP services are available free of charge, provided you comply with the requirements from weather.com. I am writing this article from a technical point of view; if you are planning to use weather.com XOAP services in your application, please refer to the terms and conditions on the weather.com website. In order to start using the XOAP services, you need to sign up for the XOAP data feed. The sign-up process is simple: browse to the URL http://www.weather.com/services/xmloap.html, click on the sign-up button, and you will reach the registration page, where you need to specify the site name you will use this feed for. Once you fill in all the mandatory information, click on the save and continue button. That's it: the registration is over. You will receive an email that contains your partner id, license key and SDK. The SDK, available as a zip archive, contains the terms of use and documentation about the available services; it also includes the logos and icons required to display the weather information. As per the SDK, there are currently 2 types of information available through XOAP:

    - Current conditions for over 30,000 U.S. and over 7,900 international Location IDs, updated at least hourly
    - Five-day forecast (today + 4 additional forecast days in consecutive order beginning with tomorrow) for over 30,000 U.S. and over 7,900 international Location IDs, updated at least three times daily

    The SDK provides detailed information about the fields included in the response of each service. Additionally, there is a refresh rate that you need to comply with. As per the SDK, the refresh rate means the following: "Refresh Rate" shall mean the maximum frequency with which you may call the XML Feed for a given LocID requesting a data set for that LocID. During the time period in between refresh periods the data must be cached by you either in the memory on your servers or in Your Desktop Application.

    About the Services: weather.com provides access to the XML feed over the Internet through the hostname xoap.weather.com. The weather data must be requested for a specific location, so you need a location ID (LOC ID). The XML feed works with 2 types of location IDs: city identifiers and 5-digit US postal codes. If you do not know your location ID, don't worry: there is a location ID search service available that retrieves the location ID from a city name. Since I am a resident of the Kingdom of Bahrain, I am going to retrieve the weather information for Manama (the capital of Bahrain). In order to get the location ID for Manama, type the following URL in your address bar:

        http://xoap.weather.com/search/search?where=manama

    I got the following XML output:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- This document is intended only for use by authorized licensees of The -->
        <!-- Weather Channel. Unauthorized use is prohibited. Copyright 1995-2011, -->
        <!-- The Weather Channel Interactive, Inc. All Rights Reserved. -->
        <search ver="3.0">
            <loc id="BAXX0001" type="1">Al Manama, Bahrain</loc>
        </search>

    You can try this with any city name; if the city is available it will return the location ID, otherwise it will return nothing.

    In order to get the weather information from XOAP, you need to pass certain parameters to the XOAP service. A brief description of the parameters follows (please refer to the SDK for more details):

    - cc: optional; if included, the current conditions are returned. The value can be anything, as it is ignored (e.g. cc=*).
    - dayf: if you want the 5-day forecast, specify dayf=5. Optional.
    - link: the value should be xoap.
    - par: your partner id, found in your registration email from weather.com.
    - prod: the value should be xoap.
    - key: the license key assigned to you, also available in the registration email.
    - unit: s or m (standard or metric, i.e. Fahrenheit or Celsius). Optional; if not specified, the unit is standard (s).

    The URL host for the XOAP service is http://xoap.weather.com. So for my purpose, I need to make the following request to access the XOAP services (with the ***** replaced by the corresponding values):

        http://xoap.weather.com/weather/local/BAXX0001?cc=*&link=xoap&prod=xoap&par=*********&key=**************

    The response XML has a root element "weather" with the following sections:

    - <head>: the metadata about the weather results returned.
    - <loc>: the location data block that provides information about the location for which the weather data was retrieved.
    - <lnks>: the 4 promotional links you need to place along with the weather display. In addition to these 4 links, there should be another link with the Weather Channel logo pointing to the weather.com home page.
    - <cc>: the current conditions data. Present only if you specify the cc parameter in the request.
    - <dayf>: the forecast data as you specified. Present only if you specify dayf in the request.

    In this walkthrough I am going to capture the weather information for Manama (location ID: BAXX0001). You need 2 applications to display weather information on your website:

    1. A console application that retrieves data from XOAP and stores it in a SQL Server database (or any data store you prefer). This application will be scheduled to execute every 25 minutes using the Windows Task Scheduler, so that we comply with the refresh rate.
    2. A web application that displays the data from the SQL Server database.

    Retrieve the Weather from XOAP: I created a console application named WeatherService, and a SQL Server database table with the following columns (I named the table tblweather; you are free to choose any name):

    - lastUpdated (datetime): the last time the weather data was updated, i.e. the time the service ran.
    - TemparatureDateTime: the date and time returned by the XML feed.
    - Temparature: the temperature returned by the XML feed.
    - TemparatureUnit: the unit of the temperature returned by the XML feed.
    - iconId: the id of the icon to be used. Currently 48 icons, numbered 0 to 47, are available.
    - WeatherDescription: the weather description phrase returned by the feed.
    - Link1url/Link1Text through Link4url/Link4Text: the URL and text for each of the four promo links.

    Every time the service runs, the application updates these columns from the XOAP data feed. When the application starts, it gets the data as XML from the URL. This demonstration uses LINQ to extract the necessary data from the fetched XML:

        // First, create an instance of the XDocument class with the XOAP URL (replace **** with the corresponding values).
        XDocument weather = XDocument.Load("http://xoap.weather.com/weather/local/BAXX0001?cc=*&link=xoap&prod=xoap&par=***********&key=c*********");

        // Construct a query using LINQ.
        var feedResult = from item in weather.Descendants()
                         select new
                         {
                             unit = item.Element("head").Element("ut").Value,
                             temp = item.Element("cc").Element("tmp").Value,
                             tempDate = item.Element("cc").Element("lsup").Value,
                             iconId = item.Element("cc").Element("icon").Value,
                             description = item.Element("cc").Element("t").Value,
                             links = from link in item.Elements("lnks").Elements("link")
                                     select new
                                     {
                                         url = link.Element("l").Value,
                                         text = link.Element("t").Value
                                     }
                         };
        // Load the root node to a variable; you may use a foreach construct instead.
        var item1 = feedResult.First();

    If you want to learn more about LINQ and XML, read this nice blog post from ScottGu: http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx

    Now you have all the required values in item1. For example, to get the temperature, use item1.temp. Now I just need to execute a SQL query against the database. See the connection part:

        using (SqlConnection conn = new SqlConnection(@"Data Source=sreeju\sqlexpress;Initial Catalog=Sample;Integrated Security=True"))
        {
            string strSql = @"update tblweather set lastupdated=getdate(), temparatureDateTime = @temparatureDateTime, temparature=@temparature, temparatureUnit=@temparatureUnit, iconId = @iconId, description=@description, link1url=@link1url, link1text=@link1text, link2url=@link2url, link2text=@link2text, link3url=@link3url, link3text=@link3text, link4url=@link4url, link4text=@link4text";
            SqlCommand comm = new SqlCommand(strSql, conn);
            comm.Parameters.AddWithValue("temparatureDateTime", item1.tempDate);
            comm.Parameters.AddWithValue("temparature", item1.temp);
            comm.Parameters.AddWithValue("temparatureUnit", item1.unit);
            comm.Parameters.AddWithValue("description", item1.description);
            comm.Parameters.AddWithValue("iconId", item1.iconId);
            var lstLinks = item1.links;
            comm.Parameters.AddWithValue("link1url", lstLinks.ElementAt(0).url);
            comm.Parameters.AddWithValue("link1text", lstLinks.ElementAt(0).text);
            comm.Parameters.AddWithValue("link2url", lstLinks.ElementAt(1).url);
            comm.Parameters.AddWithValue("link2text", lstLinks.ElementAt(1).text);
            comm.Parameters.AddWithValue("link3url", lstLinks.ElementAt(2).url);
            comm.Parameters.AddWithValue("link3text", lstLinks.ElementAt(2).text);
            comm.Parameters.AddWithValue("link4url", lstLinks.ElementAt(3).url);
            comm.Parameters.AddWithValue("link4text", lstLinks.ElementAt(3).text);
            conn.Open();
            comm.ExecuteNonQuery();
            conn.Close();
            Console.WriteLine("database updated");
        }

    Now press Ctrl + F5 to run the service. Check your database and make sure the data is updated with the latest information from the service. (Make sure you insert one row into the table before executing the service; otherwise you need to modify your application code to count the rows and conditionally perform an insert/update query.)

    Display the weather information in an ASP.NET page: now you have all the data in the database. You just need to create a web application and display the data from the database. I created a new ASP.NET web application with a default.aspx page. In order to comply with the terms of weather.com, you need to use the weather.com logo along with the weather display. You can find the necessary logos under the folder "logos" in the SDK. Additionally, copy one of the icon sets from the folder "icons" to your web application; I used the 93x93 icon set, but you are free to use any of the other available sizes. The page contains a heading, an image control (for displaying the weather icon), 2 label controls (for displaying temperature and weather description), 4 hyperlinks (for displaying the 4 promo links returned by the XOAP service) and the weather.com logo with a hyperlink to the weather.com home page. I am going to write code that updates the values of these controls from the values stored in the database by the service application, as described in the previous step. Go to the code-behind file for the web page and enter the following code in the Page_Load event handler:

        using (SqlConnection conn = new SqlConnection(@"Data Source=sreeju\sqlexpress;Initial Catalog=Sample;Integrated Security=True"))
        {
            SqlCommand comm = new SqlCommand("select top 1 * from tblweather", conn);
            conn.Open();
            SqlDataReader reader = comm.ExecuteReader();
            if (reader.Read())
            {
                lblTemparature.Text = reader["temparature"].ToString() + "&deg;" + reader["temparatureUnit"].ToString();
                lblWeatherDescription.Text = reader["description"].ToString();
                imgWeather.ImageUrl = "icons/" + reader["iconId"].ToString() + ".png";
                lnk1.Text = reader["link1text"].ToString();
                lnk1.NavigateUrl = reader["link1url"].ToString();
                lnk2.Text = reader["link2text"].ToString();
                lnk2.NavigateUrl = reader["link2url"].ToString();
                lnk3.Text = reader["link3text"].ToString();
                lnk3.NavigateUrl = reader["link3url"].ToString();
                lnk4.Text = reader["link4text"].ToString();
                lnk4.NavigateUrl = reader["link4url"].ToString();
            }
            conn.Close();
        }

    Press Ctrl + F5 to run the page and you will see the weather output. That's it. You need to configure the console application to run every 25 minutes so that the database stays updated. You can also fetch the forecast information and store it in the database, then retrieve it later in your web page. Since the data resides in your database, you have full control over your display. You need to make sure your website complies with weather.com's license requirements. If you want the source code of this walkthrough, post your email address below. Hope you enjoy the reading.

  • NoSQL with MongoDB, NoRM and ASP.NET MVC - Part 2

    - by shiju
    In my last post, I gave an introduction to MongoDB and NoRM using an ASP.NET MVC demo app. I have updated the demo ASP.NET MVC app and created a new drop at CodePlex. You can download the demo at http://mongomvc.codeplex.com/. In my last post we discussed doing basic CRUD operations against a simple domain entity. In this post, let's discuss a domain entity with a deeper object graph. Below are our domain entities:

        public class Category
        {
            [MongoIdentifier]
            public ObjectId Id { get; set; }

            [Required(ErrorMessage = "Name Required")]
            [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
            public string Name { get; set; }
            public string Description { get; set; }
            public List<Expense> Expenses { get; set; }

            public Category()
            {
                Expenses = new List<Expense>();
            }
        }

        public class Expense
        {
            [MongoIdentifier]
            public ObjectId Id { get; set; }
            public Category Category { get; set; }
            public string Transaction { get; set; }
            public DateTime Date { get; set; }
            public double Amount { get; set; }
        }

    We have two domain entities: Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category.

    The MongoSession class:

        internal class MongoSession : IDisposable
        {
            private readonly MongoQueryProvider provider;

            public MongoSession()
            {
                this.provider = new MongoQueryProvider("Expense");
            }

            public IQueryable<Category> Categories
            {
                get { return new MongoQuery<Category>(this.provider); }
            }

            public IQueryable<Expense> Expenses
            {
                get { return new MongoQuery<Expense>(this.provider); }
            }

            public MongoQueryProvider Provider
            {
                get { return this.provider; }
            }

            public void Add<T>(T item) where T : class, new()
            {
                this.provider.DB.GetCollection<T>().Insert(item);
            }

            public void Dispose()
            {
                this.provider.Server.Dispose();
            }

            public void Delete<T>(T item) where T : class, new()
            {
                this.provider.DB.GetCollection<T>().Delete(item);
            }

            public void Drop<T>()
            {
                this.provider.DB.DropCollection(typeof(T).Name);
            }

            public void Save<T>(T item) where T : class, new()
            {
                this.provider.DB.GetCollection<T>().Save(item);
            }
        }

    The ASP.NET MVC view model for an expense transaction:

        public class ExpenseViewModel
        {
            public ObjectId Id { get; set; }

            public ObjectId CategoryId { get; set; }

            [Required(ErrorMessage = "Transaction Required")]
            public string Transaction { get; set; }

            [Required(ErrorMessage = "Date Required")]
            public DateTime Date { get; set; }

            [Required(ErrorMessage = "Amount Required")]
            public double Amount { get; set; }

            public IEnumerable<SelectListItem> Category { get; set; }
        }

    Let's create an action method for inserting and updating an expense transaction:

        [HttpPost]
        public ActionResult Save(ExpenseViewModel expenseViewModel)
        {
            try
            {
                if (!ModelState.IsValid)
                {
                    using (var session = new MongoSession())
                    {
                        var categories = session.Categories.AsEnumerable<Category>();
                        expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);
                    }
                    return View("Save", expenseViewModel);
                }

                var expense = new Expense();
                ModelCopier.CopyModel(expenseViewModel, expense);

                using (var session = new MongoSession())
                {
                    ObjectId Id = expenseViewModel.CategoryId;
                    var category = session.Categories
                        .Where(c => c.Id == Id)
                        .FirstOrDefault();
                    expense.Category = category;
                    session.Save(expense);
                }
                return RedirectToAction("Index");
            }
            catch
            {
                return View();
            }
        }

    Querying expenses:

        using (var session = new MongoSession())
        {
            var expenses = session.Expenses
                .Where(exp => exp.Date >= StartDate && exp.Date <= EndDate)
                .AsEnumerable<Expense>();
        }

    We are writing a LINQ query expression with a date filter. We can easily work with MongoDB using the NoRM driver, and managing the object graph of domain entities is pretty straightforward. Download the source: you can download the source code from http://mongomvc.codeplex.com

  • Functions inside page using Razor View Engine – ASP.NET MVC

    - by hajan
    As we already know, Razor is probably the best view engine for ASP.NET MVC so far. It keeps your code fluid and very expressive. Besides its other functionality, Razor also supports writing local functions. If you want to write a function, you can't just open a new @{ } Razor block and write it there; it won't work. Instead, you should use @functions { }, so that inside the brackets you can write your own C#/VB.NET functions/methods. Let's see an example.

    1. I have the following loop that prints data using Razor:

        <ul>
        @{
            int N = 10;
            for (int i = 1; i <= N; i++)
            {
                <li>Number @i</li>
            }
        }
        </ul>

    This code prints the numbers from 1 to 10: Number 1, Number 2, ... Number 10. Now, let's write a function that checks if the current number is even, and if so, adds "Even" before the word "number".

    The function in Razor:

        @functions{
            public bool isEven(int number)
            {
                return number % 2 == 0;
            }
        }

    The modified code which creates the unordered list is:

        <ul>
        @{
            int N = 10;
            for (int i = 1; i <= N; i++)
            {
                if (isEven(@i)) {
                    <li>Even number @i</li>
                }
                else {
                    <li>Number @i</li>
                }
            }
        }
        </ul>

    As you can see, in the modified code we use the isEven(@i) function to check whether the current number is even. The result is: Number 1, Even number 2, Number 3, Even number 4, ... Even number 10. So, the main point of this blog was to show how you can define your own functions inside a page using the Razor view engine. Of course, you can define multiple functions inside the same @functions { } block. The complete code:

        @{
            Layout = null;
        }
        <!DOCTYPE html>
        <html>
        <head>
            <title>ASP.NET MVC - Razor View Engine :: Functions</title>
        </head>
        <body>
            <div>
                <ul>
                @{
                    int N = 10;
                    for (int i = 1; i <= N; i++)
                    {
                        if (isEven(@i)) {
                            <li>Even number @i</li>
                        }
                        else {
                            <li>Number @i</li>
                        }
                    }
                }
                </ul>
                @functions{
                    public bool isEven(int number)
                    {
                        return number % 2 == 0;
                    }
                }
            </div>
        </body>
        </html>

    Hope you like it. Regards, Hajan

  • When to favor ASP.NET WebForms over MVC

    - by P.Brian.Mackey
    I know Microsoft has said "ASP.NET MVC is not a replacement for WebForms". Some developers say WebForms is faster to develop with than MVC, but I believe this all comes down to comfort level with the technology, so I don't want any answers in that direction. Given that ASP.NET MVC gives the developer more control over the application, why is WebForms not considered obsolete? When should I favor WebForms over MVC for new development?

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross-Site Scripting) and SQL Injection, Cross-Site Request Forgery (CSRF) attacks represent one of the three most common and dangerous vulnerabilities in web applications today. CSRF attacks are probably the least well known of the three, but they are relatively easy to exploit and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson. The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you've stored in the visitor's Session collection or in a cookie (so an attacker can't guess the value).

    ASP.NET MVC to the rescue: ASP.NET MVC provides an HtmlHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page, you will get a hidden input and a cookie with a random string assigned. Next, on your target action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied.

    Good, but we can do better: using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY and, worse, can easily be forgotten. Let's see if we can make this easier on the programmer by moving from an "opt-in" model of protection to an "opt-out" model.

    Using AntiForgeryToken by default: in order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which does the anti-forgery validation on every POST request. First, we need a way to opt out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
        public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute
        {
        }

    Now we are ready to implement the main action filter, which forces anti-forgery validation on all POST actions within any class it is defined on:

        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
        public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (ShouldValidateAntiForgeryTokenManually(filterContext))
                {
                    var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);

                    // Use the authorization of the anti-forgery token,
                    // which can't be inherited from because it is sealed.
                    new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext);
                }

                base.OnActionExecuting(filterContext);
            }

            /// <summary>
            /// We should validate the anti-forgery token manually if the following criteria are met:
            /// 1. The http method must be POST
            /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action
            /// 3. There is no [BypassAntiForgeryToken] attribute on the action
            /// </summary>
            private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext)
            {
                var httpMethod = filterContext.HttpContext.Request.HttpMethod;

                // 1. The http method must be POST
                if (httpMethod != "POST") return false;

                // 2. There is not an existing anti-forgery token attribute on the action
                var antiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);

                if (antiForgeryAttributes.Length > 0) return false;

                // 3. There is no [BypassAntiForgeryToken] attribute on the action
                var ignoreAntiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);

                if (ignoreAntiForgeryAttributes.Length > 0) return false;

                return true;
            }
        }

    The code above is pretty straightforward: first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryToken attributes on the action being executed. If we have a candidate, then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now you can use this new attribute on your base controller to start protecting your site from CSRF vulnerabilities:

        [UseAntiForgeryTokenOnPostByDefault]
        public class ApplicationController : System.Web.Mvc.Controller
        {
        }

        // Then for all of your controllers
        public class HomeController : ApplicationController
        {
        }

    What we accomplished: if your base controller has the new default anti-forgery token attribute on it, then when you don't use <%= Html.AntiForgeryToken() %> in a form (or, of course, when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled! In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!

  • Converting a generic list into a JSON string and handling it in JavaScript

    - by Jalpesh P. Vadgama
    We all know that JSON (JavaScript Object Notation) is very useful for manipulating strings on the client side with JavaScript, and its performance across browsers is very good. So let's create a simple example: we will convert a generic list into a JSON string, then call a web service from JavaScript and handle the result in JavaScript. To do this we need an info class (type) for which we will create the generic list. Here is a simple class with two properties, UserId and UserName:

        public class UserInfo
        {
            public int UserId { get; set; }
            public string UserName { get; set; }
        }

    Now let's create a web service whose web method builds a list and then converts it into a JSON string with the JavaScriptSerializer class. Here is the web service class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Services;

        namespace Experiment.WebService
        {
            /// <summary>
            /// Summary description for WsApplicationUser
            /// </summary>
            [WebService(Namespace = "http://tempuri.org/")]
            [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
            [System.ComponentModel.ToolboxItem(false)]
            // To allow this Web Service to be called from script, using ASP.NET AJAX:
            [System.Web.Script.Services.ScriptService]
            public class WsApplicationUser : System.Web.Services.WebService
            {
                [WebMethod]
                public string GetUserList()
                {
                    List<UserInfo> userList = new List<UserInfo>();
                    for (int i = 1; i <= 5; i++)
                    {
                        UserInfo userInfo = new UserInfo();
                        userInfo.UserId = i;
                        userInfo.UserName = string.Format("{0}{1}", "J", i.ToString());
                        userList.Add(userInfo);
                    }
                    System.Web.Script.Serialization.JavaScriptSerializer jSearializer = new System.Web.Script.Serialization.JavaScriptSerializer();
                    return jSearializer.Serialize(userList);
                }
            }
        }

    Note: the web service class must have the '[System.Web.Script.Services.ScriptService]' attribute, as this attribute enables the web service to be called from the client side. Now that we have the web service class, let's create a JavaScript function 'GetUserList' which calls the web service from JavaScript:

        function GetUserList() {
            Experiment.WebService.WsApplicationUser.GetUserList(ReuqestCompleteCallback, RequestFailedCallback);
        }

    As you can see, we pass two callback functions, ReuqestCompleteCallback and RequestFailedCallback, which handle the result and errors from the web service. ReuqestCompleteCallback handles the result of the web service; if an error occurs, RequestFailedCallback prints the error. Here is the code for both functions:

        function ReuqestCompleteCallback(result) {
            result = eval(result);
            var divResult = document.getElementById("divUserList");
            CreateUserListTable(result);
        }

        function RequestFailedCallback(error) {
            var stackTrace = error.get_stackTrace();
            var message = error.get_message();
            var statusCode = error.get_statusCode();
            var exceptionType = error.get_exceptionType();
            var timedout = error.get_timedOut();
            // Display the error.
            var divResult = document.getElementById("divUserList");
            divResult.innerHTML = "Stack Trace: " + stackTrace + "<br/>" +
                "Service Error: " + message + "<br/>" +
                "Status Code: " + statusCode + "<br/>" +
                "Exception Type: " + exceptionType + "<br/>" +
                "Timedout: " + timedout;
        }

    In ReuqestCompleteCallback we use the 'eval' function, which parses the JSON string into an enumerable form. Then we call a function named 'CreateUserListTable', which builds a table string and places it in a div. Here is the code for that function:

        function CreateUserListTable(userList) {
            var tablestring = '<table ><tr><td>UsreID</td><td>UserName</td></tr>';
            for (var i = 0, len = userList.length; i < len; ++i) {
                tablestring = tablestring + "<tr>";
                tablestring = tablestring + "<td>" + userList[i].UserId + "</td>";
                tablestring = tablestring + "<td>" + userList[i].UserName + "</td>";
                tablestring = tablestring + "</tr>";
            }
            tablestring = tablestring + "</table>";
            var divResult = document.getElementById("divUserList");
            divResult.innerHTML = tablestring;
        }

    Now let's create the div which will hold all the HTML generated by this function. We also need to add a script reference to enable the web service from the client side. Here is all the HTML we have:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScirptManger" runat="Server">
                <Services>
                    <asp:ServiceReference Path="~/WebService/WsApplicationUser.asmx" />
                </Services>
            </asp:ScriptManager>
            <div id="divUserList">
            </div>
        </form>

    Since we have not yet defined where the 'GetUserList' function is called, let's call it in the window onload event of JavaScript:

        window.onload = GetUserList();

    That's it. Now run it in the browser and you will see the output as expected. This was a very basic example, but you can create your own JavaScript-enabled grid from this; the possibilities are unlimited. Stay tuned for more. Happy programming. Technorati Tags: JSON,Javascript,ASP.NET,WebService

  • Learning MVC for a JSP Resource and ASP.Net WebForms Resource

    - by Lijo
    A statement from a colleague: "People with ASP.NET WebForms skills should be able to learn it easily, as the fundamental concept is the same." Consider two people, one from a JSP background and the other from an ASP.NET WebForms background. Both need to learn ASP.NET MVC with Razor. Do you think the person from the ASP.NET WebForms background has a significant advantage over the person from the JSP background? My feeling is that it is equally difficult for the JSP person and the ASP.NET WebForms person to learn MVC with Razor. What is your take on it? Any statistics that you can provide for this?

  • Membership in ASP.Net applications - part 4

    - by nikolaosk
    This is the fourth post in a series of posts regarding ASP.NET's built-in membership functionality, providers and controls. You can read the first one here, the second post here and the third post here. In this post I will show you how to add users to a role programmatically. In the third post we saw how to get the users in a specific role. I will also show you how to delete a user and a role programmatically. 1) Launch Visual Studio 2005/2008/2010. Express editions will work fine....(read more)
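    The excerpt cuts off before the walkthrough itself. As a rough sketch of the calls the post builds toward, here is how the built-in Roles and Membership APIs from System.Web.Security handle these tasks; the user and role names are purely illustrative:

        using System.Web.Security;

        public static class RoleAdminSamples
        {
            public static void AddUserToEditors(string userName)
            {
                // Create the role on first use, then add the user to it.
                if (!Roles.RoleExists("Editors"))
                {
                    Roles.CreateRole("Editors");
                }
                Roles.AddUserToRole(userName, "Editors");
            }

            public static void CleanUp(string userName)
            {
                // Delete the user; 'true' also removes related data (profile, role membership).
                Membership.DeleteUser(userName, true);

                // Delete the role; the second argument (throwOnPopulatedRole) makes
                // the call fail if the role still contains members.
                Roles.DeleteRole("Editors", true);
            }
        }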

  • ASP.NET MVC Custom Profile Provider

    - by Ben Griswold
    It’s been a long while since I last used the ASP.NET Profile provider. It’s a shame, too, because it just works with very little development effort: Membership tables installed? Check. Profile enabled in web.config? Check. SqlProfileProvider connection string set? Check. Profile properties defined in said web.config file? Check. Write code to set a value, read a value, build and test. Check. Check. Check. Yep, I thought the built-in Profile stuff was pure gold until I noticed how the user-based information is persisted to the database. It’s stored as XML and, well, that was going to be trouble if I ever wanted to query the profile data. So, I have avoided the super-easy-to-use ASP.NET Profile provider ever since, until this week, when I decided I could use it to store user-specific properties which I am 99% positive I’ll never need to query against. I opened up my ASP.NET MVC application, completed steps 1-4 (above) in about 3 minutes, started writing my profile get/set code, and that's where the plan broke down. Oh yeah. That’s right. Visual Studio auto-generates a strongly-typed Profile reference for web site projects, but not for ASP.NET MVC or Web Application projects. Bummer. So, I went through the steps of getting a custom profile provider working in my ASP.NET MVC application. First, I defined a CurrentUser routine and my profile properties in a custom Profile class like so:

        using System.Web.Profile;
        using System.Web.Security;
        using Project.Core;

        namespace Project.Web.Context
        {
            public class MemberPreferencesProfile : ProfileBase
            {
                static public MemberPreferencesProfile CurrentUser
                {
                    get
                    {
                        return (MemberPreferencesProfile)
                            Create(Membership.GetUser().UserName);
                    }
                }

                public Enums.PresenceViewModes? ViewMode
                {
                    get
                    {
                        return ((Enums.PresenceViewModes)
                            (base["ViewMode"] ?? Enums.PresenceViewModes.Category));
                    }
                    set { base["ViewMode"] = value; Save(); }
                }
            }
        }

    And then I replaced the existing profile configuration in web.config with the following:

        <profile enabled="true" defaultProvider="MvcSqlProfileProvider"
                 inherits="Project.Web.Context.MemberPreferencesProfile">
          <providers>
            <clear/>
            <add name="MvcSqlProfileProvider"
                 type="System.Web.Profile.SqlProfileProvider, System.Web,
                 Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 connectionStringName="ApplicationServices" applicationName="/"/>
          </providers>
        </profile>

    Notice that profile is enabled, I’ve defined the defaultProvider, and the profile now inherits from my custom MemberPreferencesProfile class. Finally, I am now able to set and get profile property values nearly the same way as I did with website projects:

        viewMode = MemberPreferencesProfile.CurrentUser.ViewMode;
        MemberPreferencesProfile.CurrentUser.ViewMode = viewMode;

  • Persisting model state in ASP.NET MVC using Serialize HTMLHelper

    - by shiju
    The ASP.NET MVC 2 futures assembly provides an HTML helper method, Serialize, that can be used for persisting your model object. The Serialize helper method serializes the model object and persists it in a hidden field in the HTML form. The Serialize helper is very useful in situations like a multi-step wizard, where a single model class is used for all steps and you want to retain the model object's whole state at each step. The following serializes our model object (the model class must be serializable in order to work with the Serialize helper method):

        <% using (Html.BeginForm("Register", "User")) { %>
            <%= Html.Serialize("User", Model) %>
        <% } %>

    This generates a hidden field with the name "user", whose value is the serialized form of our model object. In the controller action, you can place the DeserializeAttribute on the action method parameter:

        [HttpPost]
        public ActionResult Register([DeserializeAttribute] User user, FormCollection userForm)
        {
            TryUpdateModel(user, userForm.ToValueProvider());
            // To Do
        }

    In the above action method you get back the same model object that you serialized in your view template, and we then update the User model object with the form field values.

  • New Support for Bundling and Minification of JavaScript and CSS Files (series of posts on ASP.NET 4.5)

    - by Leniel Macaferi
    Este é o sexto post de uma série de posts que estou escrevendo sobre a ASP.NET 4.5. Os próximos lançamentos do .NET e Visual Studio incluem vários novos e ótimos recursos e capacidades. Com a ASP.NET 4.5 você vai ver um monte de melhorias realmente emocionantes em formulários da Web ( Web Forms ) e MVC - assim como no núcleo da base de código da ASP.NET, no qual estas tecnologias são baseadas. O post de hoje cobre um pouco do trabalho que estamos realizando para adicionar suporte nativo para combinação e minificação de arquivos JavaScript e CSS dentro da ASP.NET - o que torna mais fácil melhorar o desempenho das aplicações. Este recurso pode ser utilizado por todas as aplicações ASP.NET, incluindo tanto a ASP.NET MVC quanto a ASP.NET Web Forms. Noções básicas sobre Combinação e Minificação Como mais e mais pessoas usando dispositivos móveis para navegar na web, está se tornando cada vez mais importante que os websites e aplicações que construímos tenham um bom desempenho neles. Todos nós já tentamos carregar sites em nossos smartphones - apenas para, eventualmente, desistirmos em meio à frustração porque os mesmos são carregados lentamente através da lenta rede celular. Se o seu site/aplicação carrega lentamente assim, você está provavelmente perdendo clientes em potencial por causa do mau desempenho/performance. Mesmo com máquinas desktop poderosas, o tempo de carregamento do seu site e o desempenho percebido podem contribuir enormemente para a percepção do cliente. A maioria dos websites hoje em dia são construídos com múltiplos arquivos de JavaScript e CSS para separar o código e para manter a base de código coesa. Embora esta seja uma boa prática do ponto de vista de codificação, muitas vezes isso leva a algumas consequências negativas no tocante ao desempenho geral do site. Vários arquivos de JavaScript e CSS requerem múltiplas solicitações HTTP provenientes do navegador - o que pode retardar o tempo de carregamento do site.  Exemplo Simples A seguir eu abri um site local no IE9 e gravei o tráfego da rede usando as ferramentas do desenvolvedor nativas do IE (IE Developer Tools) que podem ser acessadas com a tecla F12. Como mostrado abaixo, o site é composto por 5 arquivos CSS e 4 arquivos JavaScript, os quais o navegador tem que fazer o download. Cada arquivo é solicitado separadamente pelo navegador e retornado pelo servidor, e o processo pode levar uma quantidade significativa de tempo proporcional ao número de arquivos em questão. Combinação A ASP.NET está adicionando um recurso que facilita a "união" ou "combinação" de múltiplos arquivos CSS e JavaScript em menos solicitações HTTP. Isso faz com que o navegador solicite muito menos arquivos, o que por sua vez reduz o tempo que o mesmo leva para buscá-los. A seguir está uma versão atualizada do exemplo mostrado acima, que tira vantagem desta nova funcionalidade de combinação de arquivos (fazendo apenas um pedido para JavaScript e um pedido para CSS): O navegador agora tem que enviar menos solicitações ao servidor. O conteúdo dos arquivos individuais foram combinados/unidos na mesma resposta, mas o conteúdo dos arquivos permanece o mesmo - por isso o tamanho do arquivo geral é exatamente o mesmo de antes da combinação (somando o tamanho dos arquivos separados). Mas note como mesmo em uma máquina de desenvolvimento local (onde a latência da rede entre o navegador e o servidor é mínima), o ato de combinar os arquivos CSS e JavaScript ainda consegue reduzir o tempo de carregamento total da página em quase 20%. 
On a slow network the performance improvement would be even greater.

Minification

The next release of ASP.NET is also adding a new feature that makes it easy to reduce, or "minify", the download size of the content. This is a process that removes whitespace, comments and other unnecessary characters from CSS and JavaScript files. The result is smaller files, which are sent to and loaded by the browser much faster. The following graph shows the performance gain we get when bundling and minification are used together: even on my local development machine (where network latency is minimal), we now have a 40% performance improvement over where we originally started. On slow networks (and especially with international customers), the gains would be even more significant.

Using Bundling and Minification within ASP.NET

The next release of ASP.NET makes it really easy to take advantage of bundling and minification within projects, enabling performance gains like the ones shown in the scenarios above. The way it does this lets you avoid running custom tools as part of your build process - instead, ASP.NET has added runtime support so that the bundling/minification can be performed dynamically (caching the results to make sure performance is really good). This allows for a really clean development experience and makes it super easy to start taking advantage of these new features. Let's assume we have a simple project with 4 JavaScript files and 6 CSS files:

Bundling and Minifying the CSS Files

Let's say you wanted a page to reference all of the stylesheets inside the "Styles" folder shown above. Today you would have to add multiple CSS references to get all of them - which would translate into six separate HTTP requests. The new bundling/minification feature now lets you bundle and minify all of the CSS files in the Styles folder - simply by sending a URL request to the folder (in this case, "styles") with an additional "/css" path suffix in the URL, as sketched below. This causes ASP.NET to scan the directory, bundle and minify the CSS files inside the folder, and send back a single HTTP response to the browser with all of the CSS content. You don't need to run any tool or pre-processing step to get this behavior. This lets you cleanly separate your styles into separate, feature-specific CSS files while keeping an extremely clean development experience - and you still won't take a negative runtime performance hit in the application. The Visual Studio designer will also honor the bundling/minification logic - so you will still get a WYSIWYG experience in the designer within VS.
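The reference itself appears as a screenshot in the original post; as a sketch, it boils down to a single link element pointing at the folder with the "/css" suffix (the path assumes the "Styles" folder above):

<link href="styles/css" rel="stylesheet" type="text/css" />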
Bundling and Minifying the JavaScript Files

Like the CSS approach shown above, if we wanted to bundle and minify all of our JavaScript files into a single response, we could send a URL request to the folder (in this case, "scripts") with an additional "/js" path suffix. This causes ASP.NET to scan the directory, bundle and minify the files with a .js extension inside it, and send back a single HTTP response to the browser with all of the JavaScript content. Once again - no custom tools or build steps were required to get this behavior. And this process works in all browsers.

Ordering of Files within a Bundle

By default, when files are bundled by ASP.NET, they are first sorted alphabetically, exactly as they show up in Solution Explorer. They are then automatically rearranged so that known libraries and their custom extensions, such as jQuery, MooTools and Dojo, are loaded before anything else. So the default order for the bundling of the Scripts folder shown above will be: jquery-1.6.2.js, jquery-ui.js, jquery.tools.js, a.js. By default, CSS files are also sorted alphabetically and then rearranged so that reset.css and normalize.css (if they are present in the folder) always come before any other file. So the default sort order for the bundling of the "Styles" folder shown above will be: reset.css, content.css, forms.css, globals.css, menu.css, styles.css. The ordering is fully customizable and can easily be changed to accommodate most cases and whatever naming pattern you prefer. The goal of the out-of-the-box experience, though, is to have smart defaults that you can simply use and be successful with.

Any Number of Directories/Subdirectories Supported

In the example above we had just a single "Scripts" and "Styles" folder in our application. That works for some kinds of application (for example, applications with simple pages). Often, though, you will want to have multiple CSS/JS bundles within your application - for example: a "common" bundle that has the core JS and CSS files that all pages use, and then page- or section-specific files that are not used globally. You can use the bundling/minification support in any number of directories or subdirectories in your project - this makes it easier to structure your code in a way that maximizes the benefits of bundling/minification. By default, each directory can be accessed as a separate bundle, addressable via a URL.

Bundling/Minification Extensibility

ASP.NET's bundling and minification support is built with extensibility in mind, and every part of the process can be extended or replaced.

Custom Rules

In addition to the out-of-the-box, directory-based bundling approach, ASP.NET also supports the ability to register custom bundles using a new programming API we are exposing. The following code demonstrates how you can register a "customscript" bundle using code within an application's Global.asax class.
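The registration code appears as a screenshot in the original post. As a rough sketch - assuming the preview System.Web.Optimization API, where Bundle, JsMinify and BundleTable are the preview type names - it might look like this in the Global.asax code-behind:

using System;
using System.Web.Optimization; // preview assembly assumed

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register a custom "customscript" bundle and choose exactly which files belong to it.
        var bundle = new Bundle("~/customscript", typeof(JsMinify));
        bundle.AddFile("~/scripts/jquery-1.6.2.js");
        bundle.AddFile("~/scripts/a.js");
        BundleTable.Bundles.Add(bundle);
    }
}

The page would then reference the custom bundle with a single tag along the lines of <script src="customscript" type="text/javascript"></script>.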
The API allows you to add/remove/filter the files that make up the bundle in a very granular way. The custom bundle above can then be referenced anywhere within the application using a script reference like the one sketched above.

Custom Processing

You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundle files (for example: custom minification rules, support for Sass, LESS or CoffeeScript syntax, etc.). In the example shown in the original post, we indicate that we want to override the built-in minification transforms with custom MyJsTransform and MyCssTransform classes. These are subclasses of the respective default CSS and JavaScript minifiers, and can add extra functionality. The end result of this extensibility is that you can plug into the bundling/minification logic at a deep level and do some pretty cool things with this feature.

A Short Video of Bundling and Minification in Action

Mads Kristensen has a great 90-second video (in English) that demonstrates using the bundling and minification feature. You can watch the 90-second video here.

Summary

The new support for bundling and minifying CSS and JavaScript files within the next release of ASP.NET will make it easier to build fast web applications. The feature is really easy to use and does not require big changes to your existing development workflow. It also supports a rich extensibility API that allows you to customize the logic however you see fit. You can easily take advantage of this new support within both ASP.NET MVC and ASP.NET Web Forms applications. Hope this helps, Scott. P.S. In addition to the blog, I use Twitter for quick posts and to share links. My Twitter handle is: @scottgu. (Translated from the original post by Leniel Macaferi.)

    Read the article

  • PDC and Tech-Ed Europe Slides and Code

    - by Stephen Walther
I spent close to three weeks on the road giving talks at Tech-Ed Europe (Berlin), PDC (Los Angeles), and the Los Angeles Code Camp (Los Angeles). I got to talk about two topics that I am very passionate about: ASP.NET MVC and Ajax. Thanks everyone for coming to all my talks! At PDC, I announced all of the new features of our ASP.NET Ajax Library. In particular, I made five big announcements:
- ASP.NET Ajax Library Beta Released – You can download the beta from Ajax.CodePlex.com
- ASP.NET Ajax Library includes the AJAX Control Toolkit – You can use the Ajax Control Toolkit with ASP.NET MVC.
- ASP.NET Ajax Library being contributed to the CodePlex Foundation – ASP.NET Ajax is the founding project for the CodePlex Foundation (see CodePlex.org)
- ASP.NET Ajax Library is receiving full product support – Complain to Microsoft Customer Service at midnight on Christmas
- ASP.NET Ajax Library supports jQuery integration – Use (almost) all of the Ajax Control Toolkit controls in jQuery
For more details on the Ajax announcements, see James Senior's blog entry on the Ajax announcements at: http://jamessenior.com/post/News-on-the-ASPNET-Ajax-Library.aspx
In my MVC talks, I discussed the new features being introduced with ASP.NET MVC 2. Here are three of my favorite new features:
- Client Validation – Client validation done the right way. Do your validation in your model and let the validation bubble up to JavaScript code automatically.
- Areas – Divide your ASP.NET MVC application into sub-applications. Great for managing both medium and large projects.
- RenderAction() – Finally, a way to add content to master pages and multiple pages without doing anything strange or twisted.
There are demos of all of these features in the MVC downloads below. Here are the PowerPoint slides and code from all of the talks: PDC – Introducing the New ASP.NET Ajax Library; PDC – ASP.NET MVC: The New Stuff; Tech-Ed Europe - What's New in Microsoft ASP.NET Model-View-Controller; Tech-Ed Europe - Microsoft ASP.NET AJAX: Taking AJAX to the Next Level
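As a rough sketch of the "Client Validation" item above - DataAnnotations declared on the model driving client-side checks in MVC 2; the Product class is a made-up example, and Html.EnableClientValidation() is the MVC 2 opt-in call:

using System.ComponentModel.DataAnnotations;

// Validation rules are declared once, on the model...
public class Product
{
    [Required(ErrorMessage = "Name is required")]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(0, 1000)]
    public decimal Price { get; set; }
}

// ...and in the view, calling Html.EnableClientValidation() before the form
// makes those same rules bubble up to JavaScript automatically:
//   <% Html.EnableClientValidation(); %>
//   <% using (Html.BeginForm()) { ... } %>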

    Read the article

  • Persisting model state in ASP.NET MVC using Serialize HTMLHelper

    - by shiju
The ASP.NET MVC 2 futures assembly provides an HTML helper method, Serialize, that can be used to persist your model object. The Serialize helper method serializes the model object and persists it in a hidden field in the HTML form. The Serialize helper is very useful in situations where, for example, you are building a multi-step wizard and a single model class is used for all steps in the wizard; for each step you want to retain the model object's whole state. The code below serializes our model object. The model class must be Serializable in order to work with the Serialize helper method:

<% using (Html.BeginForm("Register", "User")) { %>
    <%= Html.Serialize("User", Model) %>
<% } %>

This will generate a hidden field with the name "user", whose value is the serialized form of our model object. In the controller action, you can apply the DeserializeAttribute to the action method parameter:

[HttpPost]
public ActionResult Register([DeserializeAttribute] User user, FormCollection userForm)
{
    TryUpdateModel(user, userForm.ToValueProvider());
    //To Do
}

In the above action method you will get the same model object that you serialized in your view template. We then update the User model object with the form field values.

    Read the article

  • How to add a default value to a custom ASP.NET Profile property

    - by mr.moses
I know you can add defaultValues using the web.config like this:

<profile>
  <properties>
    <add name="AreCool" type="System.Boolean" defaultValue="False" />
  </properties>
</profile>

but I have the Profile inherited from a class:

<profile inherits="CustomProfile" defaultProvider="CustomProfileProvider" enabled="true">
  <providers>
    <clear />
    <add name="CustomProfileProvider" type="CustomProfileProvider" />
  </providers>
</profile>

Here's the class:

Public Class CustomProfile
    Inherits ProfileBase

    Public Property AreCool() As Boolean
        Get
            Return Me.GetPropertyValue("AreCool")
        End Get
        Set(ByVal value As Boolean)
            Me.SetPropertyValue("AreCool", value)
        End Set
    End Property
End Class

I don't know how to set the default value of the property. It's causing errors because without a default value, it uses an empty string, which cannot be converted to a Boolean. I tried adding <DefaultSettingValue("False")> _ but that didn't seem to make a difference. I'm also using a custom ProfileProvider (CustomProfileProvider).

    Read the article

  • Breaking Changes in Asp.Net 4

    - by joelvarty
I upgraded an app to .NET 4 just for fun and a bunch of things broke. It turns out there are quite a few things that are officially broken between earlier versions and .NET 4.0… http://www.asp.net/%28S%28ywiyuluxr3qb2dfva1z5lgeg%29%29/learn/whitepapers/aspnet4/breaking-changes/ more later – joel

    Read the article

  • Returning Images from ASP.NET Web API

    - by bipinjoshi
Sometimes you need to save and retrieve image data in SQL Server as part of your Web API functionality. A common approach is to save images as physical image files on the web server and then store the image URL in a SQL Server database. However, at times you need to store image data directly in a SQL Server database rather than the image URL. When dealing with the latter scenario, you need to read images from the database and then return this image data from your Web API. This article shows the steps involved in this process. http://www.bipinjoshi.net/articles/4b9922c3-0982-4e8f-812c-488ff4dbd507.aspx
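As a minimal sketch of the idea (the controller, table, column and connection-string names here are illustrative assumptions, not the article's code):

using System.Configuration;
using System.Data.SqlClient;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ImagesController : ApiController
{
    // GET api/images/5 - returns the raw image bytes stored in a varbinary column.
    public HttpResponseMessage Get(int id)
    {
        byte[] imageData;
        var connStr = ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString;
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT ImageData FROM Images WHERE Id = @Id", conn))
        {
            cmd.Parameters.AddWithValue("@Id", id);
            conn.Open();
            imageData = (byte[])cmd.ExecuteScalar();
        }

        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new ByteArrayContent(imageData)
        };
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
        return response;
    }
}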

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And what if I need the file system, why can't I simply get a virtual filesystem? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with an accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites: buy a server, place it in a rack at an ISP and run the sites on that server; use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine, the files are stored there as well, and the databases are hosted on the same server as the other shared databases; hire a VM, install your OS of choice at an ISP, and host the sites on that VM, basically the same as the first option, except you don't have a physical server; or host at some cloud vendor, either 'shared' or in a VM - see above. With all of those options, scalability is a problem, even with the cloud-based ones, though not for the same reasons: The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead. Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions. The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as with the physical servers: suddenly more than one instance runs your sites. If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they rely on while keeping state across the transition - they can't do that anymore the way they did on a single machine. State is something of the past: you have to store every byte of state in a DB, in viewstate or in a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM.
This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure for the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different tools. All in all this makes moving an existing website - one written for an environment that's based around a VM (namely .NET with its CLR) - overly cumbersome and problematic: it forces you to refactor your website system to be able to run 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

Offer a scalable, flexible VM which extends with my needs

Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) is not my concern; that's a problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM, and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever changes to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble on whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble on whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about - is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

The flexible VM: .NET's CLR?

My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system; however, this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question of why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or your own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed - it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem for the appdomain: the ADO.NET provider has to solve that.
In other words: we can host the databases in an environment which presents itself as a single resource, accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring needed of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all; it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run: almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step - that's the problem here, and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers instead. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Using SocialCounter.NET with ASP.NET MVC

    - by DigiMortal
I found a small library called SocialCounter.NET that is able to display some data from popular social sites. Although it is possible to use the widgets offered by social networks, there are also scenarios where you don't want to or can't use these JavaScript-based widgets. In this posting I will show you how to use SocialCounter.NET. Start by downloading SocialCounter.NET. You can also use the NuGet package manager to get it. Using SocialCounter.NET is very easy, as you can see from this example view:

@using SocialCounter.NET;
@{
     ViewBag.Title = "Home Page";
}
<h2>Social</h2>
<p>
    Twitter followers: @Counter.GetTwitterFollowersCount("gpeipman")<br />
    Facebook friends: @Counter.GetFacebookFriendsCount("gpeipman")<br />
    Facebook likes: @Counter.GetFacebookLikes("http://www.eindhovenmetalmeeting.nl/")<br />
    Delicious saves count: @Counter.GetDeliciousSaveCount("http://youreffectiveleadership.com/")<br />
</p>

And the result is shown in the image on the right. You can use SocialCounter.NET, for example, on user profile pages and on content pages where you want to show how many people have bookmarked the current page. SocialCounter.NET also supports LinkedIn, RSS feeds and Google Plus accounts. In the future – I hope – they will add support for more social networks to their library.
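The same static Counter API should also be usable from plain server-side code - a sketch reusing the method names and sample values from the Razor view above:

using SocialCounter.NET;

// Counts fetched on the server rather than in a view.
int followers = Counter.GetTwitterFollowersCount("gpeipman");
int likes = Counter.GetFacebookLikes("http://www.eindhovenmetalmeeting.nl/");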

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 2 Adding the ReportViewer control and filter drop downs. Open the source code for index.aspx and add a ScriptManager control. This control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers. Add the ReportViewer control. The markup after these steps is shown below.

<div>
    <asp:ScriptManager ID="smScriptManager" runat="server">
    </asp:ScriptManager>
    <div id="searchFilter">
        Filter by: Category :
        <asp:DropDownList ID="ddlCategories" runat="server" />
        and Supplier :
        <asp:DropDownList ID="ddlSuppliers" runat="server" />
    </div>
    <rsweb:ReportViewer ID="rvProducts" runat="server">
    </rsweb:ReportViewer>
</div>

The design view for index.aspx is shown below. The dropdowns will display the categories and suppliers in the database. Changing the selection in the drop downs will cause the report to be filtered by the selections in the dropdowns. You will see how to do this in the next steps. Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view. Defining the data sources. We will now define the data source used to populate the report. Go back to the "ReportViewer Tasks" and select "Choose Data Sources". Select a "New data source..". Select "Object" and name your Data Source ID "odsProducts". In the next screen, choose "ProductRepository" as your business object. Choose "GetProductsProjected" in the next screen. The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier. Set the parameter source to be of type "Control" and set the ControlIDs to be ddlSuppliers and ddlCategories respectively. Your screen will look like this: We are now going to define the data source for our drop downs. Select the ddlCategories drop down and pick "Choose Data Source". Pick "Object" and give it an id "odsCategories". In the next screen, choose "ProductRepository". Select the GetCategories() method in the next screen. Select "CategoryName" and "CategoryID" in the next screen. We are done defining the data source for the Category drop down. Perform the same steps for the Suppliers drop down. Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. The AppendDataBoundItems is needed because we are going to insert an "All" list item with a value of empty. Go to each drop down and add this "All" list item (see the sketch after this walkthrough). Finally, double-click on each drop down in the designer and add the following code in the code-behind. This, along with the AutoPostBack="true" attribute, refreshes the report anytime a drop down is changed.

protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
{
    rvProducts.LocalReport.Refresh();
}

protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
{
    rvProducts.LocalReport.Refresh();
}

Compile your report and run the page. You should see the report rendered. Note that the toolbar in the ReportViewer control gives you a couple of options, including the ability to export the data to Excel, PDF or Word.
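The "All" list item markup referenced in the walkthrough above (shown as a screenshot in the original post) would look something like this sketch, with the data source attributes matching the ones configured earlier:

<asp:DropDownList ID="ddlCategories" runat="server"
    DataSourceID="odsCategories" DataTextField="CategoryName" DataValueField="CategoryID"
    AppendDataBoundItems="true" AutoPostBack="true"
    OnSelectedIndexChanged="ddlCategories_SelectedIndexChanged">
    <%-- The empty Value lets the report treat the parameter as "no filter" --%>
    <asp:ListItem Text="All" Value="" />
</asp:DropDownList>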
Conclusion

Through this three part series, we did the following:
- Created a data layer for use by our RDLC.
- Created an RDLC using the report wizard and defined a dataset for the report.
- Used the report design surface to design our report, including adding a chart.
- Used the ReportViewer control to attach the RDLC.
- Connected our ReportViewer to a data source and took parameter values from the drop down lists.
- Used AutoPostBack to refresh the report when the drop down selection was changed.
RDLCs allow you to create interactive reports, including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages and it's best to compare them and choose the right one for your requirements. Download VS2010 RTM Sample project NorthwindReports.zip

Alfred Borden: Are you watching closely?

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so that often it's not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub

I looked at this data for these technologies: ASP.NET Web API, ASP.NET MVC, WebForms, ASP.NET WebPages, ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS8), WCF REST, and raw ASP.NET HttpHandlers. It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is to satisfy my curiosity more than anything, and I thought it interesting enough to discuss on the blog :-)

First test: Raw Throughput

The first thing I did was test raw throughput for the various technologies. This is the least practical test of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real-life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this test I created the simplest Hello World request that I could come up with for each tech.

Http Handler

The first is the lowest-level approach, which is an HTTP handler.

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public bool IsReusable { get { return true; } }
}

WebForms

Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind, without any markup in the ASPX page:

public partial class HelloWorld_CodeBehind : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
        Response.End();
    }
}

while the Markup page only contains some static output via an expression:

<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %>
Hello World. Time is <%= DateTime.Now %>

ASP.NET WebPages

WebPages is the freestanding Razor implementation of ASP.NET.
Here's the simple HelloWorld.cshtml page:

Hello World @DateTime.Now

WCF REST

WCF REST was the token REST implementation for ASP.NET before WebAPI, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
        var ms = new MemoryStream(data);
        // Add your operation implementation here
        return ms;
    }
}

WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

ASP.NET AJAX (ASMX Services)

I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box).

ASP.NET MVC

Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

public class MvcPerformanceController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
    }
}

ASP.NET WebAPI

Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }
}

Testing

Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this: It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server.
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with:
- 100,000 iterations
- a load factor of 20 concurrent connections
- an IISRESET before starting
- a short warm-up run for API and MVC to make sure startup cost is mitigated
Here is the batch file I used for the test:

IISRESET

REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.

Results

Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from the time of the request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic in them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24-hour period. Remember these are not static pages, but dynamic requests that are being served.

Another test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person
{
    public Person()
    {
        Id = 10;
        Name = "Rick";
        Entered = DateTime.Now;
    }
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var action = context.Request.QueryString["action"];
        if (action == "json")
            JsonRequest(context);
        else
            TextRequest(context);
    }

    public void TextRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public void JsonRequest(HttpContext context)
    {
        var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    public bool IsReusable { get { return true; } }
}

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
        var ms = new MemoryStream(data);
        // Add your operation implementation here
        return ms;
    }

    [OperationContract]
    [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)]
    public Person HelloWorldJson()
    {
        // Add your operation implementation here
        return new Person();
    }
}

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller
{
    //
    // GET: /MvcPerformance/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
    }

    public JsonResult HelloWorldJson()
    {
        return Json(new Person(), JsonRequestBehavior.AllowGet);
    }
}

For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer.

ASP.NET WebAPI

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results: The performance for each technology drops a little bit, except for WebAPI, which is up quite a bit! From this test it appears that WebAPI actually performs significantly better when returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance improved significantly with JSON results, from 5580 to 6530 or so, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached, running the ab.exe test by using the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

are different in length for the number, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports. I didn't notice at first, but the same is happening when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.

Another interesting Side Note: Perf drops over Time

As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower.
For these tests, this is why I did the IISRESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. In subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it is also about the suitability for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often times old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: Don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources: Source Code on GitHub; Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API.

    Read the article

  • AppFabric OutputCaching for ASP.NET Web API

    - by cibrax
ASP.NET Web API does not provide any output caching capabilities out of the box, other than the ones you would traditionally find in the ASP.NET caching module. Fortunately, Filip wrote a very nice library that you can use to decorate your Web API controller methods with an [OutputCaching] attribute, similar to the one you can find in ASP.NET MVC. This library provides a way to configure different persistence stores for the cached data; it uses memory by default. As part of this post, I will show how you can implement your own persistence provider for AppFabric in order to support distributed caching in web applications running on premises. Read more here
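As a hypothetical usage sketch of the attribute described above (the attribute name comes from the post; any parameters it accepts are library-specific and not shown):

using System.Collections.Generic;
using System.Web.Http;

public class ProductsController : ApiController
{
    [OutputCaching]
    public IEnumerable<string> Get()
    {
        // The serialized response is cached by the configured persistence
        // store (in-memory by default) instead of being rebuilt per request.
        return new[] { "value1", "value2" };
    }
}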

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >