Search Results

Search found 33 results on 2 pages for 'entitydatasource'.

Page 2 of 2

  • Identity Map Pattern and the Entity Framework

    - by nikolaosk
    This is going to be the seventh post of a series of posts regarding ASP.Net and the Entity Framework and how we can use Entity Framework to access our datastore. You can find the first one here, the second one here, the third one here, the fourth one here, the fifth one here and the sixth one here. I have a post regarding ASP.Net and EntityDataSource. You can read it here. I have 3 more posts on Profiling Entity Framework applications. You can have a look at them here, here and here. In...(read more)

    Read the article

  • how to query with child relations to same table and order this correctly

    - by robertpnl
    Hi. Take this table:

        id   name   sub_id
        -------------------
        1    A      (null)
        2    B      (null)
        3    A2     1
        4    A3     1

    The sub_id column is a relation to its own table, pointing at the id column (sub_id --- 0:1 --- id). My problem is writing a correct SELECT query so that the child rows (whose sub_id is not null) appear directly under their parent row. So this would be the correct order:

        1    A      (null)
        3    A2     1
        4    A3     1
        2    B      (null)

    A normal SELECT orders by id. But how, or with which keyword, can I order this correctly? A JOIN isn't possible, I think, because I want to keep all the rows separate. The rows will be displayed in a GridView (ASP.Net) with an EntityDataSource, but the child rows must be displayed directly under their parent. Thank you.
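
    One common approach, sketched here under assumptions (a hypothetical Row entity with Id, Name and a nullable SubId, sorted in memory for clarity), is to sort on the parent's id as a shared group key and then put the parent row first within each group:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shape of the table above.
        class Row { public int Id; public string Name; public int? SubId; }

        static List<Row> OrderParentsThenChildren(IEnumerable<Row> rows)
        {
            return rows
                .OrderBy(r => r.SubId ?? r.Id)          // the parent's id acts as the group key
                .ThenBy(r => r.SubId.HasValue ? 1 : 0)  // the parent row (null SubId) sorts first
                .ThenBy(r => r.Id)                      // children follow in id order
                .ToList();
        }

    The same idea translates to an ORDER BY in SQL or to a server-side LINQ query against the data source's underlying context.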

    Read the article

  • ASP.NET GridView throwing error: "Update is disabled for this control".

    - by NTDLS
    I have the absolutely most simple setup imaginable: a single table defined in an Entity model in ASP.NET 4, with the model bound directly to a GridView with AutoGenerateEditButton enabled. However, each time I hit Edit, then save, the page throws the error "Update is disabled for this control", for which I cannot find a solution. What is causing this error? What can I do to resolve it? <asp:GridView ID="MenuItemsGrid" runat="server" DataSourceID="gridDataSource" AutoGenerateEditButton="true" AutoGenerateColumns="true"> </asp:GridView> <asp:EntityDataSource ID="gridDataSource" runat="server" ConnectionString="name=dataEntitiesModel" DefaultContainerName="dataEntities" EntitySetName="MenuItems" />
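
    The usual cause appears to be that the EntityDataSource has not been told to allow updates: its EnableUpdate property defaults to false. A hedged code-behind sketch of the fix (the page class name is illustrative; gridDataSource is the control from the markup above):

        using System;

        public partial class MenuItemsPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Allow the GridView's Edit/Update buttons to push changes back
                // through the EntityDataSource; EnableInsert/EnableDelete work the same way.
                gridDataSource.EnableUpdate = true;
            }
        }

    The equivalent declarative form is an EnableUpdate="true" attribute on the <asp:EntityDataSource> element.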

    Read the article

  • Bind button visibility to the expression (C#)

    - by drasto
    I have a Delete button in each row of a GridView (ASP.NET component). I want some of the delete buttons to be invisible. The visibility of the delete button should depend on the data behind the row. The GridView is backed by an EntityDataSource and displays entities called Category, one instance per row. The Category entity has (besides others) also a field of type EntityCollection, named Items. Basically I want to allow the user to delete a row only if the Items field of the backing Category entity is an empty collection. I cannot figure out the binding for the Visible property; I have no experience with bindings and Google does not really help. This is how the button looks right now: <asp:Button ID="DeleteButton" runat="server" CommandName="Delete" Text="Delete" Visible=??? ></asp:Button> I don't know what should replace ???. The button should be visible only when this expression evaluates to true: ((SimpleEShop.Model.Category) dataItem).Items.LongCount() <= 0 where the dataItem variable contains the data of the current row in the table. What is the binding that I need?
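
    One approach that avoids the binding expression altogether is to set the visibility from the GridView's RowDataBound event in code-behind. A hedged sketch (whether e.Row.DataItem is the Category itself or a flattening wrapper depends on the EntityDataSource settings, so the cast below is an assumption):

        using System.Web.UI.WebControls;

        protected void CategoriesGrid_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;

            var category = e.Row.DataItem as SimpleEShop.Model.Category;
            var deleteButton = (Button)e.Row.FindControl("DeleteButton");

            // Only allow deleting categories that have no items.
            deleteButton.Visible = category != null && category.Items.Count == 0;
        }

    Alternatively, a data-binding expression on the Visible attribute can evaluate the same condition inline.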

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction

    NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less well. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

    History

    First, a bit of history. NHibernate is an open-source project that was first ported from Java's venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it: it has .NET-specific features and has evolved in different ways from its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It targets .NET 3.5, but can be used in .NET 4 as well; it simply makes no use of any .NET 4-specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being numbered 4. Code First sits on top of it but came separately, and will continue to be released out of band with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions target the current version of .NET at the time of their release. Its home is located at MSDN.

    Architecture

    In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., and then you build an ISessionFactory from it. The ISessionFactory holds the model and metadata tied to a particular database and to the settings that came from the Configuration object; there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object: it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration and model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached in a field.

    Mappings

    Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings:

      • XML-based, which has the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly;
      • Attribute-based, for keeping both the entities and the database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes;
      • Strongly typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping is updated as well (a sketch follows below).
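
    A hedged sketch of that third, code-based style (NHibernate mapping-by-code; the Product entity, table and column names are illustrative):

        using NHibernate.Mapping.ByCode;
        using NHibernate.Mapping.ByCode.Conformist;

        public class Product
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual decimal Price { get; set; }
        }

        public class ProductMap : ClassMapping<Product>
        {
            public ProductMap()
            {
                Table("Products");
                Id(x => x.Id, m => m.Generator(Generators.Identity));
                Property(x => x.Name, m => m.NotNullable(true));
                Property(x => x.Price);
            }
        }

    Because the mapping refers to properties through lambdas, renaming Product.Name in code updates the mapping as well.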
    Entity Framework (Code First) can use:

      • Attribute-based mappings (although attributes cannot express all of the available possibilities – for example, cascading);
      • Strongly typed code mappings.

    Database Support

    With NHibernate you can use almost any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLite; Informix; any database through OLE DB; any database through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

    Inheritance Strategies

    Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Class Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

    Associations

    Regarding associations, both support one-to-one, one-to-many and many-to-many. However, NHibernate offers far more collection types:

      • Bags of entities or values: unordered, possibly with duplicates;
      • Lists of entities or values: ordered, indexed by a number column;
      • Maps of entities or values: indexed by either an entity or any value;
      • Sets of entities or values: unordered, no duplicates;
      • Arrays of entities or values: indexed, immutable.

    Querying

    NHibernate exposes several querying APIs (a side-by-side sketch appears below):

      • LINQ is probably the most used nowadays, and really does not need to be introduced;
      • Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate's creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even when built by string concatenation;
      • The Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract, conceptual representation of the query you wish to execute by means of a class model; also a good choice for dynamic querying;
      • Query Over offers a similar API to Criteria, but uses strongly typed LINQ expressions instead of strings; for this reason, although more refactoring-friendly than Criteria, it is also less suited for dynamic queries;
      • SQL, including stored procedures, can also be used;
      • Integration with the Lucene.NET indexer is available.

    As for Entity Framework:

      • LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers;
      • Entity SQL, HQL's counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries;
      • SQL, of course, is also supported.

    Caching

    Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache that can be shared among multiple ISessionFactory instances, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; AppFabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add one.
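
    To make the querying comparison above more concrete, here is a hedged sketch of the same NHibernate query expressed through LINQ, HQL and QueryOver (the entity and property names are illustrative):

        using System.Collections.Generic;
        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;

        static IList<Product> FindCheapProducts(ISession session)
        {
            // LINQ provider
            var viaLinq = session.Query<Product>()
                                 .Where(p => p.Price < 10m)
                                 .ToList();

            // HQL: database-agnostic, object-oriented, SQL-like
            var viaHql = session.CreateQuery("from Product p where p.Price < :max")
                                .SetParameter("max", 10m)
                                .List<Product>();

            // QueryOver: Criteria-style API with strongly typed expressions
            var viaQueryOver = session.QueryOver<Product>()
                                      .Where(p => p.Price < 10m)
                                      .List();

            return viaLinq;
        }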
    ID Generators

    NHibernate supports different ID generation strategies, coming from the database and otherwise: Identity (for SQL Server, MySQL, and other databases that support identity columns); Sequence (for Oracle, PostgreSQL, and others that support sequences); Trigger-based; HiLo; Sequence HiLo (for databases that support sequences); several GUID flavors, both in GUID and in string format; Increment (for single-user use); Assigned (you must know what you're doing); Sequence-style (either uses an actual sequence or a single-column table); Table of ids; Pooled (similar to HiLo but stores high values in a table); Native (uses whatever mechanism the current database supports, identity or sequence). Entity Framework only supports: Identity generation; GUIDs; Assigned values.

    Properties

    NHibernate supports properties of entity types (one-to-one or many-to-one), collections (one-to-many or many-to-many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties derived from SQL formulas. Entity Framework only supports scalars, entity types and collections. Support for enumerations will come in the next version.

    Events and Interception

    NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist (sketched below): ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which includes deleting, inserting and updating).

    Tracking Changes

    For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to and detached from it; Entity Framework does, however, also support self-tracking entities.

    Optimistic Concurrency Control

    NHibernate supports all of the imaginable scenarios: SQL Server's ROWVERSION; Oracle's ORA_ROWSCN; a column containing date and time; a column containing a version number; all/dirty columns comparison. Entity Framework is more focused on SQL Server, so it only supports: SQL Server's ROWVERSION; comparing all/some columns.

    Batching

    NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

    Cascading

    Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them.

    Flushing Changes

    NHibernate's ISession has a FlushMode property that can have the following values: Auto (changes are sent to the database when necessary, for example if there are dirty instances of an entity type and a query is performed against that entity type, or if the ISession is being disposed); Commit (changes are sent when committing the current transaction); Never (changes are only sent when explicitly calling Flush()). As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
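
    A hedged sketch of the two Entity Framework events mentioned in the Events and Interception comparison above (the hook-up helper and messages are illustrative):

        using System;
        using System.Data.Objects;

        static class ContextEventHookup
        {
            public static void Hook(ObjectContext context)
            {
                // Raised after an entity has been loaded (materialized) from the database.
                context.ObjectMaterialized += (sender, e) =>
                    Console.WriteLine("Loaded a " + e.Entity.GetType().Name);

                // Raised just before pending inserts, updates and deletes are persisted.
                context.SavingChanges += (sender, e) =>
                    Console.WriteLine("About to save changes.");
            }
        }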
    Lazy Loading

    NHibernate supports lazy loading for: associated entities (one-to-one, many-to-one); collections (one-to-many, many-to-many); scalar properties (think of BLOBs or CLOBs). Entity Framework only supports lazy loading for: associated entities; collections.

    Generating and Updating the Database

    Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping and updating it if the mapping changes.

    Extensibility

    As you can guess, NHibernate is far more extensible than Entity Framework. Basically everything can be extended, from ID generation to LINQ-to-SQL translation, HQL, native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, not least because practically no information exists as to what can be extended/changed. It does feature a provider model that can be extended to support any database.

    Integration With Other Microsoft APIs and Tools

    When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer).

    Documentation

    This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although they are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

    Conclusion

    Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
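
    As a concrete illustration of the "Works with Existing Data APIs" point above, here is a hedged sketch of plain ADO.NET code reading a SQL CE 4 database file from App_Data (the file, table and column names are illustrative):

        using System;
        using System.Data.SqlServerCe;

        class QuickQuery
        {
            static void PrintProducts()
            {
                // |DataDirectory| resolves to the App_Data folder inside an ASP.NET application.
                var connectionString = @"Data Source=|DataDirectory|\Store.sdf";

                using (var connection = new SqlCeConnection(connectionString))
                using (var command = new SqlCeCommand("SELECT Name, Price FROM Products", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetDecimal(1));
                    }
                }
            }
        }

    Higher-level APIs such as Entity Framework or NHibernate sit on top of the same provider, so the rest of the post's EF examples run against the same .sdf file.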
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identify column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database. Step 4: Populate with Data Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it: Step 5: Create an EF Model Layer We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx” This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application. Step 6: Compile the Project Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command. Step 7: Create a Page that Uses our EF Model Layer Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit the our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading the new Page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.  
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
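
    The three model classes added in Step 3 appear as screenshots in the original post; a hedged reconstruction of roughly what they look like (the property names are illustrative, not the exact originals):

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;   // DbContext/DbSet come from the EFCodeFirst package

        public class Dinner
        {
            public int DinnerID { get; set; }
            public string Title { get; set; }
            public DateTime EventDate { get; set; }
            public string Address { get; set; }
            public virtual ICollection<RSVP> RSVPs { get; set; }
        }

        public class RSVP
        {
            public int RsvpID { get; set; }
            public int DinnerID { get; set; }
            public string AttendeeEmail { get; set; }
            public virtual Dinner Dinner { get; set; }
        }

        public class NerdDinners : DbContext
        {
            public DbSet<Dinner> Dinners { get; set; }
            public DbSet<RSVP> RSVPs { get; set; }
        }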
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release. This topic contains the following sections: ASP.NET Core Services ASP.NET Web Forms ASP.NET MVC Dynamic Data ASP.NET Chart Control Visual Web Developer Enhancements Web Application Deployment with Visual Studio 2010 Enhancements to ASP.NET Multi-Targeting ASP.NET Core Services ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage. Extensible Output Caching Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation — generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with memory requirements for other parts of a Web application. ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers. You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes. By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for defaultProvider attribute. In addition, you can select different output-cache providers for individual control and for individual requests and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method. 
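
    A hedged sketch of the shape such a custom provider takes (derive from OutputCacheProvider and override its four methods; the in-memory dictionary store below is purely for illustration, a real provider would target disk, a distributed cache, or cloud storage):

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        public class DictionaryOutputCacheProvider : OutputCacheProvider
        {
            private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> Store =
                new ConcurrentDictionary<string, Tuple<object, DateTime>>();

            public override object Get(string key)
            {
                Tuple<object, DateTime> entry;
                if (Store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                    return entry.Item1;
                return null;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                var existing = Get(key);
                if (existing != null)
                    return existing;              // Add must not overwrite an existing entry
                Set(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                Store[key] = Tuple.Create(entry, utcExpiry);
            }

            public override void Remove(string key)
            {
                Tuple<object, DateTime> ignored;
                Store.TryRemove(key, out ignored);
            }
        }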
The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example: <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %> Preloading Web Applications Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and ready to accept HTTP traffic. To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file: <applicationPools> <add name="MyApplicationPool" startMode="AlwaysRunning" /> </applicationPools> Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file: <sites> <site name="MySite" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PrewarmMyCache" > <!-- Additional content --> </application> </site> </sites> <!-- Additional content --> <serviceAutoStartProviders> <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" /> </serviceAutoStartProviders> When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example: public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // Perform initialization. } } After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests. Permanently Redirecting a Page Content in Web applications is often moved over the lifetime of the application. This can lead to links to be out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL. 
However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example: RedirectPermanent("/newpath/foroldcontent.aspx"); Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects. Session State Compression By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large. ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStreamclass. The following example shows how to set this attribute. <sessionState mode="SqlServer" sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate" allowCustomSqlDatabase="true" compressionEnabled="true" /> ASP.NET Web Forms Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been in this area for ASP.NET 4, such as the following: The ability to set meta tags. More control over view state. Support for recently introduced browsers and devices. Easier ways to work with browser capabilities. Support for using ASP.NET routing with Web Forms. More control over generated IDs. The ability to persist selected rows in data controls. More control over rendered HTML in the FormView and ListView controls. Filtering support for data source controls. Enhanced support for Web standards and accessibility Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example: <head id="Head1" runat="server"> <title>Untitled Page</title> <meta name="keywords" content="keyword1, keyword2' /> <meta name="description" content="Description of my page" /> </head> These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription. Enabling View State for Individual Controls A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state. 
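
    A hedged code-behind sketch of that opt-in pattern (the page and control names are illustrative; the same values can also be set declaratively in markup):

        using System;
        using System.Web.UI;

        public partial class ProductsPage : Page
        {
            protected void Page_Init(object sender, EventArgs e)
            {
                // Turn view state off for the whole page by default...
                this.ViewStateMode = ViewStateMode.Disabled;

                // ...and re-enable it only for the control that actually needs it.
                ImportantLabel.ViewStateMode = ViewStateMode.Enabled;
            }
        }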
View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it instead of disabling it for many that do not need it. For more information, see Control.ViewStateMode. Support for Recently Introduced Browsers and Devices ASP.NET includes a feature that is named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list: •blackberry.browser •chrome.browser •Default.browser •firefox.browser •gateway.browser •generic.browser •ie.browser •iemobile.browser •iphone.browser •opera.browser •safari.browser A New Way to Define Browser Capabilities ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper. Routing in ASP.NET 4 ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example: http://website/products.aspx?categoryid=12 By using routing, you can use the following URL to render the same information: http://website/products/software The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. the new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler. 
The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). You no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing Walkthrough: Using ASP.NET Routing in a Web Forms Application How to: Define Routes for Web Forms Applications How to: Construct URLs from Routes How to: Access URL Parameters in a Routed Page Setting Client IDs The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. Increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.GetElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls earlier versions of ASP.NET could make this difficult or impossible. You were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and that are easier to work with because they are simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript How to: Access Controls from JavaScript by ID Persisting Row Selection in Data Controls The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects. 
    You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example: <asp:GridView id="GridView2" runat="server" EnablePersistedSelection="true"> </asp:GridView> FormView Control Enhancements The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a property new in ASP.NET 4. When this property is set to false, as shown in the following example, the table tags are not rendered. This makes it easier to apply CSS styles to the contents of the control. <asp:FormView ID="FormView1" runat="server" RenderOuterTable="false"> For more information, see FormView Web Server Control Overview. ListView Control Enhancements The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5. <asp:ListView ID="ListView1" runat="server"> <LayoutTemplate> <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder> </LayoutTemplate> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup: <asp:ListView ID="ListView1" runat="server"> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> For more information, see ListView Web Server Control Overview. Filtering Data with the QueryExtender Control A very common task for developers who create data-driven Web pages is to filter data. This traditionally has been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use it. The QueryExtender control supports a variety of filter options: SearchExpression (searches a field or fields for string values and compares them to a specified string value); RangeExpression (searches a field or fields for values in a range specified by a pair of values); PropertyExpression (compares a specified value to a property value in a field; if the expression evaluates to true, the data that is being examined is returned); OrderByExpression (sorts data by a specified column and sort direction); CustomExpression (calls a function in the page that defines a custom filter; a sketch follows below). For more information, see QueryExtender Web Server Control Overview. 
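
    A hedged sketch of the page method that a CustomExpression calls, as noted in the filter options above (the page class, entity and property names are illustrative):

        using System.Linq;
        using System.Web.UI.WebControls.Expressions;

        public partial class ProductsPage : System.Web.UI.Page
        {
            // Wired up from markup, e.g. <asp:CustomExpression OnQuerying="FilterInStockProducts" />.
            protected void FilterInStockProducts(object sender, CustomExpressionEventArgs e)
            {
                // Replace the data source's query with a filtered version of itself.
                e.Query = from p in e.Query.Cast<Product>()
                          where p.UnitsInStock > 0
                          select p;
            }
        }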
Enhanced Support for Web Standards and Accessibility

Controls in earlier versions of ASP.NET sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility.

CSS for Controls that Can be Disabled

In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled:

<asp:Label id="Label1" runat="server" Text="Test" Enabled="false" />

In ASP.NET 3.5, the previous control settings generate the following HTML:

<span id="Label1" disabled="disabled">Test</span>

In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously:

<span id="Label1" class="aspNetDisabled">Test</span>

You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property.

CSS for Validation Controls

In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control:

<asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"
    ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" />

ASP.NET 3.5 renders the following HTML for the validator control:

<span id="RequiredFieldValidator1" style="color:Red;visibility:hidden;">RequiredFieldValidator</span>

By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example:

<span id="RequiredFieldValidator1" style="visibility:hidden;">RequiredFieldValidator</span>

Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages.

CSS for the Hidden Fields Div Element

ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example:

<div class="aspNetHidden">

CSS for the Table, Image, and ImageButton Controls

By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5:

<table id="Table2" border="0">

The Image control and the ImageButton control also do this.
Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4.

CSS for the UpdatePanel and UpdateProgress Controls

In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example:

<asp:UpdatePanel runat="server" class="myStyle">
</asp:UpdatePanel>

The following HTML is rendered for this markup:

<div id="ctl00_MainContent_UpdatePanel1" class="myStyle">
</div>

Eliminating Unnecessary Outer Tables

In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control:

FormView
Login
PasswordRecovery
ChangePassword

If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false.

Layout Templates for Wizard Controls

In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property.

New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls

ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration:

UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table.
OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table.

For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration.

Header and Footer Elements for the Table Control

In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default.

CSS and ARIA Support for the Menu Control

In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways:

The generated HTML is structured as an unordered list (ul and li elements).
CSS is used for visual formatting.
The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.)
ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.)
Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated. Valid XHTML for the HtmlForm Control In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4. Maintaining Backward Compatibility in Control Rendering An existing ASP.NET Web site might have code in it that assumes that controls are rendering HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example: <system.web>   <pages controlRenderingCompatibilityVersion="3.5"/> </system.web> If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.) ASP.NET MVC ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while the designer work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2. Dynamic Data Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following: A RAD experience for quickly building a data-driven Web site. Automatic validation that is based on constraints defined in the data model. The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map. Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits: Setting default values for data fields. 
Dynamic Data enables you to provide default values at run time for fields in a data control. Interacting with the database without creating and registering a data model. Automatically validating the data that is entered by the user without writing any code. For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls. New Field Templates for URLs and E-mail Addresses ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields. [DataType(DataType.EmailAddress)] public object HomeEmail { get; set; } [DataType(DataType.Url)] public object Website { get; set; } Creating Links with the DynamicHyperLink Control Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data Support for Inheritance in the Data Model Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy and then add more fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data. Support for Many-to-Many Relationships (Entity Framework Only) The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data. New Attributes to Control Display and Support Enumerations The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values. 
The template maps the values from the database to the names in the enumeration.

Enhanced Support for Filters

Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns. The filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it will be displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater.

ASP.NET Chart Control

The ASP.NET chart server control enables you to create ASP.NET pages and applications that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features:

Data series, chart areas, axes, legends, labels, titles, and more.
Data binding.
Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering.
Statistical formulas and financial formulas.
Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective.
Events and customizations.
Interactivity and Microsoft Ajax.
Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications.

For more information, see Chart Web Server Control Overview.

Visual Web Developer Enhancements

The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript.

Improved CSS Compatibility

The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio.

HTML and JScript Snippets

In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets.

JScript IntelliSense Enhancements

In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience.
IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved to analyze large libraries of script and to display IntelliSense with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and to support diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense. Web Application Deployment with Visual Studio 2010 For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: Creating an IIS application on the destination computer and configuring IIS settings. Copying files to the destination computer. Changing Web.config settings that must be different in the destination environment. Propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map. Enhancements to ASP.NET Multi-Targeting ASP.NET 4 adds new features to the multi-targeting feature to make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version. However, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects ASP.NET Side-by-Side Execution Overview How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework
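For reference, the framework version that a Web site targets is expressed in its Web.config file. The following fragment shows the typical setting for an ASP.NET 4 site; it is included here only as an illustration of the multi-targeting setting, and your project's compilation element may carry additional attributes:

<system.web>
  <compilation targetFramework="4.0" />
</system.web>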

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
This will apply mostly to an ASP.NET application where the data is not accessed via SOA, meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 SP1.

Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.

EF and inheritance

The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by EF, and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or used as little as possible.

Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad. The EF concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can; the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance.

Bypassing inheritance with Interfaces

The first thing to know when trying to get some kind of inheritance going with EF is that you cannot give an EF-modeled class a base class that is not part of the model. Don't even try it; it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities where you don't know at design time what the type of the entity will be.

public enum EntityTypes { Unknown = -1, Dog = 0, Cat }

public interface IEntity
{
    int EntityID { get; }
    string Name { get; }
    EntityTypes EntityType { get; }
}

public partial class Dog : IEntity
{
    // implement EntityID and Name, which could actually be fields
    // from your EF model
    public EntityTypes EntityType { get { return EntityTypes.Dog; } }
}

Using this IEntity, you can then work with undefined associations in other classes:

// let's take a class that you defined in your model.
// that class has a mapping to the columns: PetID, PetType
public partial class Person
{
    public IEntity GetPet()
    {
        return IEntityController.Get(PetID, PetType);
    }
}

which makes use of a helper class:

public class IEntityController
{
    static public IEntity Get(int id, EntityTypes type)
    {
        switch (type)
        {
            case EntityTypes.Dog: return Dog.Get(id);
            case EntityTypes.Cat: return Cat.Get(id);
            default: throw new Exception("Invalid EntityType");
        }
    }
}

Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.

Compiled Queries

Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it, however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you will save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.)

An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery<T>:

string query = "select value dog " +
               "from Entities.DogSet as dog " +
               "where dog.ID = @ID";
ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance);
oQuery.Parameters.Add(new ObjectParameter("ID", id));
oQuery.EnablePlanCaching = true;
return oQuery.FirstOrDefault();

The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option.

The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate:

static readonly Func<Entities, int, Dog> query_GetDog =
    CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
        ctx.DogSet.FirstOrDefault(it => it.ID == id));

or using LINQ

static readonly Func<Entities, int, Dog> query_GetDog =
    CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
        (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault());

to call the query:

query_GetDog.Invoke( YourContext, id );

The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...
Includes

Let's say you want the data for the dog owner to be returned by the query, to avoid making 2 calls to the database. Easy to do, right?

EntitySQL

string query = "select value dog " +
               "from Entities.DogSet as dog " +
               "where dog.ID = @ID";
ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner");
oQuery.Parameters.Add(new ObjectParameter("ID", id));
oQuery.EnablePlanCaching = true;
return oQuery.FirstOrDefault();

CompiledQuery

static readonly Func<Entities, int, Dog> query_GetDog =
    CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
        (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault());

Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy, and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL:

public Dog Get(int id, string include)
{
    string query = "select value dog " +
                   "from Entities.DogSet as dog " +
                   "where dog.ID = @ID";
    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)
        .IncludeMany(include);
    oQuery.Parameters.Add(new ObjectParameter("ID", id));
    oQuery.EnablePlanCaching = true;
    return oQuery.FirstOrDefault();
}

The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems. The obvious

static readonly Func<Entities, int, string, Dog> query_GetDog =
    CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
        (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault());

will choke when called with:

query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" );

because, as mentioned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function:

static readonly Func<Entities, int, string, Dog> query_GetDog =
    CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
        (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault());

Wrong again, this time because the EF cannot parse IncludeMany: it is not one of the functions that it recognizes; it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 Includes, and pass each of the separate strings in a struct to CompiledQuery. But now the query looks like this:

from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3)
    .Include(include4).Include(include5).Include(include6)
    .[...].Include(include19).Include(include20)
where dog.ID == id
select dog

which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery<Dog> with CompiledQuery and then set the includes on that?
Well, that is what I would have thought as well:

static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog =
    CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) =>
        (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog));

public Dog GetDog( int id, string include )
{
    ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id);
    oQuery = oQuery.IncludeMany(include);
    return oQuery.FirstOrDefault();
}

That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required.

Passing more than 3 parameters to a CompiledQuery

Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily.

public struct MyParams
{
    public string param1;
    public int param2;
    public DateTime param3;
}

static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
    CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
        from dog in ctx.DogSet
        where dog.Age == myParams.param2 && dog.Name == myParams.param1
            && dog.BirthDate > myParams.param3
        select dog);

public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate )
{
    MyParams myParams = new MyParams();
    myParams.param1 = name;
    myParams.param2 = age;
    myParams.param3 = birthDate;
    return query_GetDog(YourContext, myParams).ToList();
}

Return Types

(This does not apply to EntitySQL queries, as they aren't compiled at the same time during execution as the CompiledQuery method.)

Working with LINQ, you usually don't force the execution of the query until the very last moment, in case some other function downstream wants to change the query in some way:

static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
    CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
        from dog in ctx.DogSet
        where dog.Age == age && dog.Name == name
        select dog);

public IEnumerable<Dog> GetSomeDogs( int age, string name )
{
    return query_GetDog(YourContext, age, name);
}

public void DataBindStuff()
{
    IEnumerable<Dog> dogs = GetSomeDogs(4, "Bud");
    // but I want the dogs ordered by BirthDate
    gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
}

What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and force it to be re-parsed. So, the rule of thumb is to return a List<T> of objects instead.
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
    CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
        from dog in ctx.DogSet
        where dog.Age == age && dog.Name == name
        select dog);

public List<Dog> GetSomeDogs( int age, string name )
{
    return query_GetDog(YourContext, age, name).ToList(); //<== change here
}

public void DataBindStuff()
{
    List<Dog> dogs = GetSomeDogs(4, "Bud");
    // but I want the dogs ordered by BirthDate
    gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
}

When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

Performance

What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time, you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or if you want a script to call your pages to force a compilation of the queries.

Can this query be cached?

Dog dog = (from d in YourContext.DogSet where d.ID == id select d).FirstOrDefault();

No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it.

Parametrized Queries

Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

public struct MyParams
{
    public string name;
    public bool checkName;
    public int age;
    public bool checkAge;
}

static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
    CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
        from dog in ctx.DogSet
        where (!myParams.checkAge || dog.Age == myParams.age)
            && (!myParams.checkName || dog.Name == myParams.name)
        select dog);

protected List<Dog> GetSomeDogs()
{
    MyParams myParams = new MyParams();
    myParams.name = "Bud";
    myParams.checkName = true;
    myParams.age = 0;
    myParams.checkAge = false;
    return query_GetDog(YourContext, myParams).ToList();
}

The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in).

Another way is to build an EntitySQL query piece by piece, like we all did with SQL.

protected List<Dog> GetSomeDogs( string name, int age)
{
    string query = "select value dog from Entities.DogSet as dog where 1 = 1 ";
    if( !String.IsNullOrEmpty(name) )
        query = query + " and dog.Name = @Name ";
    if( age > 0 )
        query = query + " and dog.Age = @Age ";

    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
    if( !String.IsNullOrEmpty(name) )
        oQuery.Parameters.Add( new ObjectParameter( "Name", name ) );
    if( age > 0 )
        oQuery.Parameters.Add( new ObjectParameter( "Age", age ) );

    return oQuery.ToList();
}

Here the problems are:
- there is no syntax checking during compilation
- each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a real-world search.
- no one likes to concatenate strings!

Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results.

protected List<Dog> GetSomeDogs( string name, int age, string city)
{
    string query = "select value dog from Entities.DogSet as dog where dog.Owner.Address.City = @City ";
    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
    oQuery.Parameters.Add( new ObjectParameter( "City", city ) );

    List<Dog> dogs = oQuery.ToList();
    if( !String.IsNullOrEmpty(name) )
        dogs = dogs.Where( it => it.Name == name ).ToList();
    if( age > 0 )
        dogs = dogs.Where( it => it.Age == age ).ToList();
    return dogs;
}

It is particularly useful when you start by displaying all the data and then allow for filtering.

Problems:
- Could lead to serious data transfer if you are not careful about your subset.
- You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
- Use lambda-based query building when you don't care about pre-compiling your queries (see the sketch below).
- Use a fully-defined pre-compiled Linq query when your object structure is not too complex.
- Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits).
- Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
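The lambda-based query building mentioned in the first recommendation is not shown in the original post; a minimal sketch of what it could look like (reusing the Dog and YourContext names from the examples above, and skipping pre-compilation entirely) is:

public List<Dog> SearchDogs( string name, int age )
{
    // Start from the full set and add criteria only when they are supplied.
    // Nothing is sent to the database until ToList() executes the query.
    IQueryable<Dog> query = YourContext.DogSet;

    if( !String.IsNullOrEmpty(name) )
        query = query.Where( d => d.Name == name );
    if( age > 0 )
        query = query.Where( d => d.Age == age );

    return query.ToList();
}

Each distinct combination of criteria still produces a query that EF has to parse on every call (no plan caching), which is exactly the trade-off described above.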
Singleton access

The best way to deal with your context and entities across all your pages is to use the singleton pattern:

public sealed class YourContext
{
    private const string instanceKey = "On3GoModelKey";

    YourContext() {}

    public static YourEntities Instance
    {
        get
        {
            HttpContext context = HttpContext.Current;
            if( context == null )
                return Nested.instance;

            if (context.Items[instanceKey] == null)
            {
                YourEntities entity = new YourEntities();
                context.Items[instanceKey] = entity;
            }
            return (YourEntities)context.Items[instanceKey];
        }
    }

    class Nested
    {
        // Explicit static constructor to tell C# compiler
        // not to mark type as beforefieldinit
        static Nested() { }
        internal static readonly YourEntities instance = new YourEntities();
    }
}

NoTracking, is it worth it?

When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here. For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10.

Dog dog = (from d in YourContext.DogSet where d.ID == 2 select d).FirstOrDefault();
//dog.OwnerReference.IsLoaded == false;

Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault();
//dog.OwnerReference.IsLoaded == true;

If we were to do the same with no tracking, the result would be different.

ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
    (from d in YourContext.DogSet where d.ID == 2 select d);
oDogQuery.MergeOption = MergeOption.NoTracking;
Dog dog = oDogQuery.FirstOrDefault();
//dog.OwnerReference.IsLoaded == false;

ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
    (from o in YourContext.PersonSet where o.ID == 10 select o);
oPersonQuery.MergeOption = MergeOption.NoTracking;
Person owner = oPersonQuery.FirstOrDefault();
//dog.OwnerReference.IsLoaded == false;

Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world, there is a price for it in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and exceptions will be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that

Dog dog1 = (from d in YourContext.DogSet where d.ID == 2 select d).FirstOrDefault();

ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
    (from d in YourContext.DogSet where d.ID == 2 select d);
oDogQuery.MergeOption = MergeOption.NoTracking;
Dog dog2 = oDogQuery.FirstOrDefault();

dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail".
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.

How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.

So should I use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen.

Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog);
oDogQuery.MergeOption = MergeOption.NoTracking;
List<Dog> dogs = oDogQuery.ToList();

ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
oPersonQuery.MergeOption = MergeOption.NoTracking;
List<Person> owners = oPersonQuery.ToList();

In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance.

No lazy loading, what am I to do?

This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call

if( !ObjectReference.IsLoaded ) ObjectReference.Load();

if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object:

public class Dog
{
    public static Dog Get(int id)
    {
        return YourContext.DogSet.FirstOrDefault(it => it.ID == id );
    }
}

This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

static public Dog Get(int id) { return Get(id, ""); }

static public Dog Get(int id, string includePath)
{
    string query = "select value o " +
                   " from YourEntities.DogSet as o " +

    Read the article

< Previous Page | 1 2