Search Results


  • Running ASP.NET Webforms and ASP.NET MVC side by side

    - by rajbk
    One of the nice things about ASP.NET MVC and its older brother ASP.NET WebForms is that they are both built on top of the ASP.NET runtime environment. The advantage of this is that you can still run them side by side even though MVC and WebForms are different frameworks. Another point to note is that with the release of ASP.NET routing in .NET 3.5 SP1, we are able to create SEO friendly URLs that do not map to specific files on disk. The routing is part of the core runtime environment and can therefore be used by both WebForms and MVC. To run both frameworks side by side, we could simply create a separate folder in the MVC project for all our WebForm files and be good to go. What this post shows you instead is how to have an MVC application with WebForm pages that both use a common master page and common routing for SEO friendly URLs. A sample project that shows WebForms and MVC running side by side is attached at the bottom of this post.

    So why would we want to run WebForms and MVC in the same project? WebForms come with a lot of nice server controls that provide a lot of functionality. One example is the ReportViewer control. Using this control and client report definition files (RDLC), we can create rich interactive reports (with charting controls). I show you how to use the ReportViewer control in a WebForm project here: Creating an ASP.NET report using Visual Studio 2010. We can create even more advanced reports by using SQL Server Reporting Services, which can also be rendered by the ReportViewer control. Now, consider the sample MVC application I blogged about called ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager. Assume you were given the requirement to add a UI to the MVC application where users could interact with a report and be given the option to export the report to Excel, PDF or Word. How do you go about doing it? This is a perfect scenario for the ReportViewer control and RDLCs. As you saw in the post on creating the ASP.NET report, the ReportViewer control is a Web Control and is designed to run in a WebForm project with dependencies on, amongst others, a ScriptManager control and the beloved ViewState.

    Since MVC and WebForms both run under the same runtime, the easiest thing to do is to add the WebForm application files (index.aspx, rdlc, related class files) into our MVC project. You can copy the files over from the WebForm project into the MVC project. Create a new folder in the MVC application called CommonReports. Add the index.aspx and rdlc files from the WebForm project. Right click on the Index.aspx file and convert it to a web application. This will add the index.aspx.designer.cs file (this step is not required if you are manually adding a WebForm aspx file into the MVC project). Verify that the type names for the ObjectDataSources in the code-behind point to the correct ProductRepository and fix any compiler errors. Right click on Index.aspx and select “View in browser”. You should see a screen like the one below.

    There are two issues with our page: it does not use our site master page, and the URL is not SEO friendly.

    Common Master Page

    The easiest way to use master pages with both MVC and WebForm pages is to have a common master page that each inherits from, as shown below. The reason for this is that most WebForm controls must be placed inside a Form control and require ControlState or ViewState.
    ViewMasterPages used in MVC, on the other hand, are designed to be used with content pages that derive from ViewPage, with ViewState turned off. By having a separate master page for MVC and WebForm pages that inherits from the Root master page, we can set properties that are specific to each. For example, in the WebForm master we can turn on ViewState, add a form tag, etc. Another point worth noting is that if you set a WebForm page to use an MVC site master page, you may run into errors like the following: A ViewMasterPage can be used only with content pages that derive from ViewPage or ViewPage<TViewItem>, or Control 'MainContent_MyButton' of type 'Button' must be placed inside a form tag with runat=server. Since the ViewMasterPage inherits from MasterPage as seen below, we make our Root.master inherit from MasterPage, MVC.master inherit from ViewMasterPage and WebForm.master inherit from MasterPage. We define the attributes on the master pages like so:

    Root.master
    <%@ Master Inherits="System.Web.UI.MasterPage" … %>

    MVC.master
    <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="System.Web.Mvc.ViewMasterPage" … %>

    WebForm.master
    <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="NorthwindSales.Views.Shared.Webform" %>

    Code behind:
    public partial class Webform : System.Web.UI.MasterPage {}

    We make changes to our report's aspx file to use the Webform.master. See the source of the master pages in the sample project for a better understanding of how they are connected.

    SEO friendly links

    We want to create SEO friendly links that point to our report. A request to /Reports/Products should render the report located in ~/CommonReports/Products.aspx. Similarly, to support future reports, a request to /Reports/Sales should render a report in ~/CommonReports/Sales.aspx. Let's start by renaming our index.aspx file to Products.aspx to be consistent with the routing criteria above. As mentioned earlier, since routing is part of the core runtime environment, we can easily create a custom route for our reports by adding an entry in Global.asax:

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        //Custom route for reports
        routes.MapPageRoute(
            "ReportRoute",                       // Route name
            "Reports/{reportname}",              // URL
            "~/CommonReports/{reportname}.aspx"  // File
        );

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
        );
    }

    With our custom route in place, a request to Reports/Employees will render the page at ~/CommonReports/Employees.aspx. We make this custom route the first entry since the routing system walks the table from top to bottom and the first route to match wins. Note that it is highly recommended that you write unit tests for your routes to ensure that the mappings you defined are correct (a sample test is sketched at the end of this post).

    Common Menu Structure

    The master page in our original MVC project had a menu structure like so:

    <ul id="menu">
        <li><%=Html.ActionLink("Home", "Index", "Home") %></li>
        <li><%=Html.ActionLink("Products", "Index", "Products") %></li>
        <li><%=Html.ActionLink("Help", "Help", "Home") %></li>
    </ul>

    We want this menu structure to be common to all pages/views, and hence it should reside in Root.master. Unfortunately, the Html.ActionLink helpers will not work, since Root.master inherits from MasterPage, which does not have the helper methods available. The quickest way to resolve this issue is to use RouteUrl expressions.
    Using RouteUrl expressions, we can programmatically generate URLs that are based on route definitions. By specifying parameter values and a route name if required, we get back a URL string that corresponds to a matching route. We move our menu structure to Root.master and change it to use RouteUrl expressions:

    <ul id="menu">
        <li><asp:HyperLink ID="hypHome" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=index%>">Home</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypProducts" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=products,action=index%>">Products</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypReport" runat="server" NavigateUrl="<%$RouteUrl:routename=ReportRoute,reportname=products%>">Product Report</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypHelp" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=help%>">Help</asp:HyperLink></li>
    </ul>

    We are done adding the common navigation to our application. The application now uses a common theme, routing and navigation structure.

    Conclusion

    We have seen how to do the following through this post:

    - Add a WebForm page from a WebForm project to an existing ASP.NET MVC application
    - Use a common master page for both WebForm and MVC pages
    - Use routing for SEO friendly links
    - Use a common menu structure for both WebForm and MVC

    The sample project is attached below. Version: VS 2010 RTM. Remember to change your connection string to point to your Northwind database. NorthwindSalesMVCWebform.zip
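    One last note on the routing section above: here is a minimal sketch of the kind of route unit test recommended there. This is not from the original article; it assumes an MSTest project that references the MVC application, and uses Moq (an assumed choice) to stub the one HttpContextBase member the routing engine reads:

    using System.Web;
    using System.Web.Routing;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Moq;

    [TestClass]
    public class ReportRouteTests
    {
        [TestMethod]
        public void RequestForReportsProducts_CapturesReportName()
        {
            // Register the application's routes into a fresh collection.
            var routes = new RouteCollection();
            MvcApplication.RegisterRoutes(routes);

            // Stub the request path the routing engine inspects.
            var context = new Mock<HttpContextBase>();
            context.Setup(c => c.Request.AppRelativeCurrentExecutionFilePath)
                   .Returns("~/Reports/Products");

            var routeData = routes.GetRouteData(context.Object);

            // The ReportRoute should match and capture the report name.
            Assert.IsNotNull(routeData);
            Assert.AreEqual("Products", routeData.Values["reportname"]);
        }
    }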

    Read the article

  • Looking into Entity Framework Code First Migrations

    - by nikolaosk
    In this post I will introduce you to Code First Migrations, an Entity Framework feature introduced in version 4.3 back in February of 2012. I have extensively covered Entity Framework in this blog. Please find my other Entity Framework posts here.

    Before the addition of Code First Migrations (4.1, 4.2 versions), Code First database initialisation meant that Code First would create the database if it did not exist (the default behaviour - CreateDatabaseIfNotExists). The other pattern we could use is DropCreateDatabaseIfModelChanges, which means that Entity Framework will drop the database if it realises that the model has changed since the last time it created the database. The final pattern is DropCreateDatabaseAlways, which means that Code First will recreate the database every time one runs the application. That is of course fine for the development database, but totally unacceptable and catastrophic when you have a production database. We cannot lose our data simply because of the way Code First works. Migrations solve this problem. With migrations we can modify the database without completely dropping it: we can modify the database schema to reflect the changes to the model without losing data. In EF 5.0, migrations are fully included and supported. I will demonstrate migrations with a hands-on example.

    Let me say a few words about Entity Framework first. The .NET framework provides support for Object Relational Mapping through EF. So EF is an ORM tool, and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we have many things provided for us out of the box: the automatic generation of SQL code, the mapping of relational data to strongly typed objects, and all the changes made to the objects in memory are persisted in a transactional way back to the data store. You can find in this post an example of how to use Entity Framework to retrieve data from a SQL Server database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example of how to use the "Model First" approach when working with ASP.NET and Entity Framework. This model was first introduced in EF version 4.0: we could start with a blank model and then create a database from that model, and when we made changes to the model, we could recreate the database from the new model.

    The Code First approach is more code-centric than the other two. Basically, we write POCO classes and then we persist them to a database using something called DbContext. Code First relies on DbContext. We create a few classes (e.g. Person, Product) with properties, and as these classes interact with the DbContext class we can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach. By using this approach we can write much easier unit tests. DbContext is a new context class and is a smaller, lightweight wrapper for the main context class, ObjectContext (Schema First and Model First).

    Let's move on to our hands-on example. I have installed VS 2012 Ultimate edition on my Windows 8 machine.

    1) Create an empty ASP.NET web application. Give your application a suitable name. Choose C# as the development language.
    2) Add a new web form item to your application. Leave the default name.
    3) Create a new folder.
    Name it CodeFirst.
    4) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place this class file in the CodeFirst folder. The code follows:

    public class Footballer
    {
        public int FootballerID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public double Weight { get; set; }
        public double Height { get; set; }
    }

    5) We will have to add EF 5.0 to our project. Right-click on the project in the Solution Explorer and select Manage NuGet Packages... for it. In the window that pops up, search for Entity Framework and install it. Have a look at the picture below. If you want to verify that EF version 5.0 is indeed installed, have a look at the References. Have a look at the picture below to see what you will see if you have installed everything correctly.

    6) Then we need to create a context class that inherits from DbContext. Add a new class to the CodeFirst folder. Name it FootballerDBContext. Now that we have the entity classes created, we must let the model know about them. I will have to use the DbSet<T> property. The code for this class follows:

    public class FootballerDBContext : DbContext
    {
        public DbSet<Footballer> Footballers { get; set; }
    }

    Do not forget to add (using System.Data.Entity;) at the beginning of the class file.

    7) We must take care of the connection string. It is very easy to create one in the web.config. It does not matter that we do not have a database yet: when we run the DbContext and query against it, it will use the connection string in the web.config and will create the database based on the classes. I will use the name "FootballTraining" for the database. In my case the connection string inside the web.config looks like this:

    <connectionStrings>
        <add name="CodeFirstDBContext" connectionString="server=.;integrated security=true; database=FootballTraining" providerName="System.Data.SqlClient"/>
    </connectionStrings>

    8) Now it is time to create LINQ to Entities queries to retrieve data from the database. Add a new class to your application in the CodeFirst folder. Name the file DALfootballer.cs. We will create a simple public method to retrieve the footballers. The code for the class follows:

    public class DALfootballer
    {
        FootballerDBContext ctx = new FootballerDBContext();

        public List<Footballer> GetFootballers()
        {
            var query = from player in ctx.Footballers select player;
            return query.ToList();
        }
    }

    9) Place a GridView control on the Default.aspx page and leave the default name. Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DataSourceID property of the GridView control to the ID of the ObjectDataSource control (DataSourceID="ObjectDataSource1"). Let's configure the ObjectDataSource control. Click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the wizard that pops up, select the DALfootballer class and then in the next step choose the GetFootballers() method. Click Finish to complete the steps of the wizard. Build and run your application.

    10) Obviously you will not see any records coming back from your database, because we have not inserted anything. The database is created, though. Have a look at the picture below.

    11) Now let's change the POCO class.
    Let's add a new property to the Footballer.cs class:

    public int Age { get; set; }

    Build and run your application again. You will receive an error. Have a look at the picture below.

    12) That was to be expected. EF Code First Migrations is not activated by default. We have to activate it manually and configure it according to our needs. We will open the Package Manager Console from the Tools menu within Visual Studio 2012. Then we will activate the EF Code First Migrations features by running the command “Enable-Migrations”. Have a look at the picture below. This adds a new Migrations folder to our project. A new auto-generated class, Configuration.cs, is created. Another class, [CURRENTDATE]_InitialCreate.cs, is also created and added to our project. The Configuration.cs is shown in the picture below. The [CURRENTDATE]_InitialCreate.cs is shown in the picture below.

    13) Now we are ready to migrate the changes to the database. We need to run the Add-Migration Age command in the Package Manager Console. Add-Migration will scaffold the next migration based on changes you have made to your model since the last migration was created. In the Migrations folder, the file 201211201231066_Age.cs is created. Have a look at the picture below to see the newly generated file and its contents. Now we can run the Update-Database command in the Package Manager Console (see the picture above). Code First Migrations will compare the migrations in our Migrations folder with the ones that have been applied to the database. It will see that the Age migration needs to be applied, and run it. The EFMigrations.CodeFirst.FootballerDBContext database is now updated to include the Age column in the Footballers table. Build and run your application. Everything will work fine now. Have a look at the picture below to see the migrations applied to our table.

    14) We may want the application to automatically upgrade the database (by applying any pending migrations) when it launches. Let's add another property to our POCO class:

    public string TShirtNo { get; set; }

    We want this change to migrate automatically to the database, so we go to Configuration.cs and enable automatic migrations:

    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    In the Page_Load event handling routine we have to register the MigrateDatabaseToLatestVersion database initializer. A database initializer simply contains some logic that is used to make sure the database is set up correctly:

    protected void Page_Load(object sender, EventArgs e)
    {
        Database.SetInitializer(new MigrateDatabaseToLatestVersion<FootballerDBContext, Configuration>());
    }

    Build and run your application. It will work fine. Have a look at the picture below to see the migrations applied to our table in the database. Hope it helps!!!
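    For reference, since the contents of the scaffolded file in step 13 are only shown as a picture above, the 201211201231066_Age.cs file contains a DbMigration class roughly along these lines (a sketch of what Add-Migration typically generates for a new column; the namespace is an assumption, not copied from the post):

    namespace EFMigrations.CodeFirst.Migrations
    {
        using System.Data.Entity.Migrations;

        public partial class Age : DbMigration
        {
            public override void Up()
            {
                // Applying the migration adds the new Age column to the Footballers table.
                AddColumn("dbo.Footballers", "Age", c => c.Int(nullable: false));
            }

            public override void Down()
            {
                // Rolling the migration back removes the column again.
                DropColumn("dbo.Footballers", "Age");
            }
        }
    }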

    Read the article

  • HTML5-MVC application using VS2010 SP1

    - by nmarun
    This is my first attempt at creating HTML5 pages. VS 2010 allows working with HTML5 now (you just need to make a small change after installing SP1), so my Razor view is now an HTML5 page. I call this application 5Commerce: an (over-simplified) HTML5 ecommerce site. Here's the flow of the application:

    - the home page renders
    - the user enters first and last name, chooses a product and the quantity
    - the user can enter additional instructions for the order
    - the user places the order
    - the user is then taken to another page showing the order details

    Off to the details. This is what my page looks like in Google Chrome 10 beta (or later) soon after it renders. Here are some of the things to observe. Look a little closer and you'll see a border around the first name textbox - this is 'autofocus' in action. I've set the autofocus attribute on this textbox, so as soon as the page loads, this control gets focus.

    <input type="text" autofocus id="firstName" class="inputWidth" data_minlength="" data_maxlength="" placeholder="first name" />

    See the partially grayed out 'last name' text in the second textbox? This is set using a placeholder attribute (see above). It gets wiped out on focus and improves the UI visuals in general. The quantity textbox is actually a numerical-only textbox.

    <input type="number" id="quantity" data_mincount="" class="inputWidth" />

    The last line is for additional instructions. This looks like a label, but its content is editable. Just adding the 'contenteditable' attribute to the span allows the user to edit the text inside.

    <span contenteditable id="additionalInstructions" data_texttype="" class="editableContent">select text and edit</span>

    All of the above is just plain HTML (no lurking JavaScript acting in here). Makes it real clean and simple. Going more into the HTML, I see that the _Layout.cshtml already uses some HTML5 content. I created my project before installing SP1, so that was the reason for my surprise.

    <!DOCTYPE html>

    This is the doctype declaration in HTML5, and it is supported even by IE6 (just take my word on IE6 now, don't go install it to test it, especially when MS is doing an IE6 countdown). That's just amazing and extremely easy to read and remember, and it means a few less bytes on every call! I modified the rest of my _Layout.cshtml to the below:

    <!DOCTYPE html>
    <html>
    <head>
        <title>5Commerce - HTML5 Ecommerce site</title>
        <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
        <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/CustomScripts.js")" type="text/javascript"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                WireupEvents();
            });
        </script>
    </head>
    <body role="document" class="bodybackground">
        <header role="heading">
            <h2>5Commerce - HTML5 Ecommerce site!</h2>
        </header>
        <section id="mainForm">
            @RenderBody()
        </section>
        <footer id="page_footer" role="siteBaseInfo">
            <p>&copy; 2011 5Commerce Inc!</p>
        </footer>
    </body>
    </html>

    I'm sure you're seeing some of the new tags here. To give a brief intro about them:

    - <header>, <footer>: marks the header/footer region of a page or section
    - <section>: a logical grouping of content
    - role attribute: identifies the responsibility of an element; this attribute can be used by screen readers and can also be filtered through jQuery

    SP1 also allows for some intellisense in HTML5.
    You see the other types of input fields - email, date, datetime, month, url - and there are others as well. So once my page loads, i.e., 'on document ready', I'm wiring up the events following the principles of unobtrusive JavaScript. In the snippet below, I'm controlling the behavior of the input controls for specific events.

    $("#productList").bind('change blur', function () {
        IsSelectedProductValid();
    });

    $("#quantity").bind('blur', function () {
        IsQuantityValid();
    });

    $("#placeOrderButton").click(
        function () {
            if (IsPageValid()) {
                LoadProducts();
            }
        });

    This enables some client-side validation to occur before the data is sent to the server. These validation constraints are obtained through a JSON call to the WCF service and are set on the 'data_' attributes of the input controls. Have a look at the 'GetValidators()' function below:

    function GetValidators() {
        // the call to your webservice or page
        $.ajax({
            type: "GET", // GET or POST or PUT or DELETE verb
            url: "http://localhost:14805/OrderService.svc/GetValidators", // location of the service
            data: "{}", // data sent to server
            contentType: "application/json; charset=utf-8", // content type sent to server
            dataType: "json", // expected data format from server
            processdata: true,
            success: function (result) { // on successful service call
                if (result.length > 0) {
                    for (i = 0; i < result.length; i++) {
                        if (result[i].PropertyName == "FirstName") {
                            if (result[i].MinLength > 0) {
                                $("#firstName").attr("data_minLength", result[i].MinLength);
                            }
                            if (result[i].MaxLength > 0) {
                                $("#firstName").attr("data_maxLength", result[i].MaxLength);
                            }
                        }
                        else if (result[i].PropertyName == "LastName") {
                            if (result[i].MinLength > 0) {
                                $("#lastName").attr("data_minLength", result[i].MinLength);
                            }
                            if (result[i].MaxLength > 0) {
                                $("#lastName").attr("data_maxLength", result[i].MaxLength);
                            }
                        }
                        else if (result[i].PropertyName == "Quantity") {
                            if (result[i].MinCount > 0) {
                                $("#quantity").attr("data_minCount", result[i].MinCount);
                            }
                        }
                        else if (result[i].PropertyName == "AdditionalInstructions") {
                            if (result[i].TextType.length > 0) {
                                $("#additionalInstructions").attr("data_textType", result[i].TextType);
                            }
                        }
                    }
                }
            },
            error: function (result) { // when the service call fails
                alert('Service call failed: ' + result.status + ' ' + result.statusText);
            }
        });

        //....
    }

    Just before the GetValidators() function runs and sets the validation constraints, this is what the html looks like (seen through the Dev tools of Chrome). After the function executes, you see the values in the 'data_' attributes. As and when we enter valid data into these fields, the error messages disappear, since the validation is bound to the blur event of the control. There you see… no error messages (well, the catch here is that once you enter THAT name, all errors disappear automatically). Clicking on 'Place Order!' runs the SaveOrder function. You can see the JSON for the order object that is getting constructed and passed to the WCF service.
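    As an aside, the bodies of the validation functions wired up above are not shown in this post; based on the 'data_' attributes that GetValidators() sets, IsQuantityValid() might look roughly like this (a sketch; the "#quantityError" element and the exact behavior are hypothetical, not taken from the source):

    function IsQuantityValid() {
        // Read the constraint that GetValidators() stored on the element
        // and compare it against the current value of the number textbox.
        var minCount = parseInt($("#quantity").attr("data_minCount"), 10);
        var quantity = parseInt($("#quantity").val(), 10);
        var isValid = !isNaN(quantity) && (isNaN(minCount) || quantity >= minCount);

        // Show or hide a hypothetical inline error message element.
        $("#quantityError").toggle(!isValid);
        return isValid;
    }

    With that aside done, here is the SaveOrder function itself: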
    function SaveOrder() {
        var addlInstructionsDefaultText = "select text and edit";
        var addlInstructions = $("span:first").text();
        if (addlInstructions == addlInstructionsDefaultText) {
            addlInstructions = '';
        }
        var orderJson = {
            AdditionalInstructions: addlInstructions,
            Customer: {
                FirstName: $("#firstName").val(),
                LastName: $("#lastName").val()
            },
            OrderedProduct: {
                Id: $("#productList").val(),
                Quantity: $("#quantity").val()
            }
        };

        // the post to your webservice or page
        $.ajax({
            type: "POST", // GET or POST or PUT or DELETE verb
            url: "http://localhost:14805/OrderService.svc/SaveOrder", // location of the service
            data: JSON.stringify(orderJson), // data sent to server
            contentType: "application/json; charset=utf-8", // content type sent to server
            dataType: "json", // expected data format from server
            processdata: false,
            success: function (result) { // on successful service call
                window.location.href = "http://localhost:14805/home/ShowOrderDetail/" + result;
            },
            error: function (request, error) { // when the service call fails
                alert('Service call failed: ' + request.status + ' ' + request.statusText);
            }
        });
    }

    The service saves this order into an XML file and returns the order id (a guid). On success, I redirect to the ShowOrderDetail action method, passing the guid. This page will show all the details of the order. Although the back-end weightlifting is done by WCF, I did not show any of that plumbing work, as I wanted to concentrate more on HTML5 and its associates. However, you can see it all in the source here.

    I do have one issue with HTML5, and this is an existing issue with HTML4 as well. If you see the snippet above where I've declared a textbox for first name, you'll see the autofocus attribute just dangling by itself. It doesn't follow the XML syntax of 'key="value"', allowing users to continue writing badly-formatted HTML even in the new version. You'll see the same issue with the 'contenteditable' attribute as well. The work-around is that you can write 'autofocus="true"' and it'll work fine, plus make the markup well-formed. But unless the standards enforce this, there will be people (me included) who'll get by, by just typing the bare minimum! Hoping this will get fixed in the coming version-updates. Source code here.

    Verdict: I think it's time for us to embrace the new HTML5. Thank you HTML4, and welcome HTML5.

    Read the article

  • My Visual Studio Demo Video Link disappeared – How do I get it back?

    - by Tarun Arora
    ***Special thanks to Adam Cogan for asking this question and to Andrew Bragdon for answering this question on the ALM Champs list.***

    1. Problem - The link to demo videos disappears once you have watched the video

    Learning Visual Studio has become easier than ever with the Visual Studio How-to Videos hosted inside of Visual Studio, showing up in the context of the task you are trying to achieve. For instance, when you click Code Review in Team Explorer, you can see the link “Streaming Video: Using Code Review to improve quality”. When you click this link, the video stream is delivered to you right within Visual Studio. The next time you run Visual Studio, you will notice that the home page has a check mark against the video “Using Code Review to improve quality”. If you navigate to Code Review in the myWork hub in Team Explorer, you will notice that the link “Streaming Video: Using Code Review to improve quality” does not show up any more.

    2. Solution - How to get the demo video link back

    Warning: Editing the registry can lead to serious problems if not done correctly. Always back up your registry before editing. This solution is neither suggested nor supported by Microsoft.

    Type regedit in the Run command prompt to open the Registry Editor. Navigate to the path Computer\HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState and notice the newly created folder “TeamExplorer.CodeReview”; notice the key Watched is set to 1. Change the value of the key ‘Watched’ to 0. Restart Visual Studio and navigate to Code Review in the myWork hub and voila, the link to stream the video is back! Watch and enjoy the demo videos to your heart's content!
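    P.S. If you prefer scripting the fix instead of clicking through regedit, a single reg.exe command should achieve the same result (an untested sketch that assumes the Watched value is a DWORD; the registry-editing warning above applies here too):

    reg add "HKCU\Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState\TeamExplorer.CodeReview" /v Watched /t REG_DWORD /d 0 /f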

    Read the article

  • Do you know about the Visual Studio ALM Rangers Guidance?

    - by Martin Hinshelwood
    I have been tasked with investigating the guidance available around Visual Studio 2010 for one of our customers, and it makes sense to make this available to everyone. The official guidance around Visual Studio 2010 has been created by the Visual Studio ALM Rangers and is a brew of a bunch of really clever guys' experiences working with the tools and customers. I will be creating a series of posts on the different guidance options, as many people still do not know about them, even though Willy-Peter Schaub has done a fantastic job of making sure they get the recognition they deserve. There is a full list of all of the Rangers Solutions and Projects on MSDN, but I wanted to add my own point of view on the usefulness of each one. If you don't know who the Rangers are, you should have a look at the Visual Studio ALM Rangers Index to see the full breadth of where the Rangers are. All of the Rangers Solutions are available on CodePlex, where you can download them and add reviews…

    Rangers Solutions and Projects

    - Do you know about the Visual Studio 2010 Architecture Guidance?
    - More coming soon…

    These solutions took a very long time to put together, and I wanted to make sure that we all understand the value of the free time that members of the Product Team, Visual Studio ALM MVPs and partners put in to make them happen.

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database: e.g., none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source code changes), it fits in memory, and it can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have administrative ability to force a change. But I'm affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.
    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE? (instead of flatfile/other) If you need...
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transaction/rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many/m-m is easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced query
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live
    - Backup capabilities
    - Cached queries / develop & cache execution plans
    - Interleaved update/read
    - Referential integrity, avoid redundant/missing/corrupt/out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools

    [Disadvantages:]
    - Important to get right the first time - professional design - but only b/c it's meant to last
    - s/w & h/w cost
    - Usually over a network, speed issue (best vs. best design vs. local = even then a separate process requires marshalling/network layers/inter-process comm)
    - Indices and query processing can stand in the way of simple processing (vs. flatfile)

    WHY USE A FLATFILE: If you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only update the record you're reading (fixed length recs only)
    - Too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / store as document by novice - simple format
    - Low design learning curve but high cost later

    WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): If you need...
    - Processing a single db/ff record that was imported
    - Known size of data
    - Static data if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared, but encapsulates its data manipulation
    - Extreme speed needed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
    This will have a greater chance of eliciting the change that I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
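    P.S. For the curious, here is a minimal sketch of the kind of auto-generated, in-source static lookup table I am describing. All names and data are made up; the arrays would be emitted by a code generator along with each 3-5 year data change, and must stay sorted by key so the binary search works:

    public static class GeneratedRateTable
    {
        // Generated code: Keys must remain sorted for Array.BinarySearch.
        private static readonly int[] Keys = { 101, 205, 312 };
        private static readonly decimal[] Values = { 1.05m, 2.10m, 3.75m };

        public static bool TryGetValue(int key, out decimal value)
        {
            // Binary search over the parallel arrays; O(log n), no I/O, no DB.
            int index = System.Array.BinarySearch(Keys, key);
            if (index >= 0)
            {
                value = Values[index];
                return true;
            }
            value = 0m;
            return false;
        }
    }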

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features:

    - New @model keyword in Razor (Oct 19th)
    - Layouts with Razor (Oct 22nd)
    - Server-Side Comments with Razor (Nov 12th)
    - Razor’s @: and <text> syntax (Dec 15th)
    - Implicit and Explicit code nuggets with Razor (today)

    In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walk through some code examples of each of them.

    Fluid Coding with Razor

    ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine). You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages. When run, the above code generates output like below. Notice above how we were able to embed two code nuggets within the content of the foreach loop. One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink. Notice that we didn’t have to explicitly wrap these code nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.

    How Razor Enables Implicit Code Nuggets

    Razor does not define its own language. Instead, the code you write within Razor code nuggets is standard C# or VB. This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write. This makes coding more fluid and productive, and enables a nice, clean, concise template syntax. Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you:

    - Property access: Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation. You can also use “dot” notation to access sub-properties multiple levels deep.
    - Array/collection indexing: Razor allows you to index into collections or arrays.
    - Calling methods: Razor also allows you to invoke methods.

    Notice how for all of the scenarios above we did not have to explicitly end the code nugget. Razor was able to implicitly identify the end of the code block for us.
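    (The code snippets in the original post are images; the following is a rough reconstruction of the kind of implicit code nuggets being described. The model and property names are assumed, not taken from the lost screenshots.)

    <ul>
        @foreach (var product in Model) {
            <li><a href="/Products/@product.ProductID">@product.ProductName</a></li>
        }
    </ul>

    Here @product.ProductName and @product.ProductID are implicit code nuggets: Razor sees the identifier and its “dot” chain, and infers where each nugget ends without any closing delimiter.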
    Razor’s Parsing Algorithm for Code Nuggets

    The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above:

    1. Parse an identifier - as soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2.
    2. Check for brackets - if we see "(" or "[", go to step 2.1; otherwise, go to step 3.
        2.1. Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments).
        2.2. Go back to step 2.
    3. Check for a "." - if we see one, go to step 3.1; otherwise, DO NOT ACCEPT THE "." as code, and go to step 4.
        3.1. If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1; otherwise, go to step 4.
    4. Done!

    Differentiating between code and content

    Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content. Notice how in the snippet above we have ? and ! characters at the end of our code nuggets. These are both legal C# identifiers - but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression, because there is whitespace after them. This is pretty cool and saves us keystrokes.

    Explicit Code Nuggets in Razor

    Razor is smart enough to implicitly identify a lot of code nugget scenarios. But there are still times when you want/need to be more explicit in how you scope the code nugget expression. The @(expression) syntax allows you to do this. You can write any C#/VB code statement you want within the @() syntax. Razor will treat the wrapping () characters as the explicit scope of the code nugget statement. Below are a few scenarios where we could use the explicit code nugget feature:

    - Perform arithmetic calculation/modification: you can perform arithmetic calculations within an explicit code nugget.
    - Appending text to a code expression result: you can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code. In one example we embed a code nugget within an <img> element’s src attribute. It allows us to link to images with URLs like “/Images/Beverages.jpg”. Without the explicit parentheses, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error). By being explicit we can clearly denote where the code ends and the text begins.
    - Using generics and lambdas: explicit expressions also allow us to use generic types and generic methods within code expressions - and enable us to avoid the <> characters in generics from being ambiguous with tag elements.

    One More Thing… Intellisense within Attributes

    We have used code nuggets within HTML attributes in several of the examples above. One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href="" attribute. Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of an <img> src="" attribute. Notice how we are getting full code intellisense for both scenarios - despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).
    This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere.

    Summary

    Razor enables a clean and concise templating syntax that enables a very fluid coding workflow. Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using the @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Getting Started with ASP.NET Membership, Profile and RoleManager

    - by Ben Griswold
    A new ASP.NET MVC project includes preconfigured Membership, Profile and RoleManager providers right out of the box. Try it yourself - create an ASP.NET MVC application, crack open the web.config file and have a look. First, you'll find the ApplicationServices database connection:

    <connectionStrings>
      <add name="ApplicationServices"
           connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
           providerName="System.Data.SqlClient"/>
    </connectionStrings>

    Notice the connection string is referencing the aspnetdb.mdf database hosted by SQL Express, and it's using integrated security, so it'll just work for you without having to call out a specific database login or anything. Scroll down the file a bit and you'll find each of the three noted sections:

    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="ApplicationServices"
             enablePasswordRetrieval="false"
             enablePasswordReset="true"
             requiresQuestionAndAnswer="false"
             requiresUniqueEmail="false"
             passwordFormat="Hashed"
             maxInvalidPasswordAttempts="5"
             minRequiredPasswordLength="6"
             minRequiredNonalphanumericCharacters="0"
             passwordAttemptWindow="10"
             passwordStrengthRegularExpression=""
             applicationName="/" />
      </providers>
    </membership>

    <profile>
      <providers>
        <clear/>
        <add name="AspNetSqlProfileProvider"
             type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="ApplicationServices"
             applicationName="/" />
      </providers>
    </profile>

    <roleManager enabled="false">
      <providers>
        <clear />
        <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
        <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      </providers>
    </roleManager>

    Really. It's all there. Still don't believe me? Run the application, walk through the registration process and finally log in and log out. Completely functional - and you didn't have to do a thing!

    What else? Well, you can manage your users via the Configuration Manager, which is hiding in Visual Studio behind Project > ASP.NET Configuration. The ASP.NET Web Site Administration Tool isn't MVC-specific (neither is the Membership, Profile or RoleManager stuff), but it's neat and I hardly ever see anyone using it. Here you can set up and edit users, roles, and set access permissions for your site. You can manage application settings, establish your SMTP settings, configure debugging and tracing, define the default error page and even take your application offline. The UI is rather plain-Jane, but it works great.

    And here's the best of all. Let's say you, like most of us, don't want to run your application on top of the aspnetdb.mdf database. Let's suppose you want to use your own database and you'd like to add the membership stuff to it. Well, that's easy enough.
    Take a look inside your [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\ folder. Here you'll find a bunch of files. If you were to run the InstallCommon.sql, InstallMembership.sql, InstallRoles.sql and InstallProfile.sql files against the database of your choice, you'd be installing the same membership, profile and role artifacts found in aspnetdb.mdf into your own database. Too much trouble? Okay. Run [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe from the command line instead. This will launch the ASP.NET SQL Server Setup Wizard, which walks you through the installation of those same database objects into the new or existing database of your choice. You may not always have the luxury of using this tool on your destination server, but you should use it whenever you can. Last tip: don't forget to update the ApplicationServices connection string to point to your custom database after the setup is complete.

    At the risk of sounding like a smarty, everything I've mentioned in this post has been around for quite a while. The thing is that not everyone has had the opportunity to use it. And it makes sense. I know I've worked on projects which used custom membership services. Why bother with the out-of-the-box stuff, right? And the .NET framework is so massive, who can know it all? Well, eventually you might have a chance to architect your own solution using any implementation you'd like, or you will have the time to play around with another aspect of the framework. When you do, think back to this post.
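    As a rough example, aspnet_regsql.exe can also be driven non-interactively: something like the following adds the membership (m), role manager (r) and profile (p) schema to a database using Windows authentication. The server and database names here are placeholders, not values from the post:

    aspnet_regsql.exe -S .\SQLEXPRESS -E -d MyAppServices -A mrp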

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is a very invasive one and requires you to overwrite default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach; check Martin's post about subscribers). So let's get started.

    The technique consists of versioning the TFS Web Services .asmx service classes. If you look into TFS's ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let's take the Work Item management service .asmx. Check the following .asmx file located at:

    %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx

    The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

    <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%>
    <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

    The inheritance hierarchy for this service class follows. Note the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the provided validations/constraints of the process template.

    Important: Back up the original .asmx file and create one of your own.
    1. Create a Visual Studio Web App project and include a new ASMX Web Service in the project.

    2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):

    - Microsoft.TeamFoundation.Framework.Server.dll
    - Microsoft.TeamFoundation.Server.dll
    - Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
    - Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
    - Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll

    3. Replace the default service implementation with something similar to the following code:

    /// <summary>
    /// Inherit from ClientService3 to overwrite the default implementation
    /// </summary>
    [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03", Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
    public class CustomTfsClientService : ClientService3
    {
        [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
        public override bool BulkUpdate(
            XmlElement package,
            out XmlElement result,
            MetadataTableHaveEntry[] metadataHave,
            out string dbStamp,
            out Payload metadata)
        {
            var xe = XElement.Parse(package.OuterXml);

            // We only intercept WorkItem updates (we can easily extend this sample to capture any operation).
            var wit = xe.Element("UpdateWorkItem");
            if (wit != null)
            {
                if (wit.Attribute("WorkItemID") != null)
                {
                    int witId = (int)wit.Attribute("WorkItemID");
                    // With this Id I can query TFS for more detailed information, using the TFS Client API (assuming the WIT already exists).
                    var stateChanged =
                        wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                    if (stateChanged != null)
                    {
                        var newStateName = stateChanged.Element("Value").Value;
                        if (newStateName == "Resolved")
                        {
                            throw new Exception("Cannot change state to Resolved!");
                        }
                    }
                }
            }

            // Finally, we call the base method implementation
            return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
        }
    }

    4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don't forget to back it up first).

    5. Copy your project's .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin

    6. Try saving a WorkItem into the Resolved state. Enjoy!
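    For step 4, the rewritten ClientService.asmx ends up as a one-line directive pointing at the custom class. Assuming the class above lives in a namespace called MyCompany.Tfs.Extensions (a made-up name), it would look something like this:

    <%@ webservice language="C#" Class="MyCompany.Tfs.Extensions.CustomTfsClientService" %>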

    Read the article

  • Code Contracts: How they look after compiling?

    - by DigiMortal
    When you use new tools that also make additions at the code level, it is a good idea to check out what is added to your code during compilation. Code contracts have a simple syntax when we write code in Visual Studio, but what happens after compilation? Are our methods the same as they look in code, or are they different after compilation? In this posting I will show you how code contracts look after compiling. In my previous examples about code contracts I used a randomizer class with a method called GetRandomFromRangeContracted:

public int GetRandomFromRangeContracted(int min, int max)
{
    Contract.Requires<ArgumentOutOfRangeException>(
        min < max,
        "Min must be less than max"
    );

    Contract.Ensures(
        Contract.Result<int>() >= min &&
        Contract.Result<int>() <= max,
        "Return value is out of range"
    );

    return _generator.Next(min, max);
}

Okay, it would be nice to find similar code when we open our assembly with Reflector and disassemble it. But… this time we have something more interesting. While reading this code, don't feel uncomfortable about the names of the variables. This is disassembled code; the .NET Framework internally allows these names. It is our compilers that don't accept them when we are building our code.

public int GetRandomFromRangeContracted(int min, int max)
{
    int Contract.Old(min);
    int Contract.Old(max);
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            __ContractsRuntime.insideContractEvaluation++;
            __ContractsRuntime.Requires<ArgumentOutOfRangeException>(
               min < max,
               "Min must be less than max", "min < max");
        }
        finally
        {
            __ContractsRuntime.insideContractEvaluation--;
        }
    }
    try
    {
        Contract.Old(min) = min;
    }
    catch (Exception exception1)
    {
        if (exception1 == null)
        {
            throw;
        }
    }
    try
    {
        Contract.Old(max) = max;
    }
    catch (Exception exception2)
    {
        if (exception2 == null)
        {
            throw;
        }
    }
    int CS$1$0000 = this._generator.Next(min, max);
    int Contract.Result<int>() = CS$1$0000;
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            __ContractsRuntime.insideContractEvaluation++;
            __ContractsRuntime.Ensures(
               (Contract.Result<int>() >= Contract.Old(min)) &&
               (Contract.Result<int>() <= Contract.Old(max)),
               "Return value is out of range",
               "Contract.Result<int>() >= min && Contract.Result<int>() <= max");
        }
        finally
        {
            __ContractsRuntime.insideContractEvaluation--;
        }
    }
    return Contract.Result<int>();
}

As we can see, the contracts are not simply if-then-else checks and exception throwing. We can see that there is a counter that is incremented before the checks and decremented after them, whatever the result of the check was. One thing that is annoying for me is the null checks on exception1 and exception2. Is there really some situation possible where null is thrown instead of an instance that is an Exception or inherits from Exception? Conclusion: code contracts are a more complex mechanism than it seems when we look at them at our code level. Internally, more things are done than we see in the source. I don't say it is wrong; it is just good to know how our code looks after compiling.
Looking at this example, it is clear that we also need performance tests for contracted code, to see how heavy the impact on system performance is when our code makes heavy use of code contracts.
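A minimal starting point for such a performance test could look like the sketch below. Randomizer and GetRandomFromRangeContracted are from the earlier postings; RandomGenerator is an assumed IRandomGenerator implementation, and the name is my own:

using System;
using System.Diagnostics;

class ContractsPerformanceSketch
{
    static void Main()
    {
        // Randomizer and GetRandomFromRangeContracted come from the earlier postings;
        // RandomGenerator is an assumed IRandomGenerator implementation.
        var randomizer = new Randomizer(new RandomGenerator());

        var watch = Stopwatch.StartNew();
        for (var i = 0; i < 1000000; i++)
        {
            randomizer.GetRandomFromRangeContracted(1, 100);
        }
        watch.Stop();

        // Compare this number against the same loop over a contract-free version of the method.
        Console.WriteLine("Contracted: {0} ms", watch.ElapsedMilliseconds);
    }
}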

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 5

    - by nikolaosk
    This is the fifth post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters online. There are four earlier posts in this series; please make sure you read them first. You can find the first post here. You can find the second post here. You can find the third post here. You can find the fourth post here. In this new post we will build on the previous posts and demonstrate how to display the details of a poster when the user clicks on an individual poster photo/link. We will add a FormView control on a web form and bind data from the database. FormView is a great web server control for displaying the details of a single record. 1) Launch Visual Studio and open the solution where your project lives. 2) Add a new web form item to the project. Make sure you include the Master Page. Name it PosterDetails.aspx. 3) Open the PosterDetails.aspx page. We will add some markup to this page. Have a look at the code below:

<asp:Content ID="Content2" ContentPlaceHolderID="FeaturedContent" runat="server">
    <asp:FormView ID="posterDetails" runat="server" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosterDetails">
        <ItemTemplate>
            <div>
                <h1><%#: Item.PosterName %></h1>
            </div>
            <br />
            <table>
                <tr>
                    <td>
                        <img src="<%#: Item.PosterImgpath %>" border="1" alt="<%#: Item.PosterName %>" height="300" />
                    </td>
                    <td style="vertical-align: top">
                        <b>Description:</b><br /><%#: Item.PosterDescription %>
                        <br />
                        <span><b>Price:</b>&nbsp;<%#: String.Format("{0:c}", Item.PosterPrice) %></span>
                        <br />
                        <span><b>Poster Number:</b>&nbsp;<%#: Item.PosterID %></span>
                        <br />
                    </td>
                </tr>
            </table>
        </ItemTemplate>
    </asp:FormView>
</asp:Content>

I set the ItemType property to the Poster entity class and the SelectMethod to the GetPosterDetails method. The Item binding expression is then available and we can retrieve properties of the Poster object. I retrieve the name, the image, the description and the price of each poster. 4) Now we need to write the GetPosterDetails method. In the code-behind of the PosterDetails.aspx page we type:

public IQueryable<Poster> GetPosterDetails([QueryString("PosterID")] int? posterid)
{
    PosterContext ctx = new PosterContext();
    IQueryable<Poster> query = ctx.Posters;
    if (posterid.HasValue && posterid > 0)
    {
        query = query.Where(p => p.PosterID == posterid);
    }
    else
    {
        query = null;
    }
    return query;
}

I bind the value from the query string to the posterid parameter at run time. This is all possible due to the QueryStringAttribute class that lives inside System.Web.ModelBinding and gets the value of the query string variable PosterID. If there is a matching poster, it is fetched from the database. If not, no data at all comes back from the database. 5) I run my application and click on the "Midfielders" link. Then I click on the first poster from the left (Kenny Dalglish) to see its details. Have a look at the picture below to see the results.
You can see that now I have all the details of the poster in a new page. Make sure you place breakpoints in the code so you can see what is really going on. Hope it helps!!!

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 3

    - by nikolaosk
    This is the third post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters online. Make sure you read the first and second posts in the series. In this new post I will keep making some minor changes in the markup, CSS and Master page, but there is no point in presenting them here; they are just minor changes to reflect the content and layout I want my site to have. What I need to do now is to add some more pages and start properly displaying data from my database. Having said that, I will show you how to add more pages to the web application and present data. 1) Launch Visual Studio and open the solution where your project lives. 2) Add a new web form item to the project. Make sure you include the Master Page. Name it PosterList.aspx. Have a look at the picture below. 3) In Site.Master add the following link to the master page so the user can navigate to it. You should only add the Posters line shown below:

<nav>
    <ul id="menu">
        <li><a runat="server" href="~/">Home</a></li>
        <li><a runat="server" href="~/About.aspx">About</a></li>
        <li><a runat="server" href="~/Contact.aspx">Contact</a></li>
        <li><a runat="server" href="~/PosterList.aspx">Posters</a></li>
    </ul>
</nav>

4) Now we need to display categories from the database. We will use a ListView web server control. Inside the <div id="body"> add the following code:

<section id="postercat">
    <asp:ListView ID="categoryList"
        ItemType="PostersOnLine.DAL.PosterCategory"
        runat="server"
        SelectMethod="GetPosterCategories">
        <ItemTemplate>
            <a href="PosterList.aspx?id=<%#: Item.PosterCategoryID %>">
                <%#: Item.PosterCategoryName %>
            </a>
        </ItemTemplate>
        <ItemSeparatorTemplate> ----- </ItemSeparatorTemplate>
    </asp:ListView>
</section>

Let me explain what the code does. The ListView control displays each poster category's name. It also includes a link to the PosterList.aspx page with a query-string value containing the ID of the category. We set the ItemType property in the ListView to the PosterCategory entity and the SelectMethod property to a method called GetPosterCategories. Now we can use the data-binding expression Item (<%#: %>) that is available within the ItemTemplate. 5) Now we must write the GetPosterCategories method. In the Site.Master.cs file add the following code. This is just a simple function that returns the poster categories:

public IQueryable<PosterCategory> GetPosterCategories()
{
    PosterContext ctx = new PosterContext();
    IQueryable<PosterCategory> query = ctx.PosterCategories;
    return query;
}

6) I just changed a few things in the Site.css file to style the new <section> HTML element that includes the ListView control: #postercat { text-align: center; background-color: #85C465; } 7) Build and run your application. Everything should compile now.
Have a look at the picture below. The links (poster categories) appear. The ListView control, when it is called during the page lifecycle, calls the GetPosterCategories() method. The method is executed and returns the poster categories that are bound to the control. When I click on any of the poster category links, the PosterList.aspx page shows up with the appropriate Id, that is the PosterCategoryID. Have a look at the picture below. We will add more data-enabled controls to the PosterList.aspx page in the next post. Some people are complaining that the posts are too long, so I will keep them short. Hope it helps!!!

    Read the article

  • Enabling XML-documentation for code contracts

    - by DigiMortal
    One nice feature that code contracts offer is updating of code documentation. If you are using the source-code documenting features of Visual Studio, then code contracts can automate some tasks you would otherwise have to implement manually. In this posting I will show you some XML documentation files with documented contracts. I will also explain how this feature works. Enabling XML-documentation in project settings: as a first thing, let's enable generation of code documentation under project settings. Open the project properties, move to the Build page and check the checkbox called "XML documentation file". Save the project settings and rebuild the project. When the project is built, go to the bin/Debug folder and open the XML file. Here is my XML:

<?xml version="1.0"?>
<doc>
    <assembly>
        <name>Eneta.Examples.CodeContracts.Testable</name>
    </assembly>
    <members>
        <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">
            <summary>
            Class for generating random integers in user specified range.
            </summary>
        </member>
        <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">
            <summary>
            Constructor of Randomizer. Initializes Randomizer class.
            </summary>
            <param name="generator">Instance of random number generator.</param>
        </member>
        <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">
            <summary>
            Returns random integer in given range.
            </summary>
            <param name="min">Minimum value of random integer.</param>
            <param name="max">Maximum value of random integer.</param>
        </member>
    </members>
</doc>

You can see nothing about code contracts here. Enabling code contracts documentation: code contracts have their own settings and conditions for documentation. Open the project properties and move to the Code Contracts tab. From the "Contract Reference Assembly" dropdown select Build, and check the checkbox "Emit contracts into XML doc file". And again: save the project settings, build the project and move to the bin/Debug folder. Now you can see that there are two files for XML documentation: <assembly name>.XML and <assembly name>.old.XML. The first file is the documentation with contracts; the second file is the original documentation without contracts. Let's now see what is inside our new XML documentation file:

<?xml version="1.0"?>
<doc>
    <assembly>
        <name>Eneta.Examples.CodeContracts.Testable</name>
    </assembly>
    <members>
        <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">
            <summary>
            Class for generating random integers in user specified range.
            </summary>
        </member>
        <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">
            <summary>
            Constructor of Randomizer. Initializes Randomizer class.
            </summary>
            <param name="generator">Instance of random number generator.</param>
        </member>
        <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">
            <summary>
            Returns random integer in given range.
            </summary>
            <param name="min">Minimum value of random integer.</param>
            <param name="max">Maximum value of random integer.</param>
            <requires description="Min must be less than max" exception="T:System.ArgumentOutOfRangeException">min &lt; max</requires>
            <exception cref="T:System.ArgumentOutOfRangeException">min &gt;= max</exception>
            <ensures description="Return value is out of range">Contract.Result&lt;int&gt;() &gt;= min &amp;&amp; Contract.Result&lt;int&gt;() &lt;= max</ensures>
        </member>
    </members>
</doc>

As you can see, the contracts are pretty well documented. The messages that I provided with my code contracts are also available in the documentation. If I wrote very good and informative messages, then these messages are also very useful in the contracts documentation. Code contracts and Sandcastle: Sandcastle knows nothing about code contracts by default. There is a separate package of files for Sandcastle that is provided by the code contracts installation. You can read in the code contracts manual: "Sandcastle (http://www.codeplex.com/Sandcastle) is a freely available tool that generates help files and web sites describing your APIs, based on the XML doc comments in your source code. The CodeContracts install contains a set of files that can be copied over a Sandcastle installation to take advantage of the additional contract information. The produced documentation adds a contract section to methods with declared requires and/or ensures. In order for Sandcastle to produce Contract sections, you need to patch a number of files in its installation. Please refer to the Sandcastle Readme.txt found under Start Menu/CodeContracts/Sandcastle for instructions. A future release of Sandcastle will hopefully support contract sections without the need for this patching step." Integrating code contracts documentation into Sandcastle will be one of my next postings about code contracts. Conclusion: if you are using code documentation, then documentation about code contracts can be added very easily. All you have to do is enable XML documentation for contracts and build your project. Later you can use the Sandcastle files provided by the code contracts installer to integrate contracts documentation into your output documentation package.
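As a footnote, the source behind the contracted member in the XML above is simply the XML-commented version of the contracted method from my previous posting; roughly like this:

/// <summary>
/// Returns random integer in given range.
/// </summary>
/// <param name="min">Minimum value of random integer.</param>
/// <param name="max">Maximum value of random integer.</param>
public int GetRandomFromRangeContracted(int min, int max)
{
    // The requires/ensures below are what the contracts tooling
    // appends to the generated XML documentation.
    Contract.Requires<ArgumentOutOfRangeException>(
        min < max,
        "Min must be less than max");

    Contract.Ensures(
        Contract.Result<int>() >= min &&
        Contract.Result<int>() <= max,
        "Return value is out of range");

    return _generator.Next(min, max);
}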

    Read the article

  • Can't Access TFS 2010 Beta 2 from Visual Studio 2010 Beta 2 when domain joined

    - by Brian Sullivan
    I'm experimenting with an installation of TFS 2010 Beta 2 on a virtual machine under VirtualBox running Windows Server 2008. When I've got the server in a workgroup, I can connect to it from Visual Studio just fine, as long as I provide credentials for a local user on the server machine when prompted by the "Connect to Team Foundation Server" dialog. The desktop I'm running Visual Studio on is joined to a domain. However, when I join the server to the domain, I can no longer connect to it from Visual Studio. I get a pretty generic error message: "TF31002 - Unable to connect to team foundation server". It suggests several different possible problems, including an incorrect address or an incorrect username and password. I've already added the domain Windows identity with which I'm logged on to the desktop to the TFS Admins group on the server, so I don't think it's a username/password problem. I've also tried putting the literal IP address of the server in the dialog's address box instead of the machine name, but still no dice. I made sure that network discovery was enabled on the server, too, and I can navigate to "\\webserver2008" in Windows Explorer without any problems. It shouldn't be a firewall problem, since the TFS install creates the appropriate exceptions in Windows Firewall. It's all a bit confusing, since it seems to work when the server is in a workgroup. Note: I'm a dev, not an admin, so there are many subtleties of server administration with which I'm not familiar. Please make no assumptions about what I may or may not have tried; what may be obvious to you may have never occurred to me. Thanks in advance!
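One way to take Visual Studio itself out of the equation is a tiny console check against the TFS 2010 client API. A sketch follows; the collection URL assumes the default TFS 2010 port and virtual directory, so adjust it to the actual setup:

using System;
using Microsoft.TeamFoundation.Client;

class TfsConnectionTest
{
    static void Main()
    {
        // Default TFS 2010 URL layout is assumed here; adjust to your server.
        var uri = new Uri("http://webserver2008:8080/tfs/DefaultCollection");

        var tfs = new TfsTeamProjectCollection(uri);
        tfs.EnsureAuthenticated(); // throws a more specific exception than TF31002 if auth fails

        Console.WriteLine("Connected as: " + tfs.AuthorizedIdentity.DisplayName);
    }
}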

    Read the article

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots, including: the options being selected (including all tables); the folder where the script files will go; the folder being empty before scripting; the scripting process saying Success when scripting a table; the destination folder no longer empty, with a hundred or so script files; and the scripts of some tables not being in the folder. And earlier SSMS would not script some views. Is it a known issue that the Generate Scripts task does not generate scripts? Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008. Update Two: Some basic questions: 1. What version of SQL Server? Microsoft SQL Server 2000 - 8.00.194 (Intel X86); Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86); Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86); Microsoft SQL Server 2005 Management Studio: 9.00.4035.00; Microsoft SQL Server 2008 Management Studio: 10.0.1600.22. 2. What O/S are you running on? Windows Server 2000, Windows Server 2003, Windows Server 2008. 3. How are you logging in to SQL Server? sa/password, trusted authentication. 4. Have you verified your account has full access to all objects? Yes, I have access to all objects. 5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable) Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine. Update Three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
============   ===============   ===============   ===============
2000           Yes               n/a               n/a
2005           No                No                No
2008           No                No                No

Update Four: No errors found in the database using:

DBCC CHECKDB
go
DBCC CHECKCONSTRAINTS
go
DBCC CHECKFILEGROUP
go
DBCC CHECKIDENT
go
DBCC CHECKCATALOG
go
EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

Honk if you hate SSMS.
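For what it's worth, one possible workaround is to bypass the Generate Scripts wizard and drive the same scripting engine through SMO directly. A minimal sketch, with placeholder server and database names:

using System;
using Microsoft.SqlServer.Management.Smo;

class ScriptAllTables
{
    static void Main()
    {
        // Placeholder names; point these at the instance/database that fails to script.
        Server server = new Server(@"myServer\myInstance");
        Database database = server.Databases["myDatabase"];

        Scripter scripter = new Scripter(server);
        scripter.Options.IncludeIfNotExists = true;

        foreach (Table table in database.Tables)
        {
            // Script each table individually so a single failure is easy to spot.
            foreach (string line in scripter.Script(new SqlSmoObject[] { table }))
            {
                Console.WriteLine(line);
            }
        }
    }
}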

    Read the article

  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 Server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am beyond any further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown. The SQL Server instance is running and works perfectly when working locally. In SQL Server Management Studio, I have checked the box "Allow Remote Connections to this Server". I have removed any external hardware firewall settings from the 1and1 admin panel. Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433. In the SQL Native Client configuration, TCP/IP is enabled. I also made sure the "IP1" entry with the server's IP address had a 0 for dynamic port, but I deleted it and added 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433. In the SQL Native Client configuration, SQL Server Browser is also running, and I also tried adding an ALIAS; I restarted SQL Server after I set this value. Doing a "netstat -ano" on the server machine returns: TCP 0.0.0.0:1433 LISTENING, UDP 0.0.0.0:1434 LISTENING. I do a port scan from my local computer and it says that the port is FILTERED instead of LISTENING. I also tried to connect from Management Studio on my local machine and it throws a connection error. I tried the following server names, with both SQL Server and Windows Authentication marked in the database security: ipaddress\SQLEXPRESS,1433; ipaddress\SQLEXPRESS; ipaddress; ipaddress,1433; tcp:ipaddress\SQLEXPRESS; tcp:ipaddress\SQLEXPRESS,1433.
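To separate networking problems from Management Studio problems, a bare connection test like the sketch below can help; the IP address and credentials are placeholders, and the DataSource mirrors the "ipaddress\SQLEXPRESS,1433" format tried above:

using System;
using System.Data.SqlClient;

class RemoteSqlTest
{
    static void Main()
    {
        // Placeholder address and credentials; substitute your own.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = @"203.0.113.10\SQLEXPRESS,1433",
            UserID = "sa",
            Password = "yourPassword"
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open(); // the exception here is usually more specific than SSMS's dialog
            Console.WriteLine("Server version: " + connection.ServerVersion);
        }
    }
}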

    Read the article

  • Unity3D draw call optimization : static batching VS manually draw mesh with MaterialPropertyBlock

    - by Heisenbug
    I've read the Unity3D draw call batching documentation. I understood it, and I want to use it (or something similar) to optimize my application. My situation is the following: I'm drawing hundreds of 3D buildings. Each building can be represented using a Mesh (or a SubMesh for each building, but I don't think this will affect performance). Each building can be textured with several combinations of texture patterns (walls, windows, ...). Textures are stored in an atlas for optimization (see Texture2D.PackTextures). Texture mapping and facade pattern generation are done in the fragment shader. The shader can be the same (except for a few values) for all buildings, so I'd like to use a sharedMaterial to optimize the parameters passed to the GPU. The main problem is that, even if I use an atlas, share the material, and declare the objects as static to use static batching, there are a few parameters (very few; it could even be just a single float, I guess) that must be different for every draw call. I don't know exactly how to manage this situation in Unity3D. I'm trying two different solutions, neither of them completely implemented. Solution 1: 1. Build a GameObject for each building (I don't much like the overhead of a GameObject, anyway...). 2. Prepare each GameObject to be static batched with StaticBatchingUtility.Combine. 3. Pack all textures into an atlas. 4. Assign the parent GameObject of the combined batched objects the Material (basically the shader and the atlas). 5. Change some properties in the material before drawing an object. The problem is point 5. Let's say I have to assign a different id to an object before drawing it; how can I do this? If I use a different material for each object, I can't benefit from static batching. If I use a sharedMaterial and I modify a material property, all GameObjects will reference the same modified variable. Solution 2: 1. Build a Mesh for every building (sounds better, no GameObject overhead). 2. Pack all textures into an atlas. 3. Draw each mesh manually using Graphics.DrawMesh. 4. Customize each DrawMesh call using a MaterialPropertyBlock. This would solve the issue of slightly modifying material properties for each draw call (see the sketch below), but the documentation isn't clear on the following point: would several consecutive calls to Graphics.DrawMesh with different MaterialPropertyBlocks cause a new material to be instanced? Or can Unity understand that I'm modifying just a few parameters while using the same material, and optimize that (in such a way that the big atlas is passed just once to the GPU)?
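To make Solution 2 concrete, here is a minimal sketch of the draw loop I have in mind; the Building type and the _BuildingId shader property are my own assumptions:

using System.Collections.Generic;
using UnityEngine;

public class Building
{
    public Mesh Mesh;
    public Matrix4x4 LocalToWorld;
    public float Id;
}

public class BuildingRenderer : MonoBehaviour
{
    public Material sharedMaterial;   // the single atlas material
    public List<Building> buildings;  // populated elsewhere

    private MaterialPropertyBlock _props;

    void Update()
    {
        if (_props == null)
        {
            _props = new MaterialPropertyBlock();
        }

        foreach (Building building in buildings)
        {
            // Reuse one block; only the per-building value changes.
            _props.Clear();
            _props.SetFloat("_BuildingId", building.Id);

            // Same material instance for every call; the block carries the override.
            Graphics.DrawMesh(building.Mesh, building.LocalToWorld, sharedMaterial,
                              0, null, 0, _props);
        }
    }
}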

    Read the article

  • iPhone 4 vs iPhone 3GS Comparison – Graphical Chart

    - by Gopinath
    600,000 people pre-ordered the iPhone 4 on a single day, and this rush of fanboys took both Apple's and AT&T's web servers down for many hours. If you wonder why so many people are rushing to get the iPhone 4, here is a chart that explains the differences between the iPhone 3GS and the iPhone 4. As Steve Jobs said at WWDC 2010, the iPhone 4 is definitely going to change the smartphone game all over again.

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store! Now, some people get all excited about the IDE and its new features or about changes to WPF and Silverlight, and yes, those are all very fine and grand. But me, I get all excited about things that tend to affect my life on the backside of development. That's why when I heard there were going to be concurrent container implementations in the latest version of .NET, I was salivating like Pavlov's dog at the dinner bell. They seem so simple, really, that one could easily overlook them. Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing. Of course, they also rolled out a whole parallel development framework, which I won't get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism. So I set about to run a processing test with a series of producers and consumers that would be processing either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer. The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we're testing in a thread-safe manner:

internal class Producer
{
    public int Iterations { get; set; }
    public Action<string> ProduceDelegate { get; set; }

    public void Produce()
    {
        for (int i = 0; i < Iterations; i++)
        {
            ProduceDelegate("Hello");
        }
    }
}

Then likewise, I created a consumer that takes a Func<string> that reads from whichever queue we're testing and returns either the string if data exists or null if not. If the item doesn't exist, the consumer does a 10 ms wait before testing again. Once all the producers are done and join the main thread, a flag is set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

internal class Consumer
{
    public Func<string> ConsumeDelegate { get; set; }
    public bool HaltWhenEmpty { get; set; }

    public void Consume()
    {
        bool processing = true;

        while (processing)
        {
            string result = ConsumeDelegate();

            if (result == null)
            {
                if (HaltWhenEmpty)
                {
                    processing = false;
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromMilliseconds(10));
                }
            }
            else
            {
                DoWork(); // do something non-trivial so consumers lag behind a bit
            }
        }
    }
}

Okay, now that we've done that, we can launch varying numbers of threads using lambdas for each different method of production/consumption. First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

// lambda for putting to typical Queue with locking...
Action<string> productionDelegate = s =>
{
    lock (_mutex)
    {
        _mutexQueue.Enqueue(s);
    }
};

// and lambda for typical getting from Queue with locking...
Func<string> consumptionDelegate = () =>
{
    lock (_mutex)
    {
        if (_mutexQueue.Count > 0)
        {
            return _mutexQueue.Dequeue();
        }
    }
    return null;
};

Nothing new or interesting here, just typical locks on an internal object instance. Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

// lambda for putting to a ConcurrentQueue, notice it needs no locking!
Action<string> productionDelegate = s =>
{
    _concurrentQueue.Enqueue(s);
};

// lambda for getting from a ConcurrentQueue, once again, no locking required.
Func<string> consumptionDelegate = () =>
{
    string s;
    return _concurrentQueue.TryDequeue(out s) ? s : null;
};

So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results. Basically I'm timing from the moment all threads start producing/consuming to the moment they all rejoin. I won't bore you with the test code; basically it just creates the producers and consumers, launches them in their own threads, then waits for them all to rejoin. The producers create 10,000,000 items evenly between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds for the ordinary Queue with locking:

Consumers   Producers        1        2        3   Time (ms)
---------   ---------   ------   ------   ------   ---------
        1           1     4284     5153     4226     4554.33
       10          10     4044     3831     5010     4295.00
      100         100     5497     5378     5612     5495.67
     1000        1000    24234    25409    27160    25601.00

And the following are the results in milliseconds for the ConcurrentQueue with no locking necessary:

Consumers   Producers        1        2        3   Time (ms)
---------   ---------   ------   ------   ------   ---------
        1           1     3647     3643     3718     3669.33
       10          10     2311     2136     2142     2196.33
      100         100     2480     2416     2190     2362.00
     1000        1000     7289     6897     7061     7082.33

Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections; they look so much simpler without littering your code with locking logic, and they perform much better. All in all, a great new toy to add to your arsenal of multi-threaded processing!
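As a side note, if you'd rather not hand-roll the sleep/poll loop in the consumer, .NET 4.0 also ships BlockingCollection<T>, which can wrap a ConcurrentQueue and block until data arrives. A small sketch (not part of the timing comparison above):

using System;
using System.Collections.Concurrent;

class BlockingQueueSketch
{
    static void Main()
    {
        // BlockingCollection wraps a ConcurrentQueue by default; made explicit here.
        var queue = new BlockingCollection<string>(new ConcurrentQueue<string>());

        // Producer side:
        queue.Add("Hello");
        queue.CompleteAdding(); // signal consumers once all producers are finished

        // Consumer side: blocks while empty, ends when CompleteAdding has been called.
        foreach (string item in queue.GetConsumingEnumerable())
        {
            Console.WriteLine(item);
        }
    }
}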

    Read the article

  • April 30th Links: ASP.NET, ASP.NET MVC, Visual Studio 2010

    Here is the latest in my link-listing series. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET Data Web Control Enhancements in ASP.NET 4.0: Scott Mitchell has a good article that summarizes some of the nice improvements coming to the ASP.NET 4 data controls. Refreshing an ASP.NET AJAX UpdatePanel with JavaScript: Scott Mitchell has another nice article in his series...

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a doubt: I'm writing a cross-platform OpenGL C++ engine, and I figured out that Windows forces developers to access OpenGL features above 1.1 through the extension mechanism. Now the thing is, on Linux, I know that I can access functions directly through glext.h if the OpenGL version supports them. The problem is: if the core on Linux doesn't support some functionality, is it possible that an extension supports the same functionality, in my case vertex buffer objects? I'm doing something like this on Windows: #define glFunction functionpointer_to_the_extension. On Linux, since glext.h already defines glFunction, I can write glFunction in client code and compile it both on Windows AND Linux without changing a single line in the client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (e.g., VBOs)? Or is an extension something you never know is available? I want to write an engine that exploits all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version to decide how functionality is implemented.

    Read the article

  • OpenGL ES 2/3 vs OpenGL 3 (and 4)

    - by Martin Perry
    I have migrated my code from OpenGL ES 2/3 to OpenGL 3 (I added a bunch of defines and abstract classes to encapsulate both versions, so I have both in one project and compile only one or the other). All I needed to change was the context initialization and glClearDepth. I don't have any errors. This was kind of strange to me. Even the shaders are working correctly (some of them are GL ES 3, with #version 300 es in their header). Is this a good solution, or should I rewrite something more before I start adding further functionality like geometry shaders, performance tools, etc.?

    Read the article

  • Multi-tenancy - single database vs multiple database

    - by RichardW1001
    We have a number of clients whose systems share some functionality, but also have quite a degree of diversity. The number of clients is growing - always a healthy thing! - and the diversity between their businesses is also increasing. At present there is a single ASP.Net (Web Forms) Web Site (as opposed to a web project), which has sub-folders for each tenant, with that tenant's non-standard pages. There is a separate model project, which deals with database access and business logic. Which is preferable, and most importantly why: (a) one database per client, with only the features associated with that client; or (b) a single database shared by all clients, where only a subset of tables is used by any one client? The main concerns within the business are: maintenance of multiple assets (backups, version control and the like), and promoting re-use as much as possible. How would you ensure these concerns are addressed, which solution is preferable, and why? (I have also been compiling responses to similar questions.)

    Read the article

< Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >