Search Results


  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.NET MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to record whether each item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template, one that is not very visible and which we don't otherwise use, to hold a small set of custom codes which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.)

    Now TFS Azure uses your Live ID, and it is not really possible to do this easily in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. The bottom line is that it is neither straightforward nor obvious what you have to do. In fact, it is a real pain to find out, and there are some answers out there which don't appear to be answers at all, given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    1. Get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.

    2. In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0": Microsoft.TeamFoundation.Client.dll, Microsoft.TeamFoundation.Common.dll, Microsoft.TeamFoundation.VersionControl.Client.dll, Microsoft.TeamFoundation.VersionControl.Common.dll, Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll, Microsoft.TeamFoundation.WorkItemTracking.Client.dll, Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    3. If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.

    4. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to the <appSettings> element of your web.config: <appSettings> <!-- Add reference to TFS Client Cache --> <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" /> </appSettings>

    With all that in place, you can write the following code: var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-account-password}"); var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token); var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds); currentCollection.EnsureAuthenticated(); In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account username and password that were generated in the first step are substituted in here. Note: if something is not right, the EnsureAuthenticated() call will throw an exception with a message saying you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised; that is, a similar but different exception message. And that is it. You can then query the collection using something like: var service = currentCollection.GetService<WorkItemStore>(); var proj = service.Projects[0]; var allQueries = proj.StoredQueries; for (int qcnt = 0; qcnt < allQueries.Count; qcnt++) { var query = allQueries[qcnt]; var queryDesc = string.Format("Query found named: {0}", query.Name); } You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method, and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boatload of time and frustration.
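    For reference, here is the same flow pulled together as a minimal, self-contained sketch. Hedged: the domain and credentials are placeholders you must substitute, and the API shapes simply follow the article's own snippets against the VS 2012 TFS client assemblies.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class TfsQuerySample
        {
            static void Main()
            {
                // Service account generated by the TFS Service Credential Viewer (placeholders).
                var token = new SimpleWebTokenCredential(
                    "{your-service-account-name}", "{your-service-account-password}");
                var clientCreds = new TfsClientCredentials(token);

                // Note the "defaultcollection" suffix; omitting it fails with a different
                // "not authorised" message than a bad credential does.
                var collection = new TfsTeamProjectCollection(
                    new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
                collection.EnsureAuthenticated();

                var store = collection.GetService<WorkItemStore>();
                var project = store.Projects[0];
                foreach (StoredQuery query in project.StoredQueries)
                {
                    Console.WriteLine("Query found named: {0}", query.Name);
                }
            }
        }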

    Read the article

  • Extended FindWindow

    - by João Angelo
    The Win32 API provides the FindWindow function that supports finding top-level windows by their class name and/or title. However, the title search does not work if you are trying to match partial text in the middle or at the end of the full window title. You can, however, implement support for these extended search features by using other Win32 APIs such as EnumWindows and GetWindowText. A possible implementation follows: using System; using System.Collections.Generic; using System.Linq; using System.Runtime.InteropServices; using System.Text; public class WindowInfo { private IntPtr handle; private string className; internal WindowInfo(IntPtr handle, string title) { if (handle == IntPtr.Zero) throw new ArgumentException("Invalid handle.", "handle"); this.Handle = handle; this.Title = title ?? string.Empty; } public string Title { get; private set; } public string ClassName { get { if (className == null) { className = GetWindowClassNameByHandle(this.Handle); } return className; } } public IntPtr Handle { get { if (!NativeMethods.IsWindow(this.handle)) throw new InvalidOperationException("The handle is no longer valid."); return this.handle; } private set { this.handle = value; } } public static WindowInfo[] EnumerateWindows() { var windows = new List<WindowInfo>(); NativeMethods.EnumWindowsProcessor processor = (hwnd, lParam) => { windows.Add(new WindowInfo(hwnd, GetWindowTextByHandle(hwnd))); return true; }; bool succeeded = NativeMethods.EnumWindows(processor, IntPtr.Zero); if (!succeeded) return new WindowInfo[] { }; return windows.ToArray(); } public static WindowInfo FindWindow(Predicate<WindowInfo> predicate) { WindowInfo target = null; NativeMethods.EnumWindowsProcessor processor = (hwnd, lParam) => { var current = new WindowInfo(hwnd, GetWindowTextByHandle(hwnd)); if (predicate(current)) { target = current; return false; } return true; }; NativeMethods.EnumWindows(processor, IntPtr.Zero); return target; } private static string GetWindowTextByHandle(IntPtr handle) { if (handle == IntPtr.Zero) throw new ArgumentException("Invalid handle.", "handle"); int length = NativeMethods.GetWindowTextLength(handle); if (length == 0) return string.Empty; var buffer = new StringBuilder(length + 1); NativeMethods.GetWindowText(handle, buffer, buffer.Capacity); return buffer.ToString(); } private static string GetWindowClassNameByHandle(IntPtr handle) { if (handle == IntPtr.Zero) throw new ArgumentException("Invalid handle.", "handle"); const int WindowClassNameMaxLength = 256; var buffer = new StringBuilder(WindowClassNameMaxLength); NativeMethods.GetClassName(handle, buffer, buffer.Capacity); return buffer.ToString(); } } internal class NativeMethods { public delegate bool EnumWindowsProcessor(IntPtr hwnd, IntPtr lParam); [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool EnumWindows( EnumWindowsProcessor lpEnumFunc, IntPtr lParam); [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)] public static extern int GetWindowText( IntPtr hWnd, StringBuilder lpString, int nMaxCount); [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)] public static extern int GetWindowTextLength(IntPtr hWnd); [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)] public static extern int GetClassName( IntPtr hWnd, StringBuilder lpClassName, int nMaxCount); [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool IsWindow(IntPtr hWnd); } Access to the window handle is preceded by a sanity check to assert that it is still valid; but if you are dealing with windows out of your control, the window can be destroyed right after the check, so a valid handle is not guaranteed. Finally, to wrap this up, a usage example: static void Main(string[] args) { var w = WindowInfo.FindWindow(wi => wi.Title.Contains("Test.docx")); if (w != null) { Console.Write(w.Title); } }
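    As a further usage sketch (assuming the WindowInfo class above compiles as-is; Notepad is just an illustrative target), the predicate form also covers searches the raw Win32 FindWindow cannot express, such as matching a title suffix or searching by class name while also inspecting the title:

        using System;

        class FindWindowSamples
        {
            static void Main()
            {
                // Match a window whose title ends with a known suffix.
                var bySuffix = WindowInfo.FindWindow(
                    wi => wi.Title.EndsWith("- Notepad", StringComparison.Ordinal));

                // Match by window class name instead of title.
                var byClass = WindowInfo.FindWindow(wi => wi.ClassName == "Notepad");

                if (bySuffix != null) Console.WriteLine(bySuffix.Title);
                if (byClass != null) Console.WriteLine(byClass.ClassName);
            }
        }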

    Read the article

  • New free DotNetNuke 7.0 Skin

    - by Chris Hammond
    With the pending release of DotNetNuke 7, scheduled for this week, I updated my free DotNetNuke (DNN) skin, MultiFunction, to v1.3. This latest release requires DotNetNuke 7; it shouldn't install on an earlier version of DNN. The release updates a number of the CSS classes for DNN 7 specific styles and objects. Overall the design of the skin doesn't really change much; this release mainly cleans up CSS. I also updated to the 3.0 version of the Orangebox jQuery plugin, you can find the code...(read more)

    Read the article

  • Best choice of GUI platform/framework for 3D development [closed]

    - by Miguel P
    The title pretty much says it all: I'm developing a 3D engine for DirectX 11, and it's going well so far. I started out using .NET Forms as a GUI, but then, out of curiosity, I jumped to MFC, and the app looked great. From my perspective, though, MFC is badly written and too complicated, meaning that some things just took forever when they would have taken a few seconds in .NET Forms. But my real question is: if I want to make an editor for a 3D scene, where DirectX renders inside the form (I accomplished this in .NET Forms), what would be the best choice of GUI platform/framework? MFC, .NET Forms, Qt, etc.

    Read the article

  • HTG Explains: Should You Buy Extended Warranties?

    - by Chris Hoffman
    Buy something at an electronics store and you'll be confronted by a pushy salesperson who insists you need an extended warranty. You'll also see extended warranties pushed hard when shopping online. But are they worth it? There's a reason stores push extended warranties so hard. They're almost always pure profit for the store involved. An electronics store may live on razor-thin product margins and make big profits on extended warranties and overpriced HDMI cables.

    You're Already Getting Multiple Warranties

    First, back up. The product you're buying already includes a warranty. In fact, you're probably getting several different types of warranties.

    Store Return and Exchange: Most electronics stores allow you to return a malfunctioning product within the first 15 or 30 days, and they'll provide you with a new one. The exact period of time will vary from store to store. If you walk out of the store with a defective product and have to swap it for a new one within the first few weeks, this should be easy.

    Manufacturer Warranty: A device's manufacturer, whether the device is a laptop, a television, or a graphics card, offers their own warranty period. The manufacturer warranty covers you after the store refuses to take the product back and exchange it. The length of this warranty depends on the type of product. For example, a cheap laptop may only offer a one-year manufacturer warranty, while a more expensive laptop may offer a two-year warranty.

    Credit Card Warranty Extension: Many credit cards offer free extended warranties on products you buy with that credit card. Credit card companies will often give you an additional year of warranty. For example, if you buy a laptop with a two year warranty and it fails in the third year, you could then contact your credit card company and they'd cover the cost of fixing or replacing it. Check your credit card's benefits and fine print for more information.

    Why Extended Warranties Are Bad

    You're already getting a fairly long warranty period, especially if you have a credit card that offers you a free extended warranty; these are fairly common. If the product you get is a "lemon" and has a manufacturing error, it will likely fail pretty soon, well within your warranty period. The extended warranty matters after all your other warranties are exhausted. In the case of a laptop with a two-year warranty that you purchase with a credit card giving you a one-year warranty extension, your extended warranty will kick in three years after you purchase the laptop. In that many years, your current laptop will likely feel pretty old, and laptops that are as good or better will likely be pretty cheap. If it's a television, better television displays will be available at a lower price point. You'll either want to upgrade to a newer model or you'll be able to buy a new, just-as-good product for very cheap. You'll only have to pay out-of-pocket if your device fails after the normal warranty period, which is more than two or three years out for typical laptops purchased with a decent credit card. Save the money you would have spent on the warranty and put it towards a future upgrade.

    How Much Do Extended Warranties Cost?

    Let's look at an example from a typical pushy retail outlet, Best Buy. We went to Best Buy's website and found a pretty standard $600 Samsung laptop. This laptop comes with a one-year warranty period. If purchased with a fairly common credit card, you can easily get a two-year warranty period on this laptop without spending an additional penny. (Yes, such credit cards are available with no yearly fees.)

    During the check-out process, Best Buy tries to sell you a Geek Squad "Accidental Protection Plan." To get an additional year of Best Buy's extended warranty, you'd have to pay $324.98 for a "3-Year Accidental Protection Plan." You'd basically be paying more than half the price of your laptop for an additional year of warranty; remember, the standard warranties would cover you anyway for the first two years. If this laptop did break sometime between two and three years from now, we wouldn't be surprised if you could purchase a comparable laptop for about $325 anyway. And, if you don't need to replace it, you've saved that money.

    Best Buy would object that this isn't a standard extended warranty. It's a supercharged warranty plan that will also provide coverage if you spill something on your laptop or drop it and break it. You just have to ask yourself a question: what are the odds that you'll drop your laptop or spill something on it? They're probably pretty low if you're a typical human being. Is it worth spending more than half the price of the laptop just in case you'll make an uncommon mistake? Probably not.

    There may be occasional exceptions to this (some Apple users swear by Apple's AppleCare, for example), but you should generally avoid buying these things. There's a reason stores are so pushy about extended warranties, and it's not because they want to help protect you. It's because they're making lots of profit from these plans, and they're making so much profit because they're not a good deal for customers.

    Image Credit: Philip Taylor on Flickr

    Read the article

  • Anti-Forgery Request Helpers for ASP.NET MVC and jQuery AJAX

    - by Dixin
    Background

    To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to the cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent in the HTTP request; the server validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token and writes it inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes it into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered.

    Specify validation on controller (not on each action)

    The server side problem is that it would be natural to declare [ValidateAntiForgeryToken] on the controller, but it actually has to be declared on each POST action. Because POST actions usually far outnumber controllers, this is a little crazy.

    Problem

    Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually validation is expected only for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the browser sends an HTTP GET request by clicking a link like http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions.

    Solution

    To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute, which would be easy to implement.

    Submit token via AJAX

    The browser side problem is that if the server side turns on anti-forgery validation for POST, then AJAX POST requests will fail by default.

    Problem

    For AJAX scenarios, when the request is sent by jQuery instead of a form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); this kind of AJAX POST request will always be invalid, because the server side code cannot see the token in the posted data.

    Solution

    The tokens are printed to the browser and then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called somewhere. Now the browser has the token in both HTML and cookie. Then jQuery must find the printed token in the HTML, and append the token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> in the specified window. // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most scenarios, it is OK to just replace a $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be some scenarios with a custom token, where $.appendAntiForgeryToken() is provided:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here, the token's container window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have a better solution, please do tell me.
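    To close the loop on the server side, here is a hedged sketch of a controller that would accept the AJAX post above, using the wrapper attribute from the first section. ProductController and Create are illustrative names, not from the original post:

        [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
        public class ProductController : Controller
        {
            [HttpPost]
            public ActionResult Create(string productName, int categoryId)
            {
                // The wrapper has already validated __RequestVerificationToken
                // (appended by $.postAntiForgery) before this action runs.
                return Json(new { saved = productName, category = categoryId });
            }
        }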

    Read the article

  • General Purpose ASP.NET Data Source Control

    - by Ricardo Peres
    OK, you already know about the ObjectDataSource control, so what's wrong with it? Well, for one, it doesn't pass any context to the SelectMethod; you only get the parameters supplied in the SelectParameters plus the desired ordering, starting page and maximum number of rows to display. Also, you must have two separate methods, one for actually retrieving the data, and the other for getting the total number of records (SelectCountMethod). Finally, you don't get a chance to alter the supplied data before you bind it to the target control. I wanted something simple to use, and more similar to ASP.NET 4.5, where you can have the select method on the page itself, so I came up with CustomDataSource. Here's how to use it (I chose a GridView, but it works equally well with any regular data-bound control): <web:CustomDataSourceControl runat="server" ID="datasource" PageSize="10" OnData="OnData" /> <asp:GridView runat="server" ID="grid" DataSourceID="datasource" DataKeyNames="Id" PageSize="10" AllowPaging="true" AllowSorting="true" /> The OnData event handler receives a DataEventArgs instance, which contains some properties that describe the desired paging location and size, and it's where you return the data plus the total record count. Here's a quick example: protected void OnData(object sender, DataEventArgs e) { //just return some data var data = Enumerable.Range(e.StartRowIndex, e.PageSize).Select(x => new { Id = x, Value = x.ToString(), IsPair = ((x % 2) == 0) }); e.Data = data; //the total number of records e.TotalRowCount = 100; } Here's the code for the DataEventArgs: [Serializable] public class DataEventArgs : EventArgs { public DataEventArgs(Int32 pageSize, Int32 startRowIndex, String sortExpression, IOrderedDictionary parameters) { this.PageSize = pageSize; this.StartRowIndex = startRowIndex; this.SortExpression = sortExpression; this.Parameters = parameters; } public IEnumerable Data { get; set; } public IOrderedDictionary Parameters { get; private set; } public String SortExpression { get; private set; } public Int32 StartRowIndex { get; private set; } public Int32 PageSize { get; private set; } public Int32 TotalRowCount { get; set; } } As you can guess, StartRowIndex and PageSize receive the starting row and the desired page size, where the page size comes from the PageSize property in the markup. There's also a SortExpression, which gets passed the sorted-by column and direction (if descending), and a dictionary containing all the values coming from the SelectParameters collection, if any. All of these are read only, and it is your responsibility to fill in the Data and TotalRowCount.
    The code for the CustomDataSourceControl is very simple: [NonVisualControl] public class CustomDataSourceControl : DataSourceControl { public CustomDataSourceControl() { this.SelectParameters = new ParameterCollection(); } protected override DataSourceView GetView(String viewName) { return (new CustomDataSourceView(this, viewName)); } internal void GetData(DataEventArgs args) { this.OnData(args); } protected virtual void OnData(DataEventArgs args) { EventHandler<DataEventArgs> data = this.Data; if (data != null) { data(this, args); } } [Browsable(false)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)] [PersistenceMode(PersistenceMode.InnerProperty)] public ParameterCollection SelectParameters { get; private set; } public event EventHandler<DataEventArgs> Data; public Int32 PageSize { get; set; } } Also, here is the code for the accompanying data source view, which is internal since there is no need to use it from outside its declaring assembly: sealed class CustomDataSourceView : DataSourceView { private readonly CustomDataSourceControl dataSourceControl = null; public CustomDataSourceView(CustomDataSourceControl dataSourceControl, String viewName) : base(dataSourceControl, viewName) { this.dataSourceControl = dataSourceControl; } public override Boolean CanPage { get { return (true); } } public override Boolean CanRetrieveTotalRowCount { get { return (true); } } public override Boolean CanSort { get { return (true); } } protected override IEnumerable ExecuteSelect(DataSourceSelectArguments arguments) { IOrderedDictionary parameters = this.dataSourceControl.SelectParameters.GetValues(HttpContext.Current, this.dataSourceControl); DataEventArgs args = new DataEventArgs(this.dataSourceControl.PageSize, arguments.StartRowIndex, arguments.SortExpression, parameters); this.dataSourceControl.GetData(args); arguments.TotalRowCount = args.TotalRowCount; arguments.MaximumRows = this.dataSourceControl.PageSize; arguments.AddSupportedCapabilities(DataSourceCapabilities.Page | DataSourceCapabilities.Sort | DataSourceCapabilities.RetrieveTotalRowCount); arguments.RetrieveTotalRowCount = true; if (!(args.Data is ICollection)) { return (args.Data.OfType<Object>().ToList()); } else { return (args.Data); } } } As always, looking forward to hearing from you!
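    One thing the quick example above glosses over is honoring SortExpression together with the paging values. Here is a hedged sketch of a fuller OnData handler; the anonymous Id/Value shape is the same made-up data as in the example:

        protected void OnData(object sender, DataEventArgs e)
        {
            // Pretend this is the full result set coming from a database.
            var all = Enumerable.Range(0, 100)
                .Select(x => new { Id = x, Value = x.ToString() });

            // SortExpression arrives as "Id" or "Id DESC" when the user clicks a column.
            bool descending = !string.IsNullOrEmpty(e.SortExpression)
                && e.SortExpression.EndsWith(" DESC");
            if (!string.IsNullOrEmpty(e.SortExpression) && e.SortExpression.StartsWith("Id"))
            {
                all = descending ? all.OrderByDescending(x => x.Id) : all.OrderBy(x => x.Id);
            }

            // Return just the requested page, and the total count separately.
            e.Data = all.Skip(e.StartRowIndex).Take(e.PageSize);
            e.TotalRowCount = 100;
        }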

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities. For example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs: http://mysite.com/Page1/ http://mysite.com/Page1/SubPage/ http://mysite.com/Page/ChildPage/GrandChildPage/ You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I would also like the ability to map a single Page to the / (root) URL. I would like to apply these rules: If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen. If a URL can be handled by the virtual path provider, let that handle it. If there is no other handler, map the URL to the PageController class. I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions: Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table (see the sketch below). Or I could add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could deal with the first two rules then, since this handler would get to handle everything. So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
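    For illustration, a minimal sketch of that first option, hedged: it assumes a Page type with Name and Children, a PageController with a Show action taking the full path, and that it is called after the standard routes are registered so the first rule still wins:

        // Registers one exact-match route per page, walking the tree recursively.
        // Call this at the end of RegisterRoutes so earlier routes take precedence.
        static void RegisterPageRoutes(RouteCollection routes, IEnumerable<Page> pages, string prefix)
        {
            foreach (var page in pages)
            {
                string url = string.IsNullOrEmpty(prefix) ? page.Name : prefix + "/" + page.Name;
                routes.MapRoute(
                    "Page_" + url,                                         // unique route name
                    url,                                                   // e.g. "Page1/SubPage"
                    new { controller = "Page", action = "Show", path = url });
                RegisterPageRoutes(routes, page.Children, url);            // recurse into children
            }
        }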

    Read the article

  • Entity Framework DateTime update extremely slow

    - by Phyxion
    I have this situation currently with Entity Framework: using (TestEntities dataContext = DataContext) { UserSession session = dataContext.UserSessions.FirstOrDefault(userSession => userSession.Id == SessionId); if (session != null) { session.LastAvailableDate = DateTime.Now; dataContext.SaveChanges(); } } This is all working perfectly, except for the fact that it is terribly slow compared to what I expect (14 calls per second, tested with 100 iterations). When I update this record manually through this command: dataContext.Database.ExecuteSqlCommand(String.Format("update UserSession set LastAvailableDate = '{0}' where Id = '{1}'", DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fffffff"), SessionId)); I get 55 calls per second, which is more than fast enough. However, when I don't update session.LastAvailableDate but instead update an integer (e.g. session.UserId) or a string with Entity Framework, I get 50 calls per second, which is also more than fast enough. Only the datetime field is terribly slow. The difference of a factor of 4 is unacceptable, and I was wondering how I can improve this, as I'd rather not use direct SQL when I can also use the Entity Framework. I'm using Entity Framework 4.3.1 (also tried 4.1).
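    (As an aside: if the raw-SQL workaround stays in, a parameterized form avoids both the string formatting and any injection risk. A sketch against the same model:)

        dataContext.Database.ExecuteSqlCommand(
            "update UserSession set LastAvailableDate = {0} where Id = {1}",
            DateTime.Now, SessionId);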

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed: Parallel.Invoke( () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); } ); Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.  
For example, let’s look at this simple, sequential implementation of quicksort: public static void QuickSort<T>(T[] array) where T : IComparable<T> { QuickSortInternal(array, 0, array.Length - 1); } private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T> { if (left >= right) { return; } SwapElements(array, left, (left + right) / 2); int last = left; for (int current = left + 1; current <= right; ++current) { if (array[current].CompareTo(array[left]) < 0) { ++last; SwapElements(array, last, current); } } SwapElements(array, left, last); QuickSortInternal(array, left, last - 1); QuickSortInternal(array, last + 1, right); } static void SwapElements<T>(T[] array, int i, int j) { T temp = array[i]; array[i] = array[j]; array[j] = temp; } Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to: // Code above is unchanged... SwapElements(array, left, last); Parallel.Invoke( () => QuickSortInternal(array, left, last - 1), () => QuickSortInternal(array, last + 1, right) ); } This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke. 
    The second approach is to track how “deep” in the “tree” we currently are, and if we are below some number of levels, stop parallelizing.  This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to: // Code above is unchanged... SwapElements(array, left, last); if (maxDepth < 1) { QuickSortInternal(array, left, last - 1, maxDepth); QuickSortInternal(array, last + 1, right, maxDepth); } else { --maxDepth; Parallel.Invoke( () => QuickSortInternal(array, left, last - 1, maxDepth), () => QuickSortInternal(array, last + 1, right, maxDepth)); } We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better.  On average, I get the following timings:

    Framework via Array.Sort: 7.3 seconds
    Serial Quicksort Implementation: 9.3 seconds
    Naive Parallel Implementation: 14 seconds
    Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.
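    One loose end in the snippets above: the public QuickSort<T> entry point still has the old signature, so it needs to seed maxDepth. A minimal sketch of the updated wrapper, using the starting depth the article itself suggests:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            // Seed the recursion depth; deeper recursive calls fall back to the serial path.
            QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
        }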

    Read the article

  • Anti-Forgery Request Recipes For ASP.NET MVC And AJAX

    - by Dixin
    Background

    To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to the cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent in the HTTP request; the server validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token and writes it inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes it into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered.

    Specify validation on controller (not on each action)

    The server side problem is that it would be natural to declare [ValidateAntiForgeryToken] on the controller, but it actually has to be declared on each POST action. Because POST actions usually far outnumber controllers, the work would be a little crazy.

    Problem

    Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually validation is expected only for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the browser sends an HTTP GET request by clicking a link like http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions.

    Solution

    To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute, which would be easy to implement.

    Specify a non-constant salt at runtime

    By default, the salt must be a compile time constant, so that it can be used with the [ValidateAntiForgeryToken] or [ValidateAntiForgeryTokenWrapper] attribute.

    Problem

    One Web product might be sold to many clients. If a constant salt is evaluated at compile time, then after the product is built and deployed to many clients, they all have the same salt. Of course, clients do not like this. Some clients might even want to specify a custom salt in configuration. In these scenarios, the salt is required to be a runtime value.

    Solution

    In the above [ValidateAntiForgeryToken] and [ValidateAntiForgeryTokenWrapper] attributes, the salt is passed through the constructor. So one solution is to remove this parameter:public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = AntiForgeryToken.Value }; } // Other members. } But here the dependency on the salt value becomes a hard-coded one. So the other solution is to move the validation code into a controller base class, to work around this limitation of attributes:public abstract class AntiForgeryControllerBase : Controller { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; protected AntiForgeryControllerBase(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } protected override void OnAuthorization(AuthorizationContext filterContext) { base.OnAuthorization(filterContext); string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } Then make the controller classes inherit from this AntiForgeryControllerBase class. Now the salt is no longer required to be a compile time constant.

    Submit token via AJAX

    On the browser side, once the server side turns on anti-forgery validation for HTTP POST, all AJAX POST requests will fail by default.

    Problem

    In AJAX scenarios, the HTTP POST request is not sent by a form. Take jQuery as an example:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST request will always be invalid, because the server side code cannot see the token in the posted data.

    Solution

    Basically, the tokens must be printed to the browser and then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() needs to be called somewhere. Now the browser has the token in both HTML and cookie. Then jQuery must find the printed token in the HTML, and append the token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> in the specified window. // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most scenarios, it is OK to just replace a $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be some scenarios with a custom token, where $.appendAntiForgeryToken() is useful:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here, the token's container window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have a better solution, please do tell me.
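    For the configurable-salt route, the AntiForgeryToken.Value helper used above is not spelled out in the post. One hedged possibility is a tiny wrapper over appSettings; the "antiForgerySalt" key name is made up for this sketch:

        public static class AntiForgeryToken
        {
            // Reads the salt at runtime instead of baking it in at compile time.
            // "antiForgerySalt" is a hypothetical appSettings key in web.config.
            public static string Value
            {
                get { return System.Configuration.ConfigurationManager.AppSettings["antiForgerySalt"]; }
            }
        }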

    Read the article

  • Can static methods be called using object/instance in .NET

    The answer is yes and no: yes in C++, Java and VB.NET; no in C#. This is only a compiler restriction in C#. You might see on some websites that this restriction can be broken using reflection and delegates, but according to my little research we cannot. I shall try to explain. The following is a code sample that tries to break this rule using reflection; it seems that it is possible to call a static method using an object, p1: using System; namespace T { class Program { static void Main() { var p1 = new Person() { Name = "Smith" }; typeof(Person).GetMethod("TestStatMethod").Invoke(p1, new object[] { }); } class Person { public string Name { get; set; } public static void TestStatMethod() { Console.WriteLine("Hello"); } } } } However, I do not think this method is really being called using p1, but rather via the type name "Person". I shall try to prove this. Look at another example, where Test2 inherits from Test1, and let's see various scenarios.

    Scenario 1: using System; namespace T { class Program { static void Main() { Test1 t = new Test1(); typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { }); } } class Test1 { public static void Method1() { Console.WriteLine("At test1::Method1"); } } class Test2 : Test1 { public static void Method1() { Console.WriteLine("At test1::Method2"); } } } Output: At test1::Method2

    Scenario 2: static void Main() { Test2 t = new Test2(); typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { }); } Output: At test1::Method2

    Scenario 3: static void Main() { Test1 t = new Test2(); typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { }); } Output: At test1::Method2

    In all of the above scenarios the output is the same. That means reflection does not consider the object you pass to the Invoke method in the case of static methods; it always considers the type you specify in typeof(). So what is the use of passing an instance to Invoke? See the sample below: using System; namespace T { class Program { static void Main() { typeof(Test2).GetMethod("Method1").Invoke(null, new object[] { }); } } class Test1 { public static void Method1() { Console.WriteLine("At test1::Method1"); } } class Test2 : Test1 { public static void Method1() { Console.WriteLine("At test1::Method2"); } } } Output: At test1::Method2

    I was able to invoke Method1 of Test2 without any object. No surprise here, as Method1 is static. So we may conclude that static methods cannot be called using instances (only in C#). Why has Microsoft restricted this in C#? Really, there is no use in calling a static method through an object, because static methods are stateless; still, the latest Java and C++ compilers do allow calling static methods using instances.
    A Java sample: class Test { public static void main(String str[]) { Person p = new Person(); System.out.println(p.GetCount()); } } class Person { public static int GetCount() { return 100; } } Output: 100
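    For completeness, here is what the C# restriction looks like at the call site, sketched against the Person class from the first sample; the error text is the compiler's CS0176 diagnostic:

        var p = new Person();
        // p.TestStatMethod();   // error CS0176: Member 'Person.TestStatMethod()' cannot be
        //                       // accessed with an instance reference; qualify it with a
        //                       // type name instead
        Person.TestStatMethod(); // compiles and prints "Hello"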

    Read the article

  • Parallelism in .NET – Part 1, Decomposition

    - by Reed
    The first step in designing any parallelized system is Decomposition.  Decomposition is nothing more than taking a problem space and breaking it into discrete parts.  When we want to work in parallel, we need to have at least two separate things that we are trying to run.  We do this by taking our problem and decomposing it into parts. There are two common abstractions that are useful when discussing parallel decomposition: Data Decomposition and Task Decomposition.  These two abstractions allow us to think about our problem in a way that helps lead us to correct decision making in terms of the algorithms we’ll use to parallelize our routine. To start, I will make a couple of minor points. I’d like to stress that Decomposition has nothing to do with specific algorithms or techniques.  It’s about how you approach and think about the problem, not how you solve the problem using a specific tool, technique, or library.  Decomposing the problem is about constructing the appropriate mental model: once this is done, you can choose the appropriate design and tools, which is a subject for future posts. Decomposition, being unrelated to tools or specific techniques, is not specific to .NET in any way.  This should be the first step to parallelizing a problem, and is valid using any framework, language, or toolset.  However, this gives us a starting point – without a proper understanding of decomposition, it is difficult to understand the proper usage of specific classes and tools within the .NET framework. Data Decomposition is often the simpler abstraction to use when trying to parallelize a routine.  In order to decompose our problem domain by data, we take our entire set of data and break it into smaller, discrete portions, or chunks.  We then work on each chunk in the data set in parallel. This is particularly useful if we can process each element of data independently of the rest of the data.  In a situation like this, there are some wonderfully simple techniques we can use to take advantage of our data.  By decomposing our domain by data, we can very simply parallelize our routines.  In general, we, as developers, should always be searching for data that can be decomposed. Finding data to decompose is fairly simple, in many instances.  Data decomposition is typically used with collections of data.  Any time you have a collection of items, and you’re going to perform work on or with each of the items, you potentially have a situation where parallelism can be exploited.  This is fairly easy to do in practice: look for iteration statements in your code, such as for and foreach. Granted, every for loop is not a candidate to be parallelized.  If the collection is being modified as it’s iterated, or the processing of elements depends on other elements, the iteration block may need to be processed in serial.  However, if this is not the case, data decomposition may be possible. Let’s look at one example of how we might use data decomposition.  Suppose we were working with an image, and we were applying a simple contrast stretching filter.  When we go to apply the filter, once we know the minimum and maximum values, we can apply this to each pixel independently of the other pixels.  This means that we can easily decompose this problem based off data – we will do the same operation, in parallel, on individual chunks of data (each pixel). Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data.  
In order to decompose our problem domain by tasks, we need to think about our algorithm in terms of discrete operations, or tasks, which can then later be parallelized. Task decomposition, in practice, can be a bit more tricky than data decomposition.  Here, we need to look at what our algorithm actually does, and how it performs its actions.  Once we have all of the basic steps taken into account, we can try to analyze them and determine whether there are any constraints in terms of shared data or ordering.  There are no simple things to look for in terms of finding tasks we can decompose for parallelism; every algorithm is unique in terms of its tasks, so every algorithm will have unique opportunities for task decomposition. For example, say we want our software to perform some customized actions on startup, prior to showing our main screen.  Perhaps we want to check for proper licensing, notify the user if the license is not valid, and also check for updates to the program.  Once we verify the license, and that there are no updates, we’ll start normally.  In this case, we can decompose this problem into tasks – we have a few tasks, but there are at least two discrete, independent tasks (check licensing, check for updates) which we can perform in parallel.  Once those are completed, we will continue on with our other tasks. One final note – Data Decomposition and Task Decomposition are not mutually exclusive.  Often, you’ll mix the two approaches while trying to parallelize a single routine.  It’s possible to decompose your problem based off data, then further decompose the processing of each element of data based on tasks.  This just provides a framework for thinking about our algorithms, and for discussing the problem.
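
    Neither abstraction is tied to a particular library, but a small C# sketch can make the distinction concrete. This example is not from the original post – the pixel buffer, the AdjustContrast helper and the CheckLicense/CheckForUpdates methods are assumed names for illustration, using the Task Parallel Library from .NET 4:

        using System;
        using System.Threading.Tasks;

        static class DecompositionSketch
        {
            // Data decomposition: the same operation runs on independent chunks
            // of data (here, each pixel), so the iterations can execute in parallel.
            public static void StretchContrast(byte[] pixels, byte min, byte max)
            {
                Parallel.For(0, pixels.Length, i =>
                {
                    pixels[i] = AdjustContrast(pixels[i], min, max);
                });
            }

            static byte AdjustContrast(byte value, byte min, byte max)
            {
                // Assumed linear stretch; clamping omitted for brevity.
                return (byte)((value - min) * 255 / Math.Max(1, max - min));
            }

            // Task decomposition: discrete, independent operations run side by side.
            public static void Startup()
            {
                Parallel.Invoke(
                    () => CheckLicense(),     // independent task 1
                    () => CheckForUpdates()); // independent task 2
                // Both tasks have completed here; continue with normal startup.
            }

            static void CheckLicense() { /* assumed placeholder */ }
            static void CheckForUpdates() { /* assumed placeholder */ }
        }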

    Read the article

  • Visual Studio Talk Show #115 is now online - Entity Framework 4 (French)

    - by guybarrette
    http://www.visualstudiotalkshow.com Matthieu Mezil: Entity Framework 4. We talk with Matthieu Mezil about version 4 of the Entity Framework (EF4). Among other things, we assess with Matthieu how this new version, which will ship with Visual Studio 2010, makes it possible to design an ORM (Object Relational Mapper) with an Agile implementation. Matthieu Mezil is a consultant and trainer at Access IT in Paris. A C# MVP and INETA speaker, he specializes in the Entity Framework and regularly gives talks on the subject, notably at Microsoft events. An MCT, Matthieu has also written several training courses on OOP, the C# language and, of course, the Entity Framework, which he teaches frequently. His work often brings him to the Microsoft Technology Center in Paris. Matthieu is also a prominent blogger: in French at http://blogs.codes-sources.com/matthieu and in English at http://msmvps.com/blogs/matthieu. To download the episode, a direct MP3 download, an RSS feed subscription and an iTunes Podcast subscription are available on the show's site.

    Read the article

  • ASP.Net Layered app - Share Entity Data Model amongst layers

    - by Chris Klepeis
    How can I share the auto-generated entity data model (the generated object classes) amongst all layers of my C# web app whilst granting query access only in the data layer? This uses the typical 3-layer approach: data, business, presentation. My data layer returns an IEnumerable&lt;T&gt; to my business layer, but I cannot return type T to the presentation layer because I do not want the presentation layer to know of the existence of the data layer – which is where the Entity Framework auto-generated my classes. It was recommended to have a separate layer with just the data model, but I'm unsure how to separate the data model from the query functionality the Entity Framework provides.
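
    One common way to achieve this separation – a hedged sketch, not from the question itself, with all names (MyApp.Models, ProductModel, NorthwindEntities) assumed for illustration – is to keep plain model classes in a separate assembly that every layer references, and have the data layer project the EF-generated entities into them:

        using System.Collections.Generic;
        using System.Linq;

        // Shared assembly, referenced by all layers: plain classes, no EF dependency.
        namespace MyApp.Models
        {
            public class ProductModel
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }
        }

        // Data layer: the only project that references the EF-generated context.
        namespace MyApp.Data
        {
            using MyApp.Models;

            public class ProductRepository
            {
                public IEnumerable<ProductModel> GetProducts()
                {
                    using (var context = new NorthwindEntities()) // assumed EF context name
                    {
                        return context.Products
                            .Select(p => new ProductModel { Id = p.ProductID, Name = p.ProductName })
                            .ToList(); // materialize before the context is disposed
                    }
                }
            }
        }

    The business and presentation layers then work with ProductModel and never see the EF-generated types.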

    Read the article

  • Pro ASP.NET MVC Framework Review

    - by Ben Griswold
    Early in my career, when I wanted to learn a new technology, I'd sit in the bookstore aisle and work my way through each of the available books on the given subject.  Put in enough time in a bookstore and you can learn just about anything. I used to really enjoy my time in the bookstore – but times have certainly changed.  Whereas books used to be the only place I could find solutions to my problems, now they may be the very last place I look.  I have been working with the ASP.NET MVC Framework for more than a year.  I have a few projects and a couple of major deployments under my belt, and I was able to get up to speed with the framework without reading a single book*.  With so many resources at our fingertips (podcasts, screencasts, blogs, stackoverflow, open source projects, www.asp.net, you name it) why bother with a book?

    Well, I flipped through Steven Sanderson's Pro ASP.NET MVC Framework a few months ago. And since it is prominently displayed in my co-worker's office, I tend to pick it up as a reference from time to time.  Last week, I'm not sure why, I decided to read it cover to cover.  Man, did I eat this book up.  Granted, a lot of what I read was review, but it was only review because I had already learned those lessons by piecing the puzzle together for myself via various sources. If I were starting with ASP.NET MVC (or ASP.NET Web Deployment in general) today, the first thing I would do is buy Steven Sanderson's Pro ASP.NET MVC Framework and read it cover to cover.

    Steven Sanderson did such a great job with this book! As much as I appreciated the in-depth model, view, and controller talk, I was completely impressed with all the extra bits which were included.  There was a nice overview of BDD, view engine comparisons, a chapter dedicated to security and vulnerabilities, IoC, TDD and Mocking (of course), IIS deployment options and a nice overview of what the .NET platform and C# offer.  Heck, Sanderson even included bits about WebForms! The book is fantastic and I highly recommend it – even if you think you've already got your head around ASP.NET MVC.  By the way, procrastinators may be in luck.  The ASP.NET MVC V2 Framework book can be pre-ordered.  You might want to jump right into the second edition and find out what Sanderson has to say about MVC 2. * Actually, I did read through the free bits of Professional ASP.NET MVC 1.0.  But it was just a chapter – albeit a really long chapter.

    Read the article

  • How to start with entity framework and service oriented architecture?

    - by citronas
    At work I need to create a new web application that will connect to a MySQL database. (So far I only have experience with Linq-To-Sql classes and MSSQL servers.) My superior tells me to use the Entity Framework (he probably refers to Linq-To-Entities) and provide everything as a service-oriented architecture. Unfortunately nobody at work has experience with that framework or with a really good service-oriented architecture. (Till now no customer wanted to pay for architecture they can't see. This specific project I'm leading will be long-term, meaning multiple years, so it would be best to design it in such a way that multiple target platforms, like ASP.NET or C# WPF, could use it.) For now, the main target platform is ASP.NET. So I have the following questions:

    1) Where can I best read about what's really behind service-oriented architecture (beginner tutorials are fine for now) and how to do it according to best practice?
    2) So far I can't see a real difference between Linq-To-Sql classes and the information I've googled so far on the Entity Framework. So, what's the difference? Where do I find nice tutorials for it?
    3) Is there any difference in the Entity Framework depending on the database server (MSSQL or MySQL)? If not, does that mean that code snippets I stumble across will work independently of the database?
    4) I use Visual Studio 2010. Is there anything specific I have to keep in mind?

    Read the article

  • Jump and run HTML5 Game Framework

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for it. We have run into some difficulties and would like to ask you for advice: we have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements. We would implement a scene for the playing state, one for pause, etc., and switch between them. Each scene can contain multiple "Layers", each representing a canvas. These layers contain "ObjectEntities", which represent images or other shapes such as rectangles. Each ObjectEntity has its own temporaryCanvas, so that one entity can draw images while another contains a rectangle.

    We set an activeScene in our Stage, so when the game is played, only the active scene is drawn. Calling activeScene.draw() calls all sublayers to draw, which draw their entities (calling drawImage(entity.canvas)). But is this good practice – having multiple canvases to draw? On each game loop, every layer context is cleared and redrawn. E.g. we just have a still Background-Layer, … wouldn't it be more useful to draw it once and not clear and redraw it every frame? Or should we use a global canvas, for example in the Stage, and use just that canvas to draw? We thought this would be too expensive...

    Another question: do you have any advice on how we could dive into implementing our own framework? Most material we find online either relies on existing frameworks or implements a game directly without building a framework.

    Read the article

  • Mobile Web Framework that will only control rendering and page transitions

    - by rlemon
    I have been using jQuery Mobile for a bit now, and there are some things I like about it and others I do not. First, a bit of background. I have a lightweight mobile application with a few configurations and 6 pages. Ideally I would like to load all pages into the DOM (they interact with each other quite often, and pages will be switched with the same frequency). The application will request some JSON every n seconds and refresh the values on the page (yes, it is primarily an information-display app). The only real thing I like about the jQuery Mobile framework is how easy it is to have a standardized UI across all devices and browsers; I'm really not using much else out of the framework other than the basic page navigation (if you are familiar with the framework, a bare-bones multi-page design is all I need). The reason I want to step away from jQuery Mobile is its weight. Not only do you need to include the mobile library, but also the base jQuery library, which I do not like because I'm not using jQuery anywhere else on the site. Any suggestions for lightweight mobile frameworks with rendering similar to jQuery Mobile?

    Read the article

  • A list of Entity Framework providers for various databases

    - by Robert Koritnik
    Which providers are there, and what is your experience using them? I would like to know about all possible native .NET Framework Entity Framework providers that are out there, as well as their limitations compared to the default Linq2Entities (from MS for MS SQL). If there are more for the same database, even better. Tell me and I'll be updating this post with the list. Feel free to add additional providers directly into this post, or provide an answer and others (including me) will add it to the list.

    Entity Framework 1
    - Microsoft SQL Server Standard/Enterprise/Express
      - Linq 2 Entities – Microsoft SQL Server connector
      - DataDirect ADO.NET Data Providers
    - Microsoft SQL Server CE (Compact Edition)
      - Any provider?
    - MySQL
      - MySQL Connector (since version 6.0) – I've read about issues when using Skip(), Take() and Sort() in the same expression tree – everyone is welcome to add their experience/knowledge regarding this. (NOTE: MySQL Connector/NET Visual Studio Integration is not supported in the Express Editions of Visual Studio, meaning you won't be able to view MySQL databases in the Database Explorer window or add a MySQL data source via the Visual Studio wizard dialog boxes. Some users may find that this limits their ability to use Entity Framework and MySQL within Visual Studio Express.)
      - Devart dotConnect for MySQL – similar issues to MySQL's connector, as I've read, and both try to blame MS for them [these issues are supposed to be solved]
    - SQLite
      - Devart dotConnect for SQLite
      - System.Data.SQLite
    - PostgreSQL
      - Devart dotConnect for PostgreSQL
      - Npgsql
    - Oracle
      - Devart dotConnect for Oracle
      - Sample Entity Framework Provider for Oracle – community effort project
      - DataDirect ADO.NET Data Providers
    - DB2
      - IBM Data Server Provider has EF support (with some limitations)
      - DataDirect ADO.NET Data Providers
    - Sybase
      - Sybase iAnywhere
      - DataDirect ADO.NET Data Providers
    - Informix
      - IBM Data Server Provider supports Informix
    - Firebird
      - ADO.NET Data Provider with EF support
    - Provider Wrappers
      - Tracing and Caching Providers for EF

    Entity Framework 4 (beta)
    - Microsoft SQL Server
      - Microsoft's Linq to Entities 4 – shipped with .NET 4.0 and Visual Studio 2010; so far the only provider for EF4
    - MySQL
      - Devart dotConnect for MySQL
    - SQLite
      - Devart dotConnect for SQLite
    - PostgreSQL
      - Devart dotConnect for PostgreSQL
    - Oracle
      - Devart dotConnect for Oracle
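
    For context on what "provider" means in practice: with EF v1 you point the model at a third-party provider through the entity connection string. Below is a hedged sketch (not from the original post) using System.Data.EntityClient; the provider invariant name, connection details and metadata resource names are assumptions that vary per vendor and per model:

        using System.Data.EntityClient;

        static class ProviderConnectionSketch
        {
            public static string BuildConnectionString()
            {
                var builder = new EntityConnectionStringBuilder
                {
                    // Invariant name registered by the ADO.NET provider (assumed example).
                    Provider = "MySql.Data.MySqlClient",
                    // Plain ADO.NET connection string for that provider (assumed values).
                    ProviderConnectionString = "server=localhost;database=northwind;uid=app;pwd=secret",
                    // Model metadata produced by the EDM designer (assumed resource names).
                    Metadata = "res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl"
                };
                return builder.ToString();
            }
        }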

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting, and an icon shows the sort direction.

    Strongly Typed View Models. The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view. The ProductViewModel class (a sketch of it appears at the end of this excerpt) holds information about a Product. We use attributes to specify whether a property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)]) but are needed because we will be using them for filtering when writing our LINQ query.

    The rest of the key ViewModels in our design: we have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination&lt;ProductViewModel&gt;. The MvcContrib Grid expects the IPagination&lt;T&gt; interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable&lt;T&gt; into an IPagination&lt;T&gt; by calling the AsPagination extension method in the MvcContrib library; it also creates a paged set of type ProductViewModel. The ProductFilterViewModel class holds information about the different select lists and the ProductName being searched on. It also holds the state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in ViewState when working with WebForms; with MVC there is no state storage, so all state has to be fetched and passed back to the view). The GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction.

    The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier.

        <% Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %>
        <% Html.RenderPartial("Pager", Model.ProductPagedList); %>
        <% Html.RenderPartial("SearchResults", Model); %>
        <% Html.RenderPartial("Pager", Model.ProductPagedList); %>

    The view contains a partial view "SearchFilters" and passes it the ProductFilterViewModel. The SearchFilters view uses this model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the interface IPagination. The "Pager" view contains the MvcContrib Pager helper used to render the paging information. This view is included twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across views. The partial view "SearchResults" uses the ProductListContainerViewModel. This partial view contains the MvcContrib Grid, which needs both the ProductPagedList and GridSortOptions to render itself.

    The Controller Action. An example request looks like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action, we create an IQueryable&lt;ProductViewModel&gt; by calling the GetProductsProjected() method.

        /// <summary>
        /// This method takes in a filter list and paging/sort options and applies
        /// them to an IQueryable of type ProductViewModel.
        /// </summary>
        /// <returns>
        /// The return object is a container that holds the sorted/paged list,
        /// state for the filters and state about the current sorted column.
        /// </returns>
        public ActionResult Index(string productName, int? supplierID, int? categoryID,
            GridSortOptions gridSortOptions, int? page)
        {
            var productList = productRepository.GetProductsProjected();

            // Set default sort column
            if (string.IsNullOrWhiteSpace(gridSortOptions.Column))
            {
                gridSortOptions.Column = "ProductID";
            }

            // Filter on SupplierID
            if (supplierID.HasValue)
            {
                productList = productList.Where(a => a.SupplierID == supplierID);
            }

            // Filter on CategoryID
            if (categoryID.HasValue)
            {
                productList = productList.Where(a => a.CategoryID == categoryID);
            }

            // Filter on ProductName
            if (!string.IsNullOrWhiteSpace(productName))
            {
                productList = productList.Where(a => a.ProductName.Contains(productName));
            }

            // Create all filter data and set current values, if any.
            // These values will be used to set the state of the select lists and textbox
            // by sending them back to the view.
            var productFilterViewModel = new ProductFilterViewModel();
            productFilterViewModel.SelectedCategoryID = categoryID ?? -1;
            productFilterViewModel.SelectedSupplierID = supplierID ?? -1;
            productFilterViewModel.Fill();

            // Order and page the product list
            var productPagedList = productList
                .OrderBy(gridSortOptions.Column, gridSortOptions.Direction)
                .AsPagination(page ?? 1, 10);

            var productListContainer = new ProductListContainerViewModel
            {
                ProductPagedList = productPagedList,
                ProductFilterViewModel = productFilterViewModel,
                GridSortOptions = gridSortOptions
            };

            return View(productListContainer);
        }

    The supplier, category and product name filters are applied to this IQueryable if any are present in the request. The ProductPagedList is created by applying a sort order and calling the AsPagination method. Finally, the ProductListContainerViewModel is created and returned to the view. You have seen how to use the MvcContrib Grid and Pager to render a clean, lightweight UI with strongly typed views, and how partial views get their data from the strongly typed model passed to them by the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don't forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip
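
    The ProductViewModel listing referenced above is not reproduced in this excerpt. A minimal sketch of what it might look like, inferred only from the attributes and property names the post mentions – the exact properties and heading text are assumptions:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        public class ProductViewModel
        {
            [DisplayName("ID")]
            public int ProductID { get; set; }

            [DisplayName("Product Name")]
            public string ProductName { get; set; }

            // Hidden from the grid but needed for filtering in the LINQ query.
            [ScaffoldColumn(false)]
            public int? SupplierID { get; set; }

            [ScaffoldColumn(false)]
            public int? CategoryID { get; set; }
        }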

    Read the article

  • Autopostback select lists in ASP.NET MVC using jQuery

    - by rajbk
    This tiny snippet of code shows you how to have your select lists automatically post back their containing form when the selected value changes. When the DOM is fully loaded, we get all select nodes that have an attribute of "data-autopostback" with a value of "true". We wire up the "change" JavaScript event on all these select nodes. This event is fired as soon as the user changes their selection with the mouse. When the event fires, we find the closest form tag for the select node that raised the event and submit the form.

        $(document).ready(function () {
            $("select[data-autopostback=true]").change(function () {
                $(this).closest("form").submit();
            });
        });

    A select tag with autopostback enabled will look like this:

        <select id="selCategory" name="Category" data-autopostback="true">
            <option value='1'>Electronics</option>
            <option value='2'>Books</option>
        </select>

    The reason I am using the "data-" prefix in the attribute is to be HTML5 compliant. A custom data attribute is an attribute in no namespace whose name starts with the string "data-", has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z). The snippet can be used with any HTML page.
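
    On the server side, the submitted value arrives like any other form field. A hedged sketch – the controller, action and parameter names are assumptions, not part of the original snippet – of an ASP.NET MVC action that would receive the example select list's value:

        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // "category" matches the name attribute of the select element, so the
            // default model binder maps the submitted value to this parameter
            // whether the form is submitted via GET or POST.
            public ActionResult Index(int? category)
            {
                // Filter by the selected category here (assumed repository call):
                // var products = repository.GetByCategory(category);
                return View();
            }
        }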

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >