Search Results

Search found 125180 results on 5008 pages for 'net code samples'.


  • Shallow Copy vs DeepCopy in C#.NET

    Hope the example below helps to understand the difference. Please drop a comment if you have any doubts.

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        namespace ShallowCopyVsDeepCopy
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var e1 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                    var e2 = e1.ShallowClone();
                    e1.Department.DeptName = "Accounts";
                    // The shallow clone shares the same Department instance as e1,
                    // so this prints "Accounts".
                    Console.WriteLine(e2.Department.DeptName);

                    var e3 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                    var e4 = e3.DeepClone();
                    e3.Department.DeptName = "Accounts";
                    // The deep clone owns an independent Department instance,
                    // so this still prints "Finance".
                    Console.WriteLine(e4.Department.DeptName);
                }
            }

            [Serializable]
            class Dep
            {
                public int DeptNo { get; set; }
                public String DeptName { get; set; }
            }

            [Serializable]
            class Emp
            {
                public int EmpNo { get; set; }
                public String EmpName { get; set; }
                public Dep Department { get; set; }

                // Shallow copy: value fields are copied, reference fields are shared.
                public Emp ShallowClone()
                {
                    return (Emp)this.MemberwiseClone();
                }

                // Deep copy: the whole object graph is serialized and deserialized
                // into a fully independent copy; this is why both classes are
                // marked [Serializable].
                public Emp DeepClone()
                {
                    MemoryStream ms = new MemoryStream();
                    BinaryFormatter bf = new BinaryFormatter();
                    bf.Serialize(ms, this);
                    ms.Seek(0, SeekOrigin.Begin);
                    object copy = bf.Deserialize(ms);
                    ms.Close();
                    return copy as Emp;
                }
            }
        }


  • Migration & Modernization: Windows/VB6 Apps to ASP.NET HTML5

    - by Visual WebGui
    I would like to invite you to a webinar we are doing in collaboration with Jeffrey S. Hammond, Principal Analyst serving Application Development & Delivery Professionals at Forrester Research. The webinar is free and will introduce the substantial changes brought on by the move to Web Applications and Open Web architectures, and the challenges this move places on application development shops. We’ll also introduce how we at Gizmox are helping clients navigate this mobile shift and evolve existing...(read more)


  • How to read values from RESX file in ASP.NET using ResXResourceReader

    Here is a method which returns the value for a particular key in a given resource file. It assumes resourceFileName is the name of the resource file and key is the string for which the value has to be retrieved. Note that ResXResourceReader lives in the System.Resources namespace (System.Windows.Forms.dll).

        public static string ReadValueFromResourceFile(String resourceFileName, String key)
        {
            String _value = String.Empty;
            ResXResourceReader _resxReader = new ResXResourceReader(
                String.Format("{0}{1}\\{2}",
                    System.AppDomain.CurrentDomain.BaseDirectory.ToString(),
                    StringConstants.ResourceFolderName,
                    resourceFileName));
            foreach (DictionaryEntry _item in _resxReader)
            {
                if (_item.Key.Equals(key))
                {
                    _value = _item.Value.ToString();
                    break;
                }
            }
            return _value;
        }
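    A call might look like this (the file name and key below are hypothetical):

        // Assumes a "Labels.resx" file under StringConstants.ResourceFolderName
        string welcome = ReadValueFromResourceFile("Labels.resx", "WelcomeMessage");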


  • Experiments in Wackiness: Allowing percents, angle-brackets, and other naughty things in the ASP.NET

    - by The Official Microsoft IIS Site
    Just because you CAN do something doesn't mean you SHOULD. However, it's always nice to do something crazy so that you can better understand a system. Warning: There is no warranty implied here. I'm loading the gun and showing you where to point it. If you point it at your foot, that's your business. Safety mechanisms exist for a reason, and if you're going to use this tip to just "get an app to work" but you're not sure why it's broken and you're just...(read more)


  • MSDN Webcast material: Introduction to ASP.NET Web Pages with Razor Syntax

    - by carlone
    Yesterday I had the opportunity to share with you, in the MSDN webcast, a brief introduction to Razor. In this webcast, which will soon be available for download or viewing by those who could not join us, we saw a series of Razor examples and applications.   Below I share the presentation and the demo site used in the webcast: Presentation:     Demo site:   During the demonstration I used WebMatrix, which you can download here: http://www.microsoft.com/web/webmatrix/    If you have any questions, I am at your service.   Best regards,   Carlos A. Lone



  • IL and case-sensitivity

    - by Ali .NET
    Quoted from "A Brief Introduction To IL code, CLR, CTS, CLS and JIT In .NET": CLS stands for Common Language Specification. It is a subset of the CTS. The CLS is a set of rules or guidelines which, if followed, ensure that code written in one .NET language can be used by another .NET language. For example, one rule is that we cannot have member functions whose names differ only by case, i.e. we should not have both add() and Add(). This may work in C#, because it is case-sensitive, but if we try to use that C# code from VB.NET it will not work, because VB.NET is not case-sensitive. Based on the above text I want to confirm two points here: (1) Is the case-sensitivity rule a condition for member functions only, and not for member properties? (2) Is it true that C# wouldn't be interoperable with VB.NET if it didn't take care of case sensitivity?
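    For illustration, a minimal sketch of the rule in question (my own example, not from the quoted article). With CLS compliance checking enabled, the C# compiler warns about public members that differ only by case:

        using System;

        [assembly: CLSCompliant(true)]

        public class Calculator
        {
            // Legal C#, but the compiler raises warning CS3005, because the two
            // names differ only by case - a case-insensitive language such as
            // VB.NET could not tell them apart.
            public int Add(int a, int b) { return a + b; }
            public int add(int a, int b) { return a + b; }
        }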


  • ASP.NET ViewState Tips and Tricks #1

    - by João Angelo
    In User Controls or Custom Controls DO NOT use ViewState to store non-public properties. Persisting non-public properties in ViewState results in loss of functionality if the Page hosting the controls has ViewState disabled, since the page can no longer reset the values of those non-public properties on load. Example:

        public class ExampleControl : WebControl
        {
            private const string PublicViewStateKey = "Example_Public";
            private const string NonPublicViewStateKey = "Example_NonPublic";

            // DO
            public int Public
            {
                get
                {
                    object o = this.ViewState[PublicViewStateKey];
                    if (o == null)
                        return default(int);
                    return (int)o;
                }
                set { this.ViewState[PublicViewStateKey] = value; }
            }

            // DO NOT
            private int NonPublic
            {
                get
                {
                    object o = this.ViewState[NonPublicViewStateKey];
                    if (o == null)
                        return default(int);
                    return (int)o;
                }
                set { this.ViewState[NonPublicViewStateKey] = value; }
            }
        }

        // Page with ViewState disabled
        public partial class ExamplePage : Page
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                this.Example.Public = 10; // Restore Public value
                this.Example.NonPublic = 20; // Compile error!
            }
        }


  • Thou shalt not put code on a pedestal - Code is a tool, no more, no less

    - by Ralf Westphal
    “Write great code and everything else becomes easier” is what Paul Pagel believes in. That's his version of an adage by Brian Marick that he cites: “treat code as an end, not just a means.” And he concludes: “My post-Agile world is software craftsmanship.” I wonder if that's really the way to go. Will “simply” writing great code lead the software industry into the light? He's alluding to the philosopher Kant, who proposed that a human being should never be treated as a means, but always as an end. But should we transfer this ethical statement into the world of software? I doubt it.

    Reason #1: Human beings are categorically different from code. They are autonomous entities who need to find a way of living happily together. To Kant it seemed this goal could only be reached if nobody (ab)used a human being for his/her purposes, because using a human being, i.e. treating it as a means, would contradict the fundamental autonomy and freedom of human beings. People should hold a symmetric view of their relationships: since nobody wants to be (ab)used, nobody should (ab)use anybody else. If you want to be treated decently, with respect, in accordance with your own free will - which means as an end - then do the same to other people. Code is dead; it's a product, a tool for people to reach their goals. No company spends any money on code other than to save money or earn money in the long run. Code is not a puppy. Enterprises do not commission software development just to feel good in its company. Code is not a buddy. Code is a slave, if you will. A mechanical slave, a non-tangible robot. Code is a tool, is a tool. And if we start to treat it differently, if we elevate its status unduly... I guess that will contort our relationship in a counterproductive way. Please get me right: just because something is “just a tool”, “just a product”, does not mean we should not be careful while designing, building, and using it. Right to the contrary. We should be very careful when writing code – but not for the code's sake! We should be careful because we respect our customers, who are fellow human beings who should be treated as an end. If we are careless, neglectful, or ignorant when producing code on their behalf, then we're using them. Being sloppy means you're caring more for yourself than for your customer. You're then treating the customer as a means to fulfill some of your own needs. That's plain unethical behavior.

    Reason #2: The focus should always be on your purpose, not on any tool. But if code is treated as an end, then the focus is on the code. That might sound right, because where else should your focus be as a software developer? But, well, I'd say your focus should be on delivering value to your customer. Because in the end your customer does not care if you write a single line of code. She just wants her problem to be solved. Solving problems is the purpose of any contractor. Code must be treated just as a means, a tool we know how to handle very well. But if we're really trying to be craftsmen, then we should be conscious about exactly that and act ethically. That means we must never be so focused on our tool as to be unable to suggest better solutions to the problems of our customers than code.

    I'm all with Paul when he urges us to “write great code”. Sure, if you need to write code, then by all means do so. Write the best code you can think of – and then try to improve it.
    Paul has all the best intentions when he signs Brian's “treat code as an end” – but as we all know, “the road to hell is paved with best intentions” ;-) Yes, I can imagine a “hell of code focus”. In fact, I don't need to imagine it; I'm seeing it quite often. Because code hell is wherever two developers stand together and are so immersed in talking about all sorts of coding tricks, design patterns, code smells, technologies, platforms, and tools that they lose sight of the big picture. Talking about TDD or SOLID or refactoring is a sign of consciousness – relative to the “cowboy coder's” view of the world. But from yet another point of view, TDD, SOLID, and refactoring are just cures for ailments within a system. And I fear that if “writing great code” is the only focus, or the main focus, of software development, then we as an industry lose the ability to see that. Focus draws a line around something; it defines a horizon for perception and thinking. So if we focus on code, our horizon ends where “the land of code” ends. I don't think that should be our professional attitude.

    So what about Software Craftsmanship as the next big thing after Agility? I think Software Craftsmanship has an important message for all software developers and beyond. But to make it the successor of the Agility movement seems to miss a point. Agility never claimed to solve all software development problems, I'd say. So to blame it for having missed out on certain aspects is wrong. If I had to summarize Agility in one word, I'd say “value”. Agility put value for the customer back into software development. Focus on delivering value early and often – that's Agility's mantra. All else follows from that. And I ask you: is that obsolete? Is delivering value not hip anymore? No, surely not. That's our very purpose as software developers. So how can Agility become obsolete and need to be replaced?

    We need to do away with this “either/or” thinking. It's either Agility or Lean or Software Craftsmanship or whatnot. Instead we should start integrating concepts and movements. Think “both/and”. Think Agility plus Software Craftsmanship plus Lean plus whatnot. We don't need to tear anything down from a pedestal and replace it with a new idol. Instead we should do away with pedestals and arrange whatever is helpful in a circle. Then we can turn to concepts and movements for whatever they are best at. After 10 years of Agility we should be able to identify what it was good at – and keep that. Keep Agility around and add whatever Agility was lacking or never concerned with. Add whatever is at the core of Software Craftsmanship. Add whatever is at the core of Lean, etc. But don't call out the age of Post-Agility. Because it had better never end. Because once we start to lose Agility's core, we're losing focus on the customer.


  • Consume a .NET web service using jQuery

    - by Babunareshnarra
    This implementation shows the way to consume a web service using jQuery. Client-side AJAX with an HTTP POST request is significant when it comes to loading speed and responsiveness. Following is the service, which returns a string in JSON.

        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string getData(string marks)
        {
            // Note: concatenating input into the SQL string like this is
            // vulnerable to SQL injection; prefer a parameterized query.
            DataTable dt = retrieveDataTable("table", @"
                SELECT * FROM TABLE WHERE MARKS='" + marks.ToString() + "' ");
            List<object> RowList = new List<object>();
            foreach (DataRow dr in dt.Rows)
            {
                Dictionary<object, object> ColList = new Dictionary<object, object>();
                foreach (DataColumn dc in dt.Columns)
                {
                    ColList.Add(dc.ColumnName,
                        (string.Empty == dr[dc].ToString()) ? null : dr[dc]);
                }
                RowList.Add(ColList);
            }
            JavaScriptSerializer js = new JavaScriptSerializer();
            string JSON = js.Serialize(RowList);
            return JSON;
        }

    Consuming the web service:

        $.ajax({
            type: "POST",
            data: '{ "marks": "' + val + '"}', // Required when passing parameters
            contentType: "application/json",
            dataType: "json",
            url: "/dataservice.asmx/getData",
            success: function (response) {
                RES = JSON.parse(response.d);
                var obj = JSON.stringify(RES);
            },
            error: function (msg) {
                alert('failure');
            }
        });

    Remember to reference the jQuery library on the page.


  • Licensing approach for a .NET library that might be used in desktop / web-service / cloud environments

    - by Bobrovsky
    I am looking for advice on how to architect licensing for a .NET library. I am not asking for tool/service recommendations or anything like that. My library can be used in a regular desktop application or in an ASP.NET solution, and now Azure services come into play. Currently, for desktop applications the library checks whether the application and company names from the version history are the same as the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems:
    - an Azure-enabled web application can be run on different hardware each time (AFAIK)
    - sometimes the hardware ID for the same hardware changes unexpectedly
    - checking the hardware ID or version info might not be allowed in some circumstances (shared hosting, for example)
    So I am thinking about what approach I can take to architect a licensing scheme that:
    - is friendly to customers (I do not try to fight piracy, but I do want to warn the customer if he uses the library on more servers than he paid for)
    - can be used when there is no internet connection
    - can be used on shared hosting
    What would you recommend?


  • What's missing in ASP.NET MVC?

    - by LukaszW.pl
    Hello programmers, I think there are not many people who don't think that ASP.NET MVC is one of the greatest technologies Microsoft has given us. It gives full control over the rendered HTML, provides separation of concerns, and suits the stateless nature of the web. Newer versions of the framework gave us new features and tools, and that's great, but... what solutions should Microsoft include in new versions of the framework? What are the biggest gaps in comparison with other web frameworks like PHP or Ruby? What could improve developer productivity? What's missing in ASP.NET MVC?


  • Job selection between .NET and PHP [closed]

    - by Swapnil Gondkar
    Hi, I am Swapnil, a fresher from the 2011 engineering batch of Mumbai University. I have developed dynamic websites in PHP and have quite good experience working with PHP for two years. When I went for interviews, I got selected by a company that works mainly in PHP and its related technologies to create websites. The other company in which I also got selected offers a package more than half again higher than the first, but there I would have to work on the .NET platform and all the Microsoft technologies, which I am not keen on. The work environment of the PHP company is quite cool, with 400 employees (only 10 of them PHP developers), while the .NET company has a strength of only 20 employees. Now the thing is, I do not know about enterprise application building and other such work, so if you have any advice that may help me select my job, it would be appreciated.


  • Best way to reuse common functions between ASPX pages?

    - by DFord
    I have a bunch of functions that are used across multiple ASPX files. I want to condense these down to one file to be used by all the ASPX files. I have a few ideas, but I want to know what the accepted method of doing this would be. One idea is to just create a class to put them in. I also wondered if I could put them in an ASCX page, but that does not look like the solution I'm looking for. Is there an accepted method for this type of situation?
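    A minimal sketch of the shared-class idea mentioned above, using a common base page (all names here are hypothetical):

        using System;
        using System.Web.UI;

        // Shared helpers live in one place...
        public class BasePage : Page
        {
            protected string FormatCurrency(decimal value)
            {
                return value.ToString("C");
            }
        }

        // ...and each ASPX code-behind inherits from BasePage instead of Page.
        public partial class OrdersPage : BasePage
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string display = FormatCurrency(19.99m);
            }
        }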


  • VS2010 MSTest CruiseControl.NET .NET 3.5

    - by Bill Campbell
    Hi, We're in the process of upgrading from VS2008 to VS2010 since it's now released. We are using CC.NET along with MSTest, and want to use the MS coverage tool instead of NCover. Interestingly, as I've seen others talking about as well, when you upgrade your project from VS2008 to VS2010 your test projects get converted to .NET 4. Nice move!! So WTF does one do with their CI environment in order to build this stuff (some projects in .NET 3.5, some in .NET 4 - these are different FRAMEWORKS!) LOL!!! It seems that I might need to have my CC.NET build two separate projects? - I'm not sure how to run the unit tests from cruise with .NET 4. Has anyone done this and have a snippet of their config they might share? And I thought this was going to be a simple thing. :( thanks! Bill44077


  • Alternatives to CAT.NET for website security analysis

    - by Gavin Miller
    I'm looking for an alternative tool to CAT.NET for performing static security scans on .NET code. Currently the CAT.NET tooling/development is at a somewhat fragile stage and doesn't offer the reliability that I'm looking for. Are there any alternative static code analyzers that you use for detecting security issues?


  • Stuck at the STARTUP [closed]

    - by Tarik Setia
    I started with "Getting started with asp mvc4 tutorial". I just created the project and when I pressed F5 I got this: Server Error in '/' Application. -------------------------------------------------------------------------------- Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.TypeLoadException: Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [TypeLoadException: Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.] System.Web.Mvc.VirtualPathProviderViewEngine.GetPath(ControllerContext controllerContext, String[] locations, String[] areaLocations, String locationsPropertyName, String name, String controllerName, String cacheKeyPrefix, Boolean useCache, String[]& searchedLocations) +0 System.Web.Mvc.VirtualPathProviderViewEngine.FindView(ControllerContext controllerContext, String viewName, String masterName, Boolean useCache) +315 System.Web.Mvc.c__DisplayClassc.b__a(IViewEngine e) +68 System.Web.Mvc.ViewEngineCollection.Find(Func`2 lookup, Boolean trackSearchedPaths) +182 System.Web.Mvc.ViewEngineCollection.Find(Func`2 cacheLocator, Func`2 locator) +67 System.Web.Mvc.ViewEngineCollection.FindView(ControllerContext controllerContext, String viewName, String masterName) +329 System.Web.Mvc.ViewResult.FindView(ControllerContext context) +135 System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +230 System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult) +39 System.Web.Mvc.c__DisplayClass1c.b__19() +74 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation) +388 System.Web.Mvc.c__DisplayClass1e.b__1b() +72 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult) +303 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +844 System.Web.Mvc.Controller.ExecuteCore() +130 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +229 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39 System.Web.Mvc.c__DisplayClassb.b__5() +71 System.Web.Mvc.Async.c__DisplayClass1.b__0() +44 System.Web.Mvc.Async.c__DisplayClass8`1.b__7(IAsyncResult _) +42 System.Web.Mvc.Async.WrappedAsyncResult`1.End() +152 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +59 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40 System.Web.Mvc.c__DisplayClasse.b__d() +75 System.Web.Mvc.SecurityUtil.b__0(Action f) +31 System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +61 
System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +118 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +10303829 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +178 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17020


  • Why does an ASP.NET MVC application not work in the ASP.NET Classic application pool?

    - by Amitabh
    I have an ASP.NET MVC 2 web application deployed on IIS 7.5 on .NET 4.0. When I select the application pool ASP.NET v4.0 Classic, I get the following error: HTTP Error 403.14 - Forbidden. The Web server is configured to not list the contents of this directory. The same application works fine when I select the application pool ASP.NET v4.0 Integrated. Does anyone know the reason for this?


  • Performance-related features for migration from .NET 2003 / Framework 1.1 to .NET 2008 / Framework 3.5?

    - by KuldipMCA
    I have worked with VB.NET 2003 (Framework 1.1) for the last 3.5 years on Windows applications. We are currently migrating to VB.NET 2008 (Framework 3.5), but I don't know about the features related to ADO.NET that are important for performance. I know LINQ to SQL, but our architecture was designed in .NET 2003, so we have to follow it. Are there any features that are very important for enhancing performance?


  • Is it necessary to create ASP.NET 4.0 SQL session state database, distinct from existing ASP.NET 2.0

    - by Chris W. Rea
    Is the ASP.NET 4.0 SQL session state mechanism backward-compatible with the ASP.NET 2.0 schema for session state, or should/must we create a separate and distinct session state database for our ASP.NET 4.0 apps? I'm leaning towards the latter anyway, but the 2.0 database seems to just work, though I'm wondering if there are any substantive differences in the ASPState database schema / procedures between the 2.0 and 4.0 versions of ASP.NET. Thank you.
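    For reference, a separate session state database for the 4.0 apps can be created with the aspnet_regsql.exe tool shipped in the 4.0 framework folder; this is a sketch with placeholder server and database names:

        C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regsql.exe -S MyServer -E -ssadd -sstype c -d ASPState40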


  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
    By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

    When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in exactly this manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance.

    The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to near-optimum load balancing, even in odd cases such as increasing or decreasing workloads.

    Parallel.ForEach is a bit more complicated, however.  When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread-safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default Load Balancing partitioning scheme, the colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes).

    Most of the time, this is fantastic behavior, and most likely will outperform any custom-written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner.  There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately.  The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand-written partitioning strategy.
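    To make the chunk-size discussion concrete, here is a minimal sketch (my own illustration, not from the samples project) of supplying a fixed chunk size through the built-in range partitioner, rather than subclassing Partitioner<T>:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class PartitionDemo
        {
            static void Main()
            {
                double[] data = new double[1000000];

                // Partitioner.Create splits the index range into fixed-size
                // chunks (here 10,000 indices); each chunk becomes one task.
                var rangePartitioner = Partitioner.Create(0, data.Length, 10000);

                Parallel.ForEach(rangePartitioner, range =>
                {
                    // range is a Tuple<int, int>: [fromInclusive, toExclusive)
                    for (int i = range.Item1; i < range.Item2; i++)
                    {
                        data[i] = Math.Sqrt(i);
                    }
                });
            }
        }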


  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based on the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.  Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based on data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail.

    Once we’ve determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let’s take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel.  Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row = 0; row < pixelData.GetLength(0); ++row)
        {
            for (int col = 0; col < pixelData.GetLength(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two-dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.  As I mentioned, when you’re decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we’re using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one independent operation on each element based on those loop positions.  This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we’re using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetLength(0), row =>
        {
            for (int col = 0; col < pixelData.GetLength(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.  In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.  If we parallelize both loops, we’re potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code: Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this: Partition your problem in a way that places the most work possible into each task.  This typically means, in practice, that you will want to parallelize the routine at the “highest” point possible, typically the outermost loop.  If you’re looking at parallelizing methods which call other methods, you’ll want to try to partition your work high up in the stack – as you get into lower-level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we’re going to process in advance.  If we’re iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we’ll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.  As an example, let’s take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we’re doing a fair amount of work for each customer in our collection, but we don’t know how many customers exist.  If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect-free, thread-safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement; however, it will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For.  The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword (a sketch follows at the end of this post).  Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing “work” is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
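    As a footnote to the do/while remark above, here is a minimal sketch (my own illustration, not from the original post) of feeding a while loop's work to Parallel.ForEach through an iterator and the yield keyword:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class WhileLoopDemo
        {
            // An iterator turns a while loop's termination condition into an
            // IEnumerable<int> that Parallel.ForEach can consume.
            static IEnumerable<int> ValuesWhile(Func<int, bool> condition)
            {
                int i = 0;
                while (condition(i))
                {
                    yield return i;
                    i++;
                }
            }

            static void Main()
            {
                // Each yielded value becomes one element of the parallel loop.
                Parallel.ForEach(ValuesWhile(i => i < 100), value =>
                {
                    Console.WriteLine(Math.Sqrt(value));
                });
            }
        }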


  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx
    Continuous integration (CI) is important. If you don’t have it set up…you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you’re a business, that generally isn’t a problem. However, if you have some open source projects hosted, for example on GitHub, there haven’t really been any options. That has changed with the latest release of AppVeyor, which bills itself as “Continuous integration for busy developers.” What’s different about AppVeyor is that it’s a hosted solution. Why is that important? By being a hosted solution, it means that I don’t have to maintain my own infrastructure for a build server. How does that help if you’re hosting an open source project? AppVeyor has a really competitive pricing plan. For an unlimited number of public repositories, it’s free. That gives you a cloud-hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn’t hard to do at all. I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my password for GitHub into AppVeyor. Once it was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add “build badges” to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build. Out of the box, you can simply select a repository, add the build project, click New Build, and wait for the build to complete. You now have a complete CI server running for your project. The best part of this, besides the fact that it “just worked” with almost zero configuration, is that you can configure it through a web-based interface which is very streamlined, clean, and easy to use, or you can use an appveyor.yml file. This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (the YAML format) and store it in your repository. The benefits to that are huge. The file becomes a versioned artifact in your source control system, so it can be branched, merged, and is completely transparent to anyone working on the project. By the way, AppVeyor isn’t limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln. I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I’ve provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it’s easily been one of the best customer support experiences I’ve seen in a long time. AppVeyor is still young, so it doesn’t yet have full feature parity with some of the older (more established) CI systems available, but it’s getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you’re looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.


  • ASP.NET MVC vs ASP.NET for complex workflows

    - by Grant Sutcliffe
    I have just become involved in migrating a series of complex workflows with InfoPath UIs to web-based UIs. I am new to ASP.NET MVC but have started to evaluate it, versus classic ASP.NET, as the technology for the job. As is typical of most workflows, in each state there are a number of business rules that determine (1) who can view what content; (2) who can edit what content; and (3) what the user's action options might be (Edit; Reject; Approve), etc. In essence, there is a lot of logic that needs to be applied to each request before presenting the appropriate view. Being more experienced in ASP.NET, I know that presenting the form(s) as required can be easily achieved through code-behind pages (enable / disable / hide fields). I have not seen how this can be achieved with ASP.NET MVC (but am realising that new thinking is required of me when working with MVC - 'give only the content on a particular view, plus limited user action options'). Therefore, if using ASP.NET MVC, it looks like I would need to create a lot of views. Much of the content in each view would be the same; only the field enabled status or buttons would differ in most instances for these views in each state. For example: Step01Initiate (has 'Save' button); Step01OriginatorView (has 'Edit' button); Step01OriginatorEdit (has 'Save' button); Step01Review (has 'Accept' / 'Reject' buttons); Step01ReviewReject (for reviewer notes; has 'Save' / 'Cancel' buttons). With workflows of up to six states, this would result in a lot of views. I can see the advantages of choosing ASP.NET MVC: (1) 'thin' views in terms of content; and (2) logic consolidation in controllers and different models. Am I thinking along the right lines in terms of applying MVC - 'plenty of views' - or is there a better way to achieve my goal (using ASP.NET MVC or classic ASP.NET)?
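    For what it's worth, a minimal sketch of the alternative the question hints at - a single view whose model carries permission flags computed by the controller (all names here are hypothetical):

        using System.Web.Mvc;

        public class WorkflowStepViewModel
        {
            public string Content { get; set; }
            public bool CanEdit { get; set; }
            public bool CanApprove { get; set; }
            public bool CanReject { get; set; }
        }

        public class WorkflowController : Controller
        {
            public ActionResult Step(int id)
            {
                // The business rules for the current state and user would
                // decide which buttons/fields the single view renders.
                var model = new WorkflowStepViewModel
                {
                    Content = "...",
                    CanEdit = true,
                    CanApprove = false,
                    CanReject = false
                };
                return View(model);
            }
        }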

