Search Results

Search found 59420 results on 2377 pages for 'net general'.


  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever.

    When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data-fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.

    You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching.

    For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure.

    The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.

    So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal.
    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls its container a “kernel”). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });
            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.
    Let’s say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            // more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is: Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-” plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale – that is, updated or outright deleted – forcing it to attempt to hit the database.

    So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate – using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
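    Since the two providers really do behave differently, one simple way to exercise the repository logic in unit tests is a dictionary-backed fake of ICacheProvider. This is a minimal sketch of my own (not from the original post), assuming only the ICacheProvider interface above; the FakeCacheProvider name is hypothetical:

        using System.Collections.Generic;

        public class FakeCacheProvider : ICacheProvider
        {
            private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

            public void Add(string key, object item, int duration)
            {
                _items[key] = item; // duration is ignored; a fancier fake could record it
            }

            public T Get<T>(string key) where T : class
            {
                object item;
                return _items.TryGetValue(key, out item) ? item as T : null;
            }

            public void Remove(string key)
            {
                _items.Remove(key);
            }
        }

    A test can then construct the repository with new MyRepo(new FakeCacheProvider()) and assert on cache hits and invalidation without touching Azure or HttpRuntime – though it says nothing about provider-specific failures like the configuration exception described above, which only integration tests against the real providers will catch.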

    Read the article

  • A few announcements for those in the UK

    - by ScottGu
    This is a quick post to announce a few upcoming events for those in the UK.

    I’ll be presenting in Glasgow, Scotland on March 25th

    I’m doing a free 5 hour presentation in Glasgow on March 25th. I’ll be covering VS 2010, ASP.NET 4, ASP.NET Web Forms 4, ASP.NET MVC 2, Silverlight and potentially show off a few new things that haven’t been announced yet. You can learn more about the event and register for free here. There are only a few spots left – so register quickly. When the event fills up there will be a wait-list – please add yourself to this, as we’ll be encouraging people who won’t be able to attend to let us know ahead of time so that we can add more people to the event.

    I’ll be presenting in Birmingham, England on March 26th

    I’m doing a free 5 hour presentation in Birmingham (UK) on March 26th. I’ll be covering VS 2010, ASP.NET 4, ASP.NET Web Forms 4, ASP.NET MVC 2, Silverlight and also potentially show off a few new things that haven’t been announced yet. You can learn more about the event and register for free here. The event unfortunately filled up immediately (even before I had a chance to blog it) – but there is a wait-list. If you’d like to attend, please add yourself to it, as hopefully a number of people will be able to attend off of it.

    UK Party at MIX

    If you are going to MIX and are from the UK, send mail to [email protected] (or tweet him @plip) for an invite to a party being organized for UK MIX attendees next Sunday (March 14th). Knowing the people involved, I’m sure the party will be fun. <g>

    Hope this helps,

    Scott

    Read the article

  • Five Bucks says you’ll Bookmark this Site: jsFiddle.net

    - by SGWellens
    In my never-ending wandering of technical web sites, I've been encountering links to jsFiddle.net more and more. Why? Because it is an incredibly useful site: It is a great 'sandbox' to play in. You can test, modify and retest HTML, CSS, and JavaScript code. It is a great way to communicate technical issues and share code samples.

    There are four screen areas: three inputs* and one output.

    The three inputs are:

        HTML
        CSS
        JavaScript

    The output is the rendered result.

    Here's a cropped screen shot: What am I thinking? Here's the actual page: Demo

    *There are other inputs. You can select the level of HTML you want to run against (HTML5, HTML 4.01 Strict, etc.). You can add various versions of JavaScript libraries (jQuery, MooTools, YUI, etc.). Many other options are available.

    If I wanted to share this code with someone manually, they would have to copy and paste three separate code chunks into their development environment. And maybe load some external libraries. Not many people are willing to make such an effort. Instead, with jsFiddle, they can just go to the link and click Run. Awesome.

    I hope someone finds this useful (and I was kidding about the five bucks).

    Steve Wellens CodeProject

    Read the article

  • SharePoint Apps and Windows Azure

    - by ScottGu
    Last Monday I had an opportunity to present as part of the keynote of this year’s SharePoint Conference. My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases. This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three core tenets:

        Simplifying the development model and making it consistent between the on-premises version of SharePoint and SharePoint Online provided with Office 365.

        Making the execution model loosely coupled – and enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easy to deploy SharePoint apps using Windows Azure, and avoid having to worry about breaking SharePoint and the apps within it when something is upgraded. This new loosely coupled model also enables developers to write SharePoint applications that can leverage the full capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, EF 5, Async, and more.

        Implementing this loosely coupled model using standard web protocols – like OAuth, JSON, and REST APIs – that enable developers to re-use skills and tools, and easily integrate SharePoint with Web and Mobile application architectures.

    A video of my talk + demos is now available to watch online. In the talk I walked through building an app from scratch – it showed off how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (all built using Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API).

    The new SharePoint Cloud App Model is something that I think is pretty exciting, and it is going to make it a lot easier to build SharePoint apps using the full power of both Windows Azure and the .NET Framework. Using Windows Azure to easily extend SaaS based solutions like Office 365 is also a really natural fit and one that is going to offer a bunch of great developer opportunities.

    Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
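    To make the web-protocols point concrete, here is a minimal sketch of my own (not from the talk) of how externally hosted code can call back into SharePoint 2013 over REST with a plain HttpClient. The site URL and access token are placeholders – a real app would obtain the OAuth token through its registration with SharePoint:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;

        class SharePointRestSketch
        {
            static void Main()
            {
                // Hypothetical values for illustration only.
                var siteUrl = "https://contoso.sharepoint.com";
                var accessToken = "OAUTH-ACCESS-TOKEN-GOES-HERE";

                using (var client = new HttpClient())
                {
                    // The OAuth bearer token authorizes the remote app to the SharePoint site.
                    client.DefaultRequestHeaders.Authorization =
                        new AuthenticationHeaderValue("Bearer", accessToken);
                    // Ask the REST endpoint for JSON instead of the default ATOM/XML.
                    client.DefaultRequestHeaders.Accept.Add(
                        MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose"));

                    // _api/web is the SharePoint 2013 REST entry point for site metadata.
                    var json = client.GetStringAsync(siteUrl + "/_api/web/title").Result;
                    Console.WriteLine(json);
                }
            }
        }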

    Read the article

  • SharePoint Apps and Windows Azure (translated from Portuguese)

    - by Leniel Macaferi
    Last Monday I had the opportunity to present a talk at the SharePoint Conference (in English). My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases. This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three core pillars:

        Simplifying the development model by making it consistent between the on-premises version of SharePoint and the online version of SharePoint provided with Office 365.

        Making the execution model loosely coupled – allowing developers to build apps and write code that can run outside of the core SharePoint service. This makes it easier to deploy SharePoint apps using Windows Azure, avoiding worry about breaking SharePoint and the apps that run inside it when something is upgraded. This new loosely coupled model also lets developers write SharePoint applications that can leverage the capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, Entity Framework 5, Async, and more.

        Implementing this loosely coupled model using standard web protocols – like OAuth, JSON and REST APIs – that allow developers to reuse skills and tools, easily integrating SharePoint with Web and mobile application architectures.

    A video of my talk + demos is available to watch online (in English). In the talk I showed how to build an application from scratch – it showed how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (built entirely using Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API).

    The new SharePoint Cloud App Model is something I find extremely exciting, and it is going to make it much easier to build SharePoint apps using the full power of Windows Azure and the .NET Framework. Using Windows Azure to easily extend SaaS-based solutions like Office 365 is also a very natural fit and one that is going to offer a lot of great opportunities for developers.

    Hope this helps,

    - Scott

    P.S. Besides the blog, I am also using Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu

    Translated from the original post by Leniel Macaferi.

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let’s review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream and – if a filter is in place – through the filter stream’s interface as well. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.Filter property. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                        System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                        System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }

                // Allow proxy servers to cache encoded and unencoded versions separately
                Response.AppendHeader("Vary", "Content-Encoding");
            }
        }

    As you can see, the actual assignment of the Filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).
    To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type, including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, by default output over a certain size is GZip encoded. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss – Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms too don’t support GZip out of the box.

    The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production.

    You already saw the issue of Proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any Proxy servers also check for the Content-Encoding HTTP header to cache their content – not just the URL.

    The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode, and she’ll get a page full of garbage. Wholly undesirable.
    To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";

                return "";
            }

            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching you then specify the following to use that custom rule:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    It’s all Fun and Games until ASP.NET throws an Error

    Ok, so you’re up and running with GZip, you have your caching squared away, and the pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this: (screenshot of binary garbage not reproduced here). Lovely isn’t it?

    What’s happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??` … binary output stripped here

    Notice: no Content-Encoding header, and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            …
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.
    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the headers accordingly. This is actually a better solution since it is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam.

    It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs, just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey – this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content:
    Built-in GZip/Deflate Compression in IIS 7.x
    HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7
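    For reference, the IIS-native route mentioned above is driven by configuration rather than code. This is a hedged sketch of the standard IIS 7 switches (not taken from the article): the urlCompression element under system.webServer toggles static and dynamic compression, and dynamic compression additionally requires the Dynamic Content Compression module to be installed on the server.

        <configuration>
          <system.webServer>
            <!-- Let IIS 7 handle compression natively instead of Response.Filter code -->
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>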

    Read the article

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
    When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services, even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times, and ended up a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>), is when assemblies actually load into a .NET process.

    There are a number of different ways that assemblies are loaded in .NET. When you create a typical project, assemblies usually come from:

        The assembly reference list of the top level 'executable' project
        The assembly references of referenced projects
        Dynamic loading at runtime via AppDomain/Reflection loading

    In addition, .NET automatically loads mscorlib (most of the System namespace) as part of the boot process that hosts the .NET runtime in EXE apps, or in some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code).

    Assembly Loading

    The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether an assembly reference lives in a top level project or in a dependent assembly, assemblies typically load on an as-needed basis, unless explicitly loaded by user code.

    To check this out I ran a simple test: I have a utility assembly Westwind.Utilities, which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding, and a logging piece that allows logging Web content (a dependency on HttpContext.Current), this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly, and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library. The top level Console app has a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies, including System.Web.

    I then add a main program that accesses only a simple utility method in the Westwind.Utilities library that doesn't require any of the classes that access System.Web:

        static void Main(string[] args)
        {
            Console.WriteLine(StringUtils.NewStringId());
            Console.ReadLine();
        }

    StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command?
    I'll wait here while you think about it… … …

    So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list, I see that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference, even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib), there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities loaded.

    If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled-in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words, the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that is actually referenced from your application, and any dependencies cascading down from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious, since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however it's all taken care of by the compiler and linker figuring out what types and members are actually referenced, and including only those assemblies that are in fact referenced by your code or required by any of your dependencies.

    The good news here is: if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code or any dependent assembly code that you are calling, that assembly is never loaded into memory!

    Some Hosting Environments pre-load Assemblies

    The load behavior can vary however. In Console and desktop applications we have full control over assembly loading, so we see the core CLR behavior. However other environments, like ASP.NET for example, will pre-load referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically, ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used.

    Understanding when Assemblies Load

    To clarify what I described in the first example and see it actually happen, let's look at a couple of other scenarios. To see assemblies loading at runtime in real time, let's create a utility function to print out loaded assemblies to the console:

        public static void PrintAssemblies()
        {
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();
            foreach (var assembly in assemblies)
            {
                Console.WriteLine(assembly.GetName());
            }
        }

    Now let's look at the first scenario, where I have a class method that internally uses System.Web.
    In the first scenario let's add a method to my main program like this:

        static void Main(string[] args)
        {
            Console.WriteLine(StringUtils.NewStringId());
            Console.ReadLine();
            PrintAssemblies();
        }

        public static void WebLogEntry()
        {
            var entry = new WebLogEntry();
            entry.UpdateFromRequest();
            Console.WriteLine(entry.QueryString);
        }

    UpdateFromRequest() internally accesses HttpContext.Current to read some information off the ASP.NET Request object, so it clearly needs a reference to System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example?

    No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else), System.Web is not loaded. .NET dynamically loads assemblies as code that needs them is called. No code references the WebLogEntry() method, and so System.Web is never loaded.

    Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency exists. Let's change the code to:

        static void Main(string[] args)
        {
            Console.WriteLine(StringUtils.NewStringId());

            Console.WriteLine("--- Before:");
            PrintAssemblies();

            WebLogEntry();

            Console.WriteLine("--- After:");
            PrintAssemblies();

            Console.ReadLine();
        }

        public static void WebLogEntry()
        {
            var entry = new WebLogEntry();
            entry.UpdateFromRequest();
            Console.WriteLine(entry.QueryString);
        }

    Looking at the code now, when do you think System.Web will be loaded? Will the 'before' list include it? Yup, System.Web gets loaded, but only after it's actually referenced. In fact, up until just before the call to UpdateFromRequest(), System.Web is not loaded - it only loads when the method is actually called and the executing code requires the reference.

    Moral of the Story

    So what have we learned - or maybe remembered again?

        Dependent assembly references are not pre-loaded when an application starts (by default)
        Dependent assemblies that are not referenced by executing code are never loaded
        Dependent assemblies are just-in-time loaded when first referenced in code

    All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about every day, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I've learned about this. And apparently I'm not the only one, as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen, or assumed incorrectly that just having a reference automatically loads the assembly.

    The moral of the story for me is: trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process.
    IOW, System.Web only loads when I use that specific feature, and if I am, well, I clearly have to be running in a Web environment anyway to use it realistically. The alternative would be considerably uglier: pulling out the WebLogEntry class and sticking it into another assembly, and breaking up the logging code. In this case - definitely not worth it.

    So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs, and in most cases you can just rely on .NET to do the right thing. Sometimes though assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's subject for a whole other post…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, CSharp

    Read the article

  • What Features Do You Want To See in .NET 5/C# 5?

    - by Randolpho
    .NET 4 and Visual Studio 2010 are finally here, along with a host of new features. But we wouldn't be who we are if we didn't long for a feature that didn't make the cut, or that was never even considered. So sound off and let the folks at Microsoft know: What features do you want to see in .NET 5? What features do you want to see in C# 5? F#? VB.NET? Before you vote to close: some may find this question too subjective to let stand, but there are other threads that have stood the test of time. Let this one stand as well.

    Read the article

  • Does a .NET 4.0 website load faster for a reason?

    - by Clarence Klopfstein
    I have been using DotNetBlogEngine for many years, and today my host (JodoHost.com) officially turned on support for .NET 4.0. What I've noticed immediately is that the website loads tremendously faster on the first load; subsequent loads are only slightly faster. The website is compiled as a .NET 2.0 web application. Is there a known reason for this performance increase? Was there a change in the .NET 4.0 framework that improved the initial load time of websites into an application pool? This is hosted on a Windows Server 2003 server. Here is the site for reference: http://www.ocdprogrammer.com

    Read the article

  • Are there any web application frameworks usable with .NET 2.0?

    - by adhocgeek
    Apologies if this has been asked many, many times before - I'm afraid I couldn't find any satisfactory answers. I'm stuck in an environment (a bank) where, although we have VS 2008 on the development machines, production machines are locked down to the .Net framework 2.0 and SQL Server 2005. Are there any modern application frameworks that I could employ? I've looked at things like Spring.NET, PureMVC and ASP.NET MVC (S#arp Arch?), but I don't really have the luxury of time to investigate in depth. I don't want to initiate a war over which framework might be best, I just want to know if there are any I can actually use.

    Read the article

  • Tip on Reusing Classes in Different .NET Project Types

    - by psheriff
    All of us have class libraries that we developed for use in our projects. When you create a .NET Class Library project with many classes, you can use that DLL in ASP.NET, Windows Forms and WPF applications. However, for Silverlight and Windows Phone, these .NET Class Libraries cannot be used. The reason is that Silverlight and Windows Phone both use a scaled down version of .NET and thus do not have access to the full .NET framework class library. However, there are many classes and much functionality that will work in the full .NET and in the scaled down versions that Silverlight and Windows Phone use.

    Let’s take an example of a class that you might want to use in all of the above mentioned projects. The code listing shown below might be something that you have in a Windows Forms or an ASP.NET application.

        using System.Text.RegularExpressions;

        public class StringCommon
        {
            public static bool IsAllLowerCase(string value)
            {
                return new Regex(@"^([^A-Z])+$").IsMatch(value);
            }

            public static bool IsAllUpperCase(string value)
            {
                return new Regex(@"^([^a-z])+$").IsMatch(value);
            }
        }

    The StringCommon class is very simple, with just two methods, but you know that the System.Text.RegularExpressions namespace is available in Silverlight and Windows Phone. Thus, you know that you may reuse this class in your Silverlight and Windows Phone projects. Here is the problem: if you create a Silverlight Class Library project and you right-click on that project in Solution Explorer and choose Add | Add Existing Item… from the menu, the class file StringCommon.cs will be copied from the original location and placed into the Silverlight Class Library project. You now have two files with the same code. If you want to change the code you will now need to change it in two places! This is a maintenance nightmare that you have just created. If you then add this to a Windows Phone Class Library project, you now have three places you need to modify the code!

    Add As Link

    Instead of creating three separate copies of the same class file, you want to leave the original class file in its original location and just create a link to that file from the Silverlight and Windows Phone class libraries. Visual Studio will allow you to do this, but you need to do one additional step in the Add Existing Item dialog (see Figure 1). You will still right mouse click on the project and choose Add | Add Existing Item… from the menu. You will still highlight the file you want to add to your project, but DO NOT click on the Add button. Instead click on the drop down portion of the Add button and choose the “Add As Link” menu item. This will now create a link to the file on disk and will not copy the file into your new project.

    Figure 1: Add as Link will create a link, not copy the file over.

    When this linked file is added to your project, there will be a different icon next to that file in the Solution Explorer window. This icon signifies that this is a link to a file in another folder on your hard drive.

    Figure 2: The linked file will have a different icon to show it is a link.

    Of course, if you have code that will not work in Silverlight or Windows Phone – because the code has dependencies on features of .NET that are not supported on those platforms – you can always wrap conditional compilation directives around the offending code so it will be removed when compiled in those class libraries (see the sketch at the end of this entry).

    Summary

    In this short blog entry you learned how to reuse one of your class libraries from ASP.NET, Windows Forms or WPF applications in your Silverlight or Windows Phone class libraries.
    You can do this without creating a maintenance nightmare by using the “Add As Link” feature of the Add Existing Item dialog.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
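    Here is the conditional-compilation sketch referenced above. It is my own hedged illustration, not code from the article: the SILVERLIGHT symbol is defined by Silverlight (and Windows Phone 7) project templates, while the WebLogger class and its fallback behavior are hypothetical.

        public static class WebLogger
        {
            public static void Log(string message)
            {
        #if !SILVERLIGHT
                // Full .NET only: System.Web is not part of the scaled down
                // Silverlight/Windows Phone runtimes.
                System.Web.HttpContext.Current.Trace.Write(message);
        #else
                // Scaled down runtimes fall back to plain debug output.
                System.Diagnostics.Debug.WriteLine(message);
        #endif
            }
        }

    The same linked .cs file then compiles cleanly in the full framework, Silverlight, and Windows Phone class libraries.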

    Read the article

  • Configuring .NET Users in IIS 6 (ASP.NET 2.0, Win Server 2003)

    - by Bernhard
    Is it possible in IIS 6 to administer users as in IIS 7 (e.g. as described here: http://technet.microsoft.com/en-us/library/cc731783%28v=ws.10%29.aspx)? In IIS 7 you can define users within the ASP.NET group with a click on .NET Users, and it automatically creates a MS SQL Server DB in the background in the website directory. So far I haven't found anything about the issue; most forum questions are about how to migrate ASP.NET sites from IIS 6 to IIS 7.
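    For context, a hedged sketch (mine, not from the thread): the IIS 7 “.NET Users” feature is essentially a UI over the ASP.NET membership system, which on IIS 6 / ASP.NET 2.0 you would wire up by hand – run aspnet_regsql.exe to create the membership schema, then register a SqlMembershipProvider in web.config along these lines (the connection string is a placeholder):

        <configuration>
          <connectionStrings>
            <add name="MembershipDb"
                 connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True" />
          </connectionStrings>
          <system.web>
            <membership defaultProvider="SqlProvider">
              <providers>
                <add name="SqlProvider"
                     type="System.Web.Security.SqlMembershipProvider"
                     connectionStringName="MembershipDb"
                     applicationName="/" />
              </providers>
            </membership>
          </system.web>
        </configuration>

    Users can then be created programmatically via the Membership.CreateUser() API or through the Visual Studio Web Site Administration Tool, since IIS 6 itself has no equivalent UI.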

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
    Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express Tools. In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements. This includes improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 tooling support in the box, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE tooling support for web projects, HTML5 IntelliSense for ASP.NET, and more.

    TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack. Brian Harry has a good blog post about the TFS updates here.

    VS 2010 SP1 Download

    Click here to download and install SP1 for all versions of Visual Studio (including Express). This installer examines what you have installed on your machine, and only downloads the servicing downloads necessary to update them to SP1. The time it takes to download and update will consequently depend on what all you have installed. Jon Galloway has a good blog post on tips to speed up the SP1 install by uninstalling unused components.

    Web Platform Installer Bundles

    In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components:

        VS 2010 SP1 WebPI Bundle
        Visual Web Developer 2010 SP1 WebPI Bundle

    The above WebPI bundles automate installing:

        VS 2010/VWD 2010 SP1
        ASP.NET MVC 3 (runtime + tools support)
        IIS 7.5 Express
        SQL Server Compact Edition 4.0 (runtime + tools support)
        Web Deployment 2.0

    Only the components that are not already installed on your machine will be downloaded when you use the above WebPI bundles. This means that you can run the WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) and not have to worry about wasting time downloading/installing these components again.

    Earlier this year I did two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1. Read the below posts to learn more about how to use them after you run the above bundles:

        Visual Studio 2010 SP1 and IIS Express
        Visual Studio 2010 SP1 and SQL CE for ASP.NET

    The above feature additions work with any web project type – including both ASP.NET Web Forms and ASP.NET MVC.

    Additional SP1 Notes

    Two additional notes about VS 2010 SP1:

    1) One change we made between RTM and SP1 is that by default Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP. We made this change because we’ve seen reports of (often inconsistent) performance issues caused by older video drivers. Running in software mode eliminates these and delivers consistent speeds. You can optionally re-enable hardware acceleration with SP1 using Visual Studio’s Tools->Options menu command – we did not remove support for HW acceleration on XP, we simply changed the default setting for it. Jason Zander has written more details on the change and how to re-enable HW acceleration inside VS here.

    2) We have discovered an issue where installing SP1 can cause TSQL IntelliSense within SQL Server Management Studio 2008 R2 to stop working (typing still works – but IntelliSense doesn’t show up). The SQL team is investigating this now and I’ll post an update on how to fix this once more details are known.

    Hope this helps,

    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How to experience gradual improvement of knowledge while a newbie does .NET maintenance programming?

    - by amir
    I started my career as a software developer about 6 months ago. This is my first job, and I am the only developer in this company. I gained my .NET knowledge through self study and by doing some university projects. Our systems have old foundations based on an earlier version of .NET, and I'm starting to feel that I am not improving since I am a maintenance programmer here. Everything is old, and my manager is not really taking any chances on gradually improving the software. What is your opinion? What should I do? I am a newbie and work hard to find my way through, but there is no other developer, not even a senior one, to help me here. I need your advice on my situation. And one last thing: can I get a new job having done only maintenance programming? I mean, won't managers say that I don't have the experience of developing new software from scratch? I feel redundant; what do I do?

    Read the article

  • Office Live add-in 1.5 cannot be installed

    - by wisecarver
    Having trouble with a recent Windows Update that failed to install the Office Live add-in 1.5? This has been driving me nuts on a Windows 7 Ultimate 64-bit system for three days. Windows Update would fail; I'd click the “Try again” button and… it would fail again. So like a good boy I used http://www.bing.com and have been searching for resolutions. Success! The Microsoft Social forums: http://social.answers.microsoft.com/Forums/en-US/officeinstall/thread/4c62e615-a3e5-4cf9-ae6a-5fd870dfb0bc http://support.microsoft...(read more)

    Read the article

  • Code Contracts: Hiding ContractException

    - by DigiMortal
    It’s time to move on and improve the randomizer I wrote as an example of static checking of code contracts. In this posting I will modify the contracts and give some explanations about pre-conditions and post-conditions. I will also show you how to avoid ContractExceptions and how to replace them with your own exceptions. As a first thing, let’s take a look at my randomizer.

        public class Randomizer
        {
            public static int GetRandomFromRange(int min, int max)
            {
                var rnd = new Random();
                return rnd.Next(min, max);
            }

            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires(min < max, "Min must be less than max");

                var rnd = new Random();
                return rnd.Next(min, max);
            }
        }

    We have some problems here. We need a contract for the method output, and we also need a better exception handling mechanism. As the ContractException type is hidden from us, we have to switch from ContractException to some other exception type that we can catch.

    Adding a post-condition

    Pre-conditions are contracts for a method’s input interface. Read it as follows: pre-conditions make sure that all conditions for the method’s successful run are met. Post-conditions are contracts for the output interface of a method. So, post-conditions are for output arguments and the return value. My code misses the post-condition that checks the return value. The return value in this case must be greater than or equal to the minimum value and less than or equal to the maximum value. To make sure the method can return only a correct value, I added a call to the Contract.Ensures() method.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires(min < max, "Min must be less than max");

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }

    I think that the line I added does not need any further comments.

    Avoiding ContractException for the input interface

    ContractException lives in a hidden namespace and we cannot see it at design time. It is the common exception type for all contract violations that we have not switched over to some other exception type. The case of the Contract.Requires() method is simple: we can tell it what kind of exception we need if something goes wrong with the contract it enforces.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }

    Now, if we violate the input interface contract by giving a min value that is not less than the max value, we get an ArgumentOutOfRangeException.

    Avoiding ContractException for the output interface

    The output interface is more complex to control. We cannot specify an exception type there and hope that this type of exception will be thrown if something goes wrong. Instead we have to use a delegate that gathers information about the problem and throws the exception we expect to be thrown. In the documentation you can find the following example of the delegate I mentioned.
        Contract.ContractFailed += (sender, e) =>
        {
            e.SetHandled();
            e.SetUnwind(); // cause code to abort after event
            Assert.Fail(e.FailureKind.ToString() + ":" + e.DebugMessage);
        };

    We can use this delegate to throw the exception. Let’s move the code to a separate method too. Here is our method, now using ContractException hiding.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires(min < max, "Min must be less than max");

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );
            Contract.ContractFailed += Contract_ContractFailed;

            var rnd = new Random();
            return rnd.Next(min, max) + 1000; // +1000 deliberately pushes the result out of range so the post-condition fails
        }

    And here is the delegate that creates the exception.

        public static void Contract_ContractFailed(object sender,
            ContractFailedEventArgs e)
        {
            e.SetHandled();
            e.SetUnwind();

            throw new Exception(e.FailureKind.ToString() + ":" + e.Message);
        }

    Basically, in this delegate we can do whatever we like with output interface errors. We can even introduce our own contract exception type. As you will see later, the ContractFailed event is also very useful in unit testing.
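    To see the precondition replacement in action, here is a minimal usage sketch of my own (not from the post) that violates the input contract of the Requires<ArgumentOutOfRangeException> version shown above:

        using System;

        class Program
        {
            static void Main()
            {
                try
                {
                    // min (10) is not less than max (5), so the
                    // Contract.Requires<ArgumentOutOfRangeException> precondition
                    // fails before the method body ever runs.
                    Randomizer.GetRandomFromRangeContracted(10, 5);
                }
                catch (ArgumentOutOfRangeException ex)
                {
                    Console.WriteLine("Contract violated: " + ex.Message);
                }
            }
        }

    Note that this behavior requires the Code Contracts binary rewriter (runtime contract checking) to be enabled for the build; without it, Contract.Requires<TException> calls are not enforced at runtime.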

    Read the article

  • Mercurial Conversion from Team Foundation Server

    - by mhawley
    I’m using Twitter. Follow me @matthawley

    One of my many (almost) daily tasks when working on the CodePlex platform since releasing Mercurial as a supported version control system is converting projects from Team Foundation Server (TFS) to Mercurial. I'm happy to say that of all the conversions I have done since mid-January, the success rate of migrating full source history is about 95%. To get to this success point, I have had to learn and refine several techniques utilizing a few different tools… (read more)

    Read the article

  • LLBLGen Pro feature highlights: model views

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) To be able to work with large(r) models, it's key that you can view subsets of these models so you can take a better, more focused look at them, for example because you want to display how a subset of entities relates to one another in a different way than a flat list of entities can. LLBLGen Pro offers this in the form of Model Views. Model Views are views on parts of the entity model of a project, and the subsets are displayed in a graphical way. Additionally, one can add documentation to a Model View. As Model Views display parts of the model graphically, they're easier to explain to people who aren't familiar with entity models, e.g. the stakeholders you're interviewing for your project. The documentation can then be used to communicate specifics of the elements on the model view to the developers who have to write the actual code. Below I've included an example. It's a model view on a subset of the entities of AdventureWorks. It displays several entities, their relationships (both relational and inheritance relationships) and also some specifics gathered from the interview with the stakeholder. As the information is inside the actual project the developer will work with, it doesn't have to be converted to and from e.g. Word documents or other intermediate formats; it's all in the same project. This makes sure there are fewer errors and misunderstandings. (Of course you can hide the docked documentation pane or dock it to another corner.) The Model View can contain entities which are placed in different groups, which makes it ideal for grouping entities together for close examination even though they're stored in different groups. The Model View is a first-class citizen of the code generator. This means you can write templates which consume Model Views and generate code accordingly. E.g. you can write a template which generates a service per Model View and exposes the entities in the Model View as a single entity graph, fetched through a method. (This template isn't included in the LLBLGen Pro package, but it's easy to write it yourself with the built-in template editor.) Viewing an entity model in different ways is key to fully understanding the entity model, and Model Views help with that.

    Read the article

  • DLL-s needed to run ASP.NET MVC 3 RC on Windows Azure

    - by DigiMortal
    This weekend I got one of my new apps running on Windows Azure. I am building this application using ASP.NET MVC 3 RC and the Razor view engine. In this posting I will list the DLLs you need to have as local copies to get ASP.NET MVC 3 RC running on a Windows Azure web role. Besides the assemblies that are already referenced, you may need to add references to some more assemblies. Here is the list:

    Microsoft.Web.Infrastructure
    System.Web.Helpers
    System.Web.Mvc
    System.Web.Razor
    System.Web.WebPages
    System.Web.WebPages.Razor
    WebMatrix.Data

    You can find the Razor and ASP.NET Web Pages related assemblies in this folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ NB! If your project uses dynamically loaded assemblies that are not referenced from any of your projects, make sure you include them as project items located in the bin folder. This way these DLLs are also put into the deployment package and you don’t have to create code-level references to them.
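    As a belt-and-braces measure you can fail fast when a local copy is missing. The following is my sketch, not part of the original posting; it assumes you call VerifyLocalCopies() from Application_Start and that the simple names above are exactly the assemblies that must be probed from the bin folder.

    // Sanity check (a sketch, not from the original post): try to load every
    // assembly that must ship as a local copy, so a missing DLL surfaces as
    // one readable error at startup instead of a failure deep inside the role.
    using System;
    using System.Reflection;

    public static class AssemblyChecker
    {
        private static readonly string[] Required =
        {
            "Microsoft.Web.Infrastructure",
            "System.Web.Helpers",
            "System.Web.Mvc",
            "System.Web.Razor",
            "System.Web.WebPages",
            "System.Web.WebPages.Razor",
            "WebMatrix.Data"
        };

        public static void VerifyLocalCopies()
        {
            foreach (var name in Required)
            {
                try
                {
                    // Loading by simple name probes the application's bin folder.
                    Assembly.Load(name);
                }
                catch (Exception ex)
                {
                    throw new InvalidOperationException(
                        "Missing local copy of " + name + ": " + ex.Message, ex);
                }
            }
        }
    }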

    Read the article

  • Using the Parallel class to make multithreading easy

    - by thycotic
    Kevin has posted about the Parallel class and how to use it to easily do multiple operations at once without radically changing the structure of your code.  Very neat stuff.   Jonathan Cogley is the CEO of Thycotic Software, an agile software services and product development company based in Washington DC.  Secret Server is our flagship enterprise password vault.
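    The excerpt doesn’t show any code, so here is a minimal sketch of the Parallel class in action (my example, not Kevin’s original): Parallel.Invoke runs independent actions concurrently, and Parallel.For spreads a loop body across available cores.

    using System;
    using System.Threading.Tasks;

    class ParallelDemo
    {
        static void Main()
        {
            // Run independent operations at once; Invoke returns
            // only when all three actions have completed.
            Parallel.Invoke(
                () => Console.WriteLine("load customers"),
                () => Console.WriteLine("load orders"),
                () => Console.WriteLine("load products"));

            // Parallelize a loop body; each index may run on a different core.
            var squares = new int[100];
            Parallel.For(0, squares.Length, i => squares[i] = i * i);
        }
    }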

    Read the article

  • C# with keyword equivalent

    - by oazabir
    There’s no with keyword in C# like there is in Visual Basic, so you end up writing code like this:

    this.StatusProgressBar.IsIndeterminate = false;
    this.StatusProgressBar.Visibility = Visibility.Visible;
    this.StatusProgressBar.Minimum = 0;
    this.StatusProgressBar.Maximum = 100;
    this.StatusProgressBar.Value = percentage;

    Here’s a workaround:

    With.A<ProgressBar>(this.StatusProgressBar, (p) =>
    {
        p.IsIndeterminate = false;
        p.Visibility = Visibility.Visible;
        p.Minimum = 0;
        p.Maximum = 100;
        p.Value = percentage;
    });

    It saves you from typing the same class instance or control name over and over again. It also makes the code more readable, since it clearly says that you are working with a progress bar control within the block. If you are setting properties of several controls one after another, such code is easier to read because there is a dedicated block for each control. It’s a very simple one-line function that does it:

    public static class With
    {
        public static void A<T>(T item, Action<T> work)
        {
            work(item);
        }
    }

    You could argue that you can just do this:

    var p = this.StatusProgressBar;
    p.IsIndeterminate = false;
    p.Visibility = Visibility.Visible;
    p.Minimum = 0;
    p.Maximum = 100;
    p.Value = percentage;

    But it’s not as elegant. You are introducing a variable “p” into the scope of the whole function, which goes against naming conventions, and you can’t limit the scope of “p” to a certain part of the function.
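    A variation worth noting (my sketch, not from the post): the same one-liner also works as an extension method, which reads even closer to VB’s With block at the call site.

    using System;

    public static class WithExtensions
    {
        // Same idea as With.A<T>, exposed as an extension method.
        public static void With<T>(this T item, Action<T> work)
        {
            work(item);
        }
    }

    // Usage:
    // this.StatusProgressBar.With(p =>
    // {
    //     p.IsIndeterminate = false;
    //     p.Value = percentage;
    // });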

    Read the article

  • Simple Netduino Go Tutorial: Flashing RGB LEDs with a potentiometer

    - by Chris Hammond
    In case you missed the announcement on 4/4, the guys at Secret Labs, along with other members of the Netduino community, have come out with a new platform called Netduino Go. Head on over to www.netduino.com for the introduction forum post. This post shows how to quickly get up and running with your Netduino Go, based on Chris Walker’s getting started forum post, with some enhancements that I think will make it easier to get up and running, as Chris’ post unfortunately leaves a few things out. Hardware...(read more)

    Read the article

  • How to use the Netduino Go Piezo Buzzer Module

    - by Chris Hammond
    Originally posted on ChrisHammond.com Over the next couple of days people should be receiving their Netduino Go Piezo Buzzer Modules, at least if they have ordered them from Amazon. I was lucky enough to get mine very quickly from Amazon and put together a sample project the other night. This is by no means a complex project, and most of it is code from the public domain for projects based on the original Netduino. Project Overview So what does the project do? Essentially it plays 3 “tunes” that...(read more)

    Read the article

  • My Code Kata–A Solution Kata

    - by Glav
    There are many developers and coders out there who like to do code katas to keep their coding ability sharp and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose: yes, they serve to hone your skills, but that’s about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. That is fair enough, as it is how they are designed, but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purpose from a practice perspective and allows me to continue solving problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, they sometimes even end up being useful tools. With that in mind, I thought I’d share my current ‘kata’. It is not really a code kata, as it is too big; I prefer to think of it as a ‘solution kata’. The code is on bitbucket here. What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. The requirements were pretty simple:

    - Must be able to define a map to describe the world using simple X,Y co-ordinates (Z co-ordinates as well if you feel like getting clever).
    - Should have the concept of entrances, exits, solid blocks, and potentially other materials (again, if you want to get clever).
    - A coder should be able to easily write a class which will act as an inhabitant of the world.
    - An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to decide on an action, which it passes back to the ‘world’ for processing.
    - At a minimum, an inhabitant will have sight and speed characteristics which determine how far they can ‘see’ in the world and how fast they can move.
    - Coders who write a really bad ‘inhabitant’ should not adversely affect the rest of the world.
    - Should allow multiple inhabitants in the world.

    So that was the solution I set out to build as a practice solution and a little bit of fun. It had some interesting problems to solve, and I figured that if it turned out OK, I could potentially use it as a ‘developer test’ for interviews: ask a potential coder to write a class for an inhabitant, show the coder the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen that is a little more complex. I have been playing with this solution for a short time now and have the basic concepts working. Below is a screen shot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks ‘*’ are the players, the green ‘O’ the entrance, the purple ‘^’ the exit, and the maroon/brown ‘#’ are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write, and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.
    public class TestPlayer : LivingEntity
    {
        public TestPlayer()
        {
            Name = "Beta Boy";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            try
            {
                var action = new MovementAction();

                // Try forward first, then left, back, and right.
                if (actionContext.Position.ForwardFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Forward;
                        return action;
                    }
                }
                if (actionContext.Position.LeftFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Left;
                        return action;
                    }
                }
                if (actionContext.Position.RearFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Back;
                        return action;
                    }
                }
                if (actionContext.Position.RightFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Right;
                        return action;
                    }
                }
                return action;
            }
            catch (Exception ex)
            {
                World.WriteDebugInformation("Player: " + Name, string.Format("Player generated exception: {0}", ex.Message));
                throw; // rethrow without resetting the stack trace (the original used "throw ex")
            }
        }

        private bool CheckAccessibilityOfMapBlock(MapBlock block)
        {
            // A null block is open space; otherwise check the block's accessibility.
            if (block == null ||
                block.Accessibility == MapBlockAccessibility.AllowEntry ||
                block.Accessibility == MapBlockAccessibility.AllowExit ||
                block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
            {
                return true;
            }
            return false;
        }
    }

    It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to the inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina and strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here. Along the way I have played with the Parallel Extensions to spread the compute-intensive work across all cores, had to think hard about the visibility of methods and properties (class design was paramount), and worked out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as solving a host of other issues. So that is my ‘solution kata’. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web browser visualiser, and maybe even a map designer. What do you do to keep the fires burning?
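    For a taste of what a different inhabitant might look like, here is a sketch of a ‘right-hand rule’ player, the classic maze strategy of hugging the right wall. It reuses only the surface visible in the sample above (LivingEntity, ActionResult, MovementAction, MovementDirection, MapBlock, MapBlockAccessibility); it is my illustration, and the actual project on bitbucket may differ in its details.

    public class RightHandPlayer : LivingEntity
    {
        public RightHandPlayer()
        {
            Name = "Right-Hand Rule";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            // Prefer right, then forward, then left; turn back only as a last resort.
            var action = new MovementAction();
            if (CanEnter(FirstOrNull(actionContext.Position.RightFacingPositions)))
                action.DirectionToMove = MovementDirection.Right;
            else if (CanEnter(FirstOrNull(actionContext.Position.ForwardFacingPositions)))
                action.DirectionToMove = MovementDirection.Forward;
            else if (CanEnter(FirstOrNull(actionContext.Position.LeftFacingPositions)))
                action.DirectionToMove = MovementDirection.Left;
            else
                action.DirectionToMove = MovementDirection.Back;
            return action;
        }

        private static MapBlock FirstOrNull(MapBlock[] blocks)
        {
            return (blocks != null && blocks.Length > 0) ? blocks[0] : null;
        }

        private static bool CanEnter(MapBlock block)
        {
            // Same rule as the post's sample: a null block is open space.
            return block == null ||
                block.Accessibility == MapBlockAccessibility.AllowEntry ||
                block.Accessibility == MapBlockAccessibility.AllowExit ||
                block.Accessibility == MapBlockAccessibility.AllowPotentialEntry;
        }
    }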

    Read the article
