Search Results

Search found 296 results on 12 pages for 'reflector'.

Page 11 of 12

  • How do I prevent my code from being stolen?

    - by Calmarius
    What happens exactly when I launch a .NET exe? I know that C# is compiled to IL code, and I think the generated exe file is just a launcher that starts the runtime and passes the IL code to it. But how? And how complex a process is it? The IL code is embedded in the exe. I think it can be executed from memory without writing it to disk, while ordinary exes cannot (ok, they can, but it is very complicated). My final aim is to extract the IL code and write my own encrypted launcher to prevent script kiddies from opening my code in Reflector and just stealing all my classes easily. Well, I can't prevent reverse engineering completely. If they are able to inspect the memory and catch the moment when I'm passing the pure IL to the runtime, then it won't matter whether it is a .NET exe or not, will it? I know there are several obfuscator tools, but I don't want to mess up the IL code itself. EDIT: so it seems it isn't worth trying what I wanted. They will crack it anyway... So I will look for an obfuscation tool. And yes, my friends said too that it is enough to rename all symbols to meaningless names, and reverse engineering won't be so easy after all.
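    A minimal sketch of why the encrypted-launcher plan buys little (Decrypt and payload.bin are hypothetical; Assembly.Load(byte[]) and EntryPoint are the real APIs). Whoever can dump the decrypted byte array from memory has the assembly back, which is exactly the weakness described above:

        byte[] il = Decrypt(File.ReadAllBytes("payload.bin")); // Decrypt is an invented helper
        Assembly asm = Assembly.Load(il); // the CLR loads raw IL straight from memory
        asm.EntryPoint.Invoke(null, new object[] { new string[0] }); // assumes a Main(string[] args) entry point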

    Read the article

  • Faster way to convert from a String to generic type T when T is a valuetype?

    - by Kumba
    Does anyone know of a fast way in VB to go from a String to a generic type T constrained to a value type (Of T As Structure), when I know that T will always be some number type? This is too slow for my taste: Return DirectCast(Convert.ChangeType(myStr, GetType(T)), T) But it seems to be the only sane method of getting from a String to T. I've tried using Reflector to see how Convert.ChangeType works, and while I can convert from the String to a given number type via a hacked-up version of that code, I have no idea how to jam that type back into T so it can be returned. I'll add that part of the speed penalty I'm seeing (in a timing loop) is because the return value is getting assigned to a Nullable(Of T) value. If I strongly type my class for a specific number type (i.e., UInt16), then I can vastly increase the performance, but then the class would need to be duplicated for each numeric type that I use. It'd almost be nice if there were a converter to/from T while working on it in a generic method/class. Maybe there is and I'm oblivious to its existence?
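    One alternative worth timing, sketched here as an assumption rather than a measured fix: cache a strongly typed parse delegate per closed T, so the reflection cost is paid once per type instead of per call. This assumes every T used has a static Parse(String) method, which the built-in numeric types (UInt16, Int32, Double, ...) do. In C# the idea looks like this (Parser is an invented name):

        static class Parser<T> where T : struct
        {
            // Bound once per T, the first time Parser<T> is touched.
            public static readonly Func<string, T> Parse = CreateParser();

            static Func<string, T> CreateParser()
            {
                // Throws at type-initialization time if T has no static Parse(string).
                var method = typeof(T).GetMethod("Parse", new[] { typeof(string) });
                return (Func<string, T>)Delegate.CreateDelegate(typeof(Func<string, T>), method);
            }
        }

        // usage: ushort value = Parser<ushort>.Parse(myStr);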

    Read the article

  • SharePoint 2010 – Central Admin tooling to create host header site collections

    - by eJugnoo
    Just like SharePoint 2007, you can create host-header based site collections in SharePoint 2010 as well. It means that you do not necessarily need to create a site collection under a managed path like /sites/; you can create multiple root-level site collections on the same web application/port by using host-header site collections. All you need to do is point your domain or sub-domain to your web application and create a matching site collection. But, just like in 2007, it is something you do using STSADM; it is not available in the Central Admin UI in 2010 either. You can, however, now also use PowerShell to create one: C:\PS>$w = Get-SPWebApplication http://sitename C:\PS>New-SPSite http://www.contoso.com -OwnerAlias "DOMAIN\jdoe" -HostHeaderWebApplication $w -Title "Contoso" -Template "STS#0" This example creates a host header site collection. Because the template is provided, the root Web of this site collection will be created. I've been playing with WCM in SharePoint 2010 more and more, and for that I preferred creating hosts file entries for the desired domains and creating site collections by those headers in my dev environment. I used PowerShell initially, but then got interested in building my own UI on Central Admin instead. Developed with Visual Studio 2010: I used the new Visual Studio 2010 tooling to create an empty SharePoint 2010 project. I added an application page (there is no option to add an _Admin page item in VS 2010 RC), which got created in the Layouts "mapped" folder. I then created a new Admin mapped folder for the 14-"hive" and moved my new page there instead. I didn't change the base class for the page; it simply runs under _admin, but it is indeed a LayoutsPageBase-inherited page.
    To introduce an action link in the Central Admin console, I created the following element:

        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <CustomAction
            Id="CreateSiteByHeader"
            Location="Microsoft.SharePoint.Administration.Applications"
            Title="Create site collections by host header"
            GroupId="SiteCollections"
            Sequence="15"
            RequiredAdmin="Delegated"
            Description="Create a new top-level web site, by host header" >
            <UrlAction Url="/_admin/OfficeToolbox/CreateSiteByHeader.aspx" />
          </CustomAction>
        </Elements>

    I used Reflector to understand any special code behind createpage.aspx, and created a new page for our purpose: CreateSiteByHeader.aspx. From there I quickly created similar code-behind, without all the fancy Farm Config Wizard handling, and dealt with alternate implementations of sealed classes! The goal was to create a professional-looking, OOB-type experience. I also added Regex validation to ensure the user types a valid domain name as the header value. Below is the result… Release @ Codeplex: I've released the WSP on OfficeToolbox @ Codeplex, and you can download it from here. Hope you find it useful… -- Sharad
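    For comparison, a hedged sketch of the same operation in server object model code, which is roughly what such an application page would call (assumes the SharePoint 2010 assemblies and the SPSiteCollection.Add overload whose final boolean is useHostHeaderAsSiteName; all names and values are illustrative):

        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://sitename"));
        using (SPSite site = webApp.Sites.Add(
            "http://www.contoso.com",               // full URL becomes the host header
            "Contoso", "Created by host header",
            1033, "STS#0",                          // LCID and template
            "DOMAIN\\jdoe", "John Doe", "jdoe@contoso.com",
            null, null, null,                       // secondary contact
            true))                                  // useHostHeaderAsSiteName
        {
            // root web is created because a template was supplied
        }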

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: with this post you create a scenario that is unsupported by Microsoft. It will break your Microsoft support for this server, so handle with care. I am the administrator of a TFS environment with a lot of Project Collections. In the supported TFS 2010 configuration you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed on one server. Jim Lamb wrote a post on how you can modify your system to change this behaviour, but since I have so many Project Collections, I automated this with the TFS API. When you install a new build server via the UI, you do the following steps: register the build service (with this you hook the Windows server into the build server environment), add a new build controller, and add a new build agent. So in pseudocode, the code would look like: foreach (projectCollection in GetAllProjectCollections) { CreateNewWindowsService(); RegisterService(); AddNewController(); AddNewAgent(); } The following code fragments show the most important parts of the method implementations; attached is the full project. CreateNewWindowsService: we create a new Windows service with the SC command via the Diagnostics.Process class: var pi = new ProcessStartInfo("sc.exe") { Arguments = string.Format("create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"", serviceHostName, tpcName) }; Process.Start(pi); pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName); Process.Start(pi); RegisterService: the trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. To get information on these internals you need friendly Microsoft contacts and .NET Reflector. // Indicate which build service host instance we are using typeof(BuildServiceHostUtilities).Assembly.GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess").InvokeMember("NamedInstance", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static, null, null, new object[] { serviceName }); // Create the build service host serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint); serviceHost.Save(); // Register the build service host BuildServiceHostUtilities.Register(serviceHost, user, password); AddNewController and AddNewAgent: once you have the BuildServiceHost, the rest is pretty straightforward. There are methods on the BuildServiceHost to modify the controllers and the agents: controller = serviceHost.CreateBuildController(controllerName); agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller); controller.AddBuildAgent(agent); You have now seen the highlights of the application. If you need it and want sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs.

    Read the article

  • Playing with aspx page cycle using JustMock

    - by mehfuzh
    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps: public enum PageStep { PreInit, Load, PreRender, UnLoad } With that in place, I first created the page object [not mocking]: Page page = new Page(); Here, our target is to fire up the page processing through a ProcessRequest call. If we take a look inside the method with Reflector.NET, the call trace goes like: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> finally ProcessRequest. Inside SetIntrinsics, calls are required into HttpRequest, HttpResponse and HttpBrowserCapabilities. Using this clue, we can easily know the classes/calls we need to mock in order to get through to the expected call. Accordingly, for HttpBrowserCapabilities our required mock code will look like: var browser = Mock.Create<HttpBrowserCapabilities>(); // Arrange Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html"); Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8"); Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8"); Now, HttpBrowserCapabilities is obtained through [Instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock: var request = Mock.Create<HttpRequest>(); Then, add the required get call: Mock.Arrange(() => request.Browser).Returns(browser); Since [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also applied to the request object, and to check that they are set properly, we can add the following lines as well [not required though]: bool requestContentEncodingSet = false; Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => requestContentEncodingSet = true); Similarly, for the response we can write: var response = Mock.Create<HttpResponse>(); bool responseContentEncodingSet = false; Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true); Finally, I created a mock of HttpContext and set the Request and Response properties to return the mocked versions.
    var context = Mock.Create<HttpContext>(); Mock.Arrange(() => context.Request).Returns(request); Mock.Arrange(() => context.Response).Returns(response); Since Page internally calls the RenderControl method, we just need to replace that with our own, and optionally we can check that it was invoked properly: bool rendered = false; Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true); That's it; the rest of the code is simple, where I assert the page cycle against the PageSteps that I defined earlier: var pageSteps = new Queue<PageStep>(); page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); }; page.Load += delegate { pageSteps.Enqueue(PageStep.Load); }; page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); }; page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); }; page.ProcessRequest(context); Assert.True(requestContentEncodingSet); Assert.True(responseContentEncodingSet); Assert.True(rendered); Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit); Assert.Equal(pageSteps.Dequeue(), PageStep.Load); Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender); Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad); Mock.Assert(request); Mock.Assert(response); You can get the test class shown in this post here to try for yourself, with of course JustMock :-). Enjoy!!

    Read the article

  • Searching for the Perfect Developer's Laptop.

    - by mbcrump
    I have been in the market for a new computer for several months. I set out with a budget of around $1200. I knew up front that the machine would be used for developing applications and maybe some light gaming. I kept switching between buying a laptop or a desktop, but the laptop won because I can carry it everywhere, and with a desktop I can't. I searched for about 2 weeks and narrowed it down to a list of must-haves: an i7 processor (I wasn't going to settle for an i5 or AMD; I wanted a true quad-core machine, not 2 dual-cores fused together); a 15.6" monitor; an SSD, 128GB or larger (it's almost 2011 and I don't want an old standard HDD in this machine); 8GB of DDR3 RAM (the more the better, right?); a 1GB video card (prefer NVIDIA), since I might want to play games on this; an HDMI port (almost standard on new machines), to stream Netflix to the HDTV in the hotel room when I am on the road; a built-in webcam, to video chat with the wife and kids if I am on the road; and a 6-cell battery (I've read that an i7 in a laptop really kills the battery; a 6-cell or 9-cell is even better). That is a pretty long list for a budget of around $1200. I searched around the internet and could not buy this machine prebuilt for under $1200, even with coupons and my company's 10% Dell discount. The only way I would get a machine like this was to buy a prebuilt one and replace parts. I chose the Lenovo Y560 on Newegg as my base. Below is a top-down picture of it. Part 1: The Hardware. The specs for this machine: Color: Gray; Operating System: Windows 7 Home Premium 64-bit; CPU: Intel Core i7-740QM (1.73GHz); Screen: 15.6" WXGA; Memory: 4GB DDR3; Hard Disk: 500GB; Optical Drive: DVD±R/RW; Graphics Card: ATI Mobility Radeon HD 5730; Video Memory: 1GB; Communication: Gigabit LAN and WLAN; Card slot: 1 x ExpressCard/34; Battery Life: up to 3.5 hours; Dimensions: 15.20" x 10.00" x 0.80"-1.30"; Weight: 5.95 lbs. This computer met most of the requirements above, except that it didn't come with an SSD or 8GB of DDR3 memory. So I needed to go shopping again, this time for an SSD. I asked around on Twitter and other hardware forums and everyone pointed me to the Crucial C300 SSD. After checking prices, the drive was going to cost an extra $275, and I was going from a spacious 500GB drive to 128GB. After watching some of the SSD videos on YouTube I started feeling better. Below is a pic of the Crucial C300 SSD. The second thing I needed to upgrade was the RAM. It came with 4GB of DDR3 RAM, but it was slow. I decided to buy the Crucial 8GB (4GB x 2) kit from Newegg. This RAM cost an extra $120 and had a CAS latency of 7. In the end this machine delivered everything I wanted and cost around $1300. You are probably saying, well, your budget was $1200. I have spare parts that I'm planning on selling on eBay or Anandtech. =) If you are interested then shoot me an email and I will give you a great deal: mbcrump[at]gmail[dot]com. 500GB 7200RPM laptop HDD; 4GB of DDR3 RAM (2GB x 2); faceVision HD 720p camera, unopened. In the end my Windows Experience rating for the SSD was 7.7 and for the CPU 7.1; the max you can get is 7.9. Part 2: The Software. I'm very lucky that I get a lot of software for free. When choosing a laptop, the OS really doesn't matter, because I would never keep the pre-installed bloatware, or Windows 7 Home Premium, on my main development machine.
    As a matter of fact, as soon as I got the laptop, I took out the old HDD without ever booting into it. After I got the SSD into the machine, I installed Windows 7 Ultimate 64-bit. The BIOS was out of date, so I updated it to the latest version and started downloading drivers from Lenovo's site. I had to download the wireless networking drivers to a USB key before I could get my machine onto my wireless network. I also discovered that if the date on your computer is off, you cannot join a Windows 7 Homegroup until you fix it. I'm aware that most people like peeking into what programs other software developers use, so I went ahead and listed my "essentials" for a fresh build. I am a big Silverlight guy, so naturally some of the software listed below is specific to Silverlight. You should also check out my master list of Tools and Utilities for the .NET Developer. See a killer app that I'm missing? Feel free to leave it in the comments below. My software essentials list: CPU-Z, Dropbox, Everything Search Tool, Expression Encoder Update, Expression Studio 4 Ultimate, Foxit Reader, Google Chrome, Infragistics NetAdvantage Ultimate Edition, KeePass, Microsoft Office Professional Plus 2010, Microsoft Security Essentials 2, Mindscape Silverlight Elements, Notepad2 (with shell extension), Precode Code Snippet Manager, RealVNC, Reflector, ReSharper v5.1.1753.4, Silverlight 4 Toolkit, Silverlight Spy, Snagit 10, Syncfusion Reporting Controls for Silverlight, Telerik Silverlight RadControls, TweetDeck, Virtual CloneDrive, Visual Studio 2010 Feature Pack 2, Visual Studio 2010 Ultimate, VS KB2403277 update (to get Feature Pack 2 to work), Windows 7 Ultimate 64-bit, Windows Live Essentials 2011, Windows Live Writer Backup, Windows Phone Development Tools. That is pretty much it; I have a new laptop and am happy with the purchase. If you have any questions then feel free to leave a comment below.

    Read the article

  • Mocking HttpContext with JustMock

    - by mehfuzh
    In this post, I will show test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps: public enum PageStep { PreInit, Load, PreRender, UnLoad } With that in place, I first created the page object [not mocking]: Page page = new Page(); Here, our target is to fire up the page processing through a ProcessRequest call. If we take a look inside the method with Reflector, we will find a call stack like: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> finally ProcessRequest. Inside SetIntrinsics, calls are required into HttpRequest, HttpResponse and HttpBrowserCapabilities. With this, we can easily know which classes/calls we need to mock in order to get through to the expected call. Accordingly, for HttpBrowserCapabilities our required test code will look like (the browser mock is created first): var browser = Mock.Create<HttpBrowserCapabilities>(); Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html"); Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8"); Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8"); Now, HttpBrowserCapabilities is obtained through [Instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock: var request = Mock.Create<HttpRequest>(); Then, add the required get call: Mock.Arrange(() => request.Browser).Returns(browser); Since [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also applied to the request object, and to check that they are set properly, we can add the following lines as well [not required though]: bool requestContentEncodingSet = false; Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => requestContentEncodingSet = true); Similarly, for the response we can write: var response = Mock.Create<HttpResponse>(); bool responseContentEncodingSet = false; Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true); Finally, I created a mock of HttpContext and set the Request and Response properties to return the mocked versions.
    var context = Mock.Create<HttpContext>(); Mock.Arrange(() => context.Request).Returns(request); Mock.Arrange(() => context.Response).Returns(response); Since Page internally calls the RenderControl method, we just need to replace that with our own, and optionally we can check that it was invoked properly: bool rendered = false; Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true); That's it; the rest of the code is simple, where I assert the page cycle against the PageSteps that I defined earlier: var pageSteps = new Queue<PageStep>(); page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); }; page.Load += delegate { pageSteps.Enqueue(PageStep.Load); }; page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); }; page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); }; page.ProcessRequest(context); Assert.True(requestContentEncodingSet); Assert.True(responseContentEncodingSet); Assert.True(rendered); Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit); Assert.Equal(pageSteps.Dequeue(), PageStep.Load); Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender); Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad); Mock.Assert(request); Mock.Assert(response); You can get the test class shown in this post here to try for yourself, with of course JustMock. Enjoy!!

    Read the article

  • Playing with http page cycle using JustMock

    - by mehfuzh
    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps: public enum PageStep { PreInit, Load, PreRender, UnLoad } With that in place, I first created the page object [not mocking]: Page page = new Page(); Here, our target is to fire up the page processing through a ProcessRequest call. If we take a look inside the method with Reflector.NET, the call trace goes like: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> finally ProcessRequest. Inside SetIntrinsics, calls are required into HttpRequest, HttpResponse and HttpBrowserCapabilities. With this clue at hand, we can easily know the classes/calls we need to mock in order to get through to the expected call. Accordingly, for HttpBrowserCapabilities our required test code will look like (the browser mock is created first): var browser = Mock.Create<HttpBrowserCapabilities>(); Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html"); Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8"); Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8"); Now, HttpBrowserCapabilities is obtained through [Instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock: var request = Mock.Create<HttpRequest>(); Then, add the required get call: Mock.Arrange(() => request.Browser).Returns(browser); Since [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also applied to the request object, and to check that they are set properly, we can add the following lines as well [not required though]: bool requestContentEncodingSet = false; Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => requestContentEncodingSet = true); Similarly, for the response we can write: var response = Mock.Create<HttpResponse>(); bool responseContentEncodingSet = false; Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true); Finally, I created a mock of HttpContext and set the Request and Response properties to return the mocked versions.
    var context = Mock.Create<HttpContext>(); Mock.Arrange(() => context.Request).Returns(request); Mock.Arrange(() => context.Response).Returns(response); Since Page internally calls the RenderControl method, we just need to replace that with our own, and optionally we can check that it was invoked properly: bool rendered = false; Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true); That's it; the rest of the code is simple, where I assert the page cycle against the PageSteps that I defined earlier: var pageSteps = new Queue<PageStep>(); page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); }; page.Load += delegate { pageSteps.Enqueue(PageStep.Load); }; page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); }; page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); }; page.ProcessRequest(context); Assert.True(requestContentEncodingSet); Assert.True(responseContentEncodingSet); Assert.True(rendered); Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit); Assert.Equal(pageSteps.Dequeue(), PageStep.Load); Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender); Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad); Mock.Assert(request); Mock.Assert(response); You can get the test class shown in this post here to try for yourself, with of course JustMock :-). Enjoy!!

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you will have noticed that the generic collection classes all have implementations of their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea, and there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code: sealed class EnumeratorWrapper : IEnumerator<int> { private readonly LinkedList<int>.Enumerator m_Enumerator; public EnumeratorWrapper(LinkedList<int> linkedList) { m_Enumerator = linkedList.GetEnumerator(); } public int Current { get { return m_Enumerator.Current; } } object System.Collections.IEnumerator.Current { get { return Current; } } public bool MoveNext() { return m_Enumerator.MoveNext(); } public void Reset() { ((System.Collections.IEnumerator)m_Enumerator).Reset(); } public void Dispose() { m_Enumerator.Dispose(); } } The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable, and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop: the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on. The MoveNext method above results in the following IL: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000, [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001) L_0000: nop L_0001: ldarg.0 L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: stloc.1 L_0008: ldloca.s CS$0$0001 L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000f: stloc.0 L_0010: br.s L_0012 L_0012: ldloc.0 L_0013: ret } Here, the important line is 0002 - m_Enumerator is accessed using the ldfld operator, which does the following: Finds the value of a field in the object whose reference is currently on the evaluation stack. So, what the MoveNext method is doing is the following: public bool MoveNext() { LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator; bool CS$1$0000 = CS$0$0001.MoveNext(); return CS$1$0000; } The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. Hence the state of m_Enumerator wasn't getting updated. Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: ...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E.
And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000) L_0000: nop L_0001: ldarg.0 L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000c: stloc.0 L_000d: br.s L_000f L_000f: ldloc.0 L_0010: ret } Notice on line 0002, instead of the ldfld we had before, we've got a ldflda, which does this: Finds the address of a field in the object whose reference is currently on the evaluation stack. Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected. Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!). I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit... Note to self: never ever ever code a mutable struct.
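    The effect is easy to reproduce outside the BCL. Here is a minimal sketch of a mutable struct behind a readonly field (all names invented), showing the same silent copy:

        struct Counter
        {
            private int count;
            public void Increment() { count++; }
            public int Value { get { return count; } }
        }

        class Holder
        {
            private readonly Counter counter;

            // Because counter is readonly, this call mutates a temporary copy;
            // the stored struct never changes.
            public void Bump() { counter.Increment(); }

            public int Value { get { return counter.Value; } } // always 0
        }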

    Read the article

  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it, and spend a lot of time helping the community understand it. This, however, doesn't mean that I don't think it can get better. If I were invited to a Microsoft focus group about Silverlight, here are 10 things I would say:
    1. We need more navigation templates. I've found four templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. To get more people developing for Silverlight, we need to give them a variety of templates to get off the ground quickly.
    2. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It's small, runs in its own sandbox, and I cannot find a reason for it not to be included.
    3. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first, since I believe Android will take the lead in the mobile market (at least for the short term). It would also be great to see Microsoft use Silverlight as the focus of their new tablets / "AppleTV". I would even invest in getting it working with Kinect.
    4. When creating a new project in Silverlight, we should have the option to create a unit test. Most Silverlight developers are not unit testing. If this is surprising to you, then you need to get out and talk to more developers. I partially blame this on Microsoft: when you create a new ASP.NET MVC application, you simply tick a checkbox to create a unit test project. We need the same thing for Silverlight. We should steer developers in the right direction.
    5. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I'd go so far as to say that MVVM Light should ship with Visual Studio. With the project/item templates and code snippets, Laurent points you in the right direction. This is the way it should have been: easy for the 9-to-5 developer to grasp. I believe the majority of developers use code-behind because that's what is in all the demos provided by Microsoft. They are not trying to write sucky code; they simply don't know a better way.
    6. XAP files should be obfuscated, and unused references deleted, by default when in "Release" mode. A better Silverlight experience starts with a smaller XAP file: the less a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that someone can rename the .xap to .zip and run it through Reflector.
    7. Get rid of the boring install experiences. Here is a great write-up on what I'm talking about. The default "Install Silverlight" and "Loading" screens suck. They suck bad. We need a choice of templates that a professional designer has created.
    8. Silverlight needs to support more image formats. For example, it would be great to use .gifs without converting them to .png.
    9. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. This is probably one of the biggest issues that I can't think of a good solution for. It would be nice if VS2012 had the best of both worlds and you never had to leave VS.
    10. We need reporting controls, with SSRS, included with the Silverlight Toolkit. I can't think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control.
    I hope this post will at least get a few people talking.
    Who knows, Microsoft could be working on these things right now. Thanks for reading!

    Read the article

  • When constructing a Bitmap with Bitmap.FromHbitmap(), how soon can the original bitmap handle be deleted?

    - by GBegen
    From the documentation of Image.FromHbitmap() at http://msdn.microsoft.com/en-us/library/k061we7x%28VS.80%29.aspx : The FromHbitmap method makes a copy of the GDI bitmap; so you can release the incoming GDI bitmap using the GDI DeleteObject method immediately after creating the new Image. This pretty explicitly states that the bitmap handle can be immediately deleted with DeleteObject as soon as the Bitmap instance is created. Looking at the implementation of Image.FromHbitmap() with Reflector, however, shows that it is a pretty thin wrapper around the GDI+ function GdipCreateBitmapFromHBITMAP(). There is pretty scant documentation on the GDI+ flat functions, but http://msdn.microsoft.com/en-us/library/ms533971%28VS.85%29.aspx says that GdipCreateBitmapFromHBITMAP() corresponds to the Bitmap::Bitmap() constructor that takes an HBITMAP and an HPALETTE as parameters. The documentation for this version of the Bitmap::Bitmap() constructor at http://msdn.microsoft.com/en-us/library/ms536314%28VS.85%29.aspx has this to say: You are responsible for deleting the GDI bitmap and the GDI palette. However, you should not delete the GDI bitmap or the GDI palette until after the GDI+ Bitmap::Bitmap object is deleted or goes out of scope. Do not pass to the GDI+ Bitmap::Bitmap constructor a GDI bitmap or a GDI palette that is currently (or was previously) selected into a device context. Furthermore, one can see from the source code for the C++ portion of GDI+, in GdiPlusBitmap.h, that the Bitmap::Bitmap() constructor in question is itself a wrapper for the GdipCreateBitmapFromHBITMAP() function from the flat API: inline Bitmap::Bitmap( IN HBITMAP hbm, IN HPALETTE hpal ) { GpBitmap *bitmap = NULL; lastResult = DllExports::GdipCreateBitmapFromHBITMAP(hbm, hpal, &bitmap); SetNativeImage(bitmap); } What I can't easily see is the implementation of GdipCreateBitmapFromHBITMAP(), which is the core of this functionality, but the two remarks in the documentation seem contradictory. The .NET documentation says I can delete the bitmap handle immediately, and the GDI+ documentation says the bitmap handle must be kept until the wrapping object is deleted, but both are based on the same GDI+ function. Furthermore, the GDI+ documentation warns against using a source HBITMAP that is currently or was previously selected into a device context. While I can understand why the bitmap should not currently be selected into a device context, I do not understand why there is a warning against using a bitmap that was previously selected into a device context. That would seem to prevent use of GDI+ bitmaps that had been created in memory using standard GDI. So, in summary: Does the original bitmap handle need to be kept around until the .NET Bitmap object is disposed? Does the GDI+ function GdipCreateBitmapFromHBITMAP() make a copy of the source bitmap or merely hold onto the handle to the original? Why should I not use an HBITMAP that was previously selected into a device context?
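    To make the question concrete, here is a sketch of the pattern the .NET remark appears to bless (assumes System.Drawing and the usual gdi32 P/Invoke declarations; whether this is truly safe is exactly what is being asked):

        [DllImport("gdi32.dll")]
        static extern IntPtr CreateBitmap(int width, int height, uint planes, uint bitsPerPixel, IntPtr bits);

        [DllImport("gdi32.dll")]
        static extern bool DeleteObject(IntPtr hObject);

        IntPtr hbm = CreateBitmap(100, 100, 1, 32, IntPtr.Zero);
        using (Bitmap copy = Image.FromHbitmap(hbm))
        {
            DeleteObject(hbm); // legal immediately, per the Image.FromHbitmap docs
            // ... use copy; the GDI+ remarks suggest hbm should still be alive here ...
        }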

    Read the article

  • How to inherit from DataAnnotations.ValidationAttribute (it appears SecurityCritical under the Visual Studio debugger)

    - by codetuner
    Hi, I have an [AllowPartiallyTrustedCallers] class library containing subtypes of System.ComponentModel.DataAnnotations.ValidationAttribute. The library is used on contract types of WCF services. In .NET 2/3.5 this worked fine. Since .NET 4.0, however, running a client of the service in the Visual Studio debugger results in the exception "Inheritance security rules violated by type: '(my subtype of ValidationAttribute)'. Derived types must either match the security accessibility of the base type or be less accessible." (System.TypeLoadException). The error appears to occur only when all of the following conditions are met: a subclass of ValidationAttribute is in an AllowPartiallyTrustedCallers assembly; reflection is used to check for the attribute; and the Visual Studio hosting process is enabled (checkbox on project properties, Debug tab). So basically, in Visual Studio 2010: create a new console project, add a reference to "System.ComponentModel.DataAnnotations" 4.0.0.0, and write the following code: using System; [assembly: System.Security.AllowPartiallyTrustedCallers()] namespace TestingVaidationAttributeSecurity { public class MyValidationAttribute : System.ComponentModel.DataAnnotations.ValidationAttribute { } [MyValidation] public class FooBar { } class Program { static void Main(string[] args) { Console.WriteLine("ValidationAttribute IsCritical: {0}", typeof(System.ComponentModel.DataAnnotations.ValidationAttribute).IsSecurityCritical); FooBar fb = new FooBar(); fb.GetType().GetCustomAttributes(true); Console.WriteLine("Press enter to end."); Console.ReadLine(); } } } Press F5 and you get the exception! Press Ctrl+F5 (start without debugging), and it all works fine without an exception... The strange thing is that ValidationAttribute will or will not be security-critical depending on the way you run the program (F5 or Ctrl+F5), as illustrated by the Console.WriteLine in the above code. But then again, this appears to happen with other attributes (and types?) too. Now the questions... Why do I get this behaviour when inheriting from ValidationAttribute, but not when inheriting from System.Attribute? (Using Reflector I don't find special settings on the ValidationAttribute class or its assembly.) And what can I do to solve this? How can I keep MyValidationAttribute inheriting from ValidationAttribute in an AllowPartiallyTrustedCallers assembly without marking it SecurityCritical, still using the new .NET 4 level-2 security model, and still have it work using the VS.NET debug host (or other hosts)? Thanks a lot! Rudi
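    One possible workaround, offered as an assumption rather than a verified fix: opt the library back into the .NET 2.0 transparency rules, so the level-2 inheritance rule quoted in the exception no longer applies:

        // Assumption: level-1 transparency restores the 2.0/3.5 inheritance behaviour.
        [assembly: System.Security.SecurityRules(System.Security.SecurityRuleSet.Level1)]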

    Read the article

  • .NET RegEx "Memory Leak" investigation

    - by Kevin Pullin
    I recently looked into some .NET "memory leaks" (i.e. unexpected, lingering GC-rooted objects) in a WinForms app. After loading and then closing a huge report, the memory usage did not drop as expected, even after a couple of gen2 collections. Assuming that the reporting control was being kept alive by a stray event handler, I cracked open WinDbg to see what was happening... Using WinDbg, the !dumpheap -stat command reported that a large amount of memory was consumed by string instances. Refining this with the !dumpheap -type System.String command, I found the culprit, a 90MB string used for the report, at address 03be7930. The last step was to invoke !gcroot 03be7930 to see which object(s) were keeping it alive. My expectations were incorrect: it was not a stray event handler hanging onto the reporting control (and report string); instead it was held by a System.Text.RegularExpressions.RegexInterpreter instance, which itself is a descendant of a System.Text.RegularExpressions.CachedCodeEntry. Now, the caching of Regexes is (somewhat) common knowledge, as this helps to reduce the overhead of recompiling the Regex each time it is used. But what does this have to do with keeping my string alive? Based on analysis using Reflector, it turns out that the input string is stored in the RegexInterpreter whenever a Regex method is called. The RegexInterpreter holds onto this string reference until a new string is fed into it by a subsequent Regex method invocation. I'd expect similar behaviour from hanging onto Regex.Match instances and perhaps others. The chain is something like this: Regex.Split, Regex.Match, Regex.Replace, etc. –> Regex.Run –> RegexScanner.Scan (RegexScanner is the base class, RegexInterpreter is the subclass described above). The offending Regex is used only for reporting, rarely at that, and is therefore unlikely to be used again to clear out the existing report string. And even if the Regex were used at a later point, it would probably be processing another large report. This is a relatively significant problem and just plain feels dirty. All that said, I found a few options on how to resolve, or at least work around, this scenario. I'll let the community respond first, and if no takers come forward I will fill in any gaps in a day or two.
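    One low-tech mitigation, sketched under the assumption that the cached Regex instance is reachable wherever the report is torn down (reportRegex is an invented name): feed the interpreter a tiny input so it replaces its reference to the 90MB string:

        // After the report closes, overwrite the interpreter's held input string.
        reportRegex.IsMatch(string.Empty);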

    Read the article

  • Using embedded resources in Silverlight (4) - other cultures not being compiled

    - by Andrei Rinea
    I am having a bit of a hard time providing localized strings for the UI in a small Silverlight 4 application. Basically, I've created a "Resources" folder and placed two resource files in it: Statuses.resx and Statuses.ro.resx. I have an enum Statuses: public enum Statuses { None, Working } and a converter: public class StatusToMessage : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { if (!Enum.IsDefined(typeof(Status), value)) { throw new ArgumentOutOfRangeException("value"); } var x = Statuses.None; return Statuses.ResourceManager.GetString(((Status)value).ToString(), Thread.CurrentThread.CurrentUICulture); } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } In the view I have a textblock: <TextBlock Grid.Column="3" Text="{Binding Status, Converter={StaticResource StatusToMessage}}" /> Upon view rendering the converter is called, but no matter what Thread.CurrentThread.CurrentUICulture is set to, it always returns the default culture value. On further inspection, I took apart the resulting XAP file, loaded the resulting DLL into Reflector and inspected the embedded resources. It only contains the default resource!! Going back to the two resource files, I am now inspecting their properties: Build action: Embedded Resource; Copy to output directory: Do not copy; Custom tool: ResXFileCodeGenerator; Custom tool namespace: [empty]. Both resource (.resx) files have these settings. The resulting .Designer.cs files are as follows. Statuses.Designer.cs: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace SilverlightApplication5.Resources { using System; /// <summary> /// A strongly-typed resource class, for looking up localized strings, etc. /// </summary> // This class was auto-generated by the StronglyTypedResourceBuilder // class via a tool like ResGen or Visual Studio. // To add or remove a member, edit your .ResX file then rerun ResGen // with the /str option, or rebuild your VS project. [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] internal class Statuses { // ... yadda-yadda Statuses.ro.Designer.cs: [empty] I've taken both files and put them in a console application, and they behave as expected there, not like in this Silverlight application. What is wrong?

    Read the article

  • MemoryStream, XmlTextWriter and Warning 4 CA2202 : Microsoft.Usage

    - by rasx
    The Run Code Analysis command in Visual Studio 2010 Ultimate returns a warning when it sees a certain pattern with MemoryStream and XmlTextWriter. This is the warning: Warning 7 CA2202 : Microsoft.Usage : Object 'ms' can be disposed more than once in method 'KinteWritePages.GetXPathDocument(DbConnection)'. To avoid generating a System.ObjectDisposedException you should not call Dispose more than one time on an object.: Lines: 421 C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 421 Songhay.DataAccess.KinteWritePages This is the form: static XPathDocument GetXPathDocument(DbConnection connection) { XPathDocument xpDoc = null; var ms = new MemoryStream(); try { using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8)) { using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql)) { writer.WriteStartDocument(); writer.WriteStartElement("data"); do { while(reader.Read()) { writer.WriteStartElement("item"); for(int i = 0; i < reader.FieldCount; i++) { writer.WriteRaw(String.Format("<{0}>{1}</{0}>", reader.GetName(i), reader[i].ToString())); } writer.WriteFullEndElement(); } } while(reader.NextResult()); writer.WriteFullEndElement(); writer.WriteEndDocument(); writer.Flush(); ms.Position = 0; xpDoc = new XPathDocument(ms); } } } finally { ms.Dispose(); } return xpDoc; } The same kind of warning is produced for this form: XPathDocument xpDoc = null; using(var ms = new MemoryStream()) { using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8)) { using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql)) { //... } } } return xpDoc; By the way, the following form produces another warning: XPathDocument xpDoc = null; var ms = new MemoryStream(); using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8)) { using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql)) { //... } } return xpDoc; The above produces the warning: Warning 7 CA2000 : Microsoft.Reliability : In method 'KinteWritePages.GetXPathDocument(DbConnection)', object 'ms' is not disposed along all exception paths. Call System.IDisposable.Dispose on object 'ms' before all references to it are out of scope. C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 383 Songhay.DataAccess.KinteWritePages In addition to the following, what are my options?: Suppress warning CA2202. Suppress warning CA2000 and hope that Microsoft is disposing of the MemoryStream (because Reflector is not showing me the source code). Rewrite my legacy code to recognize the wonderful XDocument and LINQ to XML.
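    For reference, a sketch of the shape CA2202 tends to accept, assuming ownership of the stream is handed to the writer exactly once (adapted to this method's outline, not verified against it; XmlTextWriter disposes its underlying stream, which is why the original fires the warning):

        XPathDocument xpDoc = null;
        MemoryStream ms = null;
        try
        {
            ms = new MemoryStream();
            using (var writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                var buffer = ms;
                ms = null; // writer now owns the stream; the finally block won't double-dispose
                // ... write the document via writer, then:
                writer.Flush();
                buffer.Position = 0;
                xpDoc = new XPathDocument(buffer);
            }
        }
        finally
        {
            if (ms != null)
                ms.Dispose();
        }
        return xpDoc;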

    Read the article

  • .NET List<string>.Remove bug - Should I submit a MS Connect bug report on this?

    - by Dean Lunz
    So I was beating my head against a wall for a while before it dawned on me. I have some code that saves a list of names into a text file, like this: System.IO.File.WriteAllLines(dlg.FileName, this.characterNameMasterList.Distinct().ToArray()); The character names can contain special characters. These names come from the WoW armory at www.wowarmory.com. There are about 26000 names or so saved in the .txt file, and they get saved just fine. I wrote another application that reads these names from that .txt file using this code: // download the names from the db var webNames = this.DownloadNames("character"); // filter names and get ones that need to be added to the db var localNames = new List<string>(System.IO.File.ReadAllLines(dlg.FileName)); foreach (var name in webNames) { if (localNames.Contains(name.Trim())) localNames.Remove(name); } return localNames; The code downloads a list of names from my website that are already in the db, then reads the local .txt file and singles out every name that is not yet in the db so it can later be added. The names read from the .txt file come through just fine as well. The problem comes in when removing names from the localNames list (a List<string>). As soon as localNames.Remove(name) gets called, any names in the list that had special characters in them get corrupted and converted into ? characters. See this screen cap: http://yfrog.com/12badcharsp So I tried doing it another way: // download the names from web that are already in the db var webNames = this.DownloadNames("character"); // filter names and get ones that need to be added to the db var localNames = new List<string>(System.IO.File.ReadAllLines(dlg.FileName)); int index = 0; while (index < webNames.Count) { var name = webNames[index++]; var pos = localNames.IndexOf(name.Trim()); if (pos != -1) localNames.RemoveAt(pos); } return localNames; But using localNames.RemoveAt also corrupts the items in the list, converting special characters into ?. So is this a known bug with the List.Remove methods? Does anyone know? Has anyone else had this problem? I also used .NET Reflector to disassemble/inspect the List.Remove and List.RemoveAt code, and they appear to call some external Copy function. Aside from the fact that this is probably not the best way to get a unique list of items from 2 lists, is there something I'm missing or should be aware of when using the List.Remove methods? I am running Windows 7, VS2010, and my app targets .NET 4 (no client profile).
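    On the closing aside about diffing the two lists, here is a hedged sketch using a set difference instead of repeated Remove calls (assumes LINQ; each Remove is an O(n) scan of localNames):

        // Names present locally but not yet in the database.
        var webSet = new HashSet<string>(webNames.Select(n => n.Trim()));
        return localNames.Where(n => !webSet.Contains(n)).ToList();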

    Read the article

  • ASP.Net 4.0 - Response required in SiteMap building?

    - by Nick Craver
    I'm running into an issue upgrading a project to .NET 4.0... and having trouble finding any reason for the issue (or at least the change causing it). Given the freshness of 4.0, there aren't a lot of blog posts out there about issues yet, so I'm hoping someone here has an idea. Preface: this is a Web Forms application, coming from 3.5 SP1 to 4.0. In the Application_Start event we're iterating through the SiteMap and constructing routes based off data there (prettifying URLs, mostly, with some utility added). That part isn't failing though... or rather, execution never gets that far. It seems that calling SiteMap.RootNode (inside Application_Start) causes 4.0 to eat it, since the XmlSiteMapProvider.GetNodeFromXmlNode method has been changed. Looking in Reflector you can see it's hitting HttpResponse.ApplyAppPathModifier here: str2 = HttpContext.Current.Response.ApplyAppPathModifier(str2); HttpResponse wasn't used at all in this method in the 2.0 CLR, so what we had worked fine. In 4.0, though, that method is called as a result of this stack: [HttpException (0x80004005): Response is not available in this context.] System.Web.XmlSiteMapProvider.GetNodeFromXmlNode(XmlNode xmlNode, Queue queue) System.Web.XmlSiteMapProvider.ConvertFromXmlNode(Queue queue) System.Web.XmlSiteMapProvider.BuildSiteMap() System.Web.XmlSiteMapProvider.get_RootNode() System.Web.SiteMap.get_RootNode() Since Response isn't available here in 4.0, we get an error. To reproduce this, you can narrow the test case down to this in global: protected void Application_Start(object sender, EventArgs e) { var s = SiteMap.RootNode; //Kaboom! //or just var r = Context.Response; //or var r = HttpContext.Current.Response; //all result in the same "not available" error } Question: Am I missing something obvious here? Or is there another event added in 4.0 that's recommended for anything related to SiteMap on startup? For anyone curious/willing to help, I've created a very minimal project (a default VS 2010 ASP.NET 4.0 site, all the bells & whistles removed, and only a blank sitemap and the Application_Start code added). It's a small 10KB zip available here: http://www.ncraver.com/Test/SiteMapTest.zip Update: Not a great solution, but the current workaround is to do the work in Application_BeginRequest, like this: private static bool routesRegistered = false; protected void Application_BeginRequest(object sender, EventArgs e) { if (!routesRegistered) { Application.Lock(); if (!routesRegistered) RouteManager.RegisterRoutes(RouteTable.Routes); routesRegistered = true; Application.UnLock(); } }

    Read the article

  • Web SITE publishing, dynamic compilation, smoke & mirrors

    - by tbehunin
    When you publish a web SITE in Visual Studio, in the dialog box that follows, you are given an option to "Allow this precompiled site to be updatable". According to MSDN, checking this option "specifies that all program code is compiled into assemblies, but that .aspx files (including single-file ASP.NET Web pages) are copied as-is to the target folder". With this option checked, you can update existing .aspx files as well as add new ones without any issue. When a page that has either been updated or newly created is requested, it gets dynamically compiled at run time, processed, and returned to the user. If, on the other hand, you didn't check that checkbox during the publish phase, the .aspx files get compiled, along with the code-behind and App_Code files, into separate assemblies. The .aspx files are then completely overwritten with a line of text that says: This is a marker file generated by the precompilation tool, and should not be deleted! You obviously can't edit an existing page in this scenario. If you were to ADD a new .aspx file to this site, you would get a .NET runtime error saying that the file hasn't been precompiled. With that background, my questions are these: Something must be able to determine that this website was published to be updatable (allow dynamic compilation) or not. If it was published as updatable, it must also be able to determine whether a file was changed or added, so it can do a dynamic compile. Who makes those determinations? IIS? The ASP.NET worker process? HOW does it make those determinations? If I had the same website published in both of those scenarios, could I make a visual determination that one is updatable and the other is not? Is there some bit I can look at in the assemblies using Reflector to make that determination myself? In addition to answering those questions, it would also be helpful to have information on the process flow from when a resource is requested to when it starts being processed; not necessarily the ASP.NET page lifecycle, but what happens BEFORE the ASP.NET worker process starts processing the page and firing off events. The dynamic compilation appears to be smoke and mirrors. Can someone demystify this for me?

    Read the article

  • How does System.TraceListener prepend message with process name?

    - by btlog
    I have been looking at using System.Diagnostics.Trace for doing logging in a very basic app. Generally it does all I need it to do. The downside is that if I call Trace.TraceInformation("Some info"); the output is "SomeApp.Exe Information: 0: Some info". Initially this entertained me, but no longer. I would like to output just "Some info" to the console. So I thought writing a custom TraceListener, rather than using the built-in ConsoleTraceListener, would solve the problem. I can see a specific format to the output, so I want all the text after the second colon. Here is my attempt to see if this would work:

        class LogTraceListener : TraceListener
        {
            public override void Write(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.Write(message);
            }

            public override void WriteLine(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.WriteLine(message);
            }
        }

    If I output the value of firstColon it is always -1. If I put a break point there, the message is always just "Some info". Where does all the other information come from? So I had a look at the call stack at the point just before Console.WriteLine was called. The method that called my WriteLine method is:

        System.dll!System.Diagnostics.TraceListener.TraceEvent(System.Diagnostics.TraceEventCache eventCache, string source, System.Diagnostics.TraceEventType eventType, int id, string message) + 0x33 bytes

    When I use Reflector to look at this method it all seems pretty straightforward. I can't see any code that changes the value of the string after I have sent it to Console.WriteLine. The only method that could possibly change the underlying string value is a call to UnsafeNativeMethods.EventWriteString, which has a parameter that is a pointer to the message. Does anyone understand what is going on here, and whether I can change the output to be just my message without the additional fluff? It seems like evil magic that I can pass the string "Some info" to Console.WriteLine (or any other method for that matter) and the string that is output is different.
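    If the goal is just the bare message, one plausible fix is to override TraceEvent itself, on the assumption (consistent with the stack above) that the "SomeApp.Exe Information: 0:" prefix is written by TraceListener.TraceEvent as a separate header before the message ever reaches WriteLine. A sketch:

        using System;
        using System.Diagnostics;

        class PlainConsoleListener : TraceListener
        {
            // TraceListener.TraceEvent normally writes a "source eventType: id :"
            // header via Write() before handing the message to WriteLine();
            // overriding it lets us emit the message alone.
            public override void TraceEvent(TraceEventCache eventCache, string source,
                TraceEventType eventType, int id, string message)
            {
                WriteLine(message);
            }

            public override void Write(string message)
            {
                Console.Write(message);
            }

            public override void WriteLine(string message)
            {
                Console.WriteLine(message);
            }
        }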

    Read the article

  • C# 'could not find' existing method

    - by shybovycha
    Greetings! I've been fooling around (a bit) with C# and its assemblies, and I've found an interesting feature: dynamically loading assemblies and invoking their class members. A bit of Google and here I am, writing some kind of 'assembly explorer'. (I've used some portions of code from here, here and here, and none of them gave the expected results.) But I've found a small bug: when I tried to invoke a class method from the assembly I'd loaded, the application raised a MissingMethodException. I'm sure the DLL I'm loading contains the class and method I'm trying to invoke (my app confirms this, as does Red Gate's .NET Reflector). The main application code seems to be okay, so I'm starting to think something is wrong with my DLL... Ah, and I've put both projects into one solution, but I don't think that should cause any trouble. And yes, the DLL project has a 'class library' target while the main application has a 'console application' target. So, the question is: what's wrong? Here is the source code.

    DLL source:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ClassLibrary1
        {
            public class Class1
            {
                public void Main()
                {
                    System.Console.WriteLine("Hello, World!");
                }
            }
        }

    Main application source:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Reflection;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Assembly asm = Assembly.LoadFrom(@"a\long\long\path\ClassLibrary1.dll");
                    try
                    {
                        foreach (Type t in asm.GetTypes())
                        {
                            if (t.IsClass == true && t.FullName.EndsWith(".Class1"))
                            {
                                object obj = Activator.CreateInstance(t);
                                object res = t.InvokeMember("Main",
                                    BindingFlags.Default | BindingFlags.InvokeMethod,
                                    null, obj, null); // Exception is raised from here
                            }
                        }
                    }
                    catch (Exception e)
                    {
                        System.Console.WriteLine("Error: {0}", e.Message);
                    }
                    System.Console.ReadKey();
                }
            }
        }

    UPD: it worked for one case, when the DLL method takes no arguments.

    DLL class (it also works if the method is not static):

        public class Class1
        {
            public static void Main()
            {
                System.Console.WriteLine("Hello, World!");
            }
        }

    Method invoke code:

        object res = t.InvokeMember("Main",
            BindingFlags.Default | BindingFlags.InvokeMethod,
            null, null, null);
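    As an aside, failures like this are easier to diagnose if the method is resolved explicitly: GetMethod returns null instead of throwing a MissingMethodException when the name or binding flags don't match (and a stale copy of the DLL at the load path is a common culprit for mismatches). A sketch, reusing the loop variables above:

        // ...inside the foreach over asm.GetTypes():
        MethodInfo method = t.GetMethod("Main", BindingFlags.Public | BindingFlags.Instance);
        if (method == null)
        {
            // Tells us the binding failed rather than blowing up at invoke time.
            System.Console.WriteLine("No public instance 'Main' found on {0}", t.FullName);
        }
        else
        {
            object obj = Activator.CreateInstance(t);
            method.Invoke(obj, null);
        }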

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you've done a number of SQL code-reviews, you'll know those signs in the code that all might not be well. These 'Code Smells' are coding styles that don't directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book 'Refactoring: Improving the Design of Existing Code' (ISBN 978-0201485677). Although there are generic code-smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in Code Smells in C#.

    I've always been tempted by the idea of automating a preliminary code-review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic 'Lint' did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding Code Smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering whether there is a general opinion amongst database developers about what the most important SQL Smells are.

    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I'll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to 'sign them off' as being appropriate in particular circumstances. Also, whole classes of 'code smells' may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a 'code smell' if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a CodeSmell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist.

    Plamen Ratchev's wonderful article Ten Common SQL Programming Mistakes lists some of these 'code smells' along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn't entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported.

    If anything requires some sort of general agreement, the definition of code smells is one. I'm therefore going to make this blog 'dynamic', in that, if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a twitter) I'll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I'll do my best to oblige. In other words, I'll try to keep this blog up to date. The name against each 'smell' is the name of the person who twittered me, commented about it, or who has written about the 'smell'; it does not imply that they were the first ever to think of the smell!

    - Use of deprecated syntax such as *= (Dave Howard)
    - Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
    - Contrived interfaces
    - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    - Datatype mis-matches in predicates that rely on implicit conversion (Plamen Ratchev)
    - Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
    - The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
    - Few or no comments
    - Use of functions in a WHERE clause (Anil Das)
    - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    - Excessive 'overloading' of routines
    - The use of Exec xp_cmdShell (Merrill Aldrich)
    - Excessive use of brackets (Dave Levy)
    - Lack of the use of a semicolon to terminate statements
    - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    - Duplicated code, or strikingly similar code
    - Misuse of SELECT * (Plamen Ratchev)
    - Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    - Overuse of CLR routines when not necessary (Sam Stange)
    - Same column name in different tables with different datatypes (Ian Stirk)
    - Use of 'broken' functions such as 'ISNUMERIC' without additional checks
    - Excessive use of the WHILE loop (Merrill Aldrich)
    - INSERT ... EXEC (Merrill Aldrich)
    - The use of stored procedures where a view is sufficient (Merrill Aldrich)
    - Not using two-part object names (Merrill Aldrich)
    - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    - Full outer joins even when they are not needed (Plamen Ratchev)
    - Huge stored procedures (hundreds/thousands of lines)
    - Stored procedures that can produce different columns, or order of columns in their results, depending on the inputs
    - Code that is never used
    - Complex and nested conditionals
    - WHILE (not done) loops without an error exit
    - Variable name same as the datatype
    - Vague identifiers
    - Storing complex data or lists in a character map, bitmap or XML field
    - User procedures with sp_ prefix (Aaron Bertrand)
    - Views that reference views that reference views that reference views (Aaron Bertrand)
    - Inappropriate use of sql_variant (Neil Hambly)
    - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield, Atlantis UK)
    - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield, Atlantis UK)
    - Code that allows SQL injection (Mladen Prajdic)
    - Tables without clustered indexes (Matt Whitfield, Atlantis UK)
    - Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
    - Multiple stored procedures with nearly identical implementation (Nick Harrison)
    - Excessive column aliasing, which may point to a problem or a mapping implementation (Nick Harrison)
    - Joining "too many" tables in a query (Nick Harrison)
    - Stored procedures returning more than one record set (Nick Harrison)
    - A NOT LIKE condition (Nick Harrison)
    - Excessive "OR" conditions (Nick Harrison)
    - sp_OACreate or anything related to it (Bill Fellows)
    - Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    - Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
    - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    - ORDER BY 3,2 (Dave Levy)
    - Multi-statement table functions which are then filtered: 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne)
    - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)

    Read the article

  • Using the BAM Interceptor with Continuation

    - by Charles Young
    Originally posted on: http://geekswithblogs.net/cyoung/archive/2014/06/02/using-the-bam-interceptor-with-continuation.aspx

    I've recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server's BAM event observation library. In doing this, I noticed an issue with continuations. Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly. Careful inspection of my code confirmed that I was initializing and invoking the BAM Interceptor correctly, so I was mystified. However, I eventually found the problem. It is a logical error in the BAM Interceptor code itself.

    The BAM Interceptor provides a useful mechanism for implementing dynamic tracking. It supports configurable 'track points'. These are grouped into named 'locations'. BAM uses the term 'step' as a synonym for 'location'. Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc. Each step defines a collection of track points.

    Understanding Steps

    The BAM Interceptor provides an abstract model for handling configuration of steps. It doesn't, however, define any specific configuration mechanism (e.g., config files, SSO, etc.). It is up to the developer to decide how to store, manage and retrieve configuration data. At run time, this configuration is used to register track points which then drive the BAM Interceptor.

    The full semantics of a step are not immediately clear from Microsoft's documentation. Steps represent a point in a business activity where BAM tracking occurs. They are named locations in the code. What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity. The BAM Interceptor enforces this by throwing an error if no 'start new' or 'continue' track point is registered for a named location.

    This constraint implies that each step must be marked with an 'end activity' track point. One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as 'ended' in order to ensure the right housekeeping is done in the database. If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state. A step, therefore, always represents an entire unit of work for a given activity or continuation ID. For activities with continuation, each unit of work is termed a 'fragment'.

    Instance and Fragment State

    Internally, the BAM Interceptor maintains state data at two levels. First, it represents the overall state of the activity using a 'trace instance' token. This token contains the name and ID of the activity together with a couple of state flags. The second level of state represents a 'trace fragment'. As we have seen, a fragment of an activity corresponds directly to the notion of a 'step'. It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions.

    When handling continuations, the BAM Interceptor differentiates between 'root' fragments and other fragments. Very simply, a root fragment represents the start of an activity. Other fragments represent continuations. This is where the logic breaks down. The BAM Interceptor loses state integrity for root fragments when continuations are defined.

    Initialization

    Microsoft's BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data. The process starts by populating an Activity Interceptor Configuration object with an array of track points. These can belong to different steps (aka 'locations') and can be registered in any order. Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor. The BAM Interceptor sets up a hash table of array lists. Each step is represented by an array list, and each array list contains an ordered set of track points. The BAM Interceptor represents track points as 'executable' components. When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn. Each track point retrieves any required data using a call back mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.). The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action.

    The Root of the Problem

    The logic breaks down in the Activity Interceptor Configuration. Each Activity Interceptor Configuration is initialised with an instance of a 'trace instance' token. This provides the basic metadata for the activity as a whole. It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed. This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration.

    Each trace instance token is automatically initialised to represent a root fragment. However, if you subsequently register a 'continuation' step with the Activity Interceptor Configuration, the 'root' flag is set to false at the point the 'continue' track point is registered for that step. If you use a 'reflector' tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.

    This makes no sense. The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration. The Activity Interceptor Configuration is designed to hold track points for multiple steps. The 'root' flag is clearly meant to be initialised to 'true' for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed. Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to 'false' before the root fragment is processed. This is clearly an error in logic.

    The problem causes havoc when the BAM Interceptor is used with continuation. Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes! This has nothing to do with the root and the continuation being in the same process. It is due to a fundamental mistake of setting the 'root' flag to false for a continuation before the root fragment is processed.

    The Workaround

    Fortunately, it is easy to work around the bug. The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.
    This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group. In my case, the first approach was required. Here is what the amended code looks like:

        // Because of a logic error in Microsoft's code, a separate ActivityInterceptorConfiguration must be used
        // for each location. The following code extracts only those track points for a given step name (location).
        var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                              where (string)tp.Location == bamStepName
                              select tp;

        var bamActivityInterceptorConfig =
            new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

        foreach (var trackPoint in trackPointGroup)
        {
            switch (trackPoint.Type)
            {
                case TrackPointType.Start:
                    bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
                    break;

        etc…

    I'm using LINQ to filter the list of track points for those entries that correspond to a given step, and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class. As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.
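    For completeness, here is a sketch of the second approach mentioned above, grouping the configured track points by step and building one configuration per group. The bamActivity, TrackPoint and activityName names are carried over from the snippet above, and the per-track-point registration calls are elided:

        using System.Linq;

        var configsByStep = bamActivity.TrackPoints
            .Cast<ResolutionService.TrackPoint>()
            .GroupBy(tp => (string)tp.Location)
            .ToDictionary(
                group => group.Key,
                group =>
                {
                    // One ActivityInterceptorConfiguration per step (location),
                    // so the shared trace-instance token is never corrupted.
                    var config = new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);
                    foreach (var trackPoint in group)
                    {
                        // switch on trackPoint.Type and call RegisterStartNew,
                        // RegisterContinue, etc., as in the snippet above
                    }
                    return config;
                });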

    Read the article

  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things: The IDataReader is not passed out of the Data Access Layer which prevents abstraction leak and resource leak potentials. You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward iterate once. If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items. This could give us an example like: 1: // a sample data access object class to do standard CRUD operations. 2: public class ProductDao 3: { 4: private DbProviderFactory _factory = SqlClientFactory.Instance 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: // must create the connection 10: using (var con = _factory.CreateConnection()) 11: { 12: con.ConnectionString = _productsConnectionString; 13: con.Open(); 14:  15: // create the command 16: using (var cmd = _factory.CreateCommand()) 17: { 18: cmd.Connection = con; 19: cmd.CommandText = _getAllProductsStoredProc; 20: cmd.CommandType = CommandType.StoredProcedure; 21:  22: // get a reader and pass back all results 23: using (var reader = cmd.ExecuteReader()) 24: { 25: while(reader.Read()) 26: { 27: yield return new Product 28: { 29: Name = reader["product_name"].ToString(), 30: ... 31: }; 32: } 33: } 34: } 35: } 36: } 37: } The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider specific counterparts directly (SqlCommand, OracleCommand, etc). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library that simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL due to the fact I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to allow them to mock their database for unit testing by providing a MockCommand, MockConnection, etc that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested. Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force them to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e. before the iterator reports there are no more items. 
So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived: 1: // A disposable sequence of int -- yes this is completely contrived... 2: internal class Sequencer : IDisposable 3: { 4: private int _i = 0; 5: private readonly object _mutex = new object(); 6:  7: // Constructs an int sequence. 8: public Sequencer(int start) 9: { 10: _i = start; 11: } 12:  13: // Gets the next integer 14: public int GetNext() 15: { 16: lock (_mutex) 17: { 18: return _i++; 19: } 20: } 21:  22: // Dispose the sequence of integers. 23: public void Dispose() 24: { 25: // force output immediately (flush the buffer) 26: Console.WriteLine("Disposed with last sequence number of {0}!", _i); 27: Console.Out.Flush(); 28: } 29: } And then I created a generator (infinite-loop iterator) that did the using block for auto-Disposal: 1: // simply defines an extension method off of an int to start a sequence 2: public static class SequencerExtensions 3: { 4: // generates an infinite sequence starting at the specified number 5: public static IEnumerable<int> GetSequence(this int starter) 6: { 7: // note the using here, will call Dispose() when block terminated. 8: using (var seq = new Sequencer(starter)) 9: { 10: // infinite loop on this generator, means must be bounded by caller! 11: while(true) 12: { 13: yield return seq.GetNext(); 14: } 15: } 16: } 17: } This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and hence, Dispose be called? Well, let's see: 1: // The test program class 2: public class IteratorTest 3: { 4: // The main test method. 5: public static void Main() 6: { 7: Console.WriteLine("Going to consume 10 of infinite items"); 8: Console.Out.Flush(); 9:  10: foreach(var i in 0.GetSequence()) 11: { 12: // could use TakeWhile, but wanted to output right at break... 13: if(i >= 10) 14: { 15: Console.WriteLine("Breaking now!"); 16: Console.Out.Flush(); 17: break; 18: } 19:  20: Console.WriteLine(i); 21: Console.Out.Flush(); 22: } 23:  24: Console.WriteLine("Done with loop."); 25: Console.Out.Flush(); 26: } 27: } So, what do we see? Do we see the "Disposed" message from our dispose, or did the Dispose get skipped because from an "eyeball" perspective we should be locked in that infinite generator loop? Here's the results: 1: Going to consume 10 of infinite items 2: 0 3: 1 4: 2 5: 3 6: 4 7: 5 8: 6 9: 7 10: 8 11: 9 12: Breaking now! 13: Disposed with last sequence number of 11! 14: Done with loop. Yes indeed, when we break the loop, the state machine that C# generates for yield iterate exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder and that led to this test. If you've never seen iterators before (I wrote a previous entry here) the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code, that every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But, I've been bitten by assumptions before, so it's good to test and see. 
Yes, maybe you knew it would or figured it would, but isn't it nice to know? And as those campy 80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle...". Technorati Tags: C#,.NET
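    As a footnote, the reason the break reaches the using block is easiest to see in the compiler's expansion of foreach. This is only a sketch of that expansion (the real emitted code also casts and null-checks the enumerator), reusing the article's own GetSequence extension:

        using (var enumerator = 0.GetSequence().GetEnumerator())
        {
            while (enumerator.MoveNext())
            {
                int i = enumerator.Current;
                // Leaving this block for ANY reason (break, return, exception)
                // exits the using, which calls enumerator.Dispose(). Disposing
                // the compiler-generated iterator runs its pending finally and
                // using blocks, producing the "Disposed..." message above.
                if (i >= 10) break;
            }
        }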

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much.

    I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.

    So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.

    So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.

    A good extension method should:

    1. Apply to any possible instance of the type it extends.
    2. Simplify logic and improve readability/maintainability.
    3. Apply to the most specific type or interface applicable.
    4. Be isolated in a namespace so that it does not pollute IntelliSense.

    So let's look at a few examples in relation to these rules.

    The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.

    Take this nifty little int extension. I saw this once in a blog, and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.

    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.

    Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
    To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.

    Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts I like to put in a static utility class, and then have extension methods for the syntactical candy. For instance, I think it imaginable that any object could be converted to XML:

        namespace Shared.Core
        {
            // A collection of XML utility classes
            public static class XmlUtility
            {
                ...
                // Serialize an object into an xml string
                public static string ToXml(object input)
                {
                    var xs = new XmlSerializer(input.GetType());

                    // use new UTF8Encoding here, not Encoding.UTF8. The latter includes
                    // the BOM which screws up subsequent reads, the former does not.
                    using (var memoryStream = new MemoryStream())
                    using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                    {
                        xs.Serialize(xmlTextWriter, input);
                        return Encoding.UTF8.GetString(memoryStream.ToArray());
                    }
                }
                ...
            }
        }

    I also wanted to be able to call this from an object like:

        value.ToXml();

    But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense; then, in my extensions namespace, I add the syntactical candy:

        namespace Shared.Core.Extensions
        {
            public static class XmlExtensions
            {
                public static string ToXml(this object value)
                {
                    return XmlUtility.ToXml(value);
                }
            }
        }

    So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: XmlUtility is responsible for converting objects to XML, and XmlExtensions is responsible for extending object's interface with ToXml().
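    A quick usage sketch of the opt-in model just described; the Order type here is hypothetical, while XmlUtility, ToXml and the namespaces are the ones defined above:

        using Shared.Core;             // utility class only; IntelliSense stays clean
        using Shared.Core.Extensions;  // opt in to the ToXml() extension as well

        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        // Both calls produce the same XML; the second reads more fluently.
        var order = new Order { Id = 42, Total = 19.95m };
        string viaUtility = XmlUtility.ToXml(order);
        string viaExtension = order.ToXml();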

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
    I have been working with the Technical Preview of Sitecore 6.6 on a project and have been for the most part happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective. A couple days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character, but beyond that I wasn't sure and didn't see anything from a documentation perspective to guide me, so it was time to dig and find out where the data in the field was made available. My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object and automatically maps them to parameters of the action method executing. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with matching name to the controller action and fired up the bugger to see... that the parameter was not populated. Having established that the field's value was not going to be presented to me the way that I had hoped it would, the next assumption that I would work on was that Sitecore would handle this field similar to how they handle other similar data and would plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Redgate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object using code similar to: string myArgument = string.Empty; var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull; if (rc != null) {     var parms = rc.Rendering.Parameters;     myArgument = parms["myArgument"]; } At this point, we know how this field is used out of the box from Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method I would have to do a lot of setup work and possible use an isolation framework such as Pex and Moles to get at a value that my action method is dependent upon. Notice I said that my method is dependent upon the value but in order to meet that dependency I've accepted another dependency upon Sitecore's RenderingContext.  I'm a big believer in, when possible, ensuring that any piece of code explicitly advertises dependencies using the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, querystring, or form by being able to add a myArgument parameter to the action method and have this parameter populated by the framework. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides some easy to grok and use extensibility points. 
ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about I give you that data". When preparing to invoke a controller action, the framework queries registered IValueProvider implementations with the name of each method argument to see if the ValueProvider can supply a value for the parameter. (the rest of this post will assume you're working along and make a lot more sense if you do) Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, I first create a new ASP.NET MVC3 project in Visual Studio, selecting "Internet Application" and otherwise taking defaults (I'm assuming that anyone reading this far in the post either already knows how to do this or will need to take a quick run through one of the many available basic MVC tutorials such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController.  This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard coded message from this controller (although we'll leave it just as hard coded somewhere else - this is sample code). For the first step in our exercise, add a string parameter to the Index action method called welcomeMessage and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like: public ActionResult Index(string welcomeMessage) {     ViewBag.Message = welcomeMessage;     return View(); } This represents the entirety of the change that you will make to either the controller or view.  If you run the application now, the home page will display and no message will be presented to the user because no value was supplied to the Action method. Let's now write a ValueProvider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, we'll update the using statements to ensure that they include the following: using System.Collections.Specialized; using System.Globalization; using System.Web.Mvc; With the appropriate using statements in place, we'll update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us. 
The completed class looks like: public class StaticValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider;     public StaticValueProvider(ControllerContext controllerContext)     {         var parameters = new NameValueCollection();         parameters.Add("welcomeMessage", "Hello from the value provider!");         _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);     }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action.  This is the important "secret sauce" that will make things work. We've got our new value provider now, but that's not quite enough to be finished. Mvc obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example, I'm going to add the following class definition within the StaticValueProvider.cs source file: public class StaticValueProviderFactory : ValueProviderFactory {     public override IValueProvider GetValueProvider(ControllerContext controllerContext)     {         return new StaticValueProvider(controllerContext);     } } Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs: ValueProviderFactories.Factories.Add(new StaticValueProviderFactory()); If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!". Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar) and you can use the work we've already done as a reference to create and register the factory: public class RenderingContextValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider = null;     public RenderingContextValueProvider(ControllerContext controllerContext)     {         var collection = new NameValueCollection();         var rc = RenderingContext.CurrentOrNull;         if (rc != null && rc.Rendering != null)         {             foreach(var parameter in rc.Rendering.Parameters)             {                 collection.Add(parameter.Key, parameter.Value);             }         }         _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);         }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how this can be integrated into your Sitecore 6.6 MVC solution.
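    To round out the Sitecore version, here is a sketch of the factory and registration that the article leaves as an exercise, mirroring the StaticValueProviderFactory pattern shown earlier:

        public class RenderingContextValueProviderFactory : ValueProviderFactory
        {
            // Hands MVC a provider that surfaces the ambient rendering parameters.
            public override IValueProvider GetValueProvider(ControllerContext controllerContext)
            {
                return new RenderingContextValueProvider(controllerContext);
            }
        }

        // In Global.asax.cs, at the end of Application_Start:
        // ValueProviderFactories.Factories.Add(new RenderingContextValueProviderFactory());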

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >