Search Results

Search found 427 results on 18 pages for 'jeffery the wind'.

Page 8 of 18

  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required. If you open a template just on its own as a standalone file by clicking on it say in Explorer, Visual Studio will open up with the template in the editor, but you won’t get any IntelliSense on any of your related assemblies that you might be using by default. It’ll give Intellisense on base System namespace, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates: Add the templates as included files to your non-Web project Add a BIN folder to your template’s folder and add all assemblies required there Including Templates in your Host Project By including templates into your Razor hosting project, Visual Studio will pick up the project’s assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here’s what this looks like in Visual Studio after the templates have been included:   Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block: @{ CustomContext Model = Context as CustomContext; } After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case: @using RazorHostingWinForm which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context 'parameter’ to the template with the default template I’ve provided realize that you can also assign a complex object to Context. For example you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed: @{ ContextContainer container = Context as ContextContainer; CustomContext Model = container.Model; CustomDAO DAO = container.DAO; } and so forth. IntelliSense for your Custom Template Notice also that you can get IntelliSense for the top level template by specifying an inherits tag at the top of the document: @inherits RazorHosting.RazorTemplateFolderHost By specifying the above you can then get IntelliSense on your base template’s properties. For example, in my base template there are Request and Response objects. This is very useful especially if you end up creating custom templates that include your custom business objects as you can get effectively see full IntelliSense from the ‘page’ level down. For Html Help Builder for example, I’d have a Help object on the page and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy. Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template and as long as it inherits from the base template it’ll work properly. 
Since the last post I’ve also made some changes in the base template that allow hooking up some simple initialization logic so it gets much more easy to create custom templates and hook up custom objects with an IntializeTemplate() hook function that gets called with the Context and a Configuration object. These objects are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example the default implementation for RazorTemplateFolderHost does this: public override void InitializeTemplate(object context, object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; // Just use the entire ConfigData as the model, but in theory // configData could contain many objects or values to set on // template properties this.Model = config.ConfigData as TModel; } to set up a strongly typed Model and the Request object. You can do much more complex hookups here of course and create complex base template pages that contain all the objects that you need in your code with strong typing. Adding a Bin folder to your Template’s Root Path Including templates in your host project works if you own the project and you’re the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool the original project is likely not available and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC’d assembly references that need to be associated with the templates. Between the web.config and bin folder Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain: The RazorHosting.dll Your host project’s EXE or DLL – renamed to .dll if it’s an .exe Any external (bin folder) dependent assemblies Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn’t recognize an EXE reference so you have to rename the EXE to DLL to make it work. Apparently the binary signature of EXE and DLL files are identical and it just works – learn something new everyday… For GAC assembly references you can add a web.config file to your template root. The Web.config file then should contain any full assembly references to GAC components: <configuration> <system.web> <compilation debug="true"> <assemblies> <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </assemblies> </compilation> </system.web> </configuration> And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project Visual Studio will complain about reference conflicts as it’s effectively seeing both the project references and the ones in the bin folder. 
So it’s probably a good idea to use one or the other but not both at the same time :-) Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you’re shipping an application-level scripting solution especially, it’ll be really useful for your template consumers/users to be able to get some quick help on creating customized templates – after all that’s what templates are all about – easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea, and it’s great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Razor
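To pull the pieces above together, here is a minimal sketch of what a template header might look like once the @inherits directive, the namespace import and the Context cast are combined (CustomContext and RazorHostingWinForm are the types from the WinForm sample; substitute your own):

    @inherits RazorHosting.RazorTemplateFolderHost
    @using RazorHostingWinForm
    @{
        // Cast the untyped Context the host passes in to its concrete type so that
        // IntelliSense works against the real object model from here on down.
        CustomContext Model = Context as CustomContext;
    }
    <div>
        @* Model is now strongly typed - member access below gets full IntelliSense. *@
        Value: @Model.ToString()
    </div>

With the templates included in the project, or with a bin folder and web.config in place as described above, Visual Studio can resolve both the base template class and the casted types.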

    Read the article

  • Adding proper THEAD sections to a GridView

    - by Rick Strahl
    I’m working on some legacy code for a customer today and dealing with a page that has my favorite ‘friend’ on it: A GridView control. The ASP.NET GridView control (and also the older DataGrid control) creates some pretty messed up HTML. One of the more annoying things it does is to generate all rows including the header into the page in the <tbody> section of the document rather than in a properly separated <thead> section. Here’s is typical GridView generated HTML output: <table class="tablesorter blackborder" cellspacing="0" rules="all" border="1" id="Table1" style="border-collapse:collapse;"> <tr> <th scope="col">Name</th> <th scope="col">Company</th> <th scope="col">Entered</th><th scope="col">Balance</th> </tr> <tr> <td>Frank Hobson</td><td>Hobson Inc.</td> <td>10/20/2010 12:00:00 AM</td><td>240.00</td> </tr> ... </table> Notice that all content – both the headers and the body of the table – are generated directly under the <table> tag and there’s no explicit use of <tbody> or <thead> (or <tfooter> for that matter). When the browser renders this the document some default settings kick in and the DOM tree turns into something like this: <table> <tbody> <tr> <-- header <tr> <—detail row <tr> <—detail row </tbody> </table> Now if you’re just rendering the Grid server side and you’re applying all your styles through CssClass assignments this isn’t much of a problem. However, if you want to style your grid more generically using hierarchical CSS selectors it gets a lot more tricky to format tables that don’t properly delineate headers and body content. Also many plug-ins and other JavaScript utilities that work on tables require a properly formed table layout, and many of these simple won’t work out of the box with a GridView. For example, one of the things I wanted to do for this app is use the jQuery TableSorter plug-in which – not surprisingly – requires to work of table headers in the DOM document. Out of the box, the TableSorter plug-in doesn’t work with GridView controls, because the lack of a <thead> section to work on. Luckily with a little help of some jQuery scripting there’s a real easy fix to this problem. Basically, if we know the GridView generated table has a header in it, code like the following will move the headers from <tbody> to <thead>: <script type="text/javascript"> $(document).ready(function () { // Fix up GridView to support THEAD tags $("#gvCustomers tbody").before("<thead><tr></tr></thead>"); $("#gvCustomers thead tr").append($("#gvCustomers th")); $("#gvCustomers tbody tr:first").remove(); $("#gvCustomers").tablesorter({ sortList: [[1, 0]] }); }); </script> And voila you have a table that now works with the TableSorter plug-in. 
If you use GridViews a lot you might want something a little more generic, so the following does the same thing but should work more generically on any GridView/DataGrid missing its <thead> tag: function fixGridView(tableEl) { var jTbl = $(tableEl); if (jTbl.find("tbody>tr>th").length > 0) { jTbl.find("tbody").before("<thead><tr></tr></thead>"); jTbl.find("thead tr").append(jTbl.find("th")); jTbl.find("tbody tr:first").remove(); } } which you can call like this: $(document).ready(function () { fixGridView( $("#gvCustomers") ); $("#gvCustomers").tablesorter({ sortList: [[1, 0]] }); }); Server Side THEAD Rendering [updated from comments 11/21/2010] Several commenters pointed out that you can also do this on the server side by using the GridView.HeaderRow.TableSection property to force rendering with a proper table header. I was unaware of this option actually – not exactly an easy one to discover. One issue here is that the timing of this needs to happen during the databinding process, so you need to use an event handler: this.gvCustomers.DataBound += (object o, EventArgs ev) => { gvCustomers.HeaderRow.TableSection = TableRowSection.TableHeader; }; this.gvCustomers.DataSource = custList; this.gvCustomers.DataBind(); You can apply the same logic for the FooterRow. It’s beyond me why this rendering mode isn’t the default for a GridView – why would you ever want to have a table that doesn’t use a THEAD section??? But I digress :-) I don’t use GridViews much anymore – opting for more flexible approaches using ListViews or even plain code based views or other custom displays that allow more control over layout, but I still see a lot of old code that does use them old clunkers including my own :) (gulp) and this does make life a little bit easier, especially if you’re working with any of the jQuery table related plug-ins that expect a proper table structure. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, jQuery
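If you go the server-side route, the event hookup above can be wrapped into a small reusable helper. The extension method below is a sketch of one way to do that – the class and method names are my own, not part of ASP.NET:

    using System.Web.UI.WebControls;

    public static class GridViewExtensions
    {
        // Wires up DataBound so the header row renders inside <thead> and the
        // footer row (if present) inside <tfoot>, giving plug-ins like TableSorter
        // a properly structured table to work with.
        public static void EnableTableSections(this GridView grid)
        {
            grid.DataBound += (sender, args) =>
            {
                if (grid.HeaderRow != null)
                    grid.HeaderRow.TableSection = TableRowSection.TableHeader;
                if (grid.FooterRow != null)
                    grid.FooterRow.TableSection = TableRowSection.TableFooter;
            };
        }
    }

Call gvCustomers.EnableTableSections() once before DataBind() and the THEAD/TFOOT sections are emitted automatically.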

    Read the article

  • HTML5 Input type=date Formatting Issues

    - by Rick Strahl
    One of the nice features in HTML5 is the ability to specify a specific input type for HTML text input boxes. There is a host of very useful input types available including email, number, date, datetime, month, range, search, tel, time, url and week. For a more complete list you can check out the MDN reference. Date input types also support automatic validation which can be useful in some scenarios but maybe can get in the way at other times. One of the more common input types, and one that can most benefit from a custom UI for selection, is of course date input. Almost every application could use a decent date representation and HTML5's date input type seems to push in the right direction. It'd be nice if you could just say: <form action="DateTest.html"> <label for="FromDate">Enter a Date:</label> <input type="date" id="FromDate" name="FromDate" value="11/08/2012" class="date" /> <hr /> <input type="submit" id="btnSubmit" name="btnSubmit" value="Save Date" class="smallbutton" /> </form> but if you'd expect this to just work, you're likely to be pretty disappointed.

Problem #1: Browser Support
For starters there's browser support. Out of the major browsers only the latest versions of WebKit and Opera based browsers seem to support date input. Neither FireFox, nor any version of Internet Explorer (including the new touch-enabled IE10 in Windows RT), supports input type=date. Browser support is an issue, but it would be OK if it wasn't for problem #2.

Problem #2: Date Formatting
If you look at my date input from before: <input type="date" id="FromDate" name="FromDate" value="11/08/2012" class="date" /> You can see that my date is formatted in local date format (ie. en-us). Now when I run this, sadly the form that comes up in Chrome (and also iOS mobile browsers) comes up like this: Chrome isn't recognizing my local date string. Instead it's expecting my date format to be provided in ISO 8601 format which is: 2012-11-08 So if I change the date input field to: <input type="date" id="FromDate" name="FromDate" value="2012-10-08" class="date" /> I correctly get the date field filled in: Also when I pick a date with the DatePicker the date value returned is also set to the ISO date format. Yet notice how the date is still formatted to the local date time format (ie. en-US format). So if I pick a new date: and then save, the value field is set back to: 2012-11-15 using the ISO format. The same is true for Opera and iOS browsers and I suspect any other WebKit style browser and their date pickers. So to summarize, input type=date:
- Expects ISO 8601 format dates to display initial values
- Sets selected date values to ISO 8601
Now what? This would sort of make sense, if all browsers supported input type=date. It'd be easy because you could just format dates appropriately when you set the date value into the control by applying the appropriate culture formatting (ie. .ToString("yyyy-MM-dd") ). .NET is actually smart enough to pick up the date on the other end for modelbinding when ISO 8601 is used. For other environments this might be a bit more tricky. input type=date is clearly the way to go forward. Date controls implemented in HTML are going the way of the dodo, given the intricacies of mobile platforms and scaling for both desktop and mobile. I've been using jQuery UI Datepicker for ages but once going to mobile, that's no longer an option as the control doesn't scale down well for mobile apps (at least not without major re-styling).
It also makes a lot of sense for the browser to provide this functionality - creating a consistent date input experience across apps only makes sense, which is why I find it baffling that neither FireFox nor IE 10 deign it necessary to support date input natively. The problem is that a large number of even the latest and greatest browsers don't support this. So now you're stuck with not knowing what date format you have to serve, since neither the local format nor the ISO format works in all cases. For my current app I just broke down and used the ISO format and so I'll live with the non-local date format. <input type="date" id="ToDate" name="ToDate" value="2012-11-08" class="date"/> Here's what this looks like on Chrome: Here's what it looks like on my iPhone: Both Chrome and the phone do this the way it should be. For the phone especially this demonstrates why we'd want this - the built-in date picker there certainly beats manually trying to edit the date using finger gymnastics, and it's one of the easiest ways to pick a date I can think of (ie. easier to use than your typical date picker). Finally here's what the date looks like in FireFox: Certainly this is not the ideal date format, but it's clear enough I suppose. If users enter a date in local US format, that works as well (but won't work for other locales). It'll have to do. Over time one can only hope that other browsers will finally decide to implement this functionality natively to provide a consistent experience. Until then, incomplete solutions it is.
Related Posts: Html 5 Input Types - How useful is this really going to be?
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, HTML
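Since the server ends up having to emit and consume ISO 8601 dates regardless of the user's locale, it's handy to centralize that formatting. Here's a minimal C# sketch of the idea (the helper class is mine, purely for illustration):

    using System;
    using System.Globalization;

    public static class HtmlDateUtils
    {
        // Formats a DateTime for the value attribute of <input type="date">:
        // always ISO 8601 (yyyy-MM-dd), regardless of the current culture.
        public static string ToDateInputValue(DateTime date)
        {
            return date.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);
        }

        // Parses the value posted back from the control. Returns null if the
        // string isn't a valid ISO date (for example, when the field was left empty).
        public static DateTime? FromDateInputValue(string value)
        {
            DateTime result;
            if (DateTime.TryParseExact(value, "yyyy-MM-dd", CultureInfo.InvariantCulture,
                                       DateTimeStyles.None, out result))
                return result;
            return null;
        }
    }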

    Read the article

  • CLR Version issues with CorBindRuntimeEx

    - by Rick Strahl
    I’m working on an older FoxPro application that’s using .NET Interop and this app loads its own copy of the .NET runtime through some of our own tools (wwDotNetBridge). This all works fine and it’s fairly straightforward to load and host the runtime and then make calls against it. I’m writing this up for myself mostly because I’ve been bitten by these issues repeatedly and spend 15 minutes each However, things get tricky when calling specific versions of the .NET runtime since .NET 4.0 has shipped. Basically we need to be able to support both .NET 2.0 and 4.0 and we’re currently doing it with the same assembly – a .NET 2.0 assembly that is the AppDomain entry point. This works as .NET 4.0 can easily host .NET 2.0 assemblies and the functionality in the 2.0 assembly provides all the features we need to call .NET 4.0 assemblies via Reflection. In wwDotnetBridge we provide a load flag that allows specification of the runtime version to use. Something like this: do wwDotNetBridge LOCAL loBridge as wwDotNetBridge loBridge = CreateObject("wwDotNetBridge","v4.0.30319") and this works just fine in most cases.  If I specify V4 internally that gets fixed up to a whole version number like “v4.0.30319” which is then actually used to host the .NET runtime. Specifically the ClrVersion setting is handled in this Win32 DLL code that handles loading the runtime for me: /// Starts up the CLR and creates a Default AppDomain DWORD WINAPI ClrLoad(char *ErrorMessage, DWORD *dwErrorSize) { if (spDefAppDomain) return 1; //Retrieve a pointer to the ICorRuntimeHost interface HRESULT hr = CorBindToRuntimeEx( ClrVersion, //Retrieve latest version by default L"wks", //Request a WorkStation build of the CLR STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN | STARTUP_CONCURRENT_GC, CLSID_CorRuntimeHost, IID_ICorRuntimeHost, (void**)&spRuntimeHost ); if (FAILED(hr)) { *dwErrorSize = SetError(hr,ErrorMessage); return hr; } //Start the CLR hr = spRuntimeHost->Start(); if (FAILED(hr)) return hr; CComPtr<IUnknown> pUnk; WCHAR domainId[50]; swprintf(domainId,L"%s_%i",L"wwDotNetBridge",GetTickCount()); hr = spRuntimeHost->CreateDomain(domainId,NULL,&pUnk); hr = pUnk->QueryInterface(&spDefAppDomain.p); if (FAILED(hr)) return hr; return 1; } CorBindToRuntimeEx allows for a specific .NET version string to be supplied which is what I’m doing via an API call from the FoxPro code. The behavior of CorBindToRuntimeEx is a bit finicky however. The documentation states that NULL should load the latest version of the .NET runtime available on the machine – but it actually doesn’t. As far as I can see – regardless of runtime overrides even in the .config file – NULL will always load .NET 2.0 even if 4.0 is installed. <supportedRuntime> .config File Settings Things get even more unpredictable once you start adding runtime overrides into the application’s .config file. In my scenario working inside of Visual FoxPro this would be VFP9.exe.config in the FoxPro installation folder (not the current folder). If I have a specific runtime override in the .config file like this: <?xml version="1.0"?> <configuration> <startup> <supportedRuntime version="v2.0.50727" /> </startup> </configuration> Not surprisingly with this I can load a .NET 2.0  runtime, but I will not be able to load Version 4.0 of the .NET runtime even if I explicitly specify it in my call to ClrLoad. Worse I don’t get an error – it will just go ahead and hand me a V2 version of the runtime and assume that’s what I wanted. Yuck! 
However, if I set the supported runtime to V4 in the .config file: <?xml version="1.0"?> <configuration> <startup> <supportedRuntime version="v4.0.30319" /> </startup> </configuration> Then I can load both V4 and V2 of the runtime. Specifying NULL however will STILL only give me V2 of the runtime. Again this seems pretty inconsistent. If you’re hosting runtimes make sure you check which version of the runtime is actually loading first to ensure you get the one you’re looking for. If the wrong version loads – say 2.0 and you want 4.0 - and you then proceed to load 4.0 assemblies they will all fail to load due to version mismatches. This is how all of this started – I had a bunch of assemblies that weren’t loading and it took a while to figure out that the host was running the wrong version of the CLR and therefore caused the assemblies loading to fail. Arrggh! <supportedRuntime> and Debugger Version <supportedRuntime> also affects the use of the .NET debugger when attached to the target application. Whichever runtime is specified in the key is the version of the debugger that fires up. This can have some interesting side effects. If you load a .NET 2.0 assembly but <supportedRuntime> points at V4.0 (or vice versa) the debugger will never fire because it can only debug in the appropriate runtime version. This has bitten me on several occasions where code runs just fine but the debugger will just breeze by breakpoints without notice. The default version for the debugger is the latest version installed on the system if <supportedRuntime> is not set. Summary Besides all the hassels, I’m thankful I can build a .NET 2.0 assembly and have it host .NET 4.0 and call .NET 4.0 code. This way we’re able to ship a single assembly that provides functionality that supports both .NET 2 and 4 without having to have separate DLLs for both which would be a deployment and update nightmare. The MSDN documentation does point at newer hosting API’s specifically for .NET 4.0 which are way more complicated and even less documented but that doesn’t help here because the runtime needs to be able to host both .NET 4.0 and 2.0. Not pleased about that – the new APIs look way more complex and of course they’re not available with older versions of the runtime installed which in our case makes them useless to me in this scenario where I have to support .NET 2.0 hosting (to provide greater ‘built-in’ platform support). Once you know the behavior above, it’s manageable. However, it’s quite easy to get tripped up here because there are multiple combinations that can really screw up behaviors.© Rick Strahl, West Wind Technologies, 2005-2011Posted in .NET  FoxPro  
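Since the whole problem started with assemblies failing to load under the wrong runtime, it pays to verify the loaded CLR version explicitly before doing any real work. Here's a simple sketch of such a check, exposed from a managed class that the host could call right after ClrLoad() - this helper is my own illustration, not part of wwDotnetBridge:

    using System;

    public class RuntimeInfo
    {
        // Reports the CLR version this AppDomain is actually running on,
        // e.g. "2.0.50727.x" or "4.0.30319.x".
        public static string GetLoadedClrVersion()
        {
            return Environment.Version.ToString();
        }

        // True if the hosted runtime is .NET 4.0 or later - useful to fail fast
        // before attempting to load any v4-only assemblies.
        public static bool IsV4OrLater()
        {
            return Environment.Version.Major >= 4;
        }
    }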

    Read the article

  • Monitoring Html Element CSS Changes in JavaScript

    - by Rick Strahl
    [ updated Feb 15, 2011: Added event unbinding to avoid unintended recursion ] Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when that CSS element changes. For example, I have a some HTML element behavior plugins like a drop shadow that attaches to any HTML element, but I then need to be able to automatically keep the shadow in sync with the window if the  element dragged around the window or moved via code. Unfortunately there's no move event for HTML elements so you can't tell when it's location changes. So I've been looking around for some way to keep track of the element and a specific CSS property, but no luck. I suspect there's nothing native to do this so the only way I could think of is to use a timer and poll rather frequently for the property. I ended up with a generic jQuery plugin that looks like this: (function($){ $.fn.watch = function (props, func, interval, id) { /// <summary> /// Allows you to monitor changes in a specific /// CSS property of an element by polling the value. /// when the value changes a function is called. /// The function called is called in the context /// of the selected element (ie. this) /// </summary> /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param> /// <param name="func" type="Function"> /// Function called when the value has changed. /// </param> /// <param name="interval" type="Number"> /// Optional interval for browsers that don't support DOMAttrModified or propertychange events. /// Determines the interval used for setInterval calls. /// </param> /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param> /// <returns type="jQuery" /> if (!interval) interval = 200; if (!id) id = "_watcher"; return this.each(function () { var _t = this; var el$ = $(this); var fnc = function () { __watcher.call(_t, id) }; var itId = null; var data = { id: id, props: props.split(","), func: func, vals: [props.split(",").length], fnc: fnc, origProps: props, interval: interval }; $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); }); el$.data(id, data); hookChange(el$, id, data.fnc); }); function hookChange(el$, id, fnc) { el$.each(function () { var el = $(this); if (typeof (el.get(0).onpropertychange) == "object") el.bind("propertychange." + id, fnc); else if ($.browser.mozilla) el.bind("DOMAttrModified." + id, fnc); else itId = setInterval(fnc, interval); }); } function __watcher(id) { var el$ = $(this); var w = el$.data(id); if (!w) return; var _t = this; if (!w.func) return; // must unbind or else unwanted recursion may occur el$.unwatch(id); var changed = false; var i = 0; for (i; i < w.props.length; i++) { var newVal = el$.css(w.props[i]); if (w.vals[i] != newVal) { w.vals[i] = newVal; changed = true; break; } } if (changed) w.func.call(_t, w, i); // rebind event hookChange(el$, id, w.fnc); } } $.fn.unwatch = function (id) { this.each(function () { var el = $(this); var fnc = el.data(id).fnc; try { if (typeof (this.onpropertychange) == "object") el.unbind("propertychange." + id, fnc); else if ($.browser.mozilla) el.unbind("DOMAttrModified." + id, fnc); else clearInterval(id); } // ignore if element was already unbound catch (e) { } }); return this; } })(jQuery); With this I can now monitor movement by monitoring say the top CSS property of the element. 
The following code creates a box and uses the draggable (jquery.ui) plugin and a couple of custom plugins that center and create a shadow. Here's how I can set this up with the watcher: $("#box") .draggable() .centerInClient() .shadow() .watch("top", function() { $(this).shadow(); },70,"_shadow"); ... $("#box") .unwatch("_shadow") .shadow("remove"); This code basically sets up the window to be draggable and initially centered and then a shadow is added. The .watch() call then assigns a CSS property to monitor (top in this case) and a function to call in response. The component now sets up a setInterval call and keeps on pinging this property every time. When the top value changes the supplied function is called. While this works and I can now drag my window around with the shadow following suit it's not perfect by a long shot. The shadow move is delayed and so drags behind the window, but using a higher timer value is not appropriate either as the UI starts getting jumpy if the timer's set with too small of an increment. This sort of monitor can be useful for other things as well where operations are maybe not quite as time critical as a UI operation taking place. Can anybody see a better a better way of capturing movement of an element on the page?© Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  JavaScript  jQuery  

    Read the article

  • Modern/Metro Internet Explorer: What were they thinking???

    - by Rick Strahl
    As I installed Windows 8.1 last week I decided that I really should take a closer look at Internet Explorer in the Modern/Metro environment again. Right away I ran into two issues that are real head scratchers to me.

Modern Split Windows don't resize the Viewport but Zoom Out
This one falls in the "WTF, really?" department: It looks like Modern Internet Explorer doesn't resize the browser window as every other browser (including IE 11 on the desktop) does, but rather tries to adjust the zoom to the width of the browser. This means that if you use the Modern IE browser and you split the display between IE and another application, IE will be zoomed out, with text becoming much, much smaller, rather than resizing the browser viewport and adjusting the pixel width as you would when a browser window is typically resized. Here's what I'm talking about in a couple of pictures. First here's the full screen Internet Explorer version (this shot is resized down since it's full screen at 1080p, click to see the full image): This brings up the first issue which is: On the desktop who wants to browse a site full screen? Most sites aren't fully optimized for a 1080p widescreen experience and frankly most content that wide just looks weird. Even in typical 10" resolutions of 1280 width it's weird to look at things this way. At least this issue can be worked around with @media queries and either constraining the view, or adding additional content to make use of the extra space. Still, running a desktop browser full screen is not optimal on a desktop machine - ever. Regardless, this view, while oversized, is what I expect: Everything is rendered in the right ratios, with font-size and the responsive design styling properly respected. But now look what happens when you split the desktop windows and show half desktop and half Modern IE (this screen shot is not resized but cropped - this is actual size content as you can see in the cropped Twitter window on the right half of the screen): What's happening here is that IE is zooming out of the content to make it fit into the smaller width, shrinking the content rather than resizing the viewport's pixel width. In effect it looks like the pixel width stays at 1080px and the viewport expands out height-wise in response, resulting in some crazy long portrait view. There goes responsive design - out the window, literally. If you've built your site using @media queries and fixed viewport sizes, Internet Explorer completely screws you in this split view. On my 1080p monitor, the site shown at a little under half width becomes completely unreadable as the fonts are too small and break up. As you go into split view and you resize the window handle, the content of the browser gets smaller and smaller (and effectively longer and longer on the bottom), effectively throwing off any responsive layout to the point of unreadability even on a big display, let alone a small tablet screen. What could POSSIBLY be the benefit of this screwed up behavior? I checked around a bit trying different pages in this shrunk down view. Other than the Microsoft home page, every page I went to was nearly unreadable at a quarter width. The only page I found that worked 'normally' was the Microsoft home page, which undoubtedly is optimized just for Internet Explorer specifically.

Bottom Address Bar opaquely overlays Content
Another problematic feature for me is the browser address bar on the bottom.
Modern IE shows the status bar opaquely on the bottom, overlaying the content area of the Web Page - until you click on the page. Until you do though, the address bar overlays the bottom content solidly. And not just a little bit, but by a good sizable chunk. In the application from the screen shot above I have an application toolbar on the bottom, and the IE Address bar completely hides that bottom toolbar when the page is first loaded, until the user clicks into the content at which point the address bar shrinks down to a fat border style bar with a … on it. Toolbars on the bottom are pretty common these days, especially for mobile optimized applications, so I'd say this is a common use case. But even if you don't have toolbars on the bottom, maybe there's other fixed content on the bottom of the page that is vital to display. While other browsers often also show address bars and then later hide them, these other browsers tend to resize the viewport when the address bar status changes, so the content can respond to the size change. Not so with Modern IE. The address bar overlays content and stays visible until content is clicked. No resize notification or viewport height change is sent to the browser. So basically Internet Explorer is telling me: "Our toolbar is more important than your content!" - AND gives me no chance to react to that behavior. The result on this page/application is that the user sees no actionable operations until he or she clicks into the content area, which is terrible from a UI perspective as the user has no idea what options are available on initial load. It's doubly confounding in that IE is running in full screen mode and has the entire height of the screen at its disposal - there's plenty of real estate available to not require this sort of hiding of content in the first place. Heck, even Windows Phone with its more constrained size doesn't hide content - in fact the address bar on Windows Phone 8 is always visible.

What were they thinking?
Every time I use anything in the Modern Metro interface in Windows 8/8.1 I get angry. I can pretty much ignore Metro/Modern for my everyday usage, but unfortunately Internet Explorer in the Modern shell is something I have to live with, because there will be users using it to access my sites. I think it's inexcusable of Microsoft to build such a crappy shell around the browser that impacts the actual usability of Web content. In both of the cases above I can only scratch my head at what could have possibly motivated anybody designing the UI for the browser to make these screwed up choices that manipulate the content in a totally unmaintainable way. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, HTML5

    Read the article

  • Dealing with SerializationExceptions in C#

    - by Tony
    I get a SerializationException ("(...) is not marked as serializable") error in the following code:
[Serializable] public class Wind { public MyText text; public Size MSize; public Point MLocation; public int MNumber; /.../ }
[Serializable] public class MyText { public string MString; public Font MFont; public StringFormat StrFormat; public float MySize; public Color FColor, SColor, TColor; public bool IsRequest; public decimal MWide; /.../ }
and the List to be serialized: List<Wind> MyList = new List<Wind>();
Code Snippet: FileStream FS = new FileStream(AppDomain.CurrentDomain.BaseDirectory + "Sticks.dat", FileMode.Create); BinaryFormatter BF = new BinaryFormatter(); BF.Serialize(FS, MyList); FS.Close();
throws an Exception: System.Runtime.Serialization.SerializationException was unhandled Message="Type 'System.Drawing.StringFormat' in Assembly 'System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not marked as serializable."
How do I solve this problem?

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk and object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot. The old Way - Reflection Here’s a really contrived example, but assume for a second, you’d want to dynamically retrieve a Page.Request.Url.AbsoluteUrl based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this: string path = Page.Request.Url.AbsolutePath; Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right I said this was contrived :-)) If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils: /// <summary> /// Retrieve a property value from an object dynamically. This is a simple version /// that uses Reflection calls directly. It doesn't support indexers. /// </summary> /// <param name="instance">Object to make the call on</param> /// <param name="property">Property to retrieve</param> /// <returns>Object - cast to proper type</returns> public static object GetProperty(object instance, string property) { return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null); } If you want more control over properties and support both fields and properties as well as array indexers a little more work is required: /// <summary> /// Parses Properties and Fields including Array and Collection references. /// Used internally for the 'Ex' Reflection methods. /// </summary> /// <param name="Parent"></param> /// <param name="Property"></param> /// <returns></returns> private static object GetPropertyInternal(object Parent, string Property) { if (Property == "this" || Property == "me") return Parent; object result = null; string pureProperty = Property; string indexes = null; bool isArrayOrCollection = false; // Deal with Array Property if (Property.IndexOf("[") > -1) { pureProperty = Property.Substring(0, Property.IndexOf("[")); indexes = Property.Substring(Property.IndexOf("[")); isArrayOrCollection = true; } // Get the member MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0]; if (member.MemberType == MemberTypes.Property) result = ((PropertyInfo)member).GetValue(Parent, null); else result = ((FieldInfo)member).GetValue(Parent); if (isArrayOrCollection) { indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty); if (result is Array) { int Index = -1; int.TryParse(indexes, out Index); result = CallMethod(result, "GetValue", Index); } else if (result is ICollection) { if (indexes.StartsWith("\"")) { // String Index indexes = indexes.Trim('\"'); result = CallMethod(result, "get_Item", indexes); } else { // assume numeric index int index = -1; int.TryParse(indexes, out index); result = CallMethod(result, "get_Item", index); } } } return result; } /// <summary> /// Returns a property or field value using a base object and sub members including . syntax. 
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company") /// This method also supports indexers in the Property value such as: /// Customer.DataSet.Tables["Customers"].Rows[0] /// </summary> /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param> /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param> /// <returns></returns> public static object GetPropertyEx(object Parent, string Property) { Type type = Parent.GetType(); int at = Property.IndexOf("."); if (at < 0) { // Complex parse of the property return GetPropertyInternal(Parent, Property); } // Walk the . syntax - split into current object (Main) and further parsed objects (Subs) string main = Property.Substring(0, at); string subs = Property.Substring(at + 1); // Retrieve the next . section of the property object sub = GetPropertyInternal(Parent, main); // Now go parse the left over sections return GetPropertyEx(sub, subs); } As you can see there’s a fair bit of code involved into retrieving a property or field value reliably especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway with ReflectionUtils I can  retrieve Page.Request.Url.AbsolutePath using code like this: string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy. Enter dynamic – way easier! .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes: object objPage = Page; // force to object for contrivance :) dynamic page = objPage; // convert to dynamic from untyped object string scriptUrl = page.Request.Url.AbsolutePath; The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types so when you look at Intellisense you’ll see something like this when typing Request.: In other words any dynamic value access on an object returns another dynamic object which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment which is nice making for natural syntnax that looks *exactly* like the fully typed syntax, but is completely dynamic. 
Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance: string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"]; The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance: void Reflection() { Stopwatch stop = new Stopwatch(); stop.Start(); for (int i = 0; i < reps; i++) { // string url = ReflectionUtils.GetProperty(Page,"Title") as string;// "Request.Url.AbsolutePath") as string; string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string; } stop.Stop(); Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString()); } void Dynamic() { Stopwatch stop = new Stopwatch(); stop.Start(); dynamic page = Page; for (int i = 0; i < reps; i++) { string url = page.Title; //Request.Url.AbsolutePath; } stop.Stop(); Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString()); } The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  
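To see the dynamic traversal outside of ASP.NET, here's a small self-contained console sketch; the Page/Request/Url classes below are stand-ins for the real ASP.NET types, defined only so the example runs on its own:

    using System;

    // Stand-in types that mimic the Page.Request.Url.AbsolutePath shape used above.
    public class Url     { public string AbsolutePath = "/default.aspx"; }
    public class Request { public Url Url = new Url(); }
    public class Page    { public Request Request = new Request(); }

    class Program
    {
        static void Main()
        {
            object objPage = new Page();   // all we have is an untyped reference

            dynamic page = objPage;        // defer member resolution to runtime
            string path = page.Request.Url.AbsolutePath;

            Console.WriteLine(path);       // prints: /default.aspx

            // Note: a typo like page.Request.Uri compiles fine but throws a
            // RuntimeBinderException at runtime - dynamic trades compile-time
            // checking for this flexibility.
        }
    }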

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functionality and a viable choice for collecting POST values. What about [FromBody]? Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info in how FromBody works and several related issues to how POST data is mapped, you can check out Mike Stalls post: How WebAPI does Parameter Binding Not really sure where the Web API team thought [FromBody] would really be a good fit other than a down and dirty way to send a full string buffer. Extending Web API to make multiple POST Vars work? Don't think so Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible primarily because Web API's Routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it. Do we really need multi-value POST mapping? I think that that POST value mapping is a feature that one would expect of any API tool to have. If you look at common APIs out there like Flicker and Google Maps etc. they all work with POST data. POST data is very prominent much more so than JSON inputs and so supporting as many options that enable would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allows you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior and Web APIs non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question. 
I hope that for V.next of Web API, Microsoft will consider this a feature that's worth adding.
Related Articles:
Passing multiple POST parameters to Web API Controller Methods
Mike Stall's post: How Web API does Parameter Binding
Where does ASP.NET Web API Fit?
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api
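As a compact reference for the FormDataCollection approach described above, here's a sketch of a controller method that mixes string values with a manually converted numeric value (the RememberDays field is made up purely for illustration):

    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Formatting;
    using System.Web.Http;

    public class SamplesController : ApiController
    {
        [HttpPost]
        public HttpResponseMessage Authenticate(FormDataCollection form)
        {
            // String values come straight out of the POST buffer...
            string username = form.Get("Username");
            string password = form.Get("Password");

            // ...anything else has to be converted by hand, since the collection
            // only deals in strings.
            int rememberDays;
            int.TryParse(form.Get("RememberDays"), out rememberDays);

            // Placeholder check - substitute your real authentication logic.
            bool valid = !string.IsNullOrEmpty(username) && !string.IsNullOrEmpty(password);

            return Request.CreateResponse(valid ? HttpStatusCode.OK
                                                : HttpStatusCode.Unauthorized);
        }
    }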

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor() etc.) don't bind to model values on Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate here's an example. I have a small section in a document where I display an editable email address: This is what the form displays on a GET operation and as expected I get the email value displayed in both the textbox and plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address which needs to be manipulated via the model in the Controller code. Here's the Razor markup: <div class="fieldcontainer"> <label> Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small> </label> <div> @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"}) @Model.User.Email </div> </div>   So, I have this form and the user can change their email address. On postback the Post controller code then asks the business layer whether the change is allowed. If it's not I want to reset the email address back to the old value which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:// did user change email? if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail) { if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; } else // allow email change but require verification by forcing a login user.IsVerified = false; }… model.user = user; return View(model); The logic is straightforward - if the new email address is not valid because it already exists I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model which effectively does this:model.user.Email = oldEmail; return View(model); So when I press the Save button after entering in my new email address ([email protected]) here's what comes back in the rendered view: Notice that the textbox value and the raw displayed model value are different. The TextBox displays the POST value, while the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an Http POST is active. Now I don't know about you but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has in effect - no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation) What should the behavior be? After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed.
So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still it is a little non-obvious because the way you create the UI elements with MVC, it certainly looks like you are binding to the model value:@Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" }) and so unless one understands a little bit about how the model binder works this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that ModelState/POST values provide the display value. Workarounds The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code and so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression: <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" /> And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to follow the naming conventions that MVC imposes in order for ModelBinding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underscores). Use the ModelState Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the posted back values submitted on the page that can be mapped to the model. In other words there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally you can actually get it by clearing the ModelState in the controller code:ModelState.Clear(); This clears out all the captured ModelState values, and effectively binds to the model. Note this will produce very similar results - in fact if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already and binding to the updated values most likely produces the same values you would get with POST back values. The big difference though is that any values that couldn't bind - like say putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy-handed.
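For reference, here is roughly what peeking at a single entry looks like from inside a POST action - a minimal sketch that uses this example's "User.Email" key; substitute your own bound property name:

// Inside the POST controller action
if (ModelState.ContainsKey("User.Email"))
{
    var entry = ModelState["User.Email"];   // a System.Web.Mvc.ModelState instance
    if (entry.Value != null)
    {
        // the string the user actually posted for this field
        string typed = entry.Value.AttemptedValue;

        // the underlying value the value provider captured for binding
        object captured = entry.Value.RawValue;
    }
}

Each entry carries both pieces of information, which is what makes the selective fix-up shown next possible.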
You might want to fix up one or two values in a model, but rarely would you want the entire ModelState to be replaced with model values. So, you can also clear out individual values on an as-needed basis:if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; ModelState.Remove("User.Email"); } This allows you to remove a single value from the ModelState and effectively replace that value for display with the model's value. Why? While researching this I came across a forum post from Microsoft's Brad Wilson who describes the default binding behavior best: The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers. There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET  MVC

    Read the article

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times and ended up being a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>) is when assemblies actually load into a .NET process. There are a number of different ways that assemblies are loaded in .NET. When you create a typical project assemblies usually come from: The Assembly reference list of the top level 'executable' project The Assembly references of referenced projects Dynamically loaded at runtime via AppDomain/Reflection loading In addition .NET automatically loads mscorlib (most of the System namespace) as part of the boot process that hosts the .NET runtime in EXE apps, or in some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code). Assembly Loading The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether you have an assembly reference in a top level project or in a dependent assembly, assemblies typically load on an as-needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies. To check this out I ran a simple test: I have a utility assembly Westwind.Utilities which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding and a logging piece that allows logging Web content (dependency on HttpContext.Current) this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library: You can see that the top level Console app has a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies including System.Web. I then add a main program that accesses only a simple utility method in the Westwind.Utilities library that doesn't require any of the classes that access System.Web: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); } StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command?
I'll wait here while you think about it… … … So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list I get: We can see here that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib) there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities were loaded. If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled-in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that is actually referenced from your application, and on any dependencies cascading down from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however it's all taken care of by the compiler and linker figuring out what types and members are actually referenced and including only those assemblies that are in fact referenced in your code or required by any of your dependencies. The good news here is: if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code or any dependent assembly code that you are calling, that assembly is never loaded into memory! Some Hosting Environments pre-load Assemblies The load behavior can vary however. In Console and desktop applications we have full control over assembly loading so we see the core CLR behavior. However other environments like ASP.NET for example will preload referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used. Understanding when Assemblies Load To clarify what I described in the first example and actually see it happen, let's look at a couple of other scenarios. To see assemblies loading at runtime in real time let's create a utility function to print out loaded assemblies to the console: public static void PrintAssemblies() { var assemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (var assembly in assemblies) { Console.WriteLine(assembly.GetName()); } } Now let's look at the first scenario where I have a class method that internally uses System.Web.
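As an aside, if you'd rather watch each load happen as it occurs instead of diffing snapshots from PrintAssemblies(), you can also hook the AppDomain's AssemblyLoad event at the top of Main() - a minimal sketch using only standard .NET APIs:

// Fires once for every assembly loaded into the current AppDomain from this point on
AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
    Console.WriteLine("Loaded: " + args.LoadedAssembly.FullName);

With that in place the load order simply shows up in the console output as the scenarios below run.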
In the first scenario let's add a method to my main program like this: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); PrintAssemblies(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } UpdateFromRequest() internally accesses HttpContext.Current to read some information off the ASP.NET Request object, so it clearly needs a reference to System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example? No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else) System.Web is not loaded. .NET dynamically loads assemblies as code that needs it is called. No code references the WebLogEntry() method and so System.Web is never loaded. Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency exists. Let's change the code to: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.WriteLine("--- Before:"); PrintAssemblies(); WebLogEntry(); Console.WriteLine("--- After:"); PrintAssemblies(); Console.ReadLine(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } Looking at the code now, when do you think System.Web will be loaded? Will the before list include it? Yup, System.Web gets loaded, but only after it's actually referenced. In fact, right up until the call to UpdateFromRequest() System.Web is not loaded - it only loads when the method is actually called and requires the reference in the executing code. Moral of the Story So what have we learned - or maybe remembered again? Dependent Assembly References are not pre-loaded when an application starts (by default) Dependent Assemblies that are not referenced by executing code are never loaded Dependent Assemblies are just-in-time loaded when first referenced in code All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about every day, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I've learned about this. And apparently I'm not the only one, as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen or assumed incorrectly that just having a reference automatically loads the assembly. The moral of the story for me is: Trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process.
IOW, System.Web only loads when I use that specific feature - and if I am, well, I clearly have to be running in a Web environment anyway to use it realistically. The alternative would be considerably uglier: Pulling out the WebLogEntry class and sticking it into another assembly and breaking up the logging code. In this case - definitely not worth it. So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs and in most cases you can just rely on .NET to do the right thing. Sometimes though assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's a subject for a whole other post… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET  CSharp

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example imagine you have this form and you want to post this data to a Web API end point like this via AJAX: <form> Name: <input type="text" name="name" value="Rick" /> Value: <input type="text" name="value" value="12" /> Entered: <input type="text" name="entered" value="12/01/2011" /> <input type="button" id="btnSend" value="Send" /> </form> <script type="text/javascript"> $("#btnSend").click( function() { $.post("samples/PostMultipleSimpleValues?action=kazam", $("form").serialize(), function (result) { alert(result); }); }); </script> or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:$.post("samples/PostMultipleSimpleValues?action=kazam", { name: "Rick", value: 1, entered: "12/01/2012" }, function (result) { alert(result); }); On the wire this generates a simple POST request with Url Encoded values in the content:POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: application/json Connection: keep-alive Content-Type: application/x-www-form-urlencoded; charset=UTF-8 X-Requested-With: XMLHttpRequest Referer: http://localhost/AspNetWebApi/FormPostTest.html Content-Length: 41 Pragma: no-cache Cache-Control: no-cache  name=Rick&value=12&entered=12%2F10%2F2011 Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:[HttpPost] public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null) { return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action); }You'll find that you get an HTTP 404 error and { "Message": "No HTTP resource was found that matches the request URI…"} Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use Model Binding for this - mapping the post parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. ModelBinding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
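For completeness, the FormDataCollection alternative mentioned above looks roughly like this - a minimal sketch (FormDataCollection lives in System.Net.Http.Formatting; the action name is made up for illustration):

using System.Collections.Specialized;
using System.Net.Http.Formatting;
using System.Web.Http;

public class SamplesController : ApiController
{
    // Takes the whole url-encoded POST body as a name/value collection -
    // no per-endpoint container class required, but also no typed parameters.
    [HttpPost]
    public string PostRawFormValues(FormDataCollection form)
    {
        NameValueCollection values = form.ReadAsNameValueCollection();
        return string.Format("Name: {0}, Value: {1}, Date: {2}",
                             values["name"], values["value"], values["entered"]);
    }
}

It works for down and dirty scenarios, but you lose the typed parameters that the custom binding shown below restores.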
Creating a Custom Parameter Binder Luckily Web API is greatly extensible and there's a way to create a custom Parameter Binding to provide this functionality! Although this solution took me a long while to find and then only with the help of some folks Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:public class SimplePostVariableParameterBinding : HttpParameterBinding { private const string MultipleBodyParameters = "MultipleBodyParameters"; public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor) : base(descriptor) { } /// <summary> /// Check for simple binding parameters in POST data. Bind POST /// data as well as query string data /// </summary> public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider, HttpActionContext actionContext, CancellationToken cancellationToken) { // Body can only be read once, so read and cache it NameValueCollection col = TryReadBody(actionContext.Request); string stringValue = null; if (col != null) stringValue = col[Descriptor.ParameterName]; // try reading query string if we have no POST/PUT match if (stringValue == null) { var query = actionContext.Request.GetQueryNameValuePairs(); if (query != null) { var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower()); if (matches.Count() > 0) stringValue = matches.First().Value; } } object value = StringToType(stringValue); // Set the binding result here SetValue(actionContext, value); // now, we can return a completed task with no result TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>(); tcs.SetResult(default(AsyncVoid)); return tcs.Task; } private object StringToType(string stringValue) { object value = null; if (stringValue == null) value = null; else if (Descriptor.ParameterType == typeof(string)) value = stringValue; else if (Descriptor.ParameterType == typeof(int)) value = int.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int32)) value = Int32.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int64)) value = Int64.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(decimal)) value = decimal.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(double)) value = double.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(DateTime)) value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(bool)) { value = false; if (stringValue == "true" || stringValue == "on" || stringValue == "1") value = true; } else value = stringValue; return value; } /// <summary> /// Read and cache the request body /// </summary> /// <param name="request"></param> /// <returns></returns> private NameValueCollection TryReadBody(HttpRequestMessage request) { object result = null; // try to read out of cache first if (!request.Properties.TryGetValue(MultipleBodyParameters, out result)) { // parsing the string like firstname=Hongmei&lastname=Ge result = request.Content.ReadAsFormDataAsync().Result; 
request.Properties.Add(MultipleBodyParameters, result); } return result as NameValueCollection; } private struct AsyncVoid { } }   The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request the Binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available the value of the parameter is read, and the value is translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:GlobalConfiguration.Configuration.ParameterBindingRules .Insert(0, (HttpParameterDescriptor descriptor) => { var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods; // Only apply this binder on POST and PUT operations if (supportedMethods.Contains(HttpMethod.Post) || supportedMethods.Contains(HttpMethod.Put)) { var supportedTypes = new Type[] { typeof(string), typeof(int), typeof(decimal), typeof(double), typeof(bool), typeof(DateTime) }; if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0) return new SimplePostVariableParameterBinding(descriptor); } // let the default bindings do their work return null; });   The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook up code is in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings. Summary Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Micrsoft to provide a base example as I asked around so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible to make it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved  - in the last two months I've had two customers with projects that decided not to use Web API in AJAX heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind to still switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST to parameter mapping makes things much easier. 
I said this in my last post on POST data and say it again here: I think POST to method parameter mapping should have been shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality in the next version of Web API natively, or at least as a built-in HttpParameterBinding that can just be added. This is especially true since this binding doesn't affect existing bindings. Resources SimplePostVariableParameterBinding Source on GitHub Global.asax hookup source Mapping URL Encoded Post Values in ASP.NET Web API © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api  AJAX

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
I've been thrown back into an older project that uses DataSets and DataRows as its entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with in 'newer' improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs a dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objects is twofold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks a bit more natural and describes what's happening a little more clearly as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenarios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access.
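Before looking at the DataRow-specific version, here is a bare-bones sketch of the mechanism itself - a hypothetical dictionary-backed example (the DynamicBag name is made up) that shows how TryGetMember and TrySetMember route property access:

using System.Collections.Generic;
using System.Dynamic;

public class DynamicBag : DynamicObject
{
    readonly Dictionary<string, object> values = new Dictionary<string, object>();

    // binder.Name is the property name used in the calling code (bag.Name etc.)
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return values.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        values[binder.Name] = value;
        return true;
    }
}

// Usage:
// dynamic bag = new DynamicBag();
// bag.Name = "Rick";        // routed through TrySetMember
// string name = bag.Name;   // routed through TryGetMember

The DynamicDataRow class used in the example above applies exactly this pattern, with a DataRow instead of a dictionary as the backing store.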
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNull to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation writes the value into the /// underlying DataRow (nulls are stored as DbNull) /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted into DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'DataSource' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement.
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - are going to be faster than the equivalent dynamic code. However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with TryInvokeMember, which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp  .NET

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which quickly autocompletes to any file, type or member. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse, most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well, or at least get some ideas about what can be changed to provide better project flow. The first thing I’ve been doing is adding a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days, building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open, that’s typically the only server code I work with regularly.
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page level JavaScript and CSS files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that only view retrieval is blocked and static .js and .css files can still be served:<system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this:<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and<script src="~/Views/Admin/QuizPrognosisItems.js"></script> which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downsides to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive than constantly having to search for files in your project. Client Side Folders The particular project shown in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views, and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources.
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make up the SPA application end up below the App folder. Here’s what that looks like for me – this is an AngularJS project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single page SPA application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – AngularJS controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in the js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything – you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution, which is a very fast and powerful way to find what you’re looking for in your project, bypassing the solution explorer altogether. As long as you remember to use it (which I sometimes don’t) and you know what you’re looking for, it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced, and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you.
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to diverge from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET  MVC

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
Out of the box ASP.NET WebAPI does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow Browser based JavaScript client applications to bypass cross-site scripting limitations and retrieve data from a Web server other than the current one. AJAX in Web Applications uses the XmlHttp object which by default doesn't allow access to remote domains. There are a number of ways around this limitation, and <script> tag loading with JSONP is one of the easiest and semi-official ways that you can do this. JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQuery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback works the same as if the call had gone to a local server when the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:jQuery17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds.
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I also have to create a new instance of the formatter since I'm storing request-specific state on the instance (whether the callback= querystring is present), so I return a new instance of this formatter. Other than that the code should be straightforward: it basically writes out the function pre- and post-amble and then defers to the base stream to retrieve the JSON to wrap the function call into. The code uses the Async APIs to write this data out (this will take some getting used to seeing all over the place for me). Hooking up the JsonpFormatter Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.  protected void Application_Start(object sender, EventArgs e) { // Verb Routing RouteTable.Routes.MapHttpRoute( name: "AlbumsVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" } ); GlobalConfiguration .Configuration .Formatters .Insert(0, new Westwind.Web.WebApi.JsonpFormatter()); }   That's all it takes. Note that I added the formatter at the top of the list of formatters rather than adding it to the end – this ordering is required. The JSONP formatter needs to fire before any other JSON formatter since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added. Resources: JsonpFormatter Code on GitHub © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api
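For context, the /albums endpoint used in the jQuery example above is just an ordinary Web API controller – the JSONP wrapping happens entirely in the formatter, so the controller needs no JSONP-specific code. Here's a minimal sketch of what such a controller might look like; the Album class and the sample data are hypothetical stand-ins, not code from this post:

using System.Collections.Generic;
using System.Web.Http;

namespace Westwind.Web.WebApi.Samples
{
    // Hypothetical DTO - the real album structure isn't shown in the post
    public class Album
    {
        public string Id { get; set; }
        public string AlbumName { get; set; }
    }

    // Matches the "AlbumApi" controller name used in the route registration.
    // GET /albums returns the list; with the JsonpFormatter registered, adding
    // ?callback=myFunc to the URL wraps the JSON result in myFunc( ... ).
    public class AlbumApiController : ApiController
    {
        public IEnumerable<Album> GetAlbums()
        {
            return new List<Album>
            {
                new Album { Id = "34043957", AlbumName = "Dirty Deeds Done Dirt Cheap" },
                new Album { Id = "34043958", AlbumName = "Back in Black" }
            };
        }
    }
}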

    Read the article

  • Routing to a Controller with no View in Angular

    - by Rick Strahl
    I've finally had some time to put Angular to use this week in a small project I'm working on for fun. Angular's routing is great and makes it real easy to map URL routes to controllers and model data into views. But what if you don't actually need a view, if you effectively need a headless controller that just runs code, but doesn't render a view?Preserve the ViewWhen Angular navigates a route and and presents a new view, it loads the controller and then renders the view from scratch. Views are not cached or stored, but displayed and then removed. So if you have routes configured like this:'use strict'; // Declare app level module which depends on filters, and services window.myApp = angular.module('myApp', ['myApp.filters', 'myApp.services', 'myApp.directives', 'myApp.controllers']). config(['$routeProvider', function($routeProvider) { $routeProvider.when('/map', { template: "partials/map.html ", controller: 'mapController', reloadOnSearch: false, animation: 'slide' }); … $routeProvider.otherwise({redirectTo: '/map'}); }]); Angular routes to the mapController and then re-renders the map.html template with the new data from the $scope filled in.But, but… I don't want a new View!Now in most cases this works just fine. If I'm rendering plain DOM content, or textboxes in a form interface that is all fine and dandy - it's perfectly fine to completely re-render the UI.But in some cases, the UI that's being managed has state and shouldn't be redrawn. In this case the main page in question has a Google Map on it. The map is  going to be manipulated throughout the lifetime of the application and the rest of the pages. In my application I have a toolbar on the bottom and the rest of the content is replaced/switched out by the Angular Views:The problem is that the map shouldn't be redrawn each time the Location view is activated. It should maintain its state, such as the current position selected (which can move), and shouldn't redraw due to the overhead of re-rendering the initial map.Originally I set up the map, exactly like all my other views - as a partial, that is rendered with a separate file, but that didn't work.The Workaround - Controller Only RoutesThe workaround for this goes decidedly against Angular's way of doing things:Setting up a Template-less RouteIn-lining the map view directly into the main pageHiding and showing the map view manuallyLet's see how this works.Controller Only RouteThe template-less route is basically a route that doesn't have any template to render. This is not directly supported by Angular, but thankfully easy to fake. The end goal here is that I want to simply have the Controller fire and then have the controller manage the display of the already active view by hiding and showing the map and any other view content, in effect bypassing Angular's view display management.In short - I want a controller action, but no view rendering.The controller-only or template-less route looks like this: $routeProvider.when('/map', { template: " ", // just fire controller controller: 'mapController', animation: 'slide' });Notice I'm using the template property rather than templateUrl (used in the first example above), which allows specifying a string template, and leaving it blank. The template property basically allows you to provide a templated string using Angular's HandleBar like binding syntax which can be useful at times. You can use plain strings or strings with template code in the template, or as I'm doing here a blank string to essentially fake 'just clear the view'. 
In-lined ViewSo if there's no view where does the HTML go? Because I don't want Angular to manage the view the map markup is in-lined directly into the page. So instead of rendering the map into the Angular view container, the content is simply set up as inline HTML to display as a sibling to the view container.<div id="MapContent" data-icon="LocationIcon" ng-controller="mapController" style="display:none"> <div class="headerbar"> <div class="right-header" style="float:right"> <a id="btnShowSaveLocationDialog" class="iconbutton btn btn-sm" href="#/saveLocation" style="margin-right: 2px;"> <i class="icon-ok icon-2x" style="color: lightgreen; "></i> Save Location </a> </div> <div class="left-header">GeoCrumbs</div> </div> <div class="clearfix"></div> <div id="Message"> <i id="MessageIcon"></i> <span id="MessageText"></span> </div> <div id="Map" class="content-area"> </div> </div> <div id="ViewPlaceholder" ng-view></div>Note that there's the #MapContent element and the #ViewPlaceHolder. The #MapContent is my static map view that is always 'live' and is initially hidden. It is initially hidden and doesn't get made visible until the MapController controller activates it which does the initial rendering of the map. After that the element is persisted with the map data already loaded and any future access only updates the map with new locations/pins etc.Note that default route is assigned to the mapController, which means that the mapController is fired right as the page loads, which is actually a good thing in this case, as the map is the cornerstone of this app that is manipulated by some of the other controllers/views.The Controller handles some UISince there's effectively no view activation with the template-less route, the controller unfortunately has to take over some UI interaction directly. Specifically it has to swap the hidden state between the map and any of the other views.Here's what the controller looks like:myApp.controller('mapController', ["$scope", "$routeParams", "locationData", function($scope, $routeParams, locationData) { $scope.locationData = locationData.location; $scope.locationHistory = locationData.locationHistory; if ($routeParams.mode == "currentLocation") { bc.getCurrentLocation(false); } bc.showMap(false,"#LocationIcon"); }]);bc.showMap is responsible for a couple of display tasks that hide/show the views/map and for activating/deactivating icons. The code looks like this:this.showMap = function (hide,selActiveIcon) { if (!hide) $("#MapContent").show(); else { $("#MapContent").hide(); } self.fitContent(); if (selActiveIcon) { $(".iconbutton").removeClass("active"); $(selActiveIcon).addClass("active"); } };Each of the other controllers in the app also call this function when they are activated to basically hide the map and make the View Content area visible. The map controller makes the map.This is UI code and calling this sort of thing from controllers is generally not recommended, but I couldn't figure out a way using directives to make this work any more easily than this. It'd be easy to hide and show the map and view container using a flag an ng-show, but it gets tricky because of scoping of the $scope. I would have to resort to storing this setting on the $rootscope which I try to avoid. The same issues exists with the icons.It sure would be nice if Angular had a way to explicitly specify that a View shouldn't be destroyed when another view is activated, so currently this workaround is required. 
Searching around, I saw a number of whacky hacks to get around this, but the solution I'm using here seems much easier than anything I could dig up, even if it doesn't quite fit the 'Angular way'. Angular – nice, until it's not. Overall I really like Angular and the way it works, although it took me a bit of time to get my head around how all the pieces fit together. Once I got the idea of how the app/routes, the controllers and views snap together, putting together Angular pages becomes fairly straightforward. You can get quite a bit done never going beyond those basics. For most common things Angular's default routing and view presentation works very well. But when you do something a bit more complex, where there are multiple dependencies or, as in this case, where Angular doesn't appear to support a feature that's absolutely necessary, you're on your own. Finding information on more advanced topics is not trivial, especially since versions are changing so rapidly and the low level behaviors are changing frequently, so finding something that works is often an exercise in trial and error. Not that this is surprising. Angular is a complex piece of kit, as are all the frameworks that try to hack JavaScript into submission to do something that it was really never designed to do. After all, everything about a framework like Angular is an elaborate hack. A lot of shit has to happen to make this all work together and at that Angular (and Ember, Durandal etc.) are pretty amazing pieces of JavaScript code. So no harm, no foul, but I just can't help feeling like I'm working in a toy sandbox at times :-) © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Angular  JavaScript

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company that owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid-for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you’ve been living under a rock and you’ve never looked at Reflector, here’s what it looks like drilled into an assembly from disk with some disassembled source code showing: Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view so it’s extremely easy to navigate and follow the code flow even in this static assembly-only view. For many years Lutz kept the tool up to date and added more features, gradually improving an already amazing tool. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with prominent Red Gate promo on them, life with Reflector went on. The product didn’t die and it didn’t go commercial or to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn’t drastically change how types behave. Then a month back Red Gate started making noise about a new version – Version 7 – which would be commercial. No more free version - and a shit storm broke out in the community. Now normally I’m not one to be critical of companies trying to make money from a product, much less for a product that’s as incredibly useful as Reflector. There isn’t a day in .NET development that goes by for me where I don’t fire up Reflector. Whether it’s for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently I’ve been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required to call .NET components in assemblies. In short Reflector is an invaluable tool to me. Ok, so what’s the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn’t going to kill any developer. If there’s any tool in .NET that’s worth $39 it’s Reflector, right? Right, but that’s not the problem here. The problem is how Red Gate went about moving the product to a commercial model, which borders on the downright bizarre. It’s almost as if somebody in management wrote a slogan: “How can we piss off the .NET community in the most painful way we can?” And at that it seems Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn’t exactly misplaced. Take a look at the message thread that Red Gate links to from the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won’t be available any longer. Not only that, but older versions that are already in use also will continually try to update themselves to the new paid version – which when installed will then expire unless registered properly. 
There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven’t seen that myself so not sure if that’s true). In other words Red Gate is trying to make damn sure they’re getting your money if you attempt to use Reflector. There’s a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that’s a nice chunk of change that Red Gate’s sitting on. Even with all the community backlash these guys are probably making some bank right now just because people need to get life to move on. Red Gate also put up a Feedback link on the download page – which not surprisingly is chock full with hate mail condemning the move. Oddly there’s not a single response to any of those messages by the Red Gate folks except when it concerns license questions for the full version. It puzzles me what that link serves for other yet than another complete example of failure to understand how to handle customer relations. There’s no doubt that that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly and without pissing off the entire community and causing so much ill-will. People are pissed off and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful! Why do Companies do boneheaded stuff like this? Red Gate’s original decision to buy Reflector was hotly debated but at that the time most of what would happen was mostly speculation. But I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis?  Exploiting that marketing with some goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company’s visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service community for free while still getting valuable marketing. What’s so puzzling about this boneheaded escapade is that the company doesn’t need to resort to underhanded tactics like what they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, Sql Data Compare and ANTS Profiler on a regular basis and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time is right. It’s Expensive to give away stuff for Free At the same time, this episode shows some of the big problems that come with ‘free’ tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive and the pay back is often very intangible if any at all. Those that rely on donations or other voluntary compensation find that they amount contributed is absolutely miniscule as to not matter at all. 
Yet at the same time I bet most of those clamouring the loudest on that Red Gate Reflector feedback page that Reflector won’t be free anymore probably have NEVER made a donation to any open source project or free tool ever. The expectation of Free these days is just too great – which is a shame I think. There’s a lot to be said for paid software and having somebody to hold to responsible to because you gave them some money. There’s an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working and improving products. Reasons for giving away stuff are many but often it’s a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort but as products mature the fun goes out of it, and as the reality of product maintenance kicks in developers want to get something back for the time and effort they’re putting in doing non-glamorous work. It’s then when products die or languish and this is painful for all to watch. For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: Visibility and possible positioning of their products although they seemed to have mostly ignored that option. On the other hand they started this off pretty badly even 2 and a half years back when they aquired Reflector from Lutz with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks are thinking in management – the sad part is from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades… Alternatives are coming For me personally the single license isn’t a problem, but I actually have a tool that I sell (an interop Web Service proxy generation tool) to customers and one of the things I recommend to use with has been Reflector to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It’s been nice to use Reflector for this with its small footprint and zero-configuration installation. But now with V7 becoming a paid tool that option is not going to be available anymore. Luckily it looks like the .NET community is jumping to it and trying to fill the void. Amidst the Red Gate outrage a new library called ILSpy has sprung up and providing at least some of the core functionality of Reflector with an open source library. It looks promising going forward and I suspect there will be a lot more support and interest to support this project now that Reflector has gone over to the ‘dark side’…© Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - ie. non ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way that managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post. Native and Managed Modules The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code and there are a host of native mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support only the native modules are supported and fired without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled however, things get a little more confusing as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class) which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application and enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (ie. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content including static content like HTML files, images, javascript and css resources etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead. 
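To make this concrete, here's a bare-bones sketch (not from the original post) of a managed module that touches every request flowing through the managed pipeline – including static files, images and other non-ASP.NET content when no managedHandler precondition is applied:

using System;
using System.Web;

// Stamps a custom header on every response that runs through the
// managed pipeline, which makes it easy to see exactly which requests
// (static files included) your managed code is touching.
public class RequestStampModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            context.Response.AppendHeader("X-Managed-Pipeline",
                DateTime.UtcNow.ToString("o"));
        };
    }

    public void Dispose() { }
}

Registered under the <system.webServer><modules> section (like the SharewareModule example that follows), whether this actually fires for static content then depends entirely on the runAllManagedModulesForAllRequests and preCondition settings discussed next.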
ASP.NET and Your Own Modules When you create a new ASP.NET project typically the Visual Studio templates create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically the interesting thing about this is the runAllManagedModulesForAllRequest="true" flag, which seems to indicate that it controls whether any registered modules always run, even when the value is set to false. Realistically though this flag does not control whether managed code is fired for all requests or not. Rather it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set the runAllManagedModulesForAllRequest="false". You probably would expect that immediately the non-ASP.NET requests no longer get funnelled through the ASP.NET Module pipeline. But that's not what actually happens. For example, if I create a module like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if the value runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is the runAllManagedModulesForAllRequests really good for? It's essentially an override for managedHandler preCondition. If I declare my handler in web.config like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and the runAllManagedModulesForAllRequests="false" my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting. runAllManagedModulesForAllRequests and Http Application Events Another place the runAllManagedModulesForAllRequest attribute affects is the Global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. So while the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCodition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false". What's all that mean? Configuring an application to handle requests for both ASP.NET and other content requests can be tricky especially if you need to mix modules that might require both. Couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. 
This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true": <handlers> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> </handlers> Oddly this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non ASP.NET requests. As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non ASP.NET resources - for example, I can't do the same with ASP classic requests, but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a HotFix in order for ExtensionLessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work. Plan for non-ASP.NET Requests It's important to remember that if you write a .NET Module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you for non ASP.NET content. So even for static files and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post handler pipeline events) to determine what content you are dealing with. 
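Here's a small sketch (again, not from the original post) of the kind of guard clause that approach leads to – inspect Request.FilePath up front and bail out immediately for content the module doesn't care about:

using System;
using System.IO;
using System.Web;

public class SelectiveModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.PostAuthorizeRequest += OnPostAuthorizeRequest;
    }

    private void OnPostAuthorizeRequest(object sender, EventArgs e)
    {
        var context = ((HttpApplication)sender).Context;
        string extension = Path.GetExtension(context.Request.FilePath);

        // Only interested in ASPX pages and extensionless (MVC style) URLs -
        // return right away for .css, .js, images and other static resources
        // so the module adds as little overhead as possible.
        if (!string.IsNullOrEmpty(extension) &&
            !extension.Equals(".aspx", StringComparison.OrdinalIgnoreCase))
            return;

        // ... actual module logic for ASP.NET content goes here ...
    }

    public void Dispose() { }
}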
As always with Module design make sure you check for the conditions in your code that make the module applicable and if a filter fails immediately exit - minimize the code that runs if your module doesn't need to process the request. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7  ASP.NET

    Read the article

  • Unable to cast transparent proxy to type &lt;type&gt;

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains. In almost all cases the problem I've run into with this error the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come exactly from the same assembly the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds. An Example of Failure To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly process it and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible as well as isolating my code from the assembly that's being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is: Create a new AppDomain Load a Factory Class into the AppDomain Use the Factory Class to load additional types from the remote domain Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:public class TypeParserFactory : System.MarshalByRefObject,IDisposable { …/// <summary> /// TypeParser Factory method that loads the TypeParser /// object into a new AppDomain so it can be unloaded. /// Creates AppDomain and creates type. /// </summary> /// <returns></returns> public TypeParser CreateTypeParser() { if (!CreateAppDomain(null)) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! TypeParser parser = null; try { Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); } catch (Exception ex) { this.ErrorMessage = ex.GetBaseException().Message; return null; } return parser; } private bool CreateAppDomain(string lcAppDomain) { if (lcAppDomain == null) lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin"); this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain,null,setup); // Need a custom resolver so we can load assembly from non current path AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); return true; } …} Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object so all classes are MarshalByRefObject. The specific problem code is the loading up a new type which points at an assembly that visible both in the current domain and the remote domain and then instantiates a type from it. 
This is the code in question:Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back but it can't be cast to the TypeParser type that is loaded in the current AppDomain. Finding the Problem To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance: Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); // dynamic works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // casting fails parser = (TypeParser)objparser; So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious.Another couple of tries reveal the problem however:// works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // c:\wwapps\wwhelp\wwReflection20.dll (Current Execution Folder) string info3 = typeof(TypeParser).Assembly.CodeBase; // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder) string info4 = dynParser.GetType().Assembly.CodeBase; // fails parser = (TypeParser)objparser; As you can see the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:public string GetVersionInfo() { return ".NET Version: " + Environment.Version.ToString() + "\r\n" + "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" + "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" + "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" + "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n"; } For the factory I got: .NET Version: 4.0.30319.239wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dllAssembly Cur Dir: c:\wwapps\wwhelpApplicationBase: C:\Programs\vfp9\App Domain: wwReflection534cfa1f For the instance type I got: .NET Version: 4.0.30319.239wwReflection Assembly: C:\\Programs\\vfp9\wwreflection20.dllAssembly Cur Dir: c:\\wwapps\\wwhelpApplicationBase: C:\\Programs\\vfp9\App Domain: wwDotNetBridge_56006605 which clearly shows the problem. You can see that both are loading from different appDomains but the each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads.The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it. 
Here's what the viewer looks like: The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:*** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll Running under executable c:\programs\vfp9\vfp9.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = Ras\ricks LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified) LOG: Appbase = file:///C:/Programs/vfp9/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = vfp9.exe Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config. LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind). LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL. LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll. LOG: Assembly is loaded in default load context. WRN: The same assembly was loaded into multiple contexts of an application domain: WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: This might lead to runtime failures. WRN: It is recommended to inspect your application on whether this is intentional or not. WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue. Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified. Remember your Assembly Locations As mentioned earlier all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or ASP.NET Runtime for example) it's common to create a private BIN folder and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM Interop has very specific requirements on where assemblies can be found and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed. 
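As an aside – this is a sketch of a general technique, not the fix that follows – when a child domain genuinely cannot find an assembly at all (rather than binding to the wrong copy, as happened here), the kind of AssemblyResolve hook the factory code already uses can pin resolution to an explicit folder. It only helps for failed binds, though: in this scenario the bind succeeded against the wrong copy, so a resolve handler never fires. The helper type and its folder parameter are made up for illustration:

using System;
using System.IO;
using System.Reflection;

// Created inside the child AppDomain (via CreateInstanceFrom/Unwrap, like the
// TypeParser) so the resolve handler is attached to that domain, not the parent.
public class PathPinningResolver : MarshalByRefObject
{
    public void HookResolver(string assemblyFolder)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // Resolve any failed bind from the folder the parent domain uses
            string file = Path.Combine(assemblyFolder,
                                       new AssemblyName(args.Name).Name + ".dll");
            return File.Exists(file) ? Assembly.LoadFrom(file) : null;
        };
    }
}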
The solution here was simple enough: Delete the extraneous assembly which was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out because it seems so unlikely that types wouldn't match. I know I've run into this a few times and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET  COM

    Read the article

  • Filtering List Data with a jQuery-searchFilter Plugin

    - by Rick Strahl
    When dealing with list based data on HTML forms, filtering that data down based on a search text expression is an extremely useful feature. We’re used to search boxes on just about anything these days and HTML forms should be no different. In this post I’ll describe how you can easily filter a list down to just the elements that match text typed into a search box. It’s a pretty simple task and it’s super easy to do, but I get a surprising number of comments from developers I work with who are surprised how easy it is to hook up this sort of behavior, that I thought it’s worth a blog post. But Angular does that out of the Box, right? These days it seems everybody is raving about Angular and the rich SPA features it provides. One of the cool features of Angular is the ability to do drop dead simple filters where you can specify a filter expression as part of a looping construct and automatically have that filter applied so that only items that match the filter show. I think Angular has single handedly elevated search filters to first rate, front-row status because it’s so easy. I love using Angular myself, but Angular is not a generic solution to problems like this. For one thing, using Angular requires you to render the list data with Angular – if you have data that is server rendered or static, then Angular doesn’t work. Not all applications are client side rendered SPAs – not by a long shot, and nor do all applications need to become SPAs. Long story short, it’s pretty easy to achieve text filtering effects using jQuery (or plain JavaScript for that matter) with just a little bit of work. Let’s take a look at an example. Why Filter? Client side filtering is a very useful tool that can make it drastically easier to sift through data displayed in client side lists. In my applications I like to display scrollable lists that contain a reasonably large amount of data, rather than the classic paging style displays which tend to be painful to use. So I often display 50 or so items per ‘page’ and it’s extremely useful to be able to filter this list down. Here’s an example in my Time Trakker application where I can quickly glance at various common views of my time entries. I can see Recent Entries, Unbilled Entries, Open Entries etc and filter those down by individual customers and so forth. Each of these lists results tends to be a few pages worth of scrollable content. The following screen shot shows a filtered view of Recent Entries that match the search keyword of CellPage: As you can see in this animated GIF, the filter is applied as you type, displaying only entries that match the text anywhere inside of the text of each of the list items. This is an immediately useful feature for just about any list display and adds significant value. A few lines of jQuery The good news is that this is trivially simple using jQuery. To get an idea what this looks like, here’s the relevant page layout showing only the search box and the list layout:<div id="divItemWrapper"> <div class="time-entry"> <div class="time-entry-right"> May 11, 2014 - 7:20pm<br /> <span style='color:steelblue'>0h:40min</span><br /> <a id="btnDeleteButton" href="#" class="hoverbutton" data-id="16825"> <img src="images/remove.gif" /> </a> </div> <div class="punchedoutimg"></div> <b><a href='/TimeTrakkerWeb/punchout/16825'>Project Housekeeping</a></b><br /> <small><i>Sawgrass</i></small> </div> ... 
more items here </div> So we have a searchbox txtSearchPage and a bunch of DIV elements with a .time-entry CSS class attached that makes up the list of items displayed. To hook up the search filter with jQuery is merely a matter of a few lines of jQuery code hooked to the .keyup() event handler: <script type="text/javascript"> $("#txtSearchPage").keyup(function() { var search = $(this).val(); $(".time-entry").show(); if (search) $(".time-entry").not(":contains(" + search + ")").hide(); }); </script> The idea here is pretty simple: You capture the keystroke in the search box and capture the search text. Using that search text you first make all items visible and then hide all the items that don’t match. Since DOM changes are applied after a method finishes execution in JavaScript, the show and hide operations are effectively batched up and so the view changes only to the final list rather than flashing the whole list and then removing items on a slow machine. You get the desired effect of the list showing the items in question. Case Insensitive Filtering But there is one problem with the solution above: The jQuery :contains filter is case sensitive, so your search text has to match expressions explicitly which is a bit cumbersome when typing. In the screen capture above I actually cheated – I used a custom filter that provides case insensitive contains behavior. jQuery makes it really easy to create custom query filters, and so I created one called containsNoCase. Here’s the implementation of this custom filter:$.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; This filter can be added anywhere where page level JavaScript runs – in page script or a seperately loaded .js file.  The filter basically extends jQuery with a : expression. Filters get passed a tokenized array that contains the expression. In this case the m[3] contains the search text from inside of the brackets. A filter basically looks at the active element that is passed in and then can return true or false to determine whether the item should be matched. Here I check a regular expression that looks for the search text in the element’s text. 
So the code for the filter now changes to:$(".time-entry").not(":containsNoCase(" + search + ")").hide(); And voila – you now have a case insensitive search. You can play around with another simpler example using this Plunkr:http://plnkr.co/edit/hDprZ3IlC6uzwFJtgHJh?p=preview Wrapping it up in a jQuery Plug-in To make this even easier to use and so that you can more easily remember how to use this search type filter, we can wrap this logic into a small jQuery plug-in:(function($, undefined) { $.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; $.fn.searchFilter = function(options) { var opt = $.extend({ // target selector targetSelector: "", // number of characters before search is applied charCount: 1 }, options); return this.each(function() { var $el = $(this); $el.keyup(function() { var search = $(this).val(); var $target = $(opt.targetSelector); $target.show(); if (search && search.length >= opt.charCount) $target.not(":containsNoCase(" + search + ")").hide(); }); }); }; })(jQuery); Using this plug-in now becomes a one-liner:$("#txtSearchPagePlugin").searchFilter({ targetSelector: ".time-entry", charCount: 2}) You attach the .searchFilter() plug-in to the text box you are searching and specify a targetSelector that is to be filtered. Optionally you can specify a character count at which the filter kicks in, since it's kind of useless to filter at a single character typically. Summary This is a very easy solution to a cool user interface feature your users will thank you for. Search filtering is a simple but highly effective user interface feature, and as you've seen in this post it's very simple to create this behavior with just a few lines of jQuery code. While all the cool kids are doing Angular these days, jQuery is still useful in many applications that don't embrace the 'everything generated in JavaScript' paradigm. I hope this jQuery plug-in or just the raw jQuery will be useful to some of you… Resources Example on Plunker © Rick Strahl, West Wind Technologies, 2005-2014. Posted in jQuery  HTML5  JavaScript

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
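To make the routing additions concrete, here's a minimal sketch of registering a page route and reading the route value on the target page. The route name, template and the UserDetail.aspx page are made up for illustration and not from this article:

// Global.asax.cs – register a Web Forms route at application startup
using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // MapPageRoute() is the new helper on RouteCollection; no extra routing assembly is needed
        RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
    }
}

// UserDetail.aspx.cs – pull the value out of the page's RouteData
public partial class UserDetail : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string userId = RouteData.Values["userId"] as string;
        // look up and bind the user record here
    }
}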
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
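To give an idea of the shape of a custom provider, here's a minimal, purely illustrative sketch that keeps cached output in memory. The class name and the in-memory storage are my own choices for brevity; a real provider would typically write to disk, a database or a distributed cache:

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public class SimpleOutputCacheProvider : OutputCacheProvider
{
    // key -> (cached entry, absolute UTC expiration)
    private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _cache =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> item;
        if (!_cache.TryGetValue(key, out item))
            return null;
        if (item.Item2 < DateTime.UtcNow)
        {
            Remove(key);
            return null;
        }
        return item.Item1;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Add() returns the existing entry if one is already cached for the key
        var existing = Get(key);
        if (existing != null)
            return existing;
        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        _cache[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> removed;
        _cache.TryRemove(key, out removed);
    }
}

The provider then gets registered in web.config under the <caching><outputCache> section with a providers entry and is selected via the defaultProvider attribute.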
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general into a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and in out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

Response.RedirectPermanent

ASP.NET 4.0 includes features to issue a permanent redirect that is sent as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls.

Preloading of Applications

ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config:

<sites>
  <site name="WebApplication2" id="1">
    <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" />
  </site>
</sites>
<serviceAutoStartProviders>
  <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
</serviceAutoStartProviders>

Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it, here's what it looks like:

public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // initialization for app
    }
}

While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag. 
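Before moving on, here's a small sketch of the programmatic session state control mentioned above: an HttpModule that turns session state off for certain requests. The module name and the /static/ path check are hypothetical; the point is simply that SetSessionStateBehavior() has to run before AcquireRequestState:

using System;
using System.Web;
using System.Web.SessionState;

public class SessionControlModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // PostMapRequestHandler fires before AcquireRequestState, so it's a safe hook point
        app.PostMapRequestHandler += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            if (context.Request.Path.StartsWith("/static/", StringComparison.OrdinalIgnoreCase))
                context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
        };
    }

    public void Dispose() { }
}

The module still needs to be registered like any other HttpModule in web.config before it takes effect.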
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • An easy way to create Side by Side registrationless COM Manifests with Visual Studio

    - by Rick Strahl
    Here's something I didn't find out until today: You can use Visual Studio to easily create registrationless COM manifest files for you with just a couple of small steps. Registrationless COM lets you use COM components without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure it's rarely quite that easy - you need to watch out for dependencies - but if you know you have COM components that are light weight and have no or known dependencies it's easy to get everything into a single folder and off you go. Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (ie. yourapp.exe.manifest). I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component - without that component being registered. Let's take a walk down memory lane…

Create a COM Component

I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use a VB classic or C++ ATL object if that's more to your liking. Here's a real simple Fox one:

DEFINE CLASS SimpleServer as Session OLEPUBLIC

FUNCTION HelloWorld(lcName)
RETURN "Hello " + lcName

ENDDEFINE

Compile it into a DLL COM component with:

BUILD MTDLL simpleserver FROM simpleserver RECOMPILE

And to make sure it works test it quickly from Visual FoxPro:

server = CREATEOBJECT("simpleServer.simpleserver")
MESSAGEBOX( server.HelloWorld("Rick") )

Using Visual Studio to create a Manifest File for a COM Component

Next open Visual Studio and create a new executable project - a Console App or WinForms or WPF application will all do.

Go to the References node
Select Add Reference
Use the Browse tab and find your compiled DLL to import
Next you'll see your assembly in the project. Right click on the reference and select Properties
Click on the Isolated dropdown and select True
Compile and that's all there's to it.

Visual Studio will create an App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this:

<?xml version="1.0" encoding="utf-8"?>
<assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd" manifestVersion="1.0" xmlns:asmv1="urn:schemas-microsoft-com:asm.v1" xmlns:asmv2="urn:schemas-microsoft-com:asm.v2" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1" xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2" xmlns="urn:schemas-microsoft-com:asm.v1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL" asmv2:size="27293">
    <hash xmlns="urn:schemas-microsoft-com:asm.v2">
      <dsig:Transforms>
        <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" />
      </dsig:Transforms>
      <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
      <dsig:DigestValue>puq+ua20bbidGOWhPOxfquztBCU=</dsig:DigestValue>
    </hash>
    <typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" version="1.0" helpdir="" resourceid="0" flags="HASDISKIMAGE" />
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
</assembly>

Now let's finish our super complex console app to test with:

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Type type = Type.GetTypeFromProgID("simpleserver.simpleserver", true);
            dynamic server = Activator.CreateInstance(type);
            Console.WriteLine(server.HelloWorld("rick"));
            Console.ReadLine();
        }
    }
}

Now run the Console Application… As expected that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that. Let's unregister the COM component and then re-run and see what happens.

Go to the Command Prompt
Change to the folder where the DLL is installed
Unregister with: RegSvr32 -u simpleserver.dll

To be sure that the COM component no longer works, check it out with the same test you used earlier (ie. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment or VBScript etc.). Make sure you run the EXE and that you don't re-compile the application, or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact, now that we have our .manifest file you can remove the COM object from the project. Run the EXE from Windows Explorer or a command prompt to avoid the recompile.

Watch out for embedded Manifest Files

Now recompile your .NET project and run it… and it will most likely fail! The problem is that .NET applications by default embed a manifest file into the compiled EXE application which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time and the compiled manifest takes precedence. Uh, thanks Visual Studio - not very helpful… Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE for example will work immediately with the generated manifest file as is. 
If you are using .NET and Visual Studio you have a couple of options of getting around this: Remove the embedded manifest file Copy the contents of the generated manifest file into a project manifest file and compile that in To remove an embedded manifest in a Visual Studio project: Open the Project Properties (Alt-Enter on project node) Go down to Resources | Manifest and select | Create Application without a Manifest   You can now add use the external manifest file and it will actually be respected when the app runs. The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice on the dialog above I did this for app.exe.manifest and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and be used in lieu of any external files and that works as well. Remove the simpleserver.dll reference so you can compile your code and run the application. Now it should work without COM registration of the component. Personally I prefer external manifests because they can be modified after the fact - compiled manifests are evil in my mind because they are immutable - once they are there they can't be overriden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. The option to either is there. Watch for Manifest Caching While working trying to get this to work I ran into some problems at first. Specifically when it wasn't working at first (due to the embedded schema) I played with various different manifest layouts in different files etc.. There are a number of different ways to actually represent manifest files including offloading to separate folder (more on that later). A few times I made deliberate errors in the schema file and I found that regardless of what I did once the app failed or worked no amount of changing of the manifest file would make it behave differently. It appears that Windows is caching the manifest data for a given EXE or DLL. It takes a restart or a recompile of either the EXE or the DLL to clear the caching. Recompile your servers in order to see manifest changes unless there's an outright failure of an invalid manifest file. If the app starts the manifest is being read and caches immediately. This can be very confusing especially if you don't know that it's happening. I found myself always recompiling the exe after each run and before making any changes to the manifest file. Don't forget about Runtimes of COM Objects In the example I used above I used a Visual FoxPro COM component. Visual FoxPro is a runtime based environment so if I'm going to distribute an application that uses a FoxPro COM object the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming that you don't know whether the runtimes are installed on the target machines make sure to install all the additional files in the EXE's directory alongside the COM DLL. In the case of Visual FoxPro the target folder should contain: The EXE  App.exe The Manifest file (unless it's compiled in) App.exe.manifest The COM object DLL (simpleserver.dll) Visual FoxPro Runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded dlls), vfp9rENU.dll, msvcr71.dll All these files should be in the same folder. 
Debugging Manifest load Errors

If you for some reason get your manifest loading wrong there are a couple of useful tools available - SxSTrace and SxSParse. These two tools can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example):

sxstrace Trace -logfile:sxs.bin
sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt

Then start the batch file before running your EXE. Make sure there's no caching happening as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this:

INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.
INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".
ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid.
ERROR: Activation Context generation failed.
End Activation Context Generation.

pinpointing nicely where the error lies. Pay special attention to the various attributes - they have to match exactly in the different sections of the manifest file(s).

Multiple COM Objects

The manifest file that Visual Studio creates is actually quite a bit more complex than is required for basic registrationless COM object invocation. The manifest file can actually be simplified a lot by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that actually includes references to 2 COM servers:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" threadingModel="apartment" />
  </file>
</assembly>

Simple enough right?

Routing to separate Manifest Files and Folders

In the examples above all files ended up in the application's root folder - all the DLLs, support files and runtimes. Sometimes that's not so desirable and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file. In this example I create:

App.exe.manifest
A folder called App.deploy
A manifest file in App.deploy
All DLLs and runtimes in App.deploy

Let's start with that master manifest file. This file only holds a reference to another manifest file:

App.exe.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="App.deploy" version="1.0.0.0" type="win32" />
    </dependentAssembly>
  </dependency>
</assembly>

Note this file only contains a dependency to App.deploy which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and in it copy the DLLs and support runtimes. I then create App.deploy.manifest.

App.deploy.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.deploy" type="win32" version="1.0.0.0" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" threadingModel="Apartment" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" />
  </file>
</assembly>

In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated. You can see the two folders here:

Routing Manifests to different Folders

In theory registrationless COM should be pretty easy and painless - you've seen the configuration manifest files and it certainly doesn't look very complicated, right? But the devil's in the details. The ActivationContext API (SxS - side by side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble SxsTrace/SxsParse are a huge help to track down the problems. And remember that if you do have problems you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly. All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…

© Rick Strahl, West Wind Technologies, 2005-2011 Posted in COM  .NET  FoxPro

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically making it much easier than previous versions to take advantage of compression that’s built-in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let’s take a look at each of the two approaches available: Static Compression Compresses static content from the hard disk. IIS can cache this content by compressing the file once and storing the compressed file on disk and serving the compressed alias whenever static content is requested and it hasn’t changed. The overhead for this is minimal and should be aggressively enabled. Dynamic Compression Works against application generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger impact than static caching. How Compression is configured Compression in IIS 7.x  is configured with two .config file elements in the <system.WebServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config forlder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression): <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> <urlCompression doStaticCompression="true" doDynamicCompression="true" /> </system.webServer> </configuration> You can find documentation on the httpCompression and urlCompression keys here respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx The httpCompression Element – What and How to compress Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime-types which looks at returned Content-Type headers in HTTP responses. 
For example, I added the application/json to mime type to my dynamic compression types above to allow that content to be compressed as well since I have quite a bit of AJAX content that gets sent to the client. The UrlCompression Element – Enables and Disables Compression The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor whether compression is enabled, so it’s a good idea to be explcit! The default for doDynamicCompression='false”, but doStaticCompression="true"! Static Compression is enabled by Default, Dynamic Compression is not Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting: <urlCompression doDynamicCompression="true" /> in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression=”true” in web.config!!! How Static Compression works Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at: %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\ If you look into the subfolders you’ll find compressed files: These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. 
You can do this with the staticCompressionLevel setting on the scheme element: <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> Other than that the default settings are probably just fine. Dynamic Compression – not so fast! By default dynamic compression is disabled and that’s actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn’t make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Fortsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server’s performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server: dynamicCompressionDisableCpuUsage dynamicCompressionEnableCpuUsage By default these are set to 90 and 50 which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it utilizes CPU power from compression when available and falling off when the threshold has been hit. It’s a good way some of that extra CPU power on your big servers to use when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90% maybe around 70% to make this a feature that kicks in only if there’s lots of power to spare. I’m not really sure how accurate these CPU readings that IIS uses are as Cpu usage on Web Servers can spike drastically even during low loads. Don’t trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally for dynamic compression I tend to add one Mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type: <add mimeType="application/json" enabled="true" /> What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to: <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip but more HTTP clients (other than browsers) support GZip than Deflate so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn’t show up on the site  right away. It required me to do a full IISReset to get that change to show up before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate – not sure if it’s worth the slightly less common compression type, but the option at least is available. IIS 7 finally makes GZip Easy In summary IIS 7 makes GZip easy finally, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. 
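If you want to double check which encoding your server actually returns, a quick client side test is easy to write. Here's a minimal sketch using HttpWebRequest; the URL is just a placeholder, so point it at one of your own pages:

using System;
using System.Net;

class CompressionCheck
{
    static void Main()
    {
        var req = (HttpWebRequest)WebRequest.Create("http://localhost/myapp/default.aspx");

        // Advertise compression support like a browser would
        req.Headers["Accept-Encoding"] = "gzip,deflate";

        using (var resp = (HttpWebResponse)req.GetResponse())
        {
            // "gzip" or "deflate" here means the server compressed the response
            Console.WriteLine("Content-Encoding: " + resp.ContentEncoding);
        }
    }
}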
But once you know the basic settings I’ve described here and the fact that you can override all of this in your local web.config it’s pretty straight forward to configure GZip support and tweak it exactly to your needs. Static compression is a total no brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic Compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression.

Related Content:
Code based ASP.NET GZip Caveats
HttpWebRequest and GZip Responses

© Rick Strahl, West Wind Technologies, 2005-2011 Posted in IIS7   ASP.NET

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of new great functionality that unifies many of the features of many of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX, WCF REST specifically - and combines them into a whole more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But  RPC style calls that are common with AJAX callbacks in Web applications, have gotten a lot less focus and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks. RPC vs. 'Proper' REST RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns' RPC calls tend simply map a server side operation, passing in parameters and receiving a typed result where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon.  Proper REST has its place - for 'real' API scenarios that manage and publish/share resources, but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences especially if you're coming from ASP.NET AJAX service or WCF Rest when it comes to multiple parameters. Action Routing for RPC Style Calls If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that. 
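For contrast, here's roughly what a verb-routed controller looks like with the default api/{controller}/{id} style route, where the Get/Post prefix of each method determines which HTTP verb it answers to. The controller and the Album class below are just an illustration and not code from this article:

using System;
using System.Collections.Generic;
using System.Web.Http;

public class Album
{
    public string AlbumName { get; set; }
    public DateTime Entered { get; set; }
}

public class AlbumsController : ApiController
{
    private static readonly List<Album> Albums = new List<Album>();

    // GET api/albums
    public IEnumerable<Album> GetAllAlbums()
    {
        return Albums;
    }

    // POST api/albums
    public Album PostAlbum(Album album)
    {
        Albums.Add(album);
        return album;
    }
}

Note there's no obvious place in this model for a handful of additional RPC style operations on the same controller, which is exactly the limitation described above.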
Thankfully Web API also supports Action based routing which allows you create RPC style endpoints fairly easily:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi", action = "GetAblums" } ); This uses traditional MVC style {action} method routing which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like above lets you specify an end point method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX things are not so obvious. Web API has no issue allowing for single parameter like this:[HttpPost] public string PostAlbum(Album album) { return String.Format("{0} {1:d}", album.AlbumName, album.Entered); } There are actually two ways to call this endpoint: albums/PostAlbum Using the Model Binder with plain POST values In this mechanism you're sending plain urlencoded POST values to the server which the ModelBinder then maps the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's  ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:$.ajax( { url: "albums/PostAlbum", type: "POST", data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" }, success: function (result) { alert(result); }, error: function (xhr, status, p3, p4) { var err = "Error " + " " + status + " " + p3; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); Here's what the POST data looks like for this request: The model binder and it's straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests. Using Web API JSON Formatter The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first.album = { AlbumName: "PowerAge", Entered: new Date(1977,0,1) } $.ajax( { url: "albums/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify(album), success: function (result) { alert(result); } }); Here the data is sent using a JSON object rather than form data and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (Source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that WebAPI automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value and the Formatter and ModelBinder both automatically map the date propertly to the Entered DateTime property of the Album object. Passing multiple Parameters to a Web API Controller Single parameters work fine in either of these RPC scenarios and that's to be expected. ModelBinding always works against a single object because it maps a model. 
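One related detail worth pointing out (my addition, not from the article): if the single parameter is a simple type rather than an object, Web API only reads it from the POST body when it's explicitly marked with [FromBody], and only one parameter per action can come from the body. A minimal sketch:

using System.Web.Http;

public class NotesController : ApiController
{
    // A lone simple type has to be marked [FromBody] to be bound from the POST body;
    // unmarked simple types are expected on the query string instead.
    [HttpPost]
    public string PostNote([FromBody] string note)
    {
        return "Received: " + note;
    }
}

From the client the value then has to be sent as the raw request body (as far as I recall, for form encoding that means a somewhat odd "=value" payload), which is another reason the object-based ModelBinding shown above usually feels more natural.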
But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:[HttpPost] public string PostAlbum(Album album, string userToken) Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straight forward either with WCF REST and ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with WebAPI. There a few workarounds that you can use to make this work: Use both POST *and* QueryString Parameters in Conjunction If you have both complex and simple parameters, you can pass simple parameters on the query string. The above would actually work with: /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types. Use a single Object that wraps the two Parameters If you go by service based architecture guidelines every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have a xxxRequest and a xxxResponse class that wraps the inputs and outputs. Here's what this method might look like:public PostAlbumResponse PostAlbum(PostAlbumRequest request) { var album = request.Album; var userToken = request.UserToken; return new PostAlbumResponse() { IsSuccess = true, Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered,userToken) }; } with these support types:public class PostAlbumRequest { public Album Album { get; set; } public User User { get; set; } public string UserToken { get; set; } } public class PostAlbumResponse { public string Result { get; set; } public bool IsSuccess { get; set; } public string ErrorMessage { get; set; } }   To call this method you now have to assemble these objects on the client and send it up as JSON:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result.Result); } }); I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, that mimics the structure of PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: Overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actual can use. Use JObject to parse multiple Property Values out of an Object If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls. 
Rather than directly calling a service you always passed an object which contained properties for each parameter: { parm1: Value, parm2: Value2 } WCF REST/ASP.NET AJAX would then parse this top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for it's JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then using the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:[HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jquery code:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result); } }); Summary ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things like passing multiple strongly typed object parameters will work a bit differently. It's not insurmountable, but just knowing what options are available to simulate this behavior is good to know. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters or a single content parameter at least along with some identifier parameters that can be passed on the querystring. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API. 
At least there are options available to make it work.

© Rick Strahl, West Wind Technologies, 2005-2012 Posted in Web Api

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style ‘application’ this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is is what the Angular ng-cloak is supposed to address, but in this case I had no luck getting it to work properly. This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of  changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage. Large Page and Angular Startup The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree. There are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks so that you don’t see the flash of these markup expressions when the page initially loads before Angular has a chance to render the data into the markup expressions.<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak> Note the ng-cloak attribute on this element, which here is an outer wrapper element of the most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it, until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup which looks like this: It’s brief, but plenty ugly – right?  And depending on the speed of the machine this flash gets more noticeable with slow machines that take longer to process the initial HTML DOM. ng-cloak Styles ng-cloak works by temporarily hiding the marked up element and it does this by essentially applying a style that does this:[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak { display: none !important; } This style is inlined as part of AngularJs itself. 
If you looking at the angular.js source file you’ll find this at the very end of the file:!angular.$$csp() && angular.element(document) .find('head') .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' + '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' + '.ng-hide{display:none !important;}ng\\:form{display:block;}' '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' + '</style>'); This is is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive permutation markup. Unfortunately on this particular web page ng-cloak had no effect – I still see the flicker. Why doesn’t ng-cloak work? The problem is of course – timing. The problem is that Angular actually needs to get control of the page before it ever starts doing anything like process even the ng-cloak attribute (or style etc). Because this page is rather large (about 35k of non-data HTML) it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page after the HTML DOM content there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem. Workarounds There a number of simple ways around this issue and some of them are hinted on in the Angular documentation. Load Angular Sooner One obvious thing that would help with this is to load Angular at the top of the page  BEFORE the DOM loads and that would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-). Use ng-include for Child Content Angular supports nesting of child templates via the ng-include directive which essentially delay loads HTML content. This helps by removing a lot of the template content out of the main page and so getting control to Angular a lot sooner in order to hide the markup template content. In the application in question, I realize that in hindsight it might have been smarter to break this page out with client side ng-include directives instead of MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (ie. the HTML) back together into a whole single and rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful to have a few security related blocks handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show – client side content is always there whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request as templates are loaded from the server dynamically as needed. 
Either way, given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn’t have been beneficial :-) ng-include is a valid option if you start from scratch and partition your logic accordingly. Of course, if you don’t have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn’t have this option because the information has to be on screen all at once.

Avoid using {{ }} Expressions

The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying the empty {{ }} markup expressions that are embedded in the content. They give you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you remove the {{ }} expressions from the page, you remove most of the perceived double draw effect, as you effectively start with a blank form and go straight to a filled form.

To do this you can forego {{ }} expressions and replace them with ng-bind directives on DOM elements. For example you can turn:

<div class="list-item-name listViewOrderNo">
    <a href='#'>{{lineItem.MpsOrderNo}}</a>
</div>

into:

<div class="list-item-name listViewOrderNo">
    <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
</div>

to get identical results, but because the {{ }} expression has been removed there’s no double draw effect for this element. Again, not a great solution – the {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}}, for example. It’s an option though, especially if you think of this issue up front and you don’t have a ton of expressions to deal with.

Add the ng-cloak Styles manually

You can also explicitly define the CSS styles that Angular injects via code in your application’s style sheet. By doing so the styles are immediately available and are applied right when the page loads – no flicker. I use the minimal:

[ng-cloak] { display: none !important; }

which works for:

<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak>

If you use one of the other directive combinations, add the other CSS selectors as well, or use the full style shown earlier. Angular will still inject its own version of the ng-cloak styling later and override these settings, but your rule does the trick of hiding the content before that CSS makes it into the page. Adding the CSS to your own style sheet works well, and is IMHO by far the best option.

The nuclear option: Hiding the Content manually

Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight into how you can manually hide and show content on load for other frameworks, or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removed the template markup from the page. In other words, you manually hide the content and make it visible only after Angular has gotten control.
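Condensed into one place, the pattern amounts to roughly the following. This is a sketch only: the module name and container id mirror the snippets shown next, the module’s real dependencies are omitted, and jQuery is assumed to be loaded ahead of Angular as discussed earlier.

// The ng-app container starts out hidden via an inline display:none style.
var app = angular.module('app', []);   // real module dependencies omitted in this sketch

app.run(function () {
    // By the time run() executes, Angular has compiled the initial DOM and
    // stripped the raw {{ }} template expressions, so the container can be
    // revealed without showing any markup junk.
    $('#mainContainer').show();        // jQuery assumed to be loaded before Angular
});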
Concretely, this is what I used:

<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none">

Notice the display:none style that explicitly hides the element on the initial page load. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular this ‘ready’ event is the app.run() function:

app.run( function ($rootScope, $location, cellService) {
    $("#mainContainer").show();
    …
});

This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed with filled data – or at least empty data – because Angular has gotten control.

Edge Case

Clearly this is an edge case. In general, initial HTML pages tend to be reasonably sized, and the load time for the HTML and for Angular is fast enough that there’s no flicker between the rendering stages. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it’s probably a good idea to add the CSS style to your application’s CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow a browser somebody might be running, and while your super fast dev machine might not show any flicker, grandma’s old XP box very well might…

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in Angular  JavaScript  CSS  HTML
