Search Results

Search found 381 results on 16 pages for 'wind'.

Page 5 of 16

  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.

    In one of my mobile HTML Web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.

    What works

    The syntax that seems to work fairly consistently, except for iOS, is this:

        sms:phonenumber?body=message

    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):

        sms:phonenumber;body=message

    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android emulator. All four devices managed to pop up the SMS with the data pre-filled.

    You can use this in a link:

        <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>

    or you can set it on window.location in JavaScript:

        window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");

    to make the window pop up directly from code. Notice that the content should be URL encoded - HTML links encode automatically, but when you assign the URL directly in code the text value should be encoded explicitly.

    Body only

    I suspect in most applications you won't know who to text, so you only want to fill in the text body, not the number. That works as you'd expect by just leaving out the number - here's what the URLs look like in that case:

        sms:?body=message

    For iOS, same thing except with the ;:

        sms:;body=message

    Here's an example of the code I use to set up the SMS:

        var ua = navigator.userAgent.toLowerCase();
        var url;
        if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
            url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        else
            url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        location.href = url;

    and that also works for all the devices mentioned above. It's pretty cool that URL schemes exist to access device functionality, and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you Apple!) across devices...

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in iOS, JavaScript, HTML5

    Read the article

  • HttpWebRequest and Ignoring SSL Certificate Errors

    - by Rick Strahl
    Man, I can't believe this. I'm still mucking around with OFX servers and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three major brokerages which fail HTTPS certificate validation with bad or corrupt certificates - at least according to the .NET WebRequest class. What's somewhat odd here though is that WinInet seems to find no issue with these servers - it's only .NET's HTTP client that's ultra finicky.

    So the question then becomes: how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest. Basically you need to configure the certificate validation policy on the ServicePointManager by providing a custom validation callback. Not exactly trivial to discover. Here's the code to hook it up:

        public bool CreateWebRequestObject(string Url)
        {
            try
            {
                this.WebRequest = (HttpWebRequest) System.Net.WebRequest.Create(Url);

                if (this.IgnoreCertificateErrors)
                    ServicePointManager.ServerCertificateValidationCallback =
                        delegate { return true; };
            }
            catch { return false; }

            return true;
        }

    One thing to watch out for is that this is an application global setting. There's one global ServicePointManager, and once you set this value any subsequent requests will inherit this policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it be - otherwise you may run into odd behavior in some situations, especially in multi-threaded situations.

    Another way to deal with this is in your application .config file:

        <configuration>
          <system.net>
            <settings>
              <servicePointManager
                  checkCertificateName="false"
                  checkCertificateRevocationList="false" />
            </settings>
          </system.net>
        </configuration>

    This seems to work most of the time, although I've seen some situations where it doesn't but where the code implementation works, which is frustrating. The .config settings aren't as inclusive as the programmatic code, which can ignore any and all cert errors - shrug. Anyway, the code approach got me past the stopper issue.

    It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well, I guess I shouldn't be surprised - these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)...

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp, HTTP
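
    One refinement worth considering: since the callback is global, you can narrow the blast radius by only waving through the specific hosts you know are misconfigured, while every other certificate still gets validated normally. A minimal sketch (the host list is a hypothetical example; on the full .NET Framework the sender is typically the originating HttpWebRequest):

        using System.Collections.Generic;
        using System.Net;
        using System.Net.Security;

        public static class CertificatePolicyHelper
        {
            // Only ignore certificate errors for known misconfigured hosts (hypothetical list)
            static readonly HashSet<string> TrustedBadHosts = new HashSet<string>
            {
                "ofx.example-broker.com"
            };

            public static void AllowInvalidCertsForKnownHosts()
            {
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, cert, chain, errors) =>
                    {
                        if (errors == SslPolicyErrors.None)
                            return true;   // valid certificate - accept as usual

                        // the sender is usually the originating HttpWebRequest
                        var request = sender as HttpWebRequest;
                        return request != null &&
                               TrustedBadHosts.Contains(request.RequestUri.Host);
                    };
            }
        }
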

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake 'events' on objects that are hookable. jQuery makes it real easy to bind events on DOM elements, and with a little bit of extra work (that I didn't know about) you can also set up bindings to non-DOM element 'events'.

    Assume for a second that you have a simple JavaScript object like this:

        var item = {
            sku: "wwhelp",
            foo: function() { alert('original foo function'); }
        };

    and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this:

        $(item).bind("foo", function () { alert('foo hook called'); });

    Binding alone won't actually cause the handler to be triggered, so when you call:

        item.foo();

    you only get the 'original' message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function:

        $(item).trigger("foo");

    Now if you do the following complete sequence:

        var item = {
            sku: "wwhelp",
            foo: function() { alert('original foo function'); }
        };
        $(item).bind("foo", function () { alert('foo hook called'); });
        $(item).trigger("foo");

    you'll see the 'hook' message first, followed by the 'original' message, fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to what you can do with DOM elements. The main difference is that the 'event' has to be explicitly triggered in order for this to happen, rather than just calling the method directly.

    .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the 'event' is found it's called prior to execution of the original function.

    This is pretty useful, as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable, without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery's event methods, including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with postfix string names (i.e. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent).

    Watch out for an .unbind() Bug

    Note that there appears to be a bug with .unbind() in jQuery that doesn't reliably unbind an event and results in an "elem.removeEventListener is not a function" error. The following code demonstrates:

        var item = {
            sku: "wwhelp",
            foo: function () { alert('original foo function'); }
        };
        $(item).bind("foo.first", function () { alert('foo hook called'); });
        $(item).bind("foo.second", function () { alert('foo hook2 called'); });
        $(item).trigger("foo");

        setTimeout(function () {
            $(item).unbind("foo");
            // $(item).unbind("foo.first");
            // $(item).unbind("foo.second");
            $(item).trigger("foo");
        }, 3000);

    The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both with the plain "foo" value and when removing both of the postfixed event handlers explicitly. Oddly, the following, which removes only one of the two handlers, works:

        setTimeout(function () {
            //$(item).unbind("foo");
            $(item).unbind("foo.first");
            // $(item).unbind("foo.second");
            $(item).trigger("foo");
        }, 3000);

    This actually works, which is weird, as the code in unbind tries to unbind using a DOM method that doesn't exist.
    <shrug> A partial workaround for unbinding all 'foo' events is the following:

        setTimeout(function () {
            $.event.special.foo = {
                teardown: function () {
                    alert('teardown');
                    return true;
                }
            };
            $(item).unbind("foo");
            $(item).trigger("foo");
        }, 3000);

    which is a bit cryptic to say the least, but it seems to work more reliably.

    I can't take credit for any of this - thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn't find any good descriptions of the process, so I thought it'd be good to write it down here. Hope some of you find this helpful.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery

    Read the article

  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
    Ran into a small problem today with my client side jQuery library after switching to jQuery 1.4. The problem was with a shadow plugin that I use to provide drop shadows for absolutely positioned elements - for Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a manual element is created to provide the shadow that underlays the original element, along with a blur filter to provide the fuzziness in the shadow. Some of the key pieces are:

        var vis = el.is(":visible");
        if (!vis)
            el.show();  // must be visible to get .position
        var pos = el.position();

        if (typeof shEl.style.filter == "string")
            sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' +
                             opt.opacity.toString() + ')');

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

    This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. IOW, the opacity filter is not applied incrementally, but absolutely - which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and filter are applied. If I apply the blur after the opacity I get my correct behavior back, with both opacity and blur:

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

        if (typeof shEl.style.filter == "string")
            sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' +
                             opt.opacity.toString() + ')');

    While this works, it still causes problems in other areas where opacity is implicitly set in code, such as for fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and main object linked. Both of these may set the opacity explicitly, and that is still broken, as it will effectively kill the blur filter.

    This seems like a really strange design decision by the jQuery team, since clearly the jQuery css function does the right thing for setting filters. Internally however, the opacity setting doesn't use .css, instead hardcoding the filter - which given jQuery's usual flexibility and smart code seems really inappropriate. The following is from jQuery.js 1.4:

        var style = elem.style || elem, set = value !== undefined;

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name === "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                style.zoom = 1;

                // Set the alpha filter to set the opacity
                var opacity = parseInt( value, 10 ) + "" === "NaN" ?
                    "" :
                    "alpha(opacity=" + value * 100 + ")";
                var filter = style.filter || jQuery.curCSS( elem, "filter" ) || "";
                style.filter = ralpha.test(filter) ?
                    filter.replace(ralpha, opacity) :
                    opacity;
            }

            return style.filter && style.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "" :
                "";
        }

    You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out.
    jQuery 1.3.2 looks a little different:

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name == "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                elem.zoom = 1;

                // Set the alpha filter to set the opacity
                elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) +
                    (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")");
            }

            return elem.filter && elem.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '' :
                "";
        }

    Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either.

    Resources

      • ww.jquery.js (shadow plug-in $.fn.shadow)

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery

    Read the article

  • Request Limit Length Limits for IIS's requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless.

    However, one issue that kicked my ass for about an hour - and not for the first time - was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

        http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn't terribly helpful - normally a 404 is a resource not found error, but the resource is definitely there. So how the heck do you figure out what's wrong?

    If you're just interested in the solution, here's the short version: IIS by default allows only a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the Request Filtering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\System32\inetsrv\config). To set the value, configure the requestLimits key like so:

        <configuration>
          <security>
            <requestFiltering>
              <requestLimits maxQueryString="2048" />
            </requestFiltering>
          </security>
        </configuration>

    This fixed me right up and made the requests work.
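
    If you can't (or don't want to) touch the global ApplicationHost.config, the same requestFiltering setting can usually be applied per site from web.config, assuming the section isn't locked down on the server. ASP.NET 4.0 and later also enforces its own managed query string limit via httpRuntime. A sketch of both:

        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <requestLimits maxQueryString="2048" />
              </requestFiltering>
            </security>
          </system.webServer>
          <system.web>
            <!-- ASP.NET 4.0+ applies its own limit on managed requests -->
            <httpRuntime maxQueryStringLength="2048" />
          </system.web>
        </configuration>
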
    How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I'll take you through a quick review of how I tracked this down.

    Finding the Problem

    The issue with the error returned is that IIS returns a 404 Resource not found error and doesn't provide much information about it. If you're lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page - the bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: you HAVE TO run localhost. On my server, which has about 10 domains running, localhost doesn't point at the particular site I had problems with, so I didn't get the luxury of this nice error page.

    Using Failed Request Tracing to retrieve Error Info

    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this: find the Failed Request Tracing Rules option in the IIS Service Manager, select it, and then use Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you're looking for, it might help to specify it exactly to keep the number of captured errors down. Then run your request and let it fail. IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module: the RequestFilteringModule.

    Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you're dealing with OpenId, since these return URLs are quite extensive.

    Debugging failed requests is never fun, but IIS 7 and forward at least provides us the tools that can help point us in the right direction. The error message in the FRT report isn't as nice as the IIS error message, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you're not intimately familiar with it, but I think with some Google searches it would be easy to track this down with a few tries...

    Hope this was useful to some of you. It's useful to me to put this out as a reminder - I've run into this issue before myself and totally forgot. Next time I got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, Security

    Read the article

  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
    Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS based application. I build several configuration tools for application configuration and installation, and depending on which version of IIS is available, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application, because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7, but this didn't work in prior versions, and so on. Along the same lines I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application, prior to when the Request object is available. So it's useful to know which version of IIS you can possibly expect.

    This should be easy, right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number - you can detect installation but not which version is installed.

    The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]

    The easiest way to determine the IIS version number is if you are already running inside of ASP.NET and you are inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like Microsoft-IIS/7.5 returned to you. It's a cinch to parse this to retrieve the version number. This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most times you need to know a specific version of IIS when you are configuring or installing your application.

    The messy way: Match Windows OS Versions to IIS Versions

    Since version 5.x, IIS versions have always been tied very closely to the Operating System. Meaning the only way to get a specific version of IIS was through the OS - you couldn't install another version of IIS on the given OS. Microsoft has a page that describes the OS version to IIS version relationship here: http://support.microsoft.com/kb/224609

    In .NET you can then sniff the OS version and based on that return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

        /// <summary>
        /// Returns the IIS version for the given Operating System.
        /// Note this routine doesn't check to see if IIS is installed,
        /// it just returns the version of IIS that should run on the OS.
        ///
        /// Returns the value from Request.ServerVariables["SERVER_SOFTWARE"]
        /// if available. Otherwise uses OS sniffing to determine the OS version
        /// and returns the matching IIS version instead.
        /// </summary>
        /// <returns>version number or -1</returns>
        public static decimal GetIisVersion()
        {
            // if running inside of IIS parse the SERVER_SOFTWARE key
            // This would be most reliable
            if (HttpContext.Current != null && HttpContext.Current.Request != null)
            {
                string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
                if (!string.IsNullOrEmpty(os))
                {
                    // Microsoft-IIS/7.5
                    int dash = os.LastIndexOf("/");
                    if (dash > 0)
                    {
                        decimal iisVer = 0M;
                        if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                            return iisVer;
                    }
                }
            }

            // Note: use Version.Minor here - Version.MajorRevision reflects
            // revision info and would map most OS versions to x.0
            decimal osVer = (decimal) Environment.OSVersion.Version.Major +
                            ((decimal) Environment.OSVersion.Version.Minor / 10);

            // Windows 7 and Win2008 R2
            if (osVer == 6.1M)
                return 7.5M;
            // Windows Vista and Windows 2008
            else if (osVer == 6.0M)
                return 7.0M;
            // Windows 2003 and XP 64 bit
            else if (osVer == 5.2M)
                return 6.0M;
            // Windows XP
            else if (osVer == 5.1M)
                return 5.1M;
            // Windows 2000
            else if (osVer == 5.0M)
                return 5.0M;

            // error result
            return -1M;
        }

    Talk about a brute force approach, but it works. This code goes only back to IIS 5 - anything before that is not something you possibly would want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?
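
    For reference: Windows 8/Server 2012 report OS version 6.2 and ship with IIS 8.0, and Windows 8.1/Server 2012 R2 report 6.3 with IIS 8.5, so the version map above can be extended with two more checks following the same pattern (note that on Windows 8.1 and later, Environment.OSVersion may report 6.2 unless the application manifest declares OS compatibility):

        // Additions to the version map in GetIisVersion() above:
        // Windows 8.1 and Windows Server 2012 R2 (OS version 6.3)
        if (osVer == 6.3M)
            return 8.5M;
        // Windows 8 and Windows Server 2012 (OS version 6.2)
        else if (osVer == 6.2M)
            return 8.0M;
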
    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS

    Read the article

  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
    WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do:

        ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE);

    GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this:

        WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212

    While lately excessive resource usage has been frowned upon, especially by MVC developers who tend to opt for content distributed as files, I still think that Web Resources have their place even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages.

    Where's my Page object, Dude?

    Unfortunately, ASP.NET natively doesn't have a mechanism for retrieving WebResource URLs outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods) and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). Even if you could, the internal constructor it does have requires a Page reference. No good…

    So, can we get access to a WebResource URL generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource URLs, so it's easy to create an instance of a Page, for example, in a static method. For our needs of retrieving resource URLs, or even actually retrieving script resources, we can use a canned, non-configured Page instance we create on our own. The following works just fine:

        public static string GetWebResourceUrl(Type type, string resource)
        {
            Page page = new Page();
            return page.ClientScript.GetWebResourceUrl(type, resource);
        }

    A slight optimization for this might be to cache the created Page instance. Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance:

        public class WebUtils
        {
            private static Page CachedPage
            {
                get
                {
                    if (_CachedPage == null)
                        _CachedPage = new Page();
                    return _CachedPage;
                }
            }
            private static Page _CachedPage;

            public static string GetWebResourceUrl(Type type, string resource)
            {
                return CachedPage.ClientScript.GetWebResourceUrl(type, resource);
            }
        }
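
    One small caveat: the lazy property above isn't thread safe, so two concurrent requests could race to create the Page instance. If you're on .NET 4 or later, Lazy<T> gives you the same caching with thread safety built in - a minimal variation:

        using System;
        using System.Web.UI;

        public static class WebUtils
        {
            // Lazy<T> creates the Page once, in a thread safe way, on first access (.NET 4+)
            private static readonly Lazy<Page> _cachedPage = new Lazy<Page>(() => new Page());

            public static string GetWebResourceUrl(Type type, string resource)
            {
                return _cachedPage.Value.ClientScript.GetWebResourceUrl(type, resource);
            }
        }
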
    You can now use GetWebResourceUrl in a Razor page like this:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE)">
            </script>
        </head>
        <body>
            <div class="errordisplay">
                <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources), ControlResources.WARNING_ICON_RESOURCE)" />
                This is only a Test!
            </div>
        </body>
        </html>

    And voila - there you have WebResources served from a non-Page based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on it). I ran into some pretty nasty compatibility issues regarding .NET 3.5, which I'll describe in this post.

    I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with the developer folks reading my blog. Specifically, it's the install order that screwed things up for me - installing Visual Studio before explicitly installing .NET 3.5 from Windows Features, in particular. If you install Visual Studio 2010, I highly recommend you install .NET 3.5 from Windows Features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through.

    So when I installed Windows 8 and then looked at the Windows Features dialog to see what to install after the fact, I thought: .NET 3.5 - who needs it? I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after the initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it.

    Enabling .NET 3.5 in Windows 8

    If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5. And that *should* do the trick. If you do this before you install other apps that require .NET 3.5, you are going to have no problems.

    Unfortunately for me, even after I'd installed the above, running the CodeRush installer still produced a dialog complaining that .NET 3.5 SP1 is required. Now, I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does…

    So naturally I thought the CodeRush installer was a little whacky. After some back and forth, Alex Skorkin on Twitter pointed me in the right direction: he asked me to look in the registry for exact info on which version of .NET 3.5 is installed, here:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP

    where I found that .NET 3.5 SP1 was installed. This is the 64 bit key, which looks all correct. However, when I looked under the 32 bit node I found:

        HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5

    Notice that the service pack number is set to 0 rather than 1 (which it was for the 64 bit install), and SP1 is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, OK… thanks for that!
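
    If you need to check for this condition from code - say in your own installer - the SP value under those keys can be read directly. A minimal sketch (the helper is mine, not part of any framework API):

        using Microsoft.Win32;

        public static class NetFx35Info
        {
            // Reads the .NET 3.5 service pack level from the registry.
            // Returns -1 if .NET 3.5 isn't registered, otherwise the SP number (0 = no SP).
            public static int GetNet35SpLevel(bool use32BitNode)
            {
                string keyPath = use32BitNode
                    ? @"SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5"
                    : @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5";

                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath))
                {
                    if (key == null)
                        return -1;
                    return (int)(key.GetValue("SP") ?? 0);
                }
            }
        }
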
    Easy to fix, you say - just install SP1. Nope, not so easy, because the standalone installer doesn't work on Windows 8. I can't get either the .NET 3.5 installer or the SP1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also a no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it into doing it explicitly.

    How did this happen?

    I'm not sure exactly whether this is the cause and effect, but I suspect the story goes like this:

      • Installed Windows 8 without support for .NET 3.5
      • Installed Visual Studio 2010, which installs .NET 3.5 (no SP)

    I now had .NET 3.5 installed, but without SP1. I then:

      • Tried to install CodeRush - Error: .NET 3.5 SP1 required
      • Enabled .NET 3.5 in Windows Features

    I figured enabling the .NET 3.5 Windows Feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left the 32 bit install without SP1.

    How to Fix it

    My final solution was to completely uninstall .NET 3.5 *and* to reboot:

      • Go to Windows Features
      • Uncheck the .NET Framework 3.5
      • Restart Windows
      • Go to Windows Features
      • Check .NET Framework 3.5

    and voila, I now have a proper installation of .NET 3.5. I tried this before, but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5!

    More Problems

    The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or wasn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows:

        dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs

    You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer files live. This also gives you a little more information if something does go wrong.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET

    Read the article

  • Internet Explorer and Cookie Domains

    - by Rick Strahl
    I've been bitten by some nasty issues today in regards to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need to have single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain, the cookie stays valid for all sub-domains.

    I've been testing the app for quite a while and everything is working great. Finally I get around to checking the app with Internet Explorer and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address or a machine name. When you do, Internet Explorer simply ignores the cookie.

    In my last post I talked about some generic code I created to parse out the base domain from the current URL so a domain cookie would automatically be used, using this code:

        private void IssueAuthTicket(UserState userState, bool rememberMe)
        {
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId,
                DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString());

            string ticketString = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
            cookie.HttpOnly = true;

            if (rememberMe)
                cookie.Expires = DateTime.Now.AddDays(10);

            var domain = Request.Url.GetBaseDomain();
            if (domain != Request.Url.DnsSafeHost)
                cookie.Domain = domain;

            HttpContext.Response.Cookies.Add(cookie);
        }
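
    GetBaseDomain() here is the small extension method from that previous post. A simplified sketch of what it does (this naive version just takes the last two host segments and doesn't handle multi-part public suffixes like .co.uk):

        using System;

        public static class UriExtensions
        {
            // Naive base domain extraction: takes the last two host segments
            // (store.domain.com -> domain.com)
            public static string GetBaseDomain(this Uri uri)
            {
                var tokens = uri.DnsSafeHost.Split('.');
                if (tokens.Length > 2)
                    return tokens[tokens.Length - 2] + "." + tokens[tokens.Length - 1];
                return uri.DnsSafeHost;
            }
        }
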
    This code works fine for all browsers except Internet Explorer, both locally and on full domains. It also works fine for Internet Explorer with actual 'real' domains. However, the code fails silently for IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie and fails to log in. Argh! The end result is that the solution above, trying to automatically parse out the base domain, won't work, as local addresses end up failing.

    Configuration Setting

    Given this screwed up state of affairs, the best solution to handle this is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication, so that's the natural choice for storing the domain name:

        <authentication mode="Forms">
          <forms loginUrl="~/Account/Login"
                 name="gnc"
                 domain="mydomain.com"
                 slidingExpiration="true"
                 timeout="30"
                 xdt:Transform="Replace"/>
        </authentication>

    Although I'm not actually letting FormsAuth set my cookie directly, I can still access the domain name from the static FormsAuthentication.CookieDomain property, by changing the domain assignment code to:

        if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain))
            cookie.Domain = FormsAuthentication.CookieDomain;

    The key is to only set the domain when actually running on a full authority, and to leave the domain key blank on the local machine to avoid the local address debacle. Note: if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens.

    Logging Out

    When specifying a domain key for a login, it's also vitally important that the same domain key is used when logging out. Forms Authentication will do this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). If you use an explicit Cookie to manage your logins or other persistent values, make sure that when you log out you also specify the domain. IOW, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value:

        HttpCookie cookie = new HttpCookie("gne", "");
        cookie.Expires = DateTime.Now.AddDays(-5);

        // make sure we use the same logic to release the cookie
        var domain = Request.Url.GetBaseDomain();
        if (domain != Request.Url.DnsSafeHost)
            cookie.Domain = domain;

        HttpContext.Response.Cookies.Add(cookie);

    I managed to get my code to do what I needed it to, but man, I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string, both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context, which is fairly trivial, and I demonstrated a simple ViewRenderer class that simplified the process down to a couple of lines of code. However, if no Controller Context is available the process is not quite as straightforward, and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it's an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently.

    Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext if you have a controller instance, even if MVC didn't create that instance.

    Creating a Controller Instance outside of MVC

    The trick to make this work is to create an MVC Controller instance - any Controller instance - and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available, it's possible to create a fully functional controller context, as Razor can get all the necessary context information from the HttpContextWrapper(). The key to make this work is the following method:

        /// <summary>
        /// Creates an instance of an MVC controller from scratch
        /// when no existing ControllerContext is present
        /// </summary>
        /// <typeparam name="T">Type of the controller to create</typeparam>
        /// <returns>Controller Context for T</returns>
        /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception>
        public static T CreateController<T>(RouteData routeData = null)
            where T : Controller, new()
        {
            // create a disconnected controller instance
            T controller = new T();

            // get context wrapper from HttpContext if available
            HttpContextBase wrapper = null;
            if (HttpContext.Current != null)
                wrapper = new HttpContextWrapper(System.Web.HttpContext.Current);
            else
                throw new InvalidOperationException(
                    "Can't create Controller Context if no active HttpContext instance is available.");

            if (routeData == null)
                routeData = new RouteData();

            // add the controller routing if not existing
            if (!routeData.Values.ContainsKey("controller") &&
                !routeData.Values.ContainsKey("Controller"))
                routeData.Values.Add("controller",
                    controller.GetType().Name
                        .ToLower()
                        .Replace("controller", ""));

            controller.ControllerContext = new ControllerContext(wrapper, routeData, controller);
            return controller;
        }

    This method creates an instance of a Controller class from an existing HttpContext, which means this code should work from anywhere within ASP.NET to create a controller instance that's ready to be rendered. This means you can use it from within an Application_Error handler, as I needed to, or even from within a WebAPI controller, as long as it's running inside of ASP.NET (ie. not self-hosted). Nice.

    So using the ViewRenderer class from the previous article, I can now very easily render an MVC view outside of the context of MVC. Here's what I ended up with in my application's custom error HttpModule:

        protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model)
        {
            var Response = HttpContext.Current.Response;
            Response.ContentType = "text/html";
            Response.StatusCode = errorHandler.OriginalHttpStatusCode;

            var context = ViewRenderer.CreateController<ErrorController>().ControllerContext;
            var renderer = new ViewRenderer(context);
            string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model);

            Response.Write(html);
        }

    That's pretty sweet, because it's now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic view as a base for the controller context when none is passed:

        public ViewRenderer(ControllerContext controllerContext = null)
        {
            // Create a known controller from HttpContext if no context is passed
            if (controllerContext == null)
            {
                if (HttpContext.Current != null)
                    controllerContext = CreateController<ErrorController>().ControllerContext;
                else
                    throw new InvalidOperationException(
                        "ViewRenderer must run in the context of an ASP.NET " +
                        "Application and requires HttpContext.Current to be present.");
            }
            Context = controllerContext;
        }

    In this case I use the ErrorController class, which is a generic controller that exists in the same assembly as my ViewRenderer class, and that works just fine, since 'generically' rendered views tend to not rely on anything from the controller other than the model, which is explicitly passed.

    While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which when they error often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application!
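
    For example, the emailing scenario boils down to a couple of lines with this class. A sketch - the template path and model here are hypothetical placeholders:

        // Render an email body from a Razor template anywhere inside the ASP.NET app.
        // ViewRenderer creates its own controller context when none is passed.
        var renderer = new ViewRenderer();
        string body = renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", order);

        // 'body' can now be handed to SmtpClient, a logger, etc.
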
    You can check out the updated ViewRenderer class below to render your 'generic views' from anywhere within your ASP.NET applications. Hope some of you find this useful.

    Resources

      • ViewRenderer Class in Westwind.Web.Mvc Library (GitHub)
      • Original ViewRenderer Article
      • Razor Hosting Library (GitHub)
      • Original Razor Hosting Article

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, MVC

    Read the article

  • SnagIt Live Writer Plug-in Updated

    - by Rick Strahl
    Ah, I love SnagIt from TechSmith and I use the heck out of it almost every day. So no surprise that I decided some time ago to integrate SnagIt into a few applications that require screen shots extensively. It's been a while since I've posted an update to my small SnagIt Windows Live Writer plug-in. There have been a few nagging issues that have crept up with changes in the way SnagIt handles captures in recent versions, and they have been addressed in this update.

    Personally I love SnagIt and use it extensively, mostly for blogging, but also for writing documentation and articles etc. While there are many other (and also free) tools out there to do basic screen captures, SnagIt continues to be the most convenient tool for me, with its nice built-in capture and effects editor that makes creating professional looking captures childishly simple. And maybe even more importantly: SnagIt has a COM interface that can be automated, which makes it super easy to embed into other applications. I've built plug-ins for SnagIt as well as for one of my company's own tools, Html Help Builder.

    If you use the Windows Live Writer offline WebLog editor to write blog posts and have a copy of SnagIt, it's probably worth your while to check this out if you haven't already. In case you haven't: this plug-in integrates SnagIt with Live Writer so you can easily capture and edit content and embed it into a post. Captures are shown in the SnagIt preview editor, where you can edit the image and apply image markup or effects before selecting Finish (or Cancel). The final image can then be pasted directly into your Live Writer post.

    When installed, the SnagIt plug-in shows up in the Plug-in list or in the Plug-ins toolbar shortcut. Once you select the plug-in, you get a capture window that allows you to customize the capture process, which includes most of the useful SnagIt capture options. Once you're done capturing, the image shows up in the SnagIt image editor and you can crop, mark up and apply effects. When done, you click the Finish button and the image is embedded right into your blog post. Easy - how do you think the images in this blog entry got in here? The beauty of SnagIt is that it's all easily integrated - capturing, editing and embedding only take a few seconds, especially if you save image effect presets in SnagIt.

    What's updated

    The main issue addressed in this update has to do with how the plug-in updates the Live Writer window. When a capture starts, Live Writer gets minimized to get out of the way to let you pick your capture source. When the capture is complete and the image has been embedded, Live Writer is activated once again. Recent versions of SnagIt had changed SnagIt's window positioning so that Live Writer ended up popping back up behind the SnagIt window, which was pretty annoying. This update pushes Live Writer back to the top of the window stack using some delaying tactics in the code. There have also been a few small changes to the way the code interacts with the COM object, which is more reliable if a capture fails, or if SnagIt blows up or is locked because it's already in a capture outside of the automation interface.

    Source Code

    SnagIt automation is something I actually use a lot. As mentioned, I've integrated this automation into Live Writer as well as my documentation tool Html Help Builder, which I use just about daily. The SnagIt integration has a similar interface in that application and provides similar functionality.
    Because it's quite useful to embed SnagIt into other apps, there's source code that you can download and embed into your own applications. The code includes both the dialog class that is automated from Live Writer, as well as the basic capture component that captures images to a disk file.

    Resources

      • Download the SnagIt Capture Plug-in Installer - an MSI installer that installs the plug-in into Live Writer's PlugIns directory.
      • Source Code to the SnagIt Capture Plug-in - contains the plug-in assembly, as well as the source code to the plug-in and the setup project.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Live Writer, WebLog

    Read the article

  • $.fadeTo/fadeOut() operations on Table Rows in IE fail

    - by Rick Strahl
    Here's a small problem that one of my customers ran into a few days ago: he was playing around with some of the sample code I've put out for one of my simple jQuery demos, which deals with providing a simple pulse behavior plug-in:

        $.fn.pulse = function(time) {
            if (!time)
                time = 2000;

            // *** this == jQuery object that contains selections
            $(this).fadeTo(time, 0.20, function() {
                $(this).fadeTo(time, 1);
            });
            return this;
        }

    It's a very simplistic plug-in and it works fine for simple pulse animations. However, he ran into a problem where it didn't work when working with tables - specifically, pulsing a table row in Internet Explorer. Works fine in FireFox and Chrome, but in IE not so much. It also works just fine in IE as long as you don't try it on tables or table rows specifically. Applying it against something like this (an ASP.NET GridView):

        var sel = $("#gdEntries>tbody>tr")
            .not(":first-child")  // no header
            .not(":last-child")   // no footer
            .filter(":even")
            .addClass("gridalternate");

        // *** Demonstrate simple plugin
        sel.pulse(2000);

    fails in IE. No pulsing happens in any version of IE. After some additional experimentation with single rows and various ways of selecting each, and still failing, I've come to the conclusion that the various fade operations in jQuery simply won't work correctly on table rows in IE (any version). So even something as 'elemental' as this:

        var el = $("#gdEntries>tbody>tr").get(0);
        $(el).fadeOut(2000);

    is not working correctly. The item will stick around for 2 seconds and then magically disappear. Likewise:

        sel.hide().fadeIn(5000);

    also doesn't fade in, although the items become immediately visible in IE. Go figure that behavior out.

    Thanks to a tweet from red_square and a link he provided, here is a grid that explains what works and doesn't in IE (and most last-gen browsers) regarding opacity: http://www.quirksmode.org/js/opacity.html. It appears from this link that table and row elements can't be made opaque, but td elements can. This means for the row selections I can force each of the td elements to be selected and then pulse all of those. Once you have the rows, it's easy to explicitly select all the columns in those rows with .find("td"). Aha - the following actually works:

        var sel = $("#gdEntries>tbody>tr")
            .not(":first-child")  // no header
            .not(":last-child")   // no footer
            .filter(":even")
            .addClass("gridalternate");

        // *** Demonstrate simple plugin
        sel.find("td").pulse(2000);

    A little unintuitive that, but it works.

    Stay away from <table> and <tr> Fades

    The moral of the story is: stay away from TR, TH and TABLE fades and opacity. If you have to do it on tables, use the columns instead, and if necessary use .find("td") on your row(s) selector to grab all the columns.

    I've been surprised by this, uhm, revelation, since I use fadeOut in almost every one of my applications for deletion of items, and row deletions from grids are not uncommon, especially in older apps. But it turns out that fadeOut actually works in terms of behavior: it removes the item when the timeout's done, and because the fade is relatively short-lived and I don't extensively test IE code any more, I just never noticed that the fade wasn't happening.

    Note - this behavior, or rather lack thereof, appears to be specific to table, tr and th elements. I see no problems with other elements like <div> and <li> items. Chalk this one up to another of IE's shortcomings. Incidentally, I'm not the only one who has failed to address this in my simplistic plug-in: the jquery-ui pulsate effect also fails on table rows in the same way:
sel.effect("pulsate", { times: 3 }, 2000); and it also works with the same workaround. If you’re already using jquery-ui definitely use this version of the plugin which provides a few more options… Bottom line: be careful with table based fade operations and remember that if you do need to fade – fade on columns.© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you're running Windows, have a blog and aren’t using Live Writer you’re probably doing it wrong…

One of Live Writer’s nice features is that it can download your blog’s CSS for preview and edit displays. It lets you edit your content inside of the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you’ll see on your blog while you’re editing your post. Unfortunately Live Writer renders the HTML content in the Web Browser Control’s default IE 7 rendering mode. Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for ‘compatibility’ with old applications. If you are importing your blog’s CSS that may suck if you’re using rich HTML 5 and CSS 3 formatting.

Hack the Registry to get Live Writer to render using IE 9 or 10

In order to get Live Writer (or any other application that uses the Web Browser Control, for that matter) to render with a more recent engine, you can apply a registry hack that overrides the Web Browser Control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here’s how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry. The setup below is for a 64 bit machine, where I configure Live Writer – which is a 32 bit application – to use IE 10 rendering. The keys set are as follows:

32 bit configuration on a 64 bit machine:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
Key: WindowsLiveWriter.exe
Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

On a 32 bit only machine:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
Key: WindowsLiveWriter.exe
Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer. This is a minor tweak, but it’s nice to actually see my blog posts now with the proper CSS formatting intact. Notice the rounded borders and shadow on the code blocks, as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like at a specific resolution – much better than in the old plain view, which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It’s minor stuff, but it makes the editing experience better yet and closer to the final result, so there are fewer republish operations than I previously had. Sweet!

Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control.
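For reference, here's what the same setup might look like as a .reg file you can double-click to import – a sketch of the 64 bit case described above (10000 decimal is 0x2710 hex; use dword:00002328 for IE 9):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
"WindowsLiveWriter.exe"=dword:00002710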
If you are using the Web Browser control in your own applications, it’s a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content, by automatically setting this flag in the registry or as part of the application’s startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS 3 features and is a decent baseline that works for most Windows 7 and 8 machines. If running pre-IE9, the browser will fall back to IE 7 rendering and look bad, but at least more recent browsers will see an improved experience.

I’m surprised that there aren’t more vendors and third party apps using this feature. There are only very few entries in the registry key group on my machine – any other apps that use the Web Browser control are using IE 7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway…

Resources
.reg Files to register Live Writer Browser Emulation (set for IE9)
Specifying Internet Explorer Version for Applications
SnagIt LiveWriter Plug-in
Download Windows Live Writer
Download Windows Live Writer with Chocolatey

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Live Writer  Windows

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
In testing out various features of Web API I've found a few oddities in the way that serialization is handled. These are probably not super common, but they may throw you for a loop. Here's what I found.

Simple Parameters from Xml or JSON Content

Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

public string ReturnAlbumInfo(Album album)
{
    return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
}

However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from the XML or JSON body by default, and you may end up with failures. Take the following two very simple methods:

public string ReturnString(string message)
{
    return message;
}

public HttpResponseMessage ReturnDateTime(DateTime time)
{
    return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
}

The first one accepts a string. If it's called with a JSON string from the client like this:

var client = new HttpClient();
var result = client.PostAsJsonAsync<string>(
    "http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
    "Hello World").Result;

which results in a trace like this:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: rasxps
Content-Length: 13
Expect: 100-continue
Connection: Keep-Alive

"Hello World"

it produces… wait for it: null.

Sending a date in the same fashion:

var client = new HttpClient();
var result = client.PostAsJsonAsync<DateTime>(
    "http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
    new DateTime(2012, 1, 1)).Result;

results in this trace:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: rasxps
Content-Length: 30
Expect: 100-continue
Connection: Keep-Alive

"\/Date(1325412000000-1000)\/"

(yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) and produces an error response:

The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

Basically any simple parameters are not parsed properly, resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:

public string ReturnString([FromBody] string message)

and

public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

which explicitly instructs Web API to read the value from the body.

UrlEncoded Form Variable Parsing

Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the QueryString and Route Values, but it doesn't explicitly map parameters from POST values either. Taking our same ReturnString function from earlier and posting a message POST variable like this:

var formVars = new Dictionary<string,string>();
formVars.Add("message", "Some Value");
var content = new FormUrlEncodedContent(formVars);

var client = new HttpClient();
var result = client.PostAsync(
    "http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
    content).Result;

which produces this trace:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: rasxps
Content-Length: 18
Expect: 100-continue

message=Some+Value

When calling ReturnString:

public string ReturnString(string message)
{
    return message;
}

unfortunately it does not map the message value to the message parameter. This sort of mapping unfortunately is not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

public string ReturnMessageModel(MessageModel model)
{
    return model.Message;
}

public class MessageModel
{
    public string Message { get; set; }
}

Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be mapped to matching properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) from Web API's Request object as far as I can discern. Well, only if you consider this easy:

public string ReturnString()
{
    var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
    return formData.Get("message");
}

Oddly, FormDataCollection does not support indexers, so you have to use the .Get() method, which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

public string ReturnString()
{
    return HttpContext.Current.Request.Form["message"];
}

which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data – given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api
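If you find yourself doing the FormDataCollection dance a lot, you could wrap it in a small extension method. This is just a sketch of my own (not part of Web API), and note the caveat that request content can typically only be read once per request:

using System.Net.Http;
using System.Net.Http.Formatting;

public static class HttpRequestMessageExtensions
{
    /// <summary>
    /// Reads a single urlencoded POST variable from the request body.
    /// Assumes application/x-www-form-urlencoded content.
    /// </summary>
    public static string GetFormVariable(this HttpRequestMessage request, string key)
    {
        var formData = request.Content.ReadAsAsync<FormDataCollection>().Result;
        return formData.Get(key);
    }
}

// usage inside a controller method:
// string message = Request.GetFormVariable("message");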

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing, with major version releases coming out what, now, every other month? Seriously, that's retarded, especially given the limited number of new features these releases bring, and the upgrade pain for plug-ins that a major version release causes.

Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace, with Web pages loading slowly and most content slowly 'painting' the page – a typical sign of content downloading slowly. However these are pages that should be mostly cached on my system, and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox – dog on a stick.

Why so slow, Boss?

While I was complaining, lots of people recommended ditching FireFox – use Chrome, yada yada yada. Yeah, Chrome is fast and getting better, but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size, and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN URLs pop up repeatedly raised a flag with me. That stuff should be caching, and it looked like each and every hit was reloading these scripts and various images over and over again. I fired up FireBug, and sure enough, on a repeated hit to my blog both jQuery and the main CSS file for the site were being loaded fully and taking a while to load. Since this page had been loaded before, these items should be cached and show 304 requests instead of full HTTP requests returning 200 result codes. In short, it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow.

Once I realized what the problem was, I took a look in the about:config settings and lo and behold, a bunch of the cache settings were set to not cache. In my case ALL the main cache flags were set to false for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed, and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to default reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache.

I try not to muck with the about:config settings much (other than turning off the IPV6 option), but when there are problems, access to these features can be really nice. However, I treat this as a last resort, so it took me quite some time before I started looking through ALL the settings. This takes a while, not knowing what I was looking for exactly. If Web load performance is slow it's a good idea to check the cache settings.
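For reference, these are the main cache pref names as they show up in about:config. A user.js file in the profile folder can pin them so they can't silently flip again – a sketch (user.js is merged into the prefs on every startup):

// user.js - place in the FireFox profile folder
user_pref("browser.cache.disk.enable", true);
user_pref("browser.cache.memory.enable", true);
user_pref("browser.cache.offline.enable", true);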
I have no idea what hosed these settings for me – I certainly didn't explicitly set them in about:config, and in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in FireFox

    Read the article

  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
If you’re building WCF REST Services you may find that WCF’s OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn’t provide access to everything, and sometimes not in a very convenient manner. For example, accessing query string parameters explicitly is pretty painful:

[OperationContract]
[WebGet]
public string HelloWorld()
{
    var properties = OperationContext.Current.IncomingMessageProperties;
    var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
    string queryString = property.QueryString;
    var name = StringUtils.GetUrlEncodedKey(queryString, "Name");

    return "Hello World " + name;
}

And that doesn’t account for the logic in GetUrlEncodedKey to retrieve the querystring value. It’s a heck of a lot easier to just do this:

[OperationContract]
[WebGet]
public string HelloWorld()
{
    var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
    return "Hello World " + name;
}

Ok, so if you follow the REST guidelines for WCF REST you shouldn’t have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway, and anything to make things a little easier is welcome. To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file.

Add aspNetCompatibilityEnabled in web.config

First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request.

<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
</system.serviceModel>

Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute

Next you have to mark up the service implementation – not the contract, if you’re using a separate interface!!! – with the AspNetCompatibilityRequirements attribute:

[ServiceContract(Namespace = "RateTestService")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class RestRateTestProxyService

Typically you’ll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run if the web.config attribute is not set; Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests. Once Allowed or Required has been set on the implementing class you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently:

public HttpContext Context
{
    get { return HttpContext.Current; }
}

public HttpRequest Request
{
    get { return HttpContext.Current.Request; }
}

While you can also access the Response object, write raw data to it and manipulate headers, THAT is probably not such a good idea, as both your code and WCF will end up writing into the output stream. However it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember though that WCF REST DOES actually support that as well with Stream responses, which essentially allow you to return any kind of data to the client, so using Response should really never be necessary.

Should you or shouldn’t you?

WCF purists will tell you never to muck with the platform-specific features or the underlying protocol, and if you can avoid it you definitely should. Querystring management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides – if possible – as it makes the code more portable. For example, if you do enable ASP.NET compatibility you won’t be able to self-host a WCF REST service. At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it’s not unreasonable to access the HttpContext directly, especially if it’s only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services.

It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I’m looking forward to this stuff, as WCF REST as it exists today still is a royal pain (in fact I’m struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve – grrrr…).

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  WCF
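As a middle ground, for query strings specifically there's also a WCF-native option that doesn't require ASP.NET compatibility and keeps self-hosting working. A sketch – note that UriTemplateMatch can be null depending on how the request was dispatched:

[OperationContract]
[WebGet]
public string HelloWorld()
{
    // WebOperationContext lives in System.ServiceModel.Web
    var match = WebOperationContext.Current.IncomingRequest.UriTemplateMatch;
    var name = (match != null ? match.QueryParameters["Name"] : null) ?? string.Empty;
    return "Hello World " + name;
}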

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
As if Windows Security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts for logging on. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

Windows wants you to use Windows Live

Windows Live logins are required in order for the Windows RT account info to work. Not that I care – since installing Windows 8 I've maybe spent 10 minutes with Windows RT because – well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate, I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does – I do get all my Live logins to work from the Live account, so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account – this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

Who am I?

The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example, Windows reports folder security using my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. Now that I've switched to a Windows Live login and have to log in to Windows with my Live account, what do you suppose this returns?

Console.WriteLine(Environment.UserName);

It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in TaskManager (or Process Explorer for me), I see everything on the desktop shell running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account – it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account – everything refers to the Live account. Right then, clear as potato soup!

So this is who you really are!

The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication – I tried to log in with my real Windows account login, because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username. That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is – just weird. Then in IIS if I look at a Trace page (or in my case my app's Status page), I see that the logged-on account is – my Windows user account.

What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password you have to use your Live IDs, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows
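A quick way to inspect the mapping yourself is to compare the environment's user name with the current Windows identity – both should report the underlying Windows account, never the Live ID. A sketch (the output in the comments is what I'd expect, not captured from the post):

using System;
using System.Security.Principal;

class WhoAmI
{
    static void Main()
    {
        // e.g. "rstrahl"
        Console.WriteLine(Environment.UserName);

        // e.g. "MACHINE\rstrahl" - still the raw account, not the Live ID
        Console.WriteLine(WindowsIdentity.GetCurrent().Name);
    }
}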

    Read the article

  • IE9 not rendering box-shadow Elements inside of Table Cells

    - by Rick Strahl
Ran into an annoying problem today with IE 9. I've been slowly updating some older sites with CSS 3 tags, and for the most part IE 9 does a reasonably decent job of working with the new CSS 3 features. Not all of them by a long shot, but at least some of the more useful ones like border-radius and box-shadow are supported. Until today I was happy to see that IE supported box-shadow just fine, but then I ran into a problem with some old markup that uses tables for its main layout sections. I found that inside of a table cell IE fails to render a box-shadow: the same content shows the shadows in Chrome, but not in IE 9.

The download and purchase images are rendered with:

<a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>

where the .boxshadow and .roundbox styles look like this:

.boxshadow {
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
}
.roundbox {
    -moz-border-radius: 6px 6px 6px 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px 6px 6px 6px;
}

And the Problem is… collapsed Table Borders

Now normally these two styles work just fine in IE 9 when applied to elements. But the box-shadow doesn't work inside of this markup – because the parent container is a table cell:

<td class="sidebar" style="border-collapse: collapse">
…
<a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>
…
</td>

This HTML causes the image to not show a shadow. In actuality I'm not styling inline; as part of my browser reset I have the following in my master .css file:

table {
    border-collapse: collapse;
    border-spacing: 0;
}

which has the same effect as the inline style. border-collapse by default inherits from the parent, and so the TD inherits from table and tr – so TD tags are effectively collapsed. You can check out a test document that demonstrates this behavior here in this CodePaste.net snippet or run it here.

How to work around this Issue

To get IE 9 to render the shadows inside of the TD tag correctly, I can just change the style explicitly to NOT use border-collapse:

<td class="sidebar" style="border-collapse: separate; border-width: 0;">

Or better yet (thanks to David's comment below), you can add border-collapse: separate to the .boxshadow style like this:

.boxshadow {
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
    border-collapse: separate;
}

With either of these approaches IE renders the shadows correctly.

Do you really need border-collapse?

Should you bother with border-collapse? I think so! Collapsed borders render flat as a single fat line if a border-width and border-color are assigned, while separated borders render a thin line with a bunch of weird white space around it – or worse, render an old-skool 3D raised border, which is terribly ugly as well. So as a matter of course in any app, my browser reset includes the above code to make sure all tables with borders render the same flat borders.

As you probably know, IE has all sorts of rendering issues in tables and on backgrounds (opacity backgrounds or image backgrounds), most of which are caused by the way that IE internally uses ActiveX filters to apply these effects. Apparently collapsed borders are yet one more item that causes problems with rendering. There you have it – another crappy failure in IE we have to check for now, just one more reason to hate Internet Explorer. Luckily this one has a reasonably easy workaround. I hope this helps out somebody and saves them the hour I spent trying to figure out what caused this problem in the first place.

Resources
Sample HTML document that demonstrates the behavior
Run the Sample

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in HTML  Internet Explorer
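To see the behavior side by side, a minimal repro sketch along the lines of the sample might look like this (assumes the .boxshadow style from above and a placeholder image; in IE 9 only the second image should get a shadow):

<table style="border-collapse: collapse">
  <tr><td><img src="download.gif" class="boxshadow" /></td></tr>
</table>

<table style="border-collapse: separate">
  <tr><td><img src="download.gif" class="boxshadow" /></td></tr>
</table>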

    Read the article

  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
As the publisher of a help creation tool called Html Help Builder, I’ve seen a lot of problems with help files that won't properly display actual topic content and instead display an error message for topics. Here’s the scenario: you go ahead and happily build your fancy, schmanzy help file for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file and copies the help file contained in the zip file to disk. She then opens the help file and finds the following unfortunate result: the help file comes up with all topics in the tree on the left, but a "Navigation to the WebPage was cancelled" or "Operation Aborted" error in the Help Viewer's content window whenever you try to open a topic. The CHM file obviously opened, since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not – it's merely a Windows security 'feature' that tries to be overly helpful in protecting you.

The reason this happens is that files downloaded off the Internet – including ZIP files and CHM files contained in those zip files – are marked as coming from the Internet and so can potentially be malicious, so they do not get browsing rights on the local machine – they can’t access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this:

mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm

which points at a special Microsoft URL moniker that in turn points at the CHM file and a relative path within that HTML Help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning, most likely). Although the URL looks weird, this still equates to a call into the local computer zone, the same as if you had navigated to a local file in IE – which by default is not allowed. Unfortunately, unlike Internet Explorer, where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as shown above.

How to Fix This - Unblock the Help File

There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this:

Open Windows Explorer
Find your CHM file
Right click and select Properties
Click the Unblock button on the General tab

Clicking the Unblock button basically tells Windows that you approve this help file and allows topics to be viewed.

Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files don't contain script, or only load script that runs pure JavaScript to access web resources, this works fine without issues.

How to avoid this Problem

As an application developer there's a simple solution around this problem: always install your help files with an installer. The above security warnings pop up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, the installation and all files associated with it – including the help file – are trusted. A fully installed help file of an application works just fine because it is trusted by Windows.

Summary

It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM HTML engine's topic viewer. Because help files display local content and script is allowed to execute in CHM files, there's potential for malicious code hiding in CHM files, and the above precautions are supposed to avoid any issues.

© Rick Strahl, West Wind Technologies, 2005-2012
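Incidentally, the Unblock button just removes the Zone.Identifier marker that Windows attaches to downloaded files, so the same thing can be scripted. A sketch using the built-in cmdlet that ships with PowerShell 3.0 and later:

# unblocks a downloaded help file so topics display again
Unblock-File -Path .\wwipstuff.chm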

    Read the article

  • XmlWriter and lower ASCII characters

    - by Rick Strahl
Ran into an interesting problem today on my CodePaste.net site: the main RSS and ATOM feeds on the site were broken because one code snippet on the site contained a lower ASCII character (CHR(3)). I don't think this was done on purpose, but it was enough to make the feeds fail. After quite a bit of debugging – and throwing a custom error handler into my actual feed generation code that just spit out the raw error instead of running it through ASP.NET MVC and my own error pipeline – I found the actual error. The lovely base exception and error trace I got looked like this:

Error: '', hexadecimal value 0x03, is an invalid character.
at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize)
at System.Xml.XmlUtf8RawTextWriter.WriteElementTextBlock(Char* pSrc, Char* pSrcEnd)
at System.Xml.XmlUtf8RawTextWriter.WriteString(String text)
at System.Xml.XmlWellFormedWriter.WriteString(String text)
at System.Xml.XmlWriter.WriteElementString(String localName, String ns, String value)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItemContents(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItem(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteFeed(XmlWriter writer)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteTo(XmlWriter writer)
at CodePasteMvc.Controllers.ApiControllerBase.GetFeed(Object instance) in C:\Projects2010\CodePaste\CodePasteMvc\Controllers\ApiControllerBase.cs:line 131

XML doesn't like extended ASCII Characters

It turns out the issue is that XML in general does not deal well with lower ASCII characters. According to the XML spec, control characters below 0x20 – other than the whitespace characters – are invalid. If you generate an XML document in .NET with an embedded &#x3; entity (as mine did to create the error above), you tend to get an XML document error when displaying it in a viewer. For example, with the invalid character embedded, Chrome – which displays RSS feeds as raw XML by default – shows a parse error for my feed output. Other browsers show similar error messages. The nice thing about Chrome is that you can actually view source and jump down to the line that causes the error, which allowed me to track down the actual message that failed.

If you create an XML document that contains a 0x03 character, the XML writer fails outright with the error: '', hexadecimal value 0x03, is an invalid character. The good news is that this behavior is overridable, so XML output can at least be created by using the XmlWriterSettings object when configuring the XmlWriter instance. In my RSS configuration code this looks something like this:

MemoryStream ms = new MemoryStream();
var settings = new XmlWriterSettings()
{
    CheckCharacters = false
};
XmlWriter writer = XmlWriter.Create(ms, settings);

and voila, the feed now generates. Now generally this is probably NOT a good idea, because as mentioned above these characters are illegal, and if you view a raw XML document you'll get validation errors. Luckily though, most RSS feed readers don't care and happily accept and display the feed correctly, which is good because it got me over an embarrassing hump until I figured out a better solution.

How to handle extended Characters?

I was glad to get the feed fixed for the time being, but now I was still stuck with an interesting dilemma. CodePaste.net accepts user input for code snippets, and those code snippets can contain just about anything. This means that ASP.NET's standard request filtering cannot be applied to this content. The code content displayed is encoded before display, so for the HTML end the CHR(3) input is not really an issue. While invisible characters are hardly useful in user input, it's not uncommon for odd characters to show up in code snippets. You know the old fat-fingering that happens when you're in the middle of a coding session – those invisible characters do sometimes end up in code editors and then get pasted into the HTML textbox as a CodePaste.net snippet.

The question is how to filter this text. Looking back at the XML charset spec, it looks like all characters below 0x20 (space) except for 0x09 (tab), 0x0A (LF) and 0x0D (CR) are illegal. So applying the following filter with a RegEx should work to remove the invalid characters:

string code = Regex.Replace(item.Code, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");

Applying this RegEx to the code snippet (and title) eliminates the problems and the feed renders cleanly.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  XML
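Packaged up as a small reusable helper, the filter might look like this (a sketch of how one could wrap the RegEx above, nothing more):

using System.Text.RegularExpressions;

public static class XmlInputFilter
{
    /// <summary>
    /// Strips characters that are illegal in XML 1.0 content:
    /// everything below 0x20 except tab (0x09), LF (0x0A) and CR (0x0D).
    /// </summary>
    public static string RemoveInvalidXmlChars(string input)
    {
        if (string.IsNullOrEmpty(input))
            return input;

        return Regex.Replace(input, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");
    }
}

// usage:
// item.Code = XmlInputFilter.RemoveInvalidXmlChars(item.Code);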

    Read the article

  • Error on 64 Bit Install of IIS &ndash; LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
I’ve been having a few problems with my Windows 7 install trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up, I was left with the following error message from IIS:

Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed

This is on Windows 7 64 bit, running an ASP.NET 4.0 application configured for 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7, so it failed right out of the box. The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this narrow focus, but it’s a default loaded component. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks):

Switch IIS to run in 32 bit mode
Fix the filter listing in ApplicationHost.config

Switching IIS to allow 32 Bit Code

This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter, and enabling 32 bit code gets you around it. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Options and enable 32 Bit Applications. And voila, the error message above goes away.

Fix Filters

Enabling 32 bit code is a quick-fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke interop with 32 bit apps, there’s usually no need for enabling 32 bit code in an Application Pool, as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-)

So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web site) and go to the ISAPI filter list, I see three entries for ASP.NET 4.0. Only two of them, however, are scoped specifically to 32 bit or 64 bit. The 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Therein lies our problem.

You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (whoever thought that was a good idea???? WTF). You have to open ApplicationHost.config in a native 64 bit text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I didn’t have a native 64 bit editor handy, Notepad was my only choice. As an alternative you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub-values in a property-editor type interface.

I had no idea this tool existed prior to today, and it’s pretty cool, as it gives you some visual clues to the options available – especially in the absence of the Intellisense scheme you’d get in Visual Studio (which doesn’t work here). To use the Configuration Editor, go to the Web site root and use the Configuration Editor option in the Management group. Drill into system.webServer/isapiFilters and then click on the collection’s … button on the right. You should now see all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries:

<filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" />
<filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
<filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />

The key attribute we’re concerned with here is preCondition and its bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list – here or in the filters list shown earlier – and leave only the bitness-specific versions active.

The preCondition attribute acts as a filter, and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye out for in general – if bitness values are missing, it’s easy to run into conflicts like this with any settings that are global, especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries, or remove the non-specific versions and add bit-specific entries.

So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up, as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem, a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry with no bitness set.

Hopefully this post will help out anybody who runs into a similar situation, without having to troubleshoot all the way down into the configuration settings before noticing the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS, and things like this are what I was reluctant to find. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS, at least.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in IIS7   ASP.NET
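If you prefer the command line over the Configuration Editor, appcmd can show the same section. A sketch – double-check the collection-removal syntax against your appcmd version before running the second command:

REM list the isapiFilters section as it appears in ApplicationHost.config
%windir%\system32\inetsrv\appcmd list config -section:system.webServer/isapiFilters

REM remove the generic entry that has no bitness precondition
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/isapiFilters /-"[name='ASP.Net_4.0']"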

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
Ran into an odd behavior today with a many-to-many mapping of one of my tables in LINQ to SQL. Many-to-many mappings aren’t transparent in LINQ to SQL; it maps the link table the same way the SQL schema has it when creating one. In other words, LINQ to SQL isn’t smart about many-to-many mappings and just treats it like the 3 underlying tables that make up the many-to-many relationship. Iain Galloway has a nice blog entry about many-to-many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult, as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code.

When I created a database I’ve been using to experiment with various different OR/Ms recently, I found that for some reason LINQ to SQL was completely failing to map even the linking table. As it turns out there’s a good reason why it fails – can you spot it below? (read on :-})

Here is the original database layout: there’s an Items table, a Categories table and a link table that holds only the foreign keys to the Items and Categories tables, for a typical M->M relationship. When these three tables are imported into the model they *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix). The relationship looks perfectly fine, both in the designer as well as in the XML document:

<Table Name="dbo.wws_Item_Categories" Member="ItemCategories">
  <Type Name="ItemCategory">
    <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
    <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
    <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" />
    <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" />
  </Type>
</Table>
<Table Name="dbo.wws_Categories" Member="Categories">
  <Type Name="Category">
    <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
    <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" />
    <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" />
    <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" />
    <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" />
    <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" />
  </Type>
</Table>

However, when looking at the generated code these navigation properties (also on Item) are completely missing:

[global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")]
[global::System.Runtime.Serialization.DataContractAttribute()]
public partial class ItemCategory : Westwind.BusinessFramework.EntityBase
{
    private System.Guid _ItemId;
    private System.Guid _CategoryId;

    public ItemCategory()
    { }

    [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")]
    [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
    public System.Guid ItemId
    {
        get { return this._ItemId; }
        set
        {
            if ((this._ItemId != value))
            {
                this._ItemId = value;
            }
        }
    }

    [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")]
    [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)]
    public System.Guid CategoryId
    {
        get { return this._CategoryId; }
        set
        {
            if ((this._CategoryId != value))
            {
                this._CategoryId = value;
            }
        }
    }
}

Notice that the Item and Category association properties, which should be EntityRef properties, are completely missing. They’re there in the model, but in the generated code – not so much. So what’s the problem here?

The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious, ain’t it, especially since the designer happily lets you import the table and even shows the relationship and, implicitly, the related properties. Adding an Id field as a PK to the database and then importing results in a model that properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in order to recognize a many-to-many relation. EF actually handles the M->M relation directly without the intermediate link entity, unlike LINQ to SQL.

[updated from comments – 12/24/2009] Another approach is to make ItemId and CategoryId together the primary key in the database, which LINQ to SQL also accepts. This also works in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps database integrity.

It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be there when they were not. It’s actually a well-documented feature of L2S that each entity in the model requires a PK, but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the associations properly. This is one of the issues with L2S of course – you have to play by its rules, and once you hit one of those rules there’s no way around them – you’re stuck with what it requires, which in this case meant changing the database.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ADO.NET  LINQ
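For reference, a sketch of what the composite-key link table might look like in SQL – the parent table name wws_Items is my assumption, based on the naming convention in the model above:

-- link table with a composite primary key: guarantees uniqueness
-- and gives LINQ to SQL the PK it needs to track the entity
CREATE TABLE dbo.wws_Item_Categories (
    ItemId     uniqueidentifier NOT NULL,
    CategoryId uniqueidentifier NOT NULL,
    CONSTRAINT PK_wws_Item_Categories PRIMARY KEY (ItemId, CategoryId),
    CONSTRAINT FK_ItemCategories_Items
        FOREIGN KEY (ItemId) REFERENCES dbo.wws_Items (Id),
    CONSTRAINT FK_ItemCategories_Categories
        FOREIGN KEY (CategoryId) REFERENCES dbo.wws_Categories (Id)
);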

    Read the article

  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that are using WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to: WinInet Error 2: The system cannot find the file specified Now this error can pop up in many legitimate scenarios with WinInet such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or more specifically an Internet connection can’t be established. In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library) and all Adobe Air applications (which apparently uses WinInet for its basic HTTP stack) along with a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters all worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet. WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options and find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off also can often fix ‘real’ configuration errors. This time however this wasn’t a problem – nothing in the LAN configuration was set (all default). I also played with the Automatic detection of settings which also had no effect. I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy. And the Culprit is: Internet Explorer’s Work Offline Option The culprit in this situation was Internet Explorer which at some point, unknown to me switched into Offline Mode and was then shut down: When this Offline mode is checked when IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that it’s running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients). What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. 
    In my FoxPro wwHttp class I now call this by default, to never get bitten by this again… This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it's just too unpredictable in handling errors, and the last thing you want is to get unpredictably stale data. Problem solved, but this behavior is – well – ugly. But then that's to be expected from an API that's based on Internet Explorer, eh?

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in HTTP  Windows

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("gzip"))
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
            }

            // Allow proxy servers to cache compressed and uncompressed versions separately
            Response.AppendHeader("Vary", "Accept-Encoding");
        }

    The first method checks whether the client sending the request includes an Accept-Encoding header for either gzip or deflate and, if it does, returns true. The second function uses IsGZipSupported() to decide whether it should encode content, and uses a Response Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

        public ActionResult List(string keyword=null, int category=0)
        {
            WebUtils.GZipEncodePage();
            …
        }

    to encode my content. And that works just fine.
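    One caveat with the simple Contains() check: a client can send Accept-Encoding: gzip;q=0, which explicitly declines gzip yet still matches the substring test. It's a rare edge case, but if you want to guard against it, a slightly stricter check might look like this – a sketch of my own, not part of the original helpers:

        /// <summary>
        /// Checks whether the client actually accepts an encoding, honoring
        /// q-value opt-outs such as "gzip;q=0". Minimal parsing only.
        /// </summary>
        static bool AcceptsEncoding(string acceptEncoding, string encoding)
        {
            if (string.IsNullOrEmpty(acceptEncoding))
                return false;

            foreach (string entry in acceptEncoding.Split(','))
            {
                string[] parts = entry.Split(';');
                if (!string.Equals(parts[0].Trim(), encoding, StringComparison.OrdinalIgnoreCase))
                    continue;

                // No q parameter means q=1, i.e. the encoding is acceptable
                if (parts.Length == 1)
                    return true;

                string q = parts[1].Trim();
                double qValue;
                if (q.StartsWith("q=", StringComparison.OrdinalIgnoreCase) &&
                    double.TryParse(q.Substring(2), System.Globalization.NumberStyles.Float,
                                    System.Globalization.CultureInfo.InvariantCulture, out qValue))
                    return qValue > 0;

                return true;   // malformed q parameter - assume acceptable
            }
            return false;
        }

    You'd call AcceptsEncoding(AcceptEncoding, "gzip") in place of the AcceptEncoding.Contains("gzip") test.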
    The proper way: Create an ActionFilterAttribute

    However, in MVC this sort of thing is typically better handled by an ActionFilter, which can be applied with an attribute. So, to be all prim and proper, I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and looks like this:

        /// <summary>
        /// Attribute that can be added to controller methods to force content
        /// to be GZip encoded if the client supports it
        /// </summary>
        public class CompressContentAttribute : ActionFilterAttribute
        {
            /// <summary>
            /// Override to compress the content that is generated by an action method.
            /// </summary>
            /// <param name="filterContext"></param>
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                GZipEncodePage();
            }

            /// <summary>
            /// Determines if GZip is supported
            /// </summary>
            /// <returns></returns>
            public static bool IsGZipSupported()
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (!string.IsNullOrEmpty(AcceptEncoding) &&
                    (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                    return true;
                return false;
            }

            /// <summary>
            /// Sets up the current page or handler to use GZip through a Response.Filter
            /// IMPORTANT: You have to call this method before any output is generated!
            /// </summary>
            public static void GZipEncodePage()
            {
                HttpResponse Response = HttpContext.Current.Response;
                if (IsGZipSupported())
                {
                    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                    if (AcceptEncoding.Contains("gzip"))
                    {
                        Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                        Response.Headers.Remove("Content-Encoding");
                        Response.AppendHeader("Content-Encoding", "gzip");
                    }
                    else
                    {
                        Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                        Response.Headers.Remove("Content-Encoding");
                        Response.AppendHeader("Content-Encoding", "deflate");
                    }
                }

                // Allow proxy servers to cache compressed and uncompressed versions separately
                Response.AppendHeader("Vary", "Accept-Encoding");
            }
        }

    It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to controller methods and lets you hook up logic before and after the methods have executed. Here I override OnActionExecuting(), which fires before the controller action executes. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

        [CompressContent]
        public class ClassifiedsController : ClassifiedsBaseController
        {
            …
        }

    or to one of the action methods:

        [CompressContent]
        public ActionResult List(string keyword=null, int category=0)
        {
            …
        }

    The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content, and that's where others are likely to expect to see options like this set. In fact, you have a bit more control with the utility function, because you can conditionally apply it in code – but that's actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

    Compression Caveats

    Http compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it – especially if your content might get transformed or redirected inside of ASP.NET. A good example is if an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but they do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all the possible ways the content is served, compressed and uncompressed. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded.
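    On the caching caveat specifically: MVC's OutputCacheAttribute exposes a VaryByContentEncoding property (added in MVC 3, if I remember right) that keeps separate cache entries per encoding. A sketch of combining it with the compression attribute – verify the property name and its value format against your MVC version before relying on it:

        [CompressContent]
        [OutputCache(Duration = 60, VaryByParam = "none", VaryByContentEncoding = "gzip;deflate")]
        public ActionResult List(string keyword = null, int category = 0)
        {
            // Output is cached for 60 seconds, with distinct cache entries
            // for gzip, deflate and uncompressed responses
            return View();
        }

    That addresses the cache cross-talk between compressed and uncompressed responses, though it does nothing for the error-page scenario described above.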
    None of these are show stoppers, but you have to be aware of the issues.

    Related Posts:
    • GZip Compression with ASP.NET Content
    • ASP.NET GZip Encoding Caveats

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  MVC

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form with a filter drop down that allows selection of categories. The list is static and rarely changes, so rather than loading the items from the database each time, I load them once and cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. It didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated – apparently through the model binding process in the selection postback.

    To clarify the scenario, here's the drop down list definition in the Razor view:

        @Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories")

    where Model.CategoryList gets set with:

        [HttpPost]
        [CompressContent]
        public ActionResult List(MessageListViewModel model)
        {
            InitializeViewModel(model);

            busEntry entryBus = new busEntry();
            var entries = entryBus.GetEntryList(model.QueryParameters);

            model.Entries = entries;
            model.DisplayMode = ApplicationDisplayModes.Standard;
            model.CategoryList = AppUtils.GetCachedCategoryList();

            return View(model);
        }

    The AppUtils.GetCachedCategoryList() method returns the cached list, or loads the list on first access. The code to load up the list is housed in a Web utility class. The method looks like this:

        /// <summary>
        /// Returns a static category list that is cached
        /// </summary>
        /// <returns></returns>
        public static SelectListItem[] GetCachedCategoryList()
        {
            if (_CategoryList != null)
                return _CategoryList;

            lock (_SyncLock)
            {
                if (_CategoryList != null)
                    return _CategoryList;

                var catBus = new busCategory();
                var categories = catBus.GetCategories().ToList();

                // Turn the list into a SelectListItem list
                var catList = categories
                    .Select(cat => new SelectListItem()
                    {
                        Text = cat.Name,
                        Value = cat.Id.ToString()
                    })
                    .ToList();

                catList.Insert(0, new SelectListItem()
                {
                    Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(),
                    Text = "All Categories except Real Estate"
                });
                catList.Insert(1, new SelectListItem()
                {
                    Value = "-1",
                    Text = "--------------------------------"
                });

                _CategoryList = catList.ToArray();
            }

            return _CategoryList;
        }

        private static SelectListItem[] _CategoryList;

    This seemed normal enough to me – I've been doing stuff like this forever, caching smallish lists in memory to avoid an extra trip to the database. The list is used in various places throughout the application: for the list display, when adding new items, when setting up notifications, etc.

    Watch that ModelBinder!

    However, it turns out that this code is causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to, changing the actual entry items in the list and setting their Selected values. If you look at the code above, I'm not setting SelectListItem.Selected anywhere – the only place this value can get set is through model binding. Sure enough, when stepping through the code I see that when an item is selected the model – model.CategoryList[x].Selected – reflects that.

    This is bad on several levels. First, it obviously affects the application behavior – nobody wants to see their drop down list values jump all over the place randomly. But it's also a problem because the array is getting updated by multiple ASP.NET threads, which would likely lead to odd crashes from time to time. Not good!

    In retrospect the model binding behavior makes perfect sense: the actual items and the Selected property are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder actually updates it on a postback, producing the rather surprising results. I totally missed this during testing, and it's another one of those little "Did you know?" moments.

    So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way the list is generated. Rather than returning the cached list, I can return a brand new list cloned from the cached items, like this:

        /// <summary>
        /// Returns a static category list that is cached
        /// </summary>
        /// <returns></returns>
        public static SelectListItem[] GetCachedCategoryList()
        {
            if (_CategoryList != null)
            {
                // Have to create new instances via projection
                // to keep ModelBinding updates from affecting
                // the cached list globally
                return _CategoryList
                    .Select(cat => new SelectListItem()
                    {
                        Value = cat.Value,
                        Text = cat.Text
                    })
                    .ToArray();
            }
            …
        }

    The key is that newly created instances of SelectListItem are returned, not references to the items on the original list, so that the model binding updates never touch the static instances. The code above uses LINQ and a projection into new SelectListItem instances to create a fresh array. And this code works correctly – no more cross-talk between users. Unfortunately it is also less efficient: it has to re-project the items and uses extra memory for the new array on every call.
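    If the double storage bothers you, a defensive variation – a sketch with illustrative names, and with the two special list entries omitted for brevity – is to cache only immutable value/text pairs, so no code path can ever hand the shared, mutable SelectListItem instances to the ModelBinder:

        using System;
        using System.Linq;
        using System.Web.Mvc;

        public static class CategoryListCache
        {
            // Cache holds plain strings only - nothing here is mutable by binding
            static Tuple<string, string>[] _categoryData;
            static readonly object _syncLock = new object();

            public static SelectListItem[] GetCategoryList()
            {
                if (_categoryData == null)
                {
                    lock (_syncLock)
                    {
                        if (_categoryData == null)
                        {
                            var catBus = new busCategory();   // business object from the code above
                            _categoryData = catBus.GetCategories()
                                .Select(cat => Tuple.Create(cat.Id.ToString(), cat.Name))
                                .ToArray();
                        }
                    }
                }

                // Fresh SelectListItem instances on every call - safe for model binding
                return _categoryData
                    .Select(t => new SelectListItem { Value = t.Item1, Text = t.Item2 })
                    .ToArray();
            }
        }

    The per-request projection cost is the same, but the cached state itself can never be corrupted by a stray reference.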
    Knowing what I know now, I probably would not have cached the list at all and just taken the hit of reading from the database. If there's even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place, the fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in MVC  ASP.NET  .NET

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >