Search Results

Search found 673 results on 27 pages for 'jesse the wind wanderer'.


  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day about how to deal with Internet connection settings for running HTTP requests. In this case it's an old FoxPro app that uses WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring its connection properties. Regardless of the platform or tools used to make HTTP connections, you can usually configure custom connection and proxy settings manually in your application. However, this is a repetitive process for each application and requires you to track system information in your application, which is undesirable. Often it's much easier to rely on the system-wide proxy settings that Windows provides via the Internet Settings dialog.

    The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog that you see when you pop up Internet Explorer's Options dialog. It controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

    Loading the Settings Dialog Programmatically

    The settings dialog is a Control Panel applet named inetcpl.cpl, and you can use any shell execution mechanism (the Run dialog, the ShellExecute API, Process.Start() in .NET, etc.) to invoke it. Changes made there are immediately reflected in any applications that use the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connections tab enabled:

        Process.Start("inetcpl.cpl", ",4");

    In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

        lcCmd = "inetcpl.cpl ,4"
        RUN &lcCmd

    Using the Default Connection/Proxy Settings

    When using WinInet you specify the HTTP connect type in the call to InternetOpen() like this (FoxPro code here):

        hInetConnection = ;
            InternetOpen(THIS.cUserAgent, 0, ;
                         THIS.chttpproxyname, THIS.chttpproxybypass, 0)

    The second parameter of 0 specifies that the default system proxy settings should be used, which picks up the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 - direct (no proxies, ignore system settings) - and 3 - explicit proxy specification. In most situations a connection mode setting of 0 should work.

    In .NET, HTTP connections are direct connections by default, so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is at the application level in the config file:

        <configuration>
          <system.net>
            <defaultProxy>
              <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
            </defaultProxy>
          </system.net>
        </configuration>

    You can do the same sort of thing in code by specifying the proxy explicitly, using System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to Web Services or using the HttpWebRequest class you can set the proxy with:

        StoreService.Proxy = WebProxy.GetDefaultProxy();
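    To put that in context, here's a minimal, self-contained sketch (not from the original article - the URL is a placeholder) of a request that honors the system-wide settings. WebRequest.DefaultWebProxy is the programmatic counterpart of the defaultProxy config section shown above, and is the non-obsolete replacement for WebProxy.GetDefaultProxy() in later .NET versions:

        // Sketch: an HttpWebRequest that honors the system-wide
        // connection/proxy settings from the Internet Settings dialog
        using System;
        using System.IO;
        using System.Net;

        public class DefaultProxyDemo
        {
            public static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

                // equivalent to <defaultProxy usesystemdefault="true" />
                request.Proxy = WebRequest.DefaultWebProxy;

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }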
    All of this is pretty easy to deal with, and in my opinion it's a far better choice than managing connection settings in your own application. Plus, if you use the default settings, it's highly likely that the connection settings are already properly configured, making further configuration rare.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Windows, HTTP, .NET, FoxPro


  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error.

    This is an IIS error, not an ASP.NET error, so it doesn't actually come from ASP.NET's routing engine but from IIS's handling of extensionless URLs. Note that it's clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL, failing, and never handing the request off to ASP.NET.

    As I mentioned, on my local machine this all worked fine, and to make sure the local and live setups matched, I re-copied my web.config, double-checked the handler mappings in IIS, and re-copied the actual application assemblies to the server. It all looked exactly matched. However, no workey on the server with IIS 7.0!!!

    Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
          </modules>
        </system.webServer>

    And lo and behold, Routing started working on the live server and IIS 7.0!
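    For context, nothing special is required on the ASP.NET side for this - a perfectly ordinary route registration like the following (a generic sketch, not this app's actual routes) produces exactly the extensionless URLs that IIS 7.0 refuses to hand off without the flag:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // yields extensionless URLs like /home/index
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }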
    This seems really obvious now of course, but the really tricky thing about it is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However, on IIS 7.0 on my live server the same missing setting did not work. On IIS 7.0 this key must be present or Routing will not work. Oddly, on IIS 7.5 it appears that you can't even turn off the behavior - setting runAllManagedModulesForAllRequests="false" had no effect at all, and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected.

    Kind of disappointing too that Windows Server 2008 (R1) can't be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let's hope that's the case for the future - it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out.

    Moral of the story - never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out because I assumed that since my web.config on the local machine was fine and working, and I copied all relevant web.config data to the server, it couldn't be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense settings on the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSs in sync whenever possible. Maybe it's time to upgrade to Windows Server 2008 R2 after all.

    More info on Extensionless URLs in IIS

    Want to find out more about exactly how extensionless URLs work on IIS 7? Check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article - a great linked reference for this topic!

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows


  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help, along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in hopes that somebody has seen this before and has ideas on what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue.

    A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically the MVC 4 Razor designer keeps popping up errors about the call signatures to various Html Helper methods being incorrect. It also complains that the ViewBag object doesn't support dynamic, requesting that I load assemblies into the project. You can see red error underlines under ViewBag and under an Html Helper I plopped in at the top to demonstrate the behavior. Basically any Html Helper I'm accessing shows the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors.

    What's odd about this is that *I think* this started only a few days ago, and nothing consequential that I can think of has happened to the project or overall installations. These errors aren't critical since the code runs, but they're pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors.

    What I've checked

    Looking at the errors, it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but the ViewBag dynamic error certainly looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there, and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file (/Views/web.config):

        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc,
                Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <pages pageBaseType="System.Web.Mvc.WebViewPage">
            <namespaces>
              <add namespace="System.Web.Mvc" />
              <add namespace="System.Web.Mvc.Ajax" />
              <add namespace="System.Web.Mvc.Html" />
              <add namespace="System.Web.Routing" />
              <add namespace="System.Linq" />
              <add namespace="System.Linq.Expressions" />
              <add namespace="ClassifiedsBusiness" />
              <add namespace="ClassifiedsWeb"/>
              <add namespace="Westwind.Utilities" />
              <add namespace="Westwind.Web" />
              <add namespace="Westwind.Web.Mvc" />
            </namespaces>
          </pages>
        </system.web.webPages.razor>

    For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, when the final compiled code runs just fine?

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC


  • IIS not starting: The process cannot access the file because it is being used by another process

    - by Rick Strahl
    Ok, apparently a few people knew about this issue, but it was new to me and cost me nearly an hour to track down today. What happened is that I'd been working all day doing some final pre-deployment testing of several tools on my local dev machine. In the process I'd been starting and stopping several IIS 7 Web sites. At some point I was done, just wanted to start my Default Web Site again, and found this little gem of an error message popping up:

        The process cannot access the file because it is being used by another process.
        (Exception from HRESULT: 0x80070020)

    A lot of headless running around ensued after this, trying to figure out why IIS wouldn't start. Oddly, some sites started right up, others didn't. I killed InetInfo, all worker processes, tried IISReset a million times and even rebooted - all to no avail. What gives?

    Skype, you evil Bastard!

    As it turns out the culprit is - drum roll please - Skype! What, you may ask, does Skype have to do with IIS and Web requests? It turns out recent versions of Skype have an option to run over ports 80 and 443 to allow operation through corporate firewalls. That's actually a nice feature that lets Skype work just about anywhere. What's not so cool is that IIS fails to start up when another application is already using the port that a Web site is mapped to. In the case of my dev site that'd be port 80, and Skype was hogging it.

    To fix this issue you can stop Skype from using ports 80 and 443, which quickly fixes the problem. Or stop Skype. Duh! To permanently fix the problem in Skype, find the option on the Options | Connection tab and uncheck the "Use port 80/443" option.

    Oddly, I hadn't run into this problem before, even though my setup hasn't changed in quite some time. It appears that it's bad startup timing that causes this problem to occur. Whatever the circumstance was, Skype somehow ended up starting before IIS. If Skype is started after IIS has started, it automatically opts for other ports and doesn't use port 80, so there's no problem. It's easy to demonstrate this behavior if you're looking for it:

    1. Stop IIS
    2. Stop Skype
    3. Start Skype and make a test call
    4. Start IIS

    And voila, your error is ready for you!
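    As an aside, you can confirm a port conflict before pointing fingers: netstat -aon from a command prompt shows which process owns a port, and from .NET a quick programmatic check might look like this (a sketch - it tells you the port is taken, not who took it):

        using System.Linq;
        using System.Net.NetworkInformation;

        // Sketch: check whether something is already listening on port 80
        bool port80InUse = IPGlobalProperties.GetIPGlobalProperties()
                                             .GetActiveTcpListeners()
                                             .Any(ep => ep.Port == 80);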
    This really shouldn't be a problem, except that it would be really nice if IIS could give a more helpful error message when it can't fire up a site because a port is blocked. "The process cannot access a file" is really not a very helpful error message in this scenario... I/O port / file - ah what the heck, it's all the same to Windows. Right!

    I've run into this situation quite a bit with other, albeit more obvious applications, like running Apache on the local machine for testing and then trying to run an IIS application. Same situation, although it's been a while - pre IIS 7 - and I think previous versions of IIS actually gave more useful error messages for port blockages, which would be helpful here. On the way to figuring this out I ran into some pretty humorous forum posts, with people ragging on why the hell you would be running IIS. Or Skype. The misinformed paranoia police out in full force, so to say :-). It'll be nice to start running IIS Express once Visual Studio 2010 SP1 gets released.

    Anyway, no surprise that Skype didn't jump out at me as the culprit right away, and I was left fumbling for a while until the Internet came to the rescue. I'm not the first to have found this, for sure - I posted a message on Twitter and dozens of people replied that they'd run into this before as well. Seems worth mentioning again though - since I'm sure to forget that this happened a year from now when I hit that same error. Maybe I'll even find this blog post to remind me...

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows


  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.

    In one of my mobile HTML Web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.

    What works

    The syntax that seems to work fairly consistently, except for iOS, is this:

        sms:phonenumber?body=message

    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):

        sms:phonenumber;body=message

    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android Emulator. All four devices managed to pop up the SMS with the data prefilled. You can use this in a link:

        <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>

    or you can set it on window.location in JavaScript:

        window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");

    to make the window pop up directly from code. Notice that the content should be URL encoded - HTML links encode automatically, but when you assign the URL directly in code the text value should be encoded explicitly.

    Body only

    I suspect in most applications you won't know who to text, so you only want to fill the text body, not the number. That works as you'd expect by just leaving out the number - here's what the URLs look like in that case:

        sms:?body=message

    For iOS, same thing except with the ;:

        sms:;body=message

    Here's an example of the code I use to set up the SMS:

        var ua = navigator.userAgent.toLowerCase();
        var url;
        if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
            url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        else
            url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        location.href = url;

    and that also works for all the devices mentioned above. It's pretty cool that URL schemes exist to access device functionality, and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you Apple!) across devices...

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in IOS, JavaScript, HTML5


  • HttpWebRequest and Ignoring SSL Certificate Errors

    - by Rick Strahl
    Man, I can't believe this. I'm still mucking around with OFX servers, and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three major brokerages which fail SSL certificate validation with bad or corrupt certificates - at least according to the .NET WebRequest class. What's somewhat odd here is that WinInet seems to find no issue with these servers - it's only .NET's HTTP client that's ultra finicky. So the question then becomes: how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest. Basically you need to hook up a custom certificate validation callback on the ServicePointManager (the older CertificatePolicy mechanism works similarly, but has been deprecated since .NET 2.0). Not exactly trivial. Here's the code to hook it up:

        public bool CreateWebRequestObject(string Url)
        {
            try
            {
                this.WebRequest = (HttpWebRequest)System.Net.WebRequest.Create(Url);

                // accept any certificate, errors and all
                if (this.IgnoreCertificateErrors)
                    ServicePointManager.ServerCertificateValidationCallback =
                        delegate { return true; };
            }
            catch
            {
                return false;
            }
            return true;
        }

    One thing to watch out for is that this is an application-global setting. There's one global ServicePointManager, and once you set this value any subsequent requests will inherit this policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it be - otherwise you may run into odd behavior in some situations, especially multi-threaded ones.
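    If a blanket "return true" feels too broad - and for financial data it probably should - a slightly more defensive variation (my sketch, not from the original code) accepts only a specific certificate error and still rejects everything else:

        using System.Net;
        using System.Net.Security;

        // Sketch: tolerate only name mismatches (a common server
        // misconfiguration) and still reject untrusted or invalid chains
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
                sslPolicyErrors == SslPolicyErrors.None ||
                sslPolicyErrors == SslPolicyErrors.RemoteCertificateNameMismatch;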

    Another way to deal with this is in your application's .config file:

        <configuration>
          <system.net>
            <settings>
              <servicePointManager
                  checkCertificateName="false"
                  checkCertificateRevocationList="false" />
            </settings>
          </system.net>
        </configuration>

    This seems to work most of the time, although I've seen some situations where it doesn't but the code implementation works, which is frustrating. The .config settings aren't as inclusive as the programmatic code, which can ignore any and all cert errors - shrug. Anyway, the code approach got me past the stopper issue.

    It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well, I guess I shouldn't be surprised - these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)...

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp, HTTP

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake 'events' on objects that are hookable. jQuery makes it real easy to bind events on DOM elements, and with a little bit of extra work (that I didn't know about) you can also set up bindings to non-DOM element 'events'. Assume for a second that you have a simple JavaScript object like this:

        var item = {
            sku: "wwhelp",
            foo: function() { alert('original foo function'); }
        };

    and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this:

        $(item).bind("foo", function () { alert('foo hook called'); });

    Binding alone won't actually cause the handler to be triggered, so when you call:

        item.foo();

    you only get the 'original' message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function:

        $(item).trigger("foo");

    Now if you do the following complete sequence:

        var item = {
            sku: "wwhelp",
            foo: function() { alert('original foo function'); }
        };
        $(item).bind("foo", function () { alert('foo hook called'); });
        $(item).trigger("foo");

    you'll see the 'hook' message first, followed by the 'original' message, fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to what you can do with DOM elements. The main difference is that the 'event' has to be explicitly triggered in order for this to happen, rather than just calling the method directly.

    .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property), which .trigger() searches for in its bound event list. Once the 'event' is found, it's called prior to execution of the original function. This is pretty useful, as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable, without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery's event methods, including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with postfix string names (i.e. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent).

    Watch out for an .unbind() Bug

    Note that there appears to be a bug with .unbind() in jQuery that doesn't reliably unbind an event and results in an "elem.removeEventListener is not a function" error. The following code demonstrates:

        var item = {
            sku: "wwhelp",
            foo: function () { alert('original foo function'); }
        };
        $(item).bind("foo.first", function () { alert('foo hook called'); });
        $(item).bind("foo.second", function () { alert('foo hook2 called'); });
        $(item).trigger("foo");

        setTimeout(function () {
            $(item).unbind("foo");
            // $(item).unbind("foo.first");
            // $(item).unbind("foo.second");
            $(item).trigger("foo");
        }, 3000);

    The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both when unbinding the plain "foo" value and when removing both of the postfixed event handlers ("foo.first" and "foo.second") explicitly. Oddly, the following, which removes only one of the two handlers, works:

        setTimeout(function () {
            //$(item).unbind("foo");
            $(item).unbind("foo.first");
            // $(item).unbind("foo.second");
            $(item).trigger("foo");
        }, 3000);

    This actually works, which is weird, as the code in unbind tries to unbind using a DOM method that doesn't exist. <shrug> A partial workaround for unbinding all 'foo' events is the following:

        setTimeout(function () {
            $.event.special.foo = {
                teardown: function () {
                    alert('teardown');
                    return true;
                }
            };
            $(item).unbind("foo");
            $(item).trigger("foo");
        }, 3000);

    which is a bit cryptic, to say the least, but it seems to work more reliably.

    I can't take credit for any of this - thanks to Dave Reed and Damien Edwards, who pointed out some of these behaviors. I didn't find any good descriptions of the process, so I thought it'd be good to write it down here. Hope some of you find this helpful.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery


  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
    Ran into a small problem today with my client-side jQuery library after switching to jQuery 1.4. The problem was with a shadow plugin that I use to provide drop shadows for absolutely positioned elements. For Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a manual element is created to provide the shadow that underlays the original element, along with a Blur filter to provide the fuzziness in the shadow. Some of the key pieces are:

        var vis = el.is(":visible");
        if (!vis)
            el.show();  // must be visible to get .position
        var pos = el.position();

        if (typeof shEl.style.filter == "string")
            sh.css("filter",
                   'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' +
                   opt.opacity.toString() + ')');

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

    This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. IOW, the opacity filter is not applied incrementally, but absolutely, which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and filter are applied. If I apply the Blur after the opacity, I get my correct behavior back with both opacity and blur:

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

        if (typeof shEl.style.filter == "string")
            sh.css("filter",
                   'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' +
                   opt.opacity.toString() + ')');

    While this works, it still causes problems in other areas where opacity is implicitly set in code, such as fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and main object linked. Both of these may set the opacity explicitly, and that is still broken, as it will effectively kill the Blur filter.

    This seems like a really strange design decision by the jQuery team, since clearly the jQuery css function does the right thing for setting filters. Internally, however, the opacity setting doesn't use .css(), instead hardcoding the filter, which given jQuery's usual flexibility and smart code seems really inappropriate. The following is from jQuery.js 1.4:

        var style = elem.style || elem, set = value !== undefined;

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name === "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                style.zoom = 1;

                // Set the alpha filter to set the opacity
                var opacity = parseInt( value, 10 ) + "" === "NaN" ?
                    "" : "alpha(opacity=" + value * 100 + ")";
                var filter = style.filter || jQuery.curCSS( elem, "filter" ) || "";
                style.filter = ralpha.test(filter) ?
                    filter.replace(ralpha, opacity) : opacity;
            }

            return style.filter && style.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "" : "";
        }

    You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out. jQuery 1.3.2 looks a little different:

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name == "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                elem.zoom = 1;

                // Set the alpha filter to set the opacity
                elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) +
                    (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")");
            }

            return elem.filter && elem.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '' : "";
        }

    Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either.

    Resources

    ww.jquery.js (shadow plug-in $.fn.shadow)

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery


  • Request Length Limits for IIS's requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /Views folder that caused blank pages on the server, the process was relatively painless. However, one issue that kicked my ass for about an hour - and not for the first time - was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

        http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn't terribly helpful - normally a 404 is a "resource not found" error, but the resource is definitely there. So how the heck do you figure out what's wrong? If you're just interested in the solution, here's the short version: IIS by default allows only for a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the requestFiltering module in IIS 7 and later, which can be configured in applicationHost.config (in %windir%\system32\inetsrv\config). To set the value, configure the requestLimits key like so:

        <configuration>
          <security>
            <requestFiltering>
              <requestLimits maxQueryString="2048" />
            </requestFiltering>
          </security>
        </configuration>

    This fixed me right up and made the requests work.
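    As an aside, the same section can usually also be set per-site in the site's own web.config, as long as the requestFiltering section isn't locked at the server level - a sketch (check your server's override settings before relying on it):

        <system.webServer>
          <security>
            <requestFiltering>
              <requestLimits maxQueryString="2048" />
            </requestFiltering>
          </security>
        </system.webServer>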
    How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I'll take you through a quick review of how I tracked this down.

    Finding the Problem

    The issue with the error returned is that IIS returns a 404 "Resource not found" error and doesn't provide much information about it. If you're lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page. The bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find the error: you HAVE TO run localhost. On my server, which has about 10 domains running, localhost doesn't point at the particular site I had problems with, so I didn't get the luxury of this nice error page.

    Using Failed Request Tracing to retrieve Error Info

    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this:

    1. Find the Failed Request Tracing Rules option in the IIS Service Manager.
    2. Select the option, then use Edit Site Tracing to enable tracing.
    3. Add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you're looking for, it might help to specify it exactly to keep the number of captured errors down.

    Then run your request and let it fail. IIS will write error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module: the RequestFilteringModule.

    Request Filtering is built into IIS 7 and later and configured in applicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests, and among other things how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you're dealing with OpenId, since those return URLs are quite extensive.

    Debugging failed requests is never fun, but IIS 7 and forward at least provides the tools that can help point us in the right direction. The error message in the FRT report isn't as nice as the IIS error page, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in applicationHost.config. This would still be a stretch if you're not intimately familiar with it, but I think with some Google searches it would be easy to track this down in a few tries...

    Hope this was useful to some of you. Useful to me to put this out as a reminder - I've run into this issue before myself and totally forgot. Next time I've got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, Security


  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
    Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS-based application. I build several tools for application configuration and installation, and depending on which version of IIS is available, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application, because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7, but didn't work in prior versions, and so on. Along the same lines I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application before the Request object is available. So it's useful to know which version of IIS you can possibly expect.

    This should be easy, right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number - you can detect installation, but not which version is installed.

    The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]

    The easiest way to determine the IIS version number is if you are already running inside of ASP.NET and inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like "Microsoft-IIS/7.5" returned to you. It's a cinch to parse this to retrieve the version number. This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most times you need to know the specific version of IIS when you are configuring or installing your application.

    The messy way: Match Windows OS Versions to IIS Versions

    Since version 5.x, IIS versions have always been tied very closely to the operating system, meaning the only way to get a specific version of IIS was through the OS - you couldn't install another version of IIS on a given OS. Microsoft has a page that describes the OS version to IIS version relationship here: http://support.microsoft.com/kb/224609

    In .NET you can then sniff the OS version and based on that return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

        /// <summary>
        /// Returns the IIS version for the given operating system.
        /// Note this routine doesn't check to see if IIS is installed,
        /// it just returns the version of IIS that should run on the OS.
        ///
        /// Returns the value from Request.ServerVariables["SERVER_SOFTWARE"]
        /// if available. Otherwise uses OS sniffing to determine the OS version
        /// and returns the matching IIS version instead.
        /// </summary>
        /// <returns>version number or -1</returns>
        public static decimal GetIisVersion()
        {
            // if running inside of IIS parse the SERVER_SOFTWARE key
            // This would be most reliable
            if (HttpContext.Current != null && HttpContext.Current.Request != null)
            {
                string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
                if (!string.IsNullOrEmpty(os))
                {
                    // Microsoft-IIS/7.5
                    int dash = os.LastIndexOf("/");
                    if (dash > 0)
                    {
                        decimal iisVer = 0M;
                        if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                            return iisVer;
                    }
                }
            }

            // combine major and minor OS version (ie. 6.1 for Windows 7)
            decimal osVer = (decimal) Environment.OSVersion.Version.Major +
                            ((decimal) Environment.OSVersion.Version.Minor / 10);

            // Windows 7 and Win2008 R2
            if (osVer == 6.1M)
                return 7.5M;
            // Windows Vista and Windows 2008
            else if (osVer == 6.0M)
                return 7.0M;
            // Windows 2003 and XP 64 bit
            else if (osVer == 5.2M)
                return 6.0M;
            // Windows XP
            else if (osVer == 5.1M)
                return 5.1M;
            // Windows 2000
            else if (osVer == 5.0M)
                return 5.0M;

            // error result
            return -1M;
        }
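    Usage is then just a branch in the configuration code - a sketch (the surrounding installer logic is hypothetical):

        decimal iisVersion = GetIisVersion();

        if (iisVersion < 0)
        {
            // couldn't determine a version - bail or assume a safe default
        }
        else if (iisVersion >= 7M)
        {
            // IIS 7+: Application Pools and 32/64 bit options can be configured
        }
        else if (iisVersion >= 6M)
        {
            // IIS 6: Application Pools exist, but configuration differs
        }
        else
        {
            // IIS 5.x: no Application Pools - plain virtual directory setup only
        }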
    Talk about a brute force approach, but it works. This code goes back only to IIS 5 - anything before that is not something you'd possibly want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS


  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
    WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do:

        ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE);

    GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this:

        WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212

    While lately excessive resource usage has been frowned upon - especially by MVC developers, who tend to opt for content distributed as files - I still think that WebResources have their place, even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages.

    Where's my Page object, Dude?

    Unfortunately, ASP.NET natively doesn't have a mechanism for retrieving WebResource URLs outside of the WebForms engine. It's a feature that's specifically baked into WebForms and relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods), and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). Even if you could, the internal constructor it does have requires a Page reference. No good…

    So, can we get access to a WebResource URL generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource URLs, so it's easy to create an instance of a Page, for example, in a static method. For our needs of retrieving resource URLs (or even actually retrieving script resources) we can use a canned, non-configured Page instance we create on our own. The following works just fine:

        public static string GetWebResourceUrl(Type type, string resource)
        {
            Page page = new Page();
            return page.ClientScript.GetWebResourceUrl(type, resource);
        }

    A slight optimization for this might be to cache the created Page instance. Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance:

        public class WebUtils
        {
            private static Page CachedPage
            {
                get
                {
                    if (_CachedPage == null)
                        _CachedPage = new Page();
                    return _CachedPage;
                }
            }
            private static Page _CachedPage;

            public static string GetWebResourceUrl(Type type, string resource)
            {
                return CachedPage.ClientScript.GetWebResourceUrl(type, resource);
            }
        }

    You can now use GetWebResourceUrl in a Razor page like this:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.JQUERY_SCRIPT_RESOURCE)">
            </script>
        </head>
        <body>
            <div class="errordisplay">
                <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.WARNING_ICON_RESOURCE)" />
                This is only a Test!
            </div>
        </body>
        </html>

    And voila - there you have WebResources served from a non-Page-based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC


  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on it). I ran into some pretty nasty compatibility issues regarding .NET 3.5, which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically, it's the install order that screwed things up for me - installing Visual Studio before explicitly installing .NET 3.5 from Windows Features, in particular. If you install Visual Studio 2010, I highly recommend you install .NET 3.5 from Windows Features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through.

    So when I installed Windows 8 and looked at the Windows Features dialog to decide what to install after the fact, I thought: .NET 3.5 - who needs it? I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after the initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it.

    Enabling .NET 3.5 in Windows 8

    If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to, because you can use the Windows Features dialog to enable .NET 3.5. And that *should* do the trick. If you do this before other apps bring along their own (possibly non-SP1) version of .NET 3.5, you are going to have no problems.

    Unfortunately for me, even after I'd done the above, when I ran the CodeRush installer I still got a dialog insisting that .NET 3.5 SP1 is required. Now, I double-checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does…

    So naturally I thought the CodeRush installer was a little whacky. After some back and forth, Alex Skorkin on Twitter pointed me in the right direction: he asked me to look in the registry for exact info on which version of .NET 3.5 is installed, here:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP

    where I found that .NET 3.5 SP1 was installed. This is the 64 bit key, and it looks all correct. However, when I looked under the 32 bit node I found:

        HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5

    Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), and SP1 is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, OK… thanks for that!

    Easy to fix, you say - just install SP1. Nope, not so easy, because the standalone installer doesn't work on Windows 8. I can't get either the .NET 3.5 installer or the SP1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also a no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly.

    How did this happen?

    I'm not sure exactly about the cause and effect, but I suspect the story goes like this:

    1. Installed Windows 8 without support for .NET 3.5
    2. Installed Visual Studio 2010, which installs .NET 3.5 (no SP) - leaving me with .NET 3.5, but without SP1
    3. Tried to install CodeRush - error: .NET 3.5 SP1 required
    4. Enabled .NET 3.5 in Windows Features

    I figured enabling the .NET 3.5 Windows feature would do the trick. But still no go. Now, I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left the 32 bit install without SP1.

    How to Fix it

    My final solution was to completely uninstall .NET 3.5 *and* to reboot:

    1. Go to Windows Features
    2. Uncheck the .NET Framework 3.5 entry
    3. Restart Windows
    4. Go to Windows Features
    5. Check .NET Framework 3.5

    and voila, I now have a proper installation of .NET 3.5. I tried this before, but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5!

    More Problems

    The above fixed me right up, but while looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog - the feature not loading properly from the installer disks or not downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows:

        dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs

    You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer files live. This also gives you a little more information if something does go wrong.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET


  • Internet Explorer and Cookie Domains

    - by Rick Strahl
    I've been bitten by some nasty issues today in regards to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain, the cookie stays valid for all sub-domains.

    I've been testing the app for quite a while and everything is working great. Finally I get around to checking the app with Internet Explorer, and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address, or a machine name. When you do, Internet Explorer simply ignores the cookie.

    In my last post I talked about some generic code I created to parse out the base domain from the current URL, so a domain cookie would be used automatically, using this code:

        private void IssueAuthTicket(UserState userState, bool rememberMe)
        {
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
                1, userState.UserId,
                DateTime.Now, DateTime.Now.AddDays(10),
                rememberMe, userState.ToString());

            string ticketString = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
            cookie.HttpOnly = true;

            if (rememberMe)
                cookie.Expires = DateTime.Now.AddDays(10);

            var domain = Request.Url.GetBaseDomain();
            if (domain != Request.Url.DnsSafeHost)
                cookie.Domain = domain;

            HttpContext.Response.Cookies.Add(cookie);
        }

    This code works fine in all browsers but Internet Explorer, both locally and on full domains. And it also works fine for Internet Explorer with actual 'real' domains. However, this code fails silently in IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie, and the login fails. Argh! The end result is that the solution above, trying to automatically parse the base domain, won't work, as local addresses end up failing.
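    For reference, GetBaseDomain() comes from that previous post; a minimal stand-in might look like this (an assumption on my part - this simple version handles only plain two-segment domains, not co.uk-style TLDs):

        public static class UriExtensions
        {
            public static string GetBaseDomain(this Uri uri)
            {
                // IP addresses and non-DNS hosts: return as-is
                if (uri.HostNameType != UriHostNameType.Dns)
                    return uri.Host;

                var tokens = uri.Host.Split('.');

                // localhost or an already-bare domain
                if (tokens.Length < 3)
                    return uri.Host;

                // keep the last two segments (ie. www.domain.com -> domain.com)
                return string.Join(".", tokens, tokens.Length - 2, 2);
            }
        }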
    Configuration Setting

    Given this screwed-up state of affairs, the best solution is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication, so that's the natural choice for storing the domain name:

        <authentication mode="Forms">
          <forms loginUrl="~/Account/Login"
                 name="gnc"
                 domain="mydomain.com"
                 slidingExpiration="true"
                 timeout="30"
                 xdt:Transform="Replace"/>
        </authentication>

    Although I'm not actually letting FormsAuth set my cookie directly, I can still access the domain name from the static FormsAuthentication.CookieDomain property by changing the domain assignment code to:

        if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain))
            cookie.Domain = FormsAuthentication.CookieDomain;

    The key is to set the domain only when actually running on a full authority, and to leave the domain key blank on the local machine to avoid the local address debacle. Note: if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens.

    Logging Out

    When specifying a domain key for a login, it's also vitally important that the same domain key is used when logging out. Forms Authentication does this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). If you use an explicit cookie to manage your logins or other persistent values, make sure that when you log out you also specify the domain. IOW, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value:

        HttpCookie cookie = new HttpCookie("gne", "");
        cookie.Expires = DateTime.Now.AddDays(-5);

        // make sure we use the same logic to release the cookie
        var domain = Request.Url.GetBaseDomain();
        if (domain != Request.Url.DnsSafeHost)
            cookie.Domain = domain;

        HttpContext.Response.Cookies.Add(cookie);

    I managed to get my code to do what I needed it to, but man, I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view: simply passing in the controller context is fairly trivial, and I demonstrated a simple ViewRenderer class that boiled the process down to a couple of lines of code. However, if no Controller Context is available the process is not quite as straightforward, and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it's an awkward solution there because it requires a bit of setup to run efficiently.

    Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext if you have a controller instance - even if MVC didn't create that instance.

    Creating a Controller Instance outside of MVC

    The trick to make this work is to create an MVC Controller instance - any Controller instance - and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it's possible to create a fully functional controller context, as Razor can get all the necessary context information from the HttpContextWrapper(). The key to make this work is the following method:

    /// <summary>
    /// Creates an instance of an MVC controller from scratch
    /// when no existing ControllerContext is present
    /// </summary>
    /// <typeparam name="T">Type of the controller to create</typeparam>
    /// <returns>Controller Context for T</returns>
    /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception>
    public static T CreateController<T>(RouteData routeData = null)
        where T : Controller, new()
    {
        // create a disconnected controller instance
        T controller = new T();

        // get context wrapper from HttpContext if available
        HttpContextBase wrapper = null;
        if (HttpContext.Current != null)
            wrapper = new HttpContextWrapper(System.Web.HttpContext.Current);
        else
            throw new InvalidOperationException(
                "Can't create Controller Context if no active HttpContext instance is available.");

        if (routeData == null)
            routeData = new RouteData();

        // add the controller routing if not existing
        if (!routeData.Values.ContainsKey("controller") &&
            !routeData.Values.ContainsKey("Controller"))
            routeData.Values.Add("controller",
                controller.GetType().Name
                          .ToLower()
                          .Replace("controller", ""));

        controller.ControllerContext = new ControllerContext(wrapper, routeData, controller);
        return controller;
    }

    This method creates an instance of a Controller class from an existing HttpContext, which means this code should work from anywhere within ASP.NET to create a controller instance that's ready to be rendered. This means you can use this from within an Application_Error handler, as I needed to, or even from within a WebAPI controller as long as it's running inside of ASP.NET (i.e. not self-hosted). Nice.

    So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. Here's what I ended up with in my application's custom error HttpModule:

    protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model)
    {
        var Response = HttpContext.Current.Response;
        Response.ContentType = "text/html";
        Response.StatusCode = errorHandler.OriginalHttpStatusCode;

        var context = ViewRenderer.CreateController<ErrorController>().ControllerContext;
        var renderer = new ViewRenderer(context);
        string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model);

        Response.Write(html);
    }

    That's pretty sweet, because it's now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic controller as a base for the controller context when none is passed:

    public ViewRenderer(ControllerContext controllerContext = null)
    {
        // Create a known controller from HttpContext if no context is passed
        if (controllerContext == null)
        {
            if (HttpContext.Current != null)
                controllerContext = CreateController<ErrorController>().ControllerContext;
            else
                throw new InvalidOperationException(
                    "ViewRenderer must run in the context of an ASP.NET " +
                    "Application and requires HttpContext.Current to be present.");
        }
        Context = controllerContext;
    }

    In this case I use the ErrorController class, which is a generic controller instance that exists in the same assembly as my ViewRenderer class, and that works just fine since 'generically' rendered views tend to not rely on anything from the controller other than the model, which is explicitly passed.

    While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which when they error often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy - at least within the context of an ASP.NET application!
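    Rendering an email body, for example, boils down to the same pattern (a hypothetical usage sketch - the view path and model type here are made up for illustration):

    public static string RenderOrderEmail(OrderConfirmation model)
    {
        // works from any ASP.NET code path - the parameterless constructor
        // creates its own controller context from HttpContext.Current
        var renderer = new ViewRenderer();
        return renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", model);
    }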
    You can check out the updated ViewRenderer class below to render your 'generic views' from anywhere within your ASP.NET applications. Hope some of you find this useful.

    Resources

    - ViewRenderer Class in Westwind.Web.Mvc Library (GitHub)
    - Original ViewRenderer Article
    - Razor Hosting Library (GitHub)
    - Original Razor Hosting Article

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in ASP.NET  MVC

    Read the article

  • SnagIt Live Writer Plug-in Updated

    - by Rick Strahl
    Ah, I love SnagIt from TechSmith and I use the heck out of it almost every day. So no surprise that I decided some time ago to integrate SnagIt into a few applications that require screen shots extensively.

    It's been a while since I've posted an update to my small SnagIt Windows Live Writer plug-in. A few nagging issues have crept up with changes in the way recent versions of SnagIt handle captures, and they have been addressed in this update of the plug-in.

    Personally I love SnagIt and use it extensively, mostly for blogging, but also for writing documentation and articles etc. While there are many other (and also free) tools out there to do basic screen captures, SnagIt continues to be the most convenient tool for me with its nice built-in capture and effects editor that makes creating professional looking captures childishly simple. And maybe even more importantly: SnagIt has a COM interface that can be automated, which makes it super easy to embed into other applications. I've built plug-ins for SnagIt as well as for one of my company's own tools, Html Help Builder.
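    To give a rough idea of what that automation looks like, here's a late-bound sketch. Fair warning: the ProgID and member names below are from memory and are assumptions - verify them against the SnagIt COM documentation before relying on them.

    // late binding via dynamic so no interop assembly is required
    Type snagItType = Type.GetTypeFromProgID("SnagIt.ImageCapture");
    dynamic snagIt = Activator.CreateInstance(snagItType);

    snagIt.Input = 1;      // assumed enum value: capture from the desktop
    snagIt.Output = 2;     // assumed enum value: output to a disk file
    snagIt.Capture();      // kicks off the interactive capture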
    If you use the Windows Live Writer offline WebLog Editor to write blog posts and have a copy of SnagIt, it's probably worth your while to check this out if you haven't already. In case you haven't: this plug-in integrates SnagIt with Live Writer so you can easily capture and edit content and embed it into a post. Captures are shown in the SnagIt Preview editor where you can edit the image and apply image markup or effects before selecting Finish (or Cancel). The final image can then be pasted directly into your Live Writer post.

    When installed, the SnagIt plug-in shows up in the Plug-in list or in the Plug-ins toolbar shortcut. Once you select the plug-in you get the capture window that allows you to customize the capture process, which includes most of the useful SnagIt capture options. Once you're done capturing, the image shows up in the SnagIt Image Editor and you can crop, mark up and apply effects. When done you click the Finish button and the image is embedded right into your blog post. Easy - how do you think the images in this blog entry got in here? The beauty of SnagIt is that it's all easily integrated - capturing, editing and embedding take only a few seconds, especially if you save image effect presets in SnagIt.

    What's updated

    The main issue addressed in this update has to do with how the plug-in updates the Live Writer window. When a capture starts, Live Writer gets minimized to get out of the way and let you pick your capture source. When the capture is complete and the image has been embedded, Live Writer is activated once again. Recent versions of SnagIt however changed SnagIt's window positioning, so that Live Writer ended up popping back up behind the SnagIt window, which was pretty annoying. This update pushes Live Writer back to the top of the window stack using some delaying tactics in the code. There have also been a few small changes to the way the code interacts with the COM object, which is more reliable if a capture fails, or if SnagIt blows up or is locked because it's already in a capture outside of the automation interface.

    Source Code

    SnagIt automation is something I actually use a lot. As mentioned, I've integrated this automation into Live Writer as well as my documentation tool Html Help Builder, which I use just about daily. The SnagIt integration has a similar interface in that application and provides similar functionality. Because it's quite useful to embed SnagIt into other apps, there's source code that you can download and embed into your own applications. The code includes both the dialog class that is automated from Live Writer, as well as the basic capture component that captures images to a disk file.

    Resources

    - Download the SnagIt Capture Plug-in Installer - an MSI installer that you can run that will install the plug-in into Live Writer's PlugIns directory.
    - Source Code to the SnagIt Capture Plug-in - contains the plug-in assembly, as well as the source code to the plug-in and the setup project.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in Live Writer  WebLog

    Read the article

  • $.fadeTo/fadeOut() operations on Table Rows in IE fail

    - by Rick Strahl
    Here's a small problem that one of my customers ran into a few days ago: He was playing around with some of the sample code I've put out for one of my simple jQuery demos, which deals with providing a simple pulse behavior plug-in:

    $.fn.pulse = function(time) {
        if (!time)
            time = 2000;

        // *** this == jQuery object that contains selections
        $(this).fadeTo(time, 0.20, function() {
            $(this).fadeTo(time, 1);
        });
        return this;
    }

    It's a very simplistic plug-in and it works fine for simple pulse animations. However, he ran into a problem where it didn't work when working with tables - specifically when pulsing a table row in Internet Explorer. It works fine in FireFox and Chrome, but in IE not so much. It also works just fine in IE as long as you don't try it on tables or table rows specifically. Applying it against something like this (an ASP.NET GridView):

    var sel = $("#gdEntries>tbody>tr")
        .not(":first-child")  // no header
        .not(":last-child")   // no footer
        .filter(":even")
        .addClass("gridalternate");

    // *** Demonstrate simple plugin
    sel.pulse(2000);

    fails in IE. No pulsing happens in any version of IE. After some additional experimentation with single rows and various ways of selecting each - and still failing - I've come to the conclusion that the various fade operations in jQuery simply won't work correctly on these elements in IE (any version). So even something as 'elemental' as this:

    var el = $("#gdEntries>tbody>tr").get(0);
    $(el).fadeOut(2000);

    is not working correctly. The item will stick around for 2 seconds and then magically disappear. Likewise:

    sel.hide().fadeIn(5000);

    also doesn't fade in, although the items become immediately visible in IE. Go figure that behavior out.

    Thanks to a tweet from red_square and a link he provided, here is a grid that explains what works and doesn't in IE (and most last-gen browsers) regarding opacity: http://www.quirksmode.org/js/opacity.html

    It appears from this link that table and row elements can't be made opaque, but td elements can. This means that for the row selections I can force each of the td elements to be selected and then pulse all of those. Once you have the rows it's easy to explicitly select all the columns in those rows with .find("td"). Aha - the following actually works:

    var sel = $("#gdEntries>tbody>tr")
        .not(":first-child")  // no header
        .not(":last-child")   // no footer
        .filter(":even")
        .addClass("gridalternate");

    // *** Demonstrate simple plugin
    sel.find("td").pulse(2000);

    A little unintuitive that, but it works.

    Stay away from <table> and <tr> Fades

    The moral of the story is: stay away from TR, TH and TABLE fades and opacity. If you have to fade tables, use the columns instead and, if necessary, use .find("td") on your row(s) selector to grab all the columns. I've been surprised by this uhm… revelation, since I use fadeOut in almost every one of my applications for deletion of items, and row deletions from grids are not uncommon, especially in older apps. But it turns out that fadeOut actually works in terms of behavior: it removes the item when the timeout's done, and because the fade is relatively short-lived and I don't extensively test IE code any more, I just never noticed that the fade wasn't happening.

    Note - this behavior, or rather lack thereof, appears to be specific to table, tr and th elements. I see no problems with other elements like <div> and <li> items. Chalk this one up to another of IE's shortcomings.

    Incidentally I'm not the only one who has failed to address this in my simplistic plug-in: the jquery-ui pulsate effect also fails on table rows in the same way:
sel.effect("pulsate", { times: 3 }, 2000); and it also works with the same workaround. If you’re already using jquery-ui definitely use this version of the plugin which provides a few more options… Bottom line: be careful with table based fade operations and remember that if you do need to fade – fade on columns.© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you're running Windows, have a blog and aren't using Live Writer you're probably doing it wrong…

    One of Live Writer's nice features is that it can download your blog's CSS for preview and edit displays. It lets you edit your content inside of the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you'll see on your blog while you're editing your post. Unfortunately Live Writer renders the HTML content in the Web Browser control's default IE 7 rendering mode. Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for 'compatibility' with old applications. If you are importing your blog's CSS that may suck if you're using rich HTML 5 and CSS 3 formatting.

    Hack the Registry to get Live Writer to render using IE 9 or 10

    In order to get Live Writer (or any other application that uses the Web Browser control for that matter) to render with a more recent engine, you can apply a registry hack that overrides the Web Browser control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here's how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry. The keys to set are as follows:

    32 bit application on a 64 bit machine:
    Key: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Value name: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

    On a 32 bit only machine:
    Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Value name: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

    Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer.

    This is a minor tweak, but it's nice to actually see my blog posts now with the proper CSS formatting intact: rounded borders and shadows on the code blocks, as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like at a specific resolution - much better than in the old plain view, which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It's minor stuff, but it makes the editing experience better yet and closer to the final thing, so there are fewer republish operations than I previously had. Sweet!

    Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control. If you are using the Web Browser control in your own applications, it's a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content, by automatically setting this flag in the registry as part of the application's startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS3 features and is a decent baseline that works for most Windows 7 and 8 machines. If running pre-IE9, the browser will fall back to IE7 rendering and look bad, but at least more recent browsers will see an improved experience.
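    A startup routine that writes the key might look like this (a minimal sketch; it assumes writing to HKEY_CURRENT_USER, which the Web Browser control also honors and which avoids requiring admin rights):

    using Microsoft.Win32;

    static void SetBrowserEmulation(int ieVersion = 9000)
    {
        // name of the running executable, e.g. WindowsLiveWriter.exe
        string exeName = System.IO.Path.GetFileName(
            System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName);

        using (var key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
        {
            // 9000 = IE9, 10000 = IE10 - see the table above
            key.SetValue(exeName, ieVersion, RegistryValueKind.DWord);
        }
    }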
    I'm surprised that there aren't more vendors and third party apps using this feature. There were only very few entries in the registry key group on my machine - any other apps using the Web Browser control are stuck with IE7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway…

    Resources

    - .reg files to register Live Writer Browser Emulation (set for IE9)
    - Specifying Internet Explorer Version for Applications
    - SnagIt LiveWriter Plug-in
    - Download Windows Live Writer
    - Download Windows Live Writer with Chocolatey

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Live Writer  Windows

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that serialization is handled. These are probably not super common, but they may throw you for a loop. Here's what I found.

    Simple Parameters from XML or JSON Content

    Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

    public string ReturnAlbumInfo(Album album)
    {
        return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
    }

    However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from the XML or JSON body by default, and you may end up with failures. Take the following two very simple methods:

    public string ReturnString(string message)
    {
        return message;
    }

    public HttpResponseMessage ReturnDateTime(DateTime time)
    {
        return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
    }

    The first one accepts a string. Calling it with a JSON string from the client like this:

    var client = new HttpClient();
    var result = client.PostAsJsonAsync<string>(
        "http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
        "Hello World").Result;

    results in a trace like this:

    POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
    Content-Type: application/json; charset=utf-8
    Host: rasxps
    Content-Length: 13
    Expect: 100-continue
    Connection: Keep-Alive

    "Hello World"

    and produces… wait for it: null.

    Sending a date in the same fashion:

    var client = new HttpClient();
    var result = client.PostAsJsonAsync<DateTime>(
        "http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
        new DateTime(2012, 1, 1)).Result;

    results in this trace:

    POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
    Content-Type: application/json; charset=utf-8
    Host: rasxps
    Content-Length: 30
    Expect: 100-continue
    Connection: Keep-Alive

    "\/Date(1325412000000-1000)\/"

    (yes, still the ugly MS AJAX date - yuk! This will supposedly change by RTM, with Json.NET used for client serialization) and produces an error response:

    The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

    Basically any simple parameters are not parsed properly, resulting in null being sent to the method. For the string, the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:

    public string ReturnString([FromBody] string message)

    and

    public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

    which explicitly instructs Web API to read the value from the body.

    UrlEncoded Form Variable Parsing

    Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the QueryString and Route Values, but it doesn't explicitly map parameters from POST values either. Taking our same ReturnString function from earlier and posting a message POST variable like this:

    var formVars = new Dictionary<string, string>();
    formVars.Add("message", "Some Value");
    var content = new FormUrlEncodedContent(formVars);

    var client = new HttpClient();
    var result = client.PostAsync(
        "http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
        content).Result;

    produces this trace:

    POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
    Content-Type: application/x-www-form-urlencoded
    Host: rasxps
    Content-Length: 18
    Expect: 100-continue

    message=Some+Value

    When calling ReturnString:

    public string ReturnString(string message)
    {
        return message;
    }

    it unfortunately does not map the message value to the message parameter. This sort of mapping is simply not available in Web API.

    Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

    public string ReturnMessageModel(MessageModel model)
    {
        return model.Message;
    }

    public class MessageModel
    {
        public string Message { get; set; }
    }

    Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be mapped to other properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) from Web API's Request object as far as I can discern. Well, only if you consider this easy:

    public string ReturnString()
    {
        var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
        return formData.Get("message");
    }

    Oddly, FormDataCollection does not support an indexer, so you have to use the .Get() method.
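    A small extension method can at least hide that ceremony (a hypothetical helper, not part of Web API - note that the request body can only be read once, so cache the collection if you need more than one value):

    public static class HttpRequestMessageExtensions
    {
        // reads a single urlencoded POST variable from the request body
        public static string GetPostVariable(this HttpRequestMessage request, string key)
        {
            var formData = request.Content.ReadAsAsync<FormDataCollection>().Result;
            return formData.Get(key);
        }
    }

    // usage inside a controller method:
    // string message = Request.GetPostVariable("message");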
    If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

    public string ReturnString()
    {
        return HttpContext.Current.Request.Form["message"];
    }

    which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
    For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing, with major version releases coming out - what, now every other month? Seriously, that's ridiculous, especially given the limited number of new features these releases bring and the upgrade pain for plug-ins that a major version release causes.

    Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace, with Web pages loading slowly and most content actually slowly 'painting' the page - a typical sign of content downloading slowly. However, these are pages that should be mostly cached on my system, and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick.

    Why so slow, Boss?

    While I was complaining, lots of people recommended ditching FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better, but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking more closely at what was happening.

    The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size, and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. Seeing the CDN urls pop up repeatedly raised a flag with me: that stuff should be caching, and it looked like each and every hit was reloading these scripts and various images over and over again. I fired up FireBug and sure enough, on a repeated hit to my blog, both jQuery and the main CSS file for the site were being loaded fully, taking a while each. Since this page had been loaded before, these items should be cached and show 304 requests instead of full HTTP requests returning 200 result codes. In short, it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow.

    Once I realized what the problem was I took a look in the about:config settings and lo and behold, a bunch of the cache settings were set to not cache. In my case ALL the main cache flags were set to false for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to the defaults reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache.

    I try not to muck with the about:config settings much (other than turning off the IPV6 option), but when there are problems, access to these features can be really nice. However, I treat this as a last resort, so it took me quite some time before I started looking through ALL the settings - this takes a while when you don't know exactly what you're looking for. If Web load performance is slow, it's a good idea to check the cache settings.
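    For reference, the preferences to check in about:config are the cache .enable flags (assuming the preference names haven't changed in newer FireFox builds):

    browser.cache.disk.enable = true
    browser.cache.memory.enable = true
    browser.cache.offline.enable = true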
    I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config, and in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in FireFox

    Read the article

  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you're building WCF REST Services you may find that WCF's OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn't provide access to everything, and sometimes not in a convenient manner. For example, accessing query string parameters explicitly is pretty painful:

    [OperationContract]
    [WebGet]
    public string HelloWorld()
    {
        var properties = OperationContext.Current.IncomingMessageProperties;
        var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
        string queryString = property.QueryString;
        var name = StringUtils.GetUrlEncodedKey(queryString, "Name");

        return "Hello World " + name;
    }

    And that doesn't even account for the logic in GetUrlEncodedKey to retrieve the query string value. It's a heck of a lot easier to just do this:

    [OperationContract]
    [WebGet]
    public string HelloWorld()
    {
        var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
        return "Hello World " + name;
    }

    OK, if you follow the REST guidelines for WCF REST you shouldn't have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway, and anything that makes things a little easier is welcome.

    To enable the second scenario there are a couple of steps that you have to take on your service implementation and in the configuration file.

    Add aspNetCompatibilityEnabled in web.config

    First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request:

    <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                                   multipleSiteBindingsEnabled="true" />
    </system.serviceModel>

    Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute

    Next you have to mark up the service implementation - not the contract, if you're using a separate interface!!! - with the AspNetCompatibilityRequirements attribute:

    [ServiceContract(Namespace = "RateTestService")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class RestRateTestProxyService

    Typically you'll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run whether or not the web.config attribute is set; Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests.

    Once Allowed or Required has been set on the implementation class you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently:

    public HttpContext Context
    {
        get { return HttpContext.Current; }
    }

    public HttpRequest Request
    {
        get { return HttpContext.Current.Request; }
    }

    While you can also access the Response object, write raw data to it and manipulate headers, THAT is probably not such a good idea, as both your code and WCF would end up writing into the output stream. It might be useful in some situations where you need to take over output generation completely and return something completely custom, but remember that WCF REST actually supports that as well with Stream responses, which essentially allow you to return any kind of data to the client - so using Response should really never be necessary.
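    With those two properties in place, the original example shrinks down to a plain property access (a sketch of the resulting service method):

    [OperationContract]
    [WebGet]
    public string HelloWorld()
    {
        // Request here is the convenience property wrapping HttpContext.Current.Request
        var name = Request.QueryString["Name"] ?? string.Empty;
        return "Hello World " + name;
    }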
    Should you or shouldn't you?

    WCF purists will tell you never to muck with the platform-specific features or the underlying protocol, and if you can avoid it you definitely should. Query string management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides - if possible - as it makes the code more portable. For example, if you do enable ASP.NET compatibility you won't be able to self-host a WCF REST service.

    At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it's not unreasonable to access the HttpContext directly, especially if it's only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services.

    It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I'm looking forward to this stuff, as WCF REST as it exists today is still a royal pain (in fact I'm struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve - grrrr…).

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET  AJAX  WCF

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts used for logon. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live

    Windows Live logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I get all my Live logins to work from the Live account, so Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

    Who am I?

    The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. For example, Windows reports folder security under my Windows Live account name. Now if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account, and have to log in to Windows with my Live account, what do you suppose this returns?

    Console.WriteLine(Environment.UserName);

    It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account.
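    The same shows up if you look at the full account name via WindowsIdentity (a quick check you can drop into any console app; the output shown in the comment is what I'd expect, not verified):

    using System.Security.Principal;

    // prints the underlying Windows account, e.g. MYMACHINE\rstrahl,
    // not the Windows Live account used to log in
    Console.WriteLine(WindowsIdentity.GetCurrent().Name);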
    If I look in Task Manager (or Process Explorer for me), I see everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are!

    The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username. That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked.

    So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS, if I look at a Trace page (or in my case my app's Status page), I see that the logged-on account is - my Windows user account. What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online, I get the local authentication dialog but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

    So in summary: when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password you have to use your Live IDs, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Windows

    Read the article

  • IE9 not rendering box-shadow Elements inside of Table Cells

    - by Rick Strahl
    Ran into an annoying problem today with IE 9. I've slowly been updating some older sites with CSS 3 tags, and for the most part IE9 does a reasonably decent job of working with the new CSS 3 features. Not all of them by a long shot, but at least some of the more useful ones like border-radius and box-shadow are supported. Until today I was happy to see that IE supported box-shadow just fine, but then I ran into a problem with some old markup that uses tables for its main layout sections: inside of a table cell IE fails to render a box-shadow. The same content shows the shadows in Chrome but not in IE 9.

    The download and purchase images are rendered with:

    <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>

    where the .boxshadow and .roundbox styles look like this:

    .boxshadow {
        -moz-box-shadow: 3px 3px 5px #535353;
        -webkit-box-shadow: 3px 3px 5px #535353;
        box-shadow: 3px 3px 5px #535353;
    }
    .roundbox {
        -moz-border-radius: 6px 6px 6px 6px;
        -webkit-border-radius: 6px;
        border-radius: 6px 6px 6px 6px;
    }

    And the Problem is… collapsed Table Borders

    Now normally these two styles work just fine in IE 9 when applied to elements. But the box-shadow doesn't work inside of this markup - because the parent container is a table cell:

    <td class="sidebar" style="border-collapse: collapse">
      …
      <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>
      …
    </td>

    This HTML causes the image to not show a shadow. In actuality I'm not styling inline, but as part of my browser reset I have the following in my master .css file:

    table {
        border-collapse: collapse;
        border-spacing: 0;
    }

    which has the same effect as the inline style. border-collapse by default inherits from the parent, so the TD inherits from table and tr - TD tags are effectively collapsed. You can check out a test document that demonstrates this behavior in this CodePaste.net snippet, or run it directly.

    How to work around this Issue

    To get IE9 to render the shadows inside of the TD tag correctly, I can just change the style explicitly to NOT use border-collapse:

    <td class="sidebar" style="border-collapse: separate; border-width: 0;">

    Or better yet (thanks to David's comment below), you can add border-collapse: separate to the .boxshadow style like this:

    .boxshadow {
        -moz-box-shadow: 3px 3px 5px #535353;
        -webkit-box-shadow: 3px 3px 5px #535353;
        box-shadow: 3px 3px 5px #535353;
        border-collapse: separate;
    }

    With either of these approaches IE renders the shadows correctly.

    Do you really need border-collapse?

    Should you bother with border-collapse? I think so! Collapsed borders render flat as a single fat line if a border-width and border-color are assigned, while separated borders render a thin line with a bunch of weird white space around it or, worse, render an old-skool 3D raised border, which is terribly ugly as well. So as a matter of course in any app my browser reset includes the above code to make sure all tables with borders render the same flat borders.

    As you probably know, IE has all sorts of rendering issues in tables and on backgrounds (opacity backgrounds or image backgrounds), most of which is caused by the way IE internally uses ActiveX filters to apply these effects. Apparently collapsed borders are yet one more item that causes problems with rendering. There you have it. Another crappy failure in IE we have to check for now - just one more reason to hate Internet Explorer.
    Luckily this one has a reasonably easy workaround. I hope this helps somebody out and saves them the hour I spent trying to figure out what caused this problem in the first place.

    Resources

    - Sample HTML document that demonstrates the behavior
    - Run the Sample

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in HTML  Internet Explorer

    Read the article

  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
    As the publisher of a Help creation tool called Html Help Builder, I've seen a lot of problems with help files that won't properly display actual topic content and display an error message for topics instead. Here's the scenario: You go ahead and happily build your fancy, schmanzy help file for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file and copies the help file contained in the zip file to disk. She then opens the help file and finds an unfortunate result: the help file comes up with all topics in the tree on the left, but a 'Navigation to the WebPage was cancelled' or 'Operation Aborted' error in the Help Viewer's content window whenever she tries to open a topic.

    The CHM file obviously opened, since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not - it's merely a Windows security 'feature' that tries to be overly helpful in protecting you. The reason this happens is that files downloaded off the Internet - including ZIP files and CHM files contained in those zip files - are marked as coming from the Internet and so can potentially be malicious. They therefore do not get browsing rights on the local machine - they can't access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this:

    mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm

    which points at a special Microsoft URL Moniker that in turn points at the CHM file and a relative path within that HTML Help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning most likely). Although the URL looks weird, this still equates to a call into the local computer zone, the same as if you had navigated to a local file in IE, which by default is not allowed. Unfortunately, unlike Internet Explorer where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as described above.

    How to Fix This - Unblock the Help File

    There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this:

    - Open Windows Explorer
    - Find your CHM file
    - Right click and select Properties
    - Click the Unblock button on the General tab

    Clicking the Unblock button basically tells Windows that you approve this help file and allows topics to be viewed.
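    If you need to do this programmatically - say for a bunch of files - the Unblock button boils down to removing the Zone.Identifier alternate data stream from the file. A minimal C# sketch (this relies on the Win32 DeleteFile API accepting stream syntax; recent versions of PowerShell also ship an Unblock-File cmdlet that does the same thing):

    using System.Runtime.InteropServices;

    public static class FileUnblocker
    {
        [DllImport("kernel32", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern bool DeleteFile(string name);

        // removes the 'downloaded from the Internet' marker from a file,
        // i.e. the programmatic equivalent of the Unblock button
        public static bool Unblock(string fileName)
        {
            return DeleteFile(fileName + ":Zone.Identifier");
        }
    }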
    Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files don't contain script, or only load script that runs pure JavaScript to access web resources, this works fine without issues.

    How to avoid this Problem

    As an application developer there's a simple solution around this problem: always install your help files with an installer. The above security warnings pop up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, the installation and all files associated with it - including the help file - are trusted. A fully installed help file of an application works just fine because it is trusted by Windows.

    Summary

    It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM HTML engine's topic viewer. Because help files are viewing local content and script is allowed to execute in CHM files, there's potential for malicious code hiding in CHM files, and the above precautions are supposed to avoid any issues.

    © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • XmlWriter and lower ASCII characters

    - by Rick Strahl
    Ran into an interesting problem today on my CodePaste.net site: the main RSS and ATOM feeds on the site were broken because one code snippet on the site contained a lower ASCII character (CHR(3)). I don't think this was done on purpose, but it was enough to make the feeds fail. After quite a bit of debugging - and after throwing a custom error handler into my actual feed generation code that just spit out the raw error instead of running it through ASP.NET MVC and my own error pipeline - I found the actual error. The lovely base exception and error trace I got looked like this:

    Error: '', hexadecimal value 0x03, is an invalid character.
    at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize)
    at System.Xml.XmlUtf8RawTextWriter.WriteElementTextBlock(Char* pSrc, Char* pSrcEnd)
    at System.Xml.XmlUtf8RawTextWriter.WriteString(String text)
    at System.Xml.XmlWellFormedWriter.WriteString(String text)
    at System.Xml.XmlWriter.WriteElementString(String localName, String ns, String value)
    at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItemContents(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
    at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItem(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
    at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
    at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteFeed(XmlWriter writer)
    at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteTo(XmlWriter writer)
    at CodePasteMvc.Controllers.ApiControllerBase.GetFeed(Object instance) in C:\Projects2010\CodePaste\CodePasteMvc\Controllers\ApiControllerBase.cs:line 131

    XML doesn't like extended ASCII Characters

    It turns out the issue is that XML in general does not deal well with low ASCII characters. According to the XML spec, control characters below 0x20 - except for tab, carriage return and line feed - are invalid. If you generate an XML document in .NET with an embedded &#x3; entity (as mine did, to create the error above), you tend to get an XML document error when displaying it in a viewer. For example, Chrome, which displays RSS feeds as raw XML by default, shows a parse error for the feed with the invalid character embedded; other browsers show similar error messages. The nice thing about Chrome is that you can actually view source and jump down to the line that causes the error, which allowed me to track down the actual message that failed.

    If you create an XML document that contains a 0x03 character, the XmlWriter fails outright with the error:

    '', hexadecimal value 0x03, is an invalid character.

    The good news is that this behavior is overridable, so XML output can at least be created by using the XmlWriterSettings object when configuring the XmlWriter instance. In my RSS configuration code this looks something like this:

    MemoryStream ms = new MemoryStream();
    var settings = new XmlWriterSettings()
    {
        CheckCharacters = false
    };
    XmlWriter writer = XmlWriter.Create(ms, settings);

    and voila, the feed now generates. Now generally this is probably NOT a good idea, because as mentioned above these characters are illegal, and if you view a raw XML document you'll get validation errors. Luckily though, most RSS feed readers don't care and happily accept and display the feed correctly, which is good because it got me over an embarrassing hump until I figured out a better solution.

    How to handle extended Characters?

    I was glad to get the feed fixed for the time being, but now I was still stuck with an interesting dilemma. CodePaste.net accepts user input for code snippets, and those code snippets can contain just about anything. This means that ASP.NET's standard request filtering cannot be applied to this content. The code content displayed is encoded before display, so for the HTML end the CHR(3) input is not really an issue. While invisible characters are hardly useful in user input, it's not uncommon for odd characters to show up in code snippets. You know the old fat-fingering that happens when you're in the middle of a coding session - those invisible characters do sometimes end up in code editors and then get pasted into the HTML textbox as a CodePaste.net snippet.

    The question is how to filter this text. Looking back at the XML charset spec, all characters below 0x20 (space) except for 0x09 (tab), 0x0A (LF) and 0x0D (CR) are illegal. So applying the following filter with a RegEx removes the invalid characters (careful not to separate the ranges with commas - inside a character class they'd be treated as literal commas and stripped from the code as well):

    string code = Regex.Replace(item.Code, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");

    Applying this RegEx to the code snippet (and title) eliminates the problems and the feed renders cleanly.
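    On .NET 4.0 and later you can also skip the RegEx: XmlConvert.IsXmlChar() encodes exactly these character rules, so a LINQ filter works too (a sketch - note that characters outside the basic multilingual plane would additionally need XmlConvert.IsXmlSurrogatePair handling):

    using System.Linq;
    using System.Xml;

    // keeps only characters that are legal in an XML document
    string clean = new string(item.Code.Where(XmlConvert.IsXmlChar).ToArray());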
    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in .NET  XML

    Read the article

  • Import a bunch of certificates into the correct certificate store using a script

    - by Jesse Weigert
    I have a collection of certificates in a p7b file, and I would like to automatically import each certificate into the correct store depending on the certificate template. What is the best way to do this with a script? I tried using certutil -addstore root Certificate.p7b, and that will correctly place all of the root CAs into the root store, but it returns an error if it encounters any other type of certificate. I'm willing to use batch scripts, vbscript or powershell to accomplish this task. Thanks!

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >