Search Results

Search found 16230 results on 650 pages for 'three js'.


  • Page markup maintainability

    - by Tony
    Hi, the js folder is under the site root. If you put this JS reference in common\SomeControl.ascx, it works fine when SomeControl is placed on ~/SomePage.aspx, because SomePage is under the website root. How do I write the JS reference in SomeControl so that the control can be placed at any path on the website without losing the JS reference? Thanks

    Read the article

  • load qUnit asynchronously

    - by Cedric Dugas
    I am trying to load qUnit asynchronously via JS, but the addEvent function in QUnit.js never fires and it's just not working:

        var appendQUnit = document.createElement('script');
        appendQUnit.src = 'js/utility/qunit/qunit.js';
        appendQUnit.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(appendQUnit);
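
    One common gotcha with dynamically injected scripts is that they load asynchronously, so any code that touches QUnit immediately after appendChild runs before qunit.js has been fetched and executed. A minimal sketch of waiting for the script's load event before doing anything with QUnit (the startTests callback is a hypothetical placeholder, not from the original question):

        var appendQUnit = document.createElement('script');
        appendQUnit.src = 'js/utility/qunit/qunit.js';
        appendQUnit.type = 'text/javascript';

        // Defer all QUnit usage until the script has actually executed.
        appendQUnit.onload = function () {
            startTests(); // QUnit is defined from this point on
        };

        document.getElementsByTagName('head')[0].appendChild(appendQUnit);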

    Read the article

  • Will spreading your server's load not just consume more resources

    - by Saif Bechan
    I am running a heavy real-time updating website. The amount of resources needed per user is quite high; I'll give you an example.

    Setup

    Every visit: the application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL.

    Every second (anything longer than a second is just too long): the website needs to be updated in real time, so every second there is an AJAX call that updates the page. Resources: jQuery, Apache, PHP, MySQL.

    Average spending for a single user (spending one minute and visiting 3 pages):
    Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html)
    PHP: +/- 63 requests/responses
    MySQL: +/- 63 requests/responses
    jQuery: +/- 60 requests/responses

    Optimization

    I want to optimize this process, but I think that maybe it would be just the same in the end. Before implementing and testing (which will take weeks) I wanted to get some second opinions from you guys.

    Every visit: I want to start off by putting nginx in front as a proxy to deliver the static content. Resources: dynamic: Apache, PHP, MySQL; static: nginx. This will take a lot of load off Apache.

    Every second: for the script that runs every second I want to set up Node.js (server-side JavaScript) with nginx in front. I want to set it up so that jQuery makes a request once a minute, and Node.js streams the data to the client every second. Resources: jQuery, nginx, Node.js, MySQL.

    Average spending for a single user (spending one minute and visiting 3 pages):
    nginx: 4 requests/responses serving mostly static content (img, css, js)
    Apache: 3 requests, only the pages
    PHP: 3 requests, only the pages
    Node.js: 1 request / 60 responses
    jQuery: 1 request / 60 responses
    MySQL: 63 requests/responses

    As you can see, in the optimized setup the load from Apache and PHP is lifted and placed on nginx and Node.js, which are known for their light footprint and good performance. But I have my doubts, because there are still two extra programs loaded in memory consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up I would like to know whether it will be worth the while.
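
    A minimal sketch of the once-a-minute / push-every-second endpoint described above, assuming a bare Node.js http server (the port, payload, and 60-second window are illustrative and not from the original post):

        var http = require('http');

        http.createServer(function (req, res) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });

            // jQuery opens one request per minute; the server pushes one
            // chunk per second for 60 seconds, then ends the response so
            // the client can reconnect for the next window.
            var ticks = 0;
            var timer = setInterval(function () {
                res.write(JSON.stringify({ time: Date.now() }) + '\n');
                if (++ticks === 60) {
                    clearInterval(timer);
                    res.end();
                }
            }, 1000);

            // Stop pushing if the client disconnects early.
            req.on('close', function () { clearInterval(timer); });
        }).listen(8000);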

    Read the article

  • How to ensure javascript in the browser is always enabled while traversing my plone site

    - by user956424
    I wish to prevent the Plone site from loading if JS is disabled in the browser. Where exactly do I change the code? Which template/skin do I choose? I want to ensure that JS is always enabled while browsing any part of the Plone site. While browsing, if JS is disabled, I can redirect to another page with a tag telling the user to enable JS in the browser, and give a link back to the site once it is enabled. I am using Plone 4.1

    Read the article

  • Read a file from web-app

    - by Steve Brewer
    Inside a grails application, I need to upload a file under web-app/js, add a prefix, and put it in S3. I'm having trouble figuring out how to read the js file in a way that will work in development (/web-app/js) and production (/js). I'm doing this from inside a domain object.

    Read the article

  • In Rails, how do you functional test a Javascript response format?

    - by Teflon Ted
    If your controller action looks like this:

        respond_to do |format|
          format.html { raise 'Unsupported' }
          format.js # index.js.erb
        end

    and your functional test looks like this:

        test "javascript response..." do
          get :index
        end

    it will execute the HTML branch of the respond_to block. If you try this:

        test "javascript response..." do
          get 'index.js'
        end

    it executes the view (index.js.erb) WITHOUT running the controller action!

    Read the article

  • CakePHP Webroot .htaccess

    - by Mr A
    Normally one would have a webroot that looks like this:

        /www/
        |
        +-- index.php
        |
        +-- js/
        |   |
        |   +-- xyz.js
        |
        +-- css/
        |   |
        |   +-- xyz.css
        |
        +-- etc.....

    With my setup, it is going to be most beneficial for me to move everything into a common subfolder, leaving the index.php of the Cake app in the root. I.e.:

        /www/
        |
        +-- index.php
        |
        +-- resources/
            |
            +-- js/
            |   |
            |   +-- xyz.js
            |
            +-- css/
            |   |
            |   +-- xyz.css
            |
            +-- etc.....

    What is my .htaccess going to look like? Thanks!

    Read the article

  • Symfony - Override sf_format when calling get_partial.

    - by deadwards
    I'm making an AJAX call in my symfony project, so it has an sf_format of 'js'. In the actionSuccess.js.php view, I call get_partial to update the content on the page. By default it looks for the partial in 'js' format since the sf_format is still set as 'js'. Is it possible to override the sf_format so that it uses the regular 'html' partial that I already have (so that I don't have to have two identical partials)?

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. This goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This Javacript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
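
    As a small extension of the jQuery example above (not part of the original article): instead of attaching the anti-forgery header to each individual $.ajax call, the token can be applied globally with jQuery's ajaxSetup so that every Ajax request made by the Single Page App carries it:

        // Read the token rendered by @Html.AntiForgeryToken() once at startup...
        var csrfToken = $("input[name='__RequestVerificationToken']").val();

        // ...and send it as a header on every subsequent jQuery Ajax request.
        $.ajaxSetup({
            headers: { __RequestVerificationToken: csrfToken }
        });

    One caveat: global headers are attached to every jQuery request, so this shortcut is only appropriate when the app talks to a single origin.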

    Read the article

  • Metro, Authentication, and the ASP.NET Web API

    - by Stephen.Walther
    Imagine that you want to create a Metro style app written with JavaScript and you want to communicate with a remote web service. For example, you are creating a movie app which retrieves a list of movies from a movies service. In this situation, how do you authenticate your Metro app and the Metro user so not just anyone can call the movies service? How can you identify the user making the request so you can return user specific data from the service? The Windows Live SDK supports a feature named Single Sign-On. When a user logs into a Windows 8 machine using their Live ID, you can authenticate the user’s identity automatically. Even better, when the Metro app performs a call to a remote web service, you can pass an authentication token to the remote service and prevent unauthorized access to the service. The documentation for Single Sign-On is located here: http://msdn.microsoft.com/en-us/library/live/hh826544.aspx In this blog entry, I describe the steps that you need to follow to use Single Sign-On with a (very) simple movie app. We build a Metro app which communicates with a web service created using the ASP.NET Web API. Creating the Visual Studio Solution Let’s start by creating a Visual Studio solution which contains two projects: a Windows Metro style Blank App project and an ASP.NET MVC 4 Web Application project. Name the Metro app MovieApp and the ASP.NET MVC application MovieApp.Services. When you create the ASP.NET MVC application, select the Web API template: After you create the two projects, your Visual Studio Solution Explorer window should look like this: Configuring the Live SDK You need to get your hands on the Live SDK and register your Metro app. You can download the latest version of the SDK (version 5.2) from the following address: http://www.microsoft.com/en-us/download/details.aspx?id=29938 After you download the Live SDK, you need to visit the following website to register your Metro app: https://manage.dev.live.com/build Don’t let the title of the website — Windows Push Notifications & Live Connect – confuse you, this is the right place. Follow the instructions at the website to register your Metro app. Don’t forget to follow the instructions in Step 3 for updating the information in your Metro app’s manifest. After you register, your client secret is displayed. Record this client secret because you will need it later (we use it with the web service): You need to configure one more thing. You must enter your Redirect Domain by visiting the following website: https://manage.dev.live.com/Applications/Index Click on your application name, click Edit Settings, click the API Settings tab, and enter a value for the Redirect Domain field. You can enter any domain that you please just as long as the domain has not already been taken: For the Redirect Domain, I entered http://superexpertmovieapp.com. Create the Metro MovieApp Next, we need to create the MovieApp. The MovieApp will: 1. Use Single Sign-On to log the current user into Live 2. Call the MoviesService web service 3. Display the results in a ListView control Because we use the Live SDK in the MovieApp, we need to add a reference to it. 
Right-click your References folder in the Solution Explorer window and add the reference: Here’s the HTML page for the Metro App: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>MovieApp</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" /> <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script> <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script> <!-- Live SDK --> <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script> <!-- WebServices references --> <link href="/css/default.css" rel="stylesheet" /> <script src="/js/default.js"></script> </head> <body> <div id="tmplMovie" data-win-control="WinJS.Binding.Template"> <div class="movieItem"> <span data-win-bind="innerText:title"></span> <br /><span data-win-bind="innerText:director"></span> </div> </div> <div id="lvMovies" data-win-control="WinJS.UI.ListView" data-win-options="{ itemTemplate: select('#tmplMovie') }"> </div> </body> </html> The HTML page above contains a Template and ListView control. These controls are used to display the movies when the movies are returned from the movies service. Notice that the page includes a reference to the Live script that we registered earlier: <!-- Live SDK --> <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script> The JavaScript code looks like this: (function () { "use strict"; var REDIRECT_DOMAIN = "http://superexpertmovieapp.com"; var WEBSERVICE_URL = "http://localhost:49743/api/movies"; function init() { WinJS.UI.processAll().done(function () { // Get element and control references var lvMovies = document.getElementById("lvMovies").winControl; // Login to Windows Live var scopes = ["wl.signin"]; WL.init({ scope: scopes, redirect_uri: REDIRECT_DOMAIN }); WL.login().then( function(response) { // Get the authentication token var authenticationToken = response.session.authentication_token; // Call the web service var options = { url: WEBSERVICE_URL, headers: { authenticationToken: authenticationToken } }; WinJS.xhr(options).done( function (xhr) { var movies = JSON.parse(xhr.response); var listMovies = new WinJS.Binding.List(movies); lvMovies.itemDataSource = listMovies.dataSource; }, function (xhr) { console.log(xhr.statusText); } ); }, function(response) { throw WinJS.ErrorFromName("Failed to login!"); } ); }); } document.addEventListener("DOMContentLoaded", init); })(); There are two constants which you need to set to get the code above to work: REDIRECT_DOMAIN and WEBSERVICE_URL. The REDIRECT_DOMAIN is the domain that you entered when registering your app with Live. The WEBSERVICE_URL is the path to your web service. You can get the correct value for WEBSERVICE_URL by opening the Project Properties for the MovieApp.Services project, clicking the Web tab, and getting the correct URL. The port number is randomly generated. In my code, I used the URL  “http://localhost:49743/api/movies”. Assuming that the user is logged into Windows 8 with a Live account, when the user runs the MovieApp, the user is logged into Live automatically. The user is logged in with the following code: // Login to Windows Live var scopes = ["wl.signin"]; WL.init({ scope: scopes, redirect_uri: REDIRECT_DOMAIN }); WL.login().then(function(response) { // Do something }); The scopes setting determines what the user has permission to do. For example, access the user’s SkyDrive or access the user’s calendar or contacts. 
The available scopes are listed here: http://msdn.microsoft.com/en-us/library/live/hh243646.aspx In our case, we only need the wl.signin scope which enables Single Sign-On. After the user signs in, you can retrieve the user’s Live authentication token. The authentication token is passed to the movies service to authenticate the user. Creating the Movies Service The Movies Service is implemented as an API controller in an ASP.NET MVC 4 Web API project. Here’s what the MoviesController looks like: using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using JWTSample; using MovieApp.Services.Models; namespace MovieApp.Services.Controllers { public class MoviesController : ApiController { const string CLIENT_SECRET = "NtxjF2wu7JeY1unvVN-lb0hoeWOMUFoR"; // GET api/values public HttpResponseMessage Get() { // Authenticate // Get authenticationToken var authenticationToken = Request.Headers.GetValues("authenticationToken").FirstOrDefault(); if (authenticationToken == null) { return new HttpResponseMessage(HttpStatusCode.Unauthorized); } // Validate token var d = new Dictionary<int, string>(); d.Add(0, CLIENT_SECRET); try { var myJWT = new JsonWebToken(authenticationToken, d); } catch { return new HttpResponseMessage(HttpStatusCode.Unauthorized); } // Return results return Request.CreateResponse( HttpStatusCode.OK, new List<Movie> { new Movie {Title="Star Wars", Director="Lucas"}, new Movie {Title="King Kong", Director="Jackson"}, new Movie {Title="Memento", Director="Nolan"} } ); } } } Because the Metro app performs an HTTP GET request, the MovieController Get() action is invoked. This action returns a set of three movies when, and only when, the authentication token is validated. The Movie class looks like this: using Newtonsoft.Json; namespace MovieApp.Services.Models { public class Movie { [JsonProperty(PropertyName="title")] public string Title { get; set; } [JsonProperty(PropertyName="director")] public string Director { get; set; } } } Notice that the Movie class uses the JsonProperty attribute to change Title to title and Director to director to make JavaScript developers happy. The Get() method validates the authentication token before returning the movies to the Metro app. To get authentication to work, you need to provide the client secret which you created at the Live management site. If you forgot to write down the secret, you can get it again here: https://manage.dev.live.com/Applications/Index The client secret is assigned to a constant at the top of the MoviesController class. The MoviesController class uses a helper class named JsonWebToken to validate the authentication token. This class was created by the Windows Live team. You can get the source code for the JsonWebToken class from the following GitHub repository: https://github.com/liveservices/LiveSDK/blob/master/Samples/Asp.net/AuthenticationTokenSample/JsonWebToken.cs You need to add an additional reference to your MVC project to use the JsonWebToken class: System.Runtime.Serialization. You can use the JsonWebToken class to get a unique and validated user ID like this: var user = myJWT.Claims.UserId; If you need to store user specific information then you can use the UserId property to uniquely identify the user making the web service call. Running the MovieApp When you first run the Metro MovieApp, you get a screen which asks whether the app should have permission to use Single Sign-On. This screen never appears again after you give permission once. 
Actually, when I first ran the app, I get the following error: According to the error, the app is blocked because “We detected some suspicious activity with your Online Id account. To help protect you, we’ve temporarily blocked your account.” This appears to be a bug in the current preview release of the Live SDK and there is more information about this bug here: http://social.msdn.microsoft.com/Forums/en-US/messengerconnect/thread/866c495f-2127-429d-ab07-842ef84f16ae/ If you click continue, and continue running the app, the error message does not appear again.  Summary The goal of this blog entry was to describe how you can validate Metro apps and Metro users when performing a call to a remote web service. First, I explained how you can create a Metro app which takes advantage of Single Sign-On to authenticate the current user against Live automatically. You learned how to register your Metro app with Live and how to include an authentication token in an Ajax call. Next, I explained how you can validate the authentication token – retrieved from the request header – in a web service. I discussed how you can use the JsonWebToken class to validate the authentication token and retrieve the unique user ID.
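
    A small aside on the scopes mentioned above (a hypothetical extension, not part of the article's sample): if the app later needs more than sign-in, additional Live Connect scopes are simply added to the array passed to WL.init, for example:

        // wl.signin enables Single Sign-On; wl.basic additionally grants
        // read access to the user's basic profile information.
        var scopes = ["wl.signin", "wl.basic"];

        WL.init({ scope: scopes, redirect_uri: REDIRECT_DOMAIN });
        WL.login().then(function (response) {
            // response.session.authentication_token, as in the sample above
        });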

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don’t use it on my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it’s about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn’t find any that would explain how we can have also AJAX, optimistic concurrency control and validation, using Entity Framework Code First, so I set out to write one! I won’t go into explaining what is MVC, Code First or optimistic concurrency control, or AJAX, I assume you are all familiar with these concepts by now. Let’s consider an hypothetical use case, products. For simplicity, we only want to be able to either view a single product or edit this product. First, we need our model: 1: public class Product 2: { 3: public Product() 4: { 5: this.Details = new HashSet<OrderDetail>(); 6: } 7:  8: [Required] 9: [StringLength(50)] 10: public String Name 11: { 12: get; 13: set; 14: } 15:  16: [Key] 17: [ScaffoldColumn(false)] 18: [DatabaseGenerated(DatabaseGeneratedOption.Identity)] 19: public Int32 ProductId 20: { 21: get; 22: set; 23: } 24:  25: [Required] 26: [Range(1, 100)] 27: public Decimal Price 28: { 29: get; 30: set; 31: } 32:  33: public virtual ISet<OrderDetail> Details 34: { 35: get; 36: protected set; 37: } 38:  39: [Timestamp] 40: [ScaffoldColumn(false)] 41: public Byte[] RowVersion 42: { 43: get; 44: set; 45: } 46: } Keep in mind that this is a simple scenario. Let’s see what we have: A class Product, that maps to a product record on the database; A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute); The product’s Price must be a decimal value between 1 and 100 (RangeAttribute); It contains a set of order details, for each time that it has been ordered, which we will not talk about (Details); The record’s primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute); The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control mapped to property RowVersion (TimestampAttribute). Then we will need a controller for viewing product details, which will located on folder ~/Controllers under the name ProductController: 1: public class ProductController : Controller 2: { 3: [HttpGet] 4: public ViewResult Get(Int32 id = 0) 5: { 6: if (id != 0) 7: { 8: using (ProductContext ctx = new ProductContext()) 9: { 10: return (this.View("Single", ctx.Products.Find(id) ?? new Product())); 11: } 12: } 13: else 14: { 15: return (this.View("Single", new Product())); 16: } 17: } 18: } If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product’s details, more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as: 1: public class ProductContext: DbContext 2: { 3: public DbSet<Product> Products 4: { 5: get; 6: set; 7: } 8: } Like I said before, I’ll keep it simple for now, only aggregate root Product is available. 
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template: 1: routes.MapRoute( 2: "Default", // Route name 3: "{controller}/{action}/{id}", // URL with parameters 4: new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults 5: ); Next, we need a view for displaying the product details, let’s call it Single, and have it located under ~/Views/Product: 1: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> 2: <!DOCTYPE html> 3:  4: <html> 5: <head runat="server"> 6: <title>Product</title> 7: <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> 1:  2: <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"> 1: </script> 2: <script type="text/javascript"> 3: function onFailure(error) 4: { 5: } 6:  7: function onComplete(ctx) 8: { 9: } 10:  11: </script> 8: </head> 9: <body> 10: <div> 11: <% 1: : this.Html.ValidationSummary(false) %> 12: <% 1: using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions{ HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> 13: <% 1: : this.Html.EditorForModel() %> 14: <input type="submit" name="submit" value="Submit" /> 15: <% 1: } %> 16: </div> 17: </body> 18: </html> Yes… I am using ASPX syntax… sorry about that!   I implemented an editor template for the Product class, which must be located on the ~/Views/Shared/EditorTemplates folder as file Product.ascx: 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %> 2: <div> 3: <%: this.Html.HiddenFor(model => model.ProductId) %> 4: <%: this.Html.HiddenFor(model => model.RowVersion) %> 5: <fieldset> 6: <legend>Product</legend> 7: <div class="editor-label"> 8: <%: this.Html.LabelFor(model => model.Name) %> 9: </div> 10: <div class="editor-field"> 11: <%: this.Html.TextBoxFor(model => model.Name) %> 12: <%: this.Html.ValidationMessageFor(model => model.Name) %> 13: </div> 14: <div class="editor-label"> 15: <%= this.Html.LabelFor(model => model.Price) %> 16: </div> 17: <div class="editor-field"> 18: <%= this.Html.TextBoxFor(model => model.Price) %> 19: <%: this.Html.ValidationMessageFor(model => model.Price) %> 20: </div> 21: </fieldset> 22: </div> One thing you’ll notice is, I am including both the ProductId and the RowVersion properties as hidden fields; they will come handy later or, so that we know what product and version we are editing. The other thing is the included JavaScript files: jQuery, jQuery UI and unobtrusive validations. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript intellisense for jQuery functions. OK, so, at this moment, I want to add support for AJAX and optimistic concurrency control. So I write a controller method like this: 1: [HttpPost] 2: [AjaxOnly] 3: [Authorize] 4: public JsonResult Edit(Product product) 5: { 6: if (this.TryValidateModel(product) == true) 7: { 8: using (BlogContext ctx = new BlogContext()) 9: { 10: Boolean success = false; 11:  12: ctx.Entry(product).State = (product.ProductId == 0) ? 
EntityState.Added : EntityState.Modified; 13:  14: try 15: { 16: success = (ctx.SaveChanges() == 1); 17: } 18: catch (DbUpdateConcurrencyException) 19: { 20: ctx.Entry(product).Reload(); 21: } 22:  23: return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) })); 24: } 25: } 26: else 27: { 28: return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty })); 29: } 30: } So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties: Success: a boolean flag; RowVersion: the current version of the ROWVERSION column as a Base-64 string; ProductId: the inserted product id, as coming from the database. If the product is new, it will be inserted into the database, and its primary key will be returned into the ProductId property. Success will be set to true; If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value on the database, so the record must have been modified between the time that the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, it will be updated, and Success, ProductId and RowVersion will all have their values set accordingly. So let’s see how we can react to these situations on the client side. Specifically, we want to deal with these situations: The user is not logged in when the update/create request is made, perhaps the cookie expired; The optimistic concurrency check failed; All went well. So, let’s change our view: 1: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> 2: <%@ Import Namespace="System.Web.Security" %> 3:  4: <!DOCTYPE html> 5:  6: <html> 7: <head runat="server"> 8: <title>Product</title> 9: <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> 1:  2: <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"> 1: </script> 2: <script type="text/javascript"> 3: function onFailure(error) 4: { 5: window.alert('An error occurred: ' + error); 6: } 7:  8: function onSuccess(ctx) 9: { 10: if (typeof (ctx.Success) != 'undefined') 11: { 12: $('input#ProductId').val(ctx.ProductId); 13: $('input#RowVersion').val(ctx.RowVersion); 14:  15: if (ctx.Success == false) 16: { 17: window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.'); 18: } 19: else 20: { 21: window.alert('Saved successfully'); 22: } 23: } 24: else 25: { 26: if (window.confirm('Not logged in. 
Login now?') == true) 27: { 28: document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname; 29: } 30: } 31: } 32:  33: </script> 10: </head> 11: <body> 12: <div> 13: <% 1: : this.Html.ValidationSummary(false) %> 14: <% 1: using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions{ HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> 15: <% 1: : this.Html.EditorForModel() %> 16: <input type="submit" name="submit" value="Submit" /> 17: <% 1: } %> 18: </div> 19: </body> 20: </html> The implementation of the onSuccess function first checks if the response contains a Success property, if not, the most likely cause is the request was redirected to the login page (using Forms Authentication), because it wasn’t authenticated, so we navigate there as well, keeping the reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used in determining if the request is for adding a new product or to updating an existing one. The only thing missing is the ability to insert a new product, after inserting/editing an existing one, which can be easily achieved using this snippet: 1: <input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');"/> And that’s it.
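
    As a possible refinement of that last snippet (not in the original tutorial): since jQuery is already on the page, the inline onclick handler could be replaced with an unobtrusive one, for example:

        // Clear the hidden ProductId / RowVersion fields so that the next
        // submit is treated as an insert rather than an update.
        $(function () {
            $('input[value="New"]').on('click', function () {
                $('input#ProductId').val('');
                $('input#RowVersion').val('');
            });
        });

    Giving the button an id and selecting on that would be cleaner still, but the attribute selector keeps the markup from the tutorial unchanged.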

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style ‘application’ this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is is what the Angular ng-cloak is supposed to address, but in this case I had no luck getting it to work properly. This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of  changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage. Large Page and Angular Startup The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree. There are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks so that you don’t see the flash of these markup expressions when the page initially loads before Angular has a chance to render the data into the markup expressions.<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak> Note the ng-cloak attribute on this element, which here is an outer wrapper element of the most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it, until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup which looks like this: It’s brief, but plenty ugly – right?  And depending on the speed of the machine this flash gets more noticeable with slow machines that take longer to process the initial HTML DOM. ng-cloak Styles ng-cloak works by temporarily hiding the marked up element and it does this by essentially applying a style that does this:[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak { display: none !important; } This style is inlined as part of AngularJs itself. 
If you looking at the angular.js source file you’ll find this at the very end of the file:!angular.$$csp() && angular.element(document) .find('head') .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' + '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' + '.ng-hide{display:none !important;}ng\\:form{display:block;}' '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' + '</style>'); This is is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive permutation markup. Unfortunately on this particular web page ng-cloak had no effect – I still see the flicker. Why doesn’t ng-cloak work? The problem is of course – timing. The problem is that Angular actually needs to get control of the page before it ever starts doing anything like process even the ng-cloak attribute (or style etc). Because this page is rather large (about 35k of non-data HTML) it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page after the HTML DOM content there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem. Workarounds There a number of simple ways around this issue and some of them are hinted on in the Angular documentation. Load Angular Sooner One obvious thing that would help with this is to load Angular at the top of the page  BEFORE the DOM loads and that would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-). Use ng-include for Child Content Angular supports nesting of child templates via the ng-include directive which essentially delay loads HTML content. This helps by removing a lot of the template content out of the main page and so getting control to Angular a lot sooner in order to hide the markup template content. In the application in question, I realize that in hindsight it might have been smarter to break this page out with client side ng-include directives instead of MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (ie. the HTML) back together into a whole single and rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful to have a few security related blocks handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show – client side content is always there whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request as templates are loaded from the server dynamically as needed. 
Given that this page was already heavy with resources adding another 10 separate ng-include directives wouldn’t be beneficial :-) ng-include is a valid option if you start from scratch and partition your logic. Of course if you don’t have complex pages, having completely separate views that are swapped in as they are accessed are even better, but we didn’t have this option due to the information having to be on screen all at once. Avoid using {{ }}  Expressions The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: Markup junk, empty views, then views filled with data. If we can remove {{ }} expressions from the page you remove most of the perceived double draw effect as you would effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }}  expressions and replace them with ng-bind directives on DOM elements. For example you can turn:<div class="list-item-name listViewOrderNo"> <a href='#'>{{lineItem.MpsOrderNo}}</a> </div>into:<div class="list-item-name listViewOrderNo"> <a href="#" ng-bind="lineItem.MpsOrderNo"></a> </div> to get identical results but because the {{ }}  expression has been removed there’s no double draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}} for example. It’s an option though especially if you think of this issue up front and you don’t have a ton of expressions to deal with. Add the ng-cloak Styles manually You can also explicitly define the .css styles that Angular injects via code manually in your application’s style sheet. By doing so the styles become immediately available and so are applied right when the page loads – no flicker. I use the minimal:[ng-cloak] { display: none !important; } which works for:<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak> If you use one of the other combinations add the other CSS selectors as well or use the full style shown earlier. Angular will still load its version of the ng-cloak styling but it overrides those settings later, but this will do the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option. The nuclear option: Hiding the Content manually Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight how you can hide/show content manually on load for other frameworks or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control. 
To do this I used:<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none"> Notice the display: none style that explicitly hides the element initially on the page. Then once Angular has run its initialization and effectively processed the template markup on the page you can show the content. For Angular this ‘ready’ event is the app.run() function:app.run( function ($rootScope, $location, cellService) { $("#mainContainer").show(); … }); This effectively removes the display:none style and the content displays. By the time app.run() fires the DOM is ready to displayed with filled data or at least empty data – Angular has gotten control. Edge Case Clearly this is an edge case. In general the initial HTML pages tend to be reasonably sized and the load time for the HTML and Angular are fast enough that there’s no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it’s probably a good idea to add the CSS style into your application’s CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow of a browser somebody might be running and while your super fast dev machine might not show any flicker, grandma’s old XP box very well might…

© Rick Strahl, West Wind Technologies, 2005-2014

    Read the article

  • testing Clojure in Maven

    - by Ralph
    I am new at Maven and even newer at Clojure. As an exercise to learn the language, I am writing a spider solitaire player program. I also plan on writing a similar program in Scala to compare the implementations (see my post http://stackoverflow.com/questions/2571267/modern-java-alternatives-closed). I have configured a Maven directory structure containing the usual src/main/clojure and src/test/clojure directories. My pom.xml file includes the clojure-maven-plugin. When I run "mvn test", it displays "No tests to run", despite my having test code in the src/test/clojure directory. As I misnaming something? Here is my pom.xml file: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>SpiderPlayer</groupId> <artifactId>SpiderPlayer</artifactId> <version>1.0.0-SNAPSHOT</version> <inceptionYear>2010</inceptionYear> <packaging>jar</packaging> <properties> <maven.build.timestamp.format>yyMMdd.HHmm</maven.build.timestamp.format> <main.dir>org/dogdaze/spider_player</main.dir> <main.package>org.dogdaze.spider_player</main.package> <main.class>${main.package}.Main</main.class> </properties> <build> <sourceDirectory>src/main/clojure</sourceDirectory> <testSourceDirectory>src/main/clojure</testSourceDirectory> <plugins> <plugin> <groupId>com.theoryinpractise</groupId> <artifactId>clojure-maven-plugin</artifactId> <version>1.3.1</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.3</version> <executions> <execution> <goals> <goal>run</goal> </goals> <phase>generate-sources</phase> <configuration> <tasks> <echo file="${project.build.sourceDirectory}/${main.dir}/Version.clj" message="(ns ${main.package})${line.separator}"/> <echo file="${project.build.sourceDirectory}/${main.dir}/Version.clj" append="true" message="(def version &quot;${maven.build.timestamp}&quot;)${line.separator}"/> </tasks> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>2.1</version> <executions> <execution> <goals> <goal>single</goal> </goals> <phase>package</phase> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> <archive> <manifest> <mainClass>${main.class}</mainClass> </manifest> </archive> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <redirectTestOutputToFile>true</redirectTestOutputToFile> <skipTests>false</skipTests> <skip>false</skip> </configuration> <executions> <execution> <id>surefire-it</id> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> <configuration> <skip>false</skip> </configuration> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> <scope>compile</scope> </dependency> </dependencies> </project> Here is my Clojure source file (src/main/clojure/org/dogdaze/spider_player/Deck.clj): ; Copyright 2010 Dogdaze (ns org.dogdaze.spider_player.Deck (:use [clojure.contrib.seq-utils :only (shuffle)])) (def suits [:clubs :diamonds :hearts :spades]) (def ranks [:ace :two :three :four 
:five :six :seven :eight :nine :ten :jack :queen :king]) (defn suit-seq "Return 4 suits: if number-of-suits == 1: :clubs :clubs :clubs :clubs if number-of-suits == 2: :clubs :diamonds :clubs :diamonds if number-of-suits == 4: :clubs :diamonds :hearts :spades." [number-of-suits] (take 4 (cycle (take number-of-suits suits)))) (defstruct card :rank :suit) (defn unshuffled-deck "Create an unshuffled deck containing all cards from the number of suits specified." [number-of-suits] (for [rank ranks suit (suit-seq number-of-suits)] (struct card rank suit))) (defn deck "Create a shuffled deck containing all cards from the number of suits specified." [number-of-suits] (shuffle (unshuffled-deck number-of-suits))) Here is my test case (src/test/clojure/org/dogdaze/spider_player/TestDeck.clj): ; Copyright 2010 Dogdaze (ns org.dogdaze.spider_player (:use clojure.set clojure.test org.dogdaze.spider_player.Deck)) (deftest test-suit-seq (is (= (suit-seq 1) [:clubs :clubs :clubs :clubs])) (is (= (suit-seq 2) [:clubs :diamonds :clubs :diamonds])) (is (= (suit-seq 4) [:clubs :diamonds :hearts :spades]))) (def one-suit-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs}]) (def two-suits-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank 
:nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds}]) (def four-suits-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :ace, :suit :hearts} {:rank :ace, :suit :spades} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :two, :suit :hearts} {:rank :two, :suit :spades} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :three, :suit :hearts} {:rank :three, :suit :spades} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :four, :suit :hearts} {:rank :four, :suit :spades} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :five, :suit :hearts} {:rank :five, :suit :spades} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :six, :suit :hearts} {:rank :six, :suit :spades} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :seven, :suit :hearts} {:rank :seven, :suit :spades} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank :eight, :suit :hearts} {:rank :eight, :suit :spades} {:rank :nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :nine, :suit :hearts} {:rank :nine, :suit :spades} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :ten, :suit :hearts} {:rank :ten, :suit :spades} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :jack, :suit :hearts} {:rank :jack, :suit :spades} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :queen, :suit :hearts} {:rank :queen, :suit :spades} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds} {:rank :king, :suit :hearts} {:rank :king, :suit :spades}]) (deftest test-unshuffled-deck (is (= (unshuffled-deck 1) one-suit-deck)) (is (= (unshuffled-deck 2) two-suits-deck)) (is (= (unshuffled-deck 4) four-suits-deck))) (deftest test-shuffled-deck (is (= (set (deck 1)) (set one-suit-deck))) (is (= (set (deck 2)) (set two-suits-deck))) (is (= (set (deck 4)) (set four-suits-deck)))) (run-tests) Any idea why the test is not running? BTW, feel free to suggest improvements to the Clojure code. Thanks, Ralph

    Read the article

  • Asp.net with RegularExpression problem

    - by Eyla
    Greetings, I'm try to do valdation textbox input to valdate a phone number. I have a asp.net textbox and checkbox. the defualt is to validate a us phone number and when I check the checkbox I should change the RegularExpression and error message to validate an international phone using my own RegularExpression. I have no problem to validate the international phone but the problem is when validating the usa phone number I'm always getting error message that it is invalde phone number. I used diffrent RegularExpression but did not work. Please look at my code and davice me. Regards, ! ..................... ASP.net Code ..................... <%@ Page Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="UpdateContact.aspx.cs" Inherits="IMAM_APPLICATION.UpdateContact" Title="Untitled Page" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <script src="js/jquery-1.4.1-vsdoc.js" type="text/javascript"></script> <script src="js/jquery.validate.js" type="text/javascript"></script> <script src="js/js.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { ValidPhoneHome("#<%= chkIntphoneHome%>"); $("#aspnetForm").validate({ // debug: true, rules: { "<%=txtHomePhone.UniqueID %>": { phonehome: true } }, errorElement: "mydiv", wrapper: "mydiv", // a wrapper around the error message errorPlacement: function(error, element) { offset = element.offset(); error.insertBefore(element) error.addClass('message'); // add a class to the wrapper error.css('position', 'absolute'); error.css('left', offset.left + element.outerWidth()); error.css('top', offset.top - (element.height() / 2)); } }); }) </script> <div id="mydiv"> <asp:CheckBox ID="chkIntphoneHome" runat="server" Text="Internation Code" Style="position: absolute; top: 620px; left: 700px;" onclick=" ValidPhoneHome(this)" /> <asp:TextBox ID="txtHomePhone" runat="server" Style="top: 650px; left: 700px; position: absolute; height: 22px; width: 128px" ></asp:TextBox> </div> </asp:Content> ............................. js.js File ................... var RegularExpression; var USAPhone = /(^[a-z]([a-z_\.]*)@([a-z_\.]*)([.][a-z]{3})$)|(^[a-z]([a-z_\.]*)@([a-z_\.]*)(\.[a-z]{3})(\.[a-z]{2})*$)/i; var InterPhone = /^\d{9,12}$/; var errmsg; function ValidPhoneHome(sender) { if (sender.checked == true) { RegularExpression = InterPhone; errmsg = "Enter 9 to 12 numbers as international number"; } else { RegularExpression = USAPhone; errmsg = "Enter a valid number"; } jQuery.validator.addMethod("phonehome", function(value, element) { return this.optional(element) || RegularExpression.test(value); }, errmsg); }

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspxIf we are using SignalR, the connection lifecycle was handled by itself very well. For example when we connect to SignalR service from browser through SignalR JavaScript Client the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL then the connection will be closed automatically. This information had been well documented here. In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method. But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks browser's address change event, based on the route table user defined, launch proper view and controller. Hence in AngularJS we address was changed but the web page still there. All changes of the page content are triggered by Ajax. So there's no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when works with AngularJS. If we dig into the source code of SignalR JavaScript Client source code we will find something below. It monitors the browser page "unload" and "beforeunload" event and send the "stop" message to server to terminate connection. But in AngularJS page change events were hijacked, so SignalR will not receive them and will not stop the connection. 1: // wire the stop handler for when the user leaves the page 2: _pageWindow.bind("unload", function () { 3: connection.log("Window unloading, stopping the connection."); 4:  5: connection.stop(asyncAbort); 6: }); 7:  8: if (isFirefox11OrGreater) { 9: // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close. 10: // #2400 11: _pageWindow.bind("beforeunload", function () { 12: // If connection.stop() runs runs in beforeunload and fails, it will also fail 13: // in unload unless connection.stop() runs after a timeout. 14: window.setTimeout(function () { 15: connection.stop(asyncAbort); 16: }, 0); 17: }); 18: }   Problem Reproduce In the codes below I created a very simple example to demonstrate this issue. Here is the SignalR server side code. 1: public class GreetingHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId)); 6: return base.OnConnected(); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId)); 12: return base.OnDisconnected(); 13: } 14:  15: public void Hello(string user) 16: { 17: Clients.All.hello(string.Format("Hello, {0}!", user)); 18: } 19: } Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express. 
1: public class Startup 2: { 3: public void Configuration(IAppBuilder app) 4: { 5: app.Map("/signalr", map => 6: { 7: map.UseCors(CorsOptions.AllowAll); 8: map.RunSignalR(new HubConfiguration() 9: { 10: EnableJavaScriptProxies = false 11: }); 12: }); 13: } 14: } Since we will host AngularJS application in Node.js in another process and port, the SignalR connection will be cross domain. So I need to enable CORS above. In client side I have a Node.js file to host AngularJS application as a web server. You can use any web server you like such as IIS, Apache, etc.. Below is the "index.html" page which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS, SignalR JavaScript Client Library as well as my AngularJS entry source file "app.js". 1: <html data-ng-app="demo"> 2: <head> 3: <script type="text/javascript" src="jquery-2.1.0.js"></script> 1:  2: <script type="text/javascript" src="angular.js"> 1: </script> 2: <script type="text/javascript" src="angular-ui-router.js"> 1: </script> 2: <script type="text/javascript" src="jquery.signalR-2.0.3.js"> 1: </script> 2: <script type="text/javascript" src="app.js"></script> 4: </head> 5: <body> 6: <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1> 7: <div> 8: <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 9: <a href="javascript:void(0)" data-ui-sref="view2">View 2</a> 10: </div> 11: <div data-ui-view></div> 12: </body> 13: </html> Below is the "app.js". My SignalR logic was in the "View1" page and it will connect to server once the controller was executed. User can specify a user name and send to server, all clients that located in this page will receive the server side greeting message through SignalR. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15:  16: $locationProvider.html5Mode(true); 17: }]); 18:  19: app.value('$', $); 20: app.value('endpoint', 'http://localhost:60448'); 21: app.value('hub', 'GreetingHub'); 22:  23: app.controller('View1Ctrl', function ($scope, $, endpoint, hub) { 24: $scope.user = ''; 25: $scope.response = ''; 26:  27: $scope.greeting = function () { 28: proxy.invoke('Hello', $scope.user) 29: .done(function () {}) 30: .fail(function (error) { 31: console.log(error); 32: }); 33: }; 34:  35: var connection = $.hubConnection(endpoint); 36: var proxy = connection.createHubProxy(hub); 37: proxy.on('hello', function (response) { 38: $scope.$apply(function () { 39: $scope.response = response; 40: }); 41: }); 42: connection.start() 43: .done(function () { 44: console.log('signlar connection established'); 45: }) 46: .fail(function (error) { 47: console.log(error); 48: }); 49: }); 50:  51: app.controller('View2Ctrl', function ($scope, $) { 52: }); When we went to View1 the server side "OnConnect" method will be invoked as below. And in any page we send the message to server, all clients will got the response. If we close one of the client, the server side "OnDisconnect" method will be invoked which is correct. But is we click "View 2" link in the page "OnDisconnect" method will not be invoked even though the content and browser address had been changed. 
This might cause many SignalR connections remain between the client and server. Below is what happened after I clicked "View 1" and "View 2" links four times. As you can see there are 4 live connections.   Solution Since the reason of this issue is because, AngularJS hijacks the page event that SignalR need to stop the connection, we can handle AngularJS route or state change event and stop SignalR connect manually. In the code below I moved the "connection" variant to global scope, added a handler to "$stateChangeStart" and invoked "stop" method of "connection" if its state was not "disconnected". 1: var connection; 2: app.run(['$rootScope', function ($rootScope) { 3: $rootScope.$on('$stateChangeStart', function () { 4: if (connection && connection.state && connection.state !== 4 /* disconnected */) { 5: console.log('signlar connection abort'); 6: connection.stop(); 7: } 8: }); 9: }]); Now if we refresh the page and navigated to View 1, the connection will be opened. At this state if we clicked "View 2" link the content will be changed and the SignalR connection will be closed automatically.   Summary In this post I demonstrated an issue when we are using SignalR with AngularJS. The connection cannot be closed automatically when we navigate to other page/state in AngularJS. And the solution I mentioned below is to move the SignalR connection as a global variant and close it manually when AngularJS route/state changed. You can download the full sample code here. Moving the SignalR connection as a global variant might not be a best solution. It's just for easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since AngularJS factory is a singleton object, we can safely put the connection variant in the factory function scope.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
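As a footnote to the summary above: the post recommends wrapping the SignalR plumbing into an AngularJS factory but does not show that code. Below is a minimal sketch of what such a factory might look like, reusing the $, endpoint and hub values registered in the article's app.js; the factory name signalRService and the onHello callback parameter are placeholder names of my own, not part of the original sample.

// Sketch only: wraps the hub connection in a factory so it lives in one closure.
app.factory('signalRService', ['$rootScope', '$', 'endpoint', 'hub', function ($rootScope, $, endpoint, hub) {
    var connection = null;
    var proxy = null;
    return {
        // Open the connection lazily and keep it inside the factory's closure.
        start: function (onHello) {
            connection = $.hubConnection(endpoint);
            proxy = connection.createHubProxy(hub);
            proxy.on('hello', function (response) {
                $rootScope.$apply(function () { onHello(response); });
            });
            return connection.start();
        },
        invokeHello: function (user) {
            if (proxy) return proxy.invoke('Hello', user);
        },
        // Stop the connection when the AngularJS route/state changes.
        stop: function () {
            if (connection && connection.state && connection.state !== 4 /* disconnected */) {
                connection.stop();
            }
        }
    };
}]);

// Wire the stop call to ui-router state changes once, in app.run().
app.run(['$rootScope', 'signalRService', function ($rootScope, signalRService) {
    $rootScope.$on('$stateChangeStart', function () {
        signalRService.stop();
    });
}]);

A controller would then depend on signalRService instead of creating the hub connection itself, which keeps a single connection per page load no matter how many views use it.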

    Read the article

  • Metro: Query Selectors

    - by Stephen.Walther
    The goal of this blog entry is to explain how to perform queries using selectors when using the WinJS library. In particular, you learn how to use the WinJS.Utilities.query() method and the QueryCollection class to retrieve and modify the elements of an HTML document. Introduction to Selectors When you are building a Web application, you need some way of easily retrieving elements from an HTML document. For example, you might want to retrieve all of the input elements which have a certain class. Or, you might want to retrieve the one and only element with an id of favoriteColor. The standard way of retrieving elements from an HTML document is by using a selector. Anyone who has ever created a Cascading Style Sheet has already used selectors. You use selectors in Cascading Style Sheets to apply formatting rules to elements in a document. For example, the following Cascading Style Sheet rule changes the background color of every INPUT element with a class of .required in a document to the color red: input.red { background-color: red } The “input.red” part is the selector which matches all INPUT elements with a class of red. The W3C standard for selectors (technically, their recommendation) is entitled “Selectors Level 3” and the standard is located here: http://www.w3.org/TR/css3-selectors/ Selectors are not only useful for adding formatting to the elements of a document. Selectors are also useful when you need to apply behavior to the elements of a document. For example, you might want to select a particular BUTTON element with a selector and add a click handler to the element so that something happens whenever you click the button. Selectors are not specific to Cascading Style Sheets. You can use selectors in your JavaScript code to retrieve elements from an HTML document. jQuery is famous for its support for selectors. Using jQuery, you can use a selector to retrieve matching elements from a document and modify the elements. The WinJS library enables you to perform the same types of queries as jQuery using the W3C selector syntax. Performing Queries with the WinJS.Utilities.query() Method When using the WinJS library, you perform a query using a selector by using the WinJS.Utilities.query() method.  The following HTML document contains a BUTTON and a DIV element: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <button>Click Me!</button> <div style="display:none"> <h1>Secret Message</h1> </div> </body> </html> The document contains a reference to the following JavaScript file named \js\default.js: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.Utilities.query("button").listen("click", function () { WinJS.Utilities.query("div").clearStyle("display"); }); } }; app.start(); })(); The default.js script uses the WinJS.Utilities.query() method to retrieve all of the BUTTON elements in the page. The listen() method is used to wire an event handler to the BUTTON click event. When you click the BUTTON, the secret message contained in the hidden DIV element is displayed. 
The clearStyle() method is used to remove the display:none style attribute from the DIV element. Under the covers, the WinJS.Utilities.query() method uses the standard querySelectorAll() method. This means that you can use any selector which is compatible with the querySelectorAll() method when using the WinJS.Utilities.query() method. The querySelectorAll() method is defined in the W3C Selectors API Level 1 standard located here: http://www.w3.org/TR/selectors-api/ Unlike the querySelectorAll() method, the WinJS.Utilities.query() method returns a QueryCollection. We talk about the methods of the QueryCollection class below. Retrieving a Single Element with the WinJS.Utilities.id() Method If you want to retrieve a single element from a document, instead of matching a set of elements, then you can use the WinJS.Utilities.id() method. For example, the following line of code changes the background color of an element to the color red: WinJS.Utilities.id("message").setStyle("background-color", "red"); The statement above matches the one and only element with an Id of message. For example, the statement matches the following DIV element: <div id="message">Hello!</div> Notice that you do not use a hash when matching a single element with the WinJS.Utilities.id() method. You would need to use a hash when using the WinJS.Utilities.query() method to do the same thing like this: WinJS.Utilities.query("#message").setStyle("background-color", "red"); Under the covers, the WinJS.Utilities.id() method calls the standard document.getElementById() method. The WinJS.Utilities.id() method returns the result as a QueryCollection. If no element matches the identifier passed to WinJS.Utilities.id() then you do not get an error. Instead, you get a QueryCollection with no elements (length=0). Using the WinJS.Utilities.children() method The WinJS.Utilities.children() method enables you to retrieve a QueryCollection which contains all of the children of a DOM element. For example, imagine that you have a DIV element which contains children DIV elements like this: <div id="discussContainer"> <div>Message 1</div> <div>Message 2</div> <div>Message 3</div> </div> You can use the following code to add borders around all of the child DIV element and not the container DIV element: var discussContainer = WinJS.Utilities.id("discussContainer").get(0); WinJS.Utilities.children(discussContainer).setStyle("border", "2px dashed red");   It is important to understand that the WinJS.Utilities.children() method only works with a DOM element and not a QueryCollection. Notice that the get() method is used to retrieve the DOM element which represents the discussContainer. Working with the QueryCollection Class Both the WinJS.Utilities.query() method and the WinJS.Utilities.id() method return an instance of the QueryCollection class. The QueryCollection class derives from the base JavaScript Array class and adds several useful methods for working with HTML elements: addClass(name) – Adds a class to every element in the QueryCollection. clearStyle(name) – Removes a style from every element in the QueryCollection. conrols(ctor, options) – Enables you to create controls. get(index) – Retrieves the element from the QueryCollection at the specified index. getAttribute(name) – Retrieves the value of an attribute for the first element in the QueryCollection. hasClass(name) – Returns true if the first element in the QueryCollection has a certain class. include(items) – Includes a collection of items in the QueryCollection. 
listen(eventType, listener, capture) – Adds an event listener to every element in the QueryCollection. query(query) – Performs an additional query on the QueryCollection and returns a new QueryCollection. removeClass(name) – Removes a class from the every element in the QueryCollection. removeEventListener(eventType, listener, capture) – Removes an event listener from every element in the QueryCollection. setAttribute(name, value) – Adds an attribute to every element in the QueryCollection. setStyle(name, value) – Adds a style attribute to every element in the QueryCollection. template(templateElement, data, renderDonePromiseContract) – Renders a template using the supplied data.  toggleClass(name) – Toggles the specified class for every element in the QueryCollection. Because the QueryCollection class derives from the base Array class, it also contains all of the standard Array methods like forEach() and slice(). Summary In this blog post, I’ve described how you can perform queries using selectors within a Windows Metro Style application written with JavaScript. You learned how to return an instance of the QueryCollection class by using the WinJS.Utilities.query(), WinJS.Utilities.id(), and WinJS.Utilities.children() methods. You also learned about the methods of the QueryCollection class.
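To make the method list above a little more concrete, here is a small sketch (not taken from the original post) that exercises a few of the QueryCollection methods together. The selectors used (.form-field and btnSave) are hypothetical and only illustrate the calls.

// Apply a class, a style and an attribute to every matched element.
var fields = WinJS.Utilities.query(".form-field");
fields.addClass("highlight");
fields.setStyle("border", "1px solid gray");
fields.setAttribute("data-validated", "false");

// Hook a click handler to a single element retrieved by id (no hash needed).
WinJS.Utilities.id("btnSave").listen("click", function () {
    // toggleClass flips the class on each element in the collection.
    WinJS.Utilities.query(".form-field").toggleClass("highlight");
});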

    Read the article

  • Investigation: Can different combinations of components effect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category         CSPL Split by Month of Birth and Income Category       Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category         CSPL Split by Month of Birth 1& 2       Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components: As you can these dataflows have a fair bit of work to do and remember that they’re doing that work for 4.7million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: The dataflow containing just one Conditional Split (i.e. #1) will be quicker There is no significant difference between any of them One of the two dataflows containing multiple transformation components will be quicker Regardless of which of those outcomes come to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I’ll explain that below. Before progressing, a side note. The standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is setup. Observe that there is a case for each combination of Month Of Date and Income Category – 24 in total. These expressions get evaluated in the order that they appear and hence if we assume that Month of Date and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is 12.5 i.e. 1 (the minimum) + 24 (the maximum) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. 
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for thus the expected number of expression evaluations is 6.5 There are two of them so total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though so why is the third dataflow so much quicker? Simple, the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow however only have one predicate thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1 In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7million rows and you get a significant amount of extra CPU cycles – that’s where our duration difference comes from. The Wrap-up The obvious point here is that adding new components to a dataflow isn’t necessarily going to make it go any slower, moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done and that doesn’t necessarily mean use less components, indeed sometimes you may be able to reduce workload in ways that aren’t immediately obvious as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups Destination Adapter Comparison Don’t turn the dataflow into a cursor SSIS Dataflow – Designing for performance (webinar) Any comments? Let me know! @Jamiet
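For readers who want to play with the arithmetic in the explanation above, here is a small back-of-the-envelope sketch. The row count and case counts come straight from the article; the cost model (expected expressions evaluated per row multiplied by boolean predicates per expression) is my own simplification and ignores everything else the dataflow engine does.

// Expected number of cases evaluated per row when outputs are hit uniformly.
function expectedEvaluations(outputs) {
    return (1 + outputs) / 2;
}

var rows = 4731904;
var designs = {
    // 24 cases, 2 predicates per expression
    oneConditionalSplit: expectedEvaluations(24) * 2,                             // 25 per row
    // 1 derived column evaluation + 24-case split with 2 predicates each
    derivedColumnPlusSplit: 1 + expectedEvaluations(24) * 2,                      // 26 per row
    // 2-case income split + two 12-case month splits, 1 predicate each
    threeConditionalSplits: expectedEvaluations(2) + 2 * expectedEvaluations(12)  // 14.5 per row
};

Object.keys(designs).forEach(function (name) {
    console.log(name + ": ~" + Math.round(designs[name] * rows) + " predicate evaluations");
});

Under this rough model the three-split design does about 14.5 predicate evaluations per row against 25 and 26 for the other two designs, which lines up with the direction (if not the exact magnitude) of the measured durations.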

    Read the article

  • Filtering List Data with a jQuery-searchFilter Plugin

    - by Rick Strahl
    When dealing with list based data on HTML forms, filtering that data down based on a search text expression is an extremely useful feature. We’re used to search boxes on just about anything these days and HTML forms should be no different. In this post I’ll describe how you can easily filter a list down to just the elements that match text typed into a search box. It’s a pretty simple task and it’s super easy to do, but I get a surprising number of comments from developers I work with who are surprised how easy it is to hook up this sort of behavior, that I thought it’s worth a blog post. But Angular does that out of the Box, right? These days it seems everybody is raving about Angular and the rich SPA features it provides. One of the cool features of Angular is the ability to do drop dead simple filters where you can specify a filter expression as part of a looping construct and automatically have that filter applied so that only items that match the filter show. I think Angular has single handedly elevated search filters to first rate, front-row status because it’s so easy. I love using Angular myself, but Angular is not a generic solution to problems like this. For one thing, using Angular requires you to render the list data with Angular – if you have data that is server rendered or static, then Angular doesn’t work. Not all applications are client side rendered SPAs – not by a long shot, and nor do all applications need to become SPAs. Long story short, it’s pretty easy to achieve text filtering effects using jQuery (or plain JavaScript for that matter) with just a little bit of work. Let’s take a look at an example. Why Filter? Client side filtering is a very useful tool that can make it drastically easier to sift through data displayed in client side lists. In my applications I like to display scrollable lists that contain a reasonably large amount of data, rather than the classic paging style displays which tend to be painful to use. So I often display 50 or so items per ‘page’ and it’s extremely useful to be able to filter this list down. Here’s an example in my Time Trakker application where I can quickly glance at various common views of my time entries. I can see Recent Entries, Unbilled Entries, Open Entries etc and filter those down by individual customers and so forth. Each of these lists results tends to be a few pages worth of scrollable content. The following screen shot shows a filtered view of Recent Entries that match the search keyword of CellPage: As you can see in this animated GIF, the filter is applied as you type, displaying only entries that match the text anywhere inside of the text of each of the list items. This is an immediately useful feature for just about any list display and adds significant value. A few lines of jQuery The good news is that this is trivially simple using jQuery. To get an idea what this looks like, here’s the relevant page layout showing only the search box and the list layout:<div id="divItemWrapper"> <div class="time-entry"> <div class="time-entry-right"> May 11, 2014 - 7:20pm<br /> <span style='color:steelblue'>0h:40min</span><br /> <a id="btnDeleteButton" href="#" class="hoverbutton" data-id="16825"> <img src="images/remove.gif" /> </a> </div> <div class="punchedoutimg"></div> <b><a href='/TimeTrakkerWeb/punchout/16825'>Project Housekeeping</a></b><br /> <small><i>Sawgrass</i></small> </div> ... 
more items here </div> So we have a searchbox txtSearchPage and a bunch of DIV elements with a .time-entry CSS class attached that makes up the list of items displayed. To hook up the search filter with jQuery is merely a matter of a few lines of jQuery code hooked to the .keyup() event handler: <script type="text/javascript"> $("#txtSearchPage").keyup(function() { var search = $(this).val(); $(".time-entry").show(); if (search) $(".time-entry").not(":contains(" + search + ")").hide(); }); </script> The idea here is pretty simple: You capture the keystroke in the search box and capture the search text. Using that search text you first make all items visible and then hide all the items that don’t match. Since DOM changes are applied after a method finishes execution in JavaScript, the show and hide operations are effectively batched up and so the view changes only to the final list rather than flashing the whole list and then removing items on a slow machine. You get the desired effect of the list showing the items in question. Case Insensitive Filtering But there is one problem with the solution above: The jQuery :contains filter is case sensitive, so your search text has to match expressions explicitly which is a bit cumbersome when typing. In the screen capture above I actually cheated – I used a custom filter that provides case insensitive contains behavior. jQuery makes it really easy to create custom query filters, and so I created one called containsNoCase. Here’s the implementation of this custom filter:$.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; This filter can be added anywhere where page level JavaScript runs – in page script or a seperately loaded .js file.  The filter basically extends jQuery with a : expression. Filters get passed a tokenized array that contains the expression. In this case the m[3] contains the search text from inside of the brackets. A filter basically looks at the active element that is passed in and then can return true or false to determine whether the item should be matched. Here I check a regular expression that looks for the search text in the element’s text. 
So the code for the filter now changes to:$(".time-entry").not(":containsNoCase(" + search + ")").hide(); And voila – you now have a case insensitive search. You can play around with another simpler example using this Plunkr:http://plnkr.co/edit/hDprZ3IlC6uzwFJtgHJh?p=preview Wrapping it up in a jQuery Plug-in To make this even easier to use and so that you can more easily remember how to use this search type filter, we can wrap this logic into a small jQuery plug-in:(function($, undefined) { $.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; $.fn.searchFilter = function(options) { var opt = $.extend({ // target selector targetSelector: "", // number of characters before search is applied charCount: 1 }, options); return this.each(function() { var $el = $(this); $el.keyup(function() { var search = $(this).val(); var $target = $(opt.targetSelector); $target.show(); if (search && search.length >= opt.charCount) $target.not(":containsNoCase(" + search + ")").hide(); }); }); }; })(jQuery); To use this plug-in now becomes a one liner:$("#txtSearchPagePlugin").searchFilter({ targetSelector: ".time-entry", charCount: 2}) You attach the .searchFilter() plug-in to the text box you are searching and specify a targetSelector that is to be filtered. Optionally you can specify a character count at which the filter kicks in since it's kind of useless to filter at a single character typically. Summary This is a very easy solution to a cool user interface feature your users will thank you for. Search filtering is a simple but highly effective user interface feature, and as you've seen in this post it's very simple to create this behavior with just a few lines of jQuery code. While all the cool kids are doing Angular these days, jQuery is still useful in many applications that don't embrace the 'everything generated in JavaScript' paradigm. I hope this jQuery plug-in or just the raw jQuery will be useful to some of you… Resources Example on Plunker© Rick Strahl, West Wind Technologies, 2005-2014Posted in jQuery  HTML5  JavaScript
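One refinement you may want on very large lists (it is not part of the original plug-in) is to debounce the keyup handler so the list isn't re-filtered on every single keystroke. The sketch below assumes the :containsNoCase expression from the article is already registered; the 250ms delay and the searchFilterDebounced name are arbitrary choices of mine.

// Debounced variant of the plug-in: waits for a short pause in typing
// before applying the filter.
$.fn.searchFilterDebounced = function (options) {
    var opt = $.extend({ targetSelector: "", charCount: 1, delay: 250 }, options);
    return this.each(function () {
        var timeout = null;
        $(this).keyup(function () {
            var $box = $(this);
            clearTimeout(timeout);
            timeout = setTimeout(function () {
                var search = $box.val();
                var $target = $(opt.targetSelector);
                $target.show();
                if (search && search.length >= opt.charCount)
                    $target.not(":containsNoCase(" + search + ")").hide();
            }, opt.delay);
        });
    });
};

// Usage mirrors the original plug-in:
// $("#txtSearchPage").searchFilterDebounced({ targetSelector: ".time-entry", charCount: 2 });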

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET application from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:/// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } The first method checks whether the client sending the request includes the accept-encoding for either gzip or deflate, and if if it does it returns true. The second function uses IsGzipSupported() to decide whether it should encode content and uses an Response Filter to do its job. Basically response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with but the two .NET filter streams for GZip and Deflate Compression make this a snap to implement. In my old code and even now in MVC I can always do:public ActionResult List(string keyword=null, int category=0) { WebUtils.GZipEncodePage(); …} to encode my content. And that works just fine. The proper way: Create an ActionFilterAttribute However in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created an CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:/// <summary> /// Attribute that can be added to controller methods to force content /// to be GZip encoded if the client supports it /// </summary> public class CompressContentAttribute : ActionFilterAttribute { /// <summary> /// Override to compress the content that is generated by /// an action method. 
/// </summary> /// <param name="filterContext"></param> public override void OnActionExecuting(ActionExecutingContext filterContext) { GZipEncodePage(); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } It's basically the same code wrapped into an ActionFilter attribute, which intercepts requests MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting() which fires before the Controller action is fired. With the CompressContentAttribute created, it can now be applied to either the controller as a whole:[CompressContent] public class ClassifiedsController : ClassifiedsBaseController { … } or to one of the Action methods:[CompressContent] public ActionResult List(string keyword=null, int category=0) { … } The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact,  you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than old WebForms apps since controller methods tend to be more focused. Compression Caveats Http compression is very cool and pretty easy to implement in ASP.NET but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example, is if an error occurs and a compression filter is applied. ASP.NET errors don't clear the filter, but clear the Response headers which results in some nasty garbage because the compressed content now no longer matches the headers. Another issue is Caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded. 
None of these are show stoppers, but you have to be aware of the issues. Related Posts GZip Compression with ASP.NET Content ASP.NET GZip Encoding Caveats© Rick Strahl, West Wind Technologies, 2005-2012Posted in ASP.NET  MVC

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks. Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations… Task continuations provide a clean syntax, and a very simple, elegant means of synchronizing asynchronous method results with the user interface. In addition, continuations provide a very simple, elegant means of working with collections of tasks. Prior to .NET 4, working with multiple related asynchronous method calls was very tricky. If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we’d have to program all of the handling ourselves. We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second. Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking. This is error prone, difficult, and can easily lead to subtle bugs. Similar to how the Task class's static methods provide a way to block until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll. This allows you to easily specify a single delegate to run when a collection of tasks has completed. For example, suppose we have a class which fetches data from the network. This can be a long running operation, and potentially fail in certain situations, such as a server being down. As a result, we have three separate servers which we will “query” for our information. Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three. With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves. The Task and TaskFactory classes simplify this for us, allowing us to write: var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) ); var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) ); var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) ); var result = Task.Factory.ContinueWhenAll( new[] {server1, server2, server3 }, (tasks) => { // Propagate exceptions (see below) Task.WaitAll(tasks); return this.CompareTaskResults( tasks[0].Result, tasks[1].Result, tasks[2].Result); }); This is clean, simple, and elegant. The one complication is the Task.WaitAll(tasks); statement.
Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag.  If the networkClass.GetResults method fails and raises an exception, we want to make sure to handle it cleanly.  By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions.  If we wait on the continuation, we can trap this AggregateException and handle it cleanly.  Without this line, it’s possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException.  This would happen any time two of our original tasks failed. Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes.  If, for example, we didn’t need to compare the results of all three network locations, but only needed to use one, we could still schedule three tasks.  We could then have our completion logic work on the first task which completed, and ignore the others.  This is done via TaskFactory.ContinueWhenAny:

var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );

var result = Task.Factory.ContinueWhenAny(
    new[] { server1, server2, server3 },
    (firstTask) =>
    {
        return this.ProcessTaskResult(firstTask.Result);
    });

Here, instead of working with all three tasks, we’re just using the first task which finishes.  This is very useful, as it allows us to easily work with results of multiple operations, and “throw away” the others.  However, you must take care when using ContinueWhenAny to properly handle exceptions.  At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task.  Failing to do so can lead to an UnobservedTaskException.
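To round out the exception handling point above, here is a minimal sketch of waiting on the ContinueWhenAll continuation from the first sample; it reuses the result variable and names from the code above, and the handling itself is illustrative only:

try
{
    // Blocking on the continuation's Result observes any exceptions
    // that Task.WaitAll wrapped up inside the continuation.
    var comparison = result.Result;
}
catch (AggregateException aggEx)
{
    // Flatten() unwraps the nested AggregateException produced by WaitAll
    // so each failed GetResults call can be inspected individually.
    foreach (var ex in aggEx.Flatten().InnerExceptions)
        Console.WriteLine("Server query failed: " + ex.Message);
}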

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form with a filter drop down that allows selection of categories. This list is static and rarely changes, so rather than loading these items from the database each time, I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. Didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario, here's the drop down list definition in the Razor View:

@Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories")

where Model.CategoryList gets set with:

[HttpPost]
[CompressContent]
public ActionResult List(MessageListViewModel model)
{
    InitializeViewModel(model);

    busEntry entryBus = new busEntry();
    var entries = entryBus.GetEntryList(model.QueryParameters);

    model.Entries = entries;
    model.DisplayMode = ApplicationDisplayModes.Standard;
    model.CategoryList = AppUtils.GetCachedCategoryList();

    return View(model);
}

The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:

/// <summary>
/// Returns a static category list that is cached
/// </summary>
/// <returns></returns>
public static SelectListItem[] GetCachedCategoryList()
{
    if (_CategoryList != null)
        return _CategoryList;

    lock (_SyncLock)
    {
        if (_CategoryList != null)
            return _CategoryList;

        var catBus = new busCategory();
        var categories = catBus.GetCategories().ToList();

        // Turn list into a SelectItem list
        var catList = categories
            .Select(cat => new SelectListItem()
            {
                Text = cat.Name,
                Value = cat.Id.ToString()
            })
            .ToList();

        catList.Insert(0, new SelectListItem()
        {
            Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(),
            Text = "All Categories except Real Estate"
        });
        catList.Insert(1, new SelectListItem()
        {
            Value = "-1",
            Text = "--------------------------------"
        });

        _CategoryList = catList.ToArray();
    }
    return _CategoryList;
}

private static SelectListItem[] _CategoryList;

This seemed normal enough to me - I've been doing stuff like this forever, caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications etc. Watch that ModelBinder! However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to, changing the actual entry items in the list by setting their Selected values. If you look at the code above I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through ModelBinding. Sure enough, when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First, it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly.
But it's also a problem because the array is getting updated by multiple ASP.NET threads, which likely would lead to odd crashes from time to time. Not good! In retrospect the model binding behavior makes perfect sense. The actual items and their Selected properties are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a postback, producing the rather surprising results. Totally missed this during testing, and it's another one of those little "Did you know?" moments. So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:

/// <summary>
/// Returns a static category list that is cached
/// </summary>
/// <returns></returns>
public static SelectListItem[] GetCachedCategoryList()
{
    if (_CategoryList != null)
    {
        // Have to create new instances via projection
        // so ModelBinding updates don't affect this globally
        return _CategoryList
            .Select(cat => new SelectListItem()
            {
                Value = cat.Value,
                Text = cat.Text
            })
            .ToArray();
    }
…}

The key is that newly created SelectListItem instances are returned - not the instances held in the original cached list. The key here is 'new instances', so that the ModelBinding updates do not update the actual static instance. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now I probably would have not cached the list and just taken the hit to read from the database (a short sketch of that alternative appears at the end of this entry). If there is even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes…
© Rick Strahl, West Wind Technologies, 2005-2012 - Posted in MVC  ASP.NET  .NET
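For reference, a rough sketch of that cache-the-raw-data alternative: cache only the id/name pairs and project a fresh SelectListItem array on every request, so there is no shared SelectListItem instance for the ModelBinder to mutate. The busCategory and SelectListItem names follow the code above; the _CategoryData field, its shape, and the method name are assumptions for illustration:

private static KeyValuePair<int, string>[] _CategoryData;
private static readonly object _SyncLock = new object();

public static SelectListItem[] GetCategoryList()
{
    if (_CategoryData == null)
    {
        lock (_SyncLock)
        {
            if (_CategoryData == null)
            {
                // Cache the raw data once - plain values, nothing the ModelBinder touches
                var catBus = new busCategory();
                _CategoryData = catBus.GetCategories()
                    .Select(cat => new KeyValuePair<int, string>(cat.Id, cat.Name))
                    .ToArray();
            }
        }
    }

    // Project brand new SelectListItem instances on every call
    return _CategoryData
        .Select(cat => new SelectListItem()
        {
            Value = cat.Key.ToString(),
            Text = cat.Value
        })
        .ToArray();
}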

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML Encoding (HttpUtility.HtmlEncode() in .NET, or standard ASP.NET MVC output like @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor the default @ expression syntax, @Model.UserContent, automatically produces HTML encoded content - you actually have to go out of your way to create raw HTML content using @Html.Raw() or the HtmlString class (safe by default). In Web Forms (V4) you can use <%: Model.UserContent %>, or if you're using a version prior to 4.0, <%= HttpUtility.HtmlEncode(Model.UserContent) %> (a short encoding sketch appears at the end of this entry). This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays the markup but doesn't render it. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, cleaning up that HTML is a complex task. In the projects I work on, customers ask for the ability to post raw HTML quite frequently.  Almost every app I've built where there's document content from users starts out with text only input - possibly using something like MarkDown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods - from sanitizing and simple rejection of input to custom markup schemes - none of which have ever felt comfortable to me. They work in a half assed, hacked together sort of way, but I always live in fear of missing something vital, which is *really easy to do*.

My Wishlist Item: A <restricted> tag in HTML

Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code, so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc.
and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:

<article>
    <restricted allowscripts="no" allowiframes="no">
        <div>Some content</div>
        <script>alert('go ahead make my day, punk!');</script>
        <div onfocus="$.getJSON('http://evilsite.com/')">more content</div>
    </restricted>
</article>

A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on content that you actually render from your data, and only if you are dealing with user-provided data. So something like this:

<article>
    <restricted>
        @Html.Raw(Model.UserContent)
    </restricted>
</article>

For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow a </restricted> or <restricted> tag inside the content, which could otherwise short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth consideration I think. Without code running in a user supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like Cookies to a server. Or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) and could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design by committee that is the W3C can't agree on anything in timeframes measured in anything less than decades, but maybe this is one place where browser vendors can actually step up the pressure. It's in their best interest - it would significantly reduce the attack surface for vulnerabilities on their browser platforms. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues, even though there might be relatively easy solutions to make this happen.
It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future.
© Rick Strahl, West Wind Technologies, 2005-2012 - Posted in ASP.NET  HTML5  HTML  Security
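To make the encoding behavior discussed at the top of this entry concrete, here's a tiny self-contained sketch (the input string is made up for illustration) of what HttpUtility.HtmlEncode does to script input:

using System;
using System.Web;   // HttpUtility

class EncodeDemo
{
    static void Main()
    {
        string userInput = "<script>window.x = 1;</script><b>not bold anymore</b>";

        // HtmlEncode turns markup characters into entities, so the browser
        // displays the tags as text instead of executing or rendering them.
        string safe = HttpUtility.HtmlEncode(userInput);

        Console.WriteLine(safe);
        // -> &lt;script&gt;window.x = 1;&lt;/script&gt;&lt;b&gt;not bold anymore&lt;/b&gt;
    }
}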

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient: it was introduced in 1997, alongside Internet Explorer 4. Html Help 1.0 is basically a completely HTML based Help system that uses a Help Viewer which internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in the locking down of the Web Browser control in Windows and also of the Help Engine, which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files, CHM still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system based help at all, but links to online documentation. For Windows apps though it's still very common to see CHM help files, and there are still a ton of CHM help files out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output.

Image is Everything and you ain't got it!

One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, if you have help content that uses HTML5 or CSS3 you might have an HTML Help topic like the following, shown here in a full Web Browser instance of Internet Explorer: The page clearly uses CSS3 features like rounded corners and box shadows, rendered with plain CSS3. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: although the page still displays and functions, all the CSS3 styling is gone. This even though I am running on a Windows 7 machine with IE9 that should be able to render these CSS features. Bummer.

Web Browser Control - perpetually stuck in IE 7 Mode

The problem is the Web Browser/Shell Components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default for engine compatibility reasons. However, there is at least one way to fix this explicitly using Registry keys on a per application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component.
If the application is older than the specified version it falls back to the default version (IE 7 rendering).

Forcing CHM files to display with IE9 (or later) Rendering

Knowing that we can force the IE usage for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application, as there are a number of ways that can launch the help engine.

hh.exe - The standalone Windows CHM Help Viewer that launches when you launch a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.
YourApplication.exe - If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is your host. You should add your application's exe to the registry during application startup.
foxhhelp9.exe - If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

What to set

You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications.

32 bit only or 64 bit:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: hh.exe

32 bit on 64 bit machine:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: hh.exe

Note that it's best to set both values, ideally when you install your application, so it works regardless of which platform you run on. The value specified is a DWORD. The interesting values are decimal 9000 for IE9 rendering mode that still honors !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use. 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit key (the default one shown selected) is often used. So, now that I've set the registry key for hh.exe I can launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display:

Summary

It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render properly, but at least it's possible to make it work after all.
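If you'd rather set these keys from code (for example in an installer custom action or at application startup, as suggested above) instead of through an install tool, a minimal sketch might look like this - the key paths, executable names and the 9000 value follow this entry, while the class and method names are made up for illustration; writing to HKEY_LOCAL_MACHINE requires administrative rights:

using Microsoft.Win32;

static class BrowserEmulation
{
    // Registers an executable for IE9 rendering (value 9000) under both
    // the 32 bit and the Wow6432Node key paths described above.
    public static void RegisterIe9(string exeName)
    {
        const string key32 =
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";
        const string key64 =
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

        Registry.SetValue(key32, exeName, 9000, RegistryValueKind.DWord);
        Registry.SetValue(key64, exeName, 9000, RegistryValueKind.DWord);
    }
}

// Usage:
// BrowserEmulation.RegisterIe9("hh.exe");
// BrowserEmulation.RegisterIe9("YourApplication.exe");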
Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering, using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source) you can now sidestep the issue of your customers asking: why does my help file look so much shittier than the online help… No more!
© Rick Strahl, West Wind Technologies, 2005-2012 - Posted in HTML5  Help  Html Help Builder  Internet Explorer  Windows

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

public class DynamicFoo : DynamicObject
{
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public string Bar { get; set; }
    public DateTime Entered { get; set; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = null;
        if (!properties.ContainsKey(binder.Name))
            return false;

        result = properties[binder.Name];
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        properties[binder.Name] = value;
        return true;
    }
}

This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

Strong Typing and Dynamic Casting

I can now instantiate and use DynamicFoo in a couple of different ways:

Strong Typing

DynamicFoo fooExplicit = new DynamicFoo();
var fooVar = new DynamicFoo();

These two statements are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties:

DynamicFoo fooExplicit = new DynamicFoo();

// static typing assignments
fooVar.Bar = "Barred!";
fooExplicit.Entered = DateTime.Now;

// echo back static values
Console.WriteLine(fooVar.Bar);
Console.WriteLine(fooExplicit.Entered);

but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic.

Dynamic

dynamic fooDynamic = new DynamicFoo();

fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:

DynamicFoo fooExplicit = new DynamicFoo();
dynamic fooDynamic = fooExplicit;

Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast, so there's no need to use as dynamic. Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
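To see that runtime lookup in action, here's a small usage sketch against the DynamicFoo class above (my own illustration, using only the members shown in this entry) - Bar resolves to the real property on the class, while NewProperty is routed through TrySetMember/TryGetMember into the dictionary:

dynamic foo = new DynamicFoo();

// Resolves to the strongly typed Bar property declared on DynamicFoo
foo.Bar = "Barred!";

// No such class member exists - TrySetMember stores it in the properties dictionary
foo.NewProperty = "Stored in the dictionary";

// Both read back transparently through the dynamic reference
Console.WriteLine(foo.Bar);          // Barred!
Console.WriteLine(foo.NewProperty);  // Stored in the dictionary

// Cast back to the static type: Bar is visible, NewProperty is not
DynamicFoo typed = foo;
Console.WriteLine(typed.Bar);        // Barred!
// typed.NewProperty                 // would not compile - only the dynamic view sees it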
A dynamic type will look for members to access or call in two places:

Using the strongly typed members of the object
Using the IDynamicMetaObjectProvider interface methods to access members

So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed, *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

dynamic fooDynamic = new DynamicFoo();

// dynamic typing assignments
fooDynamic.NewProperty = "Something new!";
fooDynamic.LastAccess = DateTime.Now;

// dynamic assigning static properties
fooDynamic.Bar = "dynamic barred";
fooDynamic.Entered = DateTime.Now;

// echo back dynamic values
Console.WriteLine(fooDynamic.NewProperty);
Console.WriteLine(fooDynamic.LastAccess);
Console.WriteLine(fooDynamic.Bar);
Console.WriteLine(fooDynamic.Entered);

The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see it's pretty easy to create an extensible type this way that can dynamically add members at runtime.

The Alter Ego of IDynamicObject

The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose Intellisense, which means there's no Intellisense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

dynamic fooDynamic = new DynamicFoo();
fooDynamic.NewProperty = "New Property Value";

DynamicFoo foo = fooDynamic;
foo.Bar = "Barred";

Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify as dynamic or as DynamicFoo.

Moral of the Story

This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface.
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject.
© Rick Strahl, West Wind Technologies, 2005-2012 - Posted in CSharp  .NET

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures, I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync, but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions that has caused me quite a bit of update trouble is the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live(), and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement.

jQuery.live()

jQuery.live() was introduced a long time ago to simplify handling events on matched elements that currently exist in the document and those that are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using meta data attached to a parent level element to check and see if the original event target element matches the selected elements (for more info see Elijah Manor's comment below).

An Example

Assume a list of items like the following in HTML, and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.

<div id="PostItemContainer" class="scrollbox">
    <div class="postitem" data-id="4z6qhomm">
        <div class="post-icon"></div>
        <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div>
        <div class="postitemprice rightalign">$ 3,500 O.B.O.</div>
        <div class="smalltext leftalign">Jun. 07 @ 1:06am</div>
        <div class="post-byline">- Vehicles - Automobiles</div>
    </div>
    <div class="postitem" data-id="2jtvuu17">
        <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div>
        <div class="postitemprice rightalign">$950</div>
        <div class="smalltext leftalign">Jun. 07 @ 12:29am</div>
        <div class="post-byline">- Vehicles - Automobiles</div>
    </div>
    …
</div>

With the jQuery.live() function you could easily select elements and hook up a click handler like this:

$(".postitem").live("click", function() {...});

Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods.

Re-writing with jQuery.on()

With .live() removed in 1.9 and later, we have to re-write the .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(), .change() etc. that jQuery exposes. Using jQuery.on() however is not a one to one replacement of the .live() function.
While .on() can handle events directly and use the same syntax as .live() did, you'll find that if you simply switch out .live() with .on(), events on not-yet existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container, and then provide a filter selector to specify which elements you are actually interested in handling the event for. Sounds more complicated than it is and it's easier to see with an example. For the list above, hooking .postitem clicks using jQuery.on() looks like this:

$("#PostItemContainer").on("click", ".postitem", function() {...});

You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do including document, but I tend to use the container closest to the elements I actually want to handle the events on to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live(), and now as new .postitem elements are added the click events are always available. Sweet. Here's the full event signature for the .on() function:

.on( events [, selector ] [, data ], handler(eventObject) )

Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want live-like - uh, .live()-like - behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit about what your parent container for trapping events is.

.on() is good Practice even for ordinary static Element Lists

As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and try to match it to the originating element. In the early days of jQuery I used to manually build handlers that did this and manually drilled from the event object into the originalTarget to determine if it's a matching element. With later versions of jQuery the various event functions essentially provide this functionality out of the box with functions like .on() and .delegate(). All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic.
And maybe along the way I've helped one or two of you as well to clean up your .live() code…
© Rick Strahl, West Wind Technologies, 2005-2013 - Posted in jQuery

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >