Search Results



  • Should I use the cool, new and awesome stuff [closed]

    - by Ieyasu Sawada
    I'm in the field of web development. I follow a lot of awesome people on Twitter (Paul Irish, Alex Sexton, David Walsh, Rebecca Murphey, etc.), which keeps me constantly updated on the new and cool stuff in web development (backbone.js, angular.js, require.js, ember.js, jasmine, etc.). Now I'm creating a web application for a client, and with all the tools, libraries and plugins I'm aware of, I no longer know when I need them, whether I need them at all, or how to adopt them in a sane way that keeps my code DRY. What's your advice on what I really need to do to become better? Do I really need to use these cool new tools, or should I stick with what I know for now and try to make my code better? Should I unfollow these people so I don't pollute my mind with stuff I can't really use yet because I lack the necessary experience? What should someone like me, with only about two years of experience in web development, really be focusing on?


  • What do I need to know to design a language and write an interpreter for it?

    - by alFReD NSH
    I know this question has been asked before, and there are even thousands of books and articles about it. But the problem is that there are too many, and I don't know whether they're good enough. I have to design a language and write an interpreter for it. The base language is JavaScript (using Node.js), but it's OK if the compiler is written in another language that I can use from Node. I did some research on compiler compilers in JS; there are jison (a Bison implementation in JS), waxeye and peg.js. I decided to give jison a try because of its popularity and because it's used by CoffeeScript, so it should be able to cover my language too. Its grammar definition syntax is similar to Bison's, but when I tried to read the Bison manual it seemed very hard for me to understand, and I think that's because I don't know a lot about what I'm doing. For example, I don't know what formal language theory is. I am experienced in JavaScript (more so than most average programmers), and I also know basic C and C++ (not much experience, but I can write working code for basic things). I haven't had any formal education, so I may not be familiar with some software engineering and computer science principles, though every day I try to grasp a lot of articles and improve. So I'm asking if you know any good book or article that can help me. Please also write why the resource you're suggesting is good.

    Update: The language I'm trying to create is not really complicated. All it has is expressions (with or without units), comparisons and logical operators. There are no functions, loops, etc. The goal is to create a language that non-programmers can easily learn, and to use it to write customized validations and calculations.
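
    Since the question centers on jison, here is a minimal sketch of what a jison grammar looks like when defined from JavaScript, adapted from the calculator example in jison's documentation; the grammar rules shown are illustrative, not taken from the asker's language:

        // a minimal sketch of a jison-based expression parser (illustrative only,
        // adapted from jison's documented calculator example; run under Node
        // after `npm install jison`)
        var Parser = require("jison").Parser;

        var grammar = {
          lex: {
            rules: [
              ["\\s+",                 "/* skip whitespace */"],
              ["[0-9]+(?:\\.[0-9]+)?", "return 'NUMBER';"],
              ["\\+",                  "return '+';"],
              ["\\*",                  "return '*';"],
              ["\\(",                  "return '(';"],
              ["\\)",                  "return ')';"],
              ["$",                    "return 'EOF';"]
            ]
          },
          operators: [
            ["left", "+"],
            ["left", "*"]   // listed later, so * binds tighter than +
          ],
          bnf: {
            expression: [["e EOF", "return $1;"]],
            e: [
              ["e + e",  "$$ = $1 + $3;"],
              ["e * e",  "$$ = $1 * $3;"],
              ["( e )",  "$$ = $2;"],
              ["NUMBER", "$$ = Number(yytext);"]
            ]
          }
        };

        var parser = new Parser(grammar);
        console.log(parser.parse("2 + 3 * 4")); // prints 14

    The appeal of a parser generator here is that precedence and ambiguity are declared rather than hand-coded; jison's own calculator example extends this same shape with more operators and error handling.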


  • Deciding on a company-wide javascript strategy [on hold]

    - by drogon
    Our company is moving most of its software from thick-client WinForms apps to web apps. We are using ASP.NET MVC on the server side. Most of the developers are brand new to the web and need to become efficient and knowledgeable at writing client-side web code (JavaScript). We are deciding on a number of things and would appreciate feedback on the following (a small illustration of the module question follows this post):

    1. Angular.js or Backbone.js? Backbone (with Underscore) is certainly more lightweight, but requires more custom development. Angular seems to be a full-fledged framework, but would require everyone to embrace it and probably has a longer learning curve(?). (Note: I know nothing about Angular at this point.)

    2. Require.js or script includes with MVC BundleConfig? Require.js makes development "feel like" C# (importing namespaces), but integrating the build/minification process can be a pain (especially the configuration). Bundling via MVC requires developers to worry more about which scripts to include, but has less overall development friction.

    3. TypeScript or JavaScript? Regardless of frameworks, our developers are going to need to learn the basics. TypeScript is more like C# and MAY be easier for C# developers to understand. However, learning TypeScript before JavaScript may hinder their mastery of JavaScript at the expense of efficiency.
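
    To make the second point concrete, this is roughly what the "feels like importing namespaces" style looks like under Require.js; the module and dependency names are illustrative, not from the original post:

        // a hedged sketch of an AMD module as Require.js would load it
        // (module names are illustrative; actual paths come from require.config)
        define(["jquery", "underscore"], function ($, _) {
            // dependencies arrive as function parameters, much like imported namespaces
            function renderCustomers(customers) {
                var rows = _.map(customers, function (c) {
                    return "<tr><td>" + _.escape(c.name) + "</td></tr>";
                });
                $("#customerTable tbody").html(rows.join(""));
            }

            // whatever is returned becomes the module's public surface
            return { renderCustomers: renderCustomers };
        });

    With plain bundles, the same file would instead rely on script order and globals, which is the friction trade-off the question describes.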


  • Google Analytics Social Tracking implementation. Is Google's example correct?

    - by s_a
    The current Google Analytics help page on social tracking (developers.google.com/analytics/devguides/collection/gajs/gaTrackingSocial?hl=es-419) links to this page with an example of the implementation: http://analytics-api-samples.googlecode.com/svn/trunk/src/tracking/javascript/v5/social/facebook_js_async.html

    I've followed the example carefully, yet social interactions are not registered. This is the webpage with the non-working setup: http://bit.ly/1dA00dY (obscured domain, as per Google's Webmaster Central recommendations for their product forums). This is the structure of the page:

    In the head: the ga async code copied from the Analytics page, a script tag linking to ga_social_tracking.js stored in the same domain, and the Twitter JS loading tag.

    In the body: the fb-root div, the Facebook async loading JS including the _ga.trackFacebook(); call, and the social buttons afterwards: a Like button (with the proper URL) and a Tweet button (with the proper handle).

    That's it. As far as I can tell, I have implemented it exactly like the example, but Likes and Tweets aren't registered. I have also altered ga_social_tracking.js to register the social interactions as events, adding the code below. It doesn't work either. What could be wrong? Thanks!

    Code added to ga_social_tracking.js:

        var url = document.URL;
        var category = 'Social Media';

        /* Facebook */
        FB.Event.subscribe('edge.create', function (href, widget) {
            _gaq.push(['_trackEvent', category, 'Facebook', url]);
        });

        /* Twitter */
        twttr.events.bind('tweet', function (event) {
            _gaq.push(['_trackEvent', category, 'Twitter', url]);
        });
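
    One hedged observation, not a confirmed diagnosis: handlers like the ones above only work if the FB and twttr objects already exist when the script runs, so a common fix is to register them inside each library's own ready hook. A sketch under that assumption:

        // a sketch (assuming the problem is load order): defer the subscriptions
        // until each social library signals it is ready, instead of running them
        // immediately when ga_social_tracking.js loads
        var url = document.URL;
        var category = 'Social Media';

        // Facebook's async loader calls window.fbAsyncInit once FB is usable
        // (this must be defined before the SDK script tag executes)
        window.fbAsyncInit = function () {
            FB.Event.subscribe('edge.create', function (href, widget) {
                _gaq.push(['_trackEvent', category, 'Facebook', url]);
            });
        };

        // Twitter's widgets.js loader exposes a ready queue for the same purpose
        twttr.ready(function (twttr) {
            twttr.events.bind('tweet', function (event) {
                _gaq.push(['_trackEvent', category, 'Twitter', url]);
            });
        });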


  • MVC 3 Remote Validation jQuery error on submit

    - by Richard Reddy
    I seem to have a weird issue with remote validation on my project. I am doing a simple validation check on an email field to ensure that it is unique. I've noticed that unless I put the cursor into the textbox and then remove it to trigger the validation at least once before submitting my form, I will get a javascript error:

        e[h] is not a function    jquery.min.js line 3

    If I try to resubmit the form after the above error is returned, everything works as expected. It's almost like the form tried to submit before waiting for the validation to return or something. Am I required to silently fire off a remote validation request on submit before submitting my form? (A sketch of that workaround follows this post.) Below is a snapshot of the code I'm using. (I've also tried GET instead of POST but I get the same result.) As mentioned above, the code works fine, but the form returns a jQuery error unless the validation is triggered at least once.

    Model:

        public class RegisterModel
        {
            [Required]
            [Remote("DoesUserNameExist", "Account", HttpMethod = "POST", ErrorMessage = "User name taken.")]
            [Display(Name = "User name")]
            public string UserName { get; set; }

            [Required]
            [Display(Name = "Firstname")]
            public string Firstname { get; set; }

            [Display(Name = "Surname")]
            public string Surname { get; set; }

            [Required]
            [Remote("DoesEmailExist", "Account", HttpMethod = "POST", ErrorMessage = "Email taken.", AdditionalFields = "UserName")]
            [Display(Name = "Email address")]
            public string Email { get; set; }

            [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 8)]
            [DataType(DataType.Password)]
            [Display(Name = "Password")]
            public string Password { get; set; }

            [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 8)]
            [DataType(DataType.Password)]
            [Display(Name = "Confirm password")]
            public string ConfirmPassword { get; set; }

            [Display(Name = "Approved?")]
            public bool IsApproved { get; set; }
        }

        public class UserRoleModel
        {
            [Display(Name = "Assign Roles")]
            public IEnumerable<RoleViewModel> AllRoles { get; set; }

            public RegisterModel RegisterUser { get; set; }
        }

    Controller:

        // POST: /Account/DoesEmailExist
        // passing in username so that I can ignore the same email address
        // for the same user on the edit page
        [HttpPost]
        public JsonResult DoesEmailExist([Bind(Prefix = "RegisterUser.Email")]string Email, [Bind(Prefix = "RegisterUser.UserName")]string UserName)
        {
            var user = Membership.GetUserNameByEmail(Email);
            if (!String.IsNullOrEmpty(UserName))
            {
                if (user == UserName)
                    return Json(true);
            }
            return Json(user == null);
        }

    View:

        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script>
        <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.17/jquery-ui.min.js" type="text/javascript"></script>
        <script type="text/javascript" src="/Content/web/js/jquery.unobtrusive-ajax.min.js"></script>
        <script type="text/javascript" src="/Content/web/js/jquery.validate.min.js"></script>
        <script type="text/javascript" src="/Content/web/js/jquery.validate.unobtrusive.min.js"></script>
        ......
        @using (Html.BeginForm())
        {
            @Html.AntiForgeryToken()
            <div class="titleh">
                <h3>Edit a user account</h3>
            </div>
            <div class="body">
                @Html.HiddenFor(model => model.RegisterUser.UserName)
                @Html.Partial("_CreateOrEdit", Model)
                <div class="st-form-line">
                    <span class="st-labeltext">@Html.LabelFor(model => model.RegisterUser.IsApproved)</span>
                    @Html.RadioButtonFor(model => model.RegisterUser.IsApproved, true, new { @class = "uniform" }) Active
                    @Html.RadioButtonFor(model => model.RegisterUser.IsApproved, false, new { @class = "uniform" }) Disabled
                    <div class="clear"></div>
                </div>
                <div class="button-box">
                    <input type="submit" name="submit" value="Save" class="st-button"/>
                    @Html.ActionLink("Back to List", "Index", null, new { @class = "st-clear" })
                </div>
            </div>
        }

    _CreateOrEdit partial view:

        @model Project.Domain.Entities.UserRoleModel
        <div class="st-form-line">
            <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Firstname)</span>
            @Html.TextBoxFor(m => m.RegisterUser.Firstname, new { @class = "st-forminput", @style = "width:300px" })
            @Html.ValidationMessageFor(m => m.RegisterUser.Firstname)
            <div class="clear"></div>
        </div>
        <div class="st-form-line">
            <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Surname)</span>
            @Html.TextBoxFor(m => m.RegisterUser.Surname, new { @class = "st-forminput", @style = "width:300px" })
            @Html.ValidationMessageFor(m => m.RegisterUser.Surname)
            <div class="clear"></div>
        </div>
        <div class="st-form-line">
            <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Email)</span>
            @Html.TextBoxFor(m => m.RegisterUser.Email, new { @class = "st-forminput", @style = "width:300px" })
            @Html.ValidationMessageFor(m => m.RegisterUser.Email)
            <div class="clear"></div>
        </div>

    Thanks, Rich
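
    The workaround the question hints at can be written directly: run client-side validation once in a submit handler, so the remote rule has fired before the post goes through. This is a hedged sketch of that idea, not a confirmed fix for the e[h] error:

        // a sketch of forcing validation before submit: jQuery Validate's .valid()
        // runs all rules on the form, which starts the [Remote] check for the email
        $(function () {
            $("form").on("submit", function (e) {
                if (!$(this).valid()) {
                    // block the post until the (asynchronous) remote rule has settled
                    e.preventDefault();
                }
            });
        });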


  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5, and along with them the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft's new jargon for a service, in case you're wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or as a Web API by itself. And you can also self-host Web API in your own applications from console, desktop or service applications. If you're interested in a high-level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

    Getting Started

    To start, I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I'll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

    Add a Web API Controller Class

    Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

    Figure 1: This is how you create a new Controller Class in Visual Studio

    Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I'll use a music album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength as well as an AlbumId that links them to the album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current member that I'll use for the examples. Before we look at what goes into the controller class though, let's hook up routing so we can access this new controller.

    Hooking up Routing in Global.asax

    To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
    Using an extension method on ASP.NET's static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, shown in Listing 1.

        using System;
        using System.Web.Routing;
        using System.Web.Http;

        namespace AspNetWebApi
        {
            public class Global : System.Web.HttpApplication
            {
                protected void Application_Start(object sender, EventArgs e)
                {
                    RouteTable.Routes.MapHttpRoute(
                        name: "AlbumVerbs",
                        routeTemplate: "albums/{title}",
                        defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" }
                    );
                }
            }
        }

    This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific route value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

    HTTP Verb Routing

    Routing in Web API can route requests by HTTP verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP verb routing, as shown in Listing 1. Notice that the route I've defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP verb in this route to determine the controller method to call: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against the route, query string or POST values. In lieu of the method name, the [HttpGet], [HttpPost], [HttpPut], [HttpDelete], etc. attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions. Although HTTP verb routing is a good practice for REST-style resource APIs, it's not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I'll talk more about alternate routes later. When you're finished with the initial creation of files, your project should look like Figure 2.

    Figure 2: The initial project has the new API Controller and Album model

    Creating a small Album Model

    Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2.
        public class Song
        {
            public string AlbumId { get; set; }
            [Required, StringLength(80)]
            public string SongName { get; set; }
            [StringLength(5)]
            public string SongLength { get; set; }
        }

        public class Album
        {
            public string Id { get; set; }
            [Required, StringLength(80)]
            public string AlbumName { get; set; }
            [StringLength(80)]
            public string Artist { get; set; }
            public int YearReleased { get; set; }
            public DateTime Entered { get; set; }
            [StringLength(150)]
            public string AlbumImageUrl { get; set; }
            [StringLength(200)]
            public string AmazonUrl { get; set; }

            public virtual List<Song> Songs { get; set; }

            public Album()
            {
                Songs = new List<Song>();
                Entered = DateTime.Now;

                // Poor man's unique Id off GUID hash
                Id = Guid.NewGuid().GetHashCode().ToString("x");
            }

            public void AddSong(string songName, string songLength = null)
            {
                this.Songs.Add(new Song()
                {
                    AlbumId = this.Id,
                    SongName = songName,
                    SongLength = songLength
                });
            }
        }

    Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

        public static class AlbumData
        {
            // sample data - static list
            public static List<Album> Current = CreateSampleAlbumData();

            /// <summary>
            /// Create some sample data
            /// </summary>
            /// <returns></returns>
            public static List<Album> CreateSampleAlbumData() { … }
        }

    You can check out the full code for the data generation online.

    Creating an AlbumApiController

    Web API shares many concepts with ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods, to retrieve a list of albums and a single album by its title.

        public class AlbumApiController : ApiController
        {
            public IEnumerable<Album> GetAlbums()
            {
                var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
                return albums;
            }

            public Album GetAlbum(string title)
            {
                var album = AlbumData.Current
                    .SingleOrDefault(alb => alb.AlbumName.Contains(title));
                return album;
            }
        }

    To access these two endpoints, you can use the following URLs in your browser:

        http://localhost/aspnetWebApi/albums
        http://localhost/aspnetWebApi/albums/Dirty%20Deeds

    Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter, mapped as optional in the route.
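
    Before moving on, a quick client-side smoke test of those two URLs; this snippet is mine, not from the article, and assumes the sample project paths above:

        // a hedged smoke test (not from the article): both verb-routed GET methods
        // are reachable with plain jQuery from a page in the same application
        $.getJSON("albums/", function (albums) {
            console.log(albums.length + " albums returned");
        });

        $.getJSON("albums/" + encodeURIComponent("Dirty Deeds"), function (album) {
            console.log(album.AlbumName + " by " + album.Artist);
        });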
    Content Negotiation

    When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

    Figure 3: Web API responses can vary depending on the browser used, demonstrating content negotiation in action as these two browsers send different HTTP Accept headers.

    Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) asks for:

        Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

    IE9 asks for:

        Accept: text/html, application/xhtml+xml, */*

    Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default, you don't have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code – Web API automatically performs the deserialization from the content.

    Accessing Web API JSON Data with jQuery

    A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

        $.getJSON("albums/", function (albums) {
            // make knockout template visible
            $(".album").show();

            // create view object and attach array
            var view = { albums: albums };
            ko.applyBindings(view);
        });

    Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

    Figure 4: The Album Display sample uses JSON data loaded from Web API.
    The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

        $(".albumlink").live("click", function () {
            var id = $(this).data("id"); // title
            $.getJSON("albums/" + id, function (album) {
                ko.applyBindings(album, $("#divAlbumDialog")[0]);
                $("#divAlbumDialog").show();
            });
        });

    Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

    Explicitly Overriding Output Format

    When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR). If automatic processing is not desirable in a particular controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
    To do this, I can adjust the output format and headers as shown in Listing 4.

        public HttpResponseMessage GetAlbums()
        {
            var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

            // Create a new HttpResponse with the Json Formatter explicitly
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new ObjectContent<IEnumerable<Album>>(
                albums, new JsonMediaTypeFormatter());

            // Get default formatter based on content negotiation
            //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

            resp.Headers.ConnectionClose = true;
            resp.Headers.CacheControl = new CacheControlHeaderValue();
            resp.Headers.CacheControl.Public = true;

            return resp;
        }

    This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the response instance using the Request.CreateResponse method:

        var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    This provides you an HttpResponseMessage that's pre-configured with the default formatter based on content negotiation. Once you have an HttpResponseMessage you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponseMessage than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

    Non-Serialized Results

    The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a controller method.

        [HttpGet]
        public HttpResponseMessage AlbumArt(string title)
        {
            var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
            if (album == null)
            {
                var resp = Request.CreateResponse<ApiMessageError>(
                    HttpStatusCode.NotFound,
                    new ApiMessageError("Album not found"));
                return resp;
            }

            // kinda silly - we would normally serve this directly
            // but hey - it's a demo.
            var http = new WebClient();
            var imageData = http.DownloadData(album.AlbumImageUrl);

            // create response and return
            var result = new HttpResponseMessage(HttpStatusCode.OK);
            result.Content = new ByteArrayContent(imageData);
            result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
            return result;
        }

    The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can't be found or a validation error – you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error, as it's an image).
    Note that if you are not using HTTP verb based routing, or are not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

        return Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found")
        );

    If the album can be found, the image will be returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

    Routing Again

    OK, let's get back to the image example. Using the original routing we set up with HTTP verb routing, there's no good way to serve the image. In order to return my album art image, I'd like to use a URL like this:

        http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

    In order to create a URL like this, I have to create a new controller, because my earlier routes pointed to the AlbumApiController using HTTP verb routing. HTTP verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP verbs. But you cannot mix action-based routing into an HTTP verb routing controller – you can only map HTTP verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP verb plus custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
    For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/rpc/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumRpcApi",
                action = "GetAlbums"
            }
        );

    Now I can use either of the following URLs to access the image:

        Custom route (/albums/{title}/image): http://localhost/aspnetWebApi/albums/PowerAge/image
        Action route (/albums/rpc/{action}/{title}): http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

    Sending Data to the Server

    To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP verb based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

        public HttpResponseMessage PostAlbum(Album album)
        {
            if (!this.ModelState.IsValid)
            {
                // my custom error class
                var error = new ApiMessageError() { message = "Model is invalid" };

                // add errors into our client error model for the client
                foreach (var prop in ModelState.Values)
                {
                    var modelError = prop.Errors.FirstOrDefault();
                    if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                        error.errors.Add(modelError.ErrorMessage);
                    else
                        error.errors.Add(modelError.Exception.Message);
                }

                return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
            }

            // update song id which isn't provided
            foreach (var song in album.Songs)
                song.AlbumId = album.Id;

            // see if album exists already
            var matchedAlbum = AlbumData.Current
                .SingleOrDefault(alb => alb.Id == album.Id ||
                                        alb.AlbumName == album.AlbumName);
            if (matchedAlbum == null)
                AlbumData.Current.Add(album);
            else
                matchedAlbum = album;

            // return a string to show that the value got here
            var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
            resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                             Encoding.UTF8, "text/plain");
            return resp;
        }

    The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the resulting ApiMessageError object. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client, along with my Conflict HTTP status code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
    The client jQuery code to call the POST operation is shown in Listing 7.

        var id = new Date().getTime().toString();
        var album = {
            "Id": id,
            "AlbumName": "Power Age",
            "Artist": "AC/DC",
            "YearReleased": 1977,
            "Entered": "2002-03-11T18:24:43.5580794-10:00",
            "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
            "AmazonUrl": "http://www.amazon.com/…",
            "Songs": [
                { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
                { "SongName": "Downpayment Blues", "SongLength": 4.22 },
                { "SongName": "Riff Raff", "SongLength": 2.42 }
            ]
        }

        $.ajax({
            url: "albums/",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            processData: false,
            beforeSend: function (xhr) {
                // not required since JSON is default output
                xhr.setRequestHeader("Accept", "application/json");
            },
            success: function (result) {
                // reload list of albums
                page.loadAlbums();
            },
            error: function (xhr, status, p3, p4) {
                var err = "Error";
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server via POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

        public HttpResponseMessage PutAlbum(Album album)
        {
            return PostAlbum(album);
        }

    To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

    Model Binding with UrlEncoded POST Variables

    In the example in Listing 7, I used JSON objects to post a serialized object to a server method that accepted a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to send plain URL-encoded POST values to the server. Web API supports model binding that works similarly (but not identically) to MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

        $.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

    Although the code looks very similar to the client code we used before when passing JSON, here the data passed is URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
    Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle either the JSON object or the URL-encoded POST values.

    Dynamic Access to POST Data

    There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have URL-encoded POST values, you can access them dynamically using a FormDataCollection:

        [HttpPost]
        public string PostAlbum(FormDataCollection form)
        {
            return string.Format("{0} - released {1}",
                form.Get("AlbumName"), form.Get("YearReleased"));
        }

    The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when running Web API hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content – the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

        Passing multiple POST parameters to Web API Controller Methods
        Mapping UrlEncoded POST Values in ASP.NET Web API

    Handling Delete Operations

    Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title:

        public HttpResponseMessage DeleteAlbum(string title)
        {
            var matchedAlbum = AlbumData.Current
                .Where(alb => alb.AlbumName == title)
                .SingleOrDefault();
            if (matchedAlbum == null)
                return new HttpResponseMessage(HttpStatusCode.NotFound);

            AlbumData.Current.Remove(matchedAlbum);

            return new HttpResponseMessage(HttpStatusCode.NoContent);
        }

    To call this action method using jQuery, you can use:

        $(".removeimage").live("click", function () {
            var $el = $(this).parent(".album");
            var txt = $el.find("a").text();
            $.ajax({
                url: "albums/" + encodeURIComponent(txt),
                type: "Delete",
                success: function (result) {
                    $el.fadeOut().remove();
                },
                error: jqError
            });
        });

    Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the query string.

    Routing Conflicts

    In all requests shown so far, with the exception of the AlbumArt image example, I used the HTTP verb routing that I set up in Listing 1.
    HTTP verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP verb routing only. You saw one example that didn't really fit – the return of an image, where I created a custom route albums/{title}/image that required the creation of a second controller and a custom route to work. HTTP verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs imposed by HTTP verb routing. To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP verb routing:

        [HttpGet]
        public IQueryable<Album> GetSortableAlbums()
        {
            var albums = AlbumData.Current;

            // generally should be done only on actual queryable results (EF etc.)
            // Done here because we're running with a static list, but otherwise it might be slow
            return albums.AsQueryable();
        }

    If you compile this code and now try to access the /albums/ link, you get an error: Multiple actions were found that match the request. HTTP verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL – it looks less like a method and more like a noun. I can then create a new route that handles direct action mapping:

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/rpc/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumRpcApi",
                action = "GetAlbums"
            }
        );

    As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

        http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

    Error Handling

    I've already done some minimal error handling in the examples. For example, in Listing 6 I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method like this:

        [HttpGet]
        public void ThrowException()
        {
            throw new UnauthorizedAccessException("Unauthorized Access Sucka");
        }

    You can call it with this:

        http://localhost/AspNetWebApi/albums/rpc/ThrowException

    The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

        GlobalConfiguration
            .Configuration
            .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

    If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action.
        [HttpGet]
        public void ThrowError()
        {
            var resp = Request.CreateResponse<ApiMessageError>(
                HttpStatusCode.BadRequest,
                new ApiMessageError("Your code stinks!"));
            throw new HttpResponseException(resp);
        }

    Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the exception filters installed and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:

        private void ThrowSafeException(string message,
            HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        {
            var errResponse = Request.CreateResponse<ApiMessageError>(statusCode,
                new ApiMessageError() { message = message });
            throw new HttpResponseException(errResponse);
        }

    You can then use it to output any captured errors from code:

        [HttpGet]
        public void ThrowErrorSafe()
        {
            try
            {
                List<string> list = null;
                list.Add("Rick");
            }
            catch (Exception ex)
            {
                ThrowSafeException(ex.Message);
            }
        }

    Exception Filters

    Another, more global solution is to create an exception filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation.

        public class UnhandledExceptionFilter : ExceptionFilterAttribute
        {
            public override void OnException(HttpActionExecutedContext context)
            {
                HttpStatusCode status = HttpStatusCode.InternalServerError;

                var exType = context.Exception.GetType();
                if (exType == typeof(UnauthorizedAccessException))
                    status = HttpStatusCode.Unauthorized;
                else if (exType == typeof(ArgumentException))
                    status = HttpStatusCode.NotFound;

                var apiError = new ApiMessageError() { message = context.Exception.Message };

                // create a new response and attach our ApiError object
                // which now gets returned on ANY exception result
                var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
                context.Response = errorResponse;

                base.OnException(context);
            }
        }

    Exception filter attributes can be assigned to an ApiController class like this:

        [UnhandledExceptionFilter]
        public class AlbumRpcApiController : ApiController

    or you can assign them globally to all controllers by adding them to the HTTP configuration's Filters collection:

        GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

    The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can, on failure, expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.
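
    To make that last point concrete, this is roughly what the AJAX side could look like with the filter above installed; a hedged sketch of mine, not code from the article:

        // a hedged client-side sketch (not from the article): with the exception
        // filter registered, an AJAX failure handler can rely on a JSON error body
        // instead of an IIS HTML error page
        $.ajax({
            url: "albums/rpc/ThrowException",
            dataType: "json"
        }).fail(function (xhr) {
            var err = { message: "Unknown error" };
            try { err = JSON.parse(xhr.responseText); } catch (ignore) { }
            alert(err.message);
        });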
    You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions – you often don't want to display error information from 'unknown' exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one example of how you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future.

    Summary

    Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think Microsoft has hit a home run with Web API.

    Related Resources

    Where does ASP.NET Web API fit?
    Sample Source Code on GitHub
    Passing multiple POST parameters to Web API Controller Methods
    Mapping UrlEncoded POST Values in ASP.NET Web API
    Creating a JSONP Formatter for ASP.NET Web API
    Removing the XML Formatter from ASP.NET Web API Applications

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api


  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind an Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the line below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty, but for some reason it does not work:

        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');

    What is the best approach and place to implement basic authentication in Kibana?

        /*! elastic.js - v1.1.1 - 2013-05-24
         * https://github.com/fullscale/elastic.js
         * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */

        /*jshint browser:true */
        /*global angular:true */
        'use strict';

        /* Angular.js service wrapping the elastic.js API. This module
           can simply be injected into your angular controllers. */
        angular.module('elasticjs.service', [])
            .factory('ejsResource', ['$http', function ($http) {

            return function (config) {

                var
                    // use existing ejs object if it exists
                    ejs = window.ejs || {},

                    /* results are returned as a promise */
                    promiseThen = function (httpPromise, successcb, errorcb) {
                        return httpPromise.then(function (response) {
                            (successcb || angular.noop)(response.data);
                            return response.data;
                        }, function (response) {
                            (errorcb || angular.noop)(response.data);
                            return response.data;
                        });
                    };

                // check if we have a config object
                // if not, we have the server url so
                // we convert it to a config object
                if (config !== Object(config)) {
                    config = {server: config};
                }

                // set url to empty string if it was not specified
                if (config.server == null) {
                    config.server = '';
                }

                /* implement the elastic.js client interface for angular */
                ejs.client = {
                    server: function (s) {
                        if (s == null) {
                            return config.server;
                        }
                        config.server = s;
                        return this;
                    },
                    post: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        console.log($http.defaults.headers);
                        path = config.server + path;
                        var reqConfig = {url: path, data: data, method: 'POST'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    get: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        // no body on get request, data will be request params
                        var reqConfig = {url: path, params: data, method: 'GET'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    put: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        var reqConfig = {url: path, data: data, method: 'PUT'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    del: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        var reqConfig = {url: path, data: data, method: 'DELETE'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    head: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        // no body on HEAD request, data will be request params
                        var reqConfig = {url: path, params: data, method: 'HEAD'};
                        return $http(angular.extend(reqConfig, config))
                            .then(function (response) {
                                (successcb || angular.noop)(response.headers());
                                return response.headers();
                            }, function (response) {
                                (errorcb || angular.noop)(undefined);
                                return undefined;
                            });
                    }
                };

                return ejs;
            };
        }]);

    UPDATE 1: I implemented Matt's suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact that I am running Kibana on port 81 and elasticsearch on 8181?

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.46.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.46.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    This is the response:

        HTTP/1.1 401 Authorization Required
        Date: Fri, 08 Nov 2013 23:47:02 GMT
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    UPDATE 2: Updated all instances with the modified headers in these Kibana files:

        root@localhost:/var/www/kibana# grep -r 'ejsResource(' .
        ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});

    And modified my vhost conf for the reverse proxy like this:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/cake2.2.4/.htpasswd
                Require valid-user
                Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT"
                Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization"
                Header always set Access-Control-Allow-Credentials "true"
                Header always set Cache-Control "max-age=0"
                Header always set Access-Control-Allow-Origin *
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>

    Apache sends back the new response headers, but the request header still seems to be wrong somewhere. Authentication just doesn't work.
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic but it appears that in order to solve my problem, it would be necessary to to make some very granular configurations on apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution. Just modify the vhost reverse proxy config to move the elastisearch server AND kibana on the same http port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>
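For what it's worth, the quick-and-dirty header approach can also be done in one place instead of being patched into every client method in the vendor file. A minimal Angular 1.x sketch; the module name and credentials are placeholders, and it uses the browser's built-in btoa() for the Base64 step that Base64Encode() was meant to perform:

// Sketch only: set the Authorization default once, app-wide.
// 'myApp' and the credentials below are placeholders, not Kibana's real names.
angular.module('myApp').run(['$http', function ($http) {
    $http.defaults.headers.common.Authorization = 'Basic ' + btoa('user:Password');
}]);

Note that this only affects requests made through that app's $http: the CORS preflight shown above is issued by the browser itself and never carries the Authorization header, which is why serving Kibana and elasticsearch from the same origin, as in the solution, sidesteps the problem entirely.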

    Read the article

  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I've included the complete code for the IndexedDbDataSource at the very bottom of this blog entry. What is IndexedDB? IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And, IndexedDB provides you with transactions, indices, and cursors – the elements of any modern database. An IndexedDB database might be different from the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store data in object stores. You store JavaScript objects in an IndexedDB object store. You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here's how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist:

var reqOpen = window.indexedDB.open("TasksDB", 2);
reqOpen.onupgradeneeded = function (evt) {
    var newDB = evt.target.result;
    newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
};
reqOpen.onsuccess = function () {
    var db = reqOpen.result;
    // Do something with db
};

When you call window.indexedDB.open(), and the database does not already exist, then the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version, and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with the database. You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store:

var transaction = db.transaction("tasks", "readwrite");
var reqAdd = transaction.objectStore("tasks").add({ name: "Feed the dog" });
reqAdd.onsuccess = function () {
    // Task added successfully
};

The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully then the success event is raised. Creating a WinJS IndexedDbDataSource The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object.
The List object enables you to represent a JavaScript array as a data source and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView then you have a choice. You can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source. Implementing the IListDataSource Interface You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); The code above is used to create a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called. A data adapter is passed to the _baseDataSourceConstructor() method. The code above creates a new method exposed by the IndexedDbDataSource named nuke(). The nuke() method deletes all of the objects from an object store. The code above also overrides a method named remove(). Our derived remove() method accepts any type of key and removes the matching item from the object store. Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface which contains the following methods: · change() · getCount() · insertAfter() · insertAtEnd() · insertAtStart() · insertBefore() · itemsFromDescription() · itemsFromEnd() · itemsFromIndex() · itemsFromKey() · itemsFromStart() · itemSignature() · moveAfter() · moveBefore() · moveToEnd() · moveToStart() · remove() · setNotificationHandler() · compareByIdentity Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source then you really only need to implement the getCount() and itemsFromIndex() methods. Implementing the getCount() Method The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store then this method would return the value 10,000. 
Here’s how I implemented the getCount() method: getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); } The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous which is a good thing because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete(). Implementing the itemsFromIndex method When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints. These are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints and you can return as many items before and after the request index as you please. Here’s how I implemented the itemsFromIndex method: itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); } In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null then you know that you have reached the end of the cursor and you can return the results. Some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface. In particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface. Each item should have a key and a data property. 
Implementing the insertAtEnd() Method When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source then you need to implement the insertAtEnd() method. Here's how I implemented the insertAtEnd() method for the IndexedDbDataSource: insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet = store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); } When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string and it uniquely represents the new item added to the data source. The value of the data property represents the new item itself. Implementing the remove() Method Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key and the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal(). Here's what the removeInternal() method looks like: setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete(key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); } The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully then the _notificationHandler.removed() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed. The way that you notify the data source is by calling the _notificationHandler.removed() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method.
You can raise the following types of notifications using the _notificationHandler: · beginNotifications() · changed() · endNotifications() · inserted() · invalidateAll() · moved() · removed() · reload() These methods are all part of the IListDataNotificationHandler interface in the WinJS library. Implementing the nuke() Method I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method: nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); } Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update. Using the IndexedDbDataSource To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting an item (swipe or right-click) and clicking the Delete button. Here’s the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>DataSources</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" /> <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script> <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script> <!-- DataSources references --> <link href="indexedDb.css" rel="stylesheet" /> <script type="text/javascript" src="indexedDbDataSource.js"></script> <script src="indexedDb.js"></script> </head> <body> <div id="tmplTask" data-win-control="WinJS.Binding.Template"> <div class="taskItem"> Id: <span data-win-bind="innerText:id"></span> <br /><br /> Name: <span data-win-bind="innerText:name"></span> </div> </div> <div id="lvTasks" data-win-control="WinJS.UI.ListView" data-win-options="{ itemTemplate: select('#tmplTask'), selectionMode: 'single' }"></div> <form id="frmAdd"> <fieldset> <legend>Add Task</legend> <label>New Task</label> <input id="inputTaskName" required /> <button>Add</button> </fieldset> </form> <button id="btnNuke">Nuke</button> <button id="btnDelete">Delete</button> </body> </html> And here is the JavaScript code for the TaskList app: /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" /> /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" /> function init() { WinJS.UI.processAll().done(function () { var lvTasks = document.getElementById("lvTasks").winControl; // Bind the ListView to its data source var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; // Wire-up Add, Delete, Nuke buttons document.getElementById("frmAdd").addEventListener("submit", function (evt) { evt.preventDefault(); tasksDataSource.beginEdits(); tasksDataSource.insertAtEnd(null, { name: document.getElementById("inputTaskName").value }).done(function (newItem) { tasksDataSource.endEdits(); document.getElementById("frmAdd").reset(); lvTasks.ensureVisible(newItem.index); }); }); document.getElementById("btnDelete").addEventListener("click", function () { if 
(lvTasks.selection.count() == 1) { lvTasks.selection.getItems().done(function (items) { tasksDataSource.remove(items[0].data.id); }); } }); document.getElementById("btnNuke").addEventListener("click", function () { tasksDataSource.nuke(); }); // This method is called to initialize the IndexedDb database function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } }); } document.addEventListener("DOMContentLoaded", init); The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code: var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database. The upgrade function creates a new object store named tasks with an auto-increment property named id: function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } The Complete Code for the IndexedDbDataSource Here’s the complete code for the IndexedDbDataSource: (function () { /************************************************ * The IndexedDBDataAdapter enables you to work * with a HTML5 IndexedDB database. *************************************************/ var IndexedDbDataAdapter = WinJS.Class.define( function (dbName, dbVersion, objectStoreName, upgrade, error) { this._dbName = dbName; // database name this._dbVersion = dbVersion; // database version this._objectStoreName = objectStoreName; // object store name this._upgrade = upgrade; // database upgrade script this._error = error || function (evt) { console.log(evt.message); }; }, { /******************************************* * IListDataAdapter Interface Methods ********************************************/ getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); }, itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); }, insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet 
= store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); }, setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, /***************************************** * IndexedDbDataSource Method ******************************************/ removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); }, nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); }, /******************************************* * Private Methods ********************************************/ _ensureDbOpen: function () { var that = this; // Try to get cached Db if (that._cachedDb) { return WinJS.Promise.wrap(that._cachedDb); } // Otherwise, open the database return new WinJS.Promise(function (complete, error, progress) { var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion); reqOpen.onerror = function (evt) { error(); }; reqOpen.onupgradeneeded = function (evt) { that._upgrade(evt); that._notificationHandler.invalidateAll(); }; reqOpen.onsuccess = function () { that._cachedDb = reqOpen.result; complete(that._cachedDb); }; }); }, _getObjectStore: function (type) { type = type || "readonly"; var that = this; return new WinJS.Promise(function (complete, error) { that._ensureDbOpen().then(function (db) { var transaction = db.transaction(that._objectStoreName, type); complete(transaction.objectStore(that._objectStoreName)); }); }); }, _get: function (key) { return new WinJS.Promise(function (complete, error) { that._getObjectStore().done(function (store) { var reqGet = store.get(key); reqGet.onerror = that._error; reqGet.onsuccess = function (item) { complete(item); }; }); }); } } ); var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); WinJS.Namespace.define("DataSources", { IndexedDbDataSource: IndexedDbDataSource }); })(); Summary In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications — such as a removed or invalidateAll notification — by taking advantage of the methods of the IListDataNotificationHandler interface.
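As a footnote to the summary: for small data sets, the simpler List-based approach mentioned near the top is still the quickest path. A rough sketch, assuming the tasks have already been read out of IndexedDB into a plain items array and the sample's lvTasks ListView is on the page:

// Sketch: bind a plain WinJS.Binding.List instead of a virtualized data source.
// 'items' is assumed to be a JavaScript array already loaded from IndexedDB.
var list = new WinJS.Binding.List(items);
document.getElementById("lvTasks").winControl.itemDataSource = list.dataSource;

The trade-off is that a List holds everything in memory and does no virtualization, which is exactly what the IndexedDbDataSource built above adds.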

    Read the article

  • Using custom fonts on Android

    - by karse23
Hi there, I'm trying to load a custom font as follows:

private Paint customFont18;

customFont18 = new Paint();
customFont18.setTextSize(18);
Typeface fontFace = Typeface.createFromAsset(getAssets(), "FONT.TTF");
customFont18.setTypeface(fontFace);

The getAssets() call fails with these errors:

- The method getAssets() is undefined for the type MyClass
- assetManager cannot be resolved to a variable

What is my problem? I've seen several examples like this but none works in my case. Thanks in advance.

    Read the article

  • Rounded Corners and Shadows &ndash; Dialogs with CSS

    - by Rick Strahl
Well, it looks like we've finally arrived at a place where at least all of the latest versions of mainstream browsers support rounded corners and box shadows. The two CSS properties that make this possible are box-shadow and border-radius. Both of these CSS properties are now supported in all the major browsers as shown in this chart from QuirksMode. In its simplest form you can use box-shadow and border-radius like this:

.boxshadow {
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
}
.roundbox {
    -moz-border-radius: 6px 6px 6px 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px 6px 6px 6px;
}

box-shadow: horizontal-shadow-pixels vertical-shadow-pixels blur-distance shadow-color

The box-shadow attributes specify the horizontal and vertical offset of the shadow, the blur distance (to give the shadow a smooth soft look) and a shadow color. The spec also supports multiple shadows separated by commas using the attributes above but we're not using that functionality here.

border-radius: top-left-radius top-right-radius bottom-right-radius bottom-left-radius

border-radius takes a pixel size for the radius for each corner going clockwise. CSS 3 also specifies each of the individual corner elements such as border-top-left-radius, but support for these is much less prevalent so I would recommend not using them for now until support improves. Instead use the single border-radius to specify all corners. Browser specific Support in older Browsers Notice that there are two variations: the actual CSS 3 properties (box-shadow and border-radius) and the browser specific ones (-moz, -webkit prefixes for FireFox and Chrome/Safari respectively) which work in slightly older versions of modern browsers before official CSS 3 support was added. The goal is to spread support as widely as possible and the prefix versions extend the range slightly more to those browsers that provided early support for these features. Notice that box-shadow and border-radius are used after the browser specific versions to ensure that the latter versions get precedence if the browser supports both (last assignment wins). Use the .boxshadow and .roundbox Styles in HTML To use these two styles to create a simple rounded box with a shadow you can use HTML like this:

<!-- Simple Box with rounded corners and shadow -->
<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="boxcontenttext">
        Simple Rounded Corner Box.
    </div>
</div>

which looks like this in the browser: This works across browsers and it's pretty sweet and simple. Watch out for nested Elements! There are a couple of things to be aware of however when using rounded corners. Specifically, you need to be careful when you nest other non-transparent content into the rounded box. For example check out what happens when I change the inside <div> to have a colored background:

<!-- Simple Box with rounded corners and shadow -->
<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="boxcontenttext" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

which renders like this: If you look closely you'll find that the inside <div>'s corners are not rounded and so 'poke out' slightly over the rounded corners. It looks like the rounded corners are 'broken' up instead of a solid rounded line around the corner, which is pretty ugly. The bigger the radius the more drastic this effect becomes.
To fix this issue the inner <div> also has to have rounded corners at the same or slightly smaller radius than the outer <div>. The fix is to simply also apply the roundbox style to the inner <div> in addition to the boxcontenttext style already applied:

<div class="boxcontenttext roundbox" style="background: khaki;">

The fixed display now looks proper: Separate Top and Bottom Elements This gets even a little more tricky if you have an element at the top or bottom only of the rounded box. What if you need to add something like a header or footer <div> that has a non-transparent background, which is a pretty common scenario? In those cases you want only the top or bottom corners rounded and not both. To make this work a couple of additional styles to round only the top and bottom corners can be created:

.roundbox-top {
    -moz-border-radius: 4px 4px 0 0;
    -webkit-border-radius: 4px 4px 0 0;
    border-radius: 4px 4px 0 0;
}
.roundbox-bottom {
    -moz-border-radius: 0 0 4px 4px;
    -webkit-border-radius: 0 0 4px 4px;
    border-radius: 0 0 4px 4px;
}

Notice that the radius used for the 'inside' rounding is smaller (4px) than the outside radius (6px). This is so the inner radius fills into the outer border – if you use the same size you may have some white space showing between inner and outer rounded corners. Experiment with values to see what works – in my experimenting the behavior across browsers here is consistent (thankfully). These styles can be applied in addition to other styles to make only the top or bottom portions of an element rounded. For example imagine I have styles like this:

.gridheader, .gridheaderbig, .gridheaderleft, .gridheaderright {
    padding: 4px 4px 4px 4px;
    background: #003399 url(images/vertgradient.png) repeat-x;
    text-align: center;
    font-weight: bold;
    text-decoration: none;
    color: khaki;
}
.gridheaderleft { text-align: left; }
.gridheaderright { text-align: right; }
.gridheaderbig { font-size: 135%; }

If I just apply, say, gridheader by itself in HTML like this:

<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="gridheaderleft">Box with a Header</div>
    <div class="boxcontenttext" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

This results in a pretty funky display – again due to the fact that the inner elements render square rather than rounded corners: If you look closely again you can see that both the header and the main content have square edges which jumps out at the eye. To fix this you can now apply the roundbox-top and roundbox-bottom to the header and content respectively:

<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="gridheaderleft roundbox-top">Box with a Header</div>
    <div class="boxcontenttext roundbox-bottom" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

Which now gives the proper display with rounded corners both on the top and bottom: All of this is sweet to be supported – at least by the newest browsers – without having to resort to images and nasty JavaScript solutions. While this is still not a mainstream feature for the majority of actually installed browsers, the majority of browser users are very likely to have this support as most browsers other than IE are actively pushing users to upgrade to newer versions. Since this is a 'visual display only' feature it degrades reasonably well in non-supporting browsers: you get an uninteresting square and non-shadowed browser box, but the display is still overall functional.
The main sticking point – as always – is Internet Explorer versions 8.0 and down as well as older versions of other browsers. With those browsers you get a functional view that is a little less interesting to look at obviously, but at least it's still functional. Maybe that's just one more incentive for people using older browsers to upgrade to a more modern browser :-) Creating Dialog Related Styles In a lot of my AJAX based applications I use popup windows which effectively work like dialogs. Using the simple CSS behaviors above, it's really easy to create some fairly nice looking overlaid windows with nothing but CSS. Here's what a typical 'dialog' I use looks like: The beauty of this is that it's plain CSS – no plug-ins or images (other than the gradients which are optional) required. Add jQuery-ui draggable (or ww.jquery.js as shown below) and you have a nice simple inline implementation of a dialog represented by a simple <div> tag. Here's the HTML for this dialog:

<div id="divDialog" class="dialog boxshadow" style="width: 450px;">
    <div class="dialog-header">
        <div class="closebox"></div>
        User Sign-in
    </div>
    <div class="dialog-content">
        <label>Username:</label>
        <input type="text" name="txtUsername" value=" " />
        <label>Password</label>
        <input type="text" name="txtPassword" value=" " />
        <hr />
        <input type="button" id="btnLogin" value="Login" />
    </div>
    <div class="dialog-statusbar">Ready</div>
</div>

Most of this behavior is driven by the 'dialog' styles which are fairly basic and easy to understand. They do use a few support images for the gradients which are provided in the sample I've provided. Here's what the CSS looks like:

.dialog {
    background: White;
    overflow: hidden;
    border: solid 1px steelblue;
    -moz-border-radius: 6px 6px 4px 4px;
    -webkit-border-radius: 6px 6px 4px 4px;
    border-radius: 6px 6px 3px 3px;
}
.dialog-header {
    background-image: url(images/dialogheader.png);
    background-repeat: repeat-x;
    text-align: left;
    color: cornsilk;
    padding: 5px;
    padding-left: 10px;
    font-size: 1.02em;
    font-weight: bold;
    position: relative;
    -moz-border-radius: 4px 4px 0px 0px;
    -webkit-border-radius: 4px 4px 0px 0px;
    border-radius: 4px 4px 0px 0px;
}
.dialog-top {
    -moz-border-radius: 4px 4px 0px 0px;
    -webkit-border-radius: 4px 4px 0px 0px;
    border-radius: 4px 4px 0px 0px;
}
.dialog-bottom {
    -moz-border-radius: 0 0 3px 3px;
    -webkit-border-radius: 0 0 3px 3px;
    border-radius: 0 0 3px 3px;
}
.dialog-content {
    padding: 15px;
}
.dialog-statusbar, .dialog-toolbar {
    background: #eeeeee;
    background-image: url(images/dialogstrip.png);
    background-repeat: repeat-x;
    padding: 5px;
    padding-left: 10px;
    border-top: solid 1px silver;
    border-bottom: solid 1px silver;
    font-size: 0.8em;
}
.dialog-statusbar {
    -moz-border-radius: 0 0 3px 3px;
    -webkit-border-radius: 0 0 3px 3px;
    border-radius: 0 0 3px 3px;
    padding-right: 10px;
}
.closebox {
    position: absolute;
    right: 2px;
    top: 2px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;
    opacity: 0.60;
    filter: alpha(opacity="80");
}
.closebox:hover {
    opacity: 1;
    filter: alpha(opacity="100");
}

The main style is the dialog class which is the outer box. It has the rounded border that serves as the outline. Note that I didn't add the box-shadow to this style because in some situations I just want the rounded box in an inline display that doesn't have a shadow, so it's still applied separately. dialog-header then has the rounded top corners and displays a typical dialog heading format.
dialog-bottom and dialog-top then provide the same functionality as roundbox-top and roundbox-bottom described earlier but are provided mainly in the stylesheet for consistency, to match the dialog's round edges and make the styles easier to remember and find in Intellisense as they show up in the same dialog- group. dialog-statusbar and dialog-toolbar are two elements I use a lot for floating windows – the toolbar serves for buttons and options and filters typically, while the status bar provides information specific to the floating window. Since the status bar is always on the bottom of the dialog it automatically handles the rounding of the bottom corners. Finally there's the closebox style which is to be applied to an empty <div> tag, typically in the header. What this does is render a close image that is by default low-lighted with a low opacity value, and then highlights when hovered over. All you'd have to do to handle the close operation is handle the onclick of the <div>. Note that the <div> right aligns so typically you should specify it before any other content in the header. Speaking of closable – some time ago I created a closable jQuery plug-in that basically automates this process and can be applied against ANY element in a page, automatically removing or closing the element with some simple script code. Using this you can leave out the <div> tag for closable and just do the following: To make the above dialog closable (and draggable), which makes it effectively an overlay window, you'd add jQuery.js and ww.jquery.js to the page:

<script type="text/javascript" src="../../scripts/jquery.min.js"></script>
<script type="text/javascript" src="../../scripts/ww.jquery.min.js"></script>

and then simply call:

<script type="text/javascript">
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({
                handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true; // true closes - false leaves open
                }
            });
    });
</script>

* ww.jquery.js emulates base features in jQuery-ui's draggable. If jQuery-ui is loaded its draggable version will be used instead, and voila: you now have a draggable and closable window – here in mid-drag: The dragging and closable behaviors are of course optional, but it's the final touch that provides dialog like window behavior. Relief for older Internet Explorer Versions with CSS Pie If you want to get these features to work with older versions of Internet Explorer all the way back to version 6 you can check out CSS Pie. CSS Pie provides an Internet Explorer behavior file that attaches to specific CSS rules and simulates these behaviors using script code in IE (mostly by implementing filters). You can simply add the behavior to each CSS style that uses box-shadow and border-radius like this:

.boxshadow {
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
    behavior: url(scripts/PIE.htc);
}
.roundbox {
    -moz-border-radius: 6px 6px 6px 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px 6px 6px 6px;
    behavior: url(scripts/PIE.htc);
}

CSS Pie requires the PIE.htc file on your server, referenced from each CSS style that needs it.
Note that the url() for IE behaviors is NOT CSS file relative like other CSS resources, but rather PAGE relative, so if you have more than one folder you probably need to reference the HTC file with a fixed path like this:

behavior: url(/MyApp/scripts/PIE.htc);

in the style. Small price to pay, but a royal pain if you have a common CSS file you use in many applications. Once the PIE.htc file has been copied and you have applied the behavior to each style that uses these new features Internet Explorer will render rounded corners and box shadows! Yay! Hurray for box-shadow and border-radius All of this functionality is very welcome natively in the browser. If you think this is all frivolous visual candy, you might be right :-), but if you take a look on the Web and search for rounded corner solutions that predate these CSS attributes you'll find a boatload of stuff from image files, to custom drawn content, to JavaScript solutions that play tricks with a few images. It's sooooo much easier to have this functionality built in and I for one am glad to see that it's finally becoming standard in the box. Still remember that when you use these new CSS features, they are not universal, and are not going to be really soon. Legacy browsers, especially old versions of Internet Explorer that can't be updated, will continue to be around and won't work with this shiny new stuff. I say screw 'em: let them get a decent recent browser or see a degraded and ugly UI. We have the luxury with this functionality in that it doesn't typically affect usability – it just doesn't look as nice. Resources Download the Sample The sample includes the styles and images and sample page as well as ww.jquery.js for the draggable/closable example. Online Sample Check out the sample described in this post online. Closable and Draggable Documentation Documentation for the closable and draggable plug-ins in ww.jquery.js. You can also check out the full documentation for all the plug-ins contained in ww.jquery.js here. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTML, CSS
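A small companion trick for deciding when a fallback like PIE is even needed: feature-detect native support at runtime instead of branching on browser version. A hedged sketch in plain JavaScript; the prefixed property names mirror the -moz/-webkit variants discussed above:

// Sketch: detect native border-radius support before wiring up a fallback.
var style = document.createElement("div").style;
var hasBorderRadius = "borderRadius" in style ||
                      "MozBorderRadius" in style ||
                      "WebkitBorderRadius" in style;
if (!hasBorderRadius) {
    // Old browser: fall back to PIE.htc styling or accept square corners.
}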

    Read the article

  • DataTables warning (table id = 'example-advanced'): Cannot reinitialise DataTable while using treetable and datatable at the same time

    - by Nyaro
DataTables warning (table id = 'example-advanced'): Cannot reinitialise DataTable while using treetable and datatable at the same time. Here is my code:

<script src="jquery-1.7.2.min.js"></script>
<script src='jquery.dataTables.min.js'></script>
<script src="jquery.treetable.js"></script>
<script>
    $("#example-advanced").treetable({ expandable: true });
</script>
<script>
    $('#example-advanced').dataTable( { "bSort": false } );
</script>

Actually I wanted to get rid of the sorting part of the DataTable because it was giving an error in the treetable display, so I want the sorting part of the DataTable out while keeping other functions like search and pagination. Please help me out. Thanks in advance.
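One plausible cause, based on the warning text alone: DataTables raises exactly this message when dataTable() ends up being called twice on the same table, which can happen when initialisation code runs from more than one script block or ready handler. In the legacy 1.9-era API the bRetrieve option makes a repeated call return the existing instance instead of warning. A sketch under that assumption:

$(function () {
    // Initialise treetable first, then DataTables exactly once.
    // bRetrieve tells any later dataTable() call on this table to
    // return the existing instance rather than reinitialise it.
    $("#example-advanced").treetable({ expandable: true });
    $("#example-advanced").dataTable({
        "bSort": false,
        "bRetrieve": true
    });
});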

    Read the article

  • If my team has low skill, should I lower the skill of my code?

    - by Florian Margaine
    For example, there is a common snippet in JS to get a default value: function f(x) { x = x || 10; } This kind of snippet is not easily understood by all the members of my team, their JS level being low. Should I not use this trick then? It makes the code less readable by peers, but more readable than the following according to any JS dev: function f(x) { if (!x) { x = 10; } } Sure, if I use this trick and a colleague sees it, then they can learn something. But the case is often that they see this as "trying to be clever". So, should I lower the level of my code if my teammates have a lower level than me?
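For what it's worth, there is a middle ground that reads plainly to a junior dev and also avoids a subtle bug in the || idiom, which silently replaces legitimate falsy arguments such as 0 or "". A sketch:

function f(x) {
    // Fall back only when the caller passed nothing at all;
    // unlike x = x || 10, this keeps legitimate values like 0.
    if (x === undefined) {
        x = 10;
    }
}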

    Read the article

  • Readability of || statements

    - by Devin G Rhode
On HTML5 Boilerplate they use this code for jQuery:

<!-- Load jQuery with a protocol relative URL; fall back to local if offline -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/libs/jquery-1.7.2.min.js"><\/script>')</script>

The question is simple: what's more readable?

if (!jQuery) document.write( -local jQuery- );

or

window.jQuery || document.write( -local jQuery- );
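One detail that arguably matters more than readability: the two forms are not equivalent. If the CDN script fails to load, jQuery is an undeclared identifier, so if (!jQuery) throws a ReferenceError, while window.jQuery merely reads a property that happens to be undefined. An if-based version therefore still needs the window prefix; a sketch:

<script>
  // Property access on window never throws for a missing global, so this
  // is the safe if-statement equivalent of the || one-liner above.
  if (!window.jQuery) {
    document.write('<script src="js/libs/jquery-1.7.2.min.js"><\/script>');
  }
</script>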

    Read the article

  • node-xmpp-bosh error on Ubuntu 11.10

    - by megueloby
I am a newbie in the Linux world. I want to implement a bosh server. Because it is hard on the Windows platform, I decided to deploy it on an Ubuntu virtual machine via VMware. I made the installation without problems, following the process on this page. Now I want to test my bosh server with the command sudo bosh or sudo /etc/init.d/bosh start. After typing those I get "Starting bosh server" on the terminal and then nothing. I looked in the bosh.err file and I see:

exec: 2: /usr/local/lib/bosh/run-server.js: Permission denied

I don't know why I get this error with sudo. When I try ls -l /usr/local/lib/bosh/run-server.js it shows:

-rw-r--r-- 1 root root 4889 2012-04-01 18:50 /usr/local/lib/bosh/run-server.js

How can I make bosh start?

    Read the article

  • Circle vs Edge collision detection / resolution

    - by topheman
I made a JavaScript class, Ball.js, that handles physics interactions between balls, as well as painting. In v1.0, ball vs ball collision detection and resolution is handled well. In the next version (v2), I'm trying to add edge collision handling. I'm having some problems; maybe you will be able to help me. All the v2 branch source code is in the github repository: https://github.com/topheman/Ball.js/tree/v2 The v2 demos (where you can see the bug I will be talking about): http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js: in the resolveEdgeCollision method, the bounce angle is inconsistent; and in the checkEdgeCollision method, if the ball's velocity (the length that it runs each frame) is higher than its diameter, it will eventually pass through an edge without triggering any collision. Any ideas?
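Without digging into the repository, the usual recipe for both symptoms is: (1) collide the circle against the closest point on the edge segment, not the infinite line, and reflect the velocity about the surface normal for a consistent bounce; and (2) for fast balls, sub-step the integration (or sweep the movement) so a ball cannot tunnel through an edge in one frame. A hedged sketch of the closest-point test and reflection, using plain objects rather than Ball.js's actual API:

// Sketch: circle vs. segment collision with velocity reflection.
// ball = {x, y, vx, vy, r}; edge = {x1, y1, x2, y2}. Names are illustrative.
function collideBallEdge(ball, edge) {
    var ex = edge.x2 - edge.x1, ey = edge.y2 - edge.y1;
    var lengthSq = ex * ex + ey * ey;
    // Project the ball centre onto the segment, clamped to its endpoints.
    var t = ((ball.x - edge.x1) * ex + (ball.y - edge.y1) * ey) / lengthSq;
    t = Math.max(0, Math.min(1, t));
    var cx = edge.x1 + t * ex, cy = edge.y1 + t * ey;
    var dx = ball.x - cx, dy = ball.y - cy;
    var distSq = dx * dx + dy * dy;
    if (distSq > ball.r * ball.r) return false;   // no contact this frame
    var dist = Math.sqrt(distSq) || 1;            // guard the degenerate case
    var nx = dx / dist, ny = dy / dist;           // contact normal
    // Push the ball out of the edge, then reflect: v' = v - 2(v.n)n
    ball.x = cx + nx * ball.r;
    ball.y = cy + ny * ball.r;
    var dot = ball.vx * nx + ball.vy * ny;
    ball.vx -= 2 * dot * nx;
    ball.vy -= 2 * dot * ny;
    return true;
}

For the tunnelling problem, running this test several times per frame with proportionally smaller velocity steps is the simplest fix; a swept-circle test is the more rigorous one.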

    Read the article

  • Largest successful JavaScript project? [closed]

    - by 80x24 console
    A common theme in the GWT community is "I wouldn't want to build a project of THAT size using a pure JavaScript library!" What is the largest project that you have successfully delivered with frontend functionality written in JavaScript? (not Java or GWT) Please provide at least a hand-wavy SLOC estimate of the unique JS code (not including libraries, frameworks, toolkits, test code, generated code, server-side processing such as PHP, etc.) that was in the finished product. Note to GWT advocates: Please read the question carefully before answering. I've heard plenty of stories about JS failures and GWT successes, but I'd like to hear some quantified JS successes. Note to mods: This is primarily a business-of-software question, not a tools question. It factors into a real-world business decision.

    Read the article

  • Is my webhost infected? [on hold]

    - by Svein Erik
I have 2 websites, and both websites, at random times, get redirected automatically to porn sites. I can't figure out if it's something in the code or if the webhost is infected somehow. The two sites are: http://www.storkas.com/vm and http://www.prowebdesign.no/vm. Where should I start to find out what's happening? I scanned the sites on hxxp://sitecheck.sucuri.net. One odd thing I saw there is a reference to hxxp://js.nohealth.org/js/jquery-1.1.js. I do not have this in my HTML code.

    Read the article

  • What set of tools make up "the rails way" of testing javascript in the browser?

    - by Jordan Feldstein
What's the consensus for doing in-browser (either headless or remote-controlled) testing of JavaScript? Unit testing my JS is nice, but can't protect against irresponsible changes to the DOM. Unit testing of the JS and functional testing of the views to make sure they both provide and utilize the same, correct DOM might work, but then the link between JS and DOM is being covered in two places, which seems brittle or cumbersome. Is there an acknowledged "Rails Way" to implement full-stack tests, where I can run my JavaScript against the DOM rendered by the rest of the app, and check the results? (Something like what PHPUnit and Selenium give us, but inside the Rails framework?)

    Read the article

  • Which is better? Native App or hybrid App?

    - by Prabakaran
I want to develop a simple app for iOS, Android and Windows Phone. It occurred to me that a simple HTML5, JS and CSS combination can work on all of these platforms, so I want to know which one will be more efficient. Time and coding effort are not a problem, but if I can achieve everything with HTML5/JS itself, I will choose hybrid development. I want to know the major differences between native and hybrid development, with an example (I know that the main difference is that HTML5/JS supports cross-platform use). Note: I am not making a game app.

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprang another requirement on me: Oh by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage, as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
            "<a href='http://west-wind.com'>West Wind</a> site. " +
            "<img src='http://west-wind.com/images/new.gif' /> ";

and the code to sanitize it with the AntiXSS Sanitizer class:

@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" }, { "iframe" }, { "form" }, { "object" },
            { "embed" }, { "link" }, { "head" }, { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in the blacklist
        /// </summary>
        /// <param name="html"></param>
        /// <returns></returns>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all attributes that start
        /// with onXXX.
        /// </summary>
        /// <param name="html"></param>
        /// <returns></returns>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);

            //return doc.DocumentNode.WriteTo();

            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }

                writer.Close();
            }
            doc = null;

            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (!string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];

                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links
                        else if (
                            //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                            val != null && val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // Remove CSS Expressions
                        else if (attr == "style" && val != null &&
                                 (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are stricter than mine were, you can make this much more restrictive by not allowing src and href attributes or CSS links if your HTML doesn't require them. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a whitelist approach if you want to go that route.
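To make usage concrete, here's a quick sketch of calling the class above (my own example - userSuppliedHtml is a hypothetical variable holding the posted markup):

// static call using the default blacklist shown above
string clean = HtmlSanitizer.SanitizeHtml(userSuppliedHtml);

// or override the blacklist - here only script and iframe elements are stripped
string cleaner = HtmlSanitizer.SanitizeHtml(userSuppliedHtml, "script", "iframe");

Note that passing a custom list replaces the default blacklist entirely (the code clears it first), so include every tag you want removed.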
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements that are not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

What a Pain

To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools. Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

Resources
Code on GitHub
Html Agility Pack
XSS Cheat Sheet
XSS Prevention Cheat Sheet
Microsoft Web Protection Library (AntiXss)
StackOverflow Links:
http://stackoverflow.com/questions/341872/html-sanitizer-for-net
http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Security  HTML  ASP.NET  JavaScript

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript serializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript and its overall model of extensibility was pretty limited (the JavaScript serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and it is pulled in as a NuGet dependency into Web API projects, which is great.

Dynamic JSON Parsing

One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place or don't want to create them just to import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

Why Dynamic JSON?

So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on my business entities that needed to receive the data - no need to map the entire Maps API structure.
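To give a taste of what that looks like before digging into the details below, here's a rough sketch using the JObject type introduced in the next sections - the response field names here are hypothetical, not the actual Places API structure:

// pull just a couple of values out of a large service response
JObject json = JObject.Parse(placesJsonString);

string name = (string)json["results"][0]["name"];
string address = (string)json["results"][0]["vicinity"];

No mapping types required - the rest of the large response is simply ignored.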
Getting JSON.NET

The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with:

PM> Install-Package Newtonsoft.Json

from the Package Manager Console, or by using Manage NuGet Packages in your project's References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

Creating JSON on the fly with JObject and JArray

Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and JArray for the actual collection of songs:

[TestMethod]
public void JObjectOutputTest()
{
    // strong typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here using class interface
    jsonObject.Add("Entered", DateTime.Now);

    // or cast to dynamic to dynamically add/read properties
    dynamic album = jsonObject;

    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    album.Artist = "AC/DC";
    album.YearReleased = 1976;

    album.Songs = new JArray() as dynamic;

    dynamic song = new JObject();
    song.SongName = "Dirty Deeds Done Dirt Cheap";
    song.SongLength = "4:11";
    album.Songs.Add(song);

    song = new JObject();
    song.SongName = "Love at First Feel";
    song.SongLength = "3:10";
    album.Songs.Add(song);

    Console.WriteLine(album.ToString());
}

This produces a complete JSON structure:

{
  "Entered": "2012-08-18T13:26:37.7137482-10:00",
  "AlbumName": "Dirty Deeds Done Dirt Cheap",
  "Artist": "AC/DC",
  "YearReleased": 1976,
  "Songs": [
    {
      "SongName": "Dirty Deeds Done Dirt Cheap",
      "SongLength": "4:11"
    },
    {
      "SongName": "Love at First Feel",
      "SongLength": "3:10"
    }
  ]
}

Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObjectProvider implementation in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast them to dynamic, and then add properties and items to them.
Always remember though, these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strong type instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance - you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you've used that before, you're already familiar with how the dynamic interface to the JSON objects works.

Importing JSON with JObject.Parse() and JArray.Parse()

The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way that dynamic types work: the type can't be determined from the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name ) in the actual method call. The JSON structure can be much more complex than this simple example.
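The Load() variant mentioned above isn't shown in the original examples; a minimal sketch (my own) of reading directly from a stream with a JsonTextReader might look like this:

// 'stream' is assumed to contain JSON data
using (var sr = new StreamReader(stream))
using (var reader = new JsonTextReader(sr))
{
    dynamic json = JObject.Load(reader);
    string name = json.Name;
}

This avoids buffering the whole document into a string first.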
Here's another example of an array of albums serialized to JSON and then parsed with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
{
    ""Id"": ""b3ec4e5c"",
    ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
    ""Artist"": ""AC/DC"",
    ""YearReleased"": 1976,
    ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
    ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
    ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
    ""Songs"": [
    {
        ""AlbumId"": ""b3ec4e5c"",
        ""SongName"": ""Dirty Deeds Done Dirt Cheap"",
        ""SongLength"": ""4:11""
    },
    {
        ""AlbumId"": ""b3ec4e5c"",
        ""SongName"": ""Love at First Feel"",
        ""SongLength"": ""3:10""
    },
    {
        ""AlbumId"": ""b3ec4e5c"",
        ""SongName"": ""Big Balls"",
        ""SongLength"": ""2:38""
    }
    ]
},
{
    ""Id"": ""7b919432"",
    ""AlbumName"": ""End of the Silence"",
    ""Artist"": ""Henry Rollins Band"",
    ""YearReleased"": 1992,
    ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
    ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
    ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
    ""Songs"": [
    {
        ""AlbumId"": ""7b919432"",
        ""SongName"": ""Low Self Opinion"",
        ""SongLength"": ""5:24""
    },
    {
        ""AlbumId"": ""7b919432"",
        ""SongName"": ""Grip"",
        ""SongLength"": ""4:51""
    }
    ]
}
]";

    JArray jsonVal = JArray.Parse(jsonString) as JArray;
    dynamic albums = jsonVal;

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API

Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the server. The following contrived example receives dynamic JSON input, then creates a new dynamic JSON object and returns it based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // Create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format, which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes an array of albums, picks out a single album and casts that album to a static Album instance:

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString) as JArray;

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // create an instance to serialize
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song()
            {
                SongName = "Dirty Deeds Done Dirt Cheap",
                SongLength = "4:11"
            },
            new Song()
            {
                SongName = "Love at First Feel",
                SongLength = "3:10"
            }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control (a quick sketch follows after the summary). The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
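As promised above, here's a rough sketch of the stream based JsonSerializer approach (my own example - the file path is illustrative, not from the original post):

var serializer = new JsonSerializer { Formatting = Formatting.Indented };

// serialize to a stream via a JsonTextWriter
using (var sw = new StreamWriter(@"c:\temp\album.json"))
using (var writer = new JsonTextWriter(sw))
{
    serializer.Serialize(writer, album);
}

// deserialize back from a stream via a JsonTextReader
using (var sr = new StreamReader(@"c:\temp\album.json"))
using (var reader = new JsonTextReader(sr))
{
    var album2 = serializer.Deserialize<Album>(reader);
}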
Resources
Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  Web Api  AJAX

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
The new ASP.NET Web API that Microsoft released alongside the MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework.

DataContractJsonSerializer sucks

As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

No support for untyped values (object, dynamic, anonymous types)
MS AJAX style date formatting
Ugly serialization formats for types like Dictionaries

To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario there is IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

public object GetAnonymousType()
{
    return new
    {
        name = "Rick",
        company = "West Wind",
        entered = DateTime.Now
    };
}

Basically anything that doesn't have an explicit type, DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections.

Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet on in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

What about JsonValue?

To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
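For the parsing scenario just mentioned, a minimal sketch (my own example, not from the post) looks like this:

// parse arbitrary JSON without mapping it to a .NET type first
var json = JsonValue.Parse(@"{""name"":""Rick"",""company"":""West Wind""}");

string name = (string)json["name"];
string company = (string)json["company"];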
For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

public JsonValue GetJsonValue()
{
    dynamic json = new JsonObject();
    json.name = "Rick";
    json.company = "West Wind";
    json.entered = DateTime.Now;

    dynamic address = new JsonObject();
    address.street = "32 Kaiea";
    address.zip = "96779";

    json.address = address;

    dynamic phones = new JsonArray();
    json.phoneNumbers = phones;

    dynamic phone = new JsonObject();
    phone.type = "Home";
    phone.number = "808 123-1233";
    phones.Add(phone);

    phone = new JsonObject();
    phone.type = "Mobile";
    phone.number = "808 123-1234";
    phones.Add(phone);

    //var jsonString = json.ToString();

    return json;
}

which produces the following output (formatted here for easier reading):

{
    name: "Rick",
    company: "West Wind",
    entered: "2012-03-08T15:33:19.673-10:00",
    address:
    {
        street: "32 Kaiea",
        zip: "96779"
    },
    phoneNumbers: [
    {
        type: "Home",
        number: "808 123-1233"
    },
    {
        type: "Mobile",
        number: "808 123-1234"
    }]
}

If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it converted into a JsonValue dictionary. It's a manual process.

Using alternate Serializers in Web API

So, currently the default serializer in Web API is DataContractJsonSerializer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

using System;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using System.Web.Script.Serialization;
using System.Json;
using System.IO;

namespace Westwind.Web.WebApi
{
    public class JavaScriptSerializerFormatter : MediaTypeFormatter
    {
        public JavaScriptSerializerFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structures - use default for that
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream,
            System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var ser = new JavaScriptSerializer();
                string json;

                using (var sr = new StreamReader(stream))
                {
                    json = sr.ReadToEnd();
                    sr.Close();
                }

                object val = ser.Deserialize(json, type);
                return val;
            });

            return task;
        }

        protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream,
            System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
            System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var ser = new JavaScriptSerializer();
                var json = ser.Serialize(value);

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}

Formatter implementation is pretty simple: you override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow the JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them, they'd try to render the dictionaries, which is very undesirable.
If you'd rather use Json.NET, here's the JSON.NET version of the formatter:

// this code requires a reference to JSON.NET in your project
#if true
using System;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using System.Web.Script.Serialization;
using System.Json;
using Newtonsoft.Json;
using System.IO;
using Newtonsoft.Json.Converters;

namespace Westwind.Web.WebApi
{
    public class JsonNetFormatter : MediaTypeFormatter
    {
        public JsonNetFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structures - use default for that
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream,
            System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                var sr = new StreamReader(stream);
                var jreader = new JsonTextReader(sr);

                var ser = new JsonSerializer();
                ser.Converters.Add(new IsoDateTimeConverter());

                object val = ser.Deserialize(jreader, type);
                return val;
            });

            return task;
        }

        protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream,
            System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
            System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                                          new JsonConverter[1] { new IsoDateTimeConverter() });

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}
#endif

One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates that I would expect any JSON serializer to output these days.

Hooking up the Formatters

Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

protected void Application_Start(object sender, EventArgs e)
{
    // Action based routing (used for RPC calls)
    RouteTable.Routes.MapHttpRoute(
        name: "StockApi",
        routeTemplate: "stocks/{action}/{symbol}",
        defaults: new
        {
            symbol = RouteParameter.Optional,
            controller = "StockApi"
        }
    );

    // WebApi Configuration to hook up formatters and message handlers - optional
    RegisterApis(GlobalConfiguration.Configuration);
}

public static void RegisterApis(HttpConfiguration config)
{
    // Add JavaScriptSerializer formatter instead - add at top to make default
    //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

    // Add Json.net formatter - add at the top so it fires first!
    // This leaves the old one in place so JsonValue/JsonObject/JsonArray are still handled
    config.Formatters.Insert(0, new JsonNetFormatter());
}

One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything, and so it's hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since the formatters process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

Summary

Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of integrating it now at the last minute, especially given that Json.NET has a similar set of lower level JSON objects (JsonValue/JsonObject etc.) which now end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.NET). For years I've been using my own JSON serializer because the built in choices were so limited. However, with an official endorsement of Json.NET I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
I've been working quite a bit with Windows Services in recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services - or anything self-hosted for that matter - seem convoluted.

In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows Service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard service or Web app template.

Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service into an ASP.NET Web app instead.

The Missing Link for ASP.NET as a Service: Auto-Loading

I've had moments in the past where I wanted to run a 'service like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great and it's still running, doing its job today.

The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that your ASP.NET app becomes active immediately and Application_Start is fired, making sure your app stays up and running at all times.
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

Getting started with Application Initialization

As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. In IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager. This is an optional component, so make sure you explicitly select it.

IIS Configuration for Application Initialization

Initialization needs to be applied on the Application Pool as well as at the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.

Start with the Application Pool: here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is true by default and controls the starting of the application pool itself, while the AlwaysRunning setting is required in order to launch the application. Without the latter set, the site settings have no effect.

Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true.

At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

<system.webServer>
  <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
    <add initializationPage="ping.ashx" />
  </applicationInitialization>
</system.webServer>

This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after an Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea, since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/), and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
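The ping.ashx handler itself isn't shown in the original post; a minimal version might look something like this (a sketch on my part - any fast managed endpoint will do):

using System.Web;

public class Ping : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // trivial response that forces the ASP.NET pipeline to spin up
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}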
Ideally though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

What about AppDomain Restarts?

In addition to full worker process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

Files are updated in the BIN folder
Web Deploy to your site
web.config is changed
Hard application crash

These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

protected void Application_End()
{
    var client = new WebClient();
    var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
    client.DownloadString(url);
    Trace.WriteLine("Application Shut Down Ping: " + url);
}

which fires a request at an ASP.NET URL on the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

Manual Configuration in ApplicationHost.config

The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them. When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me - with Mr. Murphy in his best form - the module registration did not occur and I had to manually add it:

<globalModules>
  <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
</globalModules>

Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

<system.applicationHost>
  <applicationPools>
    <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning"
         managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
      <processModel identityType="LocalSystem" setProfileEnvironment="true" />
    </add>
  </applicationPools>
  <sites>
    <site name="Default Web Site" id="1">
      <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
        <virtualDirectory path="/" physicalPath="C:\Clients\…" />
      </application>
    </site>
  </sites>
</system.applicationHost>

On the Application Pool make sure to set the autoStart and startMode attributes to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

ASP.NET as a Service?

In the particular application I'm working on currently, we have a queue manager that runs as a standalone service that polls a database queue, picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
In order for this service to work, all it needs is a long running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

public class Global : System.Web.HttpApplication
{
    private static ApplicationScheduler scheduler;
    private static ServiceLauncher launcher;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Pings the service and ensures it stays alive
        scheduler = new ApplicationScheduler()
        {
            CheckFrequency = 600000
        };
        scheduler.Start();

        launcher = new ServiceLauncher();
        launcher.Start();

        // register so shutdown is controlled
        HostingEnvironment.RegisterObject(launcher);
    }
}

By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation is very similar.

In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

Registering a Background Object with ASP.NET

Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

public void Stop(bool immediate = false)
{
    LogManager.Current.LogInfo("QueueManager Controller Stopped.");

    Controller.StopProcessing();
    Controller.Dispose();

    Thread.Sleep(1500); // give background threads some time

    HostingEnvironment.UnregisterObject(this);
}

Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.

Testing it out

I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
Startup feels faster because of the preload.

Starting and Stopping the 'Service'

Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager, the SignalR service and front monitoring app have a play and stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appcmd tool. To stop:

> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

and to start:

> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

What's not to like?

There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires it, be offloaded to its own Web server.

Easier to work with

But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

Summary

Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status easily to the outside world. Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service, it's easy to do and you can leverage the same tool chain you're already using for other Web projects.
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back with a completely new authentication scheme that's described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…

Signing Up

The first step required is to create a Windows Azure MarketPlace account:

Go to: https://datamarket.azure.com/
Sign in with your Windows Live Id
If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration.
Once you're signed in you can start adding services.
Click on the Data link on the main page
Select Microsoft Translator from the list

This adds the Microsoft Bing Translator to your services.

Pricing

The page shows the pricing matrix and the free tier, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site.

Register your Application

Once you've created the data association with Translator, the next step is registering your application. To do this you need to access your developer account:

Go to https://datamarket.azure.com/developer/applications/register
Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!)
Provide your name
The client secret is auto-created - this becomes your 'password'
For the redirect url provide any https url: https://microsoft.com works
Give this application a description of your choice so you can identify it in the list of apps

Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
    Oddly, the applications page is hidden from the Azure Portal UI - I couldn't find a direct link from anywhere on the site back to the page where I can examine my developer application keys. To find them you can go to: https://datamarket.azure.com/developer/applications

    You can come back here to look at your registered applications and pick up the ClientId and ClientSecret. Fun, eh? But we're now ready to actually call the API and do some translating.

    Using the Bing Translate API
    The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll actually need two services: you have to call an authentication API first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency.

    Authentication
    The first step is authentication. The service uses OAuth with a bearer token that has to be passed to the translator API. The authentication call retrieves the OAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means that for Web backend requests you'll typically have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the authentication API use code like this:

    /// <summary>
    /// Retrieves an OAuth authentication token to be used on the translate
    /// API request. The result string needs to be passed as a bearer token
    /// to the translate API.
    ///
    /// You can find the client ID and secret (or register a new one) at:
    /// https://datamarket.azure.com/developer/applications/
    /// </summary>
    /// <param name="clientId">The client ID of your application</param>
    /// <param name="clientSecret">The client secret or password</param>
    /// <returns>The access token, or null on failure</returns>
    public string GetBingAuthToken(string clientId = null, string clientSecret = null)
    {
        string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

        if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
        {
            ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
            return null;
        }

        var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                     "&client_secret={1}" +
                                     "&scope=http://api.microsofttranslator.com",
                                     HttpUtility.UrlEncode(clientId),
                                     HttpUtility.UrlEncode(clientSecret));

        // POST the auth data to the OAuth API
        string res, token;
        try
        {
            var web = new WebClient();
            web.Encoding = Encoding.UTF8;
            res = web.UploadString(authBaseUrl, postData);
        }
        catch (Exception ex)
        {
            ErrorMessage = ex.GetBaseException().Message;
            return null;
        }

        var ser = new JavaScriptSerializer();
        var auth = ser.Deserialize<BingAuth>(res);
        if (auth == null)
            return null;

        token = auth.access_token;
        return token;
    }

    private class BingAuth
    {
        public string token_type { get; set; }
        public string access_token { get; set; }
    }

    This code basically takes the client id and secret and POSTs them at the OAuth endpoint, which returns a JSON string. Here I use the JavaScriptSerializer to deserialize the JSON into a custom object created just for deserialization. You can also use JSON.NET and dynamic deserialization if you're already using JSON.NET in your app, in which case you don't need the extra type. In the library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
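    If you do want to avoid re-authenticating on every call, a simple token cache is enough. Here's a minimal sketch of the ASP.NET Cache approach hinted at above - the method name and cache key are my own, and the 9 minute window assumes the documented ~10 minute token lifetime with a safety margin:

    // requires System.Web; wraps GetBingAuthToken with a short-lived cache entry
    public string GetCachedBingAuthToken(string clientId, string clientSecret)
    {
        var token = HttpRuntime.Cache["BingAuthToken"] as string;
        if (token == null)
        {
            token = GetBingAuthToken(clientId, clientSecret);
            if (token != null)
                HttpRuntime.Cache.Insert("BingAuthToken", token, null,
                    DateTime.UtcNow.AddMinutes(9),
                    System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return token;
    }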
    Translation
    Once you have the authentication token you can pass it to the translate API. The auth token is passed as an Authorization header, and the value is prefixed with a 'Bearer ' prefix. Here's what the simple Translate API call looks like:

    /// <summary>
    /// Uses the Bing API service to perform translation.
    /// Bing can translate up to 1000 characters.
    ///
    /// Requires that you provide a ClientId and ClientSecret
    /// or set the configuration values for these two.
    ///
    /// More info on setup:
    /// http://www.west-wind.com/weblog/
    /// </summary>
    /// <param name="text">Text to translate</param>
    /// <param name="fromCulture">Two letter culture name</param>
    /// <param name="toCulture">Two letter culture name</param>
    /// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
    /// If not passed the default keys from the .config file are used, if any</param>
    /// <returns>The translated string, or null on failure</returns>
    public string TranslateBing(string text, string fromCulture, string toCulture,
                                string accessToken = null)
    {
        string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

        if (accessToken == null)
        {
            accessToken = GetBingAuthToken();
            if (accessToken == null)
                return null;
        }

        string res;
        try
        {
            var web = new WebClient();
            web.Headers.Add("Authorization", "Bearer " + accessToken);

            string ct = "text/plain";
            string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                            HttpUtility.UrlEncode(text),
                                            fromCulture, toCulture,
                                            HttpUtility.UrlEncode(ct));

            web.Encoding = Encoding.UTF8;
            res = web.DownloadString(serviceUrl + postData);
        }
        catch (Exception e)
        {
            ErrorMessage = e.GetBaseException().Message;
            return null;
        }

        // the result is a single XML element fragment
        var doc = new XmlDocument();
        doc.LoadXml(res);
        return doc.DocumentElement.InnerText;
    }

    The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method retrieve the token on its own. In my case I'm using this inside of a Web application, and in that situation I simply re-authenticate every time, as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes, and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string.

    Using the Wrapper Methods
    It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

    [TestMethod]
    public void TranslateBingWithAuthTest()
    {
        var translate = new TranslationServices();

        string clientId = DbResourceConfiguration.Current.BingClientId;
        string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

        string auth = translate.GetBingAuthToken(clientId, clientSecret);
        Assert.IsNotNull(auth);

        string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
        Assert.IsNotNull(text, translate.ErrorMessage);
        Console.WriteLine(text);
    }

    [TestMethod]
    public void TranslateBingIntegratedTest()
    {
        var translate = new TranslationServices();
        string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
        Assert.IsNotNull(text, translate.ErrorMessage);
        Console.WriteLine(text);
    }
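    If you want to call the translation from client script in a Web app, wiring the service into a controller action is about all it takes. A minimal sketch - the controller name, action and route are my own invention, not part of the article's library:

    using System.Web.Mvc;

    public class TranslationController : Controller
    {
        // e.g. GET /Translation/Translate?text=Hello&from=en&to=de
        [HttpGet]
        public ActionResult Translate(string text, string from, string to)
        {
            var service = new TranslationServices();
            string result = service.TranslateBing(text, from, to);
            if (result == null)
                return new HttpStatusCodeResult(500, service.ErrorMessage);
            return Content(result, "text/plain");
        }
    }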
    Other API Methods
    The Translate API has a number of methods available; this one is the simplest, but probably also the most common one, translating a single string. You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx

    SOAP and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx

    These links will be your starting points for calling other methods in this API.

    Dual Interface
    I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs. [Screenshot: the admin form showing the Google and Bing results side by side.] As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field.

    Lost in Translation
    There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy - at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…

    Related Posts
    Translation method Source on GitHub
    Translating with Google Translate without Google API Keys
    Creating a data-driven ASP.NET Resource Provider

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Localization  ASP.NET  .NET

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. It provides a host of great new functionality that unifies features of the various AJAX/REST APIs Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a more consistent whole. Web API addresses many of the concerns developers had with those older APIs, namely that it was very difficult to build consistent, REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs centered on resources and HTTP verbs. RPC style calls, which are common with AJAX callbacks in Web applications, have gotten a lot less attention, and there are a few scenarios that are not that obvious - especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style callbacks.

    RPC vs. 'Proper' REST
    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls simply map a server side operation, passing in parameters and receiving a typed result, with parameters and result values marshaled over HTTP. Typically RPC calls - like SOAP calls - are always POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for the typical private AJAX backend operations of a Web site I'd wager that a large percentage of Web API use cases will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that need live validation against data, filling data based on user input, or small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both the RPC abstraction and the HTTP verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

    Action Routing for RPC Style Calls
    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP verb based routing endpoints. Verb based routing essentially maps a controller, and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs, but doesn't work so well when you have many operational methods in a single controller. HTTP verb routing is limited to the few HTTP verbs available (plus separate method signatures), and - worse - you can't easily extend the controller with custom routes or action routing beyond that.
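    For contrast, here's roughly what the default verb based route from the Web API project template looks like - a sketch from memory, so check your own route registration for the exact form. With a route like this, the method invoked is picked purely by HTTP verb, which is exactly why a controller with many POST operations runs out of room:

    RouteTable.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );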
    Thankfully Web API also supports action based routing, which allows you to create RPC style endpoints fairly easily:

    RouteTable.Routes.MapHttpRoute(
        name: "AlbumRpcApiAction",
        routeTemplate: "albums/{action}/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumApi",
            action = "GetAlbums"
        }
    );

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself, or on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

    [HttpPost]
    public string PostAlbum(Album album)
    {
        return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
    }

    There are actually two ways to call this endpoint (albums/PostAlbum):

    Using the Model Binder with plain POST values
    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter: each property is matched to its matching POST value. This works similar to the way MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

    $.ajax({
        url: "albums/PostAlbum",
        type: "POST",
        data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
        success: function (result) {
            alert(result);
        },
        error: function (xhr, status, p3, p4) {
            var err = "Error " + " " + status + " " + p3;
            if (xhr.responseText && xhr.responseText[0] == "{")
                err = JSON.parse(xhr.responseText).message;
            alert(err);
        }
    });

    The POST data for this request goes out as plain urlencoded form values (AlbumName=Dirty+Deeds&Entered=5%2F1%2F2012). The model binder and its straight form based POST mechanism are great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter
    The other option is to post data using a JSON string. The process is similar, except that you create a JavaScript object and serialize it to JSON first:

    album = {
        AlbumName: "PowerAge",
        Entered: new Date(1977, 0, 1)
    }

    $.ajax({
        url: "albums/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify(album),
        success: function (result) {
            alert(result);
        }
    });

    Here the data is sent as a JSON object rather than form data, and a trace reveals that the data goes over the wire as plain JSON in the POST body, which is a little more efficient since no UrlEncoding occurs. BTW, notice that Web API automatically deals with the date: I provided the date as a plain string rather than a JavaScript date value, and the formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller
    Single parameters work fine in either of these RPC scenarios, and that's to be expected: ModelBinding always works against a single object because it maps a model.
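    For reference, the post never shows the Album type itself. A minimal shape that matches the payloads used in these examples would be something like the following - my reconstruction, not the author's actual class, which may well carry more properties:

    public class Album
    {
        public string AlbumName { get; set; }
        public DateTime Entered { get; set; }
    }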
    But what happens when you want to pass multiple parameters? Consider an API controller method with a signature like the following:

    [HttpPost]
    public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with WCF REST and ASP.NET AJAX ASMX services, but as far as I can tell it's not directly possible using a POST operation with Web API. There are a few workarounds you can use to make this work:

    Use both POST *and* QueryString Parameters in Conjunction
    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with:

    /album/PostAlbum?userToken=sekkritt

    but that's not always possible. In this example it might not be a good idea to pass a user token on the query string. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping - they only work with simple types.

    Use a single Object that wraps the two Parameters
    If you go by service based architecture guidelines, every service method should accept and return a single value: the input wraps potentially multiple input parameters, and the output conveys status as well as the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:

    public PostAlbumResponse PostAlbum(PostAlbumRequest request)
    {
        var album = request.Album;
        var userToken = request.UserToken;

        return new PostAlbumResponse()
        {
            IsSuccess = true,
            Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
        };
    }

    with these support types:

    public class PostAlbumRequest
    {
        public Album Album { get; set; }
        public User User { get; set; }
        public string UserToken { get; set; }
    }

    public class PostAlbumResponse
    {
        public string Result { get; set; }
        public bool IsSuccess { get; set; }
        public string ErrorMessage { get; set; }
    }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

    var album = {
        AlbumName: "PowerAge",
        Entered: "1/1/1977"
    }
    var user = {
        Name: "Rick"
    }
    var userToken = "sekkritt";

    $.ajax({
        url: "samples/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
        success: function (result) {
            alert(result.Result);
        }
    });

    I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (say, an album or user token), you might be able to abstract this to a single object that gets reused for all methods, but that gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually accepts.

    Use JObject to parse multiple Property Values out of an Object
    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service, you always passed an object which contained properties for each parameter:

    { parm1: Value, parm2: Value2 }

    WCF REST/ASP.NET AJAX would then parse these top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer, you can simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result, and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API controller end:

    [HttpPost]
    public string PostAlbum(JObject jsonData)
    {
        dynamic json = jsonData;
        JObject jalbum = json.Album;
        JObject juser = json.User;
        string token = json.UserToken;

        var album = jalbum.ToObject<Album>();
        var user = juser.ToObject<User>();

        return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
    }

    This is clearly not as nice as having the parameters passed directly, but it works to let you pass multiple parameters and access them in Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject allows you to parse JObject instances into strongly typed objects, which is what enables us to retrieve the two objects passed as parameters from this jQuery code:

    var album = {
        AlbumName: "PowerAge",
        Entered: "1/1/1977"
    }
    var user = {
        Name: "Rick"
    }
    var userToken = "sekkritt";

    $.ajax({
        url: "samples/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
        success: function (result) {
            alert(result);
        }
    });

    Summary
    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, work a bit differently. It's not insurmountable, but it's good to know what options are available to simulate this behavior. Now let me say that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept a single parameter, or at most a single content parameter along with some identifier parameters that can be passed on the query string. But that doesn't mean you won't occasionally run into a situation where you need to pass several objects to the server, and all three of the options I mentioned have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web API

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >