Search Results

Search found 11639 results on 466 pages for 'numerical methods'.


  • Simplify your Ajax code by using jQuery Global Ajax Handlers and ajaxSetup low-level interface

    - by hajan
    Creating web applications with a consistent layout and user interface is very important for your users. In several ASP.NET projects I’ve completed lately, I’ve been using jQuery and jQuery Ajax heavily to achieve a rich user experience and seamless interaction between the client and the server. In almost all of them, I took advantage of the nice jQuery global Ajax handlers and jQuery Ajax functions. Let’s say you build a web application which mainly interacts using Ajax GET and POST requests to accomplish various operations. As you may already know, you can easily perform Ajax operations using the jQuery low-level Ajax method or the shorthand $.get, $.post, etc. A simple GET example:

        $.get("/Home/GetData", function (d) { alert(d); });

    As you can see, this is the simplest possible way to make an Ajax call. Behind the scenes, it constructs a low-level Ajax call by specifying all necessary information for the request, filling in default values for the required properties such as data type, content type, etc. If you want to have some more control over what is happening with your Ajax requests, you can easily take advantage of the global Ajax handlers. In order to register global Ajax handlers, the jQuery API provides a set of global Ajax methods. You can find all the methods at http://api.jquery.com/category/ajax/global-ajax-event-handlers/, and these are: ajaxComplete, ajaxError, ajaxSend, ajaxStart, ajaxStop and ajaxSuccess. The low-level Ajax interfaces (http://api.jquery.com/category/ajax/low-level-interface/) are: ajax, ajaxPrefilter and ajaxSetup. For global settings, I usually use ajaxSetup combined with the Ajax event handlers. $.ajaxSetup is very good for setting default values that you will use in all of your future Ajax requests, so that you won’t need to repeat the same properties all the time unless you want to override the default settings. Mainly, I am using the global ajaxSetup function similarly to the following way:

        $.ajaxSetup({
            cache: false,
            error: function (x, e) {
                if (x.status == 550)
                    alert("550 Error Message");
                else if (x.status == 403)
                    alert("403. Not Authorized");
                else if (x.status == 500)
                    alert("500. Internal Server Error");
                else
                    alert("Error...");
            },
            success: function (x) {
                // do something global on success...
            }
        });

    Now you can make Ajax calls using the low-level $.ajax interface and you don’t need to worry about specifying any of the properties we’ve set in the $.ajaxSetup function. So, you can create your own ways to handle various situations as your Ajax requests occur. Sometimes, some of your Ajax requests may take much longer than expected. So, in order to make a user-friendly UI that shows a progress bar or an animated image while something is happening behind the scenes, you can combine the ajaxStart and ajaxStop methods to do exactly that.
    First of all, add one

        <div id="loading" style="display:none;">
            <img src="@Url.Content("~/Content/images/ajax-loader.gif")" alt="Ajax Loader" />
        </div>

    anywhere on your master layout / master page (you can download nice Ajax loading images from http://ajaxload.info/). Then, add the following two handlers:

        $(document).ajaxStart(function () {
            $("#loading").attr("style", "position:absolute; z-index: 1000; top: 0px; " +
                "left:0px; text-align: center; display:none; background-color: #ddd; " +
                "height: 100%; width: 100%; /* These three lines are for transparency " +
                "in all browsers. */ -ms-filter:\"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)\";" +
                " filter: alpha(opacity=50); opacity:.5;");
            $("#loading img").attr("style", "position:relative; top:40%; z-index:5;");
            $("#loading").show();
        });

        $(document).ajaxStop(function () {
            $("#loading").removeAttr("style");
            $("#loading img").removeAttr("style");
            $("#loading").hide();
        });

    Note: while you could reorganize the style in a more reusable way, since these are global Ajax start/stop handlers, it is very possible that you won’t use the same style in other places. This way, you will see that for any Ajax request in your web site or application, the loading image will appear, providing a better user experience. What I’ve shown is several useful examples of how to simplify your Ajax code by using the global Ajax handlers and the low-level ajaxSetup function. Of course, you can do a lot more with the other methods as well. Hope this was helpful. Regards, Hajan

    Read the article

  • Using delegates in C# (Part 2)

    - by rajbk
    Part 1 of this post can be read here. We are now about to see the different syntaxes for invoking a delegate and some C# syntactic sugar which allows you to code faster. We have the following console application:

        1: public delegate double Operation(double x, double y);
        2:
        3: public class Program
        4: {
        5:     [STAThread]
        6:     static void Main(string[] args)
        7:     {
        8:         Operation op1 = new Operation(Division);
        9:         double result = op1.Invoke(10, 5);
        10:
        11:        Console.WriteLine(result);
        12:        Console.ReadLine();
        13:    }
        14:
        15:    static double Division(double x, double y) {
        16:        return x / y;
        17:    }
        18: }

    Line 1 defines a delegate type called Operation with input parameters (double x, double y) and a return type of double. On line 8, we create an instance of this delegate and set the target to be a static method called Division (line 15). On line 9, we invoke the delegate (one entry in the invocation list). The program outputs 2 when run. The language provides shortcuts for creating a delegate and invoking it (see lines 9 and 11 in the next listing). Line 9 is a syntactical shortcut for creating an instance of the delegate: the C# compiler will infer on its own what the delegate type is and produce intermediate language that creates a new instance of that delegate. Line 11 uses a syntactical shortcut for invoking the delegate by removing the Invoke method; the compiler sees the line and generates intermediate language which invokes the delegate. When this code is compiled, the generated IL will look exactly like the IL of the compiled code above.

        1: public delegate double Operation(double x, double y);
        2:
        3: public class Program
        4: {
        5:     [STAThread]
        6:     static void Main(string[] args)
        7:     {
        8:         //shortcut constructor syntax
        9:         Operation op1 = Division;
        10:        //shortcut invoke syntax
        11:        double result = op1(10, 2);
        12:
        13:        Console.WriteLine(result);
        14:        Console.ReadLine();
        15:    }
        16:
        17:    static double Division(double x, double y) {
        18:        return x / y;
        19:    }
        20: }

    C# 2.0 introduced Anonymous Methods. Anonymous methods avoid the need to create a separate method that has the same signature as the delegate type; instead, you write the method body in-line. There is an interesting fact about anonymous methods and closures which won’t be covered in depth here; use your favorite search engine ;-). We rewrite our code to use anonymous methods (see line 9):

        1: public delegate double Operation(double x, double y);
        2:
        3: public class Program
        4: {
        5:     [STAThread]
        6:     static void Main(string[] args)
        7:     {
        8:         //Anonymous method
        9:         Operation op1 = delegate(double x, double y) {
        10:            return x / y;
        11:        };
        12:        double result = op1(10, 2);
        13:
        14:        Console.WriteLine(result);
        15:        Console.ReadLine();
        16:    }
        17:
        18:    static double Division(double x, double y) {
        19:        return x / y;
        20:    }
        21: }

    We could rewrite our delegate to be of a generic type like so (see line 2 and line 9); you will see why soon:

        1: //Generic delegate
        2: public delegate T Operation<T>(T x, T y);
        3:
        4: public class Program
        5: {
        6:     [STAThread]
        7:     static void Main(string[] args)
        8:     {
        9:         Operation<double> op1 = delegate(double x, double y) {
        10:            return x / y;
        11:        };
        12:        double result = op1(10, 2);
        13:
        14:        Console.WriteLine(result);
        15:        Console.ReadLine();
        16:    }
        17:
        18:    static double Division(double x, double y) {
        19:        return x / y;
        20:    }
        21: }

    The .NET 3.5 framework introduced a whole set of predefined delegates for us, including:

        public delegate TResult Func<T1, T2, TResult>(T1 arg1, T2 arg2);

    Our code can be modified to use this delegate instead of the one we declared.
    Our delegate declaration has been removed and line 7 has been changed to use the Func delegate type:

        1: public class Program
        2: {
        3:     [STAThread]
        4:     static void Main(string[] args)
        5:     {
        6:         //Func is a delegate defined in the .NET 3.5 framework
        7:         Func<double, double, double> op1 = delegate(double x, double y) {
        8:             return x / y;
        9:         };
        10:        double result = op1(10, 2);
        11:
        12:        Console.WriteLine(result);
        13:        Console.ReadLine();
        14:    }
        15:
        16:    static double Division(double x, double y) {
        17:        return x / y;
        18:    }
        19: }

    .NET 3.5 also introduced lambda expressions. A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. We change our code to use a lambda expression:

        1: public class Program
        2: {
        3:     [STAThread]
        4:     static void Main(string[] args)
        5:     {
        6:         //lambda expression
        7:         Func<double, double, double> op1 = (x, y) => x / y;
        8:         double result = op1(10, 2);
        9:
        10:        Console.WriteLine(result);
        11:        Console.ReadLine();
        12:    }
        13:
        14:    static double Division(double x, double y) {
        15:        return x / y;
        16:    }
        17: }

    C# 3.0 introduced the keyword var (implicitly typed local variable), where the type of the variable is inferred from the type of the associated initializer expression. The idea below (line 7) is for op1 to be inferred as a delegate of type Func<double, double, double> at compile time:

        1: public class Program
        2: {
        3:     [STAThread]
        4:     static void Main(string[] args)
        5:     {
        6:         //implicitly typed local variable
        7:         var op1 = (x, y) => x / y;
        8:         double result = op1(10, 2);
        9:
        10:        Console.WriteLine(result);
        11:        Console.ReadLine();
        12:    }
        13:
        14:    static double Division(double x, double y) {
        15:        return x / y;
        16:    }
        17: }

    One correction is worth noting here: a lambda expression has no intrinsic type of its own, so this particular assignment is rejected by the C# 3.0 compiler with error CS0815 (a lambda cannot be assigned to an implicitly typed local variable; lambdas only gained a natural delegate type much later, in C# 10). For this example, stick with the explicitly typed Func<double, double, double> declaration shown above. Either way, you have seen how we can write code in fewer lines by using a combination of the Func delegate type and lambda expressions.
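    As an aside, the closures fact alluded to earlier is easy to demonstrate. The following minimal sketch is my own illustration, not code from the original post: a delegate created from a lambda captures the variable itself, not its value at creation time.

        using System;

        public class ClosureDemo
        {
            static void Main()
            {
                double divisor = 5;

                // The lambda captures the *variable* 'divisor', not its current value.
                Func<double, double> divide = x => x / divisor;

                Console.WriteLine(divide(10)); // 2

                divisor = 2; // mutating the captured variable...
                Console.WriteLine(divide(10)); // 5 - ...changes what the delegate computes
            }
        }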

    Read the article

  • When OneTug Just Isn’t Enough…

    - by onefloridacoder
    I stole that from the back of a T-shirt I saw at the Orlando Code Camp 2010.  This was my first code camp and my first time volunteering for an event like this as well.  It was an awesome day.  I cannot begin to count the “aaahh” and “I did not know I could do that” moments in the crowds, and for myself.  I think it was a great day of learning for everyone at all levels.  All of the presenters were different and provided great insights into the topics they were presenting.  Here’s a list of the ones that I attended.

    KodeFuGuru, “Pirates vs. Ninjas”

    He touched on many good topics to relax some of the ways we think when we are writing our code while keeping it good-looking, readable, etc.  As he pointed out in all of his examples, we might not always realize everything that’s going on under the covers.  He exposed a bug in his own code, and verbalized the mental gymnastics he went through when he knew there was something wrong with one of his IEnumerable implementations.  For me, it was great to hear that someone else labors over these gut reactions to code quickly snapped together, to the point that we rush to the refactor stage to fix what’s bothering us – and learn.  He has some content on extension methods that was very interesting.  My “that is so cool” moment was when he swapped out an AddEntity method on an entity class and used a With extension method instead.  Some of the LINQ scales fell off my eyes at that moment, and I realized my own code could be a lot more powerful (and readable) if I incorporate a few of these examples at the appropriate times.  And he cautioned as well: “don’t go crazy with this stuff”, there’s a place and time for everything.  One of his examples, demo’d toward the end of the talk, is on his site, where he’s chaining methods together; cool stuff. Quotes I liked:

    - “Extension Methods - Extension methods to put features back on the model type, without impacting the type.”
    - “Favor Declarative Code” – check out the ? and ?? operators if you’re not already using them.
    - “Favor Fluent Code”
    - “Avoid Pirate Ninja Zombies! If you see one, run!”

    I’m definitely going to be looking at “Extract Projection” when I get into VS2010.

    BDD 101 – Sean Chambers (http://github.com/schambers)

    This guy had a whole host of gremlins against him; final score Sean 5, Gremlins 1.  He ran the code samples from his github repo in the github code viewer, since the PC the school gave him to use didn’t have VS installed. He did a great job of converting the grammar between BDD and TDD, and showing how this style of development can be used in integration tests as well as the different types of gated builds on a CI box – he didn’t go into a discussion around CI, but we could infer that it could work.  Like when we use WSSF, it does cause a class explosion to happen; however, the amount of code per class is limited to just covering the concern at hand – no more, no less.  As in “When I, as a <Role>, expect {something} to happen, because {…}”.  This keeps us (the developers) from gold-plating our solutions and from creating waste.  He basically keeps the code that proves out the requirement to two lines.  Nice. He uses SpecUnit to merge this grammar into his .NET projects and gave an overview of how this ties into writing his own BDD tests.
    Some folks were familiar with Given / When / Then as story acceptance criteria, and here’s how he mapped it: “Given <Context> When <Something Happens> Then <I expect...>”.  There are a few base classes and overrides in the SpecUnit framework that help with setting up the context for each test, which looked very handy.

    Successfully Running Your Own Coding Business

    The speaker ran through a list of items that sounded like common-sense stuff: LLC, banking, separating expenses, etc.  Then he moved into role-playing between business owners and an ISV.  That was pretty good stuff; it pays to be a good listener all of the time, even if your client is sitting on the other side of the phone tearing your head off – but that’s all it is, so get used to it, it’s par for the course.  Oh yeah, always answering the phone was one simple thing you can do to move your business forward.  But like Cory Foy tweeted this week, “If you owe me a lot of money, don’t have a message that says you’re away for five weeks skiing in Colorado.”  Lots of food for thought that’s on my list of “todo’s and to-don’ts”.

    Speaker Idol

    Next, I had the pleasure of helping Russ Fustino tape this part of Code Camp as my primary volunteer opportunity that day.  You remember Russ, “know the code”, from the awesome Russ’ Tool Shed series.  He did a great job orchestrating and capturing the Speaker Idol finals.   So I didn’t actually miss any sessions, but was able to see three back to back in one sitting.  The idol finalists each gave a 10-minute talk on very deep subjects, with different styles of presentation.  No one walked away empty-handed; all were jobs very well done.  Russ has details on his site.  The pictures and video captured are supposed to be published on Channel 9 at a later date.  It was also a valuable experience to see what makes technical speakers effective in their talks.  I picked up quite a few speaking tips from what I heard from the judges and contestants.

    Design For Developers – Diane Leeper

    If you are a great developer, you’re probably a lousy designer.  Diane didn’t come to poke holes in what we think we can do with UI layout and design, but she provided some tools we can use to figure out metaphors for visualizing data.  If you need help with that, check out Silverlight Pivot – that’s what she was getting at.  I was first introduced to her at one of John Papa’s talks last year at a Lakeland User Group meeting, and she’s very passionate about design.  She was able to discuss different elements of Pivot that, to a developer, just looked cool. I believe she was providing the deck from her talk to folks afterwards, so send her an email if you’re interested.   She says she can talk about design for hours and hours – we all left that session believing her.

    Rinse and Repeat

    Orlando Code Camp 2010 was awesome, and I would totally do it again.  There were lots of folks from my shop there, and some that have left my shop to go elsewhere.  So it was a reunion of sorts and a great celebration of the simple fact that it’s great to be a developer and there’s a community that supports and recognizes it as well.  The sponsors were generous and the organizers were very tired, namely Esteban Garcia and Will Strohl, who were responsible for making a lot of this magic happen.  And if you don’t believe me, check out the chatter on Twitter.

    Read the article

  • Wrapping ASP.NET Client Callbacks

    - by Ricardo Peres
    Client Callbacks are probably the least known (and, I dare say, least loved) of all the AJAX options in ASP.NET, which also include the UpdatePanel, Page Methods and Web Services. The reason for that, I believe, is their relative complexity:

    - Get a reference to a JavaScript function;
    - Dynamically register a function that calls the above reference;
    - Have a JavaScript handler call the registered function.

    However, they have the nice advantage of being self-contained: they don’t need additional files, such as web services or JavaScript libraries, or static methods declared on a page, or any kind of attributes. So, here’s what I want to do:

    - Have a DOM element which exposes a method that is executed server side, passing it a string and returning a string;
    - Have a server-side event that handles the client-side call;
    - Have two client-side user-supplied callback functions for handling the success and error results.

    I’m going to develop a custom control without user interface that does the registration of the client JavaScript method as well as a server-side event that can be hooked by some handler on a page. My markup will look like this:

        <script type="text/javascript">

            function onCallbackSuccess(result, context)
            {
            }

            function onCallbackError(error, context)
            {
            }

        </script>
        <my:CallbackControl runat="server" ID="callback" SendAllData="true" OnCallback="OnCallback"/>

    The control itself looks like this:

        public class CallbackControl : Control, ICallbackEventHandler
        {
            #region Public constructor
            public CallbackControl()
            {
                this.SendAllData = false;
                this.Async = true;
            }
            #endregion

            #region Public properties and events
            public event EventHandler<CallbackEventArgs> Callback;

            [DefaultValue(true)]
            public Boolean Async { get; set; }

            [DefaultValue(false)]
            public Boolean SendAllData { get; set; }
            #endregion

            #region Protected override methods
            protected override void Render(HtmlTextWriter writer)
            {
                writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ClientID);
                writer.RenderBeginTag(HtmlTextWriterTag.Span);

                base.Render(writer);

                writer.RenderEndTag();
            }

            protected override void OnInit(EventArgs e)
            {
                String reference = this.Page.ClientScript.GetCallbackEventReference(this, "arg", "onCallbackSuccess", "context", "onCallbackError", this.Async);
                String script = String.Concat("\ndocument.getElementById('", this.ClientID, "').callback = function(arg, context, onCallbackSuccess, onCallbackError){", ((this.SendAllData == true) ? "__theFormPostCollection.length = 0; __theFormPostData = ''; WebForm_InitCallback(); " : String.Empty), reference, ";};\n");

                this.Page.ClientScript.RegisterStartupScript(this.GetType(), String.Concat("callback", this.ClientID), script, true);

                base.OnInit(e);
            }
            #endregion

            #region Protected virtual methods
            protected virtual void OnCallback(CallbackEventArgs args)
            {
                EventHandler<CallbackEventArgs> handler = this.Callback;

                if (handler != null)
                {
                    handler(this, args);
                }
            }
            #endregion

            #region ICallbackEventHandler Members
            String ICallbackEventHandler.GetCallbackResult()
            {
                CallbackEventArgs args = new CallbackEventArgs(this.Context.Items["Data"] as String);

                this.OnCallback(args);

                return (args.Result);
            }

            void ICallbackEventHandler.RaiseCallbackEvent(String eventArgument)
            {
                this.Context.Items["Data"] = eventArgument;
            }
            #endregion
        }

    And the event argument class:

        [Serializable]
        public class CallbackEventArgs : EventArgs
        {
            public CallbackEventArgs(String argument)
            {
                this.Argument = argument;
                this.Result = String.Empty;
            }

            public String Argument { get; private set; }

            public String Result { get; set; }
        }

    You will notice two properties on the CallbackControl:

    - Async: indicates if the call should be made asynchronously or synchronously (the default);
    - SendAllData: indicates if the callback call will include the view and control state of all of the controls on the page, so that, on the server side, they will have their properties set when the Callback event is fired.

    The CallbackEventArgs class exposes two properties:

    - Argument: the read-only argument passed in from the client-side function;
    - Result: the result to return to the client-side callback function, set from the Callback event handler.

    An example of a handler for the Callback event would be:

        protected void OnCallback(Object sender, CallbackEventArgs e)
        {
            e.Result = String.Join(String.Empty, e.Argument.Reverse());
        }

    Finally, in order to fire the Callback event from the client, you only need this:

        <input type="text" id="input"/>
        <input type="button" value="Get Result" onclick="document.getElementById('callback').callback(document.getElementById('input').value, 'context', onCallbackSuccess, onCallbackError)"/>

    The parameters of the callback function are:

    - arg: some string argument;
    - context: some context that will be passed to the callback functions (success or failure);
    - callbackSuccessFunction: some function that will be called when the callback succeeds;
    - callbackFailureFunction: some function that will be called if the callback fails for some reason.

    Give it a try and see if it helps!

    Read the article

  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
    I’m a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it’s simple to get started using, it has a wealth of features, such as filters, formatters, and message handlers, that can be used to extend it when needed. In this post I’m going to provide a quick walk-through of some of the key new features in version 2. I’ll focus on two of my favorite features, which are related to routing and HTTP responses, and cover additional features in a future post.

    Attribute Routing

    Routing has been a core feature of Web API since its initial release and something that’s built into new Web API projects out-of-the-box. However, there are a few scenarios where defining routes can be challenging, such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let’s assume that you’d like to define the following nested route:

        /customers/1/orders

    This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn’t the easiest thing to do for people who don’t understand routing well. Here’s an example of how the route shown above could be defined:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "CustomerOrdersApiGet",
                    routeTemplate: "api/customers/{custID}/orders",
                    defaults: new { custID = 0, controller = "Customers", action = "Orders" }
                );

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter());
            }
        }

    With attribute-based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created that defines your routes) as shown next:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Allow for attribute based routes
                config.MapHttpAttributeRoutes();

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );
            }
        }

    Once attribute based routes are configured, you can apply the Route attribute to one or more controller actions.
    Here’s an example:

        [HttpGet]
        [Route("customers/{custId:int}/orders")]
        public List<Order> Orders(int custId)
        {
            var orders = _Repository.GetOrders(custId);

            if (orders == null)
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
            }

            return orders;
        }

    This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route:

        /customers/2/orders

    While this is extremely easy to use and gets the job done, it doesn’t include the default “api” string on the front of the route that you might be used to seeing. You could add “api” in front of the route and make it “api/customers/{custId:int}/orders”, but then you’d have to repeat that across other attribute-based routes as well. To simplify this type of task you can add the RoutePrefix attribute above the controller class as shown next so that “api” (or whatever the custom starting point of your route is) is applied to all attribute routes:

        [RoutePrefix("api")]
        public class CustomersController : ApiController
        {
            [HttpGet]
            [Route("customers/{custId:int}/orders")]
            public List<Order> Orders(int custId)
            {
                var orders = _Repository.GetOrders(custId);

                if (orders == null)
                {
                    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
                }

                return orders;
            }
        }

    There’s much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.

    Returning Responses with IHttpActionResult

    The first version of Web API provided a way to return custom HttpResponseMessage objects, which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced which can be used as the return type for Web API controller actions. To return a custom response you can use new helper methods exposed through ApiController such as: Ok, NotFound, Exception, Unauthorized, BadRequest, Conflict, Redirect and InvalidModelState. Here’s an example of how IHttpActionResult and the helper methods can be used to clean up code. This is the typical way to return a custom HTTP response in version 1:

        public HttpResponseMessage Delete(int id)
        {
            var status = _Repository.DeleteCustomer(id);

            if (status)
            {
                return new HttpResponseMessage(HttpStatusCode.OK);
            }
            else
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
        }

    With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:

        public IHttpActionResult Delete(int id)
        {
            var status = _Repository.DeleteCustomer(id);

            if (status)
            {
                //return new HttpResponseMessage(HttpStatusCode.OK);
                return Ok();
            }
            else
            {
                //throw new HttpResponseException(HttpStatusCode.NotFound);
                return NotFound();
            }
        }

    You can also clean up post (insert) operations using the helper methods.
    Here’s a version 1 post action:

        public HttpResponseMessage Post([FromBody]Customer cust)
        {
            var newCust = _Repository.InsertCustomer(cust);

            if (newCust != null)
            {
                var msg = new HttpResponseMessage(HttpStatusCode.Created);
                msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString());
                return msg;
            }
            else
            {
                throw new HttpResponseException(HttpStatusCode.Conflict);
            }
        }

    This is what the code looks like in version 2:

        public IHttpActionResult Post([FromBody]Customer cust)
        {
            var newCust = _Repository.InsertCustomer(cust);

            if (newCust != null)
            {
                return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust);
            }
            else
            {
                return Conflict();
            }
        }

    More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here.

    Conclusion

    Although there are several additional features available in Web API 2 that I could cover (CORS support, for example), this post focused on two of my favorite features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator?" and think I'm locked in the world of C++.  Nope, I'm talking C# iterators.  No, not enumerators; iterators.   So, for those of you who do not know what iterators are in C#, I will explain them in summary, and for those of you who know what iterators are but are curious about the performance impacts, I will explore that as well.   Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do.  I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"   Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post.  Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?   So, to begin, let's look at a couple of methods.  This is a new (albeit contrived) method called Every(...).  The goal of this method is to access an enumeration and return every nth item in the enumeration (including the first).  So Every(2) would return items 0, 2, 4, 6, etc.   Now, if you wanted to write this in the traditional way, you may come up with something like this:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            List<T> newList = new List<T>();
            int count = 0;

            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    newList.Add(i);
                }
            }

            return newList;
        }

    So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item.  Pretty straightforward.   The problem?  Well, Every<T>(...) will construct a list containing every nth item whether or not you care.  What happens if you were searching this result for a certain item and found that item after five tries?  You would have generated the rest of the list for nothing.   Enter iterators.  This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for.  This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.   We see this all the time in Linq, where many expressions are chained together to do complex processing on a list.  This would be very expensive if each of these expressions evaluated their entire possible result set on call.    Let's look at the same example function, this time using an iterator:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            int count = 0;

            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    yield return i;
                }
            }
        }

    Notice it does not create a new return value explicitly; the only evidence of a return is the "yield return" statement.  What this means is that when an item is requested from the enumeration, execution will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends).
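    To make that concrete, here is a hand-written approximation of the kind of state-keeping enumerator involved; this is my own sketch, not the actual compiler output, but it captures the idea:

        using System;
        using System.Collections;
        using System.Collections.Generic;

        // Rough hand-rolled equivalent of the generated iterator class for the
        // iterator form of Every<T>: it holds the iteration state and produces
        // one matching item per MoveNext() call.
        class EveryEnumerator<T> : IEnumerator<T>
        {
            private readonly IEnumerator<T> _source;
            private readonly int _interval;
            private int _count;
            private T _current;

            public EveryEnumerator(IEnumerable<T> list, int interval)
            {
                _source = list.GetEnumerator();
                _interval = interval;
            }

            public T Current { get { return _current; } }
            object IEnumerator.Current { get { return _current; } }

            public bool MoveNext()
            {
                // Advance the underlying enumeration only until the next nth item.
                while (_source.MoveNext())
                {
                    if ((_count++ % _interval) == 0)
                    {
                        _current = _source.Current;
                        return true;  // corresponds to "yield return"
                    }
                }

                return false;         // end of source; the iteration stops
            }

            public void Reset() { throw new NotSupportedException(); }
            public void Dispose() { _source.Dispose(); }
        }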
    Behind the scenes, this is all done with a class much like that one, which the C# compiler generates to keep track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time.   It doesn't seem like a big deal, does it?  But keep in mind the key point here: it only returns items as they are requested. Thus if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit.   This is especially true if you intend your methods to be chainable similar to the way Linq methods can be chained.    For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10.  We could write that as:

        List<int> someList = new List<int>();

        // fill list here

        someList.Every(10).TakeWhile(i => i <= 10);

    Now is the difference more apparent?  If we use the first form of Every, the one that makes a copy of the list, it's going to copy the entire list whether we will need those items or not; that can be costly!    With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.   So, sounds neat, eh?  But what's the cost, is what you're probably wondering.  So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.    Now, iteration isn't free.  If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

        Copy vs Iterator on 100% of Collection (10,000 iterations)

        Collection Size | Num Iterated | Type     | Total ms
        ----------------|--------------|----------|---------
        5               | 5            | Copy     | 5
        5               | 5            | Iterator | 5
        50              | 50           | Copy     | 28
        50              | 50           | Iterator | 27
        500             | 500          | Copy     | 227
        500             | 500          | Iterator | 247
        5000            | 5000         | Copy     | 2266
        5000            | 5000         | Iterator | 2444
        50,000          | 50,000       | Copy     | 24,443
        50,000          | 50,000       | Iterator | 24,719
        500,000         | 500,000      | Copy     | 250,024
        500,000         | 500,000      | Iterator | 251,521

    Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists.  In reality, given the number of items and iterations, the result is near negligible, but it goes to show that iterators come at a price.  However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.   However, if we only partially evaluate less and less of the list, the savings start to show and make it well worth the overhead.  Let's look at what happens if you stop looking after 80% of the list:

        Copy vs Iterator on 80% of Collection (10,000 iterations)

        Collection Size | Num Iterated | Type     | Total ms
        ----------------|--------------|----------|---------
        5               | 4            | Copy     | 5
        5               | 4            | Iterator | 5
        50              | 40           | Copy     | 27
        50              | 40           | Iterator | 23
        500             | 400          | Copy     | 215
        500             | 400          | Iterator | 200
        5000            | 4000         | Copy     | 2099
        5000            | 4000         | Iterator | 1962
        50,000          | 40,000       | Copy     | 22,385
        50,000          | 40,000       | Iterator | 19,599
        500,000         | 400,000      | Copy     | 236,427
        500,000         | 400,000      | Iterator | 196,010

    Notice that the iterator form is now operating quite a bit faster.
    But the savings really add up if you stop on average at 50% (which most searches would typically do):

        Copy vs Iterator on 50% of Collection (10,000 iterations)

        Collection Size | Num Iterated | Type     | Total ms
        ----------------|--------------|----------|---------
        5               | 2            | Copy     | 5
        5               | 2            | Iterator | 4
        50              | 25           | Copy     | 25
        50              | 25           | Iterator | 16
        500             | 250          | Copy     | 188
        500             | 250          | Iterator | 126
        5000            | 2500         | Copy     | 1854
        5000            | 2500         | Iterator | 1226
        50,000          | 25,000       | Copy     | 19,839
        50,000          | 25,000       | Iterator | 12,233
        500,000         | 250,000      | Copy     | 208,667
        500,000         | 250,000      | Iterator | 122,336

    Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time.  And this is only for one level deep.  If we are using this in a chain of query expressions, it only adds to the savings.   So my recommendation?  If you have a reasonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator.  The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.   Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it. In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods, as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience many lack the strong mathematical skills needed to make these choices properly. I’ve divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree-level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence

    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified, and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it’s a foundational choice that deals not only with the expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also with the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing

    Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level and provides a high degree of ACID compliance, as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage.
    In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but also with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation

    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I’m excited

    I love this area of study. I like math, stats and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works; when it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"

        $Connection = new-object system.Data.SQLClient.SQLConnection
        $Table = new-object "System.Data.DataTable"
        $Connection.connectionstring = $ConnectionStr

        try{
            $Connection.open()
            $Command = $Connection.CreateCommand()
            $Command.commandtext = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch{
            # Show error
            $error[0] | format-list -Force
        }

        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of ‘EXEC sp_who2’ for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, saving us from having to define the connection object and query specifications:

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different, though. Some columns are the same and we can see the same basic information, so my first thought was to check which runs faster, both so that I can get my results more quickly and so that I place less stress on my server(s). PowerShell comes with a great way of testing this: the Measure-Command function. All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code.
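    Purely as an illustrative sketch of that comparison (the placeholder comments stand in for the two snippets shown above):

        # Hypothetical timing harness; paste each approach inside its block.
        $queryTime = Measure-Command {
            # ...the sp_who2 / SqlConnection version from above...
        }

        $smoTime = Measure-Command {
            # ...the SMO EnumProcesses version from above...
        }

        # Cross-refer the two durations.
        "sp_who2 method: {0} ms" -f $queryTime.TotalMilliseconds
        "SMO method:     {0} ms" -f $smoTime.TotalMilliseconds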
    So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ:

        sp_who2      | SMO EnumProcesses  | Description
        -------------|--------------------|-------------------------------------------------------------
        -            | Urn                | What looks like an XML or JSON representation of the server name and the process ID
        SPID         | Spid               | The process ID
        Status       | Status             | The status of the process
        Login        | Login              | The login name of the user executing the command
        HostName     | Host               | The name of the computer where the process originated
        BlkBy        | BlockingSpid       | The SPID of a process that is blocking this one
        DBName       | Database           | The database that this process is connected to
        Command      | Command            | The type of command that is executing
        CPUTime      | Cpu                | The CPU activity related to this process
        DiskIO       | -                  | The disk IO activity related to this process
        LastBatch    | -                  | The time the last batch was executed from this process
        ProgramName  | Program            | The application that is facilitating the process connection to the SQL Server
        SPID1        | -                  | In my experience this is always the same value as SPID
        REQUESTID    | -                  | In my experience this is always 0
        -            | Name               | In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
        -            | MemUsage           | An indication of the memory used by this process, but I don’t know what it is measured in (bytes, Kb, Mb…)
        -            | IsSystem           | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
        -            | ExecutionContextID | In my experience this is always 0, so could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault-diagnosis methods too. So which is better?
    On reflection, I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful, and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that the margin really isn’t large enough to matter.

    Read the article

  • How to use caching to increase render performance?

    - by Christian Ivicevic
    First of all I am going to cover the basic design of my 2D tile-based engine, written with SDL in C++, then I will point out what I am up to and where I need some hints.

    Concept of my engine

    My engine uses the concept of GameScreens, which are stored on a stack in the main game class. The main methods of a screen are usually LoadContent, Render, Update and InitMultithreading (I use the last one because I am using v8 as a JavaScript bridge to the engine). The main game loop then renders the top screen on the stack (if there is one; otherwise, it exits the game) - actually, it calls the render methods, which store all items to be rendered in a list. After gathering all this information, methods like SDL_BlitSurface are called by my GameUIRenderer, which draws the enqueued content and then draws some overlay. The code looks like this:

        while(Game is running)
        {
            Handle input
            if(Screens on stack == 0)
                exit
            Update timer etc.
            Clear the screen
            Peek the screen on the stack and collect information on what to render
            Actually render the enqueued screen stuff and some overlay etc.
            Flip the screen
        }

    The GameUIRenderer uses, as hinted, a std::vector<std::shared_ptr<ImageToRender>> to hold all necessary information, described by this class:

        class ImageToRender
        {
        public:
            SDL_Surface* pImage;
            int x, y, w, h, xOffset, yOffset;
        };

    This bunch of attributes is usually needed if I have a texture atlas with all tiles in one SDL_Surface; the engine then crops one specific area and draws it to the screen. The GameUIRenderer::Render() method just iterates over all elements and renders them something like this:

        std::for_each(
            this->m_vImageVector.begin(),
            this->m_vImageVector.end(),
            [this](std::shared_ptr<ImageToRender> pCurrentImage)
            {
                SDL_Rect rc = { pCurrentImage->x, pCurrentImage->y, 0, 0 };
                // For the sake of simplicity ignore offsets...
                SDL_Rect srcRect = { 0, 0, pCurrentImage->w, pCurrentImage->h };
                SDL_BlitSurface(pCurrentImage->pImage, &srcRect, g_pFramework->GetScreen(), &rc);
            }
        );

        this->m_vImageVector.clear();

    Current ideas which need to be reviewed

    The specified approach works really well and IMHO has a good structure; however, the performance could definitely be increased. I would like to know what you suggest for implementing efficient caching of surfaces etc., so that there is no need to redraw the same scene over and over again. The map itself would be almost static; only when the player moves would we need to move the map. Furthermore, animated entities would require either updates of the whole map or updates of only the specific areas the entities are currently moving in. My first approach was to include a flag IsTainted which the GameUIRenderer would use to decide whether to redraw everything or use a cached version (or to render nothing at all, so that we do not have to clear the screen and the last frame persists). However, this seems quite messy if I have to manually track in my screen class’s Render method whether something has changed or not.
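    For what it’s worth, one minimal shape of that IsTainted idea (a hedged sketch with hypothetical names, not code from the question) is to render the static map layer once into an offscreen surface and re-render it only when something marks it dirty:

        #include <SDL.h>

        // Hypothetical cache for the static map layer: the tiles are blitted to
        // an offscreen surface once, and only re-rendered when marked dirty.
        class MapCache
        {
        public:
            MapCache(int w, int h)
                : m_pCache(SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
                      0x00FF0000, 0x0000FF00, 0x000000FF, 0)),
                  m_isTainted(true)
            {
            }

            ~MapCache() { SDL_FreeSurface(m_pCache); }

            void Invalidate() { m_isTainted = true; } // call when the map scrolls/changes

            void Render(SDL_Surface* pScreen)
            {
                if (m_isTainted)
                {
                    RedrawTiles();     // expensive: blit every visible tile to m_pCache
                    m_isTainted = false;
                }
                // cheap: a single blit of the cached layer per frame
                SDL_BlitSurface(m_pCache, nullptr, pScreen, nullptr);
            }

        private:
            void RedrawTiles() { /* texture-atlas blits to m_pCache go here */ }

            SDL_Surface* m_pCache;
            bool m_isTainted;
        };

    Animated entities would then be drawn on top of the cached layer each frame, so only the regions they occupy need repainting.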

    Read the article

  • C# 4.0: COM Interop Improvements

    - by Paulo Morgado
    Dynamic resolution as well as named and optional arguments greatly improve the experience of interoperating with COM APIs such as the Office Automation Primary Interop Assemblies (PIAs). But, in order to ease COM Interop development even further, a few COM-specific features were also added to C# 4.0.

    Omitting ref

    Because of a different programming model, many COM APIs contain a lot of reference parameters. These parameters are typically not meant to mutate a passed-in argument, but are simply another way of passing value parameters. Specifically for COM methods, the compiler allows you to declare the method call passing the arguments by value; it will automatically generate the necessary temporary variables to hold the values in order to pass them by reference, and will discard their values after the call returns. From the point of view of the programmer, the arguments are being passed by value. This method call:

        object fileName = "Test.docx";
        object missing = Missing.Value;

        document.SaveAs(ref fileName, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing);

    can now be written like this:

        document.SaveAs("Test.docx", Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value);

    And because all parameters that are receiving the Missing.Value value have that value as their default value, the declaration of the method call can even be reduced to this:

        document.SaveAs("Test.docx");

    Dynamic Import

    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from the context of the call, but has to explicitly perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. To make the developer’s life easier, it is now possible to import the COM APIs in such a way that variants are instead represented using the type dynamic, which means that COM signatures now have occurrences of dynamic instead of object. This means that members of a returned object can now be easily accessed, or assigned into a strongly typed variable, without having to cast. Instead of this code:

        ((Excel.Range)(excel.Cells[1, 1])).Value2 = "Hello World!";

    this code can now be used:

        excel.Cells[1, 1] = "Hello World!";

    And instead of this:

        Excel.Range range = (Excel.Range)(excel.Cells[1, 1]);

    this can be used:

        Excel.Range range = excel.Cells[1, 1];

    Indexed And Default Properties

    A few COM interface features are still not available in C#. At the top of the list are indexed properties and default properties. As mentioned above, these will be usable if the COM interface is accessed dynamically, but will not be recognized by statically typed C# code.

    No PIAs – Type Equivalence And Type Embedding

    For assemblies identified with PrimaryInteropAssemblyAttribute, the compiler will create equivalent types (interfaces, structs, enumerations and delegates) and embed them in the generated assembly. To reduce the final size of the generated assembly, only the used types and their used members will be generated and embedded.
Although this makes development and deployment of applications using the COM components easier because there’s no need to deploy the PIAs, COM component developers are still required to build the PIAs.

    Read the article

  • Using Managed Beans with your ADF Mobile Client Applications

    - by [email protected]
    Did you know it's easy to extend your ADF Mobile Client application with a Managed Bean, just like it is with an ADF web application?  Here's how: Using the New Gallery (File -> New), create a new Java class.  This class should extend oracle.adfnmc.el.utils.BeanResolver.         Add this Java class as a managed bean: Go to your task flow, select the Overview tab at the bottom and go to the Managed Bean section.  Add an entry, name your new Managed Bean, and point it to the Java class you just created.        Add your custom methods and properties to your Java class.   Since reflection is not supported in the J2ME version on some platforms (BlackBerry), you need to provide dispatch code if you want to invoke/access any of your methods/properties from EL.  Here's a sample:  MyBeanClass.java    Use Expression Language (EL) to access your properties and invoke your methods on your MCX pages.  Here's a sample:     <?xml version="1.0" encoding="UTF-8" ?><amc:view xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"          xmlns:amc="http://xmlns.oracle.com/jdev/amc">  <amc:form id="form0">    <amc:menuControl refId="menu0"/>    <amc:panelGroupLayout id="panelGroupLayout1" width="100%">      <amc:panelGroupLayout id="panelGroupLayout2" layout="horizontal"                            width="100%">        <amc:image id="image1" source="logo_sm.png"/>        <amc:outputText value="Home" id="outputText1" verticalAlign="center"                        fontSize="20" fontWeight="bold"                        foregroundColor="#ff0000"/>      </amc:panelGroupLayout>      <amc:commandLink text="#{MyBean.property1}" id="commandLink1"                       actionListener="#{MyBean.doFoo}"                       foregroundColor="#0000ff" action="patientlist"/>    </amc:panelGroupLayout>  </amc:form>  <amc:menu type="main" id="menu0">    <amc:menuGroup id="menuGroup1">      <amc:commandMenuItem id="commandMenuItem1" action="exit" label="Exit"                           index="1" weight="0"/>    </amc:menuGroup>  </amc:menu></amc:view> 

    Read the article

  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
    One of the things that people often complain about is dependency injection in Action Filters. Since the standard way of applying action filters is to decorate either the Controller or the Action methods, there is no way you can inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter). MvcExtensions supports both property injection and a fluent filter configuration API. There are a number of benefits of this fluent filter configuration API over the regular attribute-based filter decoration. You can pass your dependencies in the constructor rather than via properties. Let's say you want to create an action filter which will update the user's last activity date; you can create a filter like the following: public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter { public UpdateUserLastActivityAttribute(IUserService userService) { Check.Argument.IsNotNull(userService, "userService"); UserService = userService; } public IUserService UserService { get; private set; } public void OnResultExecuting(ResultExecutingContext filterContext) { /* Do nothing, just sleep. */ } public void OnResultExecuted(ResultExecutedContext filterContext) { Check.Argument.IsNotNull(filterContext, "filterContext"); string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ? filterContext.HttpContext.User.Identity.Name : null; if (!string.IsNullOrEmpty(userName)) { UserService.UpdateLastActivity(userName); } } } As you can see, it is nothing different from a regular filter, except that we are passing the dependency in the constructor. Next, we have to configure which Controller/Action methods this filter will execute for: public class ConfigureFilters : ConfigureFiltersBase { protected override void Configure(IFilterRegistry registry) { registry.Register<HomeController, UpdateUserLastActivityAttribute>(); } } You can register more than one filter for the same Controller/Action methods: registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(); You can register the filters for a specific Action method instead of the whole controller: registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index()); You can even set various properties of the filter: registry.Register<ControlPanelController, CustomAuthorizeAttribute>( attribute => { attribute.AllowedRole = Role.Administrator; }); The fluent filter registration also reduces the number of base controllers in your application. It is very common that we create a base controller, decorate it with action filters, and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers.
You can do the same with a single-line statement with the fluent filter registration: Registering the filters for all controllers: registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type))); Registering filters for selected controllers: registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type) && (type.Name.StartsWith("Home") || type.Name.StartsWith("Post")))); You can also use the built-in filters in the fluent registration, for example: registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; }); With the fluent filter configuration you can even apply filters to controllers whose source code is not available to you (maybe the controller is part of a third-party component). That’s it for today; in the next post we will discuss the model binding support in MvcExtensions. So stay tuned.

    Read the article

  • Implementing features in an Entity System

    - by Bane
    After asking two questions on Entity Systems (1, 2), and reading some articles on them, I think that I understand them much better than before. But I still have some uncertainties, mainly about building a Particle Emitter, an Input system, and a Camera. I obviously still have some problems understanding Entity Systems, and they might apply to a whole other range of objects, but I chose these three because they are very different concepts, should cover pretty big ground, and should help me understand Entity Systems and how to handle problems like these myself as they come along. I am building an engine in Javascript, and I've implemented most of the core features, which include: input handling, a flexible animation system, a particle emitter, math classes and functions, scene handling, a camera and a renderer, and a whole bunch of other things that engines usually support. Then I read Byte56's answer, which got me interested in making the engine into an Entity System one. It would still remain an HTML5 game engine with the basic Scene philosophy, but it should support dynamic creation of entities from components. These are some of the definitions from the previous questions, updated: An Entity is an identifier. It doesn't have any data, it's not an object; it's a simple id that represents an index in the Scene's list of all entities (which I actually plan to implement as a component matrix). A Component is a data holder, but with methods that can operate on that data. The best example is a Vector2D, or a "Position" component. It has data: x and y, but also some methods that make operating on the data a bit easier: add(), normalize(), and so on. A System is something that can operate on a set of entities that meet certain requirements; usually they (the entities) need to have a set of components (specified by the system itself) to be operated upon. The system is the "logic" part, the "algorithm" part; all the functionality supplied by components is purely for easier data management. The problem that I have now is fitting my old engine concept into this new programming paradigm. Let's start with the simplest one, a Camera. The camera has a position property (Vector2D), a rotation property and some methods for centering it around a point. Each frame, it is fed to a renderer, along with a scene, and all the objects are translated according to its position. Then the scene is rendered. How could I represent this kind of an object in an Entity System? Would the camera be an entity or simply a component? A combination (see my answer)? Another issue that is bothering me is implementing a Particle Emitter. To see exactly what I mean by that, you can check out my video of it: http://youtu.be/BObargIMQsE. The problem I have with this is, again, deciding what should be what. I'm pretty sure that particles themselves shouldn't be entities, as I want to support 10k+ of them, and creating that many entities would be a heavy blow to my performance, I believe. Or maybe not? It depends on the implementation, so anyone with experience: please do answer. The last bit I want to talk about, which is also bugging me the most, is how input should be handled. In my current version of the engine, there is a class called Input. It's a handler that subscribes to the browser's events, such as keypresses and mouse position changes, and also maintains an internal state. Then, the player class has a react() method, which accepts an input object as an argument.
The advantage of this is that the input object could be serialized into JSON and then shared over the network, allowing for smooth multiplayer simulations. But how does this translate into an Entity System?
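    To make the definitions above concrete, here is a minimal sketch of how an id-based entity, a component matrix and a system could fit together (all names are hypothetical, not taken from the engine in question):

        // A Component is just data plus convenience methods.
        function Position(x, y) { this.x = x; this.y = y; }
        function Velocity(x, y) { this.x = x; this.y = y; }

        var scene = {
            nextEntity: 0,
            // component matrix: component name -> entity-indexed array
            components: { Position: [], Velocity: [] },
            createEntity: function () { return this.nextEntity++; }, // an Entity is only an id
            addComponent: function (entity, name, component) {
                this.components[name][entity] = component;
            }
        };

        // A System holds the logic; it runs over every entity that has the required components.
        var movementSystem = {
            update: function (scene, dt) {
                for (var e = 0; e < scene.nextEntity; e++) {
                    var pos = scene.components.Position[e];
                    var vel = scene.components.Velocity[e];
                    if (pos && vel) { pos.x += vel.x * dt; pos.y += vel.y * dt; }
                }
            }
        };

        var player = scene.createEntity();
        scene.addComponent(player, 'Position', new Position(0, 0));
        scene.addComponent(player, 'Velocity', new Velocity(1, 0));
        movementSystem.update(scene, 1 / 60);

    In this reading, a camera could simply be an entity with Position and Rotation components plus a camera system that feeds the renderer, which is one of the options weighed above.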

    Read the article

  • Acr.ExtDirect &ndash; Part 1 &ndash; Method Resolvers

    - by Allan Ritchie
    One of the most important things for any open source library, in my opinion, is to be as open as possible while avoiding having your library become invasive to your code/business model design.  I personally could never stand marking my business and/or data access code with attributes everywhere.  XML also isn’t really a fav with too many people these days since it comes with a startup performance hit and requires runtime compiling.  I find that there is a whole ton of communication libraries out there currently requiring this (e.g. WCF, RIA, etc.).  Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose.  It goes beyond that though; there are 2 other “out-of-the-box” implementations – Convention based & XML Configuration.    Convention – I don’t actually recommend using this one since it opens up all of your public instance methods to remote execution calls. XML Configuration – This isn’t so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration &, as I said earlier, XML isn’t the fav these days.   So what are your options if you don’t like attributes, convention, or XML Configuration?  Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution:  public interface IDirectMethodResolver { bool IsServiceType(ComponentModel model, Type type); string GetNamespace(ComponentModel model); string[] GetDirectMethodNames(ComponentModel model); DirectMethodType GetMethodType(ComponentModel model, MethodInfo method); }   Now to implement our own method resolver:   public class TestResolver : IDirectMethodResolver { /* Determine if you are calling a service */ public bool IsServiceType(ComponentModel model, Type type) { return (type.Namespace == "MyBLL.Data"); } /* Return the calling name for the client side */ public string GetNamespace(ComponentModel model) { return model.Name; } public string[] GetDirectMethodNames(ComponentModel model) { switch (model.Name) { case "Products" : return new [] { "GetProducts", "LoadProduct", "Save", "Update" }; case "Categories" : return new [] { "GetProducts" }; default : throw new ArgumentException("Invalid type"); } } public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) { if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update")) return DirectMethodType.FormSubmit; else if (method.Name.StartsWith("Load")) return DirectMethodType.FormLoad; else return DirectMethodType.Direct; } }   And there you have it, your own custom method resolver.  Pretty easy and pretty open ended!

    Read the article

  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored? Attributes The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using modifiers, like volatile or const, on the parameters; this is perfectly legal C++: public ref class VolatileMethods { void Method(int *i) {} void Method(volatile int *i) {} } If volatile were specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in. Custom modifiers Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately from its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL: .class public VolatileMethods { .method public instance void Method(int32* i) {} .method public instance void Method( int32 modreq( [mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {} } The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though they are mostly ignored by the CLR when it's executing the program, this allows methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL: call instance void Method( int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*) There are two ways of applying modifiers; modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as modifier arguments are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong, they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads. C++ and C# That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.
So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along. Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.
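    To see the same thing from the C# side, a quick sketch (the field name is hypothetical): declaring a volatile field in C# compiles down to exactly this kind of modreq on the field signature, which you can verify with ildasm:

        // C# source
        public class Counter
        {
            public volatile int value;
        }

        // resulting field signature in the compiled IL (roughly)
        .field public int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) 'value'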

    Read the article

  • How to write simple code using TDD [migrated]

    - by adeel41
    My colleagues and I do a small TDD kata practice every day for 30 minutes. For reference, this is the link for the exercise: http://osherove.com/tdd-kata-1/ The objective is to write better code using TDD. This is the code which I've written: public class Calculator { public int Add( string numbers ) { const string commaSeparator = ","; int result = 0; if ( !String.IsNullOrEmpty( numbers ) ) result = numbers.Contains( commaSeparator ) ? AddMultipleNumbers( GetNumbers( commaSeparator, numbers ) ) : ConvertToNumber( numbers ); return result; } private int AddMultipleNumbers( IEnumerable<int> getNumbers ) { return getNumbers.Sum(); } private IEnumerable<int> GetNumbers( string separator, string numbers ) { var allNumbers = numbers .Replace( "\n", separator ) .Split( new string[] { separator }, StringSplitOptions.RemoveEmptyEntries ); return allNumbers.Select( ConvertToNumber ); } private int ConvertToNumber( string number ) { return Convert.ToInt32( number ); } } and the tests for this class are: [TestFixture] public class CalculatorTests { private int ArrangeAct( string numbers ) { var calculator = new Calculator(); return calculator.Add( numbers ); } [Test] public void Add_WhenEmptyString_Returns0() { Assert.AreEqual( 0, ArrangeAct( String.Empty ) ); } [Test] [Sequential] public void Add_When1Number_ReturnNumber( [Values( "1", "56" )] string number, [Values( 1, 56 )] int expected ) { Assert.AreEqual( expected, ArrangeAct( number ) ); } [Test] public void Add_When2Numbers_AddThem() { Assert.AreEqual( 3, ArrangeAct( "1,2" ) ); } [Test] public void Add_WhenMoreThan2Numbers_AddThemAll() { Assert.AreEqual( 6, ArrangeAct( "1,2,3" ) ); } [Test] public void Add_SeparatorIsNewLine_AddThem() { Assert.AreEqual( 6, ArrangeAct( @"1 2,3" ) ); } } Now I'll paste the code which they have written: public class StringCalculator { private const char Separator = ','; public int Add( string numbers ) { const int defaultValue = 0; if ( ShouldReturnDefaultValue( numbers ) ) return defaultValue; return ConvertNumbers( numbers ); } private int ConvertNumbers( string numbers ) { var numberParts = GetNumberParts( numbers ); return numberParts.Select( ConvertSingleNumber ).Sum(); } private string[] GetNumberParts( string numbers ) { return numbers.Split( Separator ); } private int ConvertSingleNumber( string numbers ) { return Convert.ToInt32( numbers ); } private bool ShouldReturnDefaultValue( string numbers ) { return String.IsNullOrEmpty( numbers ); } } and the tests: [TestFixture] public class StringCalculatorTests { [Test] public void Add_EmptyString_Returns0() { ArrangeActAndAssert( String.Empty, 0 ); } [Test] [TestCase( "1", 1 )] [TestCase( "2", 2 )] public void Add_WithOneNumber_ReturnsThatNumber( string numberText, int expected ) { ArrangeActAndAssert( numberText, expected ); } [Test] [TestCase( "1,2", 3 )] [TestCase( "3,4", 7 )] public void Add_WithTwoNumbers_ReturnsSum( string numbers, int expected ) { ArrangeActAndAssert( numbers, expected ); } [Test] public void Add_WithThreeNumbers_ReturnsSum() { ArrangeActAndAssert( "1,2,3", 6 ); } private void ArrangeActAndAssert( string numbers, int expected ) { var calculator = new StringCalculator(); var result = calculator.Add( numbers ); Assert.AreEqual( expected, result ); } } Now the question is: which one is better? My point is that we do not need so many small methods initially, because StringCalculator has no subclasses, and secondly the code itself is so simple that we don't need to break it up so much that it becomes confusing with so many small methods.
Their point is that code should read like English; that it's better to break it up early rather than refactor later; and third, that when they do refactor, it will be much easier to move these small methods into separate classes. My point against this is that we never decided that the code is difficult to understand, so why are we breaking it up so early? So I need a third person's opinion to understand which option is better.

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands for that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter. Problem Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method, which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies, only on the client, like adding announcements) and isMyUnit (a quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty well so far, in terms of functionality. But as the codebase grows bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet it is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite), which can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like browsers do). So my question is: is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems. Possible workaround (with problems) I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file, I would have this code at the end: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists different on the server and the client. Now if I want to clear all subscribed events only from the shared simulator, I can't.
Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
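    The mixin helper referenced above is not shown in the question; a minimal version (assuming LocalUnit is a plain object whose values are the client-only functions) could just copy those functions onto the shared prototype:

        // /client/localunit.js - client-only behaviour, never loaded on the server
        var LocalUnit = {
            getSprite: function () { return this.sprite; },
            onDeathInClient: function () { /* announcements, sprite cleanup, ... */ }
        };

        // shared helper: graft client-only methods onto the shared class in the browser
        function mixin(TargetClass, methods) {
            for (var name in methods) {
                if (methods.hasOwnProperty(name)) {
                    TargetClass.prototype[name] = methods[name];
                }
            }
        }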

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple of weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to slideshare, I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. In this presentation, I describe two modes of reporting results that I use. Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in the evaluation methods practitioners use to evaluate websites and applications. Of course, with practitioners using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows. That said, I have been continuing to evolve my methods and reporting techniques, and sometimes there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete, but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results. 

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, then check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started:   KinectContrib is a set of VS2010 Templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following Templates: KinectDepth KinectSkeleton KinectVideo Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. Kinect Templates after installing the Template Pack. The reference to Microsoft.Research.Kinect is added automatically.  Here is a sample of the code for the MainWindow.xaml in the “Video” template: <Window x:Class="KinectVideoApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="480" Width="640"> <Grid> <Image Name="videoImage"/> </Grid> </Window> and MainWindow.xaml.cs: using System; using System.Windows; using System.Windows.Media; using System.Windows.Media.Imaging; using Microsoft.Research.Kinect.Nui; namespace KinectVideoApplication1 { public partial class MainWindow : Window { //Instantiate the Kinect runtime. Required to initialize the device. //IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected. Runtime runtime = new Runtime(); public MainWindow() { InitializeComponent(); //Runtime initialization is handled when the window is opened. When the window //is closed, the runtime MUST be uninitialized. this.Loaded += new RoutedEventHandler(MainWindow_Loaded); this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded); //Handle the content obtained from the video camera, once received. runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady); } void MainWindow_Unloaded(object sender, RoutedEventArgs e) { runtime.Uninitialize(); } void MainWindow_Loaded(object sender, RoutedEventArgs e) { //Since only a color video stream is needed, RuntimeOptions.UseColor is used. runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor); //You can adjust the resolution here. runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color); } void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e) { PlanarImage image = e.ImageFrame.Image; BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel); videoImage.Source = source; } } } You will find this template pack very handy, especially for those new to Kinect development.   Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. Now you will have access to several methods that can help you, for example, save an image. For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/. Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed

    Read the article

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted. This can restrict the scalability of your applications. In a typical application, ServletInputStream is read in a while loop. public class TestServlet extends HttpServlet {    protected void doGet(HttpServletRequest request, HttpServletResponse response)         throws IOException, ServletException {     ServletInputStream input = request.getInputStream();       byte[] b = new byte[1024];       int len = -1;       while ((len = input.read(b)) != -1) {          . . .        }   }} If the incoming data is blocking or streamed more slowly than the server can read it, then the server thread is left waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners - the ReadListener and WriteListener interfaces. These are then registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when content is available to be read or can be written without blocking. The updated doGet in our case will look like: AsyncContext context = request.startAsync(); ServletInputStream input = request.getInputStream(); input.setReadListener(new MyReadListener(input, context)); Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O. At most one ReadListener can be registered on ServletInputStream and, similarly, at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking I/O read. ServletOutputStream.canWrite is a new method to check if data can be written without blocking.  The MyReadListener implementation looks like: @Override public void onDataAvailable() { try { StringBuilder sb = new StringBuilder(); int len = -1; byte b[] = new byte[1024]; while (input.isReady() && (len = input.read(b)) != -1) { String data = new String(b, 0, len); System.out.println("--> " + data); } } catch (IOException ex) { Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex); } } @Override public void onAllDataRead() { System.out.println("onAllDataRead"); context.complete(); } @Override public void onError(Throwable t) { t.printStackTrace(); context.complete(); } This implementation has three callbacks: the onDataAvailable callback method is called whenever data can be read without blocking; the onAllDataRead callback method is invoked when data for the current request has been completely read; the onError callback is invoked if there is an error processing the request. Notice that context.complete() is called in onAllDataRead and onError to signal the completion of the data read. For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet. The rest of the data can be read in a non-blocking way using the ReadListener after that. This is going to be cleaned up so that all data reads can happen in the ReadListener only. The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 onwards. The slides and a complete re-run of the "What's new in Servlet 3.1: An Overview" session at JavaOne are available here. Here are some more references for you: Java EE 7 Specification Status Servlet Specification Project JSR Expert Group Discussion Archive Servlet 3.1 Javadocs

    Read the article

  • If the model is validating the data, shouldn't it throw exceptions on bad input?

    - by Carlos Campderrós
    Reading this SO question, it seems that throwing exceptions for validating user input is frowned upon. But who should validate this data? In my applications, all validations are done in the business layer, because only the class itself really knows which values are valid for each one of its properties. If I were to copy the rules for validating a property to the controller, it is possible that the validation rules would change and then there would be two places where the modification would have to be made. Is my premise that validation should be done in the business layer wrong? What I do So my code usually ends up like this: <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if (mb_strlen($n) == 0) { throw new ValidationException("Name cannot be empty"); } $this->name = $n; } public function setAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { throw new ValidationException("Age $a is not valid"); } $a = (int)$a; } if ($a < 0 || $a > 150) { throw new ValidationException("Age $a is out of bounds"); } $this->age = $a; } // other getters, setters and methods } In the controller, I just pass the input data to the model, and catch thrown exceptions to show the error(s) to the user: <?php $person = new Person(); $errors = array(); // global try for all exceptions other than ValidationException try { // validation and process (if everything ok) try { $person->setAge($_POST['age']); } catch (ValidationException $e) { $errors['age'] = $e->getMessage(); } try { $person->setName($_POST['name']); } catch (ValidationException $e) { $errors['name'] = $e->getMessage(); } ... } catch (Exception $e) { // log the error, send 500 internal server error to the client // and finish the request } if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } Is this a bad methodology? Alternate method Maybe I should create methods like isValidAge($a) that return true/false and then call them from the controller? <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if ($this->isValidName($n)) { $this->name = $n; } else { throw new Exception("Invalid name"); } } public function setAge($a) { if ($this->isValidAge($a)) { $this->age = $a; } else { throw new Exception("Invalid age"); } } public function isValidName($n) { $n = trim($n); if (mb_strlen($n) == 0) { return false; } return true; } public function isValidAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { return false; } $a = (int)$a; } if ($a < 0 || $a > 150) { return false; } return true; } // other getters, setters and methods } And the controller will be basically the same, just instead of try/catch there are now if/else: <?php $person = new Person(); $errors = array(); if ($person->isValidAge($age)) { $person->setAge($age); } else { $errors['age'] = "Invalid age"; } if ($person->isValidName($name)) { $person->setName($name); } else { $errors['name'] = "Invalid name"; } ... if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } So, what should I do? I'm pretty happy with my original method, and my colleagues to whom I have shown it have in general liked it. Despite this, should I change to the alternate method? Or am I doing this terribly wrong and I should look for another way?

    Read the article

  • MVC Communication Pattern

    - by Kedu
    This is kind of a follow-up question to http://stackoverflow.com/questions/23743285/model-view-controller-and-callbacks, but I wanted to post it separately, because it's kind of a different topic. I'm working on a multiplayer card game for the Android platform. I split the project into MVC, which fits the needs pretty well, but I'm currently stuck because I can't figure out a good way to communicate between the different parts. I have everything set up and working, with the controller being a big state machine which is called over and over from the game loop and calls getter methods on the GUI and the Android/network part to get the input. The input itself in the GUI and network is set by input listeners that set a local variable which I read in the getter method. So far so good; this is working. But my problem is that the controller has to check every input separately, so if I want to add an input I have to check in which states it is valid and call the getter method from all those states. This is not good: it makes the code look pretty ugly, makes additions awkward, and adds redundancy. So what I've taken from the question mentioned above is that some kind of command or event pattern will fit my needs. What I want to do is create a shared and threadsafe queue in the controller and, instead of calling all these getter methods, just check the queue for new input and process it. On the other side, the GUI and network don't have all these getters, but instead create an event or command and send it to the controller through, for example, observer/observable. Now my problem: I can't figure out a way for these commands/events to fit a common interface (which the queue can store) and still transport different kinds of data (button clicks, cards that are played, the player id the command comes from, synchronization data, etc.). If I design the communication as a command pattern, I have to stick all the information that is needed to execute the command into it when it's created; that's impossible, because the GUI or network has no knowledge of all the things the controller needs in order to execute what has to be done when, for example, a card is played. I thought about injecting this data into the command when executing it. But across all the different commands I have, I would need all the information the controller has, and thus would have to give the command a reference to the controller, which would make everything in it public - which is really bad design, I guess. So, I could try some kind of event pattern. I have to transport data in the event. So, like the command, I would have an interface which all events have in common and which can be stored in the shared queue. I could create a big enum with all the different events that are possible, save one of these enum values in the actual event, and build a big switch case over the events to process different events differently. The problem here: I have different data for all the events, but I need a common interface to store the events in a queue. How do I get the specific data if I can only access the event through the interface? Even if that weren't a problem, I'm creating another big switch case, which looks ugly, and when I want to add a new event, I have to create the event itself, the case, the enum, and the method that's called with the data. I could of course check the event type with the enum and cast the event to its concrete type, so I can call type-specific methods that give me the data I need, but that looks like bad design too.
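    For the "common interface but different data" problem, one well-trodden alternative to the big switch is double dispatch: each event implements a common interface with a dispatch method and calls back into a typed handler, so no enums, instanceof checks, or casts are needed. A minimal sketch with hypothetical names (CardPlayedEvent, onCardPlayed, etc.), not tied to the actual game:

        interface GameEvent {
            void dispatch(Controller controller); // each event knows its own concrete type
        }

        class CardPlayedEvent implements GameEvent {
            final int playerId;
            final int cardId;
            CardPlayedEvent(int playerId, int cardId) { this.playerId = playerId; this.cardId = cardId; }
            public void dispatch(Controller controller) { controller.onCardPlayed(this); }
        }

        class ButtonClickEvent implements GameEvent {
            final String buttonId;
            ButtonClickEvent(String buttonId) { this.buttonId = buttonId; }
            public void dispatch(Controller controller) { controller.onButtonClick(this); }
        }

        class Controller {
            private final java.util.concurrent.BlockingQueue<GameEvent> queue =
                    new java.util.concurrent.LinkedBlockingQueue<GameEvent>();

            // producers (GUI, network) call this from their threads
            public void post(GameEvent event) { queue.offer(event); }

            // drained once per iteration of the game loop's state machine
            public void processEvents() {
                GameEvent event;
                while ((event = queue.poll()) != null) {
                    event.dispatch(this); // no switch, no instanceof, no casts
                }
            }

            void onCardPlayed(CardPlayedEvent e) { /* full controller state is available here */ }
            void onButtonClick(ButtonClickEvent e) { /* ... */ }
        }

    The trade-off is that the event types must know the Controller's handler methods, but only those methods - nothing else in the controller has to be made public.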

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work. JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows: CertPathValidator cpv = CertPathValidator.getInstance("PKIX"); PKIXRevocationChecker prc = (PKIXRevocationChecker)cpv.getRevocationChecker(); You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows: prc.setOptions(EnumSet.of(Option.SOFT_FAIL)); When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method: List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions(); Another new option called ONLY_END_ENTITY allows you to only check the revocation status of the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example: prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY)); By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status: prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK)); There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSP response has already been obtained, for example in a protocol that uses OCSP stapling. 
After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows: PKIXParameters params = new PKIXParameters(keystore); params.addCertPathChecker(prc); CertPathValidatorResult result = cpv.validate(path, params); Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html

    Read the article

  • Running response time tests on php code - how much is 7.2E-5 microseconds?

    - by Ali
    Hi guys, I'm using the microtime() function of PHP to tell how long certain snippets of code take to run. I do this by taking the time before and after the snippet and subtracting one from the other using the microtime function. I got the following results for the different snippets: 1 - 0.022976 2 - 0.003656 3 - -0.196361 4 - 0.006563 5 - 7.2E-5 6 - 0.847695 7 - 0.005092 8 - 7.6E-5 9 - 0.08024 The first number represents the snippet and the second the time taken... I've forgotten whatever I learnt back in college on numerical methods :( - how big is 7.2E-5 microseconds?
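    For what it's worth, 7.2E-5 is just scientific notation for 0.000072; assuming microtime(true) was used, the differences are in seconds, so 7.2E-5 s is 72 microseconds (the negative result for snippet 3 also hints that the string form of plain microtime() may have been subtracted, which microtime(true) avoids). A small sketch to keep the output in fixed-point form:

        <?php
        $start = microtime(true);            // seconds, as a float
        // ... snippet under test ...
        $elapsed = microtime(true) - $start;

        // avoid 7.2E-5 style output by formatting explicitly
        printf("%.6f s (%.1f microseconds)\n", $elapsed, $elapsed * 1e6);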

    Read the article

  • iPhone virtual keyboard bug

    - by Chandan Shetty SP
    In the iPhone virtual keyboard... 1. Change the alphabetical keypad view to the numerical view. 2. Tap on the single quote (') button; the view changes back to alphabetical. 3. In this view, tapping on space twice displays a full stop. I don't know whether this is an Apple bug or a feature. How can I fix this issue through coding? Thanks,

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >