Search Results


  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

<form>
Name: <input type="text" name="name" value="Rick" />
Value: <input type="text" name="value" value="12" />
Entered: <input type="text" name="entered" value="12/01/2011" />
<input type="button" id="btnSend" value="Send" />
</form>
<script type="text/javascript">
$("#btnSend").click( function() {
    $.post("samples/PostMultipleSimpleValues?action=kazam",
           $("form").serialize(),
           function (result) { alert(result); });
});
</script>

or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

$.post("samples/PostMultipleSimpleValues?action=kazam",
       { name: "Rick", value: 1, entered: "12/01/2012" },
       function (result) { alert(result); });

On the wire this generates a simple POST request with URL-encoded values in the content:

POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: application/json
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://localhost/AspNetWebApi/FormPostTest.html
Content-Length: 41
Pragma: no-cache
Cache-Control: no-cache

name=Rick&value=12&entered=12%2F10%2F2011

Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

[HttpPost]
public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
}

you'll find that you get an HTTP 404 error and:

{ "Message": "No HTTP resource was found that matches the request URI…"}

Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. (Both alternatives are sketched below.)

Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
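For reference, here is what the two built-in alternatives mentioned above look like - a minimal sketch, assuming a hypothetical PostValues DTO (the type and its property names are mine, added for illustration):

// strongly typed model binding: one DTO per method signature
public class PostValues
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

[HttpPost]
public string PostViaModelBinding(PostValues post, string action = null)
{
    // Web API binds the url-encoded body to the PostValues object
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         post.Name, post.Value, post.Entered, action);
}

// untyped alternative: read the raw form values yourself
[HttpPost]
public string PostViaFormData(FormDataCollection form, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         form.Get("name"), form.Get("value"), form.Get("entered"), action);
}

Both work without any custom binding, but neither is as convenient as plain simple-type parameters - which is what the rest of this post sets up.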
Creating a Custom Parameter Binder

Luckily Web API is greatly extensible, and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

public class SimplePostVariableParameterBinding : HttpParameterBinding
{
    private const string MultipleBodyParameters = "MultipleBodyParameters";

    public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    { }

    /// <summary>
    /// Check for simple binding parameters in POST data. Bind POST
    /// data as well as query string data
    /// </summary>
    public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                             HttpActionContext actionContext,
                                             CancellationToken cancellationToken)
    {
        // Body can only be read once, so read and cache it
        NameValueCollection col = TryReadBody(actionContext.Request);

        string stringValue = null;
        if (col != null)
            stringValue = col[Descriptor.ParameterName];

        // try reading query string if we have no POST/PUT match
        if (stringValue == null)
        {
            var query = actionContext.Request.GetQueryNameValuePairs();
            if (query != null)
            {
                var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                if (matches.Count() > 0)
                    stringValue = matches.First().Value;
            }
        }

        object value = StringToType(stringValue);

        // Set the binding result here
        SetValue(actionContext, value);

        // now, we can return a completed task with no result
        TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
        tcs.SetResult(default(AsyncVoid));
        return tcs.Task;
    }

    private object StringToType(string stringValue)
    {
        object value = null;

        if (stringValue == null)
            value = null;
        else if (Descriptor.ParameterType == typeof(string))
            value = stringValue;
        else if (Descriptor.ParameterType == typeof(int))
            value = int.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int32))
            value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int64))
            value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(decimal))
            value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(double))
            value = double.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(DateTime))
            value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(bool))
        {
            value = false;
            if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                value = true;
        }
        else
            value = stringValue;

        return value;
    }

    /// <summary>
    /// Read and cache the request body
    /// </summary>
    /// <param name="request"></param>
    /// <returns></returns>
    private NameValueCollection TryReadBody(HttpRequestMessage request)
    {
        object result = null;

        // try to read out of cache first
        if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
        {
            // parsing the string like firstname=Hongmei&lastname=Ge
            result = request.Content.ReadAsFormDataAsync().Result;
            request.Properties.Add(MultipleBodyParameters, result);
        }

        return result as NameValueCollection;
    }

    private struct AsyncVoid
    { }
}

The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the parameter type is not one of the supported simple types. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches it in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it; subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read, and the value is translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

Once you have the parameter binding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

GlobalConfiguration.Configuration.ParameterBindingRules
    .Insert(0, (HttpParameterDescriptor descriptor) =>
    {
        var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

        // Only apply this binder on POST and PUT operations
        if (supportedMethods.Contains(HttpMethod.Post) ||
            supportedMethods.Contains(HttpMethod.Put))
        {
            var supportedTypes = new Type[] { typeof(string), typeof(int),
                                              typeof(decimal), typeof(double),
                                              typeof(bool), typeof(DateTime) };

            if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                return new SimplePostVariableParameterBinding(descriptor);
        }

        // let the default bindings do their work
        return null;
    });

The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map; the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

Summary

Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind to still switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST-to-parameter mapping makes things much easier.
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added. This is especially true since this binding doesn't affect existing bindings.

Resources

SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded Post Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api, AJAX

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
Last year, I did a small review of the ANTS Performance Profiler 6.3. Now that it's a year later and a major version number higher, I thought I'd revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what's new in version 7.4 of the profiler.

Background

A performance profiler's main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate's tools typically are very easy to "jump in" to and get started with, with very little training required. So let's dig into the Performance Profiler.

I've constructed a very crude program with some obvious inefficiencies. It's a simple program that generates random order numbers (or really, any unique identifier), adds each to a list, sorts the list, then finds the max and min number in the list. Ignore the fact that it's very contrived and obviously inefficient; we just want to use it as an example to show off the tool:

1: // our test program
2: public static class Program
3: {
4:     // the number of iterations to perform
5:     private static int _iterations = 1000000;
6:
7:     // The main method that controls it all
8:     public static void Main()
9:     {
10:         var list = new List<string>();
11:
12:         for (int i = 0; i < _iterations; i++)
13:         {
14:             var x = GetNextId();
15:
16:             AddToList(list, x);
17:
18:             var highLow = GetHighLow(list);
19:
20:             if ((i % 1000) == 0)
21:             {
22:                 Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2);
23:                 Console.Out.Flush();
24:             }
25:         }
26:     }
27:
28:     // gets the next order id to process (random for us)
29:     public static string GetNextId()
30:     {
31:         var random = new Random();
32:         var num = random.Next(1000000, 9999999);
33:         return num.ToString();
34:     }
35:
36:     // add it to our list - very inefficiently!
37:     public static void AddToList(List<string> list, string item)
38:     {
39:         list.Add(item);
40:         list.Sort();
41:     }
42:
43:     // get high and low of order id range - very inefficiently!
44:     public static Tuple<int,int> GetHighLow(List<string> list)
45:     {
46:         return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s)));
47:     }
48: }

So let's run it through the profiler and see what happens!

Visual Studio Integration

First, let's look at how the ANTS profilers integrate with Visual Studio's menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options. Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions:

Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio.
Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run.

So really, the main difference is that Profile Performance immediately begins profiling with the default selections, whereas Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application.

Let's Fire it Up!
So when you fire up ANTS, either via the Start Menu or the Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started. Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added:

ASP.NET Web Application (IIS Express)
SharePoint web application (IIS)

So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well.

You can also choose your level of detail in the Profiling Mode drop-down. If you choose Line-level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This performs very light profiling, where basically the profiler collects timings of a method by examining the call stack at given intervals. Which mode you choose depends a lot on how much detail you need to find the issue and how sensitive your program issues are to timing. For our example, let's just go with the line and method timing detail.

So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button.

Profiling the Application

Once you start profiling the application, you will see a real-time graph of CPU usage that indicates how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis.

Analyzing Method Timings

So now that we've halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of the total run time of the application, though you can change this in the View menu to milliseconds, ticks, or seconds as well. This won't affect the percentages of methods; it only affects the units in which the times are shown. Notice also that the major hot-spot seems to be in a method without source; ANTS Profiler filters these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. Once we've removed the filter, we see a bit more detail. In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn't necessary for our purposes.

When looking at timings, there are generally two types of timings for each method call:

Time: This is the time spent ONLY in this method, not including calls this method makes to other methods.
Time With Children: This is the total of time spent in both this method AND the calls this method makes to other methods.
In other words, Time tells you how much work is being done exclusively in this method, and Time With Children tells you how much work is being done inclusively in this method and everything it calls.

You can also choose to display the methods in a tree or in a grid. The tree view is the default; it shows the method calls arranged in the tree representing all method calls and the parent method that called them. This is useful because when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself, or if it is being called too many times. The grid view represents each method only once with its totals, and is useful for quickly seeing which method is the trouble spot. In addition, you can choose to display Methods with source, which are generally the methods you wrote (as opposed to native or BCL code), or Any Method, which shows not only your methods but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you're free to choose the organization that best suits the information you are after.

Analyzing Method Source

If we look at the timings above, we see that our AddToList() method (and in particular, its call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn't mean the other statistics aren't meaningful, but the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on.

So let's select the AddToList() method, and see what it shows in the source window below. Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code, which is a major indicator of where the trouble spot in this method is. In this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived "duh" moment, but you'd be surprised how many performance issues become "duh" moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see a drastic improvement in performance.

So to fix this, if we still wanted to maintain the List<T>, we'd have many options, including: delaying Sort() until after all the Add() calls; using a SortedSet, SortedList, or SortedDictionary, depending on which is most appropriate; or forgoing the sorting altogether and using a Dictionary.

Rinse, Repeat!

So let's just change all instances of List<string> to SortedSet<string> and run this again through the profiler. Now we see the AddToList() method is no longer our hot-spot, but the Max() and Min() calls are! This is good because we've eliminated one hot-spot and can now try to correct this one as well. As before, we can then optimize this part of the code, possibly by taking advantage of the fact that the list is now sorted and returning the first and last elements (a sketch follows below, just before the summary). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible.

Calls by Web Request

Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications.
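Coming back to the Min()/Max() hot-spot: here is a minimal sketch of the optimization hinted at above, assuming the SortedSet<string> version of the program (this illustration is mine, not code from the original review). SortedSet<T> keeps its items ordered and exposes Min and Max properties, so the two LINQ scans can go away entirely:

    // get high and low of order id range - using the sorted structure
    // note: SortedSet<string> orders lexicographically, which matches
    // numeric order here because every generated id has exactly 7 digits
    public static Tuple<int, int> GetHighLow(SortedSet<string> set)
    {
        return Tuple.Create(Convert.ToInt32(set.Max),   // highest id
                            Convert.ToInt32(set.Min));  // lowest id
    }

This replaces two O(n) scans per iteration with two cheap end-of-tree lookups, which is exactly the kind of change the rinse-and-repeat profiling loop is meant to surface.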
Summary If you like the other ANTS tools, you’ll like the ANTS Performance Profiler as well. It is extremely easy to use with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate’s Performance Profiler 7.4 is an excellent choice. Technorati Tags: Influencers,ANTS,Performance Profiler,Profiler

    Read the article

  • Auto-Suggest via ‘Trie’ (Pre-fix Tree)

    - by Strenium
    Auto-Suggest (Auto-Complete) “thing” has been around for a few years. Here’s my little snippet on the subject. For one of my projects, I had to deal with a non-trivial set of items to be pulled via auto-suggest used by multiple concurrent users. Simple, dumb iteration through a list in local cache or back-end access didn’t quite cut it. Enter a nifty little structure, perfectly suited for storing and matching verbal data: “Trie” (http://tinyurl.com/db56g) also known as a Pre-fix Tree: “Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that correspond to keys of interest.” This is a very scalable, performing structure. Though, as usual, something ‘fast’ comes at a cost of ‘size’; fortunately RAM is more plentiful today so I can live with that. I won’t bore you with the detailed algorithmic performance here - Google can do a better job of such. So, here’s C# implementation of all this. Let’s start with individual node: Trie Node /// <summary> /// Contains datum of a single trie node. /// </summary> public class AutoSuggestTrieNode {     public char Value { get; set; }       /// <summary>     /// Gets a value indicating whether this instance is leaf node.     /// </summary>     /// <value>     ///     <c>true</c> if this instance is leaf node; otherwise, a prefix node <c>false</c>.     /// </value>     public bool IsLeafNode { get; private set; }       public List<AutoSuggestTrieNode> DescendantNodes { get; private set; }         /// <summary>     /// Initializes a new instance of the <see cref="AutoSuggestTrieNode"/> class.     /// </summary>     /// <param name="value">The phonetic value.</param>     /// <param name="isLeafNode">if set to <c>true</c> [is leaf node].</param>     public AutoSuggestTrieNode(char value = ' ', bool isLeafNode = false)     {         Value = value;         IsLeafNode = isLeafNode;           DescendantNodes = new List<AutoSuggestTrieNode>();     }       /// <summary>     /// Gets the descendants of the pre-fix node, if any.     /// </summary>     /// <param name="descendantValue">The descendant value.</param>     /// <returns></returns>     public AutoSuggestTrieNode GetDescendant(char descendantValue)     {         return DescendantNodes.FirstOrDefault(descendant => descendant.Value == descendantValue);     } }   Quite self-explanatory, imho. A node is either a “Pre-fix” or a “Leaf” node. “Leaf” contains the full “word”, while the “Pre-fix” nodes act as indices used for matching the results.   Ok, now the Trie: Trie Structure /// <summary> /// Contains structure and functionality of an AutoSuggest Trie (Pre-fix Tree) /// </summary> public class AutoSuggestTrie {     private readonly AutoSuggestTrieNode _root = new AutoSuggestTrieNode();       /// <summary>     /// Adds the word to the trie by breaking it up to pre-fix nodes + leaf node.     
/// </summary>     /// <param name="word">Phonetic value.</param>     public void AddWord(string word)     {         var currentNode = _root;         word = word.Trim().ToLower();           for (int i = 0; i < word.Length; i++)         {             var child = currentNode.GetDescendant(word[i]);               if (child == null) /* this character hasn't yet been indexed in the trie */             {                 var newNode = new AutoSuggestTrieNode(word[i], word.Count() - 1 == i);                   currentNode.DescendantNodes.Add(newNode);                 currentNode = newNode;             }             else                 currentNode = child; /* this character is already indexed, move down the trie */         }     }         /// <summary>     /// Gets the suggested matches.     /// </summary>     /// <param name="word">The phonetic search value.</param>     /// <returns></returns>     public List<string> GetSuggestedMatches(string word)     {         var currentNode = _root;         word = word.Trim().ToLower();           var indexedNodesValues = new StringBuilder();         var resultBag = new ConcurrentBag<string>();           for (int i = 0; i < word.Trim().Length; i++)  /* traverse the trie collecting closest indexed parent (parent can't be leaf, obviously) */         {             var child = currentNode.GetDescendant(word[i]);               if (child == null || word.Count() - 1 == i)                 break; /* done looking, the rest of the characters aren't indexed in the trie */               indexedNodesValues.Append(word[i]);             currentNode = child;         }           Action<AutoSuggestTrieNode, string> collectAllMatches = null;         collectAllMatches = (node, aggregatedValue) => /* traverse the trie collecting matching leafNodes (i.e. "full words") */             {                 if (node.IsLeafNode) /* full word */                     resultBag.Add(aggregatedValue); /* thread-safe write */                   Parallel.ForEach(node.DescendantNodes, descendandNode => /* asynchronous recursive traversal */                 {                     collectAllMatches(descendandNode, String.Format("{0}{1}", aggregatedValue, descendandNode.Value));                 });             };           collectAllMatches(currentNode, indexedNodesValues.ToString());           return resultBag.OrderBy(o => o).ToList();     }         /// <summary>     /// Gets the total words (leafs) in the trie. Recursive traversal.     /// </summary>     public int TotalWords     {         get         {             int runningCount = 0;               Action<AutoSuggestTrieNode> traverseAllDecendants = null;             traverseAllDecendants = n => { runningCount += n.DescendantNodes.Count(o => o.IsLeafNode); n.DescendantNodes.ForEach(traverseAllDecendants); };             traverseAllDecendants(this._root);               return runningCount;         }     } }   Matching operations and Inserts involve traversing the nodes before the right “spot” is found. Inserts need be synchronous since ordering of data matters here. However, matching can be done in parallel traversal using recursion (line 64). Here’s sample usage:   [TestMethod] public void AutoSuggestTest() {     var autoSuggestCache = new AutoSuggestTrie();       var testInput = @"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero.                 Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris.                 
Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. Class aptent taciti sociosqu ad                 litora torquent per conubia nostra, per inceptos himenaeos. Curabitur sodales ligula in libero. Sed dignissim lacinia nunc.                 Curabitur tortor. Pellentesque nibh. Aenean quam. In scelerisque sem at dolor. Maecenas mattis. Sed convallis tristique sem.                 Proin ut ligula vel nunc egestas porttitor. Morbi lectus risus, iaculis vel, suscipit quis, luctus non, massa. Fusce ac                 turpis quis ligula lacinia aliquet. Mauris ipsum. Nulla metus metus, ullamcorper vel, tincidunt sed, euismod in, nibh. Quisque                 volutpat condimentum velit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nam                 nec ante. Sed lacinia, urna non tincidunt mattis, tortor neque adipiscing diam, a cursus ipsum ante quis turpis. Nulla                 facilisi. Ut fringilla. Suspendisse potenti. Nunc feugiat mi a tellus consequat imperdiet. Vestibulum sapien. Proin quam. Etiam                 ultrices. Suspendisse in justo eu magna luctus suscipit. Sed lectus. Integer euismod lacus luctus magna. Quisque cursus, metus                 vitae pharetra auctor, sem massa mattis sem, at interdum magna augue eget diam. Vestibulum ante ipsum primis in faucibus orci                 luctus et ultrices posuere cubilia Curae; Morbi lacinia molestie dui. Praesent blandit dolor. Sed non quam. In vel mi sit amet                 augue congue elementum. Morbi in ipsum sit amet pede facilisis laoreet. Donec lacus nunc, viverra nec.";       testInput.Split(' ').ToList().ForEach(word => autoSuggestCache.AddWord(word));       var testMatches = autoSuggestCache.GetSuggestedMatches("le"); }   ..and the result: That’s it!
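(The result screenshot from the original post isn't reproduced here, and the exact matches from the lorem-ipsum input depend on the punctuation that the simple Split(' ') leaves attached to words. So here is a small deterministic sketch of what GetSuggestedMatches returns - my example, using the classes above:)

    var trie = new AutoSuggestTrie();
    foreach (var word in new[] { "car", "cart", "care", "dog" })
        trie.AddWord(word);

    // traverses to the deepest indexed prefix node, then collects all
    // leaf words underneath it, returned in sorted order
    var matches = trie.GetSuggestedMatches("ca");
    // matches: "car", "care", "cart"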

    Read the article

  • Export mysql database tables to php code to create same tables in other database?

    - by chefnelone
How do I export MySQL database tables to PHP code, so that I can create and populate the same tables in another database? I have a local database. I exported it to SQL syntax, and I get something like:

CREATE TABLE `boletinSuscritos` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(120) NOT NULL,
  `email` varchar(120) NOT NULL,
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;

INSERT INTO `boletinSuscritos` VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12');
INSERT INTO `boletinSuscritos` VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56');

but I need it to be the following (is there any way to export the tables in this way?):

$sql = "CREATE TABLE boletinSuscritos (
  id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(120) NOT NULL,
  email varchar(120) NOT NULL,
  date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY ( id )
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3";

mysql_query($sql, $conexion);
mysql_query("INSERT INTO boletinSuscritos VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12')");
mysql_query("INSERT INTO boletinSuscritos VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56')");

    Read the article

  • Using the Take and Skip keywords to filter records in LINQ

    - by vik20000in
In LINQ we can use the Take keyword to limit the number of records that we retrieve from a query. Let's say we want to retrieve only the first 3 records from a list or array; then we can use the following query:

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
    var first3Numbers = numbers.Take(3);

The Take keyword can also easily be applied to a list of objects in the following way:

var first3WAOrders = (
        from cust in customers
        from order in cust.Orders
        select cust
).Take(3);

[Note: in a query like the one above you would normally also apply an orderby clause, so that the data is first ordered and then the first 3 records are taken.]

In both of the examples above we were able to limit the number of records we fetch, but in both cases we were fetching records from the very beginning. There can also be requirements where we want to fetch records only after skipping some of them, as in paging (a paging sketch follows below). For this purpose LINQ provides the Skip method, which skips the number of records passed as a parameter in the result set:

int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
var allButFirst4Numbers = numbers.Skip(4);

The Skip keyword can also easily be applied to a list of objects in the following way:

var allButFirst3WAOrders = (
        from cust in customers
        from order in cust.Orders
        select cust
).Skip(3);

Vikram
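Since paging is the most common reason to combine the two, here is a minimal sketch of the pattern (my example; the page-size and page-number values are illustrative):

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

    int pageSize = 3;
    int pageIndex = 2; // zero-based, so this is the third page

    // skip the earlier pages, then take one page worth of records
    var thirdPage = numbers.Skip(pageIndex * pageSize).Take(pageSize);
    // yields: 6, 7, 2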

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :)

Cost-based optimization

When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows:

During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select):

The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

New Cardinality Estimates

The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation.

Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607:

This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so.

To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Distinguishing repetitive code with the same implementation

    - by KyelJmD
Given this sample code:

import java.util.ArrayList;
import blackjack.model.items.Card;

public class BlackJackPlayer extends Player {

    private double bet;
    private Hand hand01 = new Hand();
    private Hand hand02 = new Hand();

    public void addCardToHand01(Card c) { hand01.addCard(c); }
    public void addCardToHand02(Card c) { hand02.addCard(c); }

    public void bustHand01() { hand01.setBust(true); }
    public void bustHand02() { hand02.setBust(true); }

    public void standHand01() { hand01.setStand(true); }
    public void standHand02() { hand02.setStand(true); }

    public boolean isHand01Bust() { return hand01.isBust(); }
    public boolean isHand02Bust() { return hand02.isBust(); }

    public boolean isHand01Standing() { return hand01.isStanding(); }
    public boolean isHand02Standing() { return hand02.isStanding(); }

    public int getHand01Score() { return hand01.getCardScore(); }
    public int getHand02Score() { return hand02.getCardScore(); }
}

Is this considered repetitive code, given that each method operates on a separate field but has the same implementation? Note that hand01 and hand02 should be distinct. If this is considered repetitive code, how would I address it, given that each hand is a separate entity?

    Read the article

  • Using the StopWatch class to calculate the execution time of a block of code

    - by vik20000in
Many times, while doing performance tuning of some class, web page, component, control, etc., we first measure the current time taken in the execution of that code. This helps in understanding the location in the code which is actually causing the performance issue, and also helps in measuring the amount of improvement made by the changes. This measurement is very important, as it helps us understand the problem in the code and helps us write better code next time (as we have already learnt what kind of improvement can be made with different code).

Normally developers create 2 objects of the DateTime class. The exact time is collected before and after the code whose performance needs to be measured. Then the difference between the two objects is used to determine the time spent in the code being measured. Below is an example of the sample code:

            DateTime dt1, dt2;
            dt1 = DateTime.Now;
            for (int i = 0; i < 1000000; i++)
            {
                string str = "string";
            }
            dt2 = DateTime.Now;
            TimeSpan ts = dt2.Subtract(dt1);
            Console.WriteLine("Time Spent : " + ts.TotalMilliseconds.ToString());

The above code works great. But the .NET Framework also provides another way to capture the time spent in the code with much less effort (no creating 2 DateTime objects, a TimeSpan object, etc.): we can use the built-in Stopwatch class to get the exact time spent. Below is an example of the same work done with the help of the Stopwatch class:

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < 1000000; i++)
            {
                string str = "string";
            }
            sw.Stop();
            Console.WriteLine("Time Spent : " + sw.Elapsed.TotalMilliseconds.ToString());

[Note: the Stopwatch class resides in the System.Diagnostics namespace.]

If you use the Stopwatch class, measuring the time taken is much easier, with very little effort. Vikram
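If the same measurement is needed in many places, the pattern is easy to wrap in a small helper. A minimal sketch (the Perf class and Measure method are my names, not part of the framework):

    using System;
    using System.Diagnostics;

    public static class Perf
    {
        // Times an arbitrary piece of code and returns the elapsed milliseconds.
        public static double Measure(Action action)
        {
            Stopwatch sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            return sw.Elapsed.TotalMilliseconds;
        }
    }

    // usage:
    // double ms = Perf.Measure(() =>
    // {
    //     for (int i = 0; i < 1000000; i++) { string str = "string"; }
    // });
    // Console.WriteLine("Time Spent : " + ms.ToString());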

    Read the article

  • Inter Quake Model IQM render Directx9

    - by Andrew_0
I'm trying to render an Inter Quake Model (http://lee.fov120.com/iqm/) in DirectX9 that I exported from Blender. I want to display animations, which IQM supports and my model format does not. The model is a cylinder. It loads fine in the IQM SDK OpenGL viewer, but not when I try to render it in DirectX9 using, for example (this is just to render the vertices):

IDirect3DDevice9 * device;
HRESULT hr = S_OK;
for(int i = 0; i < nummeshes; i++)
{
    iqmmesh &m = meshes[0];
    hr = device->DrawIndexedPrimitiveUP(D3DPT_TRIANGLELIST, 0, 3*m.num_triangles, m.num_triangles,
                                        &tris[m.first_triangle], D3DFMT_INDEX32,
                                        inposition, sizeof(unsigned int));
}

It renders like this (screenshot: "Incorrect"): the light grey bit that looks like two triangles in the middle is what is rendered (ignore the other stuff). Whereas it is meant to look like this (screenshot: "Correct", from a custom importer I designed, which matches what is displayed in Blender). Anyone have any suggestions on what might be going wrong?

    Read the article

  • TakeWhile and SkipWhile method in LINQ

    - by vik20000in
In my last post I talked about how to use the Take and Skip keywords to limit the number of records fetched by a query. But there is one problem with Take and Skip: the number of records to be fetched has to be passed to them up front. Many times the number of records to be fetched depends on the data itself, for example when we want to keep fetching records until a certain condition is met. Let's say we want to fetch numbers from the array until we reach one that is not less than 7. For this kind of query LINQ provides the TakeWhile method:

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
    var firstNumbersLessThan7 = numbers.TakeWhile(n => n < 7);

In the same way we can also use the SkipWhile method. SkipWhile skips records from the beginning of the sequence as long as the condition holds, and returns everything from the first non-matching record onward. In the example below we skip the leading numbers until we reach one that is divisible by 3. Note that this is not the same as a where clause, which would filter out every non-matching record; SkipWhile only skips the leading run, which makes it useful in many other situations:

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
    var allButFirst3Numbers = numbers.SkipWhile(n => n % 3 != 0);

Vikram
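TakeWhile (and SkipWhile) also have overloads that pass the element's position to the condition, which is handy when the predicate depends on the index as well as the value. A small sketch (my example):

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

    // take elements while each value is >= its zero-based position
    var headNumbers = numbers.TakeWhile((n, index) => n >= index);
    // 5 >= 0 and 4 >= 1, but 1 < 2, so the result is: 5, 4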

    Read the article

  • AS3: StageWidth for BOX2D?

    - by Gabriel Meono
I know Box2D uses meters, and AS3 uses pixels. I'm trying to create objects which are limited to the stageWidth. If I do this:

for (var i:int = 0; i<(stage.stageWidth); i++){...}

The animation will freeze, and this output appears:

TypeError: Error #1009: Cannot access a property or method of a null object reference.
    at Box2D.Collision::b2BroadPhase/CreateProxy()
    at Box2D.Collision.Shapes::b2Shape/CreateProxy()
    at Box2D.Dynamics::b2Body/CreateShape()
    at com.actionsnippet.qbox.objects::CircleObject/build()
    at com.actionsnippet.qbox::QuickObject/init()
    at com.actionsnippet.qbox::QuickObject()
    at com.actionsnippet.qbox.objects::CircleObject()
    at com.actionsnippet.qbox::QuickBox2D/create()
    at com.actionsnippet.qbox::QuickBox2D/addCircle()
    at BOX2D_Test_Tutorial_fla::MainTimeline/frame1()

Does anyone know how to fix this? Full code:

[SWF(width = 350, height = 600, frameRate = 60)]
import com.actionsnippet.qbox.*;

var sim:QuickBox2D = new QuickBox2D(this);
sim.createStageWalls();

// make a heavy circle
sim.addCircle({x:3, y:3, radius:0.4, density:1});

// create a few platforms
// make pins
for (var i:int = 0; i<(stage.stageWidth); i++){
    //End
    sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0});
    sim.addCircle({x:2 + i * 1.5, y:17, radius:0.1, density:0});
    sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
    sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
    //Mid end
    sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
    sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
    sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
    sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
    sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
}
sim.start();
sim.mouseDrag();

    Read the article

  • How to set Alpha value from pixel shader in SlimDX Direct3d9

    - by Yashwinder
I am trying to set the alpha value of the color as color.a = 0.5f in my pixel shader, but it always throws an exception. I can set color.r, color.g, and color.b, but setting color.a throws D3DERR_INVALIDCALL: Invalid call (-2005530516). I have just created a Direct3D9 device and assigned my pixel shader to it. My pixel shader code is below:

sampler2D ourImage : register(s0);

float4 main(float2 locationInSource : TEXCOORD) : COLOR
{
    float4 color = tex2D( ourImage , locationInSource.xy);
    color.a = 0.2;
    return color;
}

I am creating my pixel shader as:

byte[] byteCode = GiveFxFile(transitionEffect.PixelShaderFileName);
var shaderBytecode = ShaderBytecode.Compile(byteCode, "main", "ps_2_0", ShaderFlags.None);
var pixelShader = new PixelShader(device, ShaderBytecode);
_device.PixelShader = pixelShader;

I have initialized my device as:

var _presentParams = new PresentParameters
{
    Windowed = _isWindowedMode,
    BackBufferWidth = (int)SystemParameters.PrimaryScreenWidth,
    BackBufferHeight = (int)SystemParameters.PrimaryScreenHeight,
    // Enable Z-Buffer
    // This is not really needed in this sample but real applications generally use it
    EnableAutoDepthStencil = true,
    AutoDepthStencilFormat = Format.D16,
    // How to swap backbuffer in front and how many per screen refresh
    BackBufferCount = 1,
    SwapEffect = SwapEffect.Copy,
    BackBufferFormat = _direct3D.Adapters[0].CurrentDisplayMode.Format,
    PresentationInterval = PresentInterval.Immediate,
    DeviceWindowHandle = _windowHandle
};
_device = new Device(_direct3D, 0, DeviceType.Hardware, _windowHandle, deviceFlags | CreateFlags.Multithreaded, _presentParams);

    Read the article

  • Stage3D Camera problem

    - by Thomas Versteeg
I am trying to create a 2D Stage3D game where you can move the camera around the level in RTS style. I thought about using orthographic Matrix3D functions for this, but when I scroll, the whole "stage" also scrolls. This is the camera code:

public function Camera2D(width:int, height:int, zoom:Number = 1)
{
    resize(width, height);
    _zoom = zoom;
}

public function resize(width:Number, height:Number):void
{
    _width = width;
    _height = height;
    _projectionMatrix = makeMatrix(0, width, 0, height);
    _recalculate = true;
}

protected function makeMatrix(left:Number, right:Number, top:Number, bottom:Number, zNear:Number = 0, zFar:Number = 1):Matrix3D
{
    return new Matrix3D(Vector.<Number>([
        2 / (right - left), 0, 0, 0,
        0, 2 / (top - bottom), 0, 0,
        0, 0, 1 / (zFar - zNear), 0,
        0, 0, zNear / (zNear - zFar), 1
    ]));
}

public function get viewMatrix():Matrix3D
{
    if (_recalculate)
    {
        _recalculate = false;

        _viewMatrix.identity();
        _viewMatrix.appendTranslation( -_width / 2 - _x, -_height / 2 - y, 0);
        _viewMatrix.appendScale(_zoom, _zoom, 1);

        _renderMatrix.identity();
        _renderMatrix.append(_viewMatrix);
        _renderMatrix.append(_projectionMatrix);
    }
    return _renderMatrix;
}

Here are two screenshots to show what I mean: How do I only let the inside of the Stage3D viewport scroll, and not the whole stage?

    Read the article

  • SQL Authority News – Download SQL Server Data Type Conversion Chart

    - by pinaldave
    Data types are a very important concept in SQL Server, and there is often a need to convert them from one data type to another. I have seen that developers often get confused when they have to convert a data type. There are two important concepts when it comes to data type conversion. Implicit conversion: implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit conversions: explicit conversions are those conversions that require the CAST or CONVERT function to be specified. What this means is that if you are trying to convert a value from datetime2 to time, or from tinyint to int, SQL Server will automatically convert it (an implicit conversion) for you. However, if you are attempting to convert timestamp to smalldatetime, or datetime to int, you will need to explicitly convert them using either the CAST or CONVERT function with the appropriate parameters. A quick example of each kind of conversion is sketched below; you can see from the examples how we need both types of conversion in different situations. There are so many different data types that it is humanly impossible to know which conversions are implicit and which are explicit. Additionally, there are cases where conversion is not possible at all. Microsoft has published a chart where a grid displays the various conversion possibilities as well as a quick guide. Download SQL Server Data Type Conversion Chart Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
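    Since the original post's example screenshots are not reproduced in this listing, here is a minimal T-SQL sketch of both conversion kinds (my own example, following the data types the post names):

        -- Implicit: tinyint widens to int with no CAST/CONVERT.
        DECLARE @small TINYINT = 100;
        DECLARE @wide INT = @small;
        SELECT @wide;

        -- Explicit: datetime to int requires CAST or CONVERT.
        DECLARE @now DATETIME = GETDATE();
        SELECT CAST(@now AS INT) AS CastResult,
               CONVERT(INT, @now) AS ConvertResult;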

    Read the article

  • Why is this LTO4 Tape-drive not working

    - by Tim Haegele
    # modprobe mptsas
    # dmesg
    [ 4274.796796] scsi target7:0:0: mptsas: ioc1: delete device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac
    [ 4274.939579] mptsas 0000:01:00.0: PCI INT A disabled
    [ 4280.934531] Fusion MPT SAS Host driver 3.04.12
    [ 4280.934552] mptsas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    [ 4280.934692] mptbase: ioc2: Initiating bringup
    [ 4281.490183] ioc2: LSISAS1064E B3: Capabilities={Initiator}
    [ 4281.490203] mptsas 0000:01:00.0: setting latency timer to 64
    [ 4293.555274] scsi8 : ioc2: LSISAS1064E B3, FwRev=011e0000h, Ports=1, MaxQ=277, IRQ=16
    [ 4293.574906] mptsas: ioc2: attaching ssp device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac
    [ 4293.576471] scsi 8:0:0:0: Sequential-Access IBM ULTRIUM-HH4 B6W1 PQ: 0 ANSI: 6
    [ 4293.578549] st 8:0:0:0: Attached scsi tape st0
    [ 4293.578550] st 8:0:0:0: st0: try direct i/o: yes (alignment 512 B)
    [ 4293.578577] st 8:0:0:0: Attached scsi generic sg5 type 1

    # mt -f /dev/st0 status
    mt: /dev/st0: rmtopen failed: Input/output error

    # dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
    dd: opening `/dev/nst0': Input/output error

    I am running Debian Squeeze, 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux. The server is a Fujitsu TX140 with a Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS controller. The tape and hardware are new.

    Read the article

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network-stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

    - change cables between the wall jack and device.
    - change patch cables between the patch panel and switch port(s).
    - try different switch ports within the 2960 stack.
    - change end-user devices with known-good equipment (new phones, different PCs).
    - clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
    - pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
    - change power strips on the end-user side.
    - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9. (clean)
    - test cable runs with a Tripp-Lite cable tester. (clean)
    - run diagnostics on the switch stack members. (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

    Interface Speed Local pair Pair length        Remote pair Pair status
    --------- ----- ---------- ------------------ ----------- --------------------
    Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                    Pair B     75 +/- 0 meters    Pair A      Normal
                    Pair C     77 +/- 0 meters    Pair D      Normal
                    Pair D     79 +/- 0 meters    Pair C      Normal

    Read the article

  • Code Trivia #4

    - by João Angelo
    Got the inspiration for this one from a recent Stack Overflow question. What should the following code output, and why?

        class Program
        {
            class Author
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }

                public override string ToString()
                {
                    return LastName + ", " + FirstName;
                }
            }

            static void Main()
            {
                Author[] authors = new[]
                {
                    new Author { FirstName = "John", LastName = "Doe" },
                    new Author { FirstName = "Jane", LastName = "Doe" }
                };
                var line1 = String.Format("Authors: {0} and {1}", authors);
                Console.WriteLine(line1);

                string[] serial = new string[] { "AG27H", "6GHW9" };
                var line2 = String.Format("Serial: {0}-{1}", serial);
                Console.WriteLine(line2);

                int[] version = new int[] { 1, 0 };
                var line3 = String.Format("Version: {0}.{1}", version);
                Console.WriteLine(line3);
            }
        }

    Update: The code will print the first two lines

        // Authors: Doe, John and Doe, Jane
        // Serial: AG27H-6GHW9

    and then throw an exception on the third call to String.Format, because array covariance is not supported for value types. Given this, the third call to String.Format will not resolve to String.Format(string, params object[]) like the previous two, but to String.Format(string, object), which fails to provide the second argument for the specified format and therefore causes the exception.
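    A possible fix, sketched here as an addition to the original trivia: box the integers into an object[] yourself, so overload resolution picks the params overload again:

        // Requires using System.Linq. int[] has no covariant conversion to
        // object[], so box each element explicitly before formatting.
        int[] version = new int[] { 1, 0 };
        var line3 = String.Format("Version: {0}.{1}", version.Cast<object>().ToArray());
        Console.WriteLine(line3); // prints "Version: 1.0"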

    Read the article

  • 2D Procedural Terrain with box2d Assets

    - by Alex
    I'm making a game that involves a tire moving through terrain that is generated randomly over 1000 points. The terrain is always a downward incline, like so (image omitted from this listing). The actual Box2D terrain extends one screen width behind and in front of the circular character. I'm now at a stage where I need to add gameplay elements to the terrain, such as chasms or physical objects (like the demo polygon in the picture), and I am not sure of the best way to structure the procedural generation of the terrain and objects. I currently have a very simple for loop, like so:

        for (int i = 0; i < kMaxHillPoints; i++) {
            hillKeyPoints[i] = CGPointMake(terrainX, terrainY);
            if (i % 50 == 0) {
                i += [self generateCasmAtIndex:i];
            }
            terrainX += winsize.width / 20;
            terrainY -= random() % ((int) winsize.height / 20);
        }

    with the generateCasmAtIndex function adding points to the hillKeyPoints array and incrementing the for loop by the required amount. If I want to generate Box2D objects as well for specific terrain elements, I'll also have to keep track of the current position of the player and have some sort of array of Box2D objects that need to be created at certain locations. I am not sure of an efficient way to accomplish this procedural generation of terrain elements with accompanying Box2D objects. My thoughts are:

    1) Have a function for each terrain element (chasm, jump, etc.) which adds elements to be drawn to an array that is checked on each game step - similar to what I've shown above.

    2) Create an array of terrain-element objects that string together and are looped over to create the terrain and generate the Box2D objects. Each object would hold an array of points to be drawn and an array of accompanying Box2D objects.

    Any help on this is much appreciated, as I cannot see a 'clean' solution.

    Read the article

  • How can the UDP packet size be 65535 when Ethernet does not allow frames larger than 1500 bytes?

    - by nikku
    I am using fast Ethernet at 100 Mbps, whose frame size is less than 1500 bytes (1472 bytes for the payload, as per my textbook). Despite that, I was able to send and receive a UDP packet with a message size of 65507 bytes, which means the packet size was 65507 + 20 (IP header) + 8 (UDP header) = 65535. If the frame's payload is at most 1472 bytes (as per my textbook), how can the IP packet size be greater than that, here 65535? I used sender code like this (shorthand; see the corrected sketch below):

        char buffer[100000];
        for (int i = 1; i < 100000; i++) {
            int len = send(socket_id, buffer, i);
            printf("%d\n", len);
        }

    and receiver code like this:

        while (len = recv(socket_id, buffer, 100000)) {
            printf("%d\n", len);
        }

    I observed that send returns -1 for i > 65507, and that recv prints, i.e. receives, a packet of at most 65507 bytes.
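    The piece that reconciles the two numbers, offered here as an editorial note rather than part of the original question, is IP fragmentation: the IP layer splits a datagram of up to 65535 bytes into frame-sized fragments (roughly 1480 bytes of payload each on Ethernet) and reassembles them at the receiver, so UDP never sees the 1500-byte frame limit directly. A corrected sketch of the probe loop using the real four-argument send() signature (the code above was shorthand):

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <stdio.h>

        /* Probe the maximum UDP payload: send() returns -1 (EMSGSIZE) once
         * payload + 8-byte UDP header + 20-byte IP header would exceed 65535. */
        void probe(int socket_id) {
            static char buffer[70000];
            for (int i = 1; i < 70000; i++) {
                ssize_t len = send(socket_id, buffer, (size_t)i, 0);
                printf("%zd\n", len);
            }
        }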

    Read the article

  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this function I can create a sphere:

        public Ball(GL10 gl, float radius) {
            ByteBuffer bb = ByteBuffer.allocateDirect(40000);
            bb.order(ByteOrder.nativeOrder());
            sphereVertex = bb.asFloatBuffer();
            points = build();
        }

        private int build() {
            double dTheta = STEP * Math.PI / 180;
            double dPhi = dTheta;
            int points = 0;

            for (double phi = -(Math.PI / 2); phi <= Math.PI / 2; phi += dPhi) {
                for (double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                    sphereVertex.put((float) (raduis * Math.sin(phi) * Math.cos(theta)));
                    sphereVertex.put((float) (raduis * Math.sin(phi) * Math.sin(theta)));
                    sphereVertex.put((float) (raduis * Math.cos(phi)));
                    points++;
                }
            }
            sphereVertex.position(0);
            return points;
        }

        public void draw() {
            texture.bind();
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    My problem now is that I want to use a texture for the sphere, but only a black ball is created (of course, because the top right corner of the texture is black). I use these texture coordinates because I want to use the whole texture:

        0|0  0|1
        1|1  1|0

    That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?
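    What is usually missing in this situation, sketched here as a suggestion rather than a confirmed fix: the sphere needs one texture coordinate per vertex, derived from the same angles, plus an enabled texcoord array. Four corner coordinates only describe a quad; without per-vertex UVs, GL samples a single spot of the texture, which matches the all-black result described. A rough sketch, assuming a FloatBuffer named sphereTexCoords allocated like sphereVertex:

        // Inside build(), alongside each vertex: map angles into [0,1] UV space.
        // u wraps around the equator, v runs pole to pole.
        float u = (float) (theta / (2 * Math.PI));
        float v = (float) (phi / Math.PI + 0.5);
        sphereTexCoords.put(u);
        sphereTexCoords.put(v);

        // Inside draw(), before glDrawArrays:
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, sphereTexCoords);
        // ... draw ...
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);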

    Read the article

  • Movement and Collision with AABB

    - by Jeremy Giberson
    I'm having a little difficulty figuring out the following scenarios. (Diagram: http://i.stack.imgur.com/8lM6i.png)

    In scenario A, the moving entity has fallen to (and slightly into) the floor. The current position represents the projected position that will occur if I apply the acceleration and velocity as usual, without worrying about collision. The next position represents the corrected projection after the collision check. The resulting end position is that the falling entity now rests ON the floor - that is, in a consistent state of collision, sharing its bottom X axis with the floor's top X axis. My current update loop looks like the following:

        // figure out forces & accelerations and project an object's next position
        // check collision occurrence from current position -> projected position
        // if a collision occurs, adjust projection position

    This seems to be working for the scenario of my object falling to the floor. However, the situation becomes sticky when trying to figure out scenarios B and C.

    In scenario B, I'm attempting to move along the floor on the X axis (the player is pressing the right direction button); additionally, gravity is pulling the object into the floor. The problem is, when the object attempts to move, the collision detection code recognizes that the object is already colliding with the floor to begin with, and auto-corrects any movement back to where it was before.

    In scenario C, I'm attempting to jump off the floor. Again, because the object is already in constant collision with the floor, when the collision routine checks that moving from the current position to the projected position doesn't result in a collision, it fails, because at the beginning of the motion the object is already colliding.

    How do you allow movement along the edge of an object? How do you allow movement away from an object you are already colliding with? (A sketch of one common workaround follows the code below.)

    Extra info: my collision routine is based on the AABB sweep test from an old Gamasutra article, http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php?page=3 - my bounding box implementation is based on top-left/bottom-right instead of midpoint/extents, so my min/max functions are adjusted. Otherwise, here is my bounding box class with collision routines:

        public class BoundingBox {
            public XYZ topLeft;
            public XYZ bottomRight;

            public BoundingBox(float x, float y, float z, float w, float h, float d) {
                topLeft = new XYZ();
                bottomRight = new XYZ();
                topLeft.x = x;
                topLeft.y = y;
                topLeft.z = z;
                bottomRight.x = x + w;
                bottomRight.y = y + h;
                bottomRight.z = z + d;
            }

            public BoundingBox(XYZ position, XYZ dimensions, boolean centered) {
                topLeft = new XYZ();
                bottomRight = new XYZ();
                topLeft.x = position.x;
                topLeft.y = position.y;
                topLeft.z = position.z;
                bottomRight.x = position.x + (centered ? dimensions.x / 2 : dimensions.x);
                bottomRight.y = position.y + (centered ? dimensions.y / 2 : dimensions.y);
                bottomRight.z = position.z + (centered ? dimensions.z / 2 : dimensions.z);
            }

            /** Check if a point lies inside a bounding box. */
            public static boolean isPointInside(BoundingBox box, XYZ point) {
                if (box.topLeft.x <= point.x && point.x <= box.bottomRight.x
                 && box.topLeft.y <= point.y && point.y <= box.bottomRight.y
                 && box.topLeft.z <= point.z && point.z <= box.bottomRight.z)
                    return true;
                return false;
            }

            /**
             * Check for overlap between two bounding boxes using the separating axis
             * theorem: if two boxes are separated on any axis, they cannot be overlapping.
             */
            public static boolean isOverlapping(BoundingBox a, BoundingBox b) {
                XYZ dxyz = new XYZ(b.topLeft.x - a.topLeft.x,
                                   b.topLeft.y - a.topLeft.y,
                                   b.topLeft.z - a.topLeft.z);
                // if b - a is positive, a is first on the axis and we should use its extent
                // if b - a is negative, b is first on the axis and we should use its extent
                // check for x axis separation
                if ((dxyz.x >= 0 && a.bottomRight.x - a.topLeft.x < dxyz.x)
                    // negative scale, reverse extent sum, flip equality
                    || (dxyz.x < 0 && b.topLeft.x - b.bottomRight.x > dxyz.x))
                    return false;
                // check for y axis separation
                if ((dxyz.y >= 0 && a.bottomRight.y - a.topLeft.y < dxyz.y)
                    || (dxyz.y < 0 && b.topLeft.y - b.bottomRight.y > dxyz.y))
                    return false;
                // check for z axis separation
                if ((dxyz.z >= 0 && a.bottomRight.z - a.topLeft.z < dxyz.z)
                    || (dxyz.z < 0 && b.topLeft.z - b.bottomRight.z > dxyz.z))
                    return false;
                // not separated on any axis, overlapping
                return true;
            }

            public static boolean isContactEdge(int xyzAxis, BoundingBox a, BoundingBox b) {
                switch (xyzAxis) {
                    case XYZ.XCOORD:
                        return a.topLeft.x == b.bottomRight.x || a.bottomRight.x == b.topLeft.x;
                    case XYZ.YCOORD:
                        return a.topLeft.y == b.bottomRight.y || a.bottomRight.y == b.topLeft.y;
                    case XYZ.ZCOORD:
                        return a.topLeft.z == b.bottomRight.z || a.bottomRight.z == b.topLeft.z;
                }
                return false;
            }

            /** Sweep test min extent value. */
            public static float min(BoundingBox box, int xyzCoord) {
                switch (xyzCoord) {
                    case XYZ.XCOORD: return box.topLeft.x;
                    case XYZ.YCOORD: return box.topLeft.y;
                    case XYZ.ZCOORD: return box.topLeft.z;
                    default: return 0f;
                }
            }

            /** Sweep test max extent value. */
            public static float max(BoundingBox box, int xyzCoord) {
                switch (xyzCoord) {
                    case XYZ.XCOORD: return box.bottomRight.x;
                    case XYZ.YCOORD: return box.bottomRight.y;
                    case XYZ.ZCOORD: return box.bottomRight.z;
                    default: return 0f;
                }
            }

            /**
             * Test if bounding box A will overlap bounding box B at any point when box A
             * moves from position 0 to position 1 and box B moves from position 0 to
             * position 1. Note: the sweep test assumes the boxes' dimensions do not change.
             *
             * @param a0 box a starting position
             * @param a1 box a ending position
             * @param b0 box b starting position
             * @param b1 box b ending position
             * @param aCollisionOut xyz of box a's position when/if a collision occurs
             * @param bCollisionOut xyz of box b's position when/if a collision occurs
             */
            public static boolean sweepTest(BoundingBox a0, BoundingBox a1,
                                            BoundingBox b0, BoundingBox b1,
                                            XYZ aCollisionOut, XYZ bCollisionOut) {
                // solve in reference to A
                XYZ va = new XYZ(a1.topLeft.x - a0.topLeft.x,
                                 a1.topLeft.y - a0.topLeft.y,
                                 a1.topLeft.z - a0.topLeft.z);
                XYZ vb = new XYZ(b1.topLeft.x - b0.topLeft.x,
                                 b1.topLeft.y - b0.topLeft.y,
                                 b1.topLeft.z - b0.topLeft.z);
                XYZ v = new XYZ(vb.x - va.x, vb.y - va.y, vb.z - va.z);

                // check for initial overlap
                if (BoundingBox.isOverlapping(a0, b0)) {
                    // java pass-by-ref/value gotcha: have to modify the value, can't reassign it
                    aCollisionOut.x = a0.topLeft.x;
                    aCollisionOut.y = a0.topLeft.y;
                    aCollisionOut.z = a0.topLeft.z;
                    bCollisionOut.x = b0.topLeft.x;
                    bCollisionOut.y = b0.topLeft.y;
                    bCollisionOut.z = b0.topLeft.z;
                    return true;
                }

                // overlap min/maxs
                XYZ u0 = new XYZ();
                XYZ u1 = new XYZ(1, 1, 1);
                float t0, t1;

                // iterate the axes and find overlap times (x=0, y=1, z=2)
                for (int i = 0; i < 3; i++) {
                    float aMax = max(a0, i);
                    float aMin = min(a0, i);
                    float bMax = max(b0, i);
                    float bMin = min(b0, i);
                    float vi = XYZ.getCoord(v, i);
                    if (aMax < bMax && vi < 0)
                        XYZ.setCoord(u0, i, (aMax - bMin) / vi);
                    else if (bMax < aMin && vi > 0)
                        XYZ.setCoord(u0, i, (aMin - bMax) / vi);
                    if (bMax > aMin && vi < 0)
                        XYZ.setCoord(u1, i, (aMin - bMax) / vi);
                    else if (aMax > bMin && vi > 0)
                        XYZ.setCoord(u1, i, (aMax - bMin) / vi);
                }

                // get times of collision
                t0 = Math.max(u0.x, Math.max(u0.y, u0.z));
                t1 = Math.min(u1.x, Math.min(u1.y, u1.z));

                // collision only occurs if t0 <= t1
                if (t0 <= t1 && t0 != 0) { // not t0 == 0 because we already tested it!
                    // t0 is the normalized time of the collision, so the positions of the
                    // bounding boxes would be their original position + velocity * time
                    aCollisionOut.x = a0.topLeft.x + va.x * t0;
                    aCollisionOut.y = a0.topLeft.y + va.y * t0;
                    aCollisionOut.z = a0.topLeft.z + va.z * t0;
                    bCollisionOut.x = b0.topLeft.x + vb.x * t0;
                    bCollisionOut.y = b0.topLeft.y + vb.y * t0;
                    bCollisionOut.z = b0.topLeft.z + vb.z * t0;
                    return true;
                } else {
                    return false;
                }
            }
        }
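    One common workaround, offered as a suggestion rather than something from the original post: resolve each collision to a position a tiny epsilon away from the surface, and clip only the velocity component that points into the surface. The mover then never begins a frame embedded in the floor, tangential movement (scenario B) passes the sweep test, and motion away from the surface (scenario C) is untouched. A rough sketch, assuming an XYZ vector type like the one above and a hypothetical resolveContact helper:

        // Hypothetical helper: separate the mover from the surface and keep
        // only the velocity components that don't push into it.
        static final float SKIN = 0.001f; // small gap so the next sweep starts clean

        static void resolveContact(XYZ position, XYZ velocity, XYZ contactNormal) {
            // push the mover out along the contact normal by a tiny margin
            position.x += contactNormal.x * SKIN;
            position.y += contactNormal.y * SKIN;
            position.z += contactNormal.z * SKIN;

            // remove only the into-surface component of velocity;
            // sliding along the floor and jumping away both survive
            float vDotN = velocity.x * contactNormal.x
                        + velocity.y * contactNormal.y
                        + velocity.z * contactNormal.z;
            if (vDotN < 0) {
                velocity.x -= vDotN * contactNormal.x;
                velocity.y -= vDotN * contactNormal.y;
                velocity.z -= vDotN * contactNormal.z;
            }
        }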

    Read the article

  • MySQL – Beginning Temporary Tables in MySQL

    - by Pinal Dave
    MySQL supports temporary tables to store result sets temporarily for a given connection. Temporary tables are created with the TEMPORARY keyword along with the CREATE TABLE statement. Let us create a temporary table named temp:

        CREATE TEMPORARY TABLE TEMP (id INT);

    Now you can find out the column names using the DESC command:

        DESC TEMP;

    which lists the table's single column, id. This table can be accessed only from the current connection; it can be used like a permanent table, and it is automatically dropped when the connection is closed. However, you cannot find temporary tables using the INFORMATION_SCHEMA.TABLES system view - it only lists permanent tables. MySQL usually stores the data of temporary tables in memory, processed by the MEMORY storage engine, but if the data size is too large, MySQL automatically converts the table to an on-disk table using the MyISAM engine. You can also create a permanent table with the same name as a temporary table in the same connection; however, the structure of the permanent table is visible only once the temporary table with the same name is dropped. Let us create a permanent table with the same name, temp, as below:

        CREATE TABLE TEMP (id INT, names VARCHAR(100));

    Now running the following command still gives you the structure of the temporary table temp created earlier:

        DESC TEMP;

    You can drop the temporary table using the DROP TEMPORARY TABLE command:

        DROP TEMPORARY TABLE TEMP;

    After you have dropped the temporary table, run the following command:

        DESC TEMP;

    Now you will see the structure of the permanent table named temp. In summary: if there is a temporary table in MySQL, it gets priority over the permanent table of the same name in the session. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL

    Read the article

  • Bone creation in XNA Content Pipeline

    - by cod3monk3y
    I'm trying to manually create a ModelContent instance that includes custom bone data in a custom ContentProcessor in the XNA Content Pipeline. I can't seem to create or assign manually created bone data, due to either private constructors or read-only collections (at every turn). The code I have right now, which creates a single-triangle ModelContent that I'd like to create a bone for, is:

        MeshContent mc = new MeshContent();
        mc.Positions.Add(new Vector3(-10, 0, 0));
        mc.Positions.Add(new Vector3(0, 10, 0));
        mc.Positions.Add(new Vector3(10, 0, 0));

        GeometryContent gc = new GeometryContent();
        gc.Indices.AddRange(new int[] { 0, 1, 2 });
        gc.Vertices.AddRange(new int[] { 0, 1, 2 });
        mc.Geometry.Add(gc);

        // Create normals
        MeshHelper.CalculateNormals(mc, true);

        // finally, convert it to a model
        ModelContent model = context.Convert<MeshContent, ModelContent>(mc, "ModelProcessor");

    The documentation on XNA is amazingly sparse. I've been referencing the class diagrams created by DigitalRune and Shawn Hargreaves' blog, but I haven't found anything on creating bone content. Once the ModelContent is created, it's not possible to add bones, because the Bones collection is read-only. And it seems the only way to create the ModelContent instance is to call the standard ModelProcessor via ContentProcessorContext.Convert. So it's a bit of a catch-22. The BoneContent class has a constructor but no methods except those inherited from NodeContent... though now (true to form) maybe I've realized the solution by asking the question: should I create a root NodeContent with two children - one MeshContent, and one BoneContent as the root of my skeleton - then pass the root NodeContent to ContentProcessorContext.Convert? Off to try that now...
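    Following that closing hunch, here is a sketch of the structure it describes (untested, and my assumption of how the pieces fit): parent both the mesh and the skeleton under one NodeContent and convert the root, so that ModelProcessor discovers the bone hierarchy itself:

        // Hypothetical arrangement: ModelProcessor walks the NodeContent tree,
        // so bones added as children should survive into ModelContent.Bones.
        NodeContent root = new NodeContent { Name = "Root" };

        BoneContent skeleton = new BoneContent { Name = "RootBone" };
        skeleton.Transform = Matrix.Identity;

        root.Children.Add(mc);        // the MeshContent built above
        root.Children.Add(skeleton);  // the skeleton root

        ModelContent model = context.Convert<NodeContent, ModelContent>(root, "ModelProcessor");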

    Read the article

  • Perpendicularity of a normal and a velocity?

    - by Milo
    I'm trying to fake angular velocity on my vehicle when it hits a wall by taking the dot product of the normal of the edge the car is hitting and the vehicle's velocity:

        Vector2D normVel = new Vector2D();
        normVel.equals(vehicle.getVelocity());
        normVel.normalize();

        float dot = normVel.dot(outNorm);
        dot = -dot;
        vehicle.setAngularVelocity(vehicle.getAngularVelocity()
                + (dot * vehicle.getVelocity().length() * 0.01f));

    outNorm is the normal of the wall. The problem is, it only works half the time. It seems that no matter what, the car always turns clockwise. If the car should head clockwise:

        --------------------------------------
            /
           /

    I want the angular velocity to be positive; otherwise, if it needs to go counter-clockwise:

        --------------------------------------
            \
             \

    then the angular velocity should be negative... What should I change to achieve this? Thanks.

    Hmmm... I'm not sure why this is not working...

        for (int i = 0; i < buildings.size(); ++i) {
            e = buildings.get(i);
            ArrayList<Vector2D> colPts = vehicle.getRect().getCollsionPoints(e.getRect());
            float dist = OBB2D.collisionResponse(vehicle.getRect(), e.getRect(), outNorm);
            for (int u = 0; u < colPts.size(); ++u) {
                Vector2D p = colPts.get(u).subtract(vehicle.getRect().getCenter());
                vehicle.setTorque(vehicle.getTorque() + p.cross(outNorm));
            }
        }
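    A hedged suggestion, not part of the original post: the dot product is unsigned with respect to winding, so it cannot distinguish turning directions. The 2D cross product (perp-dot) can - and the code's own Vector2D type appears to expose it as cross(). Assuming cross() returns a scalar, the impulse might look like this:

        // Sketch: signed turning impulse. cross > 0 and cross < 0 correspond to
        // opposite winding directions, so the car can spin either way
        // depending on the approach angle.
        float cross = normVel.cross(outNorm);
        vehicle.setAngularVelocity(vehicle.getAngularVelocity()
                + cross * vehicle.getVelocity().length() * 0.01f);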

    Read the article

  • Filtering GridViewComboBoxColumn in RadGridView for WPF

    The GridViewComboBoxColumn is used to display lookup data in a user-friendly manner. For our demo, we bind RadGridView for WPF to a collection of custom Location objects. As you may notice, each location has a selectable Country field. Here is the underlying 'data model':

        public class Location
        {
            public int CountryID { get; set; }
            public string CityName { get; set; }
        }

        public class Country
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

    The Location object contains the integer CountryID. Each CountryID corresponds to a certain country, and with the help of GridViewComboBoxColumn we see human-readable country names instead of the underlying numeric IDs. Now the problem: unfortunately, the filter control does not know that our CountryIDs are associated with Country objects, so it displays the ...
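    For context, a minimal sketch of how such a column is typically wired up - my assumption of the setup the excerpt implies, using GridViewComboBoxColumn property names as I recall them from Telerik's API:

        // Hypothetical column definition: show Country.Name while storing CountryID.
        var column = new GridViewComboBoxColumn
        {
            DataMemberBinding = new Binding("CountryID"),
            ItemsSource = countries,            // the Country lookup collection
            SelectedValueMemberPath = "ID",     // the value stored on Location
            DisplayMemberPath = "Name"          // the text shown to the user
        };
        radGridView.Columns.Add(column);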

    Read the article

< Previous Page | 382 383 384 385 386 387 388 389 390 391 392 393  | Next Page >