Search Results

Search found 4077 results on 164 pages for 'throw'.


  • The dislikes of TDD

    - by andrewstopford
I enjoy debates about TDD and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on. "The idea of having unit tests that cover virtually every line of code that I’ve written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don’t feel like I get sufficient benefits from it." Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a surefire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design. "Don’t get me wrong, I’m a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is 'manual'. Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them." The problem with this is that a UI can make assumptions on your code that you then just unit test around, and very quickly the design becomes bad and your technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before. "This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that 'back into a solution'. By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they’ve got a monstrosity of special cases each designed to handle one specific scenario. There’s way more code than there should be and it’s way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that’s a relatively monolithic exercise." TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, assuming that TDD replaces a methodology is a mistake: combine it with whatever works for your team/business, but only good communication will help. A good naming scheme/structure for folders, files and tests can help you and your team isolate which tests are for what.

    Read the article

  • Kanban Tools Review

    - by GeekAgilistMercenary
The first two sessions on Sunday were Collaboration and why it is so hard and the following, which was a perfect follow-up session, on Kanban.  In that second session two online SaaS-style tools were mentioned: AgileZen and LeanKit.  I decided right then and there that I would throw together some first impressions and set up some sample projects.  I did this by setting up an account and creating the projects. AgileZen Account Creation Setting up the initial account required an e-mail verification, which is understandable.  Within a few seconds it was mailed out and I was logged in. Setting Up the Kanban Board The initial setup of the board was pretty easy.  I maybe clicked around an extra few times, but overall everything I needed to use the tool was immediately available.  The representation of everything was very similar to what one expects in a real Kanban Board too.  This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area to allow for visibility. Each of the board items is just like a post-it, being blue, grey, green, pink, or one of another few colors.  Dragging them onto each swim lane on the board was flawless, making changes through the work super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately.  One can change them, but when you know you immediately need a Ready Lane, Working Lane, and a Complete Lane it is nice to just have them right in front of you in the interface.  In addition, the Backlog is simply a little tab on the left hand side.  This is perfect for the Backlog Queue: out of the way, with the focus on the primary items. Once I got the items onto the board I was easily able to get back to the actual work at hand versus playing around with the tool.  The fact that it was so easy to use, with a fast and easy UX and overall a great layout, put me back to work on things I needed to do versus sitting and playing with the tool.  That, in the end, is the key to using these tools. LeanKit Kanban Account Creation Setting up the account got me straight into the online tool.  This I thought was pretty cool. Setting Up the Kanban Board Setting up the Kanban Board within LeanKit was a bit of trouble.  There were multiple UX issues in regard to process and intuitiveness.  LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look.  The swim lanes, in my humble opinion, should be set up immediately without any manipulation, with the most common lanes: ready, working, and complete. The other UX hiccup that I had a problem with is that as soon as I managed to get the swim lanes into place, I wanted to remove the redundant Backlog Lane, which I had accidentally added as a lane when the Backlog should really be a bucket off to the side.  Then on top of that I screwed up and added an item inside the lane, which then prevented me from deleting the lane.  I had to go back out of the lane manipulation, remove the item, and then remove the excess lane.  Summary LeanKit wasn't a bad interface, it just wasn't as good as AgileZen.  The AgileZen interface was just better UX design overall.  AgileZen also presents a much better user interface graphical design altogether.  It is much closer to what the Kanban Board would look like if it were a physical Kanban Board.
Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is similar to a real Kanban Board is actually a pretty big deal. This is an image that shows the two Kanban Boards side by side.  The one on the left is AgileZen and the one on the right is LeanKit. Original Entry

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error. This is an IIS error, not an ASP.NET error, so this doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of extensionless URLs. Note that it’s clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL and failing rather than firing it into ASP.NET. As I mentioned, on my local machine this all worked fine, and to make sure local and live setups matched I re-copied my Web.config, double-checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However, no workey on the server with IIS 7.0!!! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true:

    <system.webServer>
      <modules runAllManagedModulesForAllRequests="true">
        <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
      </modules>
    </system.webServer>

And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However, on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work. Oddly, on IIS 7.5 it appears that you can’t even turn off the behavior – setting runAllManagedModulesForAllRequests="false" had no effect at all and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like that with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story: never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out because I assumed that, because my web.config on the local machine was fine and working and I had copied all relevant web.config data to the server, it couldn’t be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense settings in the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSs in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all. More info on Extensionless URLs in IIS Want to find out more about exactly how extensionless URLs work on IIS 7?
Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic! © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows
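To make the scenario concrete, here is a minimal sketch (not from Rick's post; the route name, URL pattern and page are purely illustrative, and the post does not say whether Web Forms or MVC routing was in play) of the kind of ASP.NET 4.0 route registration that produces the extensionless URLs IIS 7.0 refuses to hand to ASP.NET without the flag:

    using System;
    using System.Web.Routing;

    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // A URL like "/albums/123" has no file extension, so IIS 7.0 lets it
            // fall through to the StaticFile handler unless
            // runAllManagedModulesForAllRequests routes it into ASP.NET.
            RouteTable.Routes.MapPageRoute(
                "AlbumRoute",        // illustrative route name
                "albums/{id}",       // extensionless URL pattern
                "~/Albums.aspx");    // illustrative physical page
        }
    }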

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .Net 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VM's on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense. The fix is sort of a hack, where I don't setup the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack, because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. 
In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod was either failing silently, and running again later, or it was getting to that code after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious because it renders the mobile/alternate view functionality very much broken.
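A minimal sketch of the workaround described above (the type and member names are illustrative, not the actual POP Forums code): the HttpModule only subscribes to BeginRequest in Init() and defers any container lookups until the first request, by which time Application_Start() has registered the repositories.

    using System.Web;

    public class ErrorLoggingModule : IHttpModule
    {
        private static bool _initialized;
        private static readonly object _sync = new object();

        public void Init(HttpApplication context)
        {
            // Registered via PreApplicationStartMethod, so under ASP.NET 4.5 in IIS
            // this can run before Application_Start() has mapped the repository
            // interfaces. Don't touch the container here; wait for a request.
            context.BeginRequest += (sender, args) => EnsureInitialized();
        }

        private static void EnsureInitialized()
        {
            if (_initialized) return;
            lock (_sync)
            {
                if (_initialized) return;
                // Safe to resolve ISettingsRepository etc. from the container here;
                // Application_Start() has already run by the time requests arrive.
                _initialized = true;
            }
        }

        public void Dispose() { }
    }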

    Read the article

  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP, and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and often blogs here. I am a big fan of the Dynamic Management Views (DMV) scripts of Glenn. His scripts are extremely popular and the reality is that he has inspired me to start this series with his famous DMV which I mentioned in the very first wait stats blog post (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved it). Here is his excellent blog post on this subject of wait stats: Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts.  Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea of what is causing the problem. I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well. Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits mean that they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important “missing” indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom. Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); command. This is especially important if you have made any configuration or index changes, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with the old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared).  If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats, and then wait a while to see what your new top wait stats are. At any rate, enjoy Pinal Dave’s series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog) Read all the posts in the Wait Types and Queue series. 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Enhanced Dynamic Filtering

    - by Ricardo Peres
    Remember my last post on dynamic filtering? Well, this time I'm extending the code in order to allow two levels of querying: Match type, represented by the following options: public enum MatchType { StartsWith = 0, Contains = 1 } And word match: public enum WordMatch { AnyWord = 0, AllWords = 1, ExactPhrase = 2 } You can combine the two levels in order to achieve the following combinations: MatchType.StartsWith + WordMatch.AnyWord Matches any record that starts with any of the words specified MatchType.StartsWith + WordMatch.AllWords Not available: does not make sense, throws an exception MatchType.StartsWith + WordMatch.ExactPhrase Matches any record that starts with the exact specified phrase MatchType.Contains + WordMatch.AnyWord Matches any record that contains any of the specified words MatchType.Contains + WordMatch.AllWords Matches any record that contains all of the specified words MatchType.Contains + WordMatch.ExactPhrase Matches any record that contains the exact specified phrase Here is the code: public static IList Search(IQueryable query, Type entityType, String dataTextField, String phrase, MatchType matchType, WordMatch wordMatch, Int32 maxCount) { String [] terms = phrase.Split(' ').Distinct().ToArray(); StringBuilder result = new StringBuilder(); PropertyInfo displayProperty = entityType.GetProperty(dataTextField); IList searchList = null; MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m = m.Name == "OrderBy").ToArray() [ 0 ].MakeGenericMethod(entityType, displayProperty.PropertyType); MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(entityType); MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m = m.Name == "Where").ToArray() [ 0 ].MakeGenericMethod(entityType); MethodInfo distinctMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m = m.Name == "Distinct" && m.GetParameters().Length == 1).Single().MakeGenericMethod(entityType); MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(entityType); MethodInfo matchMethod = typeof(String).GetMethod ( (matchType == MatchType.StartsWith) ? 
"StartsWith" : "Contains", new Type [] { typeof(String) } ); MemberExpression member = Expression.MakeMemberAccess ( Expression.Parameter(entityType, "n"), displayProperty ); MethodCallExpression call = null; LambdaExpression where = null; LambdaExpression orderBy = Expression.Lambda ( member, member.Expression as ParameterExpression ); switch (matchType) { case MatchType.StartsWith: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i ()); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: throw (new Exception("The match type StartsWith is not supported with word match AllWords")); } break; case MatchType.Contains: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i ()); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i ()); where = Expression.Lambda ( Expression.AndAlso ( where.Body, exp ), where.Parameters.ToArray() ); } break; } break; } query = orderByMethod.Invoke(null, new Object [] { query, orderBy }) as IQueryable; query = whereMethod.Invoke(null, new Object [] { query, where }) as IQueryable; if (maxCount != 0) { query = takeMethod.Invoke(null, new Object [] { query, maxCount }) as IQueryable; } searchList = toListMethod.Invoke(null, new Object [] { query }) as IList; return (searchList); } And this is how you'd use it: IQueryable query = ctx.MyEntities; IList list = Search(query, typeof(MyEntity), "Name", "Ricardo Peres", MatchType.Contains, WordMatch.ExactPhrase, 10 /*0 for all*/); SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf'; SyntaxHighlighter.brushes.CSharp.aliases = ['c#', 'c-sharp', 'csharp']; SyntaxHighlighter.all();

    Read the article

  • Avoiding new operator in JavaScript -- the better way

    - by greengit
    Warning: This is a long post. Let's keep it simple. I want to avoid having to prefix the new operator every time I call a constructor in JavaScript. This is because I tend to forget it, and my code screws up badly. The simple way around this is this... function Make(x) { if ( !(this instanceof arguments.callee) ) return new arguments.callee(x); // do your stuff... } But, I need this to accept variable no. of arguments, like this... m1 = Make(); m2 = Make(1,2,3); m3 = Make('apple', 'banana'); The first immediate solution seems to be the 'apply' method like this... function Make() { if ( !(this instanceof arguments.callee) ) return new arguments.callee.apply(null, arguments); // do your stuff } This is WRONG however -- the new object is passed to the apply method and NOT to our constructor arguments.callee. Now, I've come up with three solutions. My simple question is: which one seems best. Or, if you have a better method, tell it. First – use eval() to dynamically create JavaScript code that calls the constructor. function Make(/* ... */) { if ( !(this instanceof arguments.callee) ) { // collect all the arguments var arr = []; for ( var i = 0; arguments[i]; i++ ) arr.push( 'arguments[' + i + ']' ); // create code var code = 'new arguments.callee(' + arr.join(',') + ');'; // call it return eval( code ); } // do your stuff with variable arguments... } Second – Every object has __proto__ property which is a 'secret' link to its prototype object. Fortunately this property is writable. function Make(/* ... */) { var obj = {}; // do your stuff on 'obj' just like you'd do on 'this' // use the variable arguments here // now do the __proto__ magic // by 'mutating' obj to make it a different object obj.__proto__ = arguments.callee.prototype; // must return obj return obj; } Third – This is something similar to second solution. function Make(/* ... */) { // we'll set '_construct' outside var obj = new arguments.callee._construct(); // now do your stuff on 'obj' just like you'd do on 'this' // use the variable arguments here // you have to return obj return obj; } // now first set the _construct property to an empty function Make._construct = function() {}; // and then mutate the prototype of _construct Make._construct.prototype = Make.prototype; eval solution seems clumsy and comes with all the problems of "evil eval". __proto__ solution is non-standard and the "Great Browser of mIsERY" doesn't honor it. The third solution seems overly complicated. But with all the above three solutions, we can do something like this, that we can't otherwise... m1 = Make(); m2 = Make(1,2,3); m3 = Make('apple', 'banana'); m1 instanceof Make; // true m2 instanceof Make; // true m3 instanceof Make; // true Make.prototype.fire = function() { // ... }; m1.fire(); m2.fire(); m3.fire(); So effectively the above solutions give us "true" constructors that accept variable no. of arguments and don't require new. What's your take on this. -- UPDATE -- Some have said "just throw an error". My response is: we are doing a heavy app with 10+ constructors and I think it'd be far more wieldy if every constructor could "smartly" handle that mistake without throwing error messages on the console.

    Read the article

  • Reverse subarray of an array with O(1)

    - by Babibu
I have an idea how to implement sub-array reversal in O(1), not including precalculation such as reading the input. I will have many reverse operations, and I can't use the trivial O(N) solution. Edit: To be more clear, I want to build a data structure behind the array with an access layer that knows about reversal requests and inverts the indexing logic as necessary when someone wants to iterate over the array. Edit 2: The data structure will only be used for iteration. I have been reading this, this, and even this question, but they aren't helping. There are 3 cases that need to be taken care of: a regular reverse operation; a reverse that includes an already reversed area; and an intersection between a reverse and part of another reversed area in the array. Here is my implementation for the first two parts; I will need your help with the last one. This is the rule class: class Rule { public int startingIndex; public int weight; } It is used in my basic data structure City: public class City { Rule rule; private static AtomicInteger _counter = new AtomicInteger(-1); public final int id = _counter.incrementAndGet(); @Override public String toString() { return "" + id; } } This is the main class: public class CitiesList implements Iterable<City>, Iterator<City> { private int position; private int direction = 1; private ArrayList<City> cities; private ArrayDeque<City> citiesQeque = new ArrayDeque<>(); private LinkedList<Rule> rulesQeque = new LinkedList<>(); public void init(ArrayList<City> cities) { this.cities = cities; } public void swap(int index1, int index2){ Rule rule = new Rule(); rule.weight = Math.abs(index2 - index1); cities.get(index1).rule = rule; cities.get(index2 + 1).rule = rule; } @Override public void remove() { throw new IllegalStateException("Not implemented"); } @Override public City next() { City city = cities.get(position); if (citiesQeque.peek() == city){ citiesQeque.pop(); changeDirection(); position += (city.rule.weight + 1) * direction; city = cities.get(position); } if(city.rule != null){ if(city.rule != rulesQeque.peekLast()){ rulesQeque.add(city.rule); position += city.rule.weight * direction; changeDirection(); citiesQeque.push(city); } else{ rulesQeque.removeLast(); position += direction; } } else{ position += direction; } return city; } private void changeDirection() { direction *= -1; } @Override public boolean hasNext() { return position < cities.size(); } @Override public Iterator<City> iterator() { position = 0; return this; } } And here is a sample program: public static void main(String[] args) { ArrayList<City> list = new ArrayList<>(); for(int i = 0 ; i < 20; i++){ list.add(new City()); } CitiesList citiesList = new CitiesList(); citiesList.init(list); for (City city : citiesList) { System.out.print(city + " "); } System.out.println("\n******************"); citiesList.swap(4, 8); for (City city : citiesList) { System.out.print(city + " "); } System.out.println("\n******************"); citiesList.swap(2, 15); for (City city : citiesList) { System.out.print(city + " "); } } How do I handle reverse intersections?

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

In the series the following parts have been published: Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application. When you have a Console Application or a batch file that has errors, the exit code is set to a value other than 0. You would expect the build to see this and report an error. This is not true, however. First we set up the scenario. Add a ConsoleApplication project to the solution you are building. In the Main function set the ExitCode to 1:

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is an error in the script.");
            Environment.ExitCode = 1;
        }
    }

Check in the code. You can choose to include this Console Application in the build, or you can decide to add the exe to source control. Now modify the Build Process Template CustomTemplate.xaml: Add an argument ErrornousScript. Scroll down beneath the TryCatch activity called “Try Compile, Test, and Associate Changesets and Work Items”. Add a Sequence activity to the template. In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps). In the FileName property of the InvokeProcess use the ErrornousScript so the ConsoleApplication will be called. Modify the build definition and make sure that the ErrornousScript is executing the exe that sets the ExitCode to 1. You have now set up a build definition that will execute the erroneous Console Application. When you run it, you will see that the build succeeds. This is not what you want! To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our Build Process Template. Add the new variables (scoped to the sequence where you run the Console Application) called ExitCode (type = Int32) and ErrorMessage. Click on the InvokeProcess activity and change the Result property to ExitCode. In the Handle Standard Output of the InvokeProcess add a Sequence activity. In the Sequence activity, add an Assign primitive and set the following properties: To = ErrorMessage, Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput. And add the default BuildMessage to the sequence that outputs the stdOutput. Add beneath the InvokeProcess activity an If activity with the condition ExitCode <> 0. In the Then section add a Throw activity and set the Exception property to New Exception(ErrorMessage). The complete workflow now looks like the image above. When you now check in the Build Process Template and run the build, you get the following result. And that is exactly what we want.   You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day.  Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance".  I realize that oftentimes speakers will use a headline-grabbing title to create interest in their talk, and this one certainly got my attention.  I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me.  Clearly patching is important to all systems management and should be a part of any organization's security hygiene.  Further, PCI does require the patching of systems to maintain compliance.  So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent. So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software." Notice the word "appropriate" in the requirement.  This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question.  Haven't we all seen a vulnerability scanner throw a false positive and flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system?  Applying such a patch would obviously not be appropriate.  This does not mean an organization can ignore the fact they need to apply security patches.  It's pretty clear they must.  Of course, organizations have other options in terms of compliance when it comes to patching.  For example, they could remove a system from scope and make sure that system does not process or contain cardholder data.  [This may or may not be a significant undertaking.  I just wanted to point out that there are always options available.] PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months." Notice there is no suggestion to stop patching one's systems.  And the note also states an organization may apply a risk-based approach. [A smart approach, but also not mandated.]  Such a risk-based approach is not intended to remove the requirement to patch one's systems.  It is meant, as stated, to allow one to prioritize their patch installations.   So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness?  I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture.
If patching is becoming an unbearable task, review why that is the case and possibly look for ways to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems.  Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening, and I don't think it's going to get better any time soon.

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
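As a rough illustration only (this is not the author's code; the names and the single-grid simplification are mine), the bucketed neighbor count could look something like the following, where a particle counts as part of a clump if at least minNeighbors other particles in its grid cell lie within distance d:

    using System;
    using System.Collections.Generic;

    static class ClumpFinder
    {
        // Marks particles that have at least minNeighbors other particles within
        // distance d, comparing only against particles in the same grid cell
        // (cell size around ~7*d, as in the post). A fuller implementation would
        // also use the offset grids to catch neighbors across cell borders.
        public static bool[] FindClumpMembers(double[] x, double[] y,
            double d, int minNeighbors, double cellSize)
        {
            int n = x.Length;
            var cells = new Dictionary<(long, long), List<int>>();
            for (int i = 0; i < n; i++)
            {
                var key = ((long)Math.Floor(x[i] / cellSize),
                           (long)Math.Floor(y[i] / cellSize));
                if (!cells.TryGetValue(key, out var members))
                    cells[key] = members = new List<int>();
                members.Add(i);
            }

            var inClump = new bool[n];
            double d2 = d * d;
            foreach (var members in cells.Values)
            {
                for (int a = 0; a < members.Count; a++)
                {
                    int count = 0;
                    for (int b = 0; b < members.Count && count < minNeighbors; b++)
                    {
                        if (a == b) continue;
                        double dx = x[members[a]] - x[members[b]];
                        double dy = y[members[a]] - y[members[b]];
                        if (dx * dx + dy * dy <= d2) count++;
                    }
                    inClump[members[a]] = count >= minNeighbors;
                }
            }
            return inClump;
        }
    }

The aggregate clump mass is then just the sum of the masses of the flagged particles (or a union-find / friends-of-friends pass on the flagged set if individual strands need to be told apart).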

    Read the article

  • How to mix textures in DirectX?

    - by tobsen
I am new to DirectX development and I am wondering if I am taking the wrong route to achieve the following: I would like to mix three textures which contain transparent areas and some solid areas (Red, Blue, Green). The three textures should blend as shown in this example: How can I achieve that in DirectX (preferably in DirectX 9)? A link or example code would be nice. Update: My rendering method looks like this and I still think I am doing it wrong, because the sprite only shows the last texture (nothing is rendered transparent or blended): void D3DTester::render() { d3ddevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0,0,0), 1.0f, 0); d3ddevice->BeginScene(); d3ddevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE); d3ddevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE); d3ddevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE); LPD3DXSPRITE sprite=NULL; HRESULT hres = D3DXCreateSprite(d3ddevice, &sprite); if(hres != S_OK) { throw std::exception(); } sprite->Begin(D3DXSPRITE_ALPHABLEND); std::vector<LPDIRECT3DTEXTURE9>::iterator it; for ( it=textures.begin() ; it < textures.end(); it++ ) { sprite->Draw(*it, NULL, NULL, NULL, 0xFFFFFFFF); } sprite->End(); d3ddevice->EndScene(); d3ddevice->Present(NULL, NULL, NULL, NULL); } The resulting image looks like this: But I need it to look like this instead: Update2: I figured out that I have to SetRenderState after I use sprite->Begin(D3DXSPRITE_ALPHABLEND); thanks to the hint by Josh Petrie. However, by using this: sprite->Begin(D3DXSPRITE_ALPHABLEND); d3ddevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE); d3ddevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE); d3ddevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE); std::vector<LPDIRECT3DTEXTURE9>::iterator it; for ( it=textures.begin() ; it < textures.end(); it++ ) { sprite->Draw(*it, NULL, NULL, NULL, 0xFFFFFFFF); } sprite->End(); the sprites' colors become transparent against the background scene, e.g. if I use d3ddevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0,100,21), 1.0f, 0); the result looks like this: Is there any way to avoid that? I would like the sprites to be transparent to each other but still solid against the background. Update3: After having somebody explain to me how to do what @LaurentCouvidou and @JoshPetrie suggested, I have a working solution and therefore accept the answer: d3ddevice->BeginScene(); D3DCOLOR white = D3DCOLOR_RGBA((UINT)255, (UINT)255, (UINT)255, 255); D3DCOLOR black = D3DCOLOR_RGBA((UINT)0, (UINT)0, (UINT)0, 255); sprite->Begin(D3DXSPRITE_ALPHABLEND); sprite->Draw(pTextureRed, NULL, NULL, NULL, black); sprite->Draw(pTextureGreen, NULL, NULL, NULL, black); sprite->Draw(pTextureBlue, NULL, NULL, NULL, black); sprite->End(); sprite->Begin(D3DXSPRITE_ALPHABLEND); d3ddevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE); d3ddevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD); d3ddevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE); d3ddevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE); sprite->Draw(pTextureRed, NULL, NULL, NULL, white); sprite->Draw(pTextureGreen, NULL, NULL, NULL, white); sprite->Draw(pTextureBlue, NULL, NULL, NULL, white); sprite->End(); d3ddevice->EndScene(); d3ddevice->Present(NULL, NULL, NULL, NULL);

    Read the article

  • F# and the rose-tinted reflection

    - by CliveT
We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted. It is not just a change in architecture, but in our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single-core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level. State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache and it might be some time before the value gets propagated. To get the latest value, and the notion of "latest" takes a lot of defining in this world of rapidly mutating state, the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level. I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years. Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens - one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have been becoming more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has become a first-class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level. In my opinion, it is the tooling support that will enable the widespread adoption of functional languages - without this support, people will put off any transition into the functional world for as long as they possibly can, and it will remain hard to learn these languages. One tool that doesn't currently support F# is Reflector.
The idea of decompiling IL to a functional language is daunting, but F# is potentially so important I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than to be clearer on how much was possible, and how long it might take. You can read what happened here, along with the insights it gave us on ways to improve the tool.

    Read the article

  • Acr.ExtDirect &ndash; Part 1 &ndash; Method Resolvers

    - by Allan Ritchie
One of the most important things about any open source library, in my opinion, is to be as open as possible while avoiding having your library become invasive to your code/business model design.  I personally could never stand marking my business and/or data access code with attributes everywhere.  XML also isn't really a fav with too many people these days since it comes with a startup performance hit and requires runtime compiling.  I find that there is a whole ton of communication libraries out there currently requiring this (i.e. WCF, RIA, etc.).  Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose.  It goes beyond that though; there are two other "out-of-the-box" implementations – Convention based & XML Configuration.    Convention – I don't actually recommend using this one since it opens up all of your public instance methods to remote execution calls. XML Configuration – This isn't so bad but requires you to enter all of your methods and their operation types into the Castle XML configuration &, as I said earlier, XML isn't the fav these days.   So what are your options if you don't like attributes, convention, or XML Configuration?  Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution.

    public interface IDirectMethodResolver {
        bool IsServiceType(ComponentModel model, Type type);
        string GetNamespace(ComponentModel model);
        string[] GetDirectMethodNames(ComponentModel model);
        DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
    }

Now to implement our own method resolver:

    public class TestResolver : IDirectMethodResolver {

        #region IDirectMethodResolver Members

        /// <summary>
        /// Determine if you are calling a service
        /// </summary>
        /// <param name="model"></param>
        /// <param name="type"></param>
        /// <returns></returns>
        public bool IsServiceType(ComponentModel model, Type type) {
            return (type.Namespace == "MyBLL.Data");
        }

        /// <summary>
        /// Return the calling name for the client side
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        public string GetNamespace(ComponentModel model) {
            return model.Name;
        }

        public string[] GetDirectMethodNames(ComponentModel model) {
            switch (model.Name) {
                case "Products":
                    return new [] {
                        "GetProducts",
                        "LoadProduct",
                        "Save",
                        "Update"
                    };

                case "Categories":
                    return new [] {
                        "GetProducts"
                    };

                default:
                    throw new ArgumentException("Invalid type");
            }
        }

        public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
            if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                return DirectMethodType.FormSubmit;

            else if (method.Name.StartsWith("Load"))
                return DirectMethodType.FormLoad;

            else
                return DirectMethodType.Direct;
        }

        #endregion
    }

And there you have it, your own custom method resolver.  Pretty easy and pretty open ended!

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong. Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two at (primarily) the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it was abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why a high coupling or inheritance is no big deal or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created for a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question; how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't a hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our above example, the formula for the class's MI becomes ((1300 * 0) + (1 * 100))/1301 = .077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
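A small sketch of that LOC-weighted roll-up (my own illustration, not code from the article; it assumes you feed it the per-method MI and Lines of Code columns from the Excel export):

    using System.Collections.Generic;

    static class MetricsRollup
    {
        // Weights each method's Maintainability Index by its Lines of Code, so a
        // single 1300-line method scoring 0 is not hidden behind tiny members
        // that score 95-100.
        public static double WeightedMaintainabilityIndex(
            IEnumerable<(int LinesOfCode, double MI)> methods)
        {
            double weightedSum = 0;
            long totalLines = 0;
            foreach (var m in methods)
            {
                weightedSum += m.LinesOfCode * m.MI;
                totalLines += m.LinesOfCode;
            }
            return totalLines == 0 ? 100 : weightedSum / totalLines;
        }
    }

    // Example from the article: the 1300-line method (MI 0) plus the empty
    // constructor (MI 100) yields (1300*0 + 1*100)/1301, roughly 0.08, not 50.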

    Read the article

  • WebGrid Helper and Complex Types

    - by imran_ku07
    Introduction: The WebGrid helper makes it very easy to show tabular data. It was originally designed for ASP.NET Web Pages (WebMatrix) to display, edit, page and sort tabular data, but you can also use this helper in ASP.NET Web Forms and ASP.NET MVC. When using this helper, you may sometimes run into a problem if you use complex types. In this article, I will show you how you can use complex types in the WebGrid helper. Description: Let's say you need to show the employee data and you have the following classes:

    public class Employee { public string Name { get; set; } public Address Address { get; set; } public List<string> ContactNumbers { get; set; } }
    public class Address { public string City { get; set; } }

The Employee class contains a Name, an Address and a list of ContactNumbers. You may think that you can easily show City in WebGrid using Address.City, but no. The WebGrid helper will throw an exception at runtime if the Address property is null for any Employee in the list. Also, you cannot directly show the ContactNumbers property. The easiest way to show these properties is to add some additional properties:

    public Address NotNullableAddress { get { return Address ?? new Address(); } }
    public string Contacts { get { return string.Join("; ", ContactNumbers); } }

Now you can easily use these properties in WebGrid. Here is the complete code of this example:

    @functions{
        public class Employee {
            public Employee(){ ContactNumbers = new List<string>(); }
            public string Name { get; set; }
            public Address Address { get; set; }
            public List<string> ContactNumbers { get; set; }
            public Address NotNullableAddress { get { return Address ?? new Address(); } }
            public string Contacts { get { return string.Join("; ", ContactNumbers); } }
        }
        public class Address {
            public string City { get; set; }
        }
    }
    @{
        var myClasses = new List<Employee>{
            new Employee { Name="A" , Address = new Address{ City="AA" }, ContactNumbers = new List<string>{"021-216452","9231425651"}},
            new Employee { Name="C" , Address = new Address{ City="CC" }},
            new Employee { Name="D" , ContactNumbers = new List<string>{"045-14512125","21531212121"}}
        };
        var grid = new WebGrid(source: myClasses);
    }
    @grid.GetHtml(columns: grid.Columns(
        grid.Column("NotNullableAddress.City", header: "City"),
        grid.Column("Name"),
        grid.Column("Contacts")))

Summary: You can use the WebGrid helper to show tabular data in ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages. Using this helper, you can also show complex types in the grid. In this article, I showed you how to use complex types with the WebGrid helper. Hopefully you will enjoy this article too.

    Read the article

  • SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB

    - by Pinal Dave
    Let us take a look at the application you own at your business. If you pay attention to the underlying database for that application you will be amazed. Every successful business these days processes way more data than it used to. The number of transactions and the amount of data are growing at an exponential rate. Every single day there is way more data to process than before. Big data is no longer a concept; it is now turning into reality. If you look around there are so many different big data solutions that it can be quite a difficult task to figure out where to begin. Personally, I have been experimenting with a lot of different solutions which allow my database to scale immediately without much hassle while maintaining optimal database performance. There are for sure some solutions out there, but for many of them I would even have to learn their specific language, and there is a lot of new exploration to do. Honestly, what I prefer is a product which works with the language I know (SQL) and follows all the RDBMS concepts I am familiar with (ACID etc.). NuoDB is one such solution. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database. Step 1: Follow me and go to the NuoDB download page. Simply fill out the form, accept the online license agreement, and you will be taken directly to a page where you can select any platform you prefer to install NuoDB on. In my example below, I select the Windows 64-bit platform as it is one of the most popular NuoDB platforms. (You can also run NuoDB on Amazon Web Services but I prefer to install it on my local machine for the purposes of this blog.) Step 2: Once you have downloaded the NuoDB installer, double click on it to install it on the Windows platform. Here is the enlarged icon of the installer. Step 3: Follow the installation wizard, as it is pretty straightforward and easy to do. I have selected all the options to install, as the overall installation is very simple and it does not take up much space. I have installed it on my C drive but you can select your preferred drive. It is quite possible that if you do not have 64-bit Java, it will throw the following error. If you face this error, I suggest you download 64-bit Java from here. Make sure that you download 64-bit Java from the following link: http://java.com/en/download/manual.jsp If you already have 64-bit Java installed, you can continue with the installation as described in the following image. Otherwise, install Java and start again from Step 1. As in my case, I already have 64-bit Java installed – and you won't believe me when I say that the entire installation of NuoDB only took me around 90 seconds. Click on Finish to exit the installation. Step 4: Once the installation is successful, NuoDB will automatically open the following two tabs – Console and DevCenter – in your preferred browser. On the Console tab you can explore various components of the NuoDB solution, e.g. QuickStart, Admin, Explorer, Storefront and Samples. We will see the various components and their usage in future blog posts. If you follow the steps in this post, you will agree that the installation of NuoDB is extremely smooth and it was indeed a pleasure to install a database product with such ease. If you have installed other database products in the past, you will absolutely agree with me.
So download NuoDB and install it today, and in tomorrow’s blog post I will take the installation to the next level. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • F1 Pit Pragmatics

    - by mikef
    "I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life. but I actually hate the computers themselves." - Scott Merrill, 'I hate computers: confessions of a Sysadmin' If Scott's goal was to polarize opinion and trigger raging arguments over the 'real reasons why computers suck', then he certainly succeeded. Impassioned vitriol sits side-by-side with rational debate. Yet Scott's fundamental point is absolutely on the money - Computers are a means to an end. The IT industry is finally starting to put weight behind the notion that good User Experience is an absolutely crucial goal, a cause championed by the likes of Microsoft's Bill Buxton, and which Apple's increasingly ubiquitous touch screen interface exemplifies. However, that doesn't change the fact that, occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. You'll find a perfect example of this Faustian bargain in Trevor Clarke's fascinating look into the (diabolical) IT infrastructure of modern F1 racing - high performance, high availability. high everything. To paraphrase, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, a connection back to the operations room in the team's home base - the list goes on. All of this while the servers are exposed "to carbon dust, oil, vibration, rain, heat, [and] variable power". Now, this is admittedly an extreme context where there's no real choice but to use complex systems where ease-of-use is, at best, a secondary concern. The flip-side is seen in small-scale personal computing such as that seen in Apple's iDevices, which are incredibly intuitive but limited in their scope. In terms of what kinds of systems they prefer to use, I suspect that most SysAdmins find themselves somewhere along this axis of Power vs. Usability, and which end of this axis you resonate with also hints at where you think the IT industry should focus its energy. Do you see yourself in the F1 pit, making split-second decisions, wrestling with information flows and reticent hardware to bend them to your will? If so, I imagine you feel that computers are subtle tools which need to be tuned and honed, using the advanced knowledge possessed only by responsible SysAdmins (If you have an iPhone, I suspect it's jail-broken). If the machines throw enigmatic errors, it's the price of flexibility and raw power. Alternatively, would you prefer to have your role more accessible, with users empowered by knowledge, spreading the load of managing IT environments? In that case, then you want hardware and software to have User Experience as their primary focus, and are of the "means to an end" school of thought (you're probably also fed up with users not listening to you when you try and help). At its heart, the dichotomy is between raw power (which might be difficult to use) and ease-of-use (which might have some limitations, but you can be up and running immediately). Of course, the ultimate goal is a fusion of flexibility, power and usability all in one system. It's achievable in specific software environments, and Red Gate considers it a target worth aiming for, but in other cases it's a goal right up there with cold fusion. 
I think it'll be a long time before we see it become ubiquitous. In the meantime, are you Power-Hungry or a Champion of Usability? Cheers, Michael Francis Simple Talk SysAdmin Editor

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates straight with these services via SOAP and returns DataTables. Now I am designing a new web application in a modern style: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated it's going to take some time to rewrite and replace it. Now to the problem: I'm trying to make it easy in the near future to replace the WCF services with some other data storage, e.g. another endpoint, database or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer has straight communication with the data storage (WCF service, database, or whatever) and the service layer then uses the repository (Dependency Injection) to get the data. It doesn't care where it gets the data from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO) and be able to test the logic in the service layer with some mock repository (using Dependency Injection). Below is some code to explain where I'm going with this. But my question is: does this all make sense? Am I making this overly complicated, and could it be simplified in any way? Does all this abstraction make the application too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and to be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated. Update 20/08/14 I created a repository factory, so services would all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this: 1. Repository Factory public class RepositoryFactory { private Dictionary<Type, IServiceRepository> repositories; public RepositoryFactory() { this.repositories = new Dictionary<Type, IServiceRepository>(); } public void AddRepository<T>(IServiceRepository repo) where T : class { if (this.repositories.ContainsKey(typeof(T))) { this.repositories.Remove(typeof(T)); } this.repositories.Add(typeof(T), repo); } public dynamic GetRepository<T>() { if (this.repositories.ContainsKey(typeof(T))) { return this.repositories[typeof(T)]; } throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name); } } I'm not very fond of dynamic but I don't know how to retrieve that repository otherwise (one possible alternative is sketched after the code below). 2.
Repository and service // Service repository interface // All repository interfaces extend this public interface IServiceRepository { } // Invoice repository interface // Makes it easy to mock the repository later on public interface IInvoiceServiceRepository : IServiceRepository { List<Invoice> GetInvoices(); } // Invoice repository // Connects to some data storage to retrieve invoices public class InvoiceServiceRepository : IInvoiceServiceRepository { public List<Invoice> GetInvoices() { // Get the invoices from somewhere // This could be a WCF, a database, or whatever using(InvoiceServiceClient proxy = new InvoiceServiceClient()) { return proxy.GetInvoices(); } } } // Invoice service // Service that handles talking to a real or a mock repository public class InvoiceService { // Repository factory RepositoryFactory repoFactory; // Default constructor // Default connects to the real repository public InvoiceService(RepositoryFactory repo) { repoFactory = repo; } // Service function that gets all invoices from some repository (mock or real) public List<Invoice> GetInvoices() { // Query the repository return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices(); } }
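    For what it's worth, here is a rough sketch of one way to avoid the dynamic return type in the factory above; it is not part of the original question. IServiceRepository and RepositoryNotFoundException are the question's own types, and the only changes are the generic constraints and the typed AddRepository parameter, which make the cast safe.

    // A hedged sketch of a strongly typed alternative to the dynamic GetRepository<T>() above.
    using System;
    using System.Collections.Generic;

    public class TypedRepositoryFactory
    {
        private readonly Dictionary<Type, IServiceRepository> repositories =
            new Dictionary<Type, IServiceRepository>();

        // Taking a parameter of type T guarantees the stored instance really is a T.
        public void AddRepository<T>(T repo) where T : class, IServiceRepository
        {
            this.repositories[typeof(T)] = repo; // replaces any existing registration
        }

        public T GetRepository<T>() where T : class, IServiceRepository
        {
            IServiceRepository repo;
            if (this.repositories.TryGetValue(typeof(T), out repo))
            {
                return (T)repo; // safe: AddRepository<T> only stores a T under typeof(T)
            }

            throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
        }
    }

    With that shape, a service can call factory.GetRepository<IInvoiceServiceRepository>() and get a typed reference back, and a unit test can register a mock with factory.AddRepository<IInvoiceServiceRepository>(mockRepository).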

    Read the article

  • Camera frustum calculation coming out wrong

    - by Telanor
    I'm trying to calculate a view/projection/bounding frustum for the 6 directions of a point light and I'm having trouble with the views pointing along the Y axis. Our game uses a right-handed, Y-up system. For the other 4 directions I create the LookAt matrix using (0, 1, 0) as the up vector. Obviously that doesn't work when looking along the Y axis so for those I use an up vector of (-1, 0, 0) for -Y and (1, 0, 0) for +Y. The view matrix seems to come out correctly (and the projection matrix always stays the same), but the bounding frustum is definitely wrong. Can anyone see what I'm doing wrong? This is the code I'm using: camera.Projection = Matrix.PerspectiveFovRH((float)Math.PI / 2, ShadowMapSize / (float)ShadowMapSize, 1, 5); for(var i = 0; i < 6; i++) { var renderTargetView = shadowMap.GetRenderTargetView((TextureCubeFace)i); var up = DetermineLightUp((TextureCubeFace) i); var forward = DirectionToVector((TextureCubeFace) i); camera.View = Matrix.LookAtRH(this.Position, this.Position + forward, up); camera.BoundingFrustum = new BoundingFrustum(camera.View * camera.Projection); } private static Vector3 DirectionToVector(TextureCubeFace direction) { switch (direction) { case TextureCubeFace.NegativeX: return -Vector3.UnitX; case TextureCubeFace.NegativeY: return -Vector3.UnitY; case TextureCubeFace.NegativeZ: return -Vector3.UnitZ; case TextureCubeFace.PositiveX: return Vector3.UnitX; case TextureCubeFace.PositiveY: return Vector3.UnitY; case TextureCubeFace.PositiveZ: return Vector3.UnitZ; default: throw new ArgumentOutOfRangeException("direction"); } } private static Vector3 DetermineLightUp(TextureCubeFace direction) { switch (direction) { case TextureCubeFace.NegativeY: return -Vector3.UnitX; case TextureCubeFace.PositiveY: return Vector3.UnitX; default: return Vector3.UnitY; } } Edit: Here's what the values are coming out to for the PositiveX and PositiveY directions: Constants: Position = {X:0 Y:360 Z:0} camera.Projection = [M11:0.9999999 M12:0 M13:0 M14:0] [M21:0 M22:0.9999999 M23:0 M24:0] [M31:0 M32:0 M33:-1.25 M34:-1] [M41:0 M42:0 M43:-1.25 M44:0] PositiveX: up = {X:0 Y:1 Z:0} target = {X:1 Y:360 Z:0} camera.View = [M11:0 M12:0 M13:-1 M14:0] [M21:0 M22:1 M23:0 M24:0] [M31:1 M32:0 M33:0 M34:0] [M41:0 M42:-360 M43:0 M44:1] camera.BoundingFrustum: Matrix = [M11:0 M12:0 M13:1.25 M14:1] [M21:0 M22:0.9999999 M23:0 M24:0] [M31:0.9999999 M32:0 M33:0 M34:0] [M41:0 M42:-360 M43:-1.25 M44:0] Top = {A:0.7071068 B:-0.7071068 C:0 D:254.5584} Bottom = {A:0.7071068 B:0.7071068 C:0 D:-254.5584} Left = {A:0.7071068 B:0 C:0.7071068 D:0} Right = {A:0.7071068 B:0 C:-0.7071068 D:0} Near = {A:1 B:0 C:0 D:-1} Far = {A:-1 B:0 C:0 D:5} PositiveY: up = {X:0 Y:0 Z:-1} target = {X:0 Y:361 Z:0} camera.View = [M11:-1 M12:0 M13:0 M14:0] [M21:0 M22:0 M23:-1 M24:0] [M31:0 M32:-1 M33:0 M34:0] [M41:0 M42:0 M43:360 M44:1] camera.BoundingFrustum: Matrix = [M11:-0.9999999 M12:0 M13:0 M14:0] [M21:0 M22:0 M23:1.25 M24:1] [M31:0 M32:-0.9999999 M33:0 M34:0] [M41:0 M42:0 M43:-451.25 M44:-360] Top = {A:0 B:0.7071068 C:0.7071068 D:-254.5585} Bottom = {A:0 B:0.7071068 C:-0.7071068 D:-254.5585} Left = {A:-0.7071068 B:0.7071068 C:0 D:-254.5585} Right = {A:0.7071068 B:0.7071068 C:0 D:-254.5585} Near = {A:0 B:1 C:0 D:-361} Far = {A:0 B:-1 C:0 D:365} When I use the resulting BoundingFrustum to cull regions outside of it, this is the result: Pass PositiveX: Drew 3 regions Pass NegativeX: Drew 6 regions Pass PositiveY: Drew 400 regions Pass NegativeY: Drew 36 regions Pass PositiveZ: Drew 3 regions Pass NegativeZ: Drew 6 
regions There are only 400 regions to draw and the light is in the center of them. As you can see, the PositiveY direction is drawing every single region. With the near/far planes of the perspective matrix set as small as they are, there's no way a single frustum could contain every single region.

    Read the article

  • Improving WIF&rsquo;s Claims-based Authorization - Part 1

    - by Your DisplayName here!
    As mentioned in my last post, I made several additions to WIF’s built-in authorization infrastructure to make it more flexible and easy to use. The foundation for all this work is that you have to be able to directly call the registered ClaimsAuthorizationManager. The following snippet is the universal way to get to the WIF configuration that is currently in effect: public static ServiceConfiguration ServiceConfiguration { get { if (OperationContext.Current == null) { // no WCF return FederatedAuthentication.ServiceConfiguration; } // search message property if (OperationContext.Current.IncomingMessageProperties.ContainsKey("ServiceConfiguration")) { var configuration = OperationContext.Current.IncomingMessageProperties["ServiceConfiguration"] as ServiceConfiguration; if (configuration != null) { return configuration; } } // return configuration from configuration file return new ServiceConfiguration(); } } From here you can grab ServiceConfiguration.ClaimsAuthorizationManager, which gives you direct access to the CheckAccess method (and thus control over claim types and values). I then created the following wrapper methods: public static bool CheckAccess(string resource, string action) { return CheckAccess(resource, action, Thread.CurrentPrincipal as IClaimsPrincipal); } public static bool CheckAccess(string resource, string action, IClaimsPrincipal principal) { var context = new AuthorizationContext(principal, resource, action); return AuthorizationManager.CheckAccess(context); } public static bool CheckAccess(Collection<Claim> actions, Collection<Claim> resources) { return CheckAccess(new AuthorizationContext(Thread.CurrentPrincipal.AsClaimsPrincipal(), resources, actions)); } public static bool CheckAccess(AuthorizationContext context) { return AuthorizationManager.CheckAccess(context); } I also created the same set of methods but called DemandAccess. They internally use CheckAccess and will throw a SecurityException when false is returned. All the code is part of Thinktecture.IdentityModel on Codeplex – or via NuGet (Install-Package Thinktecture.IdentityModel).
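    For illustration, here is a rough sketch of how the wrappers above might be called from application code. The containing class name (ClaimsAuthorization), the Customer type, the repository field, and the "Customer"/"Read" values are all assumptions made up for this example; the post itself only defines the CheckAccess and DemandAccess signatures.

    // Hypothetical callers of the wrapper methods shown above.
    // ClaimsAuthorization, Customer, customerRepository and the "Customer"/"Read" strings
    // are invented for illustration only.
    public bool CanReadCustomers()
    {
        // Soft check: returns a bool the caller can branch on.
        return ClaimsAuthorization.CheckAccess("Customer", "Read");
    }

    public Customer GetCustomer(int customerId)
    {
        // Hard check: throws a SecurityException when the registered
        // ClaimsAuthorizationManager denies access.
        ClaimsAuthorization.DemandAccess("Customer", "Read");

        return this.customerRepository.Get(customerId);
    }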

    Read the article

  • Launch 7:Windows Phone 7 Style Live Tiles On Android Mobiles

    - by Gopinath
    Android is a great mobile OS, but one thought that lingers in the minds of a few Android owners is: am I using a cheap iPhone? This is a valid thought for many low-end Android users, as their phones run sluggishly and the Android user interface looks like an imitation of iOS. When it comes to Windows Phone 7, even though its operating system features are not as great as those of the iPhone or Android, it has a unique user interface; the Windows Phone 7 interface is very intuitive and fresh, and its constantly updating Live Tiles show all the required information on the home screen. Android has the best mobile operating system features except the UI, and Windows Phone 7 has an excellent user interface. How about porting the Windows Phone 7 Tiles interface to an Android? That would be great. The Launcher 7 app brings the best of the Windows Phone 7 look and feel to the Android OS. Once the Launcher 7 app is installed and activated, it brings Live Tiles, or constantly updating controls, that show information on the Android home screen. Apart from simple and smooth tiles, there are a handful of customization options provided. Users can change the colour of the tiles, add new tiles, and enable/disable transitions. The reviews on Android Market are on the positive side, with 4.4 stars from 10,000+ reviewers. Here are a few user reviews: 1. Does what it says. only issue for me is that the app drawer doesn’t rotate. And I would like the UI to rotate when my KB is opened. HTC desire z – Jonathan 2. Works great on atrix.Kudos to developers. Awesome. Though needs: Better notification bar More stock images of tiles Better fitting of widgets on tiles – Manny 3. Looks really good like it much more than I thought I would runs real smooth running royal ginger 2.1 – Jay 4. Omg amazing i am definetly keeping it as my default best of android and windows – Devon 5. Man! An update every week! Very very responsive developer! – Andrew You can read more reviews on Android Market here. There is no doubt that this application is receiving rave reviews. After scanning through the reviews for a while, a few complaints throw light on the negative side: the battery drains a bit faster, and low-end mobiles run a bit sluggishly. The application is available in two versions – an ad-supported free version and a $1.41 ad-free version. Download Launcher 7 from Android Market This article, titled Launch 7: Windows Phone 7 Style Live Tiles On Android Mobiles, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Best depth sorting method for a Top Down 2D game using a 3D physics engine

    - by Alic44
    I've spent many days googling this and still have issues with my game engine I'd like to ask about, which I haven't seen addressed before. I think the problem is that my game is an unusual combination of a completely 2D graphical approach using XNA's SpriteBatch, and a completely 3D engine (the amazing BEPU physics engine) with rotation mostly disabled. In essence, my question is similar to this one (the part about "faux 3D"), but the difference is that in my game, the player as well as every other creature is represented by 3D objects, and they can all jump, pick up other objects, and throw them around. What this means is that sorting by one value, such as a Z position (how far north/south a character is on the screen) won't work, because as soon as a smaller creature jumps on top of a larger creature, or a box, and walks backwards, the moment its z value is less than that other creature, it will appear to be behind the object it is actually standing on. I actually originally solved this problem by splitting every object in the game into physics boxes which MUST have a Y height equal to their Z depth. I then based the depth sorting value on the object's y position (how high it is off the ground) PLUS its z position (how far north or south it is on the screen). The problem with this approach is that it requires all moving objects in the game to be split graphically into chunks which match up with a physical box which has its y dimension equal to its z dimension. Which is stupid. So, I got inspired last night to rewrite with a fresh approach. My new method is a little more complex, but I think a little more sane: every object which needs to be sorted by depth in the game exposes the interface IDepthDrawable and is added to a list owned by the DepthDrawer object. IDepthDrawable contains: public interface IDepthDrawable { Rectangle Bounds { get; } //possibly change this to a class if struct copying of the xna Rectangle type becomes an issue DepthDrawShape DepthShape { get; } void Draw(SpriteBatch spriteBatch); } The Bounds Rectangle of each IDepthDrawable object represents the 2D Axis-Aligned Bounding Box it will take up when drawn to the screen. Anything that doesn't intersect the screen will be culled at this stage and the remaining on-screen IDepthDrawables will be Bounds tested for intersections with each other. This is where I get a little less sure of what I'm doing. Each group of collisions will be added to a list or other collection, and each list will sort itself based on its DepthShape property, which will have access to the object-to-be-drawn's physics information. For starting out, lets assume everything in the game is an axis aligned 3D Box shape. Boxes are pretty easy to sort. Something like: if (depthShape1.Back > depthShape2.Front) //if depthShape1 is in front of depthShape2. //depthShape1 goes on top. else if (depthShape1.Bottom > depthShape2.Top) //if depthShape1 is above depthShape2. //depthShape1 goes on top. //if neither of these are true, depthShape2 must be in front or above. So, by sorting draw order by several different factors from the physics engine, I believe I can get a really correct draw order. My question is, is this a good way of going about this, or is there some tried and true, tested way which is completely different and has somehow completely eluded me on the internets? 
    And, if this does seem like a good way to remake my draw order sorting, what's the right sorting algorithm for reordering the Bounds Rectangle collision lists, and how do you deal with a Bounds Rectangle colliding with two different objects which don't collide with each other? I know these are solved problems, but I've only been programming for a year so any specific input here will be greatly appreciated. Thanks for reading this far, ye who made it -- sorry it was so long!
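    As a footnote, here is a rough C# rendering of the pairwise box test sketched in the pseudocode above, assuming DepthDrawShape exposes Front/Back/Top/Bottom faces in the same sense used there. It is only a pairwise relation, not a total ordering, so it would be used to order shapes within a single overlap group rather than fed to a general-purpose sort.

    // A rough rendering of the pairwise test from the pseudocode above, assuming axis-aligned boxes.
    // Returns 1 when shape1 should draw on top of shape2, -1 for the reverse, 0 when ambiguous.
    public static int CompareDepth(DepthDrawShape shape1, DepthDrawShape shape2)
    {
        if (shape1.Back > shape2.Front)   // shape1 is entirely in front of shape2
            return 1;
        if (shape2.Back > shape1.Front)   // shape2 is entirely in front of shape1
            return -1;

        if (shape1.Bottom > shape2.Top)   // shape1 rests entirely above shape2
            return 1;
        if (shape2.Bottom > shape1.Top)   // shape2 rests entirely above shape1
            return -1;

        return 0;                         // volumes overlap: draw order is ambiguous here
    }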

    Read the article

  • An Unusual UpdatePanel

    - by João Angelo
    The code you are about to see was mostly to prove a point, to myself, and probably has limited applicability. Nonetheless, in the remote possibility this is useful to someone here it goes… So this is a control that acts like a normal UpdatePanel where all child controls are registered as postback triggers except for a single control specified by the TriggerControlID property. You could basically achieve the same thing by registering all controls as postback triggers in the regular UpdatePanel. However with this, that process is performed automatically. Finally, here is the code: public sealed class SingleAsyncTriggerUpdatePanel : WebControl, INamingContainer { public string TriggerControlID { get; set; } [TemplateInstance(TemplateInstance.Single)] [PersistenceMode(PersistenceMode.InnerProperty)] public ITemplate ContentTemplate { get; set; } public override ControlCollection Controls { get { this.EnsureChildControls(); return base.Controls; } } protected override void CreateChildControls() { if (string.IsNullOrWhiteSpace(this.TriggerControlID)) throw new InvalidOperationException( "The TriggerControlId property must be set."); this.Controls.Clear(); var updatePanel = new UpdatePanel() { ID = string.Concat(this.ID, "InnerUpdatePanel"), ChildrenAsTriggers = false, UpdateMode = UpdatePanelUpdateMode.Conditional, ContentTemplate = this.ContentTemplate }; updatePanel.Triggers.Add(new SingleControlAsyncUpdatePanelTrigger { ControlID = this.TriggerControlID }); this.Controls.Add(updatePanel); } } internal sealed class SingleControlAsyncUpdatePanelTrigger : UpdatePanelControlTrigger { private Control target; private ScriptManager scriptManager; public Control Target { get { if (this.target == null) { this.target = this.FindTargetControl(true); } return this.target; } } public ScriptManager ScriptManager { get { if (this.scriptManager == null) { var page = base.Owner.Page; if (page != null) { this.scriptManager = ScriptManager.GetCurrent(page); } } return this.scriptManager; } } protected override bool HasTriggered() { string asyncPostBackSourceElementID = this.ScriptManager.AsyncPostBackSourceElementID; if (asyncPostBackSourceElementID == this.Target.UniqueID) return true; return asyncPostBackSourceElementID.StartsWith( string.Concat(this.target.UniqueID, "$"), StringComparison.Ordinal); } protected override void Initialize() { base.Initialize(); foreach (Control control in FlattenControlHierarchy(this.Owner.Controls)) { if (control == this.Target) continue; bool isApplicableControl = false; isApplicableControl |= control is INamingContainer; isApplicableControl |= control is IPostBackDataHandler; isApplicableControl |= control is IPostBackEventHandler; if (isApplicableControl) { this.ScriptManager.RegisterPostBackControl(control); } } } private static IEnumerable<Control> FlattenControlHierarchy( ControlCollection collection) { foreach (Control control in collection) { yield return control; if (control.Controls.Count > 0) { foreach (Control child in FlattenControlHierarchy(control.Controls)) { yield return child; } } } } } You can use it like this, meaning that only the B2 button will trigger an async postback: <cc:SingleAsyncTriggerUpdatePanel ID="Test" runat="server" TriggerControlID="B2"> <ContentTemplate> <asp:Button ID="B1" Text="B1" runat="server" OnClick="Button_Click" /> <asp:Button ID="B2" Text="B2" runat="server" OnClick="Button_Click" /> <asp:Button ID="B3" Text="B3" runat="server" OnClick="Button_Click" /> <asp:Label ID="LInner" Text="LInner" runat="server" /> </ContentTemplate> 
</cc:SingleAsyncTriggerUpdatePanel>

    Read the article

  • Green (Screen) Computing

    - by onefloridacoder
    I recently was given an assignment to create a UX where a user could use the up and down arrow keys, as well as the tab and enter keys, to move through a Silverlight datagrid that is going to be used as part of a high-throughput data entry UI. And to be honest, I've not trapped key codes since I coded JavaScript a few years ago.  Although the frameworks I'm using made it easy, it wasn't without some trial and error.  The other thing that bothered me was that the customer tossed this into the use case as they were delivering the use case.  Fine.  I'll take a whack at anything, beat myself up, and beg the community for help (I'm not beyond begging) to get something done if I have to. It wasn't as bad as I thought, and hopefully this will save someone a few keystrokes if you want to build a green screen for your customer.   Here's the ValueConverter to handle changing the strings to decimals and then back again.  The value is a nullable value type so there are a few extra steps to take.  Usually the "ConvertBack()" method doesn't get addressed, but in this case we have two-way binding and the converter needs to ensure that if the user doesn't enter a value it will remain null when the value is reapplied to the model object's setter.  1: using System; 2: using System.Windows.Data; 3: using System.Globalization; 4:  5: public class NullableDecimalToStringConverter : IValueConverter 6: { 7: public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) 8: { 9: if (!(((decimal?)value).HasValue)) 10: { 11: return (decimal?)null; 12: } 13: if (!(value is decimal)) 14: { 15: throw new ArgumentException("The value must be of type decimal"); 16: } 17:  18: NumberFormatInfo nfi = culture.NumberFormat; 19: nfi.NumberDecimalDigits = 4; 20:  21: return ((decimal)value).ToString("N", nfi); 22: } 23:  24: public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) 25: { 26: decimal nullableDecimal; 27: decimal.TryParse(value.ToString(), out nullableDecimal); 28:  29: return nullableDecimal == 0 ? null : nullableDecimal.ToString(); 30: } 31: }            The ConvertBack() method uses TryParse to create a value from the incoming string, so if the parse fails we get a null value back, which is what we would expect.  But while I was testing I realized that if the user types something like "2..4" instead of "2.4", TryParse will fail and still return a null.  The user is getting "puuu-lenty" of eye-candy to ensure they know how many values are affected in this particular view. Here's the XAML code.   This is the simple part; we just have a DataGrid with one column here that's bound to the appropriate ViewModel property with the Converter referenced as well. 1: <data:DataGridTextColumn 2: Header="On-Hand" 3: Binding="{Binding Quantity, 4: Mode=TwoWay, 5: Converter={StaticResource DecimalToStringConverter}}" 6: IsReadOnly="False" /> Nothing too magical here.  Just some XAML to hook things up.   Here's the code-behind that handles the DataGrid KeyUp event.  These are wired to a local/private method but could be converted to something the ViewModel could use; I just need to get this working for now. 1: // Wire up happens in the constructor 2: this.PicDataGrid.KeyUp += (s, e) => this.HandleKeyUp(e);   1: // DataGrid.BeginEdit fires when DataGrid.KeyUp fires.
2: private void HandleKeyUp(KeyEventArgs args) 3: { 4: if (args.Key == Key.Down || 5: args.Key == Key.Up || 6: args.Key == Key.Tab || 7: args.Key == Key.Enter ) 8: { 9: this.PicDataGrid.BeginEdit(); 10: } 11: }   And that’s it.  The ValueConverter was the biggest problem starting out because I was using an existing converter that didn’t take nullable value types into account.   Once the converter was passing back the appropriate value (null, “#.####”) the grid cell(s) and the model objects started working as I needed them to. HTH.

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >