Search Results

Search found 18096 results on 724 pages for 'let'.


  • Our winners – and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry: Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest… Now if Steve happens to live in Manly, we may actually have a tie going…)

    OK, ok, for everyone else, you can be winners, too. How, you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting.

    Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby Back’ is the part of the ribs where it connects to the backbone.) I like them both, so here you see I got one pack of each. There are about 4 racks to a pack, so these two packs for $25 each will feed about 16-20 of my guests. Around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have.

    Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in the fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so it must be removed. Use a butter knife to work in a ways between the membrane and the white bone, just enough to make room for your finger. Try really hard not to poke through the membrane; you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometimes it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his rub recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once; make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
    You can get a shaker with medium-sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in the fridge overnight.

    Step 3. The next day. OK, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice.

    Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger, so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chunks in a circle going outwards. This way, when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chunks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side burner on my grill that I never use: it makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever.

    Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. With this cheap guy it’s a little harder to maintain the right temperature of around 225°F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the bark) on your ribs. Once you have the bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important step for the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours are only to make the smoke bark. You don’t need smoke anymore; since the ribs are wrapped, you only need to keep the heat around 225°F for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
    If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!!

    Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put it in your smoker for about 2.5 hours at 250°F. That’s it. Low maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor.

    Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    ‘Pure’ SOA involves the modeling of an organisation’s processes, the so-called ‘top down’ approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘succeeding with SOA’. Organisations that focus too much on bottom up development, or that waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’ (Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘succeeding with SOA’ then the reasons for the project’s existence probably need to be questioned.

    There are a number of things we can be sure of:

    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences that this will have on our current LOB systems. Let’s imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never - that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model being valid at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B’s IT department recognises the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘succeed with SOA’, company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And could the whole concept of SOA, as it is being described by clever salespeople today, create an example of the oft-dreaded ‘tight coupling’ between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organisation. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals. If company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies who, a short while ago, had one problem each, and now have two problems each. This has happened because of a focus on ‘succeeding with SOA’, rather than solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organisation. But only if the organisation realises this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analysing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realisable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organisations, and requires a certain maturity to realise and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) immediately start to drift away from each other, so coupling them tightly together, forever bound to the model as it stood at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made. 
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
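
    As a footnote, here is a minimal sketch in plain C# of the virtual service / adapter split described above. All the names are invented for illustration, and this is not the Managed Services Engine API - just the shape of the decoupling, assuming a simple onboarding process and an existing CRM system:

    // The business analyst's view: a contract shaped by the process model.
    public interface ICustomerOnboarding
    {
        void RegisterCustomer(string name, string email);
    }

    // The technical view: an 'adapter' service over an existing LOB system,
    // with whatever shape that system happens to expose.
    public interface ICrmAdapter
    {
        int CreateAccount(string displayName, string contactEmail);
    }

    // The virtual service presents the modeled contract and maps
    // (transforms and routes) each call onto the adapter. Changes to the
    // model touch this mapping, not the LOB system; changes to the LOB
    // system touch the adapter, not the model.
    public class CustomerOnboardingService : ICustomerOnboarding
    {
        private readonly ICrmAdapter _crm;

        public CustomerOnboardingService(ICrmAdapter crm)
        {
            _crm = crm;
        }

        public void RegisterCustomer(string name, string email)
        {
            // Transformation: business message -> technical call.
            _crm.CreateAccount(name, email);
        }
    }

    The point of the sketch is only the dependency direction: both sides meet at the contract in the middle, so the model and the implementation can change independently, as the article argues.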

    Read the article

  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me. I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right? The rub is that the “assigned to” field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post…

    So… what’s a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn’t think it meant “Knights In Satan’s Service”. You really gotta be an old fart to get that reference). Here’s what we’re going to do:

    1. Get the currently logged in user’s name as it is stored in a person field
    2. Find all the SharePoint groups the current user belongs to
    3. Retrieve a set of assigned tasks from the task list and then find those that are assigned to the current user or a group the current user belongs to

    Nothing too hairy… so let’s get started.

    Some caveats before I continue: There are some obvious performance implications with this solution, as I make a total of four SPServices calls and there’s a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data, or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…

    Alright, let’s really get started.

    Get the currently logged in user’s name as it is stored in a person field. The first thing we need to do is understand how a person or group looks in the XML returned from a SharePoint web service call. It turns out it’s stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person’s name, “Mark Rackley” in my case. This is for Windows authentication; I would expect this to be different in FBA, but I’m not using FBA. If you want to know what it looks like with FBA you can use the code in this blog and strategically place an alert to see the value. Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call:

    var userName = $().SPServices.SPGetCurrentUser({
        fieldName: "Title",
        debug: false
    });

    As you can see, the “Title” field has the information we need. I suspect (although again, I haven’t tried) that the Title field also contains the user’s name as we need it if I were using FBA. Okay… the last thing we need to do is store our user’s name in an array for processing later:

    myGroups = new Array();
    myGroups.push(userName);

    Find all the SharePoint groups the current user belongs to. Now for the groups. How are groups returned in that XML stream? Same as the person: <ID>;#<Group Name>, and if it’s a multi-select it’s all returned in one big long string: “<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>”. So, how do we find all the groups the current user belongs to? 
    This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups:

    $().SPServices({
        operation: "GetGroupCollectionFromUser",
        userLoginName: $().SPServices.SPGetCurrentUser(),
        async: false,
        completefunc: function(xData, Status) {
            $(xData.responseXML).find("[nodeName=Group]").each(function() {
                myGroups.push($(this).attr("Name"));
            });
        }
    });

    All we did in the above code was execute the “GetGroupCollectionFromUser” operation, look for each “Group” node (row), and store the name of each group in the array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML, and all the groups the current user belongs to.

    The rest. Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person, we are going to store it for later processing. That code looks something like this:

    // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*
    $().SPServices({
        operation: "GetListItems",
        async: false,
        listName: "Tasks",
        CAMLViewFields: "<ViewFields>" +
            "<FieldRef Name='AssignedTo' />" +
            "<FieldRef Name='Title' />" +
            "<FieldRef Name='StartDate' />" +
            "<FieldRef Name='EndDate' />" +
            "<FieldRef Name='Status' />" +
            "</ViewFields>",
        CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
        completefunc: function (xData, Status) {
            var aDataSet = new Array();
            // loop through each returned Task
            $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                // store the multi-select string of who the task is assigned to
                var assignedToString = $(this).attr("ows_AssignedTo");
                var found = false;
                // loop through the person's name and all the groups they belong to
                for (var i = 0; i < myGroups.length; i++) {
                    // if the person's name or group exists in the assignedTo string
                    // then the task is assigned to them
                    if (assignedToString.indexOf(myGroups[i]) >= 0) {
                        found = true;
                        break;
                    }
                }
                // if the Task belongs to this person then store or display it
                // (I'm storing it in an array)
                if (found) {
                    var thisName = $(this).attr("ows_Title");
                    var thisStartDate = $(this).attr("ows_StartDate");
                    var thisEndDate = $(this).attr("ows_EndDate");
                    var thisStatus = $(this).attr("ows_Status");

                    var aDataRow = new Array(
                        thisName,
                        thisStartDate,
                        thisEndDate,
                        thisStatus);
                    aDataSet.push(aDataRow);
                }
            });
            SomeFunctionToDisplayData(aDataSet);
        }
    });

    Some notes on why I did certain things, and additional caveats. You will notice in my code that I’m doing an assignedToString.indexOf(groupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint group names that are named in such a way that the indexOf returns a false positive. For example, if you have a group called “My Users” and a group called “My Users – SuperUsers”, then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.indexOf(“My Users”). Make sense? Just be aware of this when naming groups; we don’t have this problem. This is also where some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user; I mean, what if a user belongs to 20 groups? That’s a LOT of looping. See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a div? Well… I want to pass my data to a jQuery library that formats it all nice and pretty, and an array is a great way to do that.

    When all is said and done and we put all the code together, it looks like:

    $(document).ready(function() {
        var userName = $().SPServices.SPGetCurrentUser({
            fieldName: "Title",
            debug: false
        });

        myGroups = new Array();
        myGroups.push(userName);

        $().SPServices({
            operation: "GetGroupCollectionFromUser",
            userLoginName: $().SPServices.SPGetCurrentUser(),
            async: false,
            completefunc: function(xData, Status) {
                $(xData.responseXML).find("[nodeName=Group]").each(function() {
                    myGroups.push($(this).attr("Name"));
                });

                // get list of assigned tasks that aren't closed... *modify this CAML to perform better!*
                $().SPServices({
                    operation: "GetListItems",
                    async: false,
                    listName: "Tasks",
                    CAMLViewFields: "<ViewFields>" +
                        "<FieldRef Name='AssignedTo' />" +
                        "<FieldRef Name='Title' />" +
                        "<FieldRef Name='StartDate' />" +
                        "<FieldRef Name='EndDate' />" +
                        "<FieldRef Name='Status' />" +
                        "</ViewFields>",
                    CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
                    completefunc: function (xData, Status) {
                        var aDataSet = new Array();
                        // loop through each returned Task
                        $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                            // store the multi-select string of who the task is assigned to
                            var assignedToString = $(this).attr("ows_AssignedTo");
                            var found = false;
                            // loop through the person's name and all the groups they belong to
                            for (var i = 0; i < myGroups.length; i++) {
                                // if the person's name or group exists in the assignedTo string
                                // then the task is assigned to them
                                if (assignedToString.indexOf(myGroups[i]) >= 0) {
                                    found = true;
                                    break;
                                }
                            }
                            // if the Task belongs to this person then store or display it
                            // (I'm storing it in an array)
                            if (found) {
                                var thisName = $(this).attr("ows_Title");
                                var thisStartDate = $(this).attr("ows_StartDate");
                                var thisEndDate = $(this).attr("ows_EndDate");
                                var thisStatus = $(this).attr("ows_Status");

                                var aDataRow = new Array(
                                    thisName,
                                    thisStartDate,
                                    thisEndDate,
                                    thisStatus);
                                aDataSet.push(aDataRow);
                            }
                        });
                        SomeFunctionToDisplayData(aDataSet);
                    }
                });
            }
        });
    });

    Final thoughts. So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks, and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view), so it may perform better. As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong.

    However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid.

    Coding assumptions

    Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer:

    public static T[] ToArrayWithItem<T>(
        ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
    {
        // check if the object is in the collection already
        // using the supplied comparer
        foreach (var item in coll)
        {
            if (comparer.Equals(item, obj))
            {
                // it's in the collection already;
                // simply copy the collection to an array
                // and return it
                T[] array = new T[coll.Count];
                coll.CopyTo(array, 0);
                return array;
            }
        }
        // not in the collection:
        // copy coll to an array, add obj to it,
        // then return it
        T[] array = new T[coll.Count + 1];
        coll.CopyTo(array, 0);
        array[array.Length - 1] = obj;
        return array;
    }

    What are all the assumptions made by this fairly simple bit of code?

    - coll is never null
    - comparer is never null
    - coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
    - The enumerator for coll returns all the items in the collection, in the order defined for the collection
    - comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise
    - comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang, for any possible input and any possible values of T
    - coll will have less than 4 billion items in it (this is a built-in limit of the CLR)
    - array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
    - There are no threads that will modify coll while this method is running

    and, more esoterically:

    - The C# compiler will compile this code to IL according to the C# specification
    - The CLR and JIT compiler will produce machine code to execute the IL on the user’s computer
    - The computer will execute the machine code correctly

    That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. 
    The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions.

    When code is running with a broken assumption or invariant it depended on, the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and doing a good impression of Skynet. You might think that those can’t happen, but at Halting Problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

    What does this mean in practice? To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

    Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

    For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.

    On my coding soapbox…

    A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

    try { ... } catch { }

    A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions caused by the unknown state, creating difficulties when debugging, as the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask!

    Secondly, using as when you should be casting. Doing this:

    (obj as IFoo).Method();

    or this:

    IFoo foo = obj as IFoo;
    ...
    foo.Method();

    when you should be doing this:

    ((IFoo)obj).Method();

    or this:

    IFoo foo = (IFoo)obj;
    ...
    foo.Method();

    There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast, which will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it’s far easier to figure out what’s wrong.

    Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out.

    Conclusion

    It’s better to crash out and fail fast when an assumption is broken. If you don’t, then there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.
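
    As a footnote, here is a minimal sketch of the ‘document and check your assumptions’ advice in practice. It is an invented example, not code from the post: explicit argument checks guard the public entry point, and Debug.Assert documents an internal invariant at no cost to release builds.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public static class OrderTotals
    {
        // Public API: state the assumptions as guard clauses, so a broken
        // one fails fast here with a descriptive exception rather than
        // surfacing later as a confusing NullReferenceException.
        public static decimal Sum(IList<decimal> lineAmounts)
        {
            if (lineAmounts == null)
                throw new ArgumentNullException("lineAmounts");
            if (lineAmounts.Count == 0)
                throw new ArgumentException("At least one line is required.", "lineAmounts");

            decimal total = 0;
            foreach (decimal amount in lineAmounts)
            {
                // Internal invariant: amounts were validated upstream.
                // Debug.Assert documents and checks this in debug builds
                // without costing anything in release builds.
                Debug.Assert(amount >= 0, "Line amounts must be non-negative");
                total += amount;
            }
            return total;
        }
    }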

    Read the article

  • CodePlex Daily Summary for Saturday, June 15, 2013

    CodePlex Daily Summary for Saturday, June 15, 2013

    Popular Releases

    - NuPattern: NuPattern 1.3.22.0: Just download the 'NuPattern Installer.msi' above to install NuPattern for both VS2010 and VS2012. All the extensions listed above are included within this installer. This release includes a version of the 'NuPattern Toolkit Builder' tools and 'NuPattern Toolkit Builder Hands-On Labs' VSIXes for both VS2010 and for VS2012. Both of these extensions are installed in to both VS2010 and VS2012 in the combined MSI installer. Once NuPattern has been installed, please go to the Getting Started pag...
    - BlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (1) Web??????SSI?#include???、CGI?????????????????????? (2) ???????????????????????????
    - Lakana - WPF Framework: Lakana V2.1 RTM: Dynamic text localization; a new application-wide message bus
    - Free language translator and file converter: Free Language Translator 3.3: some bug fixes and a new link to video tutorials on Youtube.
    - Pokemon Battle Online: ETV: ETV???2012?12??????,????,???????$/PBO/branches/PrivateBeta??。 ???????bug???????。 ???? Server??????,?????。 ?????????,?????????????,?????????。 ????????,????,?????????,???????????(??)??。 ???? ????????????。 ???????。 ???PP????,????????????????????PP????,??3。 ?????????????,??????????。 ???????? ??? ?? ???? ??? ???? ?? ?????????? ?? ??? ??? ??? ???????? ???? ???? ???????????????、???????????,??“???????”??。 ???bug ???
    - Web Pages CMS: 0.5.0.5: Added empty media directory
    - Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads: NuGet - ModernUI for WPF is also available as a NuGet package in the NuGet gallery, id: ModernUI.WPF. Modern UI for WPF Templates - a Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF; the extension includes the ModernUI.WPF NuGet package.
    - Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolBox improvements: add exception handling when loading plugins; updated information panel for displaying two lines of text. Tools improvement: Metadata Document Generator (v1.2013.6.10). New tool: Web Resources Manager (v1.2013.6.11) - retrieve list of unused web resources, retrieve web resources from a solution. All tools list: Access Checker (v1.2013.2.5), Attribute Bulk Updater (v1.2013.1.17), FetchXml Tester (v1.2013.3.4), Iconator (v1.2013.1.17), Metadata Document Generator (v1.2013.6.10), Privilege...
    - Document.Editor: 2013.23: What's new for Document.Editor 2013.23: new Insert Emoticon support; improved Format support; minor bug fixes, improvements and speed-ups
    - Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4 - Release Date 6/10/2013. Items addressed in this 2.4 release: updated MSBuild Community Tasks reference to 1.4.0.61; setting up your DotNetNuke Module Development Environment; installing Christoc's DotNetNuke Module Development Templates; customizing the latest DotNetNuke Module Development Project Templates
    - Layered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank you for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first set of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This set contains 2 samples that illustrate the use of: ASP.NET Web Forms, ASP.NET Model Binding, Windows Communications Foundation (WCF), Windows Workflow Foundation (W...
    - Papercut: Papercut 2013-6-10: Feature: shows From, To, Date and Subject of email. Feature: async UI and loading spinner. Enhancement: improved speed when loading large attachments. Enhancement: decoupled SMTP server into secondary assembly. Enhancement: upgraded to .NET v4. Fix: messages lost when received very fast. Fix: email encoding issues on display/automatically detect message encoding. Installation note: installation is copy and paste. Incoming messages are written to the start-up directory of Papercut. If you do n...
    - Supporting Guidance and Whitepapers: v1.BETA Unit test Generator Documentation: Welcome to the Unit Test Generator. Once you’ve moved to Visual Studio 2012, what’s a dev to do without the Create Unit Tests feature? Based on the high demand on User Voice for this feature to be restored, the Visual Studio ALM Rangers have introduced the Unit Test Generator Visual Studio Extension. The extension adds the “create unit test” feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test framewor...
    - MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtf
    - LINQ to Twitter: LINQ to Twitter v2.1.06: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - VR Player: VR Player 0.3.1 ALPHA: New plugin system with individual folders; TrackIR support; Maya and 3ds Max formats support; dual screen support; mono layouts (left and right); cylinder height parameter; barrel effect factor parameter; Razer Hydra filter parameter; VRPN bug fixes; UI improvements; performance improvements; stabilization and logging with Log4Net; new default values based on user feedback; CTRL key to open menu
    - SimCityPak: SimCityPak 0.1.0.8: New features: import BMP color palettes for vehicles; import RASTER files (uncompressed 8.8.8.8 DDS files); view different channels of RASTER files or a preview of all layers combined; find text in javascripts; TGA viewer; ground textures added to lot editor; many additional identified instances and properties
    - Wsus Package Publisher: Release v1.2.1306.09: Adds more verifications on certificate validation; WPP will not let the user try publishing an update until the certificate is valid. Adds the certificate expiration date on the 'About' form. Filters approbation to avoid a user trying to approve an update for uninstallation when the update does not support uninstallation. Adds the server and console version on the 'About' form. WPP will not let the user publish an update until the server and console are at the same level. WPP does not let the user ...
    - AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release, Version 7.0607. June 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 - AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 - AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    - Rawr: Rawr 5.2.1: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...

    New Projects

    - 3000????C?????????: 3000??C??????????
    - AspTestPage: asp test
    - Chiroredactie voor MS Word 2010: Chiroredactie for MS Word is meant to help employees and volunteers of the Chiro when they write texts and/or do final editing.
    - Coastal Climbing: Coastal Climbing is the best bouldering gym in the south east!
    - Data Moving Plug-in Examples: Examples of usage of the Data Moving Plug-in of the Mvc Controls Toolkit. The Data Moving Plug-in is an advanced set of controls and tools for ASP.NET MVC.
    - Dynamics AX Admin Utilities: Dynamics AX Admin Utilities provide the utilities missing from Dynamics AX 2012: silent install, client and server configuration, AOS and Model Store management.
    - Epilog Blogging Framework: A customisable, editable blogging framework based on MVC4 and written in C#.
    - fashion photomania: fashion photomania
    - Getemp: Production time manager
    - Global Tracker: A solution for tracking mobile devices, composed of: an ASP.NET application; a web service; an application using the Xamarin plugin.
    - HyperLinkTagEditor: URL appender using regex
    - isotopos2: it got deleted again; stop messing around, codeplexx
    - latex-tools: LaTeX support, including syntax highlighting.
    - Lighter: Text to Braille: Lighter is an extensible and flexible translation of Thai/English (and further languages) to Braille format.
    - Lock Screen Switcher: Change your Windows logon or lock screen background image. Specify a directory and it will choose an image from that directory for your background.
    - Patient Database Planning: The Patient Information Database will lower design costs for the health care industry, and the software will be easy to use and understand for the average user.
    - PERRS: perrs
    - PHPReport: PHP Report
    - rapSharp: lightweight audio player
    - Redmine Task List: Redmine Task List Visual Studio package.
    - RoHuay - 2013: Development of the logistics web system for RoHuay
    - SkautSIS: SkautSIS is a web-based information system for Czech scouting centers which helps them manage their agenda and present their activities on the Web.
    - test061501jabbr: test
    - ThorMVR: ThorMVR
    - Windows Azure Cloud Sensor Sample for Windows Embedded Compact 2013: Windows Azure Cloud Sensor Sample for Windows Embedded Compact 2013, Windows CE
    - Windows Phone 8 Mapping Samples: This project contains a set of code for basic location acquisition, location updates and mapping in Windows Phone 8.

    Read the article

  • Azure – Part 5 – Repository Pattern for Table Service

    - by Shaun
    In my last post I created a very simple WCF service with the user registration functionality. I created an entity for the user data and a DataContext class which provides some methods for operating on the entities, such as add, delete, etc. And in the service method I utilized it to add a new entity into the table service. But I didn’t have any validation before registering, which is not acceptable in a real project. So in this post I will first add some validation before performing the data creation code, and show how to use LINQ with the table service.

    LINQ to Table Service

    Since the table service utilizes ADO.NET Data Services to expose the data, and the managed library of ADO.NET Data Services supports LINQ, we can use it to deal with the data of the table service. Let me explain with my current example: I would like to ensure that when registering a new user the email address is unique, so I need to check the account entities in the table service before the add. If you remember, in my last post I mentioned that there’s a method in the TableServiceContext class, CreateQuery, which will create an IQueryable instance from a given type of entity. So here I create a method under my AccountDataContext class to return the IQueryable<Account>, which I named Load.

    public class AccountDataContext : TableServiceContext
    {
        private CloudStorageAccount _storageAccount;

        public AccountDataContext(CloudStorageAccount storageAccount)
            : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
        {
            _storageAccount = storageAccount;

            var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                _storageAccount.Credentials);
            tableStorage.CreateTableIfNotExist("Account");
        }

        public void Add(Account accountToAdd)
        {
            AddObject("Account", accountToAdd);
            SaveChanges();
        }

        public IQueryable<Account> Load()
        {
            return CreateQuery<Account>("Account");
        }
    }

    The method returns the IQueryable<Account> so that I can perform LINQ operations on it. And back in my service class, I use it to implement my validation.

    public bool Register(string email, string password)
    {
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var accountToAdd = new Account(email, password) { DateCreated = DateTime.Now };
        var accountContext = new AccountDataContext(storageAccount);

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == accountToAdd.Email)
            .Count();
        if (accountNumber > 0)
        {
            throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
        }

        // create entity
        try
        {
            accountContext.Add(accountToAdd);
            return true;
        }
        catch (Exception ex)
        {
            Trace.TraceInformation(ex.ToString());
        }
        return false;
    }

    I used the Load method to retrieve the IQueryable<Account> and the Where method to find the accounts whose email address is the same as the one being registered. If there is one, I throw an exception back to the client side. Let’s run it and test from my simple client application.

    Oops! Looks like we encountered an unexpected exception. It said that ‘Count’ is not supported by the ADO.NET Data Services LINQ managed library. That is because the table storage managed library (aka TableServiceContext) is based on ADO.NET Data Services, and it supports only a very limited set of LINQ operations. 
    Although I didn’t find a full list or documentation about which LINQ methods it supports, I can refer to a page on MSDN here. It gives us a rough summary of which query operations the ADO.NET Data Services managed library supports and which it doesn't. As you can see, the Count method is not in the supported list. It's not only the query operations: the inner lambda expressions in the Where method are limited when using the ADO.NET Data Services managed library as well. For example, if you added (a => !a.DateDeleted.HasValue) in the Where method to exclude deleted accounts, it would raise an exception saying "Invalid Input". Based on my experience, you should always use simple comparisons (such as ==, >, <=, etc.) on simple members (such as string, integer, etc.) and not use any shortcut methods (such as string.Compare, string.IsNullOrEmpty, etc.).

    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == accountToAdd.Email)
        .ToList()
        .Count;
    if (accountNumber > 0)
    {
        throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
    }

    We changed the code a bit and tried again. Since I had already created an account with my mail address, this time it gave me an exception saying that the email had been used, which is correct.

    Repository Pattern for Table Service

    The AccountDataContext takes the responsibility of saving and loading the account entity, but only for that specific entity. Is it possible to have a dynamic or generic DataContext class which can operate on any kind of entity in my system? Of course, yes. Although there's no typical database in the table service, we can treat the entities as records, similar to data entities if we were using OR mapping. And as we can use some patterns from ORM architecture here, we should be able to adopt one of them, the Repository Pattern, in this example.

    We know that the base class, TableServiceContext, provides 4 methods for operating on table entities: CreateQuery, AddObject, UpdateObject and DeleteObject. And we can create a relationship between the entity class, the table container name and the entity set name. So it's really simple to have a generic base class for any kind of entity. Let's rename the AccountDataContext to DynamicDataContext and make the Account type a type parameter of it.

    public class DynamicDataContext<T> : TableServiceContext where T : TableServiceEntity
    {
        private CloudStorageAccount _storageAccount;
        private string _entitySetName;

        public DynamicDataContext(CloudStorageAccount storageAccount)
            : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
        {
            _storageAccount = storageAccount;
            _entitySetName = typeof(T).Name;

            var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                _storageAccount.Credentials);
            tableStorage.CreateTableIfNotExist(_entitySetName);
        }

        public void Add(T entityToAdd)
        {
            AddObject(_entitySetName, entityToAdd);
            SaveChanges();
        }

        public void Update(T entityToUpdate)
        {
            UpdateObject(entityToUpdate);
            SaveChanges();
        }

        public void Delete(T entityToDelete)
        {
            DeleteObject(entityToDelete);
            SaveChanges();
        }

        public IQueryable<T> Load()
        {
            return CreateQuery<T>(_entitySetName);
        }
    }

    I saved the name of the entity type when constructed, as a performance matter. The table name and entity set name will be the same as the name of the entity class. 
    The Load method returns a generic IQueryable instance, which supports lazy loading. Then, in my service class, I changed the AccountDataContext to DynamicDataContext, and that's all.

        var accountContext = new DynamicDataContext<Account>(storageAccount);

    Run it again and register another account. The DynamicDataContext can now be used for any entity. For example, I would like each account to have a list of notes, each of which contains three custom properties: Account Email, Title and Content. We create the Note entity class.

        public class Note : TableServiceEntity
        {
            public string AccountEmail { get; set; }
            public string Title { get; set; }
            public string Content { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }

            public Note()
                : base()
            {
            }

            public Note(string email)
                : base(email, string.Format("{0}_{1}", email, Guid.NewGuid().ToString()))
            {
                AccountEmail = email;
            }
        }

    With no need to tweak the DynamicDataContext, we can go directly to the service class to implement the logic. Notice that here I use two DynamicDataContext instances with different type parameters: Note and Account.

        public class NoteService : INoteService
        {
            public void Create(string email, string title, string content)
            {
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                var accountContext = new DynamicDataContext<Account>(storageAccount);
                var noteContext = new DynamicDataContext<Note>(storageAccount);

                // validate - the email must exist
                var accounts = accountContext.Load()
                    .Where(a => a.Email == email)
                    .ToList()
                    .Count;
                if (accounts <= 0)
                    throw new ApplicationException(string.Format("The account {0} does not exist in the system, please register and try again.", email));

                // save the note
                var noteToAdd = new Note(email) { Title = title, Content = content, DateCreated = DateTime.Now };
                noteContext.Add(noteToAdd);
            }
        }

    Then we update our client application to test the service. I didn't implement any list service to show all the notes, but we can have a look at the local SQL database if we run it in the local development fabric.

    Summary

    In this post I explained a bit about the limited LINQ support in the table service, and then I demonstrated how to use the Repository Pattern in the table service data access layer and make the DataContext dynamic. The DynamicDataContext I created in this post is just a prototype. In fact, we should create the relevant interfaces to make it testable, and for a better structure we had better separate the DataContext classes for each individual kind of entity. So we would have IDataContextBase<T> and DataContextBase<T>, and for each entity we would have

        class AccountDataContext : DataContextBase<Account>, IDataContextBase<Account> { … }

        class NoteDataContext : DataContextBase<Note>, IDataContextBase<Note> { … }

    Besides saving and loading structured data, another common scenario would be saving and loading binary data such as images and files. In my next post I will show how to use the Blob Service to store binary data - making the account able to upload its logo, in my example.

    Hope this helps,

    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • CodePlex Daily Summary for Friday, June 14, 2013

    CodePlex Daily Summary for Friday, June 14, 2013Popular ReleasesBlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (1) Web??????SSI?#include???、CGI?????????????????????? (2) ???????????????????????????BrightstarDB: BrightstarDB 1.3.40613: This is the first "official" BrightstarDB release under the MIT open source license. The code base has been reworked to replace / remove the use of third-party closed-source tools and has been updated to use a patched version of dotNetRDF 1.0 that includes the most recent updates for SPARQL 1.1 and Turtle. We have also extended the core RDF API to support targeting a specific graph with an update or query operation and made a change to the core profiling code to disable it by default, leadin...Lakana - WPF Framework: Lakana V2.1 RTM: - Dynamic text localization - A new application wide message busPokemon Battle Online: ETV: ETV???2012?12??????,????,???????$/PBO/branches/PrivateBeta??。 ???????bug???????。 ???? Server??????,?????。 ?????????,?????????????,?????????。 ????????,????,?????????,???????????(??)??。 ???? ????????????。 ???????。 ???PP????,????????????????????PP????,??3。 ?????????????,??????????。 ???????? ??? ?? ???? ??? ???? ?? ?????????? ?? ??? ??? ??? ???????? ???? ???? ???????????????、???????????,??“???????”??。 ???bug ???Web Pages CMS: 0.5.0.5: Added empty media directoryModern UI for WPF: Modern UI 1.0.4: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. Related downloads NuGet ModernUI for WPF is also available as NuGet package in the NuGet gallery, id: ModernUI.WPF Download Modern UI for WPF Templates A Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extension includes the ModernUI.WPF NuGet package. DownloadToolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvement Add exception handling when loading plugins Updated information panel for displaying two lines of text Tools improvementMetadata Document Generator (v1.2013.6.10)New tool Web Resources Manager (v1.2013.6.11)Retrieve list of unused web resources Retrieve web resources from a solution All tools listAccess Checker (v1.2013.2.5) Attribute Bulk Updater (v1.2013.1.17) FetchXml Tester (v1.2013.3.4) Iconator (v1.2013.1.17) Metadata Document Generator (v1.2013.6.10) Privilege...Document.Editor: 2013.23: What's new for Document.Editor 2013.23: New Insert Emoticon support Improved Format support Minor Bug Fix's, improvements and speed upsChristoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4 - Release Date 6/10/2013 Items addressed in this 2.4 release Updated MSBuild Community Tasks reference to 1.4.0.61 Setting up your DotNetNuke Module Development Environment Installing Christoc's DotNetNuke Module Development Templates Customizing the latest DotNetNuke Module Development Project TemplatesLayered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank You for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first set of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. 
This samples illustrates the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communications Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Ente...Papercut: Papercut 2013-6-10: Feature: Shows From, To, Date and Subject of Email. Feature: Async UI and loading spinner. Enhancement: Improved speed when loading large attachments. Enhancement: Decoupled SMTP server into secondary assembly. Enhancement: Upgraded to .NET v4. Fix: Messages lost when received very fast. Fix: Email encoding issues on display/Automatically detect message Encoding Installation Note:Installation is copy and paste. Incoming messages are written to the start-up directory of Papercut. If you do n...Supporting Guidance and Whitepapers: v1.BETA Unit test Generator Documentation: Welcome to the Unit Test Generator Once you’ve moved to Visual Studio 2012, what’s a dev to do without the Create Unit Tests feature? Based on the high demand on User Voice for this feature to be restored, the Visual Studio ALM Rangers have introduced the Unit Test Generator Visual Studio Extension. The extension adds the “create unit test” feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test framewor...MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtfLINQ to Twitter: LINQ to Twitter v2.1.06: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.VR Player: VR Player 0.3.1 ALPHA: New plugin system with individual folders TrackIR support Maya and 3ds max formats support Dual screen support Mono layouts (left and right) Cylinder height parameter Barel effect factor parameter Razer hydra filter parameter VRPN bug fixes UI improvements Performances improvements Stabilization and logging with Log4Net New default values base on users feedback CTRL key to open menuSimCityPak: SimCityPak 0.1.0.8: SimCityPak 0.1.0.8 New features: Import BMP color palettes for vehicles Import RASTER file (uncompressed 8.8.8.8 DDS files) View different channels of RASTER files or preview of all layers combined Find text in javascripts TGA viewer Ground textures added to lot editor Many additional identified instances and propertiesWsus Package Publisher: Release v1.2.1306.09: Add more verifications on certificate validation. WPP will not let user to try publishing an update until the certificate is valid. Add certificate expiration date on the 'About' form. Filter Approbation to avoid a user to try to approve an update for uninstallation when the update do not support uninstallation. Add the server and console version on the 'About' form. WPP will not let user to publish an update until the server and console are not at the same level. WPP do not let user ...AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release Version 7.0607June 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...Rawr: Rawr 5.2.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: changes NEW: Added Support for "ImageJumbo.com" links FIXED: Ripping of Threads with multiple pagesNew ProjectsControl de stock: sistema para control de stockEmails Outlook Mac Recovery Software That Is Provenly Better Than Others: Recover OLM Emails with Outlook Mac Recovery Software that restore Mac OLM files as well as Convert OLM files in EML and DBX file format. Horarios e Disciplinas: O projeto de Grade Horária do CEPE é desenvolvido por um analista programador experiente e dois estagiários ciosos de conhecimento nas linguagens C#, Razor, Entity Framework, AJAX.Hyper-v VHost and VM Inventory Reports: Get a detailed inventory/report from your VHost and it's VMs. Powershell script that dumps the report as a test file.jean0614changbranch: ddjean0614jabbrchangebranch: dLandscape: Planned Maintenance System, is a system being used for any fleet of assets where a regular maintenance is required.MongoCamp: MongoCamp for MongoDBNo-nonsense Flickr Uploader: Simple, easy to-use, stable, flickr uploader for large batch uploads.Outlook PST Converter to Export, Convert & Save PST in Other Formats: Outlook PST Converter to export Outlook emails in other formats from PST format which will be a better option to change format of Outlook PST file to PDF, MSG.Penrose Tiles: A simple windows forms based penrose tile generator.Ranch Master - an Object Oriented Progamming 2013 College Project: Raise the sheep by maintaining the Hunger & Thirst level low, keep the condition to 'Healthy', collect wools produced, trade the wools into points.SH Demo App: Summary!Stock Right: Initial releaseTesting Project: testingtestjabbr0613: testUnits of Measure: Simple solution to cope with units of measure in C# applications. You can develop your own unit system, be it SI-based or anything else that suits your needs.User Friendly Registration Plugin for DotNetNuke: This plugin makes DotNetNuke registration process more user friendly. Main idea is to inform user about possible mistakes in the registration form immediately,

    Read the article

  • CodePlex Daily Summary for Wednesday, June 12, 2013

    CodePlex Daily Summary for Wednesday, June 12, 2013Popular ReleasesWeb Pages CMS: 0.5.0.5: Added empty media directoryWindows 8 App Desing Reference Template: Education Big Picture: EducationBigPicture: Download includes the following -> Source (C# and JS) -> Package -> Snapshots -> DocumentationModern UI for WPF: Modern UI 1.0.4: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. Related downloads NuGet ModernUI for WPF is also available as NuGet package in the NuGet gallery, id: ModernUI.WPF Download Modern UI for WPF Templates A Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extension includes the ModernUI.WPF NuGet package. DownloadToolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvement Add exception handling when loading plugins Updated information panel for displaying two lines of text Tools improvementMetadata Document Generator (v1.2013.6.10)New tool Web Resources Manager (v1.2013.6.11)Retrieve list of unused web resources Retrieve web resources from a solution All tools listAccess Checker (v1.2013.2.5) Attribute Bulk Updater (v1.2013.1.17) FetchXml Tester (v1.2013.3.4) Iconator (v1.2013.1.17) Metadata Document Generator (v1.2013.6.10) Privilege...Document.Editor: 2013.23: What's new for Document.Editor 2013.23: New Insert Emoticon support Improved Format support Minor Bug Fix's, improvements and speed upsChristoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4 - Release Date 6/10/2013 Items addressed in this 2.4 release Updated MSBuild Community Tasks reference to 1.4.0.61 Setting up your DotNetNuke Module Development Environment Installing Christoc's DotNetNuke Module Development Templates Customizing the latest DotNetNuke Module Development Project TemplatesLayered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank You for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first set of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This samples illustrates the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communications Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Ente...Papercut: Papercut 2013-6-10: Feature: Shows From, To, Date and Subject of Email. Feature: Async UI and loading spinner. Enhancement: Improved speed when loading large attachments. Enhancement: Decoupled SMTP server into secondary assembly. Enhancement: Upgraded to .NET v4. Fix: Messages lost when received very fast. Fix: Email encoding issues on display/Automatically detect message Encoding Installation Note:Installation is copy and paste. Incoming messages are written to the start-up directory of Papercut. If you do n...SFDL.NET: SFDL.NET v1.1.0.4: Changelog: Ciritical Bug Fixed : Downloading of Files not possibleMapWinGIS ActiveX Map and GIS Component: MapWinGIS v4.8.8 Release Candidate - 32 Bit: This is the first release candidate of MapWinGIS. 
Please test it thoroughly.MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtfLINQ to Twitter: LINQ to Twitter v2.1.06: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.VR Player: VR Player 0.3 ALPHA: New plugin system with individual folders TrackIR support Maya and 3ds max formats support Dual screen support Mono layouts (left and right) Cylinder height parameter Barel effect factor parameter Razer hydra filter parameter VRPN bug fixes UI improvements Performances improvements Stabilization and logging with Log4Net New default values base on users feedback CTRL key to open menuSimCityPak: SimCityPak 0.1.0.8: SimCityPak 0.1.0.8 New features: Import BMP color palettes for vehicles Import RASTER file (uncompressed 8.8.8.8 DDS files) View different channels of RASTER files or preview of all layers combined Find text in javascripts TGA viewer Ground textures added to lot editor Many additional identified instances and propertiesWsus Package Publisher: Release v1.2.1306.09: Add more verifications on certificate validation. WPP will not let user to try publishing an update until the certificate is valid. Add certificate expiration date on the 'About' form. Filter Approbation to avoid a user to try to approve an update for uninstallation when the update do not support uninstallation. Add the server and console version on the 'About' form. WPP will not let user to publish an update until the server and console are not at the same level. WPP do not let user ...AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release Version 7.0607June 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...Rawr: Rawr 5.2.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. 
The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: changes NEW: Added Support for "ImageJumbo.com" links FIXED: Ripping of Threads with multiple pagesJson.NET: Json.NET 5.0 Release 6: New feature - Added serialized/deserialized JSON to verbose tracing New feature - Added support for using type name handling with ISerializable content Fix - Fixed not using default serializer settings with primitive values and JToken.ToObject Fix - Fixed error writing BigIntegers with JsonWriter.WriteToken Fix - Fixed serializing and deserializing flag enums with EnumMember attribute Fix - Fixed error deserializing interfaces with a valid type converter Fix - Fixed error deser...Biller: Biller 1.49: This release fixes company sales statisticsNew Projectsapiviewer: Documentation viewer for your service API.ArgusLib: Set of libraries I use in most of my projects.arwani: this is armani project.Blue Mercs Data Gateway: The Data Gateway is an open source project which simplifies coding fluently database SQL statements, stored procedure, connections and transactions.Convert Hashtable Rows into DataTable Columns in C#: Simplest way to convert a Hashtable into a DataTable with all the Hashtable rows converted into DataTable columns.everynet: EveryNet is an internet service provider servicing homes Fraktalysator: Software to visualize fractals such as the Mandelbrot Set. Still in early development state.Free BarCode API for .NET: Freee BarCode API for .NETGraphic filters in WPF: FiltersGrid Plugin: Grid Plugin , Turn your <table> into a fully functional price grid.mediamonitor: this is mediamonitor project.MyCodingStuffs: This solution includes C#, WCF, MVC4.0, ASP.net libraries.OpenErp .Net Connector: Access to OpenErp from .NetPhong and Flat shading: Phong shadingPixel Replacer: The Pixel Replacer is a simple library for replacing pixel colors with a new color, by setting a filter rule.plainetl in vb.net: a one db to another db transform, threaded, commandline tool, ... initially created to get a flat-File db (FoxPro) into a mssql.Programming in HTML5 with JavaScript and CSS3: Contains material developed while visiting a Microsoft course for the MCSD exam 70-480ProjectLinker2012: This tool helps to automatically create and maintain links from a source project to a target project to share code that is common to Silverlight and WPF.SALDERA Project: SALDERA TEST Project Summary Mire lesz ez jó? Majd Kiderül.TCPMessageServer: Simple TCP Client/Server used to pass an object between client and serverThats-Me Dot Net API: Thats-Me Dot Net API is a library which implements the Thats-Me.ch API for the Dot Net Framework.T-Sql Code Documentor: Document T-SQL Code and Extract Comments for all objects in a database.Windows 8 App Design Reference Template: Measurement: Measurement template will help if you want to build an app which addresses Mass/Weight, Distance/Length, Capacity/Volume etc. measurement placeholders.Windows 8 App Design Reference Template: Weather Clock: Weather Clock Template will help if you want to build an app which has the features to add Current/Previous day temperatures and add cities for coverage. 
Windows 8 App Desing Reference Template: Education Big Picture: Education Big Picture Template will help if you want to build an app which has an option for showcasing campus details, courses, student life etc dataWinodws 8 App Design Reference Template: Social Feed: Social Feed template will help if you want to build an app which has the Messaging, Facebook, twitter feeds sections. WP7FlacPlayer: Implemented JFlacLib in C# A simple player to play flac in windows phone system

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, both sides of the transfer can be recorded in one ledger entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The original question - "how do you enforce the non-negative balance rule?" - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on them with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more accurate to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
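    (To make the ledger shape concrete, here is a small sketch using the official MongoDB C# driver. The class layout, collection name and the Validated flag are my own illustration, not code from the post.)

        using System;
        using System.Linq;
        using MongoDB.Driver;
        using MongoDB.Driver.Linq;

        public class LedgerEntry
        {
            public DateTime Timestamp { get; set; }
            public string FromAccountId { get; set; }
            public string ToAccountId { get; set; }
            public decimal Amount { get; set; }
            public bool Validated { get; set; }
        }

        public class Ledger
        {
            private readonly IMongoCollection<LedgerEntry> _entries;

            public Ledger(IMongoDatabase db)
            {
                _entries = db.GetCollection<LedgerEntry>("ledger");
            }

            // Inserting a single document is atomic in Mongo, so recording both
            // sides of the transfer in one entry is the whole "transaction".
            public void Transfer(string from, string to, decimal amount)
            {
                _entries.InsertOne(new LedgerEntry
                {
                    Timestamp = DateTime.UtcNow,
                    FromAccountId = from,
                    ToAccountId = to,
                    Amount = amount
                });
            }

            // Validation: sum credits and debits for the account; the balance
            // must not go negative after the new entry.
            public decimal BalanceOf(string accountId)
            {
                var credits = _entries.AsQueryable()
                    .Where(e => e.ToAccountId == accountId)
                    .Sum(e => (decimal?)e.Amount) ?? 0m;
                var debits = _entries.AsQueryable()
                    .Where(e => e.FromAccountId == accountId)
                    .Sum(e => (decimal?)e.Amount) ?? 0m;
                return credits - debits;
            }
        }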
    In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write-concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then, if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all the databases be locked just to paint an "all done" on screen. People understand. They have already been trained by many online businesses that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that a query against the ledger never returns transactions newer than, say, 15 minutes old whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
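    (To make the "one validator per origin account" idea concrete, here is a tiny sketch of the routing arithmetic. The 11-validator figure comes from the post; the hashing is my own illustration.)

        // Route each origin account to exactly one of N validator workers, so
        // balance checks for a given account never run concurrently.
        private const int ValidatorCount = 11;

        public static int ValidatorFor(string accountId)
        {
            // Hand-rolled stable hash: string.GetHashCode() is not guaranteed
            // to be stable across processes, which matters for routing.
            var hash = 0;
            foreach (var c in accountId)
                hash = unchecked(hash * 31 + c);
            return (hash & int.MaxValue) % ValidatorCount;
        }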

    Read the article

  • Farseer tutorial for the absolute beginners

    - by Bil Simser
    This post is inspired by (and is somewhat a direct copy of) a couple of posts Emanuele Feronato wrote back in 2009 about Box2D (his tutorial was ActionScript 3 based for Box2D; this is C# XNA for the Farseer Physics Engine). Here's what we're building:

    What is Farseer

    The Farseer Physics Engine is a collision detection system with realistic physics responses to help you easily create simple hobby games or complex simulation systems. Farseer was built as a .NET version of Box2D (based on the Box2D.XNA port of Box2D). While the constructs and syntax have changed over the years, the principles remain the same. This tutorial will walk you through exactly what Emanuele created for Flash, but we'll be doing it using C#, XNA and the Windows Phone platform. The first step is to download the library from its home on CodePlex. If you have NuGet installed, you can install the library itself using the NuGet package, but we'll also be using some code from the Samples source that can only be obtained by downloading the library. Once you have downloaded and unpacked the zip file into a folder and opened the solution, this is what you will get:

    The Samples XNA WP7 project (and content) has all the demos for Farseer. There's a wealth of info here and great examples to look at to learn. The Farseer Physics XNA WP7 project contains the core libraries that do all the work. DebugView XNA contains an XNA-ready class to let you view debug data and information in the game draw loop (which you can copy into your project, or build the source and reference the assembly). The downloaded version has to be compiled, as it's only available in source format, so you can do that now if you want (open the solution file and rebuild everything). If you're using the NuGet package you can just install that. We only need the core library, and we'll be copying in some code from the samples later.

    Your first Farseer experiment

    Start Visual Studio and create a new project using the Windows Phone template; call it whatever you want. It's time to edit Game1.cs:

        1   public class Game1 : Game
        2   {
        3       private readonly GraphicsDeviceManager _graphics;
        4       private DebugViewXNA _debugView;
        5       private Body _floor;
        6       private SpriteBatch _spriteBatch;
        7       private float _timer;
        8       private World _world;
        9
        10      public Game1()
        11      {
        12          _graphics = new GraphicsDeviceManager(this)
        13          {
        14              PreferredBackBufferHeight = 800,
        15              PreferredBackBufferWidth = 480,
        16              IsFullScreen = true
        17          };
        18
        19          Content.RootDirectory = "Content";
        20
        21          // Frame rate is 30 fps by default for Windows Phone.
        22          TargetElapsedTime = TimeSpan.FromTicks(333333);
        23
        24          // Extend battery life under lock.
        25          InactiveSleepTime = TimeSpan.FromSeconds(1);
        26      }
        27
        28      protected override void LoadContent()
        29      {
        30          // Create a new SpriteBatch, which can be used to draw textures.
        31          _spriteBatch = new SpriteBatch(_graphics.GraphicsDevice);
        32
        33          // Load our font (DebugViewXNA needs it for the DebugPanel)
        34          Content.Load<SpriteFont>("font");
        35
        36          // Create our World with a gravity of 10 vertical units
        37          if (_world == null)
        38          {
        39              _world = new World(Vector2.UnitY*10);
        40          }
        41          else
        42          {
        43              _world.Clear();
        44          }
        45
        46          if (_debugView == null)
        47          {
        48              _debugView = new DebugViewXNA(_world);
        49
        50              // default is shape, controller, joints
        51              // we just want shapes to display
        52              _debugView.RemoveFlags(DebugViewFlags.Controllers);
        53              _debugView.RemoveFlags(DebugViewFlags.Joint);
        54
        55              _debugView.LoadContent(GraphicsDevice, Content);
        56          }
        57
        58          // Create and position our floor
        59          _floor = BodyFactory.CreateRectangle(
        60              _world,
        61              ConvertUnits.ToSimUnits(480),
        62              ConvertUnits.ToSimUnits(50),
        63              10f);
        64          _floor.Position = ConvertUnits.ToSimUnits(240, 775);
        65          _floor.IsStatic = true;
        66          _floor.Restitution = 0.2f;
        67          _floor.Friction = 0.2f;
        68      }
        69
        70      protected override void Update(GameTime gameTime)
        71      {
        72          // Allows the game to exit
        73          if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        74              Exit();
        75
        76          // Create a random box every second
        77          _timer += (float) gameTime.ElapsedGameTime.TotalSeconds;
        78          if (_timer >= 1.0f)
        79          {
        80              // Reset our timer
        81              _timer = 0f;
        82
        83              // Determine a random size for each box
        84              var random = new Random();
        85              var width = random.Next(20, 100);
        86              var height = random.Next(20, 100);
        87
        88              // Create it and store the size in the user data
        89              var box = BodyFactory.CreateRectangle(
        90                  _world,
        91                  ConvertUnits.ToSimUnits(width),
        92                  ConvertUnits.ToSimUnits(height),
        93                  10f,
        94                  new Point(width, height));
        95
        96              box.BodyType = BodyType.Dynamic;
        97              box.Restitution = 0.2f;
        98              box.Friction = 0.2f;
        99
        100             // Randomly pick a location along the top to drop it from
        101             box.Position = ConvertUnits.ToSimUnits(random.Next(50, 400), 0);
        102         }
        103
        104         // Advance all the elements in the world
        105         _world.Step(Math.Min((float) gameTime.ElapsedGameTime.TotalMilliseconds*0.001f, (1f/30f)));
        106
        107         // Clean up any boxes that have fallen offscreen
        108         foreach (var box in from box in _world.BodyList
        109                             let pos = ConvertUnits.ToDisplayUnits(box.Position)
        110                             where pos.Y > _graphics.GraphicsDevice.Viewport.Height
        111                             select box)
        112         {
        113             _world.RemoveBody(box);
        114         }
        115
        116         base.Update(gameTime);
        117     }
        118
        119     protected override void Draw(GameTime gameTime)
        120     {
        121         GraphicsDevice.Clear(Color.FromNonPremultiplied(51, 51, 51, 255));
        122
        123         _spriteBatch.Begin();
        124
        125         var projection = Matrix.CreateOrthographicOffCenter(
        126             0f,
        127             ConvertUnits.ToSimUnits(_graphics.GraphicsDevice.Viewport.Width),
        128             ConvertUnits.ToSimUnits(_graphics.GraphicsDevice.Viewport.Height), 0f, 0f,
        129             1f);
        130         _debugView.RenderDebugData(ref projection);
        131
        132         _spriteBatch.End();
        133
        134         base.Draw(gameTime);
        135     }
        136 }

    Lines 4: Declare the debug view we'll use for rendering (more on that later).

    Lines 8: Declare the _world variable of type World. World is the main object for interacting with the Farseer engine. It stores all the joints and bodies, and is responsible for stepping through the simulation.

    Lines 12-17: Create the graphics device we'll be rendering on. This is an XNA component, and we're just setting it to be the same size as the phone and toggling it to be full screen (no system tray).

    Lines 34: We create a SpriteFont here by adding it to the project. It's called "font" because that's what the DebugView uses, but you can name it whatever you want (and if you're not using DebugView for your production app, you might have several fonts).

    Lines 37-44: We create the physics environment that Farseer uses to contain all the objects by specifying it here. We're using Vector2.UnitY*10 to represent the gravity to be used in the environment. In other words, 10 units going in a downward motion.

    Lines 46-56: We create the DebugViewXNA here. This is copied from the […] from the code you downloaded and provides the ability to render all entities onto the screen. In a production release you'll be doing the rendering of each object yourself, but we cheat a bit for the demo and let the DebugView do it for us. The other thing it can provide is rendering out a panel of debugging information while the simulation is going on. This is useful for tracking down objects, figuring out how something works, or just keeping track of what's in the engine.

    Lines 58-67: Here we create a rigid body (Farseer only supports rigid bodies) to represent the floor that we'll drop objects onto. We create it by using one of the Farseer factories and specifying the width and height. The ConvertUnits class is copied from the samples code as-is and lets us toggle between display units (pixels) and simulation units (usually metres). We're creating a floor that's 480 pixels wide and 50 pixels high (converting them to SimUnits for the engine to understand). We also position it near the bottom of the screen. Values are in metres, and when specifying values they refer to the centre of the body object.

    Lines 77-78: The game Update method fires 30 times a second, too fast to be creating objects this quickly. So we use a variable to track the elapsed seconds since the last update, accumulate that value, then create a new box to drop when 1 second has passed.

    Lines 89-94: We create a box the same way we created our floor (coming up with a random width and height for the box).

    Lines 96-101: We set the box to be Dynamic (rather than Static like the floor object) and position it somewhere along the top of the screen.

    And now you have created the world. Gravity does the rest and the boxes fall to the ground. Here's the result: Farseer Physics Engine Demo using XNA

    Lines 105: We must update the world at every frame. We do this with the Step method, which takes in the time interval.

    Lines 108-114: Body objects are added to the world but never automatically removed (because Farseer doesn't know about the display world, it has no idea whether an item is on the screen or not). Here we just loop through all the entities, and anything that has dropped off the screen (below the bottom) gets removed from the World. This keeps our entity count down (the simulation never has more than 30 or 40 objects in the world no matter how long you run it for). Too many entities and the app will grind to a halt.

    Lines 125-130: Farseer knows nothing about the UI, so how to draw things is entirely up to you. Farseer is just tracking the objects and moving them around using the physics engine and its rules. You'll still use XNA to draw items (using the SpriteBatch.Draw method), so you can load up your usual textures and draw items and pirates and dancing zombies all over the screen. Instead, in this demo we're going to cheat a little. In the sample code for Farseer you can download, there's a project called DebugView XNA. This project contains the DebugViewXNA class, which just handles iterating through all the bodies in the world and drawing the shapes. So we call the RenderDebugData method of that class here to draw everything correctly. In the case of this demo, we just want to draw Shapes, so take a look at the source code for the DebugViewXNA class to see how it extracts all the vertices for the shapes created (in this case simple boxes) and draws them. You'll learn a *lot* about how Farseer works just by looking at this class.

    That's it, that's all. Simple, huh? Hope you enjoy the code and library. Physics is hard and requires some math skills to really grok. The Farseer Physics Engine makes it pretty easy to get up and running and start building games. In future posts we'll get more in-depth with things you can do with the engine, so this is just the beginning. Enjoy!
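    (One footnote on ConvertUnits, since it is referenced throughout: it ships with the Farseer samples rather than the core library. A simplified stand-in looks roughly like this - the 100-pixels-per-metre ratio is an assumption on my part; the real samples class lets you configure it.)

        // Minimal stand-in for the samples' ConvertUnits helper.
        // Vector2 comes from Microsoft.Xna.Framework.
        public static class ConvertUnits
        {
            private const float DisplayUnitsPerSimUnit = 100f; // assumed ratio

            public static float ToSimUnits(float displayUnits)
            {
                return displayUnits / DisplayUnitsPerSimUnit;
            }

            public static Vector2 ToSimUnits(float x, float y)
            {
                return new Vector2(x, y) / DisplayUnitsPerSimUnit;
            }

            public static Vector2 ToDisplayUnits(Vector2 simUnits)
            {
                return simUnits * DisplayUnitsPerSimUnit;
            }
        }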

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially we got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say: bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.       A        XXX.XXX.XXX.158
        mail.bigjaws.com.  A        XXX.XXX.XXX.158
        bigjaws.com.       MX (10)  mail.bigjaws.com.
        bigjaws.com.       TXT      "v=spf1 +a +mx -all"

    The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set up anything from scratch. Of course, for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.schoox.com.  A        XXX.XXX.XXX.158
        bigjaws.com.      MX (10)  mail.bigjaws.com
        bigjaws.com.      SPF      "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF check should have passed: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there but I'm not sure if it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org), it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected].
        spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed
        an envelope sender address of [email protected]. The domain bigjaws.com has authorized
        static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message
        should have been accepted. It is impossible for us to say why it was rejected. What should I do?
        If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk).
        Delivered-To: [email protected]
        Received: by 10.14.177.70 with SMTP id c46csp289656eem;
                Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386;
                Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158])
                by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59
                for <[email protected]>
                (version=TLSv1 cipher=RC4-SHA bits=128/128);
                Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
        Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
        Authentication-Results: mx.google.com;
               spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
        DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com;
                b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu;
                h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
        Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
        Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP)
                by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated);
                23 Oct 2013 10:10:59 +0200
        Message-ID: <[email protected]>
        Date: Wed, 23 Oct 2013 11:11:00 +0300
        From: BigJaws Employee <[email protected]>
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
        MIME-Version: 1.0
        To: [email protected]
        Subject: test SPF
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: 7bit

        test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?

    Read the article

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
    Introduction

    Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder, and I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college, because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most people learn in CS101. I'm currently reading Coders at Work, and this question is based on some ideas I read in there.

    A Brief (Fictionalized) Example

    So one sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, and 12 months and set sail. Obviously, my transmuting server isn't perfect: sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'. After all, gdb is my best friend. Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary.

    The Aftermath

    Of course, I have to rewrite the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows, since I need some legacy Apache include for the JSON library and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... I think to myself: keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy, but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels like I'm just piling crap on top of other crap. Maybe I'm naive in thinking every piece of software can be simple and clever. Maybe it's just a phase... or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

    Read the article

  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Therefore, functions such as onInit, onConfigRequest, etc. will be necessary. (These are triggered e.g. by incoming messages on a TCP port.) My goal is to create a class, for example an abstract one, which has abstract functions such as onInit(), etc. A programmer should just inherit from this base class and merely override these abstract functions. The rest of the protocol handling should simply happen in the background (using the code of the base class) and should not need to appear in the programmer's code. What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the key terms for this problem? (I have problems searching for a solution since I lack clear terms to search on.) The goal is to create some sort of library/class which -- included in one's code -- results in executables following the protocol. EDIT (new explanation): Okay, let me try to explain in more detail. In this case, programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it will receive an Init message (TcpClient); when this happens, it should trigger the function onInit(). (Should this be implemented by an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers onConfig, and so on. Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer has to edit, while the overall protocol messaging stays hidden from him. Therefore, I thought using an abstract class with the abstract methods onInit() and onConfig() in it should be the right thing. I would like to hide the static Main method, since within it, e.g., there will be some part which connects to the TCP port, reacts to the Init message, and calls the onInit function. Two problems here: 1. The static Main method can't be inherited, can it? 2. I cannot call abstract functions from the static Main method of the abstract master class. Let me give a pseudo-example of my idea:

        public abstract class MasterClass
        {
            static void Main(string[] args)
            {
                // 1. open TCP connection
                // 2. wait for Init message from server
                // 3. onInit();
                // 4. send acknowledgement that the Init routine ended successfully
                // 5. wait for Config message from server
                // 6. ...
            }

            public abstract void onInit();
            public abstract void onConfig();
        }

    I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this? EDIT: The strategy idea provided below is a good one! Check out my comment on that.
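    This is close to a textbook case for the template method pattern: the base class owns a concrete run method that performs the protocol steps in order and calls the abstract hooks, while Main -- which indeed cannot be inherited -- shrinks to a single line in each concrete program. A minimal sketch, with all type and member names invented for illustration:

        using System;
        using System.Net.Sockets;

        public abstract class ProtocolClient
        {
            // Template method: drives the fixed protocol and calls the
            // overridable hooks at the right moments.
            public void Run(string host, int port)
            {
                using (TcpClient client = new TcpClient(host, port))
                {
                    WaitForInitMessage(client);    // plumbing, hidden in the base
                    OnInit();                      // programmer's hook
                    SendAcknowledgement(client);   // plumbing, hidden in the base
                    WaitForConfigMessage(client);
                    OnConfig();                    // programmer's hook
                }
            }

            // The only members a programmer has to touch.
            protected abstract void OnInit();
            protected abstract void OnConfig();

            private void WaitForInitMessage(TcpClient c)  { /* read + parse Init */ }
            private void SendAcknowledgement(TcpClient c) { /* write ack */ }
            private void WaitForConfigMessage(TcpClient c) { /* read + parse Config */ }
        }

        public sealed class MyClient : ProtocolClient
        {
            protected override void OnInit()   { Console.WriteLine("init logic"); }
            protected override void OnConfig() { Console.WriteLine("config logic"); }

            // Main is not inherited, but nothing protocol-related lives here.
            static void Main(string[] args)
            {
                new MyClient().Run("localhost", 4000);
            }
        }

    An event system works too, but for a strictly ordered startup handshake, plain virtual hooks keep the sequence obvious and impossible to subscribe to in the wrong order.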

    Read the article

  • ActiveRecord Logic Challenge - Smart Ways to Use AR Timestamp

    - by keruilin
    My question is somewhat specific to my app's issue, but the answer should be instructive in terms of use cases for association logic and the record timestamp. I have an NBA pick 'em game where I want to award badges for picking x number of games in a row correctly -- 10, 20, 30. Here are the models, attributes, and associations in play:

        User
          id
        Pick
          id
          result     # values can be 'W', 'L', 'T', or nil. nil means the game hasn't resolved yet.
          resolved   # values can be true, false, or nil.
          game_time
          created_at
        Badge
          id
        Award
          id
          user_id
          badge_id
          created_at

    Note: there are cases where a pick's result field and resolved field will always be nil. Perhaps the game was cancelled. User has many awards. User has many picks. Pick belongs to user. Badge has many awards. Award belongs to user. Award belongs to badge. One of the important rules to capture in the code is that while a user can be awarded multiple streak badges (e.g., a user can win multiple 10-streak badges), the user CAN'T be awarded another badge for consecutive winning picks that were previously granted an award badge. One way to think of this is that all the dates of the winning picks must come after the date the streak badge was awarded. For example, let's pretend that a user made 13 winning picks from May 5 to May 8, with the 10th winning pick occurring on May 7 and the last 3 on May 8. The user would be awarded a 10-streak badge on May 7. Now if the user makes another winning pick on May 9, the code must recognize that the user only has a streak of 4 winning picks, not 14, because the user already received an award for the first 10. Now let's assume that the user makes 6 more winning picks. In this case, the code must recognize that all winning picks since May 5 are eligible for a 20-streak badge award, and make the award. Another important rule is that when looking at a winning streak, we don't care about the game time, but rather when the pick was made (created_at). For example, let's say that Team A plays Team B on Saturday, and Team C plays Team D on Sunday. If the user picks Team A to beat Team B on Thursday, and Team C to beat Team D on Friday, and Team A wins on Saturday, but Team C loses on Sunday, then the user has a losing streak of 1 (the most recent pick by created_at, made Friday, lost). So when must the streak check kick in? As soon as a pick is a win. If it's a loss or tie, there's no point in checking. One more note: if the pick is not resolved (false) and the result is nil, that means the game was postponed and must be factored out. With all that said, what is the most efficient, effective, and lean way to determine whether a user has a 10-, 20-, or 30-win streak?
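    For the streak check itself, one hedged sketch in ActiveRecord terms (method names invented, modern where/order syntax assumed, and the subtlety of wins that already counted toward a lower tier still counting toward a higher one deliberately left out):

        # Rough sketch: count consecutive wins made since the user's most
        # recent streak award, walking picks newest-first by created_at.
        def current_streak(user)
          last_award_at = user.awards.maximum(:created_at) || Time.at(0)
          streak = 0
          user.picks.where("created_at > ?", last_award_at)
              .order("created_at DESC").each do |pick|
            next if pick.result.nil?      # unresolved or postponed: factor out
            break if pick.result != 'W'   # a loss or tie ends the run
            streak += 1
          end
          streak
        end

    Run this whenever a pick resolves as a win; if it crosses 10, 20, or 30, create the Award row, after which that award's created_at naturally fences off the already-rewarded picks.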

    Read the article

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all, let me say first that I'm writing this question after months of trying to find the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it.

    About the application:

      - It runs on Windows XP Professional SP2.
      - It's built with Microsoft Visual C++ 6.0 with Service Pack 6.
      - It's MFC based.
      - It uses several external dlls (e.g. Xerces, ZLib or ACE).
      - It has high performance requirements.
      - It does a lot of network and hard disk I/O, but it's also cpu intensive.
      - It has an exception handling mechanism which generates a minidump when an unhandled exception occurs.

    Facts about the crash:

      - It only happens on multiprocessor/multicore machines and under heavy loads of work.
      - It happens at random (neither we nor our client have found a pattern yet).
      - We cannot reproduce the crash in our testing lab; it only happens on some production systems (but always on multicore machines).
      - It always ends up crashing at the same point, although the complete stack is not always the same.

    Let me add the stack of the crashing thread (obtained using WinDbg; sorry, we don't have symbols):

        ChildEBP RetAddr  Args to Child
        WARNING: Stack unwind information not available. Following frames may be wrong.
        030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9
        030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
        030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0])
        030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
        030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo])
        030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96
        030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo])
        030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0])
        030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b
        030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
        030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
        030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo])
        7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo])
        7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff
        7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be
        *** WARNING: Unable to verify checksum for xerces-c_2_7.dll
        *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll -
        7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff
        7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7
        7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be
        7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090
        7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b

    The address MyApplication+0x2a85b9 corresponds to a call to erase() on a std::list.

    What I have tried so far:

      - Reviewing all the code related to the point where the crash ends up happening.
      - Trying to enable pageheap in our testing lab, though nothing useful has been found by now.
      - Substituting the std::list with a C array -- and then it crashes in another part of the code (although it is related code, it's not the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset.
    Let me copy the stack contained in the dump:

        ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes
        ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes
        msvcrt.dll!_free() + 0xc3 bytes
        MyApplication.exe!006a4fda()
        [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe]
        MyApplication.exe!0069f305()
        ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes
        ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes
        ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes
        ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes
        ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes
        msvcrt.dll!_free() + 0xc3 bytes
        c5ffffff()

    Possible solutions (that I'm aware of) which cannot be applied:

      - "Migrate the application to a newer compiler": we are working on this, but it's not a solution at the moment.
      - "Enable pageheap (normal or full)": we can't enable pageheap on production machines, as this affects performance heavily.

    I think that's all I remember now; if I have forgotten something, I'll add it asap. If you can give me some hint or propose some possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
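    For what it's worth, there is a middle ground between doing nothing and full pageheap: the debug CRT's heap checking. It requires a _DEBUG build (so lab machines, not production), but it makes the failure fire at the corrupting write instead of at a later free. A rough sketch using VC6-era APIs from <crtdbg.h>:

        // Debug builds only: validate the CRT heap on every alloc/free.
        // Very slow, but it moves the crash to the moment of corruption.
        #include <crtdbg.h>

        int main()
        {
            int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
            _CrtSetDbgFlag(flags | _CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF);

            // ... run the application ...

            // Or, cheaper: spot-check around the suspect erase() calls.
            _ASSERTE(_CrtCheckMemory());

            return 0;
        }

    Given that the crash only shows up on multicore machines, it is also worth double-checking that every module (including the external dlls) links against the multithreaded CRT: mixing the single-threaded CRT into a VC6 multithreaded app is a classic source of exactly this kind of random heap corruption.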

    Read the article

  • Is it a missing implementation in the JPA implementation of Hibernate?

    - by Jegan
    Hi all, on my way to understanding the transaction-type attribute of persistence.xml, I came across an issue/discrepancy between Hibernate Core and the JPA implementation of Hibernate, which looks weird. I am not quite sure whether it is a missing implementation in Hibernate's JPA support. Let me post a comparison between the outcome of the JPA implementation and the Hibernate Core implementation of the same concept.

    Environment:

      - Eclipse 3.5.1
      - JSE v1.6.0_05
      - Hibernate v3.2.3 [for Hibernate Core]
      - Hibernate EntityManager v3.4.0 [for JPA]
      - MySQL DB v5.0

    1. Hibernate Core

        package com.expt.hibernate.core;

        import java.io.Serializable;

        public final class Student implements Serializable {
            private int studId;
            private String studName;
            private String studEmailId;

            public Student(final String studName, final String studEmailId) {
                this.studName = studName;
                this.studEmailId = studEmailId;
            }

            public int getStudId() { return this.studId; }
            public String getStudName() { return this.studName; }
            public String getStudEmailId() { return this.studEmailId; }

            private void setStudId(int studId) { this.studId = studId; }
            private void setStudName(String studName) { this.studName = studName; }
            private void setStudEmailId(String studEmailId) { this.studEmailId = studEmailId; }
        }

    2. JPA implementation of Hibernate

        package com.expt.hibernate.jpa;

        import java.io.Serializable;

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Table;

        @Entity
        @Table(name = "Student_Info")
        public final class Student implements Serializable {

            @Id
            @GeneratedValue
            @Column(name = "STUD_ID", length = 5)
            private int studId;

            @Column(name = "STUD_NAME", nullable = false, length = 25)
            private String studName;

            @Column(name = "STUD_EMAIL", nullable = true, length = 30)
            private String studEmailId;

            public Student(final String studName, final String studEmailId) {
                this.studName = studName;
                this.studEmailId = studEmailId;
            }

            public int getStudId() { return this.studId; }
            public String getStudName() { return this.studName; }
            public String getStudEmailId() { return this.studEmailId; }
        }

    Also, I have provided the DB configuration properties in the associated hibernate.cfg.xml [in case of Hibernate Core] and persistence.xml [in case of JPA (Hibernate EntityManager)]. Then I created a driver that adds a student, queries for the list of students, and prints their details. The issue shows up when the driver program runs.

    Hibernate Core - output:

        Exception in thread "main" org.hibernate.InstantiationException: No default constructor for entity: com.expt.hibernate.core.Student
            at org.hibernate.tuple.PojoInstantiator.instantiate(PojoInstantiator.java:84)
            at org.hibernate.tuple.PojoInstantiator.instantiate(PojoInstantiator.java:100)
            at org.hibernate.tuple.entity.AbstractEntityTuplizer.instantiate(AbstractEntityTuplizer.java:351)
            at org.hibernate.persister.entity.AbstractEntityPersister.instantiate(AbstractEntityPersister.java:3604)
            ....

    This exception is thrown the very first time the driver is executed.

    JPA Hibernate - output: the first execution of the driver on a fresh DB produced the following output.

        DEBUG SQL:111 - insert into student.Student_Info (STUD_EMAIL, STUD_NAME) values (?, ?)
        17:38:24,229 DEBUG SQL:111 - select student0_.STUD_ID as STUD1_0_, student0_.STUD_EMAIL as STUD2_0_, student0_.STUD_NAME as STUD3_0_ from student.Student_Info student0_
        student list size == 1
        1 || Jegan || [email protected]

    The second execution of the driver produced the following output.
        DEBUG SQL:111 - insert into student.Student_Info (STUD_EMAIL, STUD_NAME) values (?, ?)
        17:40:25,254 DEBUG SQL:111 - select student0_.STUD_ID as STUD1_0_, student0_.STUD_EMAIL as STUD2_0_, student0_.STUD_NAME as STUD3_0_ from student.Student_Info student0_
        Exception in thread "main" javax.persistence.PersistenceException: org.hibernate.InstantiationException: No default constructor for entity: com.expt.hibernate.jpa.Student
            at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:614)
            at org.hibernate.ejb.QueryImpl.getResultList(QueryImpl.java:76)
            at driver.StudentDriver.main(StudentDriver.java:43)
        Caused by: org.hibernate.InstantiationException: No default constructor for entity: com.expt.hibernate.jpa.Student
            ....

    Could anyone please let me know if you have encountered this sort of inconsistency? Also, could anyone please let me know if the issue is a missing implementation in Hibernate's JPA support?

    ~ Jegan
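    The error message itself points at a well-known Hibernate requirement rather than a gap in the JPA support: Hibernate instantiates entities reflectively when it hydrates query results, so every persistent class needs a no-argument constructor, which Student lost the moment the (String, String) constructor was declared. That would also explain why the first JPA run succeeds -- the freshly inserted Student is still in the persistence context, so the query never has to instantiate anything -- while the second run must rebuild it from the database and blows up. A minimal sketch of the fix (the JPA spec additionally wants entity classes non-final):

        @Entity
        @Table(name = "Student_Info")
        public class Student implements Serializable {

            // Required by Hibernate/JPA for reflective instantiation;
            // protected keeps application code on the two-arg constructor.
            protected Student() {
            }

            public Student(final String studName, final String studEmailId) {
                this.studName = studName;
                this.studEmailId = studEmailId;
            }

            // ... fields and getters as above ...
        }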

    Read the article

  • Why doesn't my QsciLexerCustom subclass work in PyQt4 using QsciScintilla?

    - by Jon Watte
    My end goal is to get Erlang syntax highlighting in QsciScintilla using PyQt4 and Python 2.6. I'm running on Windows 7, but will also need Ubuntu support. PyQt4 is missing the necessary wrapper code for the Erlang lexer/highlighter that "base" Scintilla has, so I figured I'd write a lightweight one on top of QsciLexerCustom. It's a little bit problematic, because the Qsci wrapper seems to really want to talk about line+index rather than offset-from-start when getting/setting subranges of text, while the lexer gets its arguments as offset-from-start. For now, I get a copy of the entire text and split it up as appropriate. I have the following lexer, and I apply it with setLexer(). It gets all the appropriate calls when I open a new file and set this as the lexer, and prints a bunch of appropriate lines based on what it's doing... but there is no styling in the document. I tried making all the defined styles red, and the document is still stubbornly black-on-white, so apparently the styles don't really take effect. What am I doing wrong? If nobody here knows, what's the appropriate discussion forum where people might actually know these things? (It's an interesting intersection between Python, Qt, and Scintilla, so I imagine the set of people who would know is small.) Let's assume prefs.declare() just sets up a dict that returns the value for the given key (I've verified this -- it's not the problem). Let's assume scintilla is reasonably properly constructed into its host window QWidget. Specifically, if I apply a bundled lexer (such as QsciLexerPython), it takes effect and does show styled text.

        prefs.declare('font.name.margin', "MS Dlg")
        prefs.declare('font.size.margin', 8)
        prefs.declare('font.name.code', "Courier New")
        prefs.declare('font.size.code', 10)
        prefs.declare('color.editline', "#d0e0ff")

        class LexerErlang(Qsci.QsciLexerCustom):
            def __init__(self, obj = None):
                Qsci.QsciLexerCustom.__init__(self, obj)
                self.sci = None
                self.plainFont = QtGui.QFont()
                self.plainFont.setPointSize(int(prefs.get('font.size.code')))
                self.plainFont.setFamily(prefs.get('font.name.code'))
                self.marginFont = QtGui.QFont()
                self.marginFont.setPointSize(int(prefs.get('font.size.code')))
                self.marginFont.setFamily(prefs.get('font.name.margin'))
                self.boldFont = QtGui.QFont()
                self.boldFont.setPointSize(int(prefs.get('font.size.code')))
                self.boldFont.setFamily(prefs.get('font.name.code'))
                self.boldFont.setBold(True)
                self.styles = [
                    Qsci.QsciStyle(0, QtCore.QString("base"), QtGui.QColor("#000000"), QtGui.QColor("#ffffff"), self.plainFont, True),
                    Qsci.QsciStyle(1, QtCore.QString("comment"), QtGui.QColor("#008000"), QtGui.QColor("#eeffee"), self.marginFont, True),
                    Qsci.QsciStyle(2, QtCore.QString("keyword"), QtGui.QColor("#000080"), QtGui.QColor("#ffffff"), self.boldFont, True),
                    Qsci.QsciStyle(3, QtCore.QString("string"), QtGui.QColor("#800000"), QtGui.QColor("#ffffff"), self.marginFont, True),
                    Qsci.QsciStyle(4, QtCore.QString("atom"), QtGui.QColor("#008080"), QtGui.QColor("#ffffff"), self.plainFont, True),
                    Qsci.QsciStyle(5, QtCore.QString("macro"), QtGui.QColor("#808000"), QtGui.QColor("#ffffff"), self.boldFont, True),
                    Qsci.QsciStyle(6, QtCore.QString("error"), QtGui.QColor("#000000"), QtGui.QColor("#ffd0d0"), self.plainFont, True),
                ]
                print("LexerErlang created")

            def description(self, ix):
                for i in self.styles:
                    if i.style() == ix:
                        return QtCore.QString(i.description())
                return QtCore.QString("")

            def setEditor(self, sci):
                self.sci = sci
                Qsci.QsciLexerCustom.setEditor(self, sci)
                print("LexerErlang.setEditor()")

            def styleText(self, start, end):
                print("LexerErlang.styleText(%d,%d)" % (start, end))
                lines = self.getText(start, end)
                offset = start
                self.startStyling(offset, 0)
                print("startStyling()")
                for i in lines:
                    if i == "":
                        self.setStyling(1, self.styles[0])
                        print("setStyling(1)")
                        offset += 1
                        continue
                    if i[0] == '%':
                        self.setStyling(len(i)+1, self.styles[1])
                        print("setStyling(%)")
                        offset += len(i)+1
                        continue
                    self.setStyling(len(i)+1, self.styles[0])
                    print("setStyling(n)")
                    offset += len(i)+1

            def getText(self, start, end):
                data = self.sci.text()
                print("LexerErlang.getText(): " + str(len(data)) + " chars")
                return data[start:end].split('\n')

    Applied to the QsciScintilla widget as follows:

        _lexers = {
            'erl': (Q.SCLEX_ERLANG, LexerErlang),
            'hrl': (Q.SCLEX_ERLANG, LexerErlang),
            'html': (Q.SCLEX_HTML, Qsci.QsciLexerHTML),
            'css': (Q.SCLEX_CSS, Qsci.QsciLexerCSS),
            'py': (Q.SCLEX_PYTHON, Qsci.QsciLexerPython),
            'php': (Q.SCLEX_PHP, Qsci.QsciLexerHTML),
            'inc': (Q.SCLEX_PHP, Qsci.QsciLexerHTML),
            'js': (Q.SCLEX_CPP, Qsci.QsciLexerJavaScript),
            'cpp': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'h': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'cxx': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'hpp': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'c': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'hxx': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'tpl': (Q.SCLEX_CPP, Qsci.QsciLexerCPP),
            'xml': (Q.SCLEX_XML, Qsci.QsciLexerXML),
        }

    ... inside my document window class ...

        def addContentsDocument(self, contents, title):
            handler = self.makeScintilla()
            handler.title = title
            sci = handler.sci
            sci.append(contents)
            self.tabWidget.addTab(sci, title)
            self.tabWidget.setCurrentWidget(sci)
            self.applyLexer(sci, title)
            EventBus.bus.broadcast('command.done', {'text': 'Opened ' + title})
            return handler

        def applyLexer(self, sci, title):
            (language, lexer) = language_and_lexer_from_title(title)
            if lexer:
                l = lexer()
                print("making lexer: " + str(l))
                sci.setLexer(l)
            else:
                print("setting lexer by id: " + str(language))
                sci.SendScintilla(Qsci.QsciScintillaBase.SCI_SETLEXER, language)
            linst = sci.lexer()
            print("lexer: " + str(linst))

        def makeScintilla(self):
            sci = Qsci.QsciScintilla()
            sci.setUtf8(True)
            sci.setTabIndents(True)
            sci.setIndentationsUseTabs(False)
            sci.setIndentationWidth(4)
            sci.setMarginsFont(self.smallFont)
            sci.setMarginWidth(0, self.smallFontMetrics.width('00000'))
            sci.setFont(self.monoFont)
            sci.setAutoIndent(True)
            sci.setBraceMatching(Qsci.QsciScintilla.StrictBraceMatch)
            handler = SciHandler(sci)
            self.handlers[sci] = handler
            sci.setMarginLineNumbers(0, True)
            sci.setCaretLineVisible(True)
            sci.setCaretLineBackgroundColor(QtGui.QColor(prefs.get('color.editline')))
            return handler

    Let's assume the rest of the application works, too (because it does :-)
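    A hedged, minimal counterpart that is commonly reported to work -- integer style numbers registered through the lexer's own setColor()/setPaper(), rather than QsciStyle objects -- in case the difference helps isolate the problem (PyQt4 names; not tested against these exact versions). Note in particular the second argument of startStyling(): it is a style-bits mask, and, if I read the Scintilla docs right, passing 0 there masks out every style bit, which would silently turn all setStyling() calls into no-ops:

        from PyQt4 import QtGui, Qsci

        class MinimalErlangLexer(Qsci.QsciLexerCustom):
            Default = 0
            Comment = 1

            def __init__(self, parent=None):
                Qsci.QsciLexerCustom.__init__(self, parent)
                self.setColor(QtGui.QColor("#000000"), self.Default)
                self.setColor(QtGui.QColor("#008000"), self.Comment)
                self.setPaper(QtGui.QColor("#eeffee"), self.Comment)

            def description(self, style):
                # A style is only registered if this returns non-empty.
                if style == self.Default:
                    return "Default"
                if style == self.Comment:
                    return "Comment"
                return ""

            def styleText(self, start, end):
                editor = self.editor()
                if editor is None:
                    return
                # Offsets are byte offsets; with setUtf8(True) and non-ASCII
                # text these will not line up with character indexes.
                text = str(editor.text())[start:end]
                self.startStyling(start, 0x1f)  # 0x1f = all five style bits
                for line in text.splitlines(True):
                    style = self.Comment if line.startswith('%') else self.Default
                    self.setStyling(len(line), style)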

    Read the article

  • Appropriate programming design questions.

    - by Edward
    I have a few questions on good programming design. I'm going to first describe the project I'm building so you are better equipped to help me out. I am coding a Remote Assistance Tool similar to TeamViewer, Microsoft Remote Desktop, or CrossLoop. It will incorporate concepts like UDP networking (using the Lidgren networking library), NAT traversal (since many computers are invisible behind routers nowadays), and mirror drivers (using DFMirage's mirror driver (http://www.demoforge.com/dfmirage.htm) for realtime screen grabbing on the remote computer). That being said, this program has a concept of being a client-server architecture, but I made only one program with the functionality of both client and server. That way, when the user runs my program, they can switch between giving assistance and receiving assistance without having to download a separate client or server module. I have a Windows Form that allows the user to choose between giving assistance and receiving assistance, another Windows Form for a file explorer module, another for a chat module, another for a registry editor module, and another for the live control module. So I've got a Form for each module, which raises the first question:

    1. Should I process module-specific commands inside the code of the respective Windows Form? Meaning, let's say I get a command with some data that enumerates the remote user's files for a specific directory. Obviously, I would have to update this on the File Explorer Windows Form and add the entries to the ListView. Should I be processing this code inside the Windows Form, though? Or should I be handling this in another class (although I have to eventually pass the data to the Form to draw, of course)? Or is it like a hybrid, in which I process most of the data in another class and pass the final result to the Form to draw? So I've got like 5-6 forms, one for each module. The user starts up my program, enters the remote machine's ID (not IP -- ID, because we are registering with an intermediary server to enable NAT traversal) and their password, and connects. Now let's suppose the connection is successful. Then the user is presented with a form with all the different modules. So he can open up a File Explorer, or he can mess with the Registry Editor, or he can choose to chat with his buddy. So now the program is sort of idle, just waiting for the user to do something. If the user opens up Live Control, then the program will be spending most of its time receiving packets from the remote machine and drawing them to the form to provide a 'live' view.

    2. Second design question, a spin-off of question #1. How would I pass module-specific commands to their respective Windows Forms? What I mean is, I have a class like "NetworkHandler.cs" that checks for messages from the remote machine. NetworkHandler.cs is a static class, globally accessible. So let's say I get a command that enumerates the remote user's files for a specific directory. How would I "give" that command to the File Explorer Form? I was thinking of making an OnCommandReceived event inside NetworkHandler and having each form register to that event. When the NetworkHandler received a command, it would raise the event, all forms would check it to see if it was relevant, and the appropriate form would take action. Is this an appropriate/the best solution available?

    3. The networking library I'm using, Lidgren, provides two options for checking networking messages. One can either poll ReadMessage() to return null or a message, or one can use an AutoResetEvent OnMessageReceived (I'm guessing this is like an event). Which one is more appropriate?
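    On question 2, the event-based dispatch sketched in the question might look like this (a hedged outline; the Command type and every name in it are invented, and Lidgren's actual message types are not modeled):

        using System;
        using System.Windows.Forms;

        public enum CommandKind { EnumerateFiles, RegistryRead, ChatMessage }

        public sealed class Command
        {
            public CommandKind Kind { get; set; }
            public object Payload { get; set; }
        }

        public static class NetworkHandler
        {
            public static event Action<Command> CommandReceived;

            // Called from the network read loop when a full command is decoded.
            public static void Dispatch(Command command)
            {
                Action<Command> handler = CommandReceived;
                if (handler != null) handler(command);
            }
        }

        public class FileExplorerForm : Form
        {
            public FileExplorerForm()
            {
                NetworkHandler.CommandReceived += OnCommandReceived;
            }

            private void OnCommandReceived(Command command)
            {
                if (command.Kind != CommandKind.EnumerateFiles) return; // not for us
                // Network messages usually arrive on a worker thread, so
                // marshal the UI update onto the form's thread.
                BeginInvoke(new Action(() => PopulateListView(command.Payload)));
            }

            private void PopulateListView(object payload) { /* fill the ListView */ }
        }

    A Dictionary<CommandKind, Form> routing table would spare every form from inspecting every command, but with 5-6 forms the broadcast-and-filter approach is perfectly serviceable.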

    Read the article

  • How to handle recurring dates (dates only) in .NET?

    - by Wayne M
    I am trying to figure out a good way to handle recurring events in .NET, specifically for an ASP.NET MVC application. The idea is that a user can create an event and specify that the event can occur repeatedly after a specific interval (e.g. "every two weeks", "once a month" and so on). What would be the best way to tackle this? My brainstorming right now is to have two tables: Job and RecurringJob. Job is the "master" record and has the description of the job as well as a key to what customer it's for, while RecurringJob links back to Job and has additional info on what the occurrence frequency is (e.g. 1 for "once a month") as well as the timespan (e.g. "Weekly", "Monthly"). The issue is how to determine and set the next occurrence of the job, since this will have to be something that's done regularly. I've seen two trains of thought with this: this logic should either be stored in a database column and periodically updated, or calculated on the fly in the code. Any thoughts or suggestions on tackling this?

    Edit: this is for a subscription-based web app I'm creating to let service businesses schedule their common recurring jobs easily and track their customers. So a typical use might be to create a "Cut lawn" job for Mr. Smith that occurs every month. The exact date isn't important -- it's the ability for the customer to see that Mr. Smith gets his lawn cut every month and follow up with him about it. Let me rephrase the above to better convey my idea. A sample use case for the application might be as follows:

      1. The user pulls up the customer record for John Smith and clicks the Add Job link.
      2. The user fills out the form to create a job with a name of "Cut lawn" and a start date of 11/15/2009, and selects a checkbox indicating that this job continually recurs.
      3. The user is presented with a secondary screen asking for the job frequency. The user indicates (haven't decided how at this point -- let's assume select lists) that the job occurs once a month.
      4. The user clicks save. Now, when the user views the record for John Smith, they can see that he has a job, "Cut lawn", that occurs every month starting from 11/15/2009.
      5. On the main dashboard, one week prior to the assumed start date, the user sees the job displayed with an indicator such as "12/15/2009 - Cut lawn (John Smith)". A week before the due date, someone from the company calls him up to schedule, and he says he's going to be out of town until 1/1/2010, so he wants his appointment rescheduled for that date. Our user can change the date for the job to 1/1/2010, and now the recurrence will start one month from that date (e.g. the next time will be 2/1/2010).

    The idea behind this is that the app is targeting businesses like lawn care, plumbers, carpet cleaners and the like, where the exact date isn't as important (because it can and will change as people are busy); the key thing is to give the business an indicator that Mr. Smith's monthly service is coming up, and someone should give him a call to determine when exactly it can be scheduled. In effect, give these businesses a way to track repeat business and know when it's time to follow up with a customer.
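    For the "calculated on the fly" camp, the computation itself is small enough that a stored column feels redundant; a minimal sketch (all member names invented):

        using System;

        public enum RecurrenceUnit { Daily, Weekly, Monthly }

        public class RecurringJob
        {
            public DateTime AnchorDate { get; set; }   // last scheduled/rescheduled date
            public int Frequency { get; set; }         // e.g. 1 for "once a month"
            public RecurrenceUnit Unit { get; set; }   // e.g. Monthly

            // The next occurrence is always derived from the anchor, so
            // rescheduling (Mr. Smith's 1/1/2010 change) just moves the anchor.
            public DateTime NextOccurrence()
            {
                switch (Unit)
                {
                    case RecurrenceUnit.Daily:   return AnchorDate.AddDays(Frequency);
                    case RecurrenceUnit.Weekly:  return AnchorDate.AddDays(7 * Frequency);
                    case RecurrenceUnit.Monthly: return AnchorDate.AddMonths(Frequency);
                    default: throw new InvalidOperationException("unknown unit");
                }
            }
        }

    The stored-column approach mostly earns its keep when the next occurrence must be queried or sorted in SQL -- e.g. the dashboard's "due within a week" view -- in which case the result of NextOccurrence() can be persisted whenever the anchor changes.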

    Read the article

  • How to reserve public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs, let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-). I have objects and collections which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern, as I see it, is this: the Merge API is declared as part of some internal interface, let us say IMergeable. Because the interface is internal, derived types would not be able to implement it directly. Rather, they must inherit it from a common base type. So, a common base type is introduced, which implements the IMergeable interface explicitly, where the interface methods delegate to respective protected virtual methods providing the default implementation. This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept:

        internal interface IMergeable
        {
            void Merge(object obj);
        }

        public class BaseFrameworkObject : IMergeable
        {
            protected virtual void Merge(object obj)
            {
                // The default implementation.
            }

            void IMergeable.Merge(object obj)
            {
                Merge(obj);
            }
        }

        public class SomeThirdPartyObject : BaseFrameworkObject
        {
            protected override void Merge(object obj)
            {
                // A derived type implementation.
            }
        }

    All is fine, provided a single common base type suffices, which is usually true for non-collection types. The thing is that collections must be mergeable as well. Collections do not play nicely with the presented concept, because developers do not develop collections from scratch. There are predefined implementations -- observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished, they serve a wide range of products and should never be tailored to some specific product. Which means that either:

      - they do not implement the IMergeable interface at all, because it is internal to some product, or
      - the scope of the IMergeable interface is raised to public, and the API becomes open and callable by all.

    Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with a respective framework version augmenting the standard one with the IMergeable interface implementation -- this is so bad, I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is), and now this interface has to open up to all. So what to do? My solution is this: make IMergeable a public API, but add an extra parameter to the Merge method; I call it a security token. The interface implementation may check that the token references some internal object which is never exposed to the outside. If this is the case, then the method was called from within the framework; otherwise, some outside API consumer attempted to invoke it, and so the implementation can blow up with a SecurityException.

    Here is the modified code snippet demonstrating this concept:

        internal static class InternalApi
        {
            internal static readonly object Token = new object();
        }

        public interface IMergeable
        {
            void Merge(object obj, object token);
        }

        public class BaseFrameworkObject : IMergeable
        {
            protected virtual void Merge(object obj)
            {
                // The default implementation.
            }

            public void Merge(object obj, object token)
            {
                if (!object.ReferenceEquals(token, InternalApi.Token))
                {
                    throw new SecurityException("bla bla bla");
                }
                Merge(obj);
            }
        }

        public class SomeThirdPartyObject : BaseFrameworkObject
        {
            protected override void Merge(object obj)
            {
                // A derived type implementation.
            }
        }

    Of course, this is less explicit than having an internally scoped interface, and the check is moved from compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, maybe using some standard Code Access Security features? I have only a vague understanding of it, but can the LinkDemand attribute be somehow related to it? Anyway, I would like to hear other opinions. Thanks.
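    On the gut feeling that there is a better way: if the standard collections live in assemblies their authors control, the InternalsVisibleTo attribute lets IMergeable stay internal while named friend assemblies implement it, with no token plumbing at all. A sketch (assembly names invented; strong-named assemblies must specify the full public key):

        // In the framework assembly (e.g. in AssemblyInfo.cs):
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyCompany.StandardCollections")]

        internal interface IMergeable
        {
            void Merge(object obj);
        }

        // In MyCompany.StandardCollections, which can now see the
        // framework's internals and implement the internal interface:
        public class ObservableMergeableCollection : IMergeable
        {
            void IMergeable.Merge(object obj)
            {
                // merge logic
            }
        }

    This only covers assemblies known up front, which fits the "developed from scratch in-house" scenario; for genuinely third-party collections, the runtime token check (or a CAS LinkDemand for a specific strong name, on full-trust-era .NET) remains the fallback.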

    Read the article

  • If I use a facade class with generic methods to access the JPA API, how should I provide additional processing for specific types?

    - by Shaun
    Let's say I'm making a fairly simple web application using Java EE specs (I've heard this is possible). In this app, I only have about 10 domain/data objects, and these are represented by JPA entities. Architecturally, I would consider the JPA API to perform the role of a DAO. Of course, I don't want to use the EntityManager directly in my UI (JSF), and I need to manage transactions, so I delegate these tasks to the so-called service layer. More specifically, I would like to be able to handle these tasks in a single DataService class (often also called CrudService) with generic methods. See this article by Adam Bien for an example interface: http://www.adam-bien.com/roller/abien/entry/generic_crud_service_aka_dao

    My project differs from that article in that I can't use EJBs, so my service classes are essentially just named beans, and I handle transactions manually. Regardless, what I want is a single interface for simple CRUD operations on my data objects, because having a different class for each data type would lead to a lot of duplicate and/or unnecessary code. Ideally, my views would be able to use a method such as

        public <T> List<T> findAll(Class<T> type) { ... }

    to retrieve data. Using JSF, it might look something like this:

        <h:dataTable value="#{dataService.findAll(data.class)}" var="d"> ... </h:dataTable>

    Similarly, after validating forms, my controller could submit the data with a method such as:

        public <T> void add(T entity) { ... }

    Granted, you'd probably actually want to return something useful to the caller. In any case, this works well if your data can be treated as homogeneous in this manner. Alas, it breaks down when you need to perform additional processing on certain objects before passing them on to JPA. For example, let's say I'm dealing with Books and Authors, which have a many-to-many relationship. Each Book has a set of IDs referring to its authors, and each Author has a set of IDs referring to their books. Normally, JPA can manage this kind of relationship for you, but in some cases it can't (for example, the Google App Engine JPA provider doesn't support it). Thus, when I persist a new book, for example, I may need to update the corresponding author entities. My question, then, is whether there's an elegant way to handle this, or whether I should reconsider the sanity of my whole design. Here are a couple of ways I see of dealing with it:

      1. The instanceof operator. I could use this to target certain classes when special processing is needed. Perhaps maintainability suffers and it isn't beautiful code, but if there are only 10 or so domain objects, it can't be all that bad... could it?
      2. Make a different service for each entity type (i.e., BookService and AuthorService). All services would inherit from a generic DataService base class and override methods if special processing is needed. At this point, you could probably also just call them DAOs instead.

    As always, I appreciate the help. Let me know if any clarifications are needed, as I left out many smaller details.
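    A third option worth sketching alongside the two above: keep the single generic service, and let the handful of types that need extra work register a hook for it. All class and method names here are invented for illustration:

        import java.util.HashMap;
        import java.util.Map;

        public class DataService {

            /** A pre-persist hook for entities that need special handling. */
            public interface PrePersistHook<T> {
                void beforeAdd(T entity);
            }

            private final Map<Class<?>, PrePersistHook<?>> hooks =
                    new HashMap<Class<?>, PrePersistHook<?>>();

            public <T> void registerHook(Class<T> type, PrePersistHook<T> hook) {
                hooks.put(type, hook);
            }

            @SuppressWarnings("unchecked")
            public <T> void add(T entity) {
                PrePersistHook<T> hook =
                        (PrePersistHook<T>) hooks.get(entity.getClass());
                if (hook != null) {
                    hook.beforeAdd(entity);  // e.g. update Author back-references
                }
                // entityManager.persist(entity); transaction handling elided
            }
        }

    Registering new PrePersistHook<Book>() { ... } for Book.class keeps the Book-specific back-reference maintenance out of both the UI and the generic CRUD path, and avoids sprinkling instanceof checks through add().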

    Read the article

  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I'm doing this by running var_export() on get_object_vars()). But what I actually want to see is only some of the properties (in most cases), and at the moment I have to use Ctrl+F and seek out the variable I want instead of just looking at it right after the page finishes loading. So I'm embedding buttons to show/hide large arrays etc., but I thought: 'What if the thing I'm building right now already exists?' So does it?

    Update: What would your ideal interface look like? First of all, dumped models should fit on the first screen. All the properties can be seen at first glance at the screen (there are not many of them, around 10 per model, three models total, so it is possible). Small arrays can be shown unrolled too; let the array size that counts as 'small' be definable. Ideally, the user can see the values of the properties without any clicking, scrolling, or typing. There should be some improvements to representing the values. Say, if an array is empty, show array 'My_big_array' is empty, and if a boolean variable starting with is_, has_, or had_ has false as its value, render the variable (let us take is_available for example) as is_NOT_available in red; if it has true as the value, show is_available in green, without any value shown. The same goes for defined constants. That would be ideal. I want to put the focus on this kind of switch. Krumo seems useful, but since it always collapses the variable without regard to how large it is, I cannot use it as is (though something similar might appear on github soon :)).

    Second update starts here: any programmer who sees is_available = false will know what it means; no need to do more. On bringing in color indication, I forgot about one thing: the 'switches', let's call them so, may be important or not. So right now I have some of them that show in green or red -- this is for something global, like caching, which is shown as Caching is... ON with 'ON' written in green (and 'OFF' in red when disabled), while the words describing what it is, i.e. 'Caching is... ', are written in black. And some which are not so important; for example, I haven't defined REVEAL_TIES, so it shows as REVEAL_TIES is... not set with 'not set' written in gray, while the words describing what it is stay in black. If it were set, the whole phrase would be in black, since there is nothing important there: if this small utility for showing some under-the-hood things is working, I will see some messages after it; if it isn't, the site will work independently of its state. Dividing switches into important ones and not, with a corresponding color match, should improve readability, especially for users who are not programmers and just enabled debug mode because some guy from bugzilla said to do that -- for them it would help to show what is important and what is not.
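    No single tool obviously matches this whole wish list, but the color-coded 'switch' rendering is small enough to sketch in plain PHP (all names invented; a rough illustration rather than a library recommendation):

        <?php
        // Render one property the way described above: empty arrays get a
        // one-line notice, is_/has_/had_ booleans become red/green switches,
        // small arrays are unrolled, large ones go into a collapsible block.
        function render_property($name, $value, $small = 5)
        {
            if (is_array($value) && count($value) === 0) {
                return "array '" . htmlspecialchars($name) . "' is empty";
            }
            if (is_bool($value) && preg_match('/^(is|has|had)_/', $name)) {
                if ($value) {
                    return "<span style=\"color:green\">"
                         . htmlspecialchars($name) . "</span>";
                }
                $label = preg_replace('/^((?:is|has|had)_)/', '$1NOT_', $name);
                return "<span style=\"color:red\">"
                     . htmlspecialchars($label) . "</span>";
            }
            $dump = htmlspecialchars(var_export($value, true));
            if (is_array($value) && count($value) > $small) {
                return "<details><summary>" . htmlspecialchars($name) . " ("
                     . count($value) . " items)</summary><pre>$dump</pre></details>";
            }
            return "<pre>" . htmlspecialchars($name) . " = $dump</pre>";
        }

    The <details> element is just one way to get the show/hide behavior without hand-rolled buttons; swapping in the existing JavaScript toggles would work the same way.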

    Read the article

  • Can I set a <probing> privatePath in ~/subdir/web.config without access to the root application .config?

    - by Bago
    In ASP.NET, is it possible to set a probing path from a web.config in a subdirectory that is not set up as a virtual directory? In other words, can I reference a private assembly in the ~/subdir/bin folder? Take a look at my setup below, and let me know if I'm doing something wrong, please. Let me explain why I'm doing this:

      - my page is set up in ~/subdir.
      - I don't have write access to the root. I only have FTP access to the server (i.e., I am not an IIS admin and I can't log in to the machine).
      - I am trying to use FCKeditor in my subdir application.

    Here is my folder structure:

        /
        |- subdir
        |  |- Bin
        |  |  * FredCK.FCKeditorV2.dll
        |  * Default.aspx
        |  * web.config

    Here is the <runtime> section of ~/subdir/web.config:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <probing privatePath="subdir;subdir/bin" />
          </assemblyBinding>
        </runtime>

    I've tried all sorts of things to get it to work. However, upon checking the Fusion logs, my subdir never shows up in the probing paths.
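    For what it's worth, the Fusion log behavior matches how <assemblyBinding> is documented to work: assembly probing is configured per application domain, so the runtime reportedly honors <probing> only in the root application's configuration, and a web.config in a plain (non-application) subdirectory cannot add probing paths. If the host could be persuaded to touch the root web.config once, a sketch of the entry (paths assumed) would be:

        <!-- root web.config: probe the subdirectory's Bin as well -->
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <probing privatePath="bin;subdir/Bin" />
          </assemblyBinding>
        </runtime>

    Otherwise, the usual workaround is to ask the host to mark ~/subdir as an IIS application, which gives it its own AppDomain and makes ~/subdir/Bin a first-class bin folder.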

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >