Search Results

Search found 639 results on 26 pages for 'brain pusher'.

Page 26 of 26

  • JavaScript Optimisation

    - by Jayie
    I am using JavaScript to work out all the combinations of badminton doubles matches from a given list of players. Each player teams up with everyone else. E.g. if I have the players a, b, c and d, their combinations can be:

        a & b v c & d
        a & c v b & d
        a & d v b & c

    I am using the code below, which I wrote to do the job, but it's a little inefficient. It loops through the PLAYERS array 4 times, finding every single combination (including impossible ones). It then sorts each game into alphabetical order and stores it in the GAMES array if it doesn't already exist. I can then use the first half of the GAMES array to list all game combinations. The trouble is that with any more than 8 players it runs really slowly, because the combination growth is exponential. Does anyone know a better way or algorithm I could use? The more I think about it, the more my brain hurts!

        var PLAYERS = ["a", "b", "c", "d", "e", "f", "g"];
        var GAMES = [];
        var p1, p2, p3, p4, i1, i2, i3, i4, entry, found, i;
        var pos = 0;
        var TEAM1 = [];
        var TEAM2 = [];

        // loop through players 4 times to get all combinations
        for (i1 = 0; i1 < PLAYERS.length; i1++) {
            p1 = PLAYERS[i1];
            for (i2 = 0; i2 < PLAYERS.length; i2++) {
                p2 = PLAYERS[i2];
                for (i3 = 0; i3 < PLAYERS.length; i3++) {
                    p3 = PLAYERS[i3];
                    for (i4 = 0; i4 < PLAYERS.length; i4++) {
                        p4 = PLAYERS[i4];
                        // all four players must be distinct
                        if (p1 != p2 && p1 != p3 && p1 != p4 &&
                            p2 != p3 && p2 != p4 &&
                            p3 != p4) {
                            // sort teams into alphabetical order (so we can compare them easily later)
                            TEAM1[0] = p1; TEAM1[1] = p2;
                            TEAM2[0] = p3; TEAM2[1] = p4;
                            TEAM1.sort();
                            TEAM2.sort();
                            // work out the game and search the array to see if it already exists
                            entry = TEAM1[0] + " & " + TEAM1[1] + " v " + TEAM2[0] + " & " + TEAM2[1];
                            found = false;
                            for (i = 0; i < GAMES.length; i++) {
                                if (entry == GAMES[i]) found = true;
                            }
                            // if the game is unique then store it
                            if (!found) {
                                GAMES[pos] = entry;
                                document.write((pos + 1) + ": " + GAMES[pos] + "<br>");
                                pos++;
                            }
                        }
                    }
                }
            }
        }

    Thanks in advance. Jason.
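
    A more direct approach, sketched below (my suggestion, not the original poster's code; it reuses the PLAYERS and GAMES names from above): enumerate every 4-player subset once, then emit the 3 distinct pairings each subset allows. Every game is generated exactly once, so the duplicate scan over GAMES disappears entirely.

        var PLAYERS = ["a", "b", "c", "d", "e", "f", "g"];
        var GAMES = [];

        // recursively pick 4 distinct players; indices ascend, so teams come
        // out already in alphabetical order when PLAYERS is sorted
        function choose4(start, picked) {
            if (picked.length === 4) {
                var p = picked;
                // the only 3 ways to split {p0, p1, p2, p3} into two pairs
                GAMES.push(p[0] + " & " + p[1] + " v " + p[2] + " & " + p[3]);
                GAMES.push(p[0] + " & " + p[2] + " v " + p[1] + " & " + p[3]);
                GAMES.push(p[0] + " & " + p[3] + " v " + p[1] + " & " + p[2]);
                return;
            }
            for (var i = start; i < PLAYERS.length; i++) {
                picked.push(PLAYERS[i]);
                choose4(i + 1, picked);
                picked.pop();
            }
        }

        choose4(0, []);
        // GAMES now holds C(n, 4) * 3 entries (105 games for 7 players)
        // instead of n^4 candidate tuples each checked against GAMES.

    For 8 or more players the win is substantial: the original code examines roughly n^4 tuples and scans GAMES linearly for each one, while this generates only the C(n, 4) * 3 valid games.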


  • move element li into ul with jquery

    - by ron
    Hi everybody, I'm looking for a solution to move an element into a <ul>. I'm using jQuery, and I leave the code here; I tried different things but couldn't get it working. This is the big menu:

        <ul class="sf-menu">
          <li><a href="">Student Centre</a>
            <ul>
              <li><a href="9">STUDENT CENTRAL WEBSITE</a></li>
              <li><a href="10">STUDENT CENTRAL EMAIL</a></li>
              <li><a href="11">CCM STUDENT SURVIAL TIPS</a></li>
              <li><a href="12">VET TUTTTION ASSURANSE</a></li>
              <li><a href="13">WHAT GOING ON AT CCM</a></li>
              <li><a href="14">IMPORTANT STUDENTS NOTICE</a></li>
            </ul>
          </li>
          <li><a href="">Research</a>
            <ul>
              <li><a href="_16">WHAT IS HOLISTIC KINESIOLOGY ?</a></li>
              <li><a href="_17">TRANF. CHILDREN W/ LEARNING DIFFICULTIES</a></li>
              <li><a href="_18">HEALING WITH HOLISTIC KINESIOLOGY</a></li>
              <li><a href="_19">UNDERSTANDING ASPERGER'S SYNDROME</a></li>
              <li><a href="_20">DAVID CORBY THE DIRECTOR OF CCM</a></li>
              <li><a href="_21">HELPING PEOPLE CREATE THEIR OWN MIRACLES</a></li>
              <li><a href="_22">MAGNESIUM AND COLLOIDAL MINERALS</a></li>
              <li><a href="_23">BRAIN ENERGETICS CHAKRAS AND NADIS</a></li>
              <li><a href="_24">KINESIOLOGY FAQ'S</a></li>
            </ul>
          </li>
          <li><a href="25">Contact Us</a></li>
          <li><a href="26">A - Z</a></li>
        </ul>

    And I need to add this <li> to the <ul>, but I need to put it in second place, after the first item, not nested:

        <li id="faculty"><a href="#">Faculty Courses</a>
          <ul>
            <li><a href="">INTENSE SHORT COURSES</a>
              <ul>
                <li><a href="_59">CRYSTAL KINESIOLOGY ONE</a></li>
                <li><a href="_60">APPLIED PHYSIOLOGY</a></li>
                <li><a href="_61">VIBRATIONAL HEALING SYSTEMS 1</a></li>
                <li><a href="_62">VIBRATIONAL HEALING SYSTEMS 2</a></li>
                <li><a href="_63">VIBRATIONAL HEALING SYSTEMS 3</a></li>
                <li><a href="_64">VIBRATIONAL HEALING SYSTEMS 6</a></li>
                <li><a href="_65">NUTRITIONAL KINESIOLOGY</a></li>
                <li><a href="_66">QUANTUM HARMONICS</a></li>
                <li><a href="_67">CHAKRA HOLOGRAM</a></li>
                <li><a href="_68">CLINICAL APPLICATIONS OF KINESIOLOGY</a></li>
                <li><a href="_69">COUNSELLING KINESIOLOGY</a></li>
                <li><a href="_70">HARMONISING CHI FLOW</a></li>
              </ul>
            </li>
            <li><a href="=53">FREE INTRUCTION COURSE DAYS</a>
              <ul>
                <li><a href="=53_56">HOLISTIC KINESIOLOGY</a></li>
                <li><a href="=53_57">TRANSPERSONAL COUNSELLING</a></li>
                <li><a href="=53_58">SHAMMANISM &amp; TRANSFORMATIONAL MASK</a></li>
              </ul>
            </li>
            <li><a href="=50">DIPLOMA MASK AND TRADITIONAL HEALING</a></li>
            <li><a href="=43">DIPLOMA TRANSPERSONAL ART THERAPY</a></li>
            <li><a href="=42">DIPLOMA HOLISTIC KINESIOLOGY</a></li>
            <li><a href="=47">ADVANCE DIPLOMA HOLISTIC KINESIOLOGY</a></li>
            <li><a href="=48">DIPLOMA DINAMIC AND FUNCTIONAL</a></li>
            <li><a href="=49">CERTIFICATE MASK AND TRADITIONAL HEALING</a></li>
            <li><a href="=51">DIPLOMA TRANSPERSONAL COUNSELLING</a></li>
            <li><a href="=52">STUDENT CLINICS</a></li>
          </ul>
        </li>

    Please, I need your help. Thanks in advance.
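
    A short sketch of one way to do it (my suggestion, not from the original thread; facultyHtml is a placeholder for the <li id="faculty"> block above): jQuery's insertAfter() places an element as a sibling of its target, so anchoring it to the first top-level item makes it the second entry rather than a nested child.

        $(function () {
            // abbreviated: paste the full <li id="faculty"> block from above here
            var facultyHtml = '<li id="faculty"><a href="#">Faculty Courses</a><ul>...</ul></li>';
            $(facultyHtml).insertAfter('ul.sf-menu > li:first');

            // or, if the element already exists elsewhere in the page:
            // $('#faculty').insertAfter('ul.sf-menu > li:first');
        });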


  • CodePlex Daily Summary for Wednesday, February 16, 2011

    Popular Releases

    thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: fixed a bug caused by certain project properties not being available on Web Service Software Factory projects; fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used; the menu item code now caters for CommandBar instances that are not available, for example the Web Item CommandBar does not exist ...

    Document.Editor: 2011.5: What's new for Document.Editor 2011.5: new export to email, new export to image, new document background color, improved tooltips, minor bug fixes, improvements and speed-ups.

    Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a portable environment, you can use the "Clean Install" and target your portable folder; tested and it works fine. Changes for this release: reworked the ToolStrip settings to avoid the VS.NET clash with auto-generating files for .settings files, renamed to .settings.config; packaged both the log4net and ToolStripSettings files into the installer; upgraded the version inform...

    Export Test Cases From TFS: Test Case Export to Excel 1.0: Team Foundation Server (TFS) 2010 enables users to manage test cases as work items. The complete description of a test case, along with its steps, can be managed as a single work item in TFS 2010. Before migrating to TFS 2010, many test teams will have used MS Excel to manage their test cases (or test scripts). After migrating to TFS 2010, test teams can manage the test cases on the server, but there may be a need to get the test cases into an Excel sheet, e.g. for approval from business analysts ...

    AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: online users, avatars, a copy function (to create a new article from another one), SEO improvements (friendly URLs), new admin buttons, and more...

    Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to beta stage; publish photo feature; "email" field of the User object added; new Graph API objects: Group, Event; new Graph API connections: likes, groups, events.

    DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info see http://www.dotnetage.com/djme.html. What is new? The Grid extension was added; the ModelBinder was added, which helps you create bindable data actions; the DnaFor() control factory was added, enabling model-bindable extensions; enhanced ListBox and ComboBox data binding.

    Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features; many bug fixes.

    Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: Fixes and/or improvements. Major: added complete support for VC projects, including .vcxproj and .vcproj; all padding issues fixed; a project's assembly versions are only changed if the project has been modified. Minor: the order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision; fixed an issue with global variable storage in some projects; fixed an issue where, if a project item's file does not exist, a ...

    Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.

    TV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.

    Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1, including the broken per-desktop backgrounds; further improves the speed of switching desktops; a few UI performance improvements; added donation links.

    NuGet: NuGet 1.1: NuGet is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL of the package OData feed is http://go.microsoft.com/fwlink/?LinkID=206669. To see the list of issues fixed in this release, visit our issues list.

    EnhSim: EnhSim 2.4.0: This release supports WoW patch 4.06 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. Changes since 2.3.0 - Upd...

    Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example, which has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. You can refer to the User's Guide for complete documentation, and use the unit tests as a guide...

    PDF Rider: PDF Rider 0.5.1: Changes from the previous version: uses dynamic layout to better fit text in other languages; includes French and Spanish localizations. Prerequisites: Microsoft Windows operating system (XP - Vista - 7); Microsoft .NET Framework 3.5 runtime; a PDF rendering program (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions - choose one of the following methods: 1. Download and run "pdfRider0.5.1-setup.exe" (recommended). 2. Down...

    Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug-fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser; hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! P.S. By request, I am also attaching a .zip file ... so that people can install it ...

    SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: support for SharePoint 2010; e-learning actions can be localised; two new document library edit options; automatically adds the Assignment List Web Part to the Web Part Gallery; various bug fixes for the Drop Box. There are 2 downloads for this release: SLK-1.5-2010.zip for SharePoint 2010 and SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007).

    Facebook C# SDK: 5.0.3 (BETA): This is the fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build; some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application; Facebook C# SDK v5 Beta Internals; Facebook C# SDK V5.0.0 (BETA) Released. We have spent time trying ...

    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's new: this release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the complete NodeXL release history for details. Installation steps: download the Zip file, then unzip it into any folder using WinZip or a similar program, or just right-click the Zip file...

    New Projects

    (OSV) On Screen Volume Library (VB6): OSV was written in VB6. The library shows an LED display when you change the master volume level. The OSV LED window is fully customizable, which means you can choose your own colors and transparency. Supports mute features. Requires the CoreAudio Type Library.

    Ada: Automatic Download Agent: Ada is a C# newsgroup binary downloader targeting the Mono runtime. The project is separated into 3 sections: the core class library, the WCF service host, and the administration clients. (Currently, only an ASP.NET client is in progress, though the design allows for others.)

    Advanced Explorer for Wp7: With Advanced Explorer for Wp7 you can finally share files with your desktop PC. Features: file management; sending files from PC to phone and from phone to PC; editing/browsing the registry; using ProvisionXml. Of course, Advanced Explorer for Wp7 is written in C#.

    Author-it Post Publishing Processing Project: The Author-it Post Publishing Processing Project is a utility that makes changes to content after the content is published from Author-it.

    Baka MPlayer: Baka MPlayer.

    BizTalk ESB Silverlight Portal: A Silverlight portal replacement for the ESB Toolkit portal.

    Cellz: A functional Silverlight spreadsheet written in F# and bound to the DataGrid control.

    Comet - Visual Studio 2010 Add-In: A Visual Studio add-in for C# that helps to generate constructors from fields/properties, as well as constructors of the superclass. The commands are accessible from the text editor's context menu. Comet is developed in C#.

    Conway's Game of Life: A Windows C# app that implements a cellular automaton. Give an initial seed by clicking on the grid and making cells live, or by letting the app create a random seed. Observe it evolve, control the speed of the simulation, stop it, save it, or load previously saved configurations.

    DACU: A small app for using vkontakte. It's developed in C# (Visual Studio 2010) and uses .NET 4.0 WPF technology.

    dWebBot: A web bot for simply designing bots for online games etc.

    Enhanced Social Features for SharePoint 2010: Enhanced social features for SharePoint 2010.

    Express Market: Express Market e-commerce project v1.0: payment, XML module.

    FatBaby: A Solr (javabin) tool.

    Feedbacky: Feedbacky is an open source alternative to services like Getsatisfaction and UserVoice. Feedbacky lets you create and merge your own feedback service within your website/webapp without paying anything, letting your users give their opinion about your service.

    FT.Architecture: A data access layer framework with an NHibernate implementation. The framework is designed to be completely independent of its implementation (NHibernate or otherwise). It brings entity equality, repositories, an independent query language, and many other things for free.

    FusionDotNet: A .NET client library for Google Fusion Tables. Wraps the REST-based API in a set of managed classes and integrates the results into ADO.NET objects. Also includes a set of activities for use with Workflow Foundation 4.0.

    Glyphx: Glyphx underlays your transparent taskbar with glyphs from the matrix and cyberspace. This projection makes your monitor emit a certain frequency; these waves infiltrate your brain and boost your alpha waves, making you more productive and an efficient hacker.

    Hint Based Cooperative Caching: A project to simulate hint-based cooperative caching schemes and try to find optimizations for specific scenarios.

    Hydrator: Flexible Test Data Generator: Hydrator is a highly configurable test data generator.

    IQP Demosite: A demonstration site for the IQP Ajax Framework.

    Iridium: The Iridium game engine; C++ OpenGL using FreeGlut.

    Minecraft Applications!: <minecraft> <users> <gamers> <application> <crafting>

    MP3 Year Finder: This project tries to find the release year of MP3 tracks using Wikipedia.

    Mr.Dev: A Day at the Office: An XNA platform game inspired by Monty's Revenge and Battle Kid. This project will serve as an example of how to code a platform game in C# using the XNA framework. Features include: game story, bosses, enemies, simple 2D physics, ...

    MVC N-Tier EMR Sample: A sample MVC n-tier EMR application that uses MVC3 for presentation, WCF, and Entity Framework Code First (CTP5).

    MyCodeplex: This is a project on something.

    NHR Portal Development: The NHR Portal development project. All project-related materials for the initial alpha release (working prototype) are available in the Downloads tab. Important discussions that are beneficial to the development team are in the Discussions tab.

    Orchard Keep Alive: This Orchard module prevents the application from unloading by periodically pinging a specific page.

    PingStatistic: PingStatistic, Ping

    Planz: Planz™ provides a single, integrative, document-like overlay to a Windows file system. This overlay provides a context in which to create to-do lists, notes, files, and links to Outlook email messages, files and web pages. It is flexible enough to implement a GTD system.

    Practicing Managed DirectX 9: My test project.

    SPEx - SharePoint Javascript library extended: SPEx is a JavaScript library which extends some of the existing out-of-box functionality of SharePoint.

    Watcher: Boolean assertion guardian: A flexible boolean assertion guardian.

    WebAdvert: WebAdvert is an ASP.NET MVC 3 application. It is intended to demonstrate the basics and principles of ASP.NET MVC and Entity Framework 4 to learners.

    WebSharpCompiler: A simple web application that compiles the code in a textbox as C# and displays the compilation messages as a result.

    WP7 Backup Service: WP7 Backup Service aims to create a very simple backup mechanism for application data residing inside IsolatedStorage. It is application-independent and allows for multiple backup and restore points.


  • SQLAuthority News – Job Interviewing the Right Way (and for the Right Reasons) – Guest Post by Feodor Georgiev

    - by pinaldave
    Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. Feodor has written an excellent article on job interviewing the right way. Here is his article in his own words.

    A while back I was thinking of starting a blog post series on interviewing and employing IT personnel. At that time I had just read the 'Smart and Gets Things Done' book (http://www.joelonsoftware.com/items/2007/06/05.html) and I was hyped up on some debatable topics regarding finding and employing the best people in the branch. I have no problem with hiring the best of the best; it's just the definition of 'the best of the best' that makes things a bit more complicated. One of the fundamental books one can read on the topic of interviewing is the one mentioned above. If you have not read it, then you must do so; not because it contains the ultimate truth, and not because it gives the answers to most questions on the subject, but because the book contains an extensive set of questions about interviewing and employing people. Of course, a big part of these questions have different answers, depending on location, culture, available funds and so on. (What works in the US may not necessarily work in the Nordic countries or India, or it may work in a different way.)

    The only thing that is valid regardless of any external factor is this: curiosity. In my belief there are two kinds of people, curious and not-so-curious, regardless of profession. Think about it: professional success is directly proportional to the individual's curiosity plus time of active experience in the field. (I say 'active experience' because vacations and any distractions do not count as experience. :) ) So, curiosity is the factor which will distinguish a good employee from the not-so-good one. But let's shift our attention to something else for now: a few tips and tricks for successful interviews.

    Tip and trick #1: get your priorities straight. Your status usually dictates your priorities; for example, if the person looking for a job has just relocated to a new country, they might tend to ignore some of their priorities and overload others. In other words, setting priorities straight means defining the personal criteria by which the interview process is led. For example, questions similar to the following can help define the criteria for someone looking for a job: How badly do I need a (any) job? Is it more important to work in a clean and quiet environment, or is it important to get paid well (or both, if possible)? And so on… Furthermore, before going to the interview, the candidate should have a list of priorities, sorted by importance: e.g. I want a quiet environment, x amount of money, a great, helpful boss, a desk next to a window, and so on. It is also a good idea to be prepared and know which factors can be compromised, and to what extent.

    Tip and trick #2: the interview is a two-way street. A job candidate should not forget that the interview process is not a one-way street. What I mean by this is that while the employer is interviewing the potential candidate, the job seeker should not miss the chance to interview the employer. Usually, the employer and the candidate will meet for an interview and talk about a variety of topics. In a quality interview the candidate will be introduced to key members of the team and will have the opportunity to ask them questions. By asking the right questions, both parties will form their opinion about each other. For example, if the candidate talks to one of the potential bosses during the interview process and notices that the potential manager has a hard time formulating a question, then it is up to the candidate to decide whether working with such a person is a red flag for them.

    There are as many interview processes out there as there are companies, and each one is different. Some bigger companies and corporations can afford pre-selection processes and 3 or even 4 stages of interviews; small companies usually settle for one interview. Some companies even give cognitive tests at the interview. Why not? In his book, Joel suggests that a good candidate should be pampered and spoiled beyond belief with a week-long vacation in New York, fancy hotels, food and who knows what. For all I can imagine, an interview might even take place at the top of the Eiffel Tower (right, Mr. Joel, right?). I doubt, however, that this is the optimal way to capture the attention of a good employee.

    The 'curiosity' topic. What I have learned so far in my professional experience is that opinions can be subjective. Plus, opinions on technology subjects can also be subjective. According to Joel, only hiring the best of the best is worth it. If you ask me, there is no such thing as best of the best, simply because human nature (well, aside from some physical limitations, like putting your pants on through your head :) ) has no boundaries. And why would it have boundaries? I have seen many curious and interesting people, naturally good at technology, though as uninterested in it as one can possibly be; I have also seen plenty of people interested in technology who (in an ideal world) should have stayed far from it. At any rate, all of this sums up in the end to the 'supply and demand' factor. The interview process big bang boils down to this: if there is a mutual benefit for both the employer and the potential employee to work together, then it all sorts out nicely. If there is no benefit, then it is much harder to get to a common place.

    Tip and trick #3: word-of-mouth is worth a thousand words. Here I would just mention that the best thing a job candidate can get during the interview process is access to future team members or other employees of the new company. Nowadays the world has become quite small and everyone knows everyone. Look at LinkedIn, look at other professional networks, and you will realize how small the world really is. Knowing people is a good way to become more approachable and to approach them.

    Tip and trick #4: be confident. It is true that for some people confidence is as natural as breathing, while others have to work hard to express it. Confidence is, however, a key factor in convincing the other side (potential employer or employee) that there is a great chance for success by working together. But it cannot get you very far if it's not backed up by talent, curiosity and knowledge.

    Tip and trick #5: the right reasons. What really bothers me in Sweden (and I am sure that there are similar situations in other countries) is that there is a tendency to fill quotas and to filter out candidates by criteria different from their skill and knowledge. In job ads I quite often see the phrases 'positive thinker', 'team player' and many similar hints about personality features. So my guess here is that discrimination has evolved to a new level. Let me clear up the definition of discrimination: 'unfair treatment of a person or group on the basis of prejudice'. And prejudice is 'partiality that prevents objective consideration of an issue or situation'. In other words, there is not much difference whether a job candidate is filtered out by race, gender or by personality features; it is all a bad habit. And in reality, there is no proven correlation between technology knowledge paired with skills and personal features (gender, race, age, optimism). It is true that a significantly greater number of Darwin Awards were given to men than to women, but I am sure that somewhere there is a paper or theory explaining the genetics behind this. :)

    This topic actually brings to mind one of my favorite work-related stories. A while back I was working for a big company with many teams involved in their processes. One of the teams occupied 2 rooms: one held the team members and was full of light, colorful posters, chit-chat and giggles, whereas the other room was dark, lit only by a single monitor, with a quiet person in front of it. Later on I realized that the 'dark room' person was the guru and the ultimate problem-solving brain, who did not like the chats and giggles and hence was in a separate room. In reality, all severe problems which the chatty and cheerful team members could not solve, and all emergencies, were directed to 'the dark room'. And thus all worked out well. The moral of the story: personality has nothing to do with technology knowledge and skills. End of story.

    Summary: I'd like to stress the fact that there is no ultimately perfect candidate for a job, and there is no such thing as 'best-of-the-best'. From my personal experience, the main criterion by which I measure people (co-workers and bosses) is the curiosity factor; I know from experience that the more curious and inventive a person is, the better the chances are for great achievements in their field.

    Related stories (for extra credit):

    1) Get your priorities straight. A while back, as a consultant, I was working for a few days at a time at different offices and for different clients, and so I was able to compare and analyze the work environments. There were two different places which I compared, and recently I asked a friend of mine the following question: "Which one would you prefer as a work environment: a noisy office full of people, or a quiet office full of foul smells because the office is rarely cleaned?" My friend was puzzled for a while, thought about it and said: "Hmm, you are talking about two different kinds of pollution… I will probably choose the second, since I can clean the workplace myself a bit…"

    2) The interview is a two-way street. One time, during a job interview, I met a potential boss who had a hard time phrasing a question. At that particular time it was clear to me that I would not have liked to work under this person. According to my work religion, a properly asked question contains at least half of the answer. And if I work with someone who cannot ask a question… then I'd be doing double or triple work. At another interview, after the technical part with the team leader of the department, I was introduced to one of the team members and we were left alone for 5 minutes. I immediately jumped on the occasion and asked the blunt question: 'What have you learned here in the past year, and how do you like your job?' The team member looked at me and said, 'Nothing, really. I like playing with my cats at home, so I am out of here at 5pm and I don't have time for much.' I was disappointed at the time, and I did not take the job offer. I wasn't that shocked a few months later when the company went bankrupt.

    3) The right reasons to take a job: personality check. A while back I was asked to serve as a job reference for a co-worker. I agreed, and after some weeks I got a phone call from the company where my colleague was applying for a job. The conversation started with the manager's question about my colleague's personality and about their social skills. (You can probably guess what my internal reaction was… :) ) So, after 30 minutes of pouring common sense into the interviewer's head, we finally agreed on the fact that a shy or quiet personality has nothing to do with work skills and knowledge. Some years down the road, my former colleague is taking the manager's position as the manager is demoted to a different department.

    Reference: Feodor Georgiev, Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • CodePlex Daily Summary for Sunday, April 08, 2012

    Popular Releases

    ExtAspNet: ExtAspNet v3.1.3: ExtAspNet is an ASP.NET 2.0 AJAX control library based on ExtJS, which aims to let you build web applications without writing JavaScript, CSS, UpdatePanel, ViewState or WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Home: http://extasp.net/. Forum: http://bbs.extasp.net/. Project: http://extaspnet.codeplex.com/. Blog: http://sanshi.cnblogs.com/. Changes: +2012-04-08 v3.1.3: fixed a JS bug when Language="zh_TW"; ...

    Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.5.5: New controls: ChatBubble, ChatBubbleTextBox, OpacityToggleButton. New stuff: TimeSpan languages added (RU, SK, CS); the physics math from TimeSpanPicker is now exposed; image stretch now on buttons. Bug fixes: layout fix so RoundToggleButton and RoundButton are exactly the same; fix for ColorPicker when set via code-behind; ToastPrompt bug fix with OnNavigatedTo; Toast now adjusts its layout if the SIP is up; fixed some issues with Expression Blend support.

    Harness - Internet Explorer Automation: Harness 2.0.3: Supports operations on frameset, frame and iframe. Added commands: SwitchFrame, GetUrl, GoBack, GoForward, Refresh, SetTimeout, GetTimeout. Renamed commands: GetActiveWindow to GetActiveBrowser, SetActiveWindow to SetActiveBrowser, FindWindowAll to FindBrowser, NewWindow to NewBrowser, GetMajorVersion to GetVersion.

    Better Explorer: Better Explorer 2.0.0.861 Alpha: Fixed the new folder button not working well in some situations; removed some unnecessary code, like subclassing that is not needed anymore; added an option to make Better Explorer the default (at least for WIN+E operations); added an option to let file operation replacements (like TeraCopy) work with Better Explorer; added some basic usability to the "Share" button; other fixes.

    Text Designer Outline Text: Version 2 Preview 2: Added fake 3D demos for C++ MFC, C# WinForms and C# WPF.

    LightFarsiDictionary: LightFarsiDictionary - v1.

    WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version 2.5.0.3 (Milestone 3). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] marked member as obsolete. [O] WAF: marked the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...

    GeoMedia PostGIS data server: PostGIS GDO 1.0.1.2: This is a new version of the GeoMedia PostGIS data server which supports user rights, meaning that only those feature classes which the current user has rights to select are visible in GeoMedia. Issues fixed in this release: fixed a problem with renaming and deleting feature classes - IMPORTANT! - the gfeatures view must be recreated so that this issue is completely fixed. The attached script "GFeaturesView2.sql" can be used to accomplish this task. Another way is to drop and recreate the metadat...

    SkyDrive Connector for SharePoint: SkyDrive Connector for SharePoint: Fixed a few bugs pertaining to Live authentication; removed the dependency on Shared Documents; removed the CallBack web part property.

    ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes, we now have conditional formatting. The conditional formatting was sponsored by http://www.bewing.nl (big thanks). New in v0.65.1: fixed an issue when loading conditional formatting with default values for icon sets. New in v0.65.2: fixed an issue loading conditional formatting; improved insert performance.

    Liberty: v3.2.0.0 Release, 4th April 2012: Change log - added: Halo 3 support (invincibility, ammo editing); Halo 3: ODST support (invincibility, ammo editing); the file transfer page now shows its progress in the Windows 7 taskbar; an "About this build" settings page. Reach: change what an object is carrying; change which node a carried object is attached to; an object node viewer and exporter; change which weapons you are carrying from the object editor; edit the weapon controller of vehicles and turrets; an error dia...

    MSBuild Extension Pack: April 2012: Release blog post. The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following. System items: Active Directory, certificates, COM+, console, date and time, drives, environment variables, event logs, files and folders, FTP, GAC, network, performance counters, registry, services, sound. Code: assemblies, AsyncExec, CAB files, code signing, DynamicExecute, file detokenisation, GUID...

    DotNetNuke® Community Edition CMS: 06.01.05: Major highlights: fixed an issue that stopped users from creating vocabularies when the portal ID was not zero; fixed an issue that caused modules configured to be displayed on all pages to be added to the wrong container on new pages; fixed a page quota restriction issue in the Ribbon Bar; removed the restriction that would not allow users to use a dash in page names, so users can now create pages with names like "site-map"; fixed an issue that was causing the wrong container to be loaded in modules wh...

    51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One-click install from NuGet. Changes to version 2.1.3.1: 1. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in medium-trust environments. 2. The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to version 2.1.2.1: code changes: 1. The project is now licensed under the Mozilla...

    MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added support for the MVC4 beta and WebApi. The SafeQuery and HttpSafeQuery IQueryable implementations work as wrappers around any IQueryable to protect it from unwished queries. A "client-side" pager specialized in paging JavaScript data coming either from a remote data source or from local data. A LINQ-like fluent JavaScript API to build queries either against remote data sources or against local JavaScript data, with exactly the same interface. There are 3 different query objects exp...

    nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.50: Highlighted features and improvements: significant performance optimization; store owners can now create several shipments per order, and a new shipping status was added: "Partially shipped"; pre-order support added, which enables your customers to place a pre-order and pay for the item in advance - a "Pre-order" button is displayed instead of "Buy Now" on the appropriate pages, making it possible for customers to buy available goods and pre-order items during one session. It can be managed on a product variant ...

    WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-available

    SageFrame: SageFrame 2.0: SageFrame is an open source ASP.NET web development framework built on ASP.NET 3.5 with Service Pack 1 (SP1). It is designed specifically to help developers build dynamic websites by providing core functionality common to most web applications.

    iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in the iTunes library.

    Document.Editor: 2012.2: What's new for Document.Editor 2012.2: new Save Copy support; new Page Setup support; minor bug fixes, improvements and speed-ups.

    New Projects

    CameraSudokuSolver: A Sudoku solver using the camera, for Android.

    CardboardBox: A WPF-based client application for the danbooru.donmai.us image board.

    Cloud in Net: Cloud in Net.

    DataStoreCleaner: DataStoreCleaner clears the "DataStore" folder, which manages the Windows Update history. It is useful for fixing WU errors or tuning up Windows start-up. It's developed in C#.

    E-Ticaret - Sanal Magaza: At its core, it will be a system that provides and manages the basic features of advanced e-commerce systems. Afterwards, dedicated work will be done on additional enhancements.

    Finding all factors: This is a simple WinForms application for finding all factors or divisors of a given number. I think it is a simple example for learning how to work with the essential BackgroundWorker class.

    FlashGraphicBuilder: A simple graphic builder in AS3.

    IpPbxImportRR: A command-line tool to import or export routing records to/from a SwyxWare VoIP PBX. This is a sample application showing how to use the SwyxWare Configuration API (CDS API). Use it at your own risk.

    KNX.net: KNX.net provides a KNX API for C#.

    LevelDesigner: LevelDesigner.

    N2F Yverdon Sanitizers: A system to help ease the pain of sanitizing data. The system comes with some basic sanitizers, as well as the framework necessary to enable easy development of custom sanitizers.

    nAPI for Windows Phone: A Windows Phone library for using Naver(R)'s open API service. (This is a third-party library project, NOT an NHN/Naver official project.) For more info about the Naver Open API: http://dev.naver.com/openapi/

    NJena (Semantic Web Framework for .NET): A .NET source-level port of the Jena framework.

    NWTCompiler: NWTCompiler is for those interested in creating their own compiled database of the New World Translation of the Holy Scriptures from the watchtower.org sources. Formats such as e-Sword are supported.

    Odin Game Engine: Odin is an attempt at mixing Windows Forms and XNA with the goal of making a game engine.

    qq: qq

    Ranmal MVC Work: It is my knowledge exchange project.

    RichCode: RichCode is a UserControl that is easy to use and allows custom syntax highlighting without much effort.

    RUPSY Download manager: RUPSY download manager.

    SDCBCWEB: This is a web application for a class project. The goal is to create a content management system to be used for educational purposes by a nonprofit.

    Snake Game: Coursework.

    Spec Pattern: Spec Pattern is a simple yet powerful implementation of the specification pattern in C#. Relying on IQueryable, it covers the three requirements this pattern aims to solve: validation, querying, and building.

    Steel Brain: An open source project which aims to implement a neural net framework in unmanaged C++. The longer-term goal explains why I chose C++ over the phenomenal C#: creating a neural net that can harness the power of GPGPUs by using the OpenCL library. (I'm not a believer in projects that try to provide solutions for developers who don't want to come out of the managed, cozy C# context.)

    Sunday: Another web content management system.

    TO7 License Plate Recognition: This project is part of a college assignment.

    ts: ts

    Unofficial RunUO: This project is a fork of the popular Ultima Online emulation software RunUO (http://www.runuo.com). The main difference between this repository and the official RunUO is that this project is fully set up and ready to launch in Visual Studio 2010, with full .NET 4.0 support!

    VibroStudio: An application to edit, view and process vibro-signals.

    Windows Phone AR.Drone controller: The Windows Phone AR.Drone controller is a controller library and client application for Windows Phone 7.1. It is written in C#.

    XamlImageConverter: A tool to convert XAML to images. It implements an MSBuild task that can be imported into any project to convert XAML to images, and an HttpHandler that converts XAML to images on the fly.

    XEFFEX: XEFFEX is a play off the piano gadget. The shapes were manipulated and the sounds are set to different effects. The end goal was to have the pads play a sound and execute a program the user can pre-choose. The options menu allows quick access to your files.

    XiaMiRingtone: A ringtone app for Windows Phone.


  • Transaction issue in java with hibernate - latest entries not pulled from database

    - by Gearóid
    Hi, I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5. My application runs a monthly process where it creates billing entries for every user in the database, based on their monthly activity. These billing entries are then used to create a MonthlyBill object. The process is:

    1. Get users who have activity in the past month
    2. Create the relevant billing entries for each user
    3. Get the set of billing entries that we've just created
    4. Create a MonthlyBill based on these entries

    Everything works fine until step 3 above. The billing entries are correctly created (I can see them in the database if I add a breakpoint after the billing entry creation method), but they are not pulled out of the database. As a result, an incorrect MonthlyBill is generated. If I run the code again (without clearing out the database), new billing entries are created, and step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing. My code looks like the following:

        for (User user : usersWithActivities) {
            createBillingEntriesForUser(user.getId());
            userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId());
            createXMLBillForUser(user.getId(), userBillingEntries);
        }

    The methods called look like the following:

        @Transactional
        public void createBillingEntriesForUser(Long id) {
            UserManager userManager = ManagerFactory.getUserManager();
            User user = userManager.getUser(id);
            List<AccountEvent> events = getLastMonthsAccountEventsForUser(id);
            BillingEntry entry = new BillingEntry();
            if (null != events) {
                for (AccountEvent event : events) {
                    if (event.getEventType().equals(EventType.ENABLE)) {
                        Calendar cal = Calendar.getInstance();
                        Date eventDate = event.getTimestamp();
                        cal.setTime(eventDate);
                        double startDate = cal.get(Calendar.DATE);
                        double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
                        double numberOfDaysInUse = numOfDaysInMonth - startDate;
                        double fractionToCharge = numberOfDaysInUse / numOfDaysInMonth;
                        BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST);
                        amount.scale();
                        entry.setAmount(amount);
                        entry.setUser(user);
                        entry.setTimestamp(eventDate);
                        userManager.saveOrUpdate(entry);
                    }
                }
            }
        }

        @Transactional
        public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) {
            if (log.isDebugEnabled())
                log.debug("Getting all the billing entries for last month for user with ID " + id);
            //String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user";
            String queryString = "select be from BillingEntry as be join be.user as user where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth";
            // This parameter will be the start of the last month, i.e. the start of the billing cycle
            SearchParameter firstOfLastMonth = new SearchParameter();
            firstOfLastMonth.setTemporalType(TemporalType.DATE);
            // This parameter holds the start of the CURRENT month, i.e. the end of the billing cycle
            SearchParameter firstOfCurrentMonth = new SearchParameter();
            firstOfCurrentMonth.setTemporalType(TemporalType.DATE);
            Query query = super.entityManager.createQuery(queryString);
            query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
            query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
            query.setParameter("id", id);
            List<BillingEntry> entries = query.getResultList();
            return entries;
        }

        public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) {
            BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager();
            UserManager userManager = ManagerFactory.getUserManager();
            MonthlyBill mb = new MonthlyBill();
            User user = userManager.getUser(id);
            mb.setUser(user);
            mb.setTimestamp(new Date());
            Set<BillingEntry> entries = new HashSet<BillingEntry>();
            entries.addAll(billingEntries);
            String xml = createXmlForMonthlyBill(user, entries);
            mb.setXmlBill(xml);
            mb.setBillingEntries(entries);
            MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb);
            return bill;
        }

    Help with this issue would be greatly appreciated, as it's been racking my brain for weeks now! Thanks in advance, Gearoid.
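
    A hedged observation rather than a confirmed diagnosis: if the loop and these @Transactional methods live on the same Spring bean, the calls go through "this" instead of the transactional proxy, so no transaction (and no guaranteed flush or commit) wraps the writes before the follow-up query runs, which would produce exactly this one-run lag. One workaround is to run each user's whole sequence inside a single transaction on a separately injected bean; BillingRunner below is a hypothetical name:

        import java.util.Collection;
        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        @Service
        public class BillingRunner {

            // One transaction per user: because billUser is the proxied entry
            // point, the writes and the query below share a transaction and
            // session, so the query sees the newly created entries.
            @Transactional
            public void billUser(Long userId) {
                createBillingEntriesForUser(userId);
                Collection<BillingEntry> entries = getLastMonthsBillingEntriesForUser(userId);
                createXMLBillForUser(userId, entries);
            }

            // createBillingEntriesForUser, getLastMonthsBillingEntriesForUser and
            // createXMLBillForUser as defined in the question, moved onto this bean;
            // since billUser is transactional, they all join its transaction.
        }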


  • Drop down menus and pathetic art of styling

    - by fmz
    Sorry folks, I must be brain-dead or something, but I can't get the styling on these drop-down menus to cooperate. I have the color and the font right, but I have unwanted spaces between the list elements, and the a and a:hover sizes should be the same. I would appreciate some help getting this to work correctly. Here is the page. Here is the HTML:

        <ul class="dropdown">
          <li><a href="#" id="home">Home</a></li>
          <li><a href="#" id="about">About Us</a>
            <ul class="sub-menu">
              <li><a href="#">Our History</a></li>
              <li><a href="#">Our Process</a></li>
              <li><a href="#">Portfolio</a></li>
              <li><a href="#">Financing</a></li>
              <li><a href="#">Testimonials</a></li>
              <li><a href="#">Subcontractors</a></li>
            </ul>
          </li>
          <li><a href="#" id="personal">Personal Banking</a></li>
          <li><a href="#" id="commercial">Commercial Banking</a></li>
          <li><a href="#" id="service">Customer Service</a>
            <ul class="sub-menu">
              <li><a href="#">Our History</a></li>
              <li><a href="#">Our Process</a></li>
              <li><a href="#">Portfolio</a></li>
              <li><a href="#">Financing</a></li>
              <li><a href="#">Testimonials</a></li>
              <li><a href="#">Subcontractors</a></li>
            </ul>
          </li>
          <li><a href="#" id="investors">Investor Relations</a></li>
          <li><a href="#" id="contact">Contact Us</a></li>
        </ul>

    Here is the CSS:

        ul.dropdown {
          position: relative;
          background: #4e8997;
          height: 40px;
          padding-left: 5px;
        }
        ul.dropdown li {
          float: left;
          zoom: 1;
        }
        ul.dropdown li a ul.dropdown li a:visited {
          display: block;
          margin-top: 5px;
          padding: .5em .6em;
          line-height: 16px;
          color: #fff;
          font: bold 14px "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
          text-transform: uppercase;
          border: none;
        }
        ul.dropdown a:hover {
          background-color: #c29c5d;
          color: #fff;
        }
        ul.dropdown a:active {
          background-color: #c29c5d;
          color: #fff;
        }
        /* LEVEL TWO */
        ul.dropdown ul {
          padding-left: 0;
          width: 200px;
          display: none;
          top: 36px;
          margin-left: 0;
          position: absolute;
        }
        ul.dropdown ul li {
          font: 10px "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
          border-bottom: 1px solid #ccc;
          display: block;
          margin: 0;
          padding: 0;
          float: none;
          color: #fff;
          background-color: #c29c5d;
        }
        ul.dropdown ul li a:link {
          display: block;
          font-size: 10px;
          width: 188px;
          height: 16px;
        }
        ul.dropdown ul li a:hover {
          background-color: #a2834d;
          color: #fff;
        }

    Thanks!
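
    One likely culprit, offered as an observation rather than something from the original thread: the third rule above is missing a comma between its two selectors, so "ul.dropdown li a ul.dropdown li a:visited" looks for an anchor nested inside another anchor and never matches, which is why the top-level links never receive the intended padding, margin and line-height. A corrected version:

        /* the comma makes this a list of two selectors */
        ul.dropdown li a,
        ul.dropdown li a:visited {
          display: block;
          margin-top: 5px;
          padding: .5em .6em;
          line-height: 16px;
          color: #fff;
          font: bold 14px "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
          text-transform: uppercase;
          border: none;
        }

    With this rule actually applying, any remaining gaps between items usually trace back to the margin-top, and giving a and a:hover identical padding and line-height keeps their sizes the same.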


  • Linked List Implementation Help - Visual C++

    - by Greenhouse Gases
    Hi there. I'm trying to implement a linked list which stores the city name (though you will see this commented out, as I need to resolve the issue of not being able to use string and needing to use a primitive data type instead during the declaration), longitude, latitude and, of course, a pointer to the next node in the chain. I am new to the Visual C++ environment, and my brain is somewhat scrambled after coding for several straight hours today, so I wondered if anyone could help resolve the 2 errors I am getting (the #include lines are shown in their normal form below; I originally had to change them to stop the browser interpreting them as HTML!):

        1>U08221.obj : error LNK2028: unresolved token (0A000298) "public: __thiscall Locations::Locations(void)" (??0Locations@@$$FQAE@XZ) referenced in function "int __clrcall main(cli::array<System::String ^> ^)" (?main@@$$HYMHP$01AP$AAVString@System@@@Z)
        1>U08221.obj : error LNK2019: unresolved external symbol "public: __thiscall Locations::Locations(void)" (??0Locations@@$$FQAE@XZ) referenced in function "int __clrcall main(cli::array<System::String ^> ^)" (?main@@$$HYMHP$01AP$AAVString@System@@@Z)

    The code for my header file is here:

        #include <string>

        struct locationNode {
            //char[10] nodeCityName;
            double nodeLati;
            double nodeLongi;
            locationNode* Next;
        };

        class Locations {
        private:
            int size;
        public:
            Locations(); // constructor for the class
            locationNode* Head;
            int Add(locationNode* Item);
        };

    And here is the code for the file containing the main method:

        // U08221.cpp : main project file.
        #include "stdafx.h"
        #include "Locations.h"
        #include <iostream>
        #include <string>

        using namespace std;

        int n = 0;
        int x;
        string cityNameInput;
        bool acceptedInput = false;

        int Locations::Add(locationNode *NewItem) {
            locationNode *Sample = new locationNode;
            Sample = NewItem;
            Sample->Next = Head;
            Head = Sample;
            return size++;
        }

        void CorrectCase(string name) // Correct upper and lower case letters of input
        {
            x = name.size();
            int firstLetVal = name[0], letVal;
            n = 1; // variable for name index from second letter onwards
            if ((name[0] > 90) && (name[0] < 123)) // First letter is lower case
            {
                firstLetVal = firstLetVal - 32; // Capitalise first letter
                name[0] = firstLetVal;
            }
            while (n <= x - 1) {
                if ((name[n] >= 65) && (name[n] <= 90)) {
                    letVal = name[n] + 32;
                    name[n] = letVal;
                }
                n++;
            }
            cityNameInput = name;
        }

        void nameValidation(string name) {
            n = 0; // start from first letter
            x = name.size();
            while (!acceptedInput) {
                if ((name[n] >= 65) && (name[n] <= 122)) // is in the range of letters
                {
                    while (n <= x - 1) {
                        while ((name[n] >= 91) && (name[n] <= 97)) // ERROR!!
                        {
                            cout << "Please enter a valid city name" << endl;
                            cin >> name;
                        }
                        n++;
                    }
                }
                else {
                    cout << "Please enter a valid city name" << endl;
                    cin >> name;
                }
                if (n <= x - 1) {
                    acceptedInput = true;
                }
            }
            cityNameInput = name;
        }

        int main(array<System::String ^> ^args) {
            cout << "Enter a city name" << endl;
            cin >> cityNameInput;
            nameValidation(cityNameInput); // check it is made up of valid characters
            CorrectCase(cityNameInput); // corrects name to standard format of capitalised first letter and lower case subsequent letters
            cout << cityNameInput;
            cin >> cityNameInput;
            Locations::Locations();
            Locations *Parts = new Locations();
            locationNode *Part;
            Part = new locationNode;
            //Part->nodeCityName = "London";
            Part->nodeLati = 87;
            Part->nodeLongi = 80;
            Parts->Add(Part);
        }

    I am familiar with the concepts but somewhat inexperienced with OOP, so I am making some silly errors that you can never find when you've stared at something too long. Any help you can offer will be appreciated! Thanks.
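
    Both linker errors point at the same root cause: Locations::Locations() is declared in the header but never defined, so the linker cannot resolve it when main references the class. A minimal fix sketch (my suggestion, not from the original thread) is to give the constructor a body in the .cpp file and drop the stray explicit "Locations::Locations();" statement in main, since "new Locations()" already invokes the constructor:

        // In the .cpp file: defining the declared constructor resolves
        // the unresolved ??0Locations@@$$FQAE@XZ symbol.
        Locations::Locations()
        {
            size = 0; // an empty list has no nodes...
            Head = 0; // ...and no head node yet (nullptr on newer compilers)
        }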


  • Multiple windows services in a single project = mystery

    - by Remoh
I'm having a bizarre issue that I haven't seen before, and I'm thinking it MUST be something simple that I'm not seeing in my code. I have a project with two Windows services defined. One I've called DataSyncService, the other SubscriptionService. Both are added to the same project installer, and both use a timer control from System.Timers. If I start both services together, they seem to work fine: the timers elapse at the appropriate times and everything looks okay. However, if I start either service individually, leaving the other stopped, everything goes haywire. The timer elapses constantly, and on the wrong service. In other words, if I start the DataSyncService, the SubscriptionService timer elapses over and over... which is obviously strange. The setup is similar to what I've done in the past, so I'm really stumped. I even tried deleting both services and starting over, but it doesn't seem to make a difference. At this point, I'm thinking I've made a simple error in the way I'm defining the services and my brain just won't let me see it. It must be creating some sort of threading issue that causes one service to race when the other is stopped. Here's the code...

From Program.cs:

    static void Main()
    {
        ServiceBase[] ServicesToRun;
        ServicesToRun = new ServiceBase[]
        {
            new DataSyncService(),
            new SubscriptionService()
        };
        ServiceBase.Run(ServicesToRun);
    }

From ProjectInstaller.designer.cs:

    private void InitializeComponent()
    {
        this.serviceProcessInstaller1 = new System.ServiceProcess.ServiceProcessInstaller();
        this.dataSyncInstaller = new System.ServiceProcess.ServiceInstaller();
        this.subscriptionInstaller = new System.ServiceProcess.ServiceInstaller();
        //
        // serviceProcessInstaller1
        //
        this.serviceProcessInstaller1.Account = System.ServiceProcess.ServiceAccount.LocalSystem;
        this.serviceProcessInstaller1.Password = null;
        this.serviceProcessInstaller1.Username = null;
        //
        // dataSyncInstaller
        //
        this.dataSyncInstaller.DisplayName = "Data Sync Service";
        this.dataSyncInstaller.ServiceName = "DataSyncService";
        this.dataSyncInstaller.StartType = System.ServiceProcess.ServiceStartMode.Automatic;
        //
        // subscriptionInstaller
        //
        this.subscriptionInstaller.DisplayName = "Subscription Service";
        this.subscriptionInstaller.ServiceName = "SubscriptionService";
        this.subscriptionInstaller.StartType = System.ServiceProcess.ServiceStartMode.Automatic;
        //
        // ProjectInstaller
        //
        this.Installers.AddRange(new System.Configuration.Install.Installer[] {
            this.serviceProcessInstaller1,
            this.dataSyncInstaller,
            this.subscriptionInstaller});
    }

    private System.ServiceProcess.ServiceProcessInstaller serviceProcessInstaller1;
    private System.ServiceProcess.ServiceInstaller dataSyncInstaller;
    private System.ServiceProcess.ServiceInstaller subscriptionInstaller;

From DataSyncService.cs:

    public static readonly int _defaultInterval = 43200000;
    //log4net.ILog log;

    public DataSyncService()
    {
        InitializeComponent();
        //log = LogFactory.Instance.GetLogger(this);
    }

    protected override void OnStart(string[] args)
    {
        timer1.Interval = _defaultInterval; //GetInterval();
        timer1.Enabled = true;
        EventLog.WriteEntry("MyProj", "Data Sync Service Started", EventLogEntryType.Information);
        //log.Info("Data Sync Service Started");
    }

    private void timer1_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        EventLog.WriteEntry("MyProj", "Data Sync Timer Elapsed.", EventLogEntryType.Information);
    }

    private void InitializeComponent()
    {
        this.timer1 = new System.Timers.Timer();
        ((System.ComponentModel.ISupportInitialize)(this.timer1)).BeginInit();
        //
        // timer1
        //
        this.timer1.Enabled = true;
        this.timer1.Elapsed += new System.Timers.ElapsedEventHandler(this.timer1_Elapsed);
        //
        // DataSyncService
        //
        this.ServiceName = "DataSyncService";
        ((System.ComponentModel.ISupportInitialize)(this.timer1)).EndInit();
    }

From SubscriptionService.cs:

    public static readonly int _defaultInterval = 300000;
    //log4net.ILog log;

    public SubscriptionService()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        timer1.Interval = _defaultInterval; //GetInterval();
        timer1.Enabled = true;
        EventLog.WriteEntry("MyProj", "Subscription Service Started", EventLogEntryType.Information);
        //log.Info("Subscription Service Started");
    }

    private void timer1_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        EventLog.WriteEntry("MyProj", "Subscription Service Time Elapsed", EventLogEntryType.Information);
    }

    private void InitializeComponent() //in designer
    {
        this.timer1 = new System.Timers.Timer();
        ((System.ComponentModel.ISupportInitialize)(this.timer1)).BeginInit();
        //
        // timer1
        //
        this.timer1.Enabled = true;
        this.timer1.Elapsed += new System.Timers.ElapsedEventHandler(this.timer1_Elapsed);
        //
        // SubscriptionService
        //
        this.ServiceName = "SubscriptionService";
        ((System.ComponentModel.ISupportInitialize)(this.timer1)).EndInit();
    }

Again, the problem is that the timer1_Elapsed handler runs constantly when only one of the services is started. And it's the handler on the OPPOSITE service. Anybody see anything?
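One plausible culprit, going only by the code posted above: both designer files contain "this.timer1.Enabled = true;", and Main() constructs both service objects even when only one of them is started. A System.Timers.Timer that is enabled at construction starts firing with its default interval of 100 ms, and only the service that actually receives OnStart() ever replaces that interval with _defaultInterval. The stopped service therefore ticks every 100 ms, and since both handlers write to the same "MyProj" event source, the flood of entries looks like it comes from the opposite service. A minimal sketch of the change for DataSyncService (SubscriptionService would mirror it), assuming this diagnosis is right:

    private void InitializeComponent()
    {
        this.timer1 = new System.Timers.Timer();
        ((System.ComponentModel.ISupportInitialize)(this.timer1)).BeginInit();
        //
        // timer1 - note: no "this.timer1.Enabled = true;" here. The constructor
        // runs for BOTH services whenever the shared process launches, so the
        // timer must not start until the SCM actually starts this service.
        //
        this.timer1.Elapsed += new System.Timers.ElapsedEventHandler(this.timer1_Elapsed);
        //
        // DataSyncService
        //
        this.ServiceName = "DataSyncService";
        ((System.ComponentModel.ISupportInitialize)(this.timer1)).EndInit();
    }

    protected override void OnStart(string[] args)
    {
        timer1.Interval = _defaultInterval;  // set the interval before the first tick
        timer1.Enabled = true;               // start ticking only when started
        EventLog.WriteEntry("MyProj", "Data Sync Service Started", EventLogEntryType.Information);
    }

    protected override void OnStop()
    {
        timer1.Enabled = false;              // stop ticking when the service stops
    }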

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth on Scrum, but you can find out more about it by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

Acknowledgements

Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price in order to recover costs. The work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches.

As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder:

You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.

As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100% done, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished just enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity. Then again, I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost-benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others, they get out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code.

We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

You can read SSW's Rules to Better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences.
Merge Mania—spending too much time merging software assets instead of developing them.
Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
Never-Ending Merge—continuous merging activity because there is always more to merge.
Wrong-Way Merge—merging a software asset version with an earlier version.
Branch Mania—creating many branches for no apparent reason.
Cascading Branches—branching but never merging back to the main line.
Mysterious Branches—branching for no apparent reason.
Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace.
Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
Development Freeze—stopping all development activities while branching, merging, and building new base lines.
Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing.

- Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia

In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia

Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Main line - this is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well.

Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:

- you and your team are working on a new set of features and the customer wants a change to his current version?
- you are working on two features and the customer decides to abandon one of them?
- you have two teams working on different feature sets and their changes start interfering with each other?
- I just use labels instead of branches?

That's a lot of “what ifs”, but there is a simple way of preventing this. Branching…

In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Creating a single feature branch means you can isolate the development work on that branch.

It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions changed so that contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development (a command-line sketch of this forward/reverse sequence appears at the end of this post):

1. The Team completes their work according to their definition of done.
2. Merge from “Main” into “Sprint1” (Forward Integration).
3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
4. Merge from “Sprint1” into “Main” to commit your changes (Reverse Integration).
5. Check in.
6. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required, as its useful life has concluded.
7. Check in.
8. Done.

But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.

Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what we at SSW call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: If you find a deviation from the expected result, you fix it on the Release branch.

If during your final testing or your “Test Please” you find there are issues or bugs, then you should fix them on the Release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the Release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

Note: Don't get forced by the business into adding features to a Release branch; instead, that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.

Figure: When the Team decides it is happy with the product, you can create an RTM branch.

Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an audit-trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable, and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check.
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

Figure: If you have made any changes in the Release, you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main.

The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment. Note: In order to really achieve protection for both you and your client, you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

Figure: After the Sprint Planning meeting the process begins again.

Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

How do we handle a bug (or bugs) in production that can't wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FIs from Main to Sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resources need to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
3. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as that work will be lost.

The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as they are uncomplicated but severe bugs.

Figure: Real-world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: After you have fixed the bug, you need to ship again.

You then need to again create an RTM branch to hold the version of the code you released in escrow.

Figure: Main is now out of sync with your Release.

We now need to get these new changes back up into the Main branch. Do a reverse and then a forward merge again to get the new code into Main. But what about the branch - are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

Figure: Getting the changes in Main into Sprint 2 is very important.

The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big Bang Merge” at all costs.

Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates.

Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

Figure: The logical conclusion.

This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts, coupled with real-world experience, really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
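To make the forward-then-reverse rhythm concrete, here is a hypothetical tf.exe sketch of closing out a Sprint branch from a mapped workspace (the $/Product/... server paths are invented for illustration; your team project structure will differ):

    REM Forward Integration: bring Main into the Sprint branch and stabilize
    tf merge $/Product/Main $/Product/Sprint1 /recursive
    REM ...resolve any conflicts, build, run your tests...
    tf checkin /comment:"FI: Main -> Sprint1"

    REM Reverse Integration: commit the stabilized work back to Main
    tf merge $/Product/Sprint1 $/Product/Main /recursive
    tf checkin /comment:"RI: Sprint1 -> Main"

    REM The Sprint branch has served its purpose (this pends a delete)
    tf delete $/Product/Sprint1
    tf checkin /comment:"Delete Sprint1 branch"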

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green; it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now. In the end I found out that the 'right reason' changes, in my understanding, depending on what development phase I'm in.

To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: It's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself have also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious…

So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start by writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

    namespace Calculator
    {
        /// <summary>
        /// Defines a very simple calculator component for demo purposes.
        /// </summary>
        public interface ICalculator
        {
            /// <summary>
            /// Gets the result of the last successful operation.
            /// </summary>
            /// <value>The last result.</value>
            /// <remarks>
            /// Will be <see langword="null" /> before the first successful operation.
            /// </remarks>
            double? LastResult { get; }

        } // interface ICalculator

    } // namespace Calculator

So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

1. Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
2. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
3. I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The 'Red' (pt. 1)

First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

    namespace Calculator.Test
    {
        [TestFixture]
        public class CalculatorTest
        {
            private readonly ServiceContainer container = new ServiceContainer();

            [Test]
            public void CalculatorLastResultIsInitiallyNull()
            {
                ICalculator calculator = container.GetService<ICalculator>();

                Assert.IsNull(calculator.LastResult);
            }

        } // class CalculatorTest

    } // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: First, create an ICalculator-derived type. Next, implement the interface members. And finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

    using System;
    using LinFu.IoC.Configuration;

    namespace Calculator
    {
        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult
            {
                get
                {
                    throw new NotImplementedException();
                }
            }
        }
    }

Back to the test fixture, we have to put our IoC container to work:

    [TestFixture]
    public class CalculatorTest
    {
        #region Fields

        private readonly ServiceContainer container = new ServiceContainer();

        #endregion // Fields

        #region Setup/TearDown

        [FixtureSetUp]
        public void FixtureSetUp()
        {
            container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
        }

        ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The 'Red' (pt. 2)

Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: Of course, the test is still worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part…

The 'Green'

Making the test green is quite trivial. Just make LastResult an automatic property:

    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult { get; private set; }
    }

One more round…

Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:

Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: Two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

    [Test]
    [Row(-0.5, 2)]
    public void AddThrowsOnNegativeOperands(double operand1, double operand2)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
    }

As you can see, I'm using a data-driven unit test method here, mainly for these two reasons:

1. Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one.
2. From the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: One can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development again, this is what the test result tells us at the moment: So we're red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

    // in CalculatorTest:

    [Test]
    [Row(-0.5, 2)]
    [Row(295, -123)]
    public void AddThrowsOnNegativeOperands(double operand1, double operand2)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
    }

    // in Calculator:

    public double Add(double operand1, double operand2)
    {
        if (operand1 < 0.0)
        {
            throw new ArgumentException("Value must not be negative.", "operand1");
        }
        if (operand2 < 0.0)
        {
            throw new ArgumentException("Value must not be negative.", "operand2");
        }
        throw new NotImplementedException();
    }

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

    [Test]
    [Row(1, 1, 2)]
    public void TestAdd(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        double result = calculator.Add(operand1, operand2);

        Assert.AreEqual(expectedResult, result);
    }

Again, I'm regularly using row-based test methods for these kinds of unit tests. The above pattern proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing:

1. In the course of refining a method, it's very likely that you come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test.
2. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 2)]
    [Row(0, 999999999, 999999999)]
    [Row(0, 0, 0)]
    [Row(0, double.MaxValue, double.MaxValue)]
    [Row(4, double.MaxValue - 2.5, double.MaxValue)]
    public void TestAdd(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        double result = calculator.Add(operand1, operand2);

        Assert.AreEqual(expectedResult, result);
    }

And this will produce the following HTML report (with Gallio): Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

    // CalculatorTest.cs:

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 0)]
    [Row(0, 999999999, -999999999)]
    [Row(0, 0, 0)]
    [Row(0, double.MaxValue, -double.MaxValue)]
    [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
    public void TestSubtract(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        double result = calculator.Subtract(operand1, operand2);

        Assert.AreEqual(expectedResult, result);
    }

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 0)]
    [Row(0, 999999999, -999999999)]
    [Row(0, 0, 0)]
    [Row(0, double.MaxValue, -double.MaxValue)]
    [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
    public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        calculator.Subtract(operand1, operand2);

        Assert.AreEqual(expectedResult, calculator.LastResult);
    }

    ...

    // ICalculator.cs:

    /// <summary>
    /// Subtracts the specified operands.
    /// </summary>
    /// <param name="operand1">The operand1.</param>
    /// <param name="operand2">The operand2.</param>
    /// <returns>The result of the subtraction.</returns>
    /// <exception cref="ArgumentException">
    /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
    /// -- or --<br/>
    /// Argument <paramref name="operand2"/> is &lt; 0.
    /// </exception>
    double Subtract(double operand1, double operand2);

    ...

    // Calculator.cs:

    public double Subtract(double operand1, double operand2)
    {
        if (operand1 < 0.0)
        {
            throw new ArgumentException("Value must not be negative.", "operand1");
        }

        if (operand2 < 0.0)
        {
            throw new ArgumentException("Value must not be negative.", "operand2");
        }

        return (this.LastResult = operand1 - operand2).Value;
    }

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

    using System;
    using LinFu.IoC.Configuration;

    namespace Calculator
    {
        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            #region ICalculator

            public double? LastResult { get; private set; }

            public double Add(double operand1, double operand2)
            {
                ThrowIfOneOperandIsInvalid(operand1, operand2);

                return (this.LastResult = operand1 + operand2).Value;
            }

            public double Subtract(double operand1, double operand2)
            {
                ThrowIfOneOperandIsInvalid(operand1, operand2);

                return (this.LastResult = operand1 - operand2).Value;
            }

            #endregion // ICalculator

            #region Implementation (Helper)

            private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
            {
                if (operand1 < 0.0)
                {
                    throw new ArgumentException("Value must not be negative.", "operand1");
                }

                if (operand2 < 0.0)
                {
                    throw new ArgumentException("Value must not be negative.", "operand2");
                }
            }

            #endregion // Implementation (Helper)

        } // class Calculator

    } // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn't state it more precisely. From my personal perspective, I'd add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

1. Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first…)
2. It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go in the right direction…) The test code serves 'only' to make the production code work. But it's the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. The entire software development profession is, historically seen, very young; we are only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for 'saving' a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here will take me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the 'product' manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development…

But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the 'right reason' is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
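To close with a concrete picture of that last sentence, here is a small sketch, reusing the MbUnit/Gallio idioms from the Calculator example above (the test name is mine): once Add() is fully implemented and running as part of the CI builds, red can only ever mean a failed assertion.

    [Test]
    public void AddReturnsTheSumOfItsOperands()
    {
        ICalculator calculator = container.GetService<ICalculator>();

        // At this stage there is no NotImplementedException left to hide behind:
        // if this test goes red, it fails by assertion - the only 'right reason'
        // once the red-green-refactor cycle is finished.
        Assert.AreEqual(3.0, calculator.Add(1.0, 2.0));
    }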

    Read the article

  • PHP inserting Apostrophes where it shouldn't

    - by Jack W-H
Hi folks. Not too sure what's going on here, as this doesn't seem like standard practice to me. But basically I have a basic database thingy going on that lets users submit code snippets. They can provide up to 5 tags for their submission. Now I'm still learning, so please forgive me if this is obvious! Here's the PHP script that makes it all work (note there may be some CodeIgniter-specific functions in there):

    function submitform()
    {
        $this->load->helper(array('form', 'url'));
        $this->load->library('form_validation');
        $this->load->database();
        $this->form_validation->set_error_delimiters('<p style="color:#FF0000;">', '</p>');
        $this->form_validation->set_rules('title', 'Title', 'trim|required|min_length[5]|max_length[255]|xss_clean');
        $this->form_validation->set_rules('summary', 'Summary', 'trim|required|min_length[5]|max_length[255]|xss_clean');
        $this->form_validation->set_rules('bbcode', 'Code', 'required|min_length[5]'); // No XSS clean (or <script> tags etc. are gone)
        $this->form_validation->set_rules('tags', 'Tags', 'trim|xss_clean|required|max_length[254]');

        if ($this->form_validation->run() == FALSE)
        {
            // Do some stuff if it fails
        }
        else
        {
            // User's input values
            $title = $this->db->escape(set_value('title'));
            $summary = $this->db->escape(set_value('summary'));
            $code = $this->db->escape(set_value('bbcode'));
            $tags = $this->db->escape(set_value('tags'));

            // Stop things like <script> tags working
            $codesanitised = htmlspecialchars($code);

            // Other values to be entered
            $author = $this->tank_auth->get_user_id();
            $bi1 = "";
            $bi2 = "";

            // This long messy bit basically sees which browsers the code is compatible with.
            if (isset($_POST['IE6'])) {$bi1 .= "IE6, "; $bi2 .= "1, ";} else {$bi1 .= "IE6, "; $bi2 .= "NULL, ";}
            if (isset($_POST['IE7'])) {$bi1 .= "IE7, "; $bi2 .= "1, ";} else {$bi1 .= "IE7, "; $bi2 .= "NULL, ";}
            if (isset($_POST['IE8'])) {$bi1 .= "IE8, "; $bi2 .= "1, ";} else {$bi1 .= "IE8, "; $bi2 .= "NULL, ";}
            if (isset($_POST['FF2'])) {$bi1 .= "FF2, "; $bi2 .= "1, ";} else {$bi1 .= "FF2, "; $bi2 .= "NULL, ";}
            if (isset($_POST['FF3'])) {$bi1 .= "FF3, "; $bi2 .= "1, ";} else {$bi1 .= "FF3, "; $bi2 .= "NULL, ";}
            if (isset($_POST['SA3'])) {$bi1 .= "SA3, "; $bi2 .= "1, ";} else {$bi1 .= "SA3, "; $bi2 .= "NULL, ";}
            if (isset($_POST['SA4'])) {$bi1 .= "SA4, "; $bi2 .= "1, ";} else {$bi1 .= "SA4, "; $bi2 .= "NULL, ";}
            if (isset($_POST['CHR'])) {$bi1 .= "CHR, "; $bi2 .= "1, ";} else {$bi1 .= "CHR, "; $bi2 .= "NULL, ";}
            if (isset($_POST['OPE'])) {$bi1 .= "OPE, "; $bi2 .= "1, ";} else {$bi1 .= "OPE, "; $bi2 .= "NULL, ";}
            if (isset($_POST['OTH'])) {$bi1 .= "OTH, "; $bi2 .= "1, ";} else {$bi1 .= "OTH, "; $bi2 .= "NULL, ";}

            // $b1 is $bi1 without the last two characters (", ") which would cause a query error
            $b1 = substr($bi1, 0, -2);
            $b2 = substr($bi2, 0, -2);

            // :::::::::: THIS IS WHERE THE IMPORTANT STUFF IS, STACKOVERFLOW READERS ::::::::::
            // Split up all the words in $tags into individual variables - each tag is separated by a space
            $pieces = explode(" ", $tags);
            // Usage:
            // echo $pieces[0]; // piece1 etc
            $ti1 = "";
            $ti2 = "";

            // Now we'll do something similar to what we did with the compatible browsers to generate a bit of a query string
            if ($pieces[0] != NULL) {$ti1 .= "tag1, "; $ti2 .= "$pieces[0], ";} else {$ti1 .= "tag1, "; $ti2 .= "NULL, ";}
            if ($pieces[1] != NULL) {$ti1 .= "tag2, "; $ti2 .= "$pieces[1], ";} else {$ti1 .= "tag2, "; $ti2 .= "NULL, ";}
            if ($pieces[2] != NULL) {$ti1 .= "tag3, "; $ti2 .= "$pieces[2], ";} else {$ti1 .= "tag3, "; $ti2 .= "NULL, ";}
            if ($pieces[3] != NULL) {$ti1 .= "tag4, "; $ti2 .= "$pieces[3], ";} else {$ti1 .= "tag4, "; $ti2 .= "NULL, ";}
            if ($pieces[4] != NULL) {$ti1 .= "tag5, "; $ti2 .= "$pieces[4], ";} else {$ti1 .= "tag5, "; $ti2 .= "NULL, ";}

            $t1 = substr($ti1, 0, -2);
            $t2 = substr($ti2, 0, -2);

            $sql = "INSERT INTO code (id, title, author, summary, code, date, $t1, $b1)
                    VALUES ('', $title, $author, $summary, $codesanitised, NOW(), $t2, $b2)";
            $this->db->query($sql);

            $this->load->view('subviews/template/headerview');
            $this->load->view('subviews/template/menuview');
            $this->load->view('subviews/template/sidebar');
            $this->load->view('thanksforsubmission');
            $this->load->view('subviews/template/footerview');
        }
    }

Sorry about that boring drivel of code there. I realise I probably have a few bad practices in there - please point them out if so. This is what the outputted query looks like (it results in an error and isn't queried at all):

    A Database Error Occurred
    Error Number: 1136
    Column count doesn't match value count at row 1
    INSERT INTO code (id, title, author, summary, code, date, tag1, tag2, tag3, tag4, tag5, IE6, IE7, IE8, FF2, FF3, SA3, SA4, CHR, OPE, OTH)
    VALUES ('', 'test2', 1, 'test2', 'test2 ', NOW(), 'test2, test2, test2, test2, test2', NULL, NULL, 1, 1, 1, 1, 1, 1, 1, NULL)

You'll see the bit after NOW() - 'test2, test2, test2, test2, test2' - I never asked it to put all that in apostrophes. Did I? What I could do is put each of those lines like this:

    if ($pieces[0] != NULL) {$ti1 .= "tag1, "; $ti2 .= "'$pieces[0]', ";} else {$ti1 .= "tag1, "; $ti2 .= "NULL, ";}

With single quotes around $pieces[0] etc. - but then my problem is that this kinda fails when the user only enters 4 tags, or 3, or whatever. Sorry if that's the worst-phrased question in history; I tried, but my brain has turned to mush. Thanks for your help! Jack

    Read the article

  • NetBeans Development 7 - Windows 7 64-bit … JNI native calls ... a how to guide

    - by CirrusFlyer
    I provide this for you to hopefully save you some time and pain. As part of my expereince in getting to know NB Development v7 on my Windows 64-bit workstation I found another frustrating adventure in trying to get the JNI (Java Native Interface) abilities up and working in my project. As such, I am including a brief summary of steps required (as all the documentation I found was completely incorrect for these versions of Windows and NetBeans on how to do JNI). It took a couple of days of experimentation and reviewing every webpage I could find that included these technologies as keyword searches. Yuk!! Not fun. To begin, as NetBeans Development is "all about modules" if you are reading this you probably have a need for one, or more, of your modules to perform JNI calls. Most of what is available on this site or the Internet in general (not to mention the help file in NB7) is either completely wrong for these versions, or so sparse as to be essentially unuseful to anyone other than a JNI expert. Here is what you are looking for ... the "cut to the chase" - "how to guide" to get a JNI call up and working on your NB7 / Windows 64-bit box. 1) From within your NetBeans Module (not the host appliation) declair your native method(s) and make sure you can compile the Java source without errors. Example: package org.mycompanyname.nativelogic; public class NativeInterfaceTest { static { try { if (System.getProperty( "os.arch" ).toLowerCase().equals( "amd64" ) ) System.loadLibrary( <64-bit_folder_name_on_file_system>/<file_name.dll> ); else System.loadLibrary( <32-bit_folder_name_on_file_system>/<file_name.dll> ); } catch (SecurityException se) {} catch (UnsatisfieldLinkError ule) {} catch (NullPointerException npe) {} } public NativeInterfaceTest() {} native String echoString(String s); } Take notice to the fact that we only load the Assembly once (as it's in a static block), because othersise you will throw exceptions if attempting to load it again. Also take note of our single (in this example) native method titled "echoString". This is the method that our C / C++ application is going to implement, then via the majic of JNI we'll call from our Java code. 2) If using a 64-bit version of Windows (which we are here) we need to open a 64-bit Visual Studio Command Prompt (versus the standard 32-bit version), and execute the "vcvarsall" BAT file, along with an "amd64" command line argument, to set the environment up for 64-bit tools. Example: <path_to_Microsoft_Visual_Studio_10.0>/VC/vcvarsall.bat amd64 Take note that you can use any version of the C / C++ compiler from Microsoft you wish. I happen to have Visual Studio 2005, 2008, and 2010 installed on my box so I chose to use "v10.0" but any that support 64-bit development will work fine. The other important aspect here is the "amd64" param. 3) In the Command Prompt change drives \ directories on your computer so that you are at the root of the fully qualified Class location on the file system that contains your native method declairation. Example: The fully qualified class name for my natively declair method is "org.mycompanyname.nativelogic.NativeInterfaceTest". As we successfully compiled our Java in Step 1 above, we should find it contained in our NetBeans Module something similar to the following: "/build/classes/org/mycompanyname/nativelogic/NativeInterfaceTest.class" We need to make sure our Command Prompt sets, as the current directly, "/build/classes" because of our next step. 
4) In this step we'll create our C / C++ header file that contains the JNI-required statements. Type the following in the Command Prompt:

    javah -jni org.mycompanyname.nativelogic.NativeInterfaceTest

and hit enter. If you receive an error stating this is an unrecognized command, it simply means Windows does not know the PATH to that command (javah lives in your JDK's "bin" folder). Either run the command from there, include the fully qualified path name when invoking it, or add that folder to your computer's PATH environment variable. This should produce a file called "org_mycompanyname_nativelogic_NativeInterfaceTest.h" ... a C header file. I'd make a copy of this in case you need a backup later.

5) Edit the NativeInterfaceTest.h header file and include an implementation for the echoString() method. Example:

    JNIEXPORT jstring JNICALL Java_org_mycompanyname_nativelogic_NativeInterfaceTest_echoString
      (JNIEnv *env, jobject jobj, jstring js)
    {
        /* We can't return a normal Java String from C; we ask the JNI
           environment to create one for us. */
        return((*env)->NewStringUTF(env, "My JNI is up and working after lots of research"));
    }

Notice how you can't simply return a normal Java String (because you're in C at the moment). You have to tell the passed-in JNIEnv pointer to create a Java String for you that will be returned. Check out Oracle's JNI documentation for other data types and how to create them for JNI purposes.

6) Close and save your changes to the header file. Now that you've added an implementation, change the file extension from ".h" to ".c", as it's now a C source code file that properly implements the JNI-required interface. Example: NativeInterfaceTest.c

7) We need to compile the newly created source code file and link it too. From within the Command Prompt type the following:

    cl /I"path_to_my_jdks_include_folder" /I"path_to_my_jdks_include_win32_folder" /D:AMD64=1 /LD NativeInterfaceTest.c /FeNativeInterfaceTest.dll /link /machine:x64

Example:

    cl /I"D:/Program Files/Java/jdk1.6.0_21/include" /I"D:/Program Files/java/jdk1.6.0_21/include/win32" /D:AMD64=1 /LD NativeInterfaceTest.c /FeNativeInterfaceTest.dll /link /machine:x64

Notice the quotes around the paths to the 'include' and 'include/win32' folders: they are required because I have spaces in my folder names ... 'Program Files'. You can include the quotes without problems even if you have no spaces, but they are mandatory if you do. This will generate several files, but it's the DLL we're interested in. This is what the System.loadLibrary() Java method is looking for.

8) Congratulations! You're at the last step. Simply take the DLL and paste it at the following location (you'll probably have to create the "lib" and "x64" folders):

    <path_of_NetBeansProjects_folder>/<project_name>/<module_name>/build/cluster/modules/lib/x64

Example:

    C:\Users\<user_name>\Documents\NetBeansProjects\<application_name>\<module_name>\build\cluster\modules\lib\x64\NativeInterfaceTest.dll

Java code ... notice how we don't include the ".dll" file extension in the loadLibrary() call?

    System.loadLibrary( "/x64/NativeInterfaceTest" );

Now, in your Java code you can create a NativeInterfaceTest object and call its echoString() method, and it will return the String value you typed into the NativeInterfaceTest.c source code file. Hopefully this will save you the brain damage I endured trying to figure all this out on my own. Good luck and happy coding!

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

1. Analyzing the requirements to find the Entry Points and their signatures
2. Designing the functionality to be executed when those Entry Points get triggered
3. Implementing the functionality according to the design, aka coding

I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought.

Here's my definition: to design functionality (or functional design for short) means thinking about ... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). I also consider calling external APIs logic; it's equally basic. Logic is what code needs to do in order to deliver some functionality or quality. Logic is what's doing that which needs to be done by software. Transformations are done either through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler: sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions: no performance gain, no higher scalability etc. Functions are not relevant to functionality. Strange, isn't it?

What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why functional design is so important to me. It's at the core of software development.

To sum this up: functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly; then you need to back out through refactoring. Functional design, on the other hand, is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

An example of functional design

Let's assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a, and the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

    C:\>deduplicate "a, b, a, c, d, b, e, c, a"
    a, b, c, d, e

... or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable.

What then happens is a three step process:

1. Transform the input data from the user into a request.
2. Call the request handler.
3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally:

1. Accept input.
2. Transform input into output.
3. Present output.

This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

Interestingly, the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

1. Accept string list from command line
2. De-duplicate
3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var output = Deduplicate(input);
        Present_deduplicated_string_list(output);
    }

Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

    private static string Accept_string_list(string[] args) {
        return args[0];
    }

    private static void Present_deduplicated_string_list(string[] output) {
        var line = string.Join(", ", output);
        Console.WriteLine(line);
    }

Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call.

And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

De-duplicate
1. Parse the input string into a true list of strings.
2. Register each string in a dictionary/map/set. That way duplicates get cast away.
3. Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now, after this refinement, the implementation of each step is easy again:

    private static string[] Parse_string_list(string input) {
        return input.Split(',')
                    .Select(s => s.Trim())
                    .ToArray();
    }

    private static Dictionary<string,object> Compile_unique_strings(string[] strings) {
        return strings.Aggregate(
            new Dictionary<string, object>(),
            (agg, s) => { agg[s] = null; return agg; });
    }

    private static string[] Serialize_unique_strings(Dictionary<string,object> dict) {
        return dict.Keys.ToArray();
    }

With these three additional functions Main() now looks like this:

    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var strings = Parse_string_list(input);
        var dict = Compile_unique_strings(strings);
        var output = Serialize_unique_strings(dict);
        Present_deduplicated_string_list(output);
    }

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

1. Accept string list from command line
2. Parse the input string into a true list of strings.
3. Register each string in a dictionary/map/set. That way duplicates get cast away.
4. Transform the data structure into a list of unique strings.
5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process, on the other hand, consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.:

- Get data from user
- Output result to user
- Transform data
- Parse data
- Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next, etc.?

1. Get data from user
2. Parse data
3. Transform data
4. Map result for output
5. Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. And how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion?

My recommendation: for any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and it leads to cleaner code right away. The ordered list above translates into functions just like the de-duplication example did; a sketch follows below.
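A minimal sketch of that translation - the function names and trivial bodies here are mine, standing in for the five ordered steps; they are not from the original article:

    using System;

    static class Pipeline {
        static void Main(string[] args) {
            // One function per processing step, in the order designed above.
            var data = Get_data_from_user(args);
            var parsed = Parse_data(data);
            var result = Transform_data(parsed);
            var output = Map_result_for_output(result);
            Output_result_to_user(output);
        }

        // Trivial stand-ins so the sketch compiles; real steps would hold the logic.
        static string Get_data_from_user(string[] args) { return args[0]; }
        static string[] Parse_data(string data) { return data.Split(','); }
        static string[] Transform_data(string[] items) { return items; }
        static string Map_result_for_output(string[] items) { return string.Join(", ", items); }
        static void Output_result_to_user(string line) { Console.WriteLine(line); }
    }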
For more information on this approach - I call it "Informed TDD" - read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text has just one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it as a labeled shape with an arrow flowing in and an arrow flowing out: something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow.

Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed, you can worry about all the detail you want. Functional design does not eliminate the nitty gritty; it just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks, processes can be depicted very easily. For each processing step, draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun: functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. Finally, think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units, in the direction of the data flow, and enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content; use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer", on the other hand, signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Here's the initial process for the example problem, rendered in plain text below.
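The original post shows this as a drawn diagram. A plain-text rendering of the flow - with unit names and data labels assumed from the de-duplication code shown earlier - might look like this:

    (args) -> [Accept string list] -> (list) -> [Parse string list] -> (strings)
           -> [Compile unique strings] -> (dict) -> [Serialize unique strings]
           -> (strings) -> [Present de-duplicated string list]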
Flows wired up in this manner I call one-dimensional (1D): each functional unit has just one input and/or one output. A functional unit without an output is possible. It's like a black hole, sucking up input without producing any output; instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input.

If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's going to arrive). They just specify a range of values they can process, and they promise a certain behavior upon input arriving. Also they don't know where their output is going; they just produce it in their own time, independent of other functional units. That means, at least conceptually, all functional units work in parallel.

Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream and producing output for some downstream. That makes functional units very easy to test - at least as long as they don't depend on state or resources.

I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole, each focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. Take a nerve cell "controlling" a muscle cell for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract: one produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment.

My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into the software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology - just never mind labeling the functional units with nouns there. That's ok in Flow Design; you'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job takes just 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much: it makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input: the functional unit consumes words and produces words, but it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness - there are too many of them - or words with no more than two letters. Such words are called "stop words".

In the flow notation the optionality of the output is signified by an asterisk outside the brackets, e.g. (word)*. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none, one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return, because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

    void filter_stop_words(string word, Action<string> onNoStopWord) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection, because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen so as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call, or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature:

    class Filter {
        public void filter_stop_words(string word) {
            if (...check if not a stop word...)
                onNoStopWord(word);
        }

        public event Action<string> onNoStopWord;
    }

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach, so maybe you want to compare it to Flow Design. The function signature given is:

    string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point, since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length, it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case, so maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line).

In the Flow Design for this increment, the first three bullet points turn into functional units with explicit data added. As the signature requires, a text is transformed into another text: see the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see, because thereby the domain is clearly represented in the design. The requirements talk about words and lines, and here they are. But note the asterisk! It's not outside the brackets but inside. That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams, whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not ... I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls, as in the sketch below.
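The original post shows the resulting code as an image that didn't survive here. A minimal sketch of what such a pipeline might look like - the function names and bodies are assumed for illustration, following the three design steps above:

    // Inside a static class; requires using System.Collections.Generic and System.Linq.
    static string WordWrap(string text, int maxLineLength) {
        var words = Extract_words(text);
        var lines = Assemble_lines(words, maxLineLength);
        return Assemble_text(lines);
    }

    static string[] Extract_words(string text) {
        return text.Split(' ');
    }

    static IEnumerable<string> Assemble_lines(string[] words, int maxLineLength) {
        var line = "";
        foreach (var word in words) {
            // Start a new line if adding the word (plus a separating space) would exceed the maximum.
            if (line.Length > 0 && line.Length + 1 + word.Length > maxLineLength) {
                yield return line;
                line = word;
            } else {
                line = line.Length == 0 ? word : line + " " + word;
            }
        }
        if (line.Length > 0) yield return line;
    }

    static string Assemble_text(IEnumerable<string> lines) {
        return string.Join("\n", lines);
    }

Note that this increment deliberately leaves words longer than maxLineLength on over-long lines; splitting them is the feature added in increment 2.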
Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality; whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it, to deliver the full scope.

Fortunately, Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step. Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation, checking the assumption that this works, does not do anything to split long words. The input is just passed on, as in the sketch below.
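Again, the original shows this as an image. A sketch of the trivial phase-in, under the same assumed names as before:

    static string WordWrap(string text, int maxLineLength) {
        var words = Extract_words(text);
        var processed = Split_long_words(words, maxLineLength); // new step, phased in
        var lines = Assemble_lines(processed, maxLineLength);
        return Assemble_text(lines);
    }

    // Trivial first version: the input is just passed on unchanged.
    static string[] Split_long_words(string[] words, int maxLineLength) {
        return words;
    }

Only the new function was added; the existing steps stay untouched.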
Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution.[4] How does his solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

Admittedly, the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then ... is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself. What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing, I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps teams become more productive. All team members can participate in functional design. This goes beyond collective code ownership; we're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, it seems to me the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now, in version 8. I trust you to find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

< Previous Page | 22 23 24 25 26