Search Results

Search found 84 results on 4 pages for 'derick bailey'.

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green, but it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in.

    To make this clear (at least I hope it becomes clear...), I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself have also contributed to it (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious...

    So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy coding) are exceptional and need justification. I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts described below.

    First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    At the C# micro-level, the best way that I have found to formulate such a requirement is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive XML comments.

    The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

    The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article...

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here...

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

    Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.

    In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.

    I normally use an IoC container or some sort of self-written provider-like model in my architecture.
    In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The 'Red' (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

        namespace Calculator.Test
        {
            [TestFixture]
            public class CalculatorTest
            {
                private readonly ServiceContainer container = new ServiceContainer();

                [Test]
                public void CalculatorLastResultIsInitiallyNull()
                {
                    ICalculator calculator = container.GetService<ICalculator>();

                    Assert.IsNull(calculator.LastResult);
                }

            } // class CalculatorTest

        } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be...) A compiler error tells me "your code is incorrect", but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin's output tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file.

    So far my 'work' was six mouse clicks long. The only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                public double? LastResult
                {
                    get
                    {
                        throw new NotImplementedException();
                    }
                }
            }
        }

    Back in the test fixture, we have to put our IoC container to work:

        [TestFixture]
        public class CalculatorTest
        {
            #region Fields

            private readonly ServiceContainer container = new ServiceContainer();

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
            }

            ...

    Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more...

    The 'Red' (pt. 2)

    Now the execution of the above test gives a different result: this time, the test outcome tells me that the method under test is called.
    And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part...

    The 'Green'

    Making the test green is quite trivial. Just make LastResult an automatic property:

        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult { get; private set; }
        }

    One more round...

    Now on to something slightly more demanding (cough...). Let's state that our Calculator exposes an Add() method:

            ...

            /// <summary>
            /// Adds the specified operands.
            /// </summary>
            /// <param name="operand1">The operand1.</param>
            /// <param name="operand2">The operand2.</param>
            /// <returns>The result of the addition.</returns>
            /// <exception cref="ArgumentException">
            /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
            /// -- or --<br/>
            /// Argument <paramref name="operand2"/> is &lt; 0.
            /// </exception>
            double Add(double operand1, double operand2);

        } // interface ICalculator

    A remark: I sometimes hear the complaint that XML comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read XML code comments with the CR_Documentor tool window, which renders them as nicely formatted documentation. Apart from that, I'm heavily using XML code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder) and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking...)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

            ...

            public double Add(double operand1, double operand2)
            {
                throw new NotImplementedException();
            }

        } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

        [Test]
        [Row(-0.5, 2)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

    As you can see, I'm using a data-driven unit test method here, mainly for these two reasons: first, because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose - rather, I will only have to add another Row attribute to the existing one. Second, the argument values are explicitly printed out in the test report.
    This can be a valuable documentation feature even when everything is green: one can quickly review which values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development again: the test result tells us that we're red again, because there is not yet an implementation... Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here are the test and the method implementation at the end of the second cycle:

        // in CalculatorTest:

        [Test]
        [Row(-0.5, 2)]
        [Row(295, -123)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

        // in Calculator:

        public double Add(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            throw new NotImplementedException();
        }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

        [Test]
        [Row(1, 1, 2)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

    In the course of refining a method, it's very likely that you will come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software) that results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test.

    Remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
    So your test method might look something like this in the end:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 999999999, 999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, double.MaxValue)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    And this will produce a nicely readable HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review...

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new...

    And finally: Refactor (for the right reasons)

    To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

        // CalculatorTest.cs:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtract(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }

        ...

        // ICalculator.cs:

        /// <summary>
        /// Subtracts the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the subtraction.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Subtract(double operand1, double operand2);

        ...

        // Calculator.cs:

        public double Subtract(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            return (this.LastResult = operand1 - operand2).Value;
        }

    Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring.
    One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                #region ICalculator

                public double? LastResult { get; private set; }

                public double Add(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 + operand2).Value;
                }

                public double Subtract(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 - operand2).Value;
                }

                #endregion // ICalculator

                #region Implementation (Helper)

                private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
                {
                    if (operand1 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand1");
                    }

                    if (operand2 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand2");
                    }
                }

                #endregion // Implementation (Helper)

            } // class Calculator

        } // namespace Calculator

    But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

    "STOP! Hold on a second... before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know... I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though... if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?"

    Derick immediately answers his own question:

    "So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc., then you should refactor it."

    I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric), you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
    So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure...

    Conclusions

    The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

    Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first...).

    It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction...) The test code serves 'only' to make the production code work. But it's the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

    With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way...). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. Historically seen, the entire software development profession is very young and still at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary...)

    Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days...). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool or something like that - for 'saving' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times...). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows...

    The round trip described here will take me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development...

    But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
    if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have...

    Some last words

    First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion... I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be "good enough" in my practical experience. But of course, I'm open to suggestions here...

    My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that is following Semantic Versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head down this path further. I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as separate files, or inside of the one bundled build, again. If I go down this path of extracting the libraries, and providing a bundled version of the final code, does this require a full version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought it no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API for the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump first, second or third dot? Or something else?

    Read the article

  • Will Google Analytics track URLs that just redirect?

    - by Derick Bailey
    I have a link on my site. That link goes to another URL on my site. The code on the server sees that resource being requested and redirects the browser to another website. Will Google Analytics be able to know that the user requested the URL from my server and was redirected? Specifically, I set up a /buy link on my watchmecode.net site to try to track who is clicking the "Buy & Download" button. This link/button hits my server, and my server immediately does a redirect to the PayPal processing page so the user can buy the screencast. Is Google Analytics going to know that the user hit the /buy URL on my site, and track that for me? If not, what can I do to make that happen?
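
    For context, the setup described above amounts to a plain server-side redirect, something like this minimal Express sketch (the route handler and target URL are placeholders, not the actual watchmecode.net code):

        import express from 'express';

        const app = express();

        // The "Buy & Download" button points at /buy on the site's own server...
        app.get('/buy', (_req, res) => {
          // ...which immediately 302-redirects the browser to PayPal. No HTML
          // page is ever rendered for /buy, so a JavaScript-based tracking
          // snippet embedded in the site's pages never runs for this request.
          res.redirect(302, 'https://www.paypal.com/...'); // placeholder checkout URL
        });

        app.listen(3000);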

    Read the article

  • Installing Ubuntu 12.04 on the HP Mini 210-2090nr

    - by Dalton Bailey
    When I got this netbook last year, I planned on putting Ubuntu Netbook Remix on it, but I never did, and now I can no longer boot into Windows for some reason, so I finally decided to do it. But after making a USB stick with Ubuntu on it, it will not get to the menu with the black and white Ubuntu logo and the options to install, try, and so on. I know the USB is configured correctly - it will boot on other computers - but on the netbook it only flashes SYSLINUX 4.06EOD..... and then flashes blue before turning black with the white underscore in the top right corner for a very long time. Any suggestions? I've been told to disable ACPI, but I can't find it in the BIOS. (BTW, I'm using 12.04, though I've tried 11.04, and I used the UNetbootin Linux live installer and Universal USB Installer to make the USB.)

    Read the article

  • Is there a variable width font that does not change width when adding effects like bold, italic?

    - by George Bailey
    NetBeans has a word wrap feature now, but if the font changes width when bold then it gets all jumpy and sometimes hard to work with. Edit: It turns out that even with Courier New, NetBeans word wrap still jumps up and down lines at a time at random. I guess this question no longer needs an answer. However, it seems that there is no answer (at least nobody has brought one up yet). I am currently using Comic Sans MS, which gets wider when bold.

    Read the article

  • Reusable Platform For Custom Board Game

    - by George Bailey
    Is there a generic platform that would allow me to customize the rules of a board game? The board game uses a square grid, similar to Checkers or Chess. I was hoping to take some of the work out of creating the computer opponent by reusing what is already written. I would think that there would be a pre-written routine for deciding which moves would lead to the best outcome, and all that I would need to program is the pieces, legal moves, which layouts constitute a win, loss, or draw, and perhaps some kind of scoring for the value of pieces. I have seen chess programs that appear to use a recursive routine, so they think anywhere from 2 to 20 moves ahead to create varying degrees of difficulty. I have noticed this on chess.com. The game I am programming will not be as complex. Is there a platform designed to be re-used for different grid/piece-based games? JavaScript would be preferable, but Java or Perl would be acceptable.
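
    The recursive look-ahead described above is essentially the minimax algorithm, and the reusable part really is just the search. A minimal sketch, with the game-specific pieces passed in as callbacks (all names here are illustrative, not from any particular library):

        // Generic minimax sketch: move generation and scoring are supplied by
        // the caller, which is exactly what a reusable platform would ask for.
        function minimax<S>(
          state: S,
          depth: number,
          maximizing: boolean,
          legalMoves: (s: S) => S[], // all states reachable in one move
          score: (s: S) => number    // piece values, win/lose/draw, etc.
        ): number {
          const moves = legalMoves(state);
          if (depth === 0 || moves.length === 0) return score(state); // leaf: evaluate

          // "Thinking 2 to 20 moves ahead" is just this depth parameter, which
          // is how chess programs produce varying degrees of difficulty.
          const values = moves.map(m => minimax(m, depth - 1, !maximizing, legalMoves, score));
          return maximizing ? Math.max(...values) : Math.min(...values);
        }

        // Toy usage: a "game" whose state is a number; a move adds one or doubles.
        console.log(minimax(1, 3, true, n => [n + 1, n * 2], n => n)); // 6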

    Read the article

  • Logic that can traverse all possible layouts without checking every combination of identical pieces?

    - by George Bailey
    Suppose we have a grid of arbitrary size, which is filled by blocks of various widths and heights. There are many 2x2 blocks (meaning they take a total of 4 cells in the grid) and many 3x3 blocks, as well as some 5x4, 4x5, 2x3, etc. I was hoping I could set up a program that would look at all possible layouts, rank them, and find the best one. Put simply, it would look at all possible positions of these blocks and see which setup ranks best (the rank being based on how many of these blocks can be connected by a roadway system of 1x1 road blocks, and how many squares can be left empty after this is done - the goal is to fit as many blocks as possible with the fewest roads). My question is: how should I traverse all the possibilities? I could take all the blocks and try them one at a time, but since all 2x2 blocks are equal, and there are a couple dozen of them, there is no point in trying every combination there. For example, this layout:

        AA BBB
        AA BBB
        CC BBB
        CC EEE
        DD EEE
        DD EEE

    is exactly the same as:

        CC EEE
        CC EEE
        AA EEE
        AA BBB
        DD BBB
        DD BBB

    You'll notice that there are two 3x3 blocks and three 2x2 blocks in my two examples. Based on the model I have now, the computer would try both of these combinations, as well as many others. The problem is that it is going to try every single possible variation of my couple dozen 2x2 blocks, and that is sorely inefficient. Is there a reasonable way to take out this duplicated work, somehow getting the computer program to treat all 2x2 blocks as equal/identical, instead of requiring rearranging/swapping of these identical blocks? Can this be done?
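
    One standard way to kill this duplicated work is to enumerate combinations instead of permutations: when placing the next copy of a block type, only consider candidate positions that come after the position just used for the previous copy of that same type, so swapping two identical blocks can never produce a "new" layout. A sketch under those assumptions (all names illustrative; positions are abstract indices, and a real version would also check grid collisions):

        // Place identical blocks in strictly increasing position order, so the
        // search visits each multiset of placements exactly once.
        type Placement = { type: string; position: number };

        function enumerate(
          counts: Map<string, number>,   // remaining blocks per type, e.g. '2x2' -> 3
          positions: number[],           // candidate cells, in a fixed order
          minIndex: Map<string, number>, // first index each type may still use
          layout: Placement[],
          out: Placement[][]
        ): void {
          const next = [...counts.entries()].find(([, n]) => n > 0);
          if (!next) {                   // every block placed: record this layout
            out.push([...layout]);
            return;
          }
          const [type, n] = next;
          const start = minIndex.get(type) ?? 0;
          for (let i = start; i < positions.length; i++) {
            counts.set(type, n - 1);
            minIndex.set(type, i + 1);   // the next identical block must go further on
            layout.push({ type, position: positions[i] });
            enumerate(counts, positions, minIndex, layout, out);
            layout.pop();                // backtrack
            minIndex.set(type, start);
            counts.set(type, n);
          }
        }

        // Toy run: 2 identical 2x2 blocks over 3 candidate positions yields the
        // 3 combinations, not the 6 permutations a naive search would visit.
        const layouts: Placement[][] = [];
        enumerate(new Map<string, number>([['2x2', 2]]), [0, 1, 2], new Map<string, number>(), [], layouts);
        console.log(layouts.length); // 3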

    Read the article

  • Fork dead SVN based project on GitHub

    - by Quinn Bailey
    I previously asked this on Stack Overflow, but it was closed - I believe because Programmers is a more appropriate venue for this question. I have done some work on the SVN Importer project (Apache license), which appears to be effectively dead (no published changes in 5 years). I have a login to their SVN server but do not have commit rights. At any rate, I'd like to convert this project to Git and push my own changes to GitHub. The GitHub site suggests the svn2git tool for converting SVN projects to Git, so I was planning to convert the SVN repository to Git, add my changes, and then push this Git repository to GitHub. I'm wondering: what are the legal requirements and common conventions of this process? Is it acceptable to clone the entire history of the project and move it to GitHub? Also, even though this is essentially a dead project, once I've translated the repository to Git, should I put all of my commits onto a non-master branch, or is it acceptable to use master in this case?

    Read the article

  • Is there any way to discover the traffic of a site I don't control?

    - by George Bailey
    Given the following:

    - The website does not call any external images or scripts; all the content is hosted on a server that is in our control.
    - The website does not contain the meta tag, nor the HTML file, that would authorize a Google account to access Webmaster Tools.
    - The access logs have not been provided to any 2nd or 3rd party.

    Is it possible for a 3rd party to get an idea of how many hits the site is getting, or are they limited to just seeing how high the site ranks? How could the 3rd party determine how well the site is doing under these restrictions? Do you know of a website for that?

    Read the article

  • Accessing live USB files from new HD Ubuntu install

    - by Robin Bailey
    After my live USB (Ubuntu 12.04 LTS) refused to boot, I proceeded to install the same Ubuntu version on the laptop hard drive (a dual boot next to Win XP). This all went off without a hitch. Previous to this, I spent several weeks enjoying and exploring Ubuntu from the USB pendrive. During this time I changed lots of settings and customized Firefox and more. Now I'd like to import the home folder from the USB drive into the new install's home folder on the hard disk - to my knowledge, that is the folder that holds all those special settings. Unfortunately, being familiar only with Windows file systems, I find the view of the USB file system from the new hard-drive install totally perplexing. I can't find anything that looks anywhere close to the original file system. What's more, I can't find any of the files I had created and stored there, like the LibreOffice Calc file that has all my passwords (this one is really discouraging) that was stored on the Ubuntu desktop. Help me find this file alone and I'll bow down with full apologies to any and all computer gods. Being able to import all those customized settings into the new install would be a major bonus also, but hey, I'm not greedy. I'll take the passwords file and be happy! And humble! I would be very grateful for some clear, understandable help on this. Thanks

    Read the article

  • I have XP, partitioned and formatted FAT32; it says it needs to be bootable to install Ubuntu - how do I do that? [duplicate]

    - by Rob Bailey
    This question already has an answer here: How do I install Ubuntu? (4 answers) I have partitioned the hard drive and formatted it FAT32, as it is only 16 GB. I have just read that I should not partition the hard drive, so I have deleted the partition. I noticed that if I re-partition it, my options would be NTFS, FAT32 or exFAT. I tried to install Ubuntu 12.10, but it flashed up something along the lines of "the partition is not bootable and it must be for Ubuntu to install". I know my copy works, as a friend installed it over his copy of Windows and it works perfectly. I have Ubuntu on my flash drive; I want to run it alongside XP.

    Read the article

  • Canon MF3200 Printer Problems

    - by Derick K.
    I bought a Canon MF3200 and used it on an old Windows laptop. The laptop broke, and now I'm trying to set up the printer on a laptop with Windows 7. I downloaded the new driver, and Windows was able to install it and mark it "Ready to Go." But whenever I try to print, an error comes up and it doesn't print. I tried to troubleshoot, to no avail. What else can I do? Any ideas on what the error might be?

    Read the article

  • OPN Exchange Keynote On-Demand

    - by kristin.jellison
    We hope everyone has had a chance to refresh and recharge after Oracle OpenWorld 2013. In case you didn't have the opportunity to catch the full OPN Exchange keynote, we have it on demand for your viewing pleasure. A highlight reel is up on the OPN YouTube channel and on Oracle.com. You can also watch individual keynote segments from Oracle executives like Mark Hurd, John Fowler and Andy Bailey, highlighted below. So please, sit back, relax and enjoy the show! You know, in case your football team is on a bye this week.

    - Mark Hurd, President, Oracle: Executive Address
    - John Fowler, Executive Vice President, Systems: Hardware and Software Engineered to Work Together
    - Joel Borellis, Group Vice President, Partner Enablement: Technology, Middleware and Business Intelligence
    - Chris Baker, Senior Vice President, Worldwide ISV, OEM and Java Sales: Engineered Systems and Hardware
    - Andy Bailey, Senior Vice President, Strategic Alliances: Cloud, Fusion Applications and Customer Experience
    - Thomas LaRocca, Senior Vice President, North America Sales Alliances and Channels, and Terri Hall, Group Vice President, North America Sales Alliances and Channels: Oracle Partner Excellence Awards: North America
    - Hugo Freytes, Senior Vice President, Latin America Alliances and Channels: Oracle Partner Excellence Awards: Latin America
    - Mark Lewis, Senior Vice President, APAC Alliances and Channels, and Hiroshi Watanabe, Senior Vice President, Japan Alliances and Channels: Oracle Partner Excellence Awards: APAC and Japan
    - David Callaghan, Senior Vice President, EMEA Alliances and Channels: Oracle Partner Excellence Awards: EMEA

    Cheers! The OPN Communications Team

    Read the article

  • How do I enable a disabled Event Notification?

    - by Derick Mayberry
    I have a scenario where I am using external notification to process documents being sent in from the entire Navy fleet. Normally I have no problems, but just a few days ago an administrator changed passwords, my queue processing failed, and I rolled back the transaction with this C# code:

        catch (Exception)
        {
            TransporterService.WriteEventToWindowsLog(AppName, "Rolling Back Transaction:", ERROR);
            broker.Tran.Rollback();
            break;
        }

    After that, my target queue would continue to fill up, but nothing went to the external activation queue. Does the Event Notification get disabled once a transaction is rolled back? Should I have done a broker.EndDialog here when catching my exception? Also, after my event notification is disabled (if that is actually what's happening), how do I re-engage it? Do I have to drop it and recreate it? Thanks in advance for any help. I love Service Broker and it's working wonderfully, except for this bug that I hope to fix soon.

    Read the article

  • How do I select the max value from multiple tables in one column?

    - by Derick
    I would like to get the last date that records were modified. Here is a simple sample SELECT:

        SELECT t01.name,
               t01.last_upd date1,
               t02.last_upd date2,
               t03.last_upd date3,
               'maxof123' maxdate
        FROM   s_org_ext t01, s_org_ext_x t02, s_addr_org t03
        WHERE  t02.par_row_id(+) = t01.row_id
          AND  t03.row_id(+) = t01.pr_addr_id
          AND  t01.int_org_flg = 'n';

    How can I get the maxdate column to display the max of the three dates? Note: no UNION or sub-SELECT statements ;)

    Read the article

  • Hudson: how do I use a parameterized build to do SVN checkout and SVN tag?

    - by Derick Bailey
    I'm setting up a parameterized build in Hudson v1.362. The parameter I'm creating is used to determine which branch to check out in Subversion. I can set my SVN repository URL like this:

        https://my.svn.server/branches/${branch}

    and it does the checkout and the build just fine. Now I want to tag the build after it finishes. I'm using the SVN tagging plugin for Hudson to do this, so I go to the bottom of the project config screen for the Hudson project and turn on "Perform Subversion tagging on successful build". Here, I set my Tag Base URL to

        https://my.svn.server/tags/${branch}-${BUILD_NUMBER}

    and it gives me errors about those properties not being found. So I change them to environment variable usages like this:

        https://my.svn.server/tags/${env['branch']}-${env['BUILD_NUMBER']}

    and the SVN tagging plugin is happy. The problem now is that my SVN repository at the top is using the ${branch} syntax, and the SVN tagging plugin barfs on this:

        moduleLocation: Remote -https://my.svn.server/branches/$branch/
        Tag Base URL: 'https://my.svn.server/tags/thebranchiused-1234'.
        There was no old tag at https://my.svn.server/tags/thebranchiused-1234.
        ERROR: Publisher hudson.plugins.svn_tag.SvnTagPublisher aborted due to exception
        java.lang.NullPointerException
            at hudson.plugins.svn_tag.SvnTagPlugin.perform(SvnTagPlugin.java:180)
            at hudson.plugins.svn_tag.SvnTagPublisher.perform(SvnTagPublisher.java:79)
            at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:580)
            at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:558)
            at hudson.model.Build$RunnerImpl.cleanUp(Build.java:167)
            at hudson.model.Run.run(Run.java:1295)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)
        Finished: FAILURE

    Notice the first line there: the SVN tag is looking at ${branch} as part of the repository URL - it's not parsing out the property value. I tried to change my original Repository URL for SVN to use the ${env['branch']} syntax, but that blows up on the original checkout, because this syntax is not getting parsed at all by the checkout. Help?! How do I use a parameterized build to set the SVN URL for checkout and for tagging my build?!

    Read the article

  • What electronic scrum/kanban board do you use and recommend for distributed teams?

    - by Derick Bailey
    I have a coworker on a team that is fairly distributed, fairly large (for our company), and wants to take advantage of visual management tools like scrum/kanban boards. Since they are a somewhat distributed team, though, all of the issue management / work management must be done via an electronic tool (we currently use Trac). What issue/work management tools, with a visualization of a scrum/kanban board, do you use for your distributed scrum/kanban teams? Would you recommend it, and if so, why?

    Read the article

  • Can I get build warnings from a custom build step in Qt Creator?

    - by Derick
    I have the following script that I run as a custom build step in Qt Creator:

        git ls-files . | egrep "\.cpp$|\.h$" | xargs vera++

    which then gives output like:

        foo/bar.cpp:1: no copyright notice found

    Another script I also use is:

        cppcheck . --template gcc -q --enable=style,unusedFunctions

    with the output:

        apple.h:8: style: The class 'MyPie' has no constructor. Member variables not initialized.

    I would love to double-click on the error and go to the source in the Compile Output window. It seems that only gcc errors are detected, and these custom ones are ignored even though they have the same format.

    Read the article

  • Rails 3 custom renderer: where do I put this code?

    - by Derick Bailey
    I'm following along with Yehuda's example of how to build a custom renderer for Rails 3, according to this post: http://www.engineyard.com/blog/2010/render-options-in-rails-3/ I've got my code working, but I'm having a hard time figuring out where this code should live. Right now, I've got my code stuck right inside of my controller file. Doing this, everything works. When I move the code to the lib folder, though, I have to explicitly 'require' my file in the controller that needs the renderer, or it won't work. Yes, the file gets loaded automatically when it sits in the lib folder, but the code to add the renderer isn't working for some reason until I do a require on it. Where should I put my code to add the renderer and MIME type, so that Rails 3 will pick it up and register it for me, without me having to manually require the file in my controller?

    Read the article

  • When to use WYSIWYG Editors?

    - by Derick K.
    It appears to me, from searching Stack Overflow, that hand-coding HTML/CSS is superior to using WYSIWYG editors. I'm a few weeks into learning HTML and CSS, and I've only hand-coded so far (though I do have the Adobe Suite). My questions: is it ever worth learning how to use a WYSIWYG editor (like Dreamweaver)? And, more importantly, when would it be better to use one over hand-coding?

    Read the article

  • How to Set Up an RSS Feed Using FeedBurner?

    - by Derick K.
    I've searched Google, but only found information on how to set up FeedBurner for WordPress or other blogging services. I've also searched Stack Overflow, but haven't found the right information. I'm creating a website for which I want to have an RSS feed. FeedBurner seems to be a good, free option, so I'd like to use that. When I go to FeedBurner using my Google account, it says the website I want to claim is invalid, and it's not clear how to make it valid. I also have no experience with RSS (and really websites in general), so I'm not sure where to go from here. What are the steps I need to take, starting from scratch, to add an RSS feed (with FeedBurner) to a website?

    Read the article

  • MacRuby + Interface Builder: How to display, then close, then display a window again

    - by Derick Bailey
    I'm a complete n00b with MacRuby and Cocoa, though I've got more than a year of Ruby experience, so keep that in mind when answering - I need lots of details and explanation. :) I've set up a simple project that has 2 windows in it, both of which are built with Interface Builder. The first window is a simple list of accounts using a table view. It has a "+" button below the table. When I click the + button, I want to show an "Add New Account" window. I also have an AccountsController < NSWindowController and an AddNewAccountController class, set up as the delegates for these windows, with the appropriate button-click methods wired up and outlets to reference the needed windows. When I click the "+" button in the Accounts window, I have this code fire:

        @add_account.center
        @add_account.display
        @add_account.makeKeyAndOrderFront(nil)
        @add_account.orderFrontRegardless

    This works great the first time I click the + button. Everything shows up, and I'm able to enter my data and have it bind to my model. However, when I close the Add New Account form, things start going bad.

    If I set the Add New Account window to release on close, then the second time I click the + button, the window will still pop up, but it's frozen. I can't click any buttons, enter any data, or even close the form. I assume this is because the form's code has been released, so there is no message loop processing the form... but I'm not entirely sure about this.

    If I set the Add New Account window to not release on close, then the second time I click the + button, the window shows up fine and it is usable - but it still has all the data that I had previously entered... it's still bound to my previous Account class instance.

    What am I doing wrong? What's the correct way to create a new instance of the Add New Account form, create a new Account model, bind that model to the form, and show the form, when I click the + button on the Accounts form? ... This is all being done on OS X 10.6.6, 64-bit, with Xcode 3.2.4

    Read the article

  • When will Ruby 1.8.6 be retired?

    - by Derick Bailey
    I can't seem to find any info on this... when will ruby 1.8.6 be 'retired'? ruby 1.8.7 is much more functional while maintaining syntax compatibility, and ruby 1.9.1 is significantly better all around... any idea when 1.8.6 will be retired?

    Read the article
