So let's say that I have a test:
@Test
public void MoveY_MoveZero_DoesNotMove() {
    Point p = new Point(50.0, 50.0);
    p.MoveY(0.0);
    Assert.assertEquals(50.0, p.Y, 0.0);
}
This test then causes me to create the class Point:
public class Point {
    public double X;
    public double Y;

    public Point(double x, double y) {
        X = x; Y = y;
    }

    public void MoveY(double yDisplace) {
        throw new UnsupportedOperationException(); // deliberately unimplemented
    }
}
OK. It fails. Good. Then I remove the exception, leave the method body empty, and I get green. Great, but of course I need to test that the value actually changes. So I write a test that calls p.MoveY(10.0) and checks that p.Y equals 60.0. It fails, so I change the method to look like this:
public void MoveY(double yDisplace) {
    Y += yDisplace;
}
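For reference, the positive-direction test described above might look like the sketch below. To keep it runnable without a JUnit dependency, it's written as a plain main method with an embedded Point stub; the class and method names are just illustrative.

```java
public class MoveYPositiveTest {
    // Minimal Point stub so this sketch compiles on its own.
    static class Point {
        public double X, Y;
        Point(double x, double y) { X = x; Y = y; }
        void MoveY(double yDisplace) { Y += yDisplace; }
    }

    public static void main(String[] args) {
        // Moving by +10.0 from 50.0 should land on 60.0.
        Point p = new Point(50.0, 50.0);
        p.MoveY(10.0);
        if (Math.abs(p.Y - 60.0) > 1e-9) {
            throw new AssertionError("expected 60.0, got " + p.Y);
        }
        System.out.println("MoveY_MovePositive_MovesUp passed");
    }
}
```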
Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that, if I write it correctly, it doesn't fail at first. That means I haven't followed the principle of "Red, Green, Refactor."
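Here's a sketch of that negative-value test, again written as a standalone main method with an embedded Point stub rather than a JUnit test, and with hypothetical names. Against the current implementation it passes on the very first run, which is exactly the missing-Red problem.

```java
public class MoveYNegativeTest {
    // Minimal Point stub so this sketch compiles on its own.
    static class Point {
        public double X, Y;
        Point(double x, double y) { X = x; Y = y; }
        void MoveY(double yDisplace) { Y += yDisplace; }
    }

    public static void main(String[] args) {
        // Moving by -10.0 from 50.0 should land on 40.0.
        Point p = new Point(50.0, 50.0);
        p.MoveY(-10.0);
        if (Math.abs(p.Y - 40.0) > 1e-9) {
            throw new AssertionError("expected 40.0, got " + p.Y);
        }
        // This succeeds immediately -- there is no "Red" step to observe.
        System.out.println("MoveY_MoveNegative_MovesDown passed");
    }
}
```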
Of course, this is a first-world problem of TDD, but seeing a test fail first is valuable because it proves the test *can* fail. Otherwise, a seemingly innocent test that passes for the wrong reasons could fail later because it was written incorrectly. That might not be a problem if it happened five minutes later, but what if it happens to the poor sap who inherits your code two years from now? As far as he knows, MoveY does not work with negative values, because that is what the failing test tells him. But it could actually work fine, with the bug being in the test itself.
I don't think that would happen in this particular case, since the code sample is so simple, but in a large, complicated system it easily could. It sounds crazy to say that I want my tests to fail, but that is an important step in TDD, and for good reason.