Relevance of red/green light testing in TDD

I'm just starting out and loving TDD, but I'm wondering about the red light / green light concept. I understand, in theory, the importance of seeing a test fail before making it pass. In practice, however, I'm finding it a somewhat useless exercise.

I feel like I cannot properly write a failing (or passing) test without first implementing the code I intend to test. For example, if I write a test to show that my DataProvider returns a DataRow, I need to write the DAL method first to get a meaningful failure, something better than a NullException or a null return from an empty stub. That feels pointless, because to me the red light should come from the actual value I am testing.

In other words, if I just return null or false from the function under test in order to produce my failure, what is the actual value of the red light?

On the other hand, if I have already implemented the method (which in a way contradicts the test-first paradigm), then I feel I am just flipping mutually exclusive assertions (IsTrue instead of IsFalse, or IsNull instead of IsNotNull) purely for the sake of seeing a red light, and then switching them back to the opposite to get a pass.
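
A toy sketch of what I mean (in Python, with made-up names):

def get_data_row():
    return None  # empty stub standing in for the unwritten DAL method

def test_data_provider_returns_row():
    row = get_data_row()
    assert row is not None  # red, but only because the stub returns None
    # ...or, with the DAL already written, I'd flip this to
    # "assert row is None" just to see red, then flip it back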

It's not that I doubt the concept; I'm asking because this is what I've noticed in practice, and I'm wondering whether I'm doing something wrong.

EDIT

I accepted Charlie Martin's answer because it worked best for me. That in no way suggests the other answers lacked merit; all of them helped me understand a concept I apparently hadn't been looking at properly.

+1




5 answers


Think of it as a kind of specification. You start by asking yourself, "What should the code I'm about to write actually do?" So, say you want to write a function that adds natural numbers. How would you know it works? Well, you know 2 + 2 = 4, so you can write a test (this is roughly Python; details omitted, see the unittest module docs):

def test2plus2(self):
    self.assertEqual(addNat(2, 2), 4)

So here you have defined a specification that says: "for natural numbers a and b, compute a + b". Now you know what you need in order to write the function:

def addNat(a,b):
    return a+b

You run it and the test passes. But then there are other things you know: since this is only for natural numbers (for whatever reason), you need to guard against non-natural input:

def testUnnatural(self):
    # pass the callable and its arguments separately so that
    # assertRaises can catch the exception itself
    self.assertRaises(AssertionError, addNat, -1, 2)

Now you have added to the specification: "and raise an AssertionError if the numbers are negative", which tells you the next version of the code:

def addNat(a, b):
    """Version two"""
    assert (a >= 0) and (b >= 0)
    return a + b

Run this now: the assertion fires on the negative input, the new test passes, and you are green again.

The point is that TDD is a way of building up very clear, detailed specifications. Something like addNat doesn't need them, but with real code, especially in an agile world, you don't know the answer intuitively. TDD helps you work out and pin down the real requirements.
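
Putting the pieces together, a minimal runnable version of the example (assuming Python's unittest module) might look like this:

import unittest

def addNat(a, b):
    """Add two natural numbers; reject negative input."""
    assert (a >= 0) and (b >= 0)
    return a + b

class TestAddNat(unittest.TestCase):
    def test2plus2(self):
        # the original specification: 2 + 2 = 4
        self.assertEqual(addNat(2, 2), 4)

    def testUnnatural(self):
        # the added specification: negative input raises AssertionError
        self.assertRaises(AssertionError, addNat, -1, 2)

if __name__ == "__main__":
    unittest.main()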

+3




The value of the red light lies in its ability to detect false positives. It has happened to me that, no matter what my implementation code was, a test always passed. This is where red light / green light testing helps.

It has also happened that some of my tests did not run at all, and all I saw was "Build Succeeded", because I was not using the red light. If I had used a red light to make sure my tests failed first, I would have become suspicious the moment I saw the build succeed when I expected it to fail.
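
A classic false positive of this kind is a test whose assertion was never written; a sketch (get_data_row is a made-up function under test):

import unittest

def get_data_row():
    return None  # stand-in implementation for the sketch

class TestDataProvider(unittest.TestCase):
    def test_returns_row(self):
        row = get_data_row()
        # bug in the test itself: the assertion was forgotten, so this
        # "passes" no matter what get_data_row() returns; watching the
        # test go red first would have exposed that it can never fail

if __name__ == "__main__":
    unittest.main()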

+9


source


There are several motivating examples I can think of for why the red light is useful; it has helped me a lot.

  • Writing a red test as a sanity check. Seeing the test fail for a feature I know is REALLY not implemented yet convinces me that the test actually runs and actually tests something.

  • When you find a bug in your code, write a failing test that reproduces it (see the sketch after this list). With a red test right from the start, you are sure you have captured the bug, and you know exactly when it is fixed.
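
A sketch of such a regression test (parse_price and its bug are made up for illustration):

import unittest

def parse_price(text):
    return float(text)  # hypothetical buggy code: chokes on "1,50"

class TestParsePriceRegression(unittest.TestCase):
    def test_comma_decimal_separator(self):
        # red right now: this reproduces the reported bug exactly;
        # after a fix such as float(text.replace(",", ".")) it goes
        # green and stays in the suite as a regression guard
        self.assertEqual(parse_price("1,50"), 1.5)

if __name__ == "__main__":
    unittest.main()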

There is perhaps one case where the red light is not useful: when you write tests for functionality that already works, they are usually green from the start. I would still be wary of writing such "green" tests, though. It can happen that you later redesign classes or change something significant that makes some tests obsolete, and a test that has always been green tells you nothing!

+1




I'm not sure I fully get your point, but here is how I see it.

Think less about what the function returns and more about what it does and what it holds to be true.

If my true/false function is some language's version of the following C function:

bool isIntPrime(int testInt);

then you want a test to fail if you pass a double to it (rather than relying on a "helpful" implicit conversion, as you might get in some languages).
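
A rough Python equivalent of that red-light case (the type guard and the prime check here are illustrative, not part of the original answer):

import unittest

def isIntPrime(n):
    if not isinstance(n, int):
        raise TypeError("isIntPrime expects an int")
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

class TestIsIntPrime(unittest.TestCase):
    def test_rejects_non_integer(self):
        # this is the red light until the type guard exists:
        # no silent implicit conversion allowed
        self.assertRaises(TypeError, isIntPrime, 2.5)

if __name__ == "__main__":
    unittest.main()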

If you really can't find a red-light case, your green light is pretty much meaningless. And if you do run into such a case, then a test to verify that function is probably not worth writing. Perhaps the function is so simple and reliable that it cannot fail? Then writing a bunch of green lights is a waste of time.

It's like the white rabbit experiment. If I claim that all rabbits are brown, then counting brown rabbits does nothing to establish the truth of my claim. However, the first white rabbit I see proves my statement wrong.

Does this trivial example help?

+1




I always start by having my code throw a NotImplementedException, although some people argue that you should start with the method not existing at all, so that a compiler error is your first failing test. There is some logic behind that: if you could write a passing test without the method even existing, you wouldn't need to write any code. I usually do that step in my head.
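
In Python terms (NotImplementedError being the rough analogue of C#'s NotImplementedException), the starting point might look like this:

import unittest

def addNat(a, b):
    raise NotImplementedError  # deliberate stub: any test against it is red

class TestAddNat(unittest.TestCase):
    def test2plus2(self):
        self.assertEqual(addNat(2, 2), 4)  # the first red light

if __name__ == "__main__":
    unittest.main()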

With the exception-throwing code written, I proceed to write the first test for the feature I'm working on and (presumably) get my first red light. Now I can settle into the regular TDD rhythm: Red-Green-Refactor. Don't forget the last step: refactor when your tests pass, not while you are writing code to fix a failing test.

This approach takes discipline, and sometimes it feels like you are doing stupid things, since generally the easiest way to pass the first test is to return some hardcoded data (see the sketch below). But persist: this bootstrapping phase is relatively short, and if you don't skip it you may find you write simpler, more usable code than you would have if your solution (or at least its skeleton) had sprung forth whole at the first test. If you are not developing in small increments, you are not doing TDD.
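
For instance, using the addNat stub above, the easiest first green is just the hardcoded answer, with a second test forcing out the fake (a sketch of the "fake it till you make it" step):

def addNat(a, b):
    return 4  # just enough to turn test2plus2 green

# a second test, say assertEqual(addNat(1, 2), 3), then forces the
# hardcoded value out and the real implementation (return a + b) in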

One final caveat: remember that TDD is about unit testing. There are other kinds of testing (integration, acceptance, load, etc.), and the need for them doesn't magically disappear when you start doing TDD.

+1


