Net Objectives

Friday, November 18, 2011

Redefining Test-Driven Development, Pt. 1

Download the Podcast


How you do something new is often influenced to a great extent by what you think you are doing -- its precise nature, the steps and work-flows, and how it relates to other things that you already do and understand.  The term “Test-Driven Development”, while well-established in our industry, is perhaps an unfortunate choice of words to describe what we are doing, and thus how we choose to do it.  Here in part 1 we’ll examine the problem, and then later in part 2 we’ll suggest a solution.

Let’s start with the word “test”.  This is a word we already have a definition for; typically we think of a test as an evaluation of something, or a judgement of something relative to a standard, or perhaps an action that determines the correctness or incorrectness of something.  “Test” is a verb: “I shall test this.”  It is also a noun: “Let’s conduct a test to find out if this works.”

In any case, the presumption is that there is something that is either correct, or operates correctly, or does not.  Clearly this is a nonsensical idea if the thing to be tested does not actually exist yet. 

In a typical TDD process, we write the test before we create the code we’re testing [1].  At the “testing point”, there is nothing to test.  Will the test fail?  Of course it will [2].  Something that does not exist can neither be right nor can it do the right thing.  So it would seem that we’re not really doing anything meaningful [3]. 
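
As a concrete (and entirely hypothetical) illustration, here is roughly what such a test-first test might look like in Java with JUnit; the class and method names are invented for this sketch, and the class under test has deliberately not been written yet:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TemperatureConverterTest {

        @Test
        public void convertsFreezingPointToFahrenheit() {
            // TemperatureConverter does not exist yet: this test describes
            // behavior we intend to build, rather than examining existing code.
            TemperatureConverter converter = new TemperatureConverter();
            assertEquals(32.0, converter.celsiusToFahrenheit(0.0), 0.001);
        }
    }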

Some of you are probably thinking: “The test won’t fail.  It won’t even compile!”  Very true, but this is only because our technology (typically) works the way it does.  In another technology (Python, for example), referencing something that does not exist might simply cause the system to ignore you, or return 0, or null, or something else.  This is one reason why we like strongly-typed languages and strict compilers.  However, note what the compiler is actually saying: “This makes no sense!  You’re trying to refer to something that does not exist!”

All of this would seem to indicate that we have to do it the other way ‘round: that we’ve got to create the thing to be tested before we can create the test.  It’s just common sense.

Then there is the notion of “driven”.  The notion of “test” is in conflict with the notion of “driven”.  If one activity drives another, then one would normally expect the driving activity to precede the driven activity, temporally.  If thing X happens which then causes thing Y, and if this causality can be proven, then we can say X drove Y.  But if the test must be created after the tested thing, then how can the test drive the tested?
 
Finally we have “development”.  Development is the creation of something, usually from a plan or goal or set of principles.  If tests are to drive development, then they must cause it.  Thus they must constitute the plan or goal or set of principles.  But tests in the traditional software sense are not plans; they are an examination of the system to determine whether it meets its success criteria.

This confusion can cause lots of problems:
  1. People won’t get the point, and will reject the idea intellectually: “that makes no sense”
  2. People will see this as “new work” for the team to do, work that will slow the team down: “that will be wasteful”
  3. People will see the product (a collection of tests) as a new maintenance burden for the team: “that cannot be sustained over time”
In other words, TDD tests would seem to constitute at best a tremendous added cost, and at worst a totally meaningless one.  This is categorically untrue, and we begin by redefining what we’re doing.

In TDD, as it turns out, we don’t write tests first.  In fact... in TDD we don’t write tests at all. 

Stay tuned for part 2... :) 


--- 

[1] As we will see in future blogs, the test-first technique does not actually equate to TDD, but it is a very common approach, and very compatible with TDD. 

[2] ...and what if it doesn’t?  What would that mean?  That’s the subject of another blog... 

[3] I can tell you a priori that any test written before the thing it tests exists will fail, without even knowing what the test is about.  Therefore actually writing the test and watching it fail is not going to tell me something I didn’t already know.  So why do it?
