Net Objectives

Wednesday, August 15, 2018

TDD Mark 3, part 2

I realized recently that this had been written but never published.  Part 1 was published; this second part never was.  Not sure how that happened.  Maybe I needed a test. :)

Anyway, here it is.  Part three is still pending.

-Scott-

Expanding the thesis


Our thesis thus far has centered on the notion that TDD is not really about testing; it is really about specification.  But we must also make a distinction between what TDD is and what it does.  Test-Driven Development is definitely a phrase that describes an action, if one focuses on the word “driven”.

What does TDD drive?  It drives development.  What is development?

Traditionally we have considered the creation of software to consist of a sequence of phases, usually something like:
  1. Analysis
  2. Design
  3. Construction (coding)
  4. Inspection (testing)
In agile methodologies we abandon the notion that these aspects of software development should be conducted in discrete phases, replacing this with incremental action.  In each increment (two weeks if you adhere to Extreme Programming, one month if you choose to do Scrum, etc.) we conduct all aspects of development, from analysis to testing.

TDD, being a distinctly agile methodology, must therefore concern itself with all aspects of development. 

The analysis aspect of TDD is the reason we can consider the test suite to form a technical specification, and we can certainly say TDD drives us toward this by the simple fact that you cannot write a test about something you do not understand.  Automated tests are very unforgiving and require a detailed level of understanding to create.  Thus, they require rigorous analysis.

We like to say that the best specification “forces” you to write the correct code.  In investigating this fully (which we will do in a future blog) we’ll see that the tests we write, if done in the proper and complete way, do exactly this.  You cannot make the tests pass unless you write the right code.  Thus TDD leads to construction.

Also, while we do not write our tests for testing purposes, but rather as the spec that leads to the implementation code, we do not discard the tests once the code is complete.  They have, in essence, a second life where they provide a second value.  They become tests once we are done using them to create the system.  So TDD does apply to testing as well.  There may be other tests we write, but the TDD suite does contribute to the testing needs of the team.

That leaves design.  Can TDD also be said to apply to design?  Could TDD also be “Test-Driven Design”, in other words?  We say yes, decidedly so.  Much of what will follow in future blogs will demonstrate this.

But this integration of the test-writing activity into all aspects of software development means that the test suite itself becomes essentially part of the source code.  We must consider the tests to be first class citizens of the project, and thus we must also address ourselves to the design of the tests themselves.  They must be well-designed in order to be maintainable, and this is a critical issue when it comes to conducting TDD in a sustainable way, which is a clear focus of this blog series.

“Good” design


How does one define a good design?  This is not a trivial question.  Some would say that looking to the Design Patterns can provide excellent examples of good design.  Some would say that attending to a rubric like SOLID (Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion) can provide the guidance we need to produce high-quality designs.  We agree with these ideas, but also with the notion of the separation of concerns.

Martin Fowler, in his book “UML Distilled”, suggested that one way to approach this is to fundamentally ensure that the abstract aspects (what he called the “conceptual perspective”) of the system should not be intermixed with the specific way those concepts are executed (what he called the “implementation perspective”).

Let’s examine a counterexample, in which we do not follow this advice and instead mix these two perspectives.

Let’s say we have an object that allows us to communicate via a USB port.  We’ll call it USBConnection, and we’ll give it a send() and receive() method.  Let’s furthermore say that, sometime after this object has been developed, we have a new requirement to create a similar object, but that we need to also ensure that any packet sent over the port is verified to be well-formed; otherwise we throw a BadPacketException.  In the past, when we considered OO to be primarily focused on the notion of object reuse, we might have suggested something like this:


Figure 1: “Reusing” the USBConnection by deriving a new type from it
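
To make this concrete, here is a minimal C# sketch of the inheritance-based design in Figure 1.  The send() and receive() operations and the BadPacketException come from the discussion above; the Packet type, the method signatures, and the verification check are assumed purely for illustration:

using System;

public class Packet { /* placeholder for the data being transmitted */ }

public class BadPacketException : Exception { }

public class USBConnection
{
  public virtual void Send(Packet packet) { /* write to the port */ }
  public virtual Packet Receive() { /* read from the port */ return null; }
}

// "Reuse" by deriving: the subclass adds verification to the inherited behavior
public class VerifiedUSBConnection : USBConnection
{
  public override void Send(Packet packet)
  {
    if (!IsWellFormed(packet)) throw new BadPacketException();
    base.Send(packet);
  }

  private bool IsWellFormed(Packet packet) { /* assumed check */ return true; }
}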

This can produce problems.

First, any change to USBConnection can also propagate down to VerifiedUSBConnection, whether that is appropriate/desired or not.  The opposite, however, is not true.  We can make changes to the verified version with complete confidence that these changes will have no effect on the original class.

Second, one can create an instance of VerifiedUSBConnection and, perhaps accidentally, cast it to the base type.  It will appear, in the code, to be the simple USBConnection, which never throws an exception, but this will not be true. The reverse, however, is impossible. We cannot cast an instance of USBConnection to type VerifiedUSBConnection and then compile the code successfully.
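
In code form (using the sketch above), the hazard is that the upcast compiles cleanly even though the object can still throw:

USBConnection connection = new VerifiedUSBConnection();
connection.Send(packet);  // may throw BadPacketException, though the declared type suggests it never will

// The reverse is rejected by the compiler:
// VerifiedUSBConnection verified = new USBConnection();  // does not compile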

If we do this very much, we end up with a vague, muddy, confusing architecture, where changes and errors propagate in hard-to-predict ways and where we simply have to remember that certain issues are of concern while others are not, because the design does not explicitly control coupling.

But Fowler’s guidance would also lead us away from using inheritance like this, because the class USBConnection is essentially forming an interface which is implemented by VerifiedUSBConnection, while also being an implementation itself.  It is both conceptual and an implementation; we have not separated these perspectives in this object.  If we want to completely separate the conceptual part of the system from its implementation, we would be forced to design it differently:



Figure 2: Two ways of separating concept from implementation

In the first design, USBConnection is a conceptual type (interface, abstract class, pure virtual class, something along those lines) with two different implementing versions.  The conceptual type is only conceptual and the implementing types are only implementations; there is no mixing.
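
Here is one way that first design might look in C#, assuming an interface for the conceptual type and reusing the Packet and BadPacketException placeholders from the earlier sketch; the bodies are illustrative only:

public interface USBConnection
{
  void Send(Packet packet);
  Packet Receive();
}

public class NonVerifiedUSBConnection : USBConnection
{
  public void Send(Packet packet) { /* write directly to the port */ }
  public Packet Receive() { /* read from the port */ return null; }
}

public class VerifiedUSBConnection : USBConnection
{
  public void Send(Packet packet)
  {
    if (!IsWellFormed(packet)) throw new BadPacketException();
    /* then write to the port */
  }

  public Packet Receive() { /* read from the port */ return null; }

  private bool IsWellFormed(Packet packet) { /* assumed check */ return true; }
}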

In the second design (which, if you are familiar with patterns, is a combination of the Strategy Pattern with the Null Object Pattern), the concept of PacketVerifier is represented by a type that is strictly conceptual, whereas the two kinds of verifiers (one which performs the verification and one which does nothing at all) are only implementations; there is no mixing.
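
And a sketch of the second design, where a single USBConnection delegates to a PacketVerifier strategy and the “do nothing” case is a Null Object rather than a conditional.  The constructor injection and the verifier class names are assumed details:

public interface PacketVerifier
{
  void Verify(Packet packet);  // throws BadPacketException if the packet is malformed
}

public class StrictPacketVerifier : PacketVerifier
{
  public void Verify(Packet packet)
  {
    if (!IsWellFormed(packet)) throw new BadPacketException();
  }

  private bool IsWellFormed(Packet packet) { /* assumed check */ return true; }
}

public class NullPacketVerifier : PacketVerifier
{
  public void Verify(Packet packet) { /* intentionally does nothing */ }
}

public class USBConnection
{
  private readonly PacketVerifier verifier;

  public USBConnection(PacketVerifier verifier) { this.verifier = verifier; }

  public void Send(Packet packet)
  {
    verifier.Verify(packet);  // the strategy decides whether and how to check
    /* write to the port */
  }
}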

Either way (and we will examine which of these we prefer, and why, in a later blog) we have created this separation of concerns.  In the first design, a change to NonVerifiedUSBConnection will never propagate to VerifiedUSBConnection, and the same is true in the reverse.  Instances of neither of the implementing types can be accidentally cast to the other.  In the second design, these qualities are the same for the PacketVerifier implementations.

Design quality is all about maintainability, about the ability to add, modify, change, scale, and extend without excessive risk and waste.  If our tests are first-class citizens in our system, they must be well-designed too.

Let’s look back at a piece of code from “TDD Mark 3 Introduced”:

[TestMethod]
public void TestLoadingLessThanMinimalFundsThrowsException()
{
  LoadInitialFunds(MinimalFunds());
  uint insufficientFunds = MinimalFunds() - 1;
  try
  {
    LoadInitialFunds(insufficientFunds);
    Assert.Fail("Card should have thrown a " + 
            typeof(Account.InsufficientFundsException).Name());
  }
  catch (Account.InsufficientFundsException exception)
  {
    Assert.AreEqual(insufficientFunds, exception.Funds());
  }
}

private uint MinimalFunds()
{
  return Account.MINIMAL_FUNDS;
}

private void LoadInitialFunds(uint funds)
{
  // The Account constructor performs the actual fund loading (an implementation detail)
  Account account = new Account(funds);
}

The public method (marked [TestMethod]) expresses the specification conceptually: the concept of loading funds, the notion of “minimal funds”, and the idea that a whole dollar is the epsilon of the behavioral boundary all comprise the conceptual perspective.  The fact that “minimal funds” is a constant on the Account class, and the fact that the fund-loading behavior is done by the constructor of Account, are implementation details that could be changed without the concepts being affected.

For example, we may later decide to store the minimal funds in a database, to make it configurable.  We may decide to validate the minimum level in a service object that Account uses, or we could build Account in a factory and allow the factory to validate that the funds are sufficient.  These changes would impact, in each case, a single private method on the test, and the conceptual public method would be unchanged.
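
As a small illustration, suppose the minimal funds were later read from configuration rather than a constant; only the private helper would change, and the [TestMethod] above would stay exactly as written (FundsConfiguration is a hypothetical name, not part of the original example):

private uint MinimalFunds()
{
  // Assumed: the minimum now comes from configuration instead of Account.MINIMAL_FUNDS
  return FundsConfiguration.GetMinimalFunds();
}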

This is the next step in sustainability, and we will be investigating many aspects of it.  How will it change the way we write tests?  How will it change dependency management?  Should these private methods actually be extracted into a separate class?  If so, when and why would we decide to do that?

We’d love to hear from you….


