How unit tests inform us about our design
As proponents of Test-Driven Development, we constantly seek to identify and point out the many ways in which the effort a team puts into writing tests pays off. As with most powerful techniques, TDD tends to deliver value in multiple ways across the entire development effort.
One example is this: we all agree that design is important, but most people would say that 'good' design is hard to define. It's like pornography and the Supreme Court: we cannot quite define it, but we know it when we see it. Good designers often have an instinctive ability to come up with good designs.
The problem with this is that such an instinct is hard to confirm, and even harder to argue for if other members of the team disagree. If I think my proposed design is good and you think it is not, then unless I have some way of applying a metric or rubric, it is hard to form a compelling argument in favor of my design.
Support may come from an unexpected source: the system's unit tests. We all know that bad tests may be written for the best-designed systems, so for the purpose of our discussion we'll assume that the tests we have written (or are planning to write) are pristine and cannot be improved upon. These pristine unit tests, individually and collectively, provide a unique insight into the design of the system they cover. We call this "test reflexology".
By looking at various markers in the tests we are able to identify whether we have issues with our design, and whether it is as good as it could be. So, without further ado, let's look at some of these indicators, lovingly called test smells, and see what they tell us about the design of the code they exercise. In part 1, we'll deal with individual test smells. A future post will deal with suite smells.
A Large Fixture
Often a unit test must create instances of one or more types that are needed by the type(s) being tested. These instances are collectively called the "fixture" for the test. A large fixture (one that contains many additional instances) makes the test harder to write and may make it slower to run.
A large fixture is a strong indication of coupling; if, in order to test the behavior of class Foo, we also need instances of classes Bar and Bas, then Foo is obviously coupled in some way to these other classes. If there are many of these "other classes", then we have a lot of coupling in the system being tested.
This coupling is not necessarily a bad thing. For example, if the tested class is a controller for other classes, then the coupling makes sense. If, however, the coupling is to server classes, we need to ask ourselves why this class needs to interact with so many other classes. Is it responsible for too many things? We may well have a cohesion problem; if so, we should either split the class or introduce a façade.
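As a minimal sketch of what this smell looks like in practice (the classes Foo, Bar, and Bas and their methods are hypothetical, invented here for illustration), every object constructed in the fixture below is a coupling the test is reporting:

```python
import unittest

# Hypothetical collaborators; the names follow the article's example.
class Bar:
    def __init__(self):
        self.ready = True

class Bas:
    def __init__(self, bar):
        self.bar = bar

class Foo:
    # Foo needs both a Bar and a Bas to do its work; we can read this
    # coupling directly off the size of the test fixture below.
    def __init__(self, bar, bas):
        self.bar = bar
        self.bas = bas

    def is_operational(self):
        return self.bar.ready and self.bas.bar is self.bar

class FooTest(unittest.TestCase):
    def setUp(self):
        # The fixture: every extra instance created here represents a
        # dependency of Foo. Two is manageable; a dozen would be a smell.
        self.bar = Bar()
        self.bas = Bas(self.bar)
        self.foo = Foo(self.bar, self.bas)

    def test_foo_is_operational(self):
        self.assertTrue(self.foo.is_operational())
```

If setUp grows to build many such collaborators, the fixture is telling us Foo knows about too much of the system.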
Excessively Conditioning the Unit
When testing a unit, the conditioning (setup) done to the unit should be minimal. If conditioning is required, it should be similar in nature for all tests run on the unit. For example, when testing Foo one may have to put a given instance of Foo into a particular state before the test can be run. Let's say there is a rule that Foo should throw an exception if it is called more than 10 times. The test would have to call it 10 times before asserting that it throws the exception on the 11th call. If a test does this, it is an indicator that Foo may be doing too much; perhaps the "only 10" rule should be handled in another (helper) class. This is another example of a lack of cohesion.
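The "only 10 times" rule above can be sketched as follows (a hypothetical Foo, invented for illustration; the article does not specify its interface). Notice how much conditioning the test must do before it can make its one assertion:

```python
class TooManyCallsError(Exception):
    """Raised when Foo is called more than its allowed 10 times."""
    pass

class Foo:
    MAX_CALLS = 10

    def __init__(self):
        self.calls = 0

    def do_work(self):
        # Foo both does its work AND enforces the call-count rule;
        # the heavy test conditioning below hints these should be split.
        self.calls += 1
        if self.calls > self.MAX_CALLS:
            raise TooManyCallsError("Foo may only be called 10 times")
        return "done"

def test_foo_raises_on_eleventh_call():
    foo = Foo()
    # Conditioning: ten calls just to get Foo into the state under test.
    for _ in range(Foo.MAX_CALLS):
        foo.do_work()
    try:
        foo.do_work()
        assert False, "expected TooManyCallsError on the 11th call"
    except TooManyCallsError:
        pass
```

Extracting the counting rule into a small helper class would let it be tested directly, without conditioning Foo at all.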
Consider the following scenario: class Foo needs to be tested, but part of what Foo does is access a web service. Furthermore, the WSDL of the connection is given to Foo through its constructor, or via a setter, or made available as global state in the system. Once Foo has the WSDL, it creates a connection to the web service. Unfortunately, we now must test Foo in the presence of the real web service, and thus our test is about more than one thing (and can therefore fail for more than one reason). Also, our test will likely run so slowly that we will not be able to run it frequently, which is anathema to TDD.
This indicates a different sort of cohesion problem: Foo is both the user and the creator of the web service. Using and creating are different responsibilities. If we invert the dependency relationship, we can easily mock the web service and thus isolate the test of Foo. In other words, rather than passing the WSDL to Foo, we pass it the actual connection to the web service. This design change also makes Foo far more reusable, as it is decoupled from the nature of the connection.
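The inversion described above might look like this sketch (the Foo interface and its `fetch_report` method are hypothetical, invented for illustration). Because the connection is injected rather than created from a WSDL, the test can substitute a mock and never touch the real service:

```python
from unittest import mock

class Foo:
    # The connection is injected; Foo is now only a USER of the service,
    # not its creator, so a test can hand it anything connection-shaped.
    def __init__(self, connection):
        self.connection = connection

    def fetch_report(self):
        return self.connection.call("getReport")

def test_fetch_report_without_real_service():
    # A mock stands in for the web-service connection: the test is fast,
    # and it can only fail because of Foo's own logic.
    fake_connection = mock.Mock()
    fake_connection.call.return_value = {"status": "ok"}

    foo = Foo(fake_connection)

    assert foo.fetch_report() == {"status": "ok"}
    fake_connection.call.assert_called_once_with("getReport")
```

Whatever code used to build the connection from the WSDL still exists, but it now lives in one creating place, outside of Foo.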
Excessively Conditioning the Fixture
Similarly, instances of the fixture classes may need to be brought into specific states before the test can be conducted. "Whipping the fixture into shape" can be complex, time-consuming, and otherwise problematic, as specific knowledge and understanding of these objects is needed. This is a coupling problem: instead of coupling to interfaces, the tested unit is coupled to specific implementations. This breaks a fundamental design guideline: design to interfaces.
The solution may seem straightforward: introduce interfaces for all these specific implementations and mock them. This is not always as simple as it seems. These other objects may have complex interfaces that are hard to mock. Moreover, these concrete objects may be difficult or impossible to change for any number of reasons. In this case, a simpler solution may be to introduce a mockable façade that reduces the coupling in the system by managing access to these external objects.
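A sketch of that façade idea (all class and method names here are hypothetical, invented for illustration): an awkward legacy class is wrapped behind a narrow surface, and the tested unit couples only to that surface, which is trivial to fake:

```python
class LegacyBilling:
    # Stands in for a concrete class we cannot change; its interface is
    # wide and awkward, which makes it painful to mock directly.
    def open_session(self, region, mode, flags):
        return object()

    def post(self, session, code, amount, currency):
        return True

class BillingFacade:
    # The façade hides the legacy ceremony behind one simple method.
    def __init__(self, legacy):
        self._legacy = legacy

    def charge(self, amount):
        session = self._legacy.open_session("US", "batch", 0)
        return self._legacy.post(session, "STD", amount, "USD")

class Invoice:
    # The unit under test is now coupled only to the narrow façade.
    def __init__(self, billing):
        self.billing = billing

    def settle(self, amount):
        return self.billing.charge(amount)

class FakeBilling:
    # Faking the one-method façade is trivial; faking LegacyBilling is not.
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return True
```

A test of Invoice passes a FakeBilling and never has to whip a LegacyBilling instance into shape.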
...continued on next post...
Notes:
1. "I can't define pornography, but I know it when I see it." Justice Stewart in Jacobellis v. Ohio, 378 US 184 (1964).
2. We like this analogy. If you don't know what reflexology is, take a look here: http://en.wikipedia.org/wiki/Reflexology
3. Hey, it's a link! Follow it! In this case it points to our Pattern Repository, where we explain the pattern and talk further about testing it.
4. Another link, this time to one of our many Webinars; this one is about Design Wisdom.