Net Objectives


Thursday, September 27, 2012

Testing Through APIs

Download the Podcast

We recently got a question from Tomas Vykruta, a colleague of ours, and we felt it turned out to be such a good, fruitful question that we wanted to pass it, and our answers, along in this blog.

Here is Tomas' question:
Do you prefer to have unit tests written against the public API, or to test individual functions inside the API? I've seen both approaches at my company, and in many cases, a single class is unit tested with a mix of the two. I haven't seen this topic addressed in any style or testing guides, so it seems to be left as a choice to the author.

While there is likely no right or wrong answer here and each class will require some combination, I thought it would be interesting to enumerate your real-world experiences (good and bad) resulting from these two strategies. Off the top of my head, here are some pros (+) and cons (-).

API-level:
+ If internal implementation details of the API change, the unit tests don't have to. Less maintenance.
+ Serves as documentation for public usage of the API.
+ Does not require fabricating the internal API in such a way as to make every function easily testable.
+- Possibly less code to write.
- Does not serve as documentation for individual internal functions.
- Unit tests are less likely to test every single internal function thoroughly.
- Test failures can take some time to track down and identify, and require understanding the internal API.

Internal API unit testing (individual functions):
+ Unit tests are very simple, short, quick to write and read.
+ Functions are very thoroughly tested, easy to verify against full range of inputs.
+ Serves as documentation for every internal function.
+ Test failures are easily identifiable even for engineers not familiar with the code base, since each test is focused on a very limited bit of code.
- When any implementation details change, the tests must change with it.
- Not useful to pure external API users who don't care about internal implementation details.

Scott's Response:
My view is this: if you consider the test suite as you would a specification of the system, then the question as to whether to test at one level or another becomes: “is it specified?” 

Systems produce behavioral effects, and these effects are what determine the value of the system.  Value, however, is always from the point of view of a “client” or “customer” and every system has several customers.  All these customers have a behavioral view of the system which can be specified.

For example, the end users have a specification: “this can accurately calculate my income tax”.  But so does the legal department: “this has a EULA that indemnifies us against tax penalties”.  And the marketing department: “the system has a tax-evaluation feature that our competitor does not”.  And the developers themselves: “this has an extensible system for taxation algorithms.”  Etc…

Anything in anyone’s spec needs a test.  Some of these will be at the API level, some will be further in.

Not all implementation details are part of the specification.  If you are able to refactor a particular implementation and still satisfy all customer specifications, then the implementation does not require a separate test.

Amir's Response:
Scott has already expanded on the difference between testing and specification. I would like to add a little to this ‘specification’ perspective.

Let me start by saying that all TDD tests must only use public interfaces. This can be interpreted to mean – you must only test through APIs, as they are the public interface of the system. This is true when you consider the external consumers of the system. They see only the public API and hence ‘feel’ the system’s behavior through it. The TDD test will specify what this behavior is (for better or worse).

And just to clarify – when we say ‘public interface,’ we do not refer only to the exposed functional interface. A public interface can also be the GUI, database schema, specific file formats, file names, URL format, a log (or trace facility), Van Eck phreaking, or a Ouija board. As long as the usage of the public interface allows an external entity to affect your system or vice versa, it is considered public.

Some of the interfaces mentioned above may be used by entities within the company, such as support or QA. For all intents and purposes they are still customers of the system, and as such their needs (e.g., the types of error reports generated under specific circumstances, or the ability to throttle the level of tracing done, or the ability to remotely control a client system) must be specified in the TDD tests. After all, you still want the ‘intra-organizational’ behavior to be known and invariant to other changes.

When we do TDD, however, we are not concerned only with the system’s external behavior (as defined above), but also with its internal behavior. This internal behavior has two manifestations (and this is our arbitrary nomenclature, but I hope it makes sense). First is the architecture of the product; second is its design. These two may seem to be the same, but there is a subtle difference between them.

The system’s architecture is the set of design decisions that were made to accomplish functional and performance goals. Once set, these become a requirement. An individual developer or team cannot decide to do things differently, but has to operate within these architectural guidelines. This is specified through a set of tests that specify how every architectural element contributes to the implementation of the desired overall behavior.

The system’s design is the set of design decisions that are made by the team and individual developers, and are considered to be ‘implementation choices’. The team can assign whichever responsibilities it deems reasonable to the different design entities in order to achieve the desired behavior. This is all well, except that there is one ‘tacit’ requirement that is solely the responsibility of the team (and probably the technical organization management). This requirement is maintainability, and it is what guides the team in their design choices. The TDD tests help us specify both what the system design is and also what the specific responsibilities assigned to the system entities are.

The point about both design and architecture is that they are internal to the system. As such, how can you test-drive them through the system’s APIs? By testing through the APIs I can see that the behavior is specified correctly. I cannot see that the architecture is adhered to or that the design promotes maintainability.

The answer to this paradox lies in the definition of the word ‘public’. Public is a relative term. If you live in a high-rise condo, then the ‘public’ interface may be the building’s front door. But consider the individual apartments. The neighbors can’t come into your condo at will, can they? The condo has a public interface – its door – which is hidden from those outside the building (private) but visible and usable by the neighbors inside. Inside your condo this division continues. You have rooms, with doors (their public interfaces), and storage cabinets, with their doors, and boxes, with their lids, and bottles with their caps. What we get is a complex set of enclosures, each public to its immediate surroundings and private to anything further out.

Computer systems are the same. The APIs are the public doorways to the surrounding clients – these clients do not see the way the system is composed. But the elements of the system themselves do see this design: they can see the other elements (which they interact with), although they cannot see inside these elements. The interfaces that these inner elements expose – are they private or public? Well, that depends on whom you ask. From the perspective of the outside clients, they are private. From the perspective of the peer elements, they are public. Since they are public, they should be specified through TDD, and this is exactly how we specify the system’s architecture and design.
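To ground this in the earlier tax example, here is a minimal sketch (in Java, with invented names) of an element that is private to the system's outside clients but public to its peer elements, and therefore gets its own specification:

```java
// Illustrative only: TaxRule is invisible to the system's external
// clients, but it is the public interface its peer elements use --
// so it gets its own TDD specification.
interface TaxRule {
    boolean appliesTo(int income);
    int taxFor(int income);
}

// One internal element living behind the system's API.
class FlatTaxRule implements TaxRule {
    private final int maxIncome;
    private final int percent;

    FlatTaxRule(int maxIncome, int percent) {
        this.maxIncome = maxIncome;
        this.percent = percent;
    }

    public boolean appliesTo(int income) { return income <= maxIncome; }
    public int taxFor(int income) { return income * percent / 100; }
}

// A peer-level specification: it drives the element through the
// interface its peers see, not through the system's external API.
public class FlatTaxRuleSpec {
    public static void main(String[] args) {
        TaxRule rule = new FlatTaxRule(10000, 10);
        System.out.println(rule.appliesTo(8000));   // true
        System.out.println(rule.appliesTo(20000));  // false
        System.out.println(rule.taxFor(8000));      // 800
    }
}
```

The outside world never learns that FlatTaxRule exists, yet its peer-public behavior is pinned down by tests all the same.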

So, in a nutshell, the answer to the question “do we test external or internal APIs?” is yes.
We would love to hear from all of you on this question!





Wednesday, August 8, 2012

Testing the Chain of Responsibility, Part 2

Download the podcast

Chain Composition Behaviors

We always design services for multiple clients.  Even if a service (like the Processor service in our example) has only a single client today, we want to allow for multiple clients in the future.  In fact, we want to promote this; any effort expended to create a service will return increasing value when multiple clients end up using it.

So, one thing we definitely want to do is to limit/reduce the coupling from the clients’ point of view. The run-time view of the CoR from the client’s point of view should be extremely limited:

Note that the reality, on the right, is hidden from the client, on the left.  This means we can add more processors, remove existing ones, change the order of them, change the rules of the termination of the chain, change how any/all of the rules are implemented... and when we do, this requires no maintenance on the clients.  This is especially important if there are (or will be, or may be) clients that we don’t even control.  Maybe they live in code belonging to someone else.

The one place where reality cannot be concealed is wherever the chain objects are instantiated.  The concrete types, the fact that this is a linked list, and the current order of the list will be revealed to the entity that creates the service.   If this is done in the client objects, then they all will have this information (it will be redundant).  Also, there is no guarantee that any given client will build the service correctly; there is no enforcement of the rules of its construction.  

This obviously leads us to prefer another option.  We may, for example, decide to move all creation issues into a separate factory object.

It may initially seem that by doing so we’re just moving the problem elsewhere, essentially sweeping it under the rug. The advantage comes from the fact that factory objects, unlike clients,  do not tend to increase in number.  So, at least we’ve limited our maintenance to one place.  Also, if factories are only factories then we are not intermixing client behavior and construction behavior.  This results in simpler code in the factories, which tends to be easier to maintain.  Finally, if all clients use the factory to create the service, then we know (if the factory works properly) that the service is always built correctly.

We call this the separation of use from creation, and it turns out to be a pretty important thing to focus on.  Here, this would lead us to create a ProcessorFactory that all clients can use to obtain the service, and then use it blindly.  Initially, this might seem like a very simple thing to do:

public class ProcessorFactory {
    public Processor GetProcessor() {
        return new LargeValueProcessor(
            new SmallValueProcessor(
                new TerminalProcessor()));
    }
}

Pretty darned simple.  From the clients’ perspective, the issue to specify in a test is also very straightforward: I get the right type from the factory:

[TestClass]
public class ProcessorFactoryTest {
    [TestMethod]
    public void TestFactoryReturnsProperType() {
        Processor processor =
            new ProcessorFactory().GetProcessor();
        Assert.IsTrue(processor is Processor);
    }
}

This test represents the requirement from the point of view of any client object.  Conceptually it tells the tale, though in a strongly-typed language we might not want to actually write it.  This is something the compiler enforces, and therefore it is a test that could never fail if it compiles.  Your mileage may vary.

However, there is another perspective, with different requirements that must also be specified.  In TDD, we need to specify in tests:

  1. Which processors are included in the chain (how many and their types)
  2. The order that they are placed into the chain (sometimes)  [4]

Now that the rules of construction are in one place (which is good), we must also specify that the factory works as it should, given that all clients will now depend on this correctness.

However, when we try to specify the chain composition in this way we run into a challenge:  since we have strongly encapsulated all the details, we have also hidden them from the test.  We often encounter this in TDD; encapsulation, which is good, gets in the way of specification through tests.

Here is another use for mocks.  However, in this case we are going to use them not simply to break dependencies but rather to “spy” on the internal aspects of an otherwise well-encapsulated design. Knowing how to do this yields a huge advantage: it allows us to enjoy the benefits of strong encapsulation without giving up the equally important benefits of a completely automated specification and test suite.

This can seem a little tricky at first so we’ll go slow here, step by step.  Once you get the idea, however, it’s actually quite straightforward and a great thing to know how to do.

Step 1: Create internal separation in the factory

Let’s refactor the factory just a little bit.  We’re going to pull each object creation statement (new X()) into its own helper method.  This is very simple, and in fact most modern IDEs will do it for you: highlight the code, right-click > Refactor > Extract Method.

public class ProcessorFactory {
    public Processor GetProcessor() {
        return MakeFirstProcessor(
            MakeSecondProcessor(
                MakeLastProcessor()));
    }

    protected virtual Processor MakeFirstProcessor(
            Processor aProcessor) {
        return new LargeValueProcessor(aProcessor);
    }

    protected virtual Processor MakeSecondProcessor(
            Processor aProcessor) {
        return new SmallValueProcessor(aProcessor);
    }

    protected virtual Processor MakeLastProcessor() {
        return new TerminalProcessor();
    }
}

Note that these helper methods would almost certainly be made private by an automated refactoring tool.  We’ll have to change them to protected virtual (or just protected in a language like Java, where methods are virtual by default) for our purposes.  You’ll see why.

Step 2: Subclass the factory to return mocks from the helper methods

This is another example of the endo testing technique we examined in our section on dependency injection:

private class TestableProcessorFactory : ProcessorFactory {
    protected override Processor MakeFirstProcessor(
            Processor aProcessor) {
        return new LoggingMockProcessor(
            typeof(LargeValueProcessor), aProcessor);
    }

    protected override Processor MakeSecondProcessor(
            Processor aProcessor) {
        return new LoggingMockProcessor(
            typeof(SmallValueProcessor), aProcessor);
    }

    protected override Processor MakeLastProcessor() {
        LoggingMockProcessor mock = new LoggingMockProcessor(
            typeof(TerminalProcessor), null);
        mock.iElect = true;
        return mock;
    }
}

This would almost certainly be a private inner class of the test.  If you look closely you’ll see three important details.  

  • Each helper method is returning an instance of the same type (which we’ll implement next),  LoggingMockProcessor, but in each case the mock is given a different type to specify in its constructor [5]
  • The presence of the aProcessor parameter  in each method specifies the chaining behavior of the factory (which is what we will observe behaviorally through the mocks)  
  • The MakeLastProcessor() conditions the mock to elect.  As you’ll see, these mocks do not elect by default (causing the entire chain to be traversed) but the last one must, to specify the end of delegation

Step 3: Create a logging mock object and a log object to track the chain from within

Here is the code for the mock:

private class LoggingMockProcessor : Processor {
    private readonly Type mytype;
    public static readonly Log log = new Log();
    public bool iElect = false;

    public LoggingMockProcessor(Type processorType,
            Processor nextProcessor) : base(nextProcessor) {
        mytype = processorType;
    }

    protected override bool ShouldProcess(int value) {
        log.Add(mytype);
        return iElect;
    }

    protected override int ProcessThis(int value) {
        return 0;
    }
}

The key behavior here is the implementation of ShouldProcess() to add a reference of the actual type this mock represents to a logging object.  This is the critical part -- when the chain of mocks is asked to process, each mock will record that it was reached, the type it represents, and we can also capture the order in which they are reached if we care about that.

The implementation of  ProcessThis() is trivial because we are only interested in the chain’s composition, not its behavior.  We’ve already fully specified the behaviors in previous tests, and each test should be as unique as possible.  

Also note that this mock, as it is only needed here, should be a private inner class of the test.  Because the two issues, inclusion and sequence, are part of the same behavior (creation), everything will be specified in a single test.

The Log, also a private inner class of the test, looks something like this:

private class Log {
    private List<Type> myList;

    public void Reset() {
        myList = new List<Type>();
    }

    public void Add(Type t) {
        myList.Add(t);
    }

    public void AssertSize(int expectedSize) {
        Assert.AreEqual(expectedSize, myList.Count);
    }

    public void AssertAtPosition(Type expected, int position) {
        Assert.AreEqual(expected, myList[position]);
    }
}

It’s just a simple encapsulated list, but note that it contains two custom assertions.  This is preferred because it allows us to keep our test focused on the issues it is specifying, and not on the details of “how we know”.  It makes the specification more readable, and easier to change.  

(A detail: The log is “resettable” because it is held statically by the mock.  This is done to make it easy for all the mock instances to write to the same log that the test will subsequently read.  There are other ways to do this, of course, but this way involves the least infrastructure.  Since the log and the mock are private inner classes of the test, this static member represents very little danger of unintended coupling.)
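One of those "other ways", for the curious, is to inject the log through the mock's constructor instead of holding it statically. A quick Java sketch (names invented) shows the trade: no shared static state and no Reset() call, at the cost of a little more plumbing:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal log, as in the article, but owned by whoever creates it.
class ChainLog {
    private final List<String> entries = new ArrayList<>();
    void add(String type) { entries.add(type); }
    int size() { return entries.size(); }
}

// A stand-in for the logging mock: the log is injected, not static.
class InjectedLogMock {
    private final ChainLog log;
    private final String type;

    InjectedLogMock(ChainLog log, String type) {
        this.log = log;
        this.type = type;
    }

    boolean shouldProcess(int value) {
        log.add(type);   // same spying behavior as the static version
        return false;
    }
}

public class InjectedLogDemo {
    public static void main(String[] args) {
        ChainLog log = new ChainLog();  // the test owns the log outright
        new InjectedLogMock(log, "LargeValueProcessor").shouldProcess(42);
        new InjectedLogMock(log, "SmallValueProcessor").shouldProcess(42);
        System.out.println(log.size()); // 2 -- and no Reset() is needed
    }
}
```

The catch, of course, is that every factory helper must now thread the log into each mock it creates, which is exactly the extra infrastructure the static approach avoids.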

Step 4: Use the “spying” capability of the mock in a specification of the chain composition

Let’s look at the test itself:

[TestMethod]
public void TestFactoryReturnsProperChainOfProcessors() {
    // Setup
    ProcessorFactory factory = new TestableProcessorFactory();
    const int correctChainLength = 3;
    List<Type> correctCollection =
        new List<Type> {
            typeof(LargeValueProcessor),
            typeof(SmallValueProcessor),
            typeof(TerminalProcessor)
        };
    Processor processorChain = factory.GetProcessor();
    Log myLog = LoggingMockProcessor.log;
    myLog.Reset();

    // Trigger
    processorChain.Process(Any.Value);

    // Verification
    myLog.AssertSize(correctChainLength);
    for (int i = 0; i < correctCollection.Count; i++) {
        myLog.AssertAtPosition(correctCollection[i], i);
    }
}

If the order of the processors was not important, we would simply change the way the log reports their inclusion:

// In Log
public void AssertContains(Type expected){
       Assert.IsTrue(myList.Contains(expected));
}

...and call this from the test instead.

// In TestFactoryReturnsProperChainOfProcessors()
for (int i = 0; i < correctCollection.Count; i++) {
       myLog.AssertContains(correctCollection[i]);
}

Some testing frameworks actually provide special Asserts for collections like this.
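For example, MSTest provides CollectionAssert.AreEqual (order-sensitive) and CollectionAssert.AreEquivalent (order-insensitive), and JUnit 5 provides assertIterableEquals. The comparisons these asserts perform boil down to something like this quick Java sketch:

```java
import java.util.List;

public class CollectionAssertDemo {
    public static void main(String[] args) {
        List<String> expected = List.of("LargeValueProcessor",
                                        "SmallValueProcessor",
                                        "TerminalProcessor");
        List<String> actual = List.of("LargeValueProcessor",
                                      "SmallValueProcessor",
                                      "TerminalProcessor");

        // Order-sensitive comparison (AreEqual / assertIterableEquals style):
        System.out.println(expected.equals(actual));            // true

        // Order-insensitive comparison (AreEquivalent style):
        System.out.println(expected.containsAll(actual)
                && actual.containsAll(expected));               // true
    }
}
```

The framework versions are worth preferring over a hand-rolled loop mainly because they report which element mismatched, and at what position, when the assertion fails.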

Objections

OK, we know what some of you are thinking.  “Guys, this is the code you’re testing:”

public Processor GetProcessor() {
    return MakeFirstProcessor(
        MakeSecondProcessor(
            MakeLastProcessor()));
}

“...and look at all the *stuff* you’ve created to do so!  Your test is several times the size of the thing you’re testing!   Arrrrrrrrrgh!”

This is a completely understandable objection, and one we’ve felt in the past.  But to begin with, remember that in our view this is not a test, it is a specification.  It’s not that unusual for specifications to be longer than the code they specify.  Sometimes it’s the other way around.  It just depends on the nature of the specification and the implementation involved.

The specification of the way the space shuttle opened the cargo bay doors was probably a book. The computer code that opened it was likely much shorter.

Also, this is a reflection of the relative value of each thing.  Recently, a friend who runs a large development team got a call in the middle of the night, warning him of a major failure in their server farm involving both development and test servers.  He knew all was well since they have offsite backups, but as he was driving into work in the wee hours he had time to ask himself “if I lost something here... would I rather lose our product code, or our tests?”
He realized he would rather lose the product code.  Re-creating the source from the tests seemed like a lot less work than the opposite (that would certainly be true here).  But what that really means is that the test/specifications actually have more irreplaceable value than the product code does.

In TDD, the tests are part of the project.  We create and maintain them just like we do the product code.  Everything we do must produce value... and that’s the point, not whether one part of the system is larger than another.  And while TDD style tests do certainly take time and effort to write, remember that they have persistent value because they can be automatically verified later.

Finally, ask yourself what you would do here if the system needed to be changed, say, to support small, medium, and large values?  We would test-drive the new MediumValueProcessor, and then change TestFactoryReturnsProperChainOfProcessors() and watch it fail.  We’d then update the factory, and watch the failing test go green. We’d also have automatic confirmation that all other tests remained green throughout.
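To make that concrete, here is a runnable miniature of the change, sketched in Java for brevity. The MediumValueProcessor's bounds (10001-15000, with Large narrowed to 15001-20000) and its pass-through behavior are our inventions for the illustration, not part of the original problem statement:

```java
// A miniature of the article's Processor hierarchy, for illustration only.
abstract class Proc {
    private final Proc next;
    Proc(Proc next) { this.next = next; }
    int process(int v) {
        return shouldProcess(v) ? processThis(v) : next.process(v);
    }
    abstract boolean shouldProcess(int v);
    abstract int processThis(int v);
}

class Large extends Proc {          // bounds narrowed for the example
    Large(Proc n) { super(n); }
    boolean shouldProcess(int v) { return v > 15000 && v <= 20000; }
    int processThis(int v) { return v / 2; }
}

class Medium extends Proc {         // the new, test-driven processor
    Medium(Proc n) { super(n); }
    boolean shouldProcess(int v) { return v > 10000 && v <= 15000; }
    int processThis(int v) { return v; }  // pass-through: purely invented
}

class Small extends Proc {
    Small(Proc n) { super(n); }
    boolean shouldProcess(int v) { return v >= 1 && v <= 10000; }
    int processThis(int v) { return v * 2; }
}

class Terminal extends Proc {
    Terminal() { super(null); }
    boolean shouldProcess(int v) { return true; }
    int processThis(int v) { throw new IllegalArgumentException(); }
}

public class MediumValueSketch {
    public static void main(String[] args) {
        // The factory's one-line change: Medium is linked in second.
        Proc chain = new Large(new Medium(new Small(new Terminal())));
        System.out.println(chain.process(12000)); // 12000: Medium elected
        System.out.println(chain.process(5000));  // 10000: Small doubled it
    }
}
```

Every client of the factory picks up the new link without any change of its own, which is the whole payoff of separating use from creation.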

That’s an awfully nice way to change a system.  We know exactly what to do, and we have concrete confirmation that we did exactly and only that.  Such confidence is hard to get in our business!

-----
Links:

http://www.netobjectives.com/competencies/separate-use-from-construction
http://www.netobjectives.com/resources/separate-use-construction

-----

[4] Some CoRs require their chain elements to be in a specific order.  Some do not.  For example, we would not want the TerminalProcessor to be anywhere but at the end of the chain.  So, while we may not always care about/need to specify this issue, it’s important to know how to do it.  So we’ll assume here that, for whatever domain reason, LargeValueProcessor must be first, SmallValueProcessor must be second, and TerminalProcessor must be third.

[5] We’re using the class objects of the actual types.  You could use anything unique: strings with the classnames, an enumeration, even just constant values.  We like the class objects because we already have them.  Less work!

Wednesday, July 11, 2012

Testing the Chain of Responsibility, Part 1 (redux)

Download the podcast

Testing the Chain of Responsibility

The Chain of Responsibility pattern (hereafter CoR) is one of the original “Gang of Four” patterns.  We’re assuming you know this pattern already, but if not you might want to read about it first at the Net Objectives Pattern Repository.

The basic idea is this: you have a series of rules (or algorithms) that are conceptually the same.  Only one of the rules will apply in a given circumstance.  You want to decouple the client objects that use the rules from:

  1. The fact that there is more than one rule
  2. How many rules there are
  3. How each rule is implemented
  4. How the correct rule is selected
  5. Which rule actually acted on any given request

All the clients should see/couple to is the common interface that all rules export, and perhaps the factory that creates them (or, from the clients’ perspective, the factory that creates “it”).

The CoR, in its classic form [1], accomplishes this by chaining the rules together, and handing a reference to the first one (in an upcast to the shared abstraction) to the client. When the client requests the action, the first rule decides “for itself” if it should act.  We call this “electing”.  If the rule elects, it performs the action and returns its result.  If it does not elect, it delegates to the next rule in the chain, and so on until some rule elects.  Regardless of which rule elects, the result is propagated back up the chain to the client. Typically only one rule will elect, and when one does we stop asking the rules that follow it; it just acts and returns, and we’re done.

Let’s examine a concrete example, and look at the design and some code.  We’ll keep the example very simple, so the pattern and testing techniques are easy to see.

Problem Statement

We have to process an integer.  There are two ways of processing it: a processing algorithm that is appropriate for “small” values (which are defined in the domain as any value in the range of 1 - 10000) and a different algorithm that is appropriate for “large” values (10001 - 20000).  Values over 20000 are not allowed.

Again, for simplicity, we’ll say that the large processor algorithm halves the value it is given, while the small processor doubles it.  If neither processing algorithm is appropriate, the system must throw an exception indicating an unsupported value was given.

Using the CoR


The classic CoR design view of this problem would look like this:

The Classic Chain of Responsibility

The Code

public abstract class Processor {
    public const int MIN_SMALL_VALUE = 1;
    public const int MAX_SMALL_VALUE = 10000;
    public const int MIN_LARGE_VALUE = 10001;
    public const int MAX_LARGE_VALUE = 20000;

    private readonly Processor nextProcessor;

    protected Processor(Processor aProcessor) {
       nextProcessor = aProcessor;
    }

    public int Process(int value) {
           int returnValue = 0;

           if(ShouldProcess(value)) {
               returnValue = ProcessThis(value);
           } else {
               returnValue = nextProcessor.Process(value);
           }
           return returnValue;
    }

    protected abstract bool ShouldProcess(int value);
    protected abstract int ProcessThis(int value);
}

Note the use of the Template Method Pattern [2] in this base class.  This eliminates the otherwise redundant part of the “decision making” that all the various processors would share, and delegates to the two abstract methods where the specific implementation in each case will be supplied in the derived classes.

Here they are:

public class LargeValueProcessor : Processor {
    public LargeValueProcessor(Processor aProcessor) :
        base(aProcessor) {}

    protected override bool ShouldProcess(int value) {
        return value >= MIN_LARGE_VALUE &&
               value <= MAX_LARGE_VALUE;
    }

    protected override int ProcessThis(int value) {
        return value / 2;
    }
}

public class SmallValueProcessor : Processor {
    public SmallValueProcessor(Processor aProcessor) :
        base(aProcessor) {}

    protected override bool ShouldProcess(int value) {
        return value >= MIN_SMALL_VALUE &&
               value <= MAX_SMALL_VALUE;
    }

    protected override int ProcessThis(int value) {
        return value * 2;
    }
}

public class TerminalProcessor : Processor {
    public TerminalProcessor() : base(null) {}

    protected override bool ShouldProcess(int value) {
        return true;
    }

    protected override int ProcessThis(int value) {
        throw new ArgumentException();
    }
}

In testing this pattern, we have a number of behaviors to specify:

Common Chain-Traversal Behaviors
  1. That a processor which elects itself will not delegate to the next processor
  2. That a processor which does not elect itself will delegate to the next processor, and will forward the parameter(s) it was given unchanged
  3. That a processor which did not elect will “hand back” (return) any result returned to it from the next processor without changing the result

Individually Varying Processor Behaviors
  1. That a given processor will choose to act (elect) when it should
  2. That a given processor will not elect when it shouldn't
  3. That upon acting, the given processor will perform its function correctly

Chain Composition Behaviors
  1. That the chain appears to be the proper abstraction to the client
  2. The chain is made up of the right processors
  3. The processors are given “a chance” in the right order (if this is important)

Common Chain-Traversal Behaviors

All these behaviors are implemented in the base class Processor, via the template method, to avoid redundancy.  We don’t want redundancy in the tests either, so the place to specify these behaviors is in a test of one entity: the base class.  Unfortunately, the base class is an abstract type and thus cannot be instantiated.  One might think “well, just pick one of the processors -- it does not matter which one -- and write the test using that.  All derived classes can access the behavior of their base class, after all.”

We don’t want to do that.  First of all, it would couple the test of the common behaviors to the existence of the particular processor we happened to choose.  What if that implementation gets retired at some point in the future?  We’ll have to do test maintenance just because we got unlucky.  Or, what if a bug is introduced in the concrete processor we picked?  This could cause the test of the base behavior to fail when the base class is working just fine, due to the inheritance coupling.  That would be a misleading failure; we never want our tests to lie to us.  Coupling should always be intentional, and should always work for us, not against us.

Here’s another good use for a mock.  If we make a mock implementation of the base class, it, like any other derived class, will have access to the common behavior.

class MockProcessor : Processor {
    public bool willElect = false;
    public bool wasAskedtoProcess = false;
    public int valueReceived = 0;
    public int returnValue = 0;

    public MockProcessor(Processor aProcessor) : 
           base(aProcessor){}

    protected override bool ShouldProcess(int value) {
           wasAskedtoProcess = true;
           valueReceived = value;
           return willElect;
    }

    protected override int ProcessThis(int value) {
           return returnValue;
    }
}

Note we keep this as simple as possible.  This is really part of the test, and will thus not be tested itself.  In fact, if we didn’t need it for two different tests, we’d probably make it an inner class of the test (which we call an inner shunt; more on shunts later).

The tests that specify the proper chain-traversal behavior are simply conducted with two instances of the mock, chained together.  The first can be told to elect or not, and the second can be examined to see what happens to it with each scenario.

The first scenario concerns what should happen if the first processor does not elect, but delegates to the second processor:

[TestClass]
public class ProcessorDelegationTest {
    private MockProcessor firstProcessor;
    private MockProcessor secondProcessor;
    private int valueToProcess;
    private int returnedValue;

    [TestInitialize]
    public void Init() {
        // Setup
        secondProcessor = new MockProcessor(null);
        secondProcessor.willElect = true;
        firstProcessor = new MockProcessor(secondProcessor);
        firstProcessor.willElect = false;
        valueToProcess = Any.Value; // [3]
        secondProcessor.returnValue = Any.Value;

        // Common Trigger
        returnedValue =
            firstProcessor.Process(valueToProcess);
    }

    [TestMethod]
    public void TestDelegationHappensWhenItShould() {
        Assert.IsTrue(secondProcessor.wasAskedtoProcess);
    }

    [TestMethod]
    public void TestDelegationHappensWithUnchangedParameter() {
        Assert.AreEqual(valueToProcess,
            secondProcessor.valueReceived);
    }

    [TestMethod]
    public void TestDelegationHappensWithUnchangedReturn() {
        Assert.AreEqual(returnedValue,
            secondProcessor.returnValue);
    }
}

These tests specify the three aspects of a processor that does not elect.  Note each aspect is in its own test method.  By telling the first mock not to elect, we can inspect the second mock to ensure that it got called, that it got the parameter unchanged, and that whatever it returns to the first mock is propagated back out without being changed.

The second scenario is where the first processor does elect.  All we need to prove here is that it does not delegate to the second processor.  Whether it does the right thing, algorithmically, will be specified in the tests of the actual processors (we’ll get to that).

[TestClass]
public class ProcessorNonDelegationTest {
    [TestMethod]
    public void TestNoDelegationWhenProcessorElects() {
        MockProcessor secondProcessor =
            new MockProcessor(null);
        MockProcessor firstProcessor =
            new MockProcessor(secondProcessor);
        firstProcessor.willElect = true;

        firstProcessor.Process(Any.Value);

        Assert.IsFalse(secondProcessor.wasAskedtoProcess);
    }
}

At first this might seem odd.  We’re writing tests that only use mocks?  That seems like a snake eating itself... the test is testing the test.  But remember, when we instantiate a subclass (in this case, the mock), the base class portion of the object is constructed along with it, and the base class is where the behavior we’re specifying actually exists.  We’re not testing the mock, we’re testing the template method in the abstract base class through the mock.
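For reference, here is a sketch of the abstract base class these tests exercise, as implied by the tests themselves.  The real implementation appeared in the earlier part of this series, so treat this as an approximation, not the definitive version:

```csharp
// Sketch of the Processor base class, inferred from the tests above.
public abstract class Processor {
    private readonly Processor nextProcessor;

    protected Processor(Processor aProcessor) {
        nextProcessor = aProcessor;
    }

    // The template method: the chain-traversal logic lives here, once.
    // The terminal processor always elects, so nextProcessor is never
    // followed off the end of the chain.
    public int Process(int value) {
        if (ShouldProcess(value)) {
            return ProcessThis(value);
        }
        return nextProcessor.Process(value);
    }

    // The varying behaviors, supplied by each concrete processor.
    protected abstract bool ShouldProcess(int value);
    protected abstract int ProcessThis(int value);
}
```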

Individually Varying Processor Behaviors

Now that we’ve specified and proven that the delegation and traversal issues are correct, we now only have two things to specify in each individual processor: that it will elect only when it should, and that it will process correctly when it does.  The exception is the Terminal processor which, of course, should simply always elect and always throw an exception.

The problem here is that the only public method of the concrete processors is the Process() method, which is established (and tested) in the base class.  It would be a mistake, and a rather easy one to make, to write the tests of the concrete processors through the Process() method.  Doing so would couple these new tests to the ones we’ve already written, and over the long haul this would dramatically reduce the maintainability of the suite.

What we need to do is write tests that directly access the protected methods ShouldProcess() and ProcessThis(), giving them different values to ensure they do what they are specified to do in the case of each concrete processor.  Normally, such methods would not be accessible to the test, but we can fix this, simply, by deriving the test from the class in each case.  For example:

[TestClass]
public class SmallValueProcessorTest : SmallValueProcessor {
    public SmallValueProcessorTest() : base(null) {}

    [TestMethod]
    public void TestSmallValueProcessorElectsCorrectly() {
        Assert.IsTrue(
            ShouldProcess(Processor.MIN_SMALL_VALUE));
        Assert.IsFalse(
            ShouldProcess(Processor.MIN_SMALL_VALUE - 1));
        Assert.IsTrue(
            ShouldProcess(Processor.MAX_SMALL_VALUE));
        Assert.IsFalse(
            ShouldProcess(Processor.MAX_SMALL_VALUE + 1));
    }

    [TestMethod]
    public void TestSmallValueProcessorProcessesCorrectly() {
        int valueToBeProcessed =
            Any.ValueBetween(Processor.MIN_SMALL_VALUE,
                             Processor.MAX_SMALL_VALUE);
        int expectedReturn = valueToBeProcessed * 2;
        Assert.AreEqual(expectedReturn,
                        this.ProcessThis(valueToBeProcessed));
    }
}

Note we have to give our test a constructor, just to satisfy the base class contract (chaining to its parameterized constructor, passing null).  If you dislike this, and/or if you dislike the direct coupling between the test and the class under test, an alternative is to use a testing adapter:

[TestClass]
public class LargeValueProcessorTest {
    private LargeValueProcessorAdapter testAdapter;

    [TestInitialize]
    public void Init() {
        testAdapter = new LargeValueProcessorAdapter();
    }

    [TestMethod]
    public void TestLargeValueProcessorElectsCorrectly() {
        Assert.IsTrue(
            testAdapter.ShouldProcess(Processor.MIN_LARGE_VALUE));
        Assert.IsFalse(
            testAdapter.ShouldProcess(Processor.MIN_LARGE_VALUE - 1));
        Assert.IsTrue(
            testAdapter.ShouldProcess(Processor.MAX_LARGE_VALUE));
        Assert.IsFalse(
            testAdapter.ShouldProcess(Processor.MAX_LARGE_VALUE + 1));
    }

    [TestMethod]
    public void TestLargeValueProcessorProcessesCorrectly() {
        int valueToBeProcessed =
            Any.ValueBetween(Processor.MIN_LARGE_VALUE,
                             Processor.MAX_LARGE_VALUE);
        int expectedReturn = valueToBeProcessed / 2;
        Assert.AreEqual(expectedReturn,
                        testAdapter.ProcessThis(valueToBeProcessed));
    }

    private class LargeValueProcessorAdapter :
                LargeValueProcessor {
        public LargeValueProcessorAdapter() : base(null) {}

        public new bool ShouldProcess(int value) {
            return base.ShouldProcess(value);
        }

        public new int ProcessThis(int value) {
            return base.ProcessThis(value);
        }
    }
}

We leave it up to you to decide which is desirable, but we’d recommend you pick one technique and stick with it.

Note that the first test method (TestLargeValueProcessorElectsCorrectly()) is a boundary (range)  test, and the second test method (TestLargeValueProcessorProcessesCorrectly()) is a test of a static behavior.  Refer to our blog on Test Categories for more details, if you’ve not already read that one.

Finally, we need to specify the exception-throwing behavior of the terminal processor.  This could be done either through direct subclassing or via a testing adapter; we’ll use direct subclassing for brevity:

[TestClass]
public class TerminalProcessorTest : TerminalProcessor {
    [TestMethod]
    public void TestTerminalProcessorAlwaysElects() {
        Assert.IsTrue(ShouldProcess(Any.Value));
    }

    [TestMethod]
    public void TestTerminalProcessorThrowsExceptionWhenProcessing() {
        try {
            ProcessThis(Any.Value);
            Assert.Fail("TerminalProcessor should always throw an exception when reached");
        } catch (ArgumentException) {}
    }
}

This may look a bit odd, but we’ll talk about exceptions and testing in another entry.  For now we think you can see that this test will pass if the exception is thrown when the terminal processor is reached, and fail if it is not.
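As an aside, MSTest also provides the [ExpectedException] attribute, which expresses the same specification more tersely.  Note, though, that the attribute passes if the exception is thrown anywhere in the test method, and you lose the explicit failure message of the try/catch form:

```csharp
// Alternative form using MSTest's ExpectedException attribute.
[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void TestTerminalProcessorThrowsExceptionWhenProcessing() {
    ProcessThis(Any.Value);
}
```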

Oh, and don’t forget to specify your public constants!

[TestClass]
public class ConstantSpecificationTest
{
    [TestMethod]
    public void SpecifyConstants()
    {
           Assert.AreEqual(1, Processor.MIN_SMALL_VALUE);
           Assert.AreEqual(10000, Processor.MAX_SMALL_VALUE);
           Assert.AreEqual(10001, Processor.MIN_LARGE_VALUE);
           Assert.AreEqual(20000, Processor.MAX_LARGE_VALUE);
    }
}

In the next part, we’ll examine the third set of issues that have to do with the composition of the chain itself... that all the required elements are there, and that they are in the proper order (in cases where order is important).  This will present us with an opportunity to discuss object factories, and how to test/specify them.

Stay tuned!

-----

[1] It’s important to note that patterns are not implementations.  We know many other forms of this pattern, but in this section we will focus on the implementation shown in the Gang of Four.

[2] Unfamiliar with the Template Method Pattern?  We have a write up of it here:
http://www.netobjectivestest.com/PatternRepository/index.php?title=TheTemplateMethodPattern

[3] The use of an “Any” class is a subject unto itself.  For now, just know that Any.Value returns a random integer, while Any.ValueBetween(min, max) returns a random integer within the given range.
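If you want a concrete picture, here is one possible minimal implementation, consistent with how the tests above use it.  This is only a sketch; the actual helper can do considerably more:

```csharp
// A minimal sketch of an Any helper, matching its usage in the tests.
public static class Any {
    private static readonly Random random = new Random();

    // An arbitrary positive integer, kept small enough that test
    // arithmetic (doubling, for instance) will not overflow.
    public static int Value {
        get { return random.Next(1, 10000); }
    }

    // A random integer in the inclusive range [min, max].
    // Random.Next's upper bound is exclusive, hence the + 1.
    public static int ValueBetween(int min, int max) {
        return random.Next(min, max + 1);
    }
}
```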