Net Objectives


Friday, August 9, 2013

TDD Mark 3 Introduced

First of all, sorry for the long absence.  Our training schedule has been wall-to-wall, and when one of us has had a brief gap the other has always been busy.

It has given us time to think, however.  Long airplane rides and such. :)

We've been playing around with an idea we're calling (for the moment) TDD Mark 3 (the notion that TDD is not about testing but rather about specification being TDD Mark 2).  To give you an idea of what we're thinking, let's look at an example of TDD Mark 2 as we've been writing tests up to this point, and then refactor it to the TDD Mark 3 style.

Mark 2

So, what is the requirement? Our client is a cruise ship operator. Some of what is offered on the cruise is free; the rest is a paid extra. On the ship, a guest can pay for extras with a standard credit card, or with the ship's debit card. Paying with the ship's debit card gives the guest a 5% discount on the purchase cost. The catch is that to use the ship's debit card, the guest has to load the card for the first time with at least $2,000. An attempt to load the card with less than that amount should fail.
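To ground the tests that follow, here is one possible Card implementation consistent with them. This is our sketch, not production code from the cruise line's system: the names MINIMAL_FUNDS, LoadFunds(), InsufficientFundsException, and Funds() come from the tests; everything else (the balance bookkeeping, the loadedBefore flag) is an assumption.

```csharp
using System;

public class Card
{
    public const uint MINIMAL_FUNDS = 2000;

    private uint balance;
    private bool loadedBefore;

    public Card(/* card holder's details */) { }

    // Thrown when the initial load is below the minimum; carries the offending amount.
    public class InsufficientFundsException : Exception
    {
        private readonly uint funds;
        public InsufficientFundsException(uint funds) { this.funds = funds; }
        public uint Funds() { return funds; }
    }

    public void LoadFunds(uint funds)
    {
        // Only the first load is subject to the $2,000 minimum.
        if (!loadedBefore && funds < MINIMAL_FUNDS)
            throw new InsufficientFundsException(funds);
        balance += funds;
        loadedBefore = true;
    }

    public uint Balance { get { return balance; } }
}
```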

[TestClass]
public class FundLoaderTDD
{
  [TestMethod]
  public void TestMinimalFunds()
  {
    Assert.AreEqual(2000, Card.MINIMAL_FUNDS);
  }

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    uint minimalFunds = Card.MINIMAL_FUNDS;
    Card card = new Card(/*card holder's details*/);
    card.LoadFunds(minimalFunds);

    uint insufficientFunds = minimalFunds - 1;
    card = new Card(/*card holder's details*/);
    try
    {
      card.LoadFunds(insufficientFunds);
      Assert.Fail("Card should have thrown a " +
                  typeof(Card.InsufficientFundsException).Name);
    }
    catch (Card.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }
}



The meaning here is very clear: the LoadFunds() method of Card will throw an InsufficientFundsException if you try to load an amount less than the minimal allowed value.  We also show that if the minimal amount is loaded, an exception is not thrown.  This constitutes a very typical specification of a behavioral boundary anchored at the value MINIMAL_FUNDS. Note also that we have specified what that value is in the first test.

Naturally, there are many other tests that specify the various aspects of the Card's behavior, and together they turn the user's requirement into an executable specification.  That's what Mark 2 is all about.

Refactor to Mark 3

We all know the importance of good design. Good design enables proper code maintainability (more on that in a future blog), which has to do with dealing with change.

We should also acknowledge that the tests that we write are not "second class citizens". They require as much love and attention as the production code they specify. This means that after the test has been written we have an opportunity to refactor its design. This is done with respect to specific changes that may be required in the code. These can come from two sources - changing requirements and changing the domain model to reflect changing responsibilities.

A changing requirement could include raising the minimal limit or creating a graded discount structure. A changing domain includes adding or removing classes, or methods on classes.

The customer's new requirement is this: there are other ways to charge the guest for on-board services. It turns out that guests often do not carry the card with them (to the pool, for example) but would still like to purchase cute drinks with little pink umbrellas. To enable that, a biometric system was installed where guests can charge the drink to their card by swiping their finger over a fingerprint scanner incorporated into the card reader held by the server.

This means that the model we created, where the Card was the central object, needs to be refined, and an Account class introduced. The Card is just one way of interacting with the account.

What effect will this have on our test? All references to Card must be replaced with references to Account. Considering our test code, there are two redundancies that we can identify: Card.MINIMAL_FUNDS and card.LoadFunds().

[TestClass]
public class FundLoaderTDD
{
  [TestMethod]
  public void TestMinimalFunds()
  {
    Assert.AreEqual(2000, Card.MINIMAL_FUNDS);
  }

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    uint minimalFunds = Card.MINIMAL_FUNDS;
    Card card = new Card(/*Any card holder's details*/);
    card.LoadFunds(minimalFunds);

    uint insufficientFunds = minimalFunds - 1;
    card = new Card(/*Any card holder's details*/);
    try
    {
      card.LoadFunds(insufficientFunds);
      Assert.Fail("Card should have thrown a " +
                  typeof(Card.InsufficientFundsException).Name);
    }
    catch (Card.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }
}


We don't like redundancies in our tests any more than we like them in our production code.  We extract the redundancies into methods:


[TestClass]
public class FundLoaderTDD
{
  [TestMethod]
  public void TestMinimalFunds()
  {
    Assert.AreEqual(2000, MinimalFunds());
  }

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    uint minimalFunds = MinimalFunds();
    card = new Card(/*Any card holder's details*/);
    LoadFunds(minimalFunds);

    uint insufficientFunds = minimalFunds - 1;
    card = new Card(/*Any card holder's details*/);
    try
    {
      LoadFunds(insufficientFunds);
      Assert.Fail("Card should have thrown a " +
                  typeof(Card.InsufficientFundsException).Name);
    }
    catch (Card.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }

  Card card;

  private uint MinimalFunds()
  {
    return Card.MINIMAL_FUNDS;
  }

  private void LoadFunds(uint funds)
  {
    card.LoadFunds(funds);
  }
}


We can inline the minimalFunds local variable and get:

[TestClass]
public class FundLoaderTDD
{
  [TestMethod]
  public void TestMinimalFunds()
  {
    Assert.AreEqual(2000, MinimalFunds());
  }

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    card = new Card(/*Any card holder's details*/);
    LoadFunds(MinimalFunds());

    uint insufficientFunds = MinimalFunds() - 1;
    card = new Card(/*Any card holder's details*/);
    try
    {
      LoadFunds(insufficientFunds);
      Assert.Fail("Card should have thrown a " +
                  typeof(Card.InsufficientFundsException).Name);
    }
    catch (Card.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }

  Card card;

  private uint MinimalFunds()
  {
    return Card.MINIMAL_FUNDS;
  }

  private void LoadFunds(uint funds)
  {
    card.LoadFunds(funds);
  }
}


Wait! There's another redundancy above:

    try
    {
      //...
      Assert.Fail("Card should have thrown a " +
                  typeof(Card.InsufficientFundsException).Name);
    }
    catch (Card.InsufficientFundsException exception)
    {
      //...
    }


We are specifying the type of the exception twice... We'll deal with that redundancy in a bit, so we'll put it on the to-do list. Meanwhile, back to the refactored tests.  We do not like the name we gave the LoadFunds method; it's misleading. The customer does not want the exception to be thrown every time the card is loaded with a small amount -- only on the initial load. So perhaps this is better:

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    LoadInitialFunds(MinimalFunds());

    uint insufficientFunds = MinimalFunds() - 1;
    try
    {
      LoadInitialFunds(insufficientFunds);
      Assert.Fail("Card should have thrown a " +
                  typeof(Card.InsufficientFundsException).Name);
    }
    catch (Card.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }

  Card card;

  private void LoadInitialFunds(uint funds)
  {
    card = new Card(/*Any card holder's details*/);
    card.LoadFunds(funds);
  }

Note that the card's initialization was moved into the LoadInitialFunds method.


Besides shifting the funds-handling responsibility to the Account object, it was also deemed useful to shift the initial loading of funds from a specific method to the constructor. So, the $64,000 question: how many places do we need to make this change in? One:

  Card card;

  private void LoadInitialFunds(uint funds)
  {
    card = new Card(/*card holder's details*/);
    Account account = new Account(funds);
  }

And where should the limit be defined? In Account, which will return the value from a method.
  
  private uint MinimalFunds()
  {
    return Account.MinimalFunds();
  }

Finally, we make the changes in the test itself; these are needed only because we left the two references to the exception in the test.

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    LoadInitialFunds(MinimalFunds());

    uint insufficientFunds = MinimalFunds() - 1;
    try
    {
      LoadInitialFunds(insufficientFunds);
      Assert.Fail("Account should have thrown a " +
                  typeof(Account.InsufficientFundsException).Name);
    }
    catch (Account.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }



The public methods are the specification; the private methods encapsulate the implementation. Well, almost, with the exception of the exception handling. But why is an exception being thrown at all?

Well, if you remember, the customer wanted the user to be notified if the amount is too small. Exceptions are just one way of doing that. So we can safely say that the specific exception is an implementation detail, and given the role we want the public method to play (specification), we really need to get that implementation detail out of there.

So, here's a question to our readers. How would you do it? Note that although we used C# right now, the refactoring principles are relevant to any language.

So, without dealing with the exception yet, this is what the test code looks like.

[TestClass]
public class FundLoaderTDD
{
  [TestMethod]
  public void TestMinimalFunds()
  {
    Assert.AreEqual(2000, MinimalFunds());
  }

  [TestMethod]
  public void TestLoadingLessThanMinimalFundsThrowsException()
  {
    LoadInitialFunds(MinimalFunds());

    uint insufficientFunds = MinimalFunds() - 1;
    try
    {
      LoadInitialFunds(insufficientFunds);
      Assert.Fail("Account should have thrown a " +
                  typeof(Account.InsufficientFundsException).Name);
    }
    catch (Account.InsufficientFundsException exception)
    {
      Assert.AreEqual(insufficientFunds, exception.Funds());
    }
  }

  private uint MinimalFunds()
  {
    return Account.MinimalFunds();
  }

  private void LoadInitialFunds(uint funds)
  {
    Account account = new Account(funds);
  }
}


The public methods now essentially constitute an acceptance test.  In fact, those familiar with acceptance testing frameworks like FIT would express what these unit test methods communicate in another form, like a table for example, and the private methods would be the fixtures written to connect the tests to the system's implementation.
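For example, in a FIT-style tool the boundary specification above might be rendered as a column-fixture table roughly like this (the fixture name and column headers are hypothetical; they are only meant to illustrate the form):

```text
InitialFundsLoading
initial amount | load accepted?
2000           | yes
1999           | no
```

The private helper methods of our test class would then play the role of the fixture code that connects such a table to the system.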

This does make the test class longer and more verbose, but it also makes it easier to read just the specification part, if that's all you are interested in.  Also, when design changes are made later (let's say, for example, that we decide to build the Account in a factory, or store the minimal initial value in a configuration file), only one private method will be affected by a given change, and none of the public methods at all (which makes sense, since the design has been altered but not the acceptance criteria).

Mark 3

The separation of perspectives created in the code above is a result of refactoring, but it actually makes sense regardless. The public test method is written by intention, and describes the conceptual behavior of the system.

We also have a separation between the specification and the implementation. We call these different perspectives, and they allow us to focus first on getting the requirement right, and then on getting the design right. We can change the design without affecting the requirement.

This is a major piece of making TDD sustainable, as it allows us to change the system design without affecting the public tests which specify the behavior.

So, the $1,000,000 question is: "Why not write the tests that way to begin with?"

To Be Continued....

Friday, February 22, 2013

Hiatus

We just wanted to make clear to our readers that this blog, and the book we are working on through it, are not dead.  Amir and I have just been in very high demand for training since the holidays, but we will be back!  Stay tuned.....

-Scott-

Thursday, September 27, 2012

Testing Through API's

Download the Podcast

We recently got a question from Tomas Vykruta, a colleague of ours, and we felt it turned out to be such a good, fruitful question that we wanted to pass it, and our answers, along in this blog.

Here is Tomas' question:
Do you prefer to have unit tests written against the public API, or to test individual functions inside the API? I've seen both approaches at my company, and in many cases, a single class is unit tested with a mix of the two. I haven't seen this topic addressed in any style or testing guides, so it seems to be left as a choice to the author.

While there is likely no right or wrong answer here and each class will require some combination, I thought it would be interesting to enumerate your real world experiences (good and bad) resulting from these 2 strategies. Off the top of my head, here are some pros (+) and cons (-).

API-level:
+ If internal implementation details of the API change, the unit tests don't have to. Less maintenance.
+ Serves as documentation for public usage of API.
+ Does not require fabricating the internal API in a way as to make every function easily testable.
+- Possibly less code to write.
- Does not serve as documentation for individual internal functions.
- Unit tests are less likely to test every single internal function thoroughly.
- Test failures can take some time to track down and identify and require understanding the internal API.

Internal API unit testing (individual functions):
+ Unit tests are very simple, short, quick to write and read.
+ Functions are very thoroughly tested, easy to verify against full range of inputs.
+ Serves as documentation for every internal function.
+ Test failures are easily identifiable even for engineers not familiar with the code base, since each test is focused on a very limited bit of code.
- When any implementation details change, the tests must change with it.
- Not useful to pure external API users who don't care about internal implementation details.

Scott's Response:
My view is this: if you consider the test suite as you would a specification of the system, then the question as to whether to test at one level or another becomes: “is it specified?” 

Systems produce behavioral effects, and these effects are what determine the value of the system.  Value, however, is always from the point of view of a “client” or “customer” and every system has several customers.  All these customers have a behavioral view of the system which can be specified.

For example, the end users have a specification: “this can accurately calculate my income tax”.  But so does the legal department: “this has a EULA that indemnifies us against tax penalties”.  And the marketing department: “the system has a tax-evaluation feature that our competitor does not”.  And the developers themselves: “this has an extensible system for taxation algorithms.”  Etc…

Anything in anyone’s spec needs a test.  Some of these will be at the API level, some will be further in.

Not all implementation details are part of the specification.  If you are able to refactor a particular implementation and still satisfy all customer specifications, then the implementation does not require a separate test.

Amir's Response:
Scott has already expanded on the difference between testing and specification. I would like to add a little to this ‘specification’ perspective.

Let me start by saying that all TDD tests must only use public interfaces. This can be interpreted to mean – you must only test through APIs, as they are the public interface of the system. This is true when you consider the external consumers of the system. They see only the public API and hence ‘feel’ the system’s behavior through it. The TDD test will specify what this behavior is (for better or worse).

And just to clarify – when we say ‘public interface,’ we do not refer only to the exposed functional interface. A public interface can also be the GUI, database schema, specific file formats, file names, URL format, a log (or trace facility), Van Eck phreaking, or a Ouija board. As long as the usage of the public interface allows an external entity to affect your system or vice versa, it is considered public.

Some of the interfaces mentioned above may be used by entities within the company, such as support or QA. For all intents and purposes they are still customers of the system, and as such their needs (e.g., the types of error report generated under specific circumstances, or the ability to throttle the level of tracing done, or the ability to remotely control a client system) must be specified in the TDD tests. After all, you still want the ‘intra-organizational’ behavior to be known and invariant to other changes.

When we do TDD however, we are not concerned only about the system’s external behavior (as defined above), but also about its internal behavior. This internal behavior has two manifestations (and this is our arbitrary nomenclature, but I hope it makes sense). First is the architecture of the product, second is its design. These two may seem to be the same but there is a subtle difference between them.

The system’s architecture is the set of design decisions that were made to accomplish functional and performance goals. Once set, these become a requirement. An individual developer or team cannot decide to do things differently, but has to operate within these architectural guidelines. This is specified through a set of tests that specify how every architectural element contributes to the implementation of the desired overall behavior.

The system’s design is the set of design decisions that are made by the team and individual developers, and are considered to be ‘implementation choices’. The team can assign whichever responsibilities it deems reasonable to the different design entities in order to achieve the desired behavior. This is all well, except that there is one ‘tacit’ requirement that is solely the responsibility of the team (and probably the technical organization management). This requirement is maintainability, and it is what guides the team in their design choices. The TDD tests help us specify both what the system design is and also what the specific responsibilities assigned to the system entities are.

The point about both design and architecture is that they are internal to the system. As such, how can you test-drive them through the system’s APIs? By testing through the APIs I can see that the behavior is specified correctly. I cannot see that the architecture is adhered to or that the design promotes maintainability.

The answer to this paradox lies in the definition of the word ‘public’. Public is a relative term. If you live in a high rise condo, then the ‘public’ interface may be the building’s front door. But consider the individual apartments. The neighbors can’t come into your condo at will, can they? The condo has a public interface – its door, which is hidden to those outside the building (private) but visible and usable by the internal neighbors. Inside your condo this division continues. You have rooms, with doors (their public interfaces), and storage cabinets, with their doors, and boxes, with their lids, and bottles with their caps. What we get is a complex set of enclosures which are public to their immediate surrounding and private to anything further out.

Computer systems are the same. The APIs are the public doorways for the surrounding clients; these clients do not see how the system is composed. But the elements of the system themselves do see this design: they can see the other elements they interact with, although they cannot see inside those elements. Are the interfaces that these inner elements expose private or public? Well, that depends on whom you ask. From the perspective of the outside clients they are private; from the perspective of the peer elements they are public. Since they are public, they should be specified through TDD, and this is exactly how we specify the system’s architecture and design.

So, in a nutshell, the answer to the question – “do we test external or internal APIs” is yes.
We would love to hear from all of you on this question!





Wednesday, August 8, 2012

Testing the Chain of Responsibility, Part 2

Download the podcast

Chain Composition Behaviors

We always design services for multiple clients.  Even if a service (like the Processor service in our example) has only a single client today, we want to allow for multiple clients in the future.  In fact, we want to promote this; any effort expended to create a service will return increasing value when multiple clients end up using it.

So, one thing we definitely want to do is to limit/reduce the coupling from the clients’ point of view. The run-time view of the CoR from the client’s point of view should be extremely limited:

Note that the reality, on the right, is hidden from the client, on the left.  This means we can add more processors, remove existing ones, change the order of them, change the rules of the termination of the chain, change how any/all of the rules are implemented... and when we do, this requires no maintenance on the clients.  This is especially important if there are (or will be, or may be) clients that we don’t even control.  Maybe they live in code belonging to someone else.

The one place where reality cannot be concealed is wherever the chain objects are instantiated.  The concrete types, the fact that this is a linked list, and the current order of the list will be revealed to the entity that creates the service.   If this is done in the client objects, then they all will have this information (it will be redundant).  Also, there is no guarantee that any given client will build the service correctly; there is no enforcement of the rules of its construction.  

This obviously leads us to prefer another option.  We may, for example, decide to move all creation issues into a separate factory object.

It may initially seem that by doing so we’re just moving the problem elsewhere, essentially sweeping it under the rug. The advantage comes from the fact that factory objects, unlike clients,  do not tend to increase in number.  So, at least we’ve limited our maintenance to one place.  Also, if factories are only factories then we are not intermixing client behavior and construction behavior.  This results in simpler code in the factories, which tends to be easier to maintain.  Finally, if all clients use the factory to create the service, then we know (if the factory works properly) that the service is always built correctly.

We call this the separation of use from creation, and it turns out to be a pretty important thing to focus on.  Here, this would lead us to create a ProcessorFactory that all clients can use to obtain the service, and then use it blindly.  Initially, this might seem like a very simple thing to do:

public class ProcessorFactory {
    public Processor GetProcessor() {
        return new LargeValueProcessor(
            new SmallValueProcessor(
                new TerminalProcessor()));
    }
}

Pretty darned simple.  From the clients’ perspective, the issue to specify in a test is also very straightforward: I get the right type from the factory:

[TestClass]
public class ProcessorFactoryTest {
    [TestMethod]
    public void TestFactoryReturnsProperType() {
         Processor processor =
              new ProcessorFactory().GetProcessor();
         Assert.IsTrue(processor is Processor);
    }
}

This test represents the requirement from the point of view of any client object.  Conceptually it tells the tale, though in a strongly-typed language we might not want to actually write it.  This is something the compiler enforces, and therefore is a test that could never fail if it compiles.  Your mileage may vary.

However, there is another perspective, with different requirements that must also be specified.  In TDD, we need to specify in tests:

  1. Which processors are included in the chain (how many and their types)
  2. The order that they are placed into the chain (sometimes)  [4]

Now that the rules of construction are in one place (which is good) this also means that we must specify that it works as it should, given that all clients will now depend on this correctness.

However, when we try to specify the chain composition in this way we run into a challenge:  since we have strongly encapsulated all the details, we have also hidden them from the test.  We often encounter this in TDD; encapsulation, which is good, gets in the way of specification through tests.

Here is another use for mocks.  However, in this case we are going to use them not simply to break dependencies but rather to “spy” on the internal aspects of an otherwise well-encapsulated design. Knowing how to do this yields a huge advantage: it allows us to enjoy the benefits of strong encapsulation without giving up the equally important benefits of a completely automated specification and test suite.

This can seem a little tricky at first so we’ll go slow here, step by step.  Once you get the idea, however, it’s actually quite straightforward and a great thing to know how to do.

Step 1: Create internal separation in the factory

Let’s refactor the factory just a little bit.  We’re going to pull each object creation statement (new x()) into its own helper method.  This is very simple, and in fact most modern IDEs will do it for you; highlight the code, right-click > refactor > extract method.

public class ProcessorFactory {
    public Processor GetProcessor() {
        return MakeFirstProcessor(
            MakeSecondProcessor(
                MakeLastProcessor()));
    }

    protected virtual Processor MakeFirstProcessor(Processor aProcessor) {
        return new LargeValueProcessor(aProcessor);
    }

    protected virtual Processor MakeSecondProcessor(Processor aProcessor) {
        return new SmallValueProcessor(aProcessor);
    }

    protected virtual Processor MakeLastProcessor() {
        return new TerminalProcessor();
    }
}

Note that these helper methods would almost certainly be made private by an automated refactoring tool.  We’ll have to change them to protected virtual (or just protected in a language like Java where methods are virtual by default) for our purposes.  You’ll see why.

Step 2: Subclass the factory to return mocks from the helper methods

This is another example of the endo testing technique we examined in our section on dependency injection:

private class TestableProcessorFactory : ProcessorFactory {
    protected override Processor MakeFirstProcessor(Processor aProcessor) {
        return new LoggingMockProcessor(
            typeof(LargeValueProcessor), aProcessor);
    }

    protected override Processor MakeSecondProcessor(Processor aProcessor) {
        return new LoggingMockProcessor(
            typeof(SmallValueProcessor), aProcessor);
    }

    protected override Processor MakeLastProcessor() {
        LoggingMockProcessor mock = new LoggingMockProcessor(
            typeof(TerminalProcessor), null);
        mock.iElect = true;
        return mock;
    }
}

This would almost certainly be a private inner class of the test.  If you look closely you’ll see three important details.  

  • Each helper method is returning an instance of the same type (which we’ll implement next),  LoggingMockProcessor, but in each case the mock is given a different type to specify in its constructor [5]
  • The presence of the aProcessor parameter  in each method specifies the chaining behavior of the factory (which is what we will observe behaviorally through the mocks)  
  • The MakeLastProcessor() conditions the mock to elect.  As you’ll see, these mocks do not elect by default (causing the entire chain to be traversed) but the last one must, to specify the end of delegation

Step 3: Create a logging mock object and a log object to track the chain from within

Here is the code for the mock:

private class LoggingMockProcessor : Processor {
    private readonly Type mytype;
    public static readonly Log log = new Log();
    public bool iElect = false;

    public LoggingMockProcessor(Type processorType,
            Processor nextProcessor) : base(nextProcessor) {
        mytype = processorType;
    }

    protected override bool ShouldProcess(int value) {
        log.Add(mytype);
        return iElect;
    }

    protected override int ProcessThis(int value) {
        return 0;
    }
}

The key behavior here is the implementation of ShouldProcess() to add a reference of the actual type this mock represents to a logging object.  This is the critical part -- when the chain of mocks is asked to process, each mock will record that it was reached, the type it represents, and we can also capture the order in which they are reached if we care about that.

The implementation of  ProcessThis() is trivial because we are only interested in the chain’s composition, not its behavior.  We’ve already fully specified the behaviors in previous tests, and each test should be as unique as possible.  

Also note that this mock, as it is only needed here, should be a private inner class of the test.  Because the two issues, inclusion and sequence, are part of the same behavior (creation), everything will be specified in a single test.

The Log, also a private inner class of the test, looks something like this:

private class Log {
    private List<Type> myList;

    public void Reset() {
        myList = new List<Type>();
    }

    public void Add(Type t) {
        myList.Add(t);
    }

    public void AssertSize(int expectedSize) {
        Assert.AreEqual(expectedSize, myList.Count);
    }

    public void AssertAtPosition(Type expected, int position) {
        Assert.AreEqual(expected, myList[position]);
    }
}

It’s just a simple encapsulated list, but note that it contains two custom assertions.  This is preferred because it allows us to keep our test focused on the issues it is specifying, and not on the details of “how we know”.  It makes the specification more readable, and easier to change.  

(A detail: The log is “resettable” because it is held statically by the mock.  This is done to make it easy for all the mock instances to write to the same log that the test will subsequently read.  There are other ways to do this, of course, but this way involves the least infrastructure.  Since the log and the mock are private inner classes of the test, this static member represents very little danger of unintended coupling.)
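One of those other ways, sketched here as an assumption rather than a recommendation, is to hand the Log to each mock through its constructor, so nothing is static. The trade-off is that the testable factory must then thread the log through to every mock it creates.

```csharp
// Hypothetical variant of the mock with an injected log (no static state).
private class LoggingMockProcessor : Processor {
    private readonly Type mytype;
    private readonly Log log;   // instance field instead of a static member
    public bool iElect = false;

    public LoggingMockProcessor(Type processorType,
            Processor nextProcessor, Log log) : base(nextProcessor) {
        mytype = processorType;
        this.log = log;
    }

    protected override bool ShouldProcess(int value) {
        log.Add(mytype);        // same spying behavior as before
        return iElect;
    }

    protected override int ProcessThis(int value) { return 0; }
}
```

The test would then construct one Log and pass it to the testable factory, which forwards it to each mock; Reset() becomes unnecessary because each test creates a fresh Log.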

Step 4: Use the “spying” capability of the mock in a specification of the chain composition

Let’s look at the test itself:

[TestMethod]
public void TestFactoryReturnsProperChainOfProcessors() {
    // Setup
    ProcessorFactory factory = new TestableProcessorFactory();
    const int correctChainLength = 3;
    List<Type> correctCollection = new List<Type> {
        typeof (LargeValueProcessor),
        typeof (SmallValueProcessor),
        typeof (TerminalProcessor)
    };
    Processor processorChain = factory.GetProcessor();
    Log myLog = LoggingMockProcessor.log;
    myLog.Reset();

    // Trigger
    processorChain.Process(Any.Value);

    // Verification
    myLog.AssertSize(correctChainLength);
    for (int i = 0; i < correctCollection.Count; i++) {
        myLog.AssertAtPosition(correctCollection[i], i);
    }
}

If the order of the processors was not important, we would simply change the way the log reports their inclusion:

// In Log
public void AssertContains(Type expected) {
    Assert.IsTrue(myList.Contains(expected));
}

...and call this from the test instead.

// In TestFactoryReturnsProperChainOfProcessors()
for (int i = 0; i < correctCollection.Count; i++) {
    myLog.AssertContains(correctCollection[i]);
}

Some testing frameworks actually provide special Asserts for collections like this.
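In MSTest, for instance, CollectionAssert can compare whole collections in one call.  If our Log exposed its contents (the ToList() accessor here is hypothetical, not part of the Log shown above), the verification loop could collapse to a single line:

```csharp
// CollectionAssert lives in Microsoft.VisualStudio.TestTools.UnitTesting.
// ToList() is a hypothetical accessor we would add to Log for this purpose.
CollectionAssert.AreEqual(correctCollection, myLog.ToList());       // same elements, same order
// ...or, when order is not part of the specification:
CollectionAssert.AreEquivalent(correctCollection, myLog.ToList());  // same elements, any order
```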

Objections

OK, we know what some of you are thinking.  “Guys, this is the code you’re testing:”

public Processor GetProcessor() {
    return MakeFirstProcessor(
        MakeSecondProcessor(
            MakeLastProcessor()));
}

“...and look at all the *stuff* you’ve created to do so!  Your test is several times the size of the thing you’re testing!   Arrrrrrrrrgh!”

This is a completely understandable objection, and one we’ve felt in the past.  But to begin with, remember that in our view this is not a test; it is a specification.  It’s not unusual for a specification to be longer than the code it specifies.  Sometimes it’s the other way around.  It just depends on the nature of the specification and the implementation involved.

The specification of the way the space shuttle opened the cargo bay doors was probably a book. The computer code that opened it was likely much shorter.

Also, this is a reflection of the relative value of each thing.  Recently, a friend who runs a large development team got a call in the middle of the night, warning him of a major failure in their server farm involving both development and test servers.  He knew all was well since they have offsite backups, but as he was driving into work in the wee hours he had time to ask himself “if I lost something here... would I rather lose our product code, or our tests?”
He realized he would rather lose the product code.  Re-creating the source from the tests seemed like a lot less work than the opposite (that would certainly be true here).  But what that really means is that the test/specifications actually have more irreplaceable value than the product code does.

In TDD, the tests are part of the project.  We create and maintain them just like we do the product code.  Everything we do must produce value... and that’s the point, not whether one part of the system is larger than another.  And while TDD style tests do certainly take time and effort to write, remember that they have persistent value because they can be automatically verified later.

Finally, ask yourself what you would do here if the system needed to be changed, say, to support small, medium, and large values?  We would test-drive the new MediumValueProcessor, and then change TestFactoryReturnsProperChainOfProcessors() and watch it fail.  We’d then update the factory, and watch the failing test go green. We’d also have automatic confirmation that all other tests remained green throughout.
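To make that concrete, here is a sketch of what the change might look like.  MakeMediumProcessor and the position of MediumValueProcessor in the chain are our assumptions for illustration; your factory’s method names may differ.

```csharp
// In the factory: one new creation method (MakeMediumProcessor is
// hypothetical), wired into the chain between the large and small processors.
public Processor GetProcessor() {
    return MakeFirstProcessor(
        MakeMediumProcessor(
            MakeSecondProcessor(
                MakeLastProcessor())));
}

// In TestFactoryReturnsProperChainOfProcessors(): the expected collection
// gains one entry, and correctChainLength becomes 4.
List<Type> correctCollection = new List<Type> {
    typeof (LargeValueProcessor),
    typeof (MediumValueProcessor),
    typeof (SmallValueProcessor),
    typeof (TerminalProcessor)
};
```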

That’s an awfully nice way to change a system.  We know exactly what to do, and we have concrete confirmation that we did exactly and only that.  Such confidence is hard to get in our business!

-----
Links:

http://www.netobjectives.com/competencies/separate-use-from-construction
http://www.netobjectives.com/resources/separate-use-construction

-----

[4] Some CoRs require their chain elements to be in a specific order; some do not.  For example, we would not want the TerminalProcessor to be anywhere but at the end of the chain.  So, while we may not always care about (or need to specify) this issue, it’s important to know how to do it.  We’ll assume here that, for whatever domain reason, LargeValueProcessor must be first, SmallValueProcessor second, and TerminalProcessor third.

[5] We’re using the class objects of the actual types.  You could use anything unique: strings with the classnames, an enumeration, even just constant values.  We like the class objects because we already have them.  Less work!