
Friday, May 18, 2012

Mock Objects, Part 1

Download the podcast.

Narrow Specifications

When writing tests as specifications, we strive to create a very narrow focus in each individual test.  We want each test to make a single, unique distinction about the system.  This creates a clear specification when we later read the test, and also creates the maximum value when the test fails.

If a test makes a single distinction, then when it fails we know what the specific problem is.

If a test makes a unique distinction, then any particular problem will not cause multiple tests to fail at the same time.

Unfortunately, systems are created by coupling entities together (the coupling is, in effect, the system) and thus we often have various objects and systems which are present and operating (we say “in scope”) at the time a test is running, but which are not part of what the test is specifying.

We say it this way:

A given test will test everything which is in scope but which is not under the control of the test.

If we only wish to test/specify one, single and unique thing, then everything else which is in scope must somehow be brought under the control of the test.  This can turn out to be lots of things:
  • The system clock
  • Random numbers
  • The graphical user interface
  • The file system
  • The database
  • The network
  • Other objects
  • Sharable libraries
  • Other systems we depend on
  • Hardware
  • Etc…
If the behavior we are testing/specifying has dependencies on any of these “other” things then we must take control of them in the test so that the only thing we are focusing on is the one thing we are not controlling. Mocks are a big part of solving this problem.
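For instance, suppose some behavior we are specifying depends on the current time. Rather than letting the real system clock run during the test, we hand the object under test something the test controls. (This is just an illustrative sketch in the same Java-flavored pseudocode used later in this post; the Clock interface and MockClock class are invented for the illustration and are not part of the example below.)

public interface Clock {
    Date GetCurrentTime();
}

// A controllable stand-in for the system clock.
public class MockClock implements Clock {
    private Date testTime;
    // Conditioning method: the test decides what "now" is.
    public void SetCurrentTime(Date aTime) {
        testTime = aTime;
    }
    public Date GetCurrentTime() {
        return testTime;
    }
}

With the clock under the test's control, the only thing left uncontrolled is the one behavior we are actually specifying.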

An Analogy

Sometimes the best way to understand something well, and to retain that understanding, is to find an analogy to something we already understand.

Let’s say we were not making software, but manufacturing cars.  We would have testing to do, certainly, including the crash-worthiness of the vehicles we were planning to sell to the public.  A car, one could say, has an operational dependency: a driver.  However, we don’t want to test the car’s crash-worthiness with an actual driver in the driver’s seat!  We’ll likely kill the poor fellow.  So, we replace the driver with one of these:

A Crash Test Dummy



This, of course, is a crash test dummy.  This is a good analogy for a mock object for two reasons.

First, we “insert” the crash test dummy into the car, because we do not want to test the driver but rather the car.   This allows the tester to “control the driver” in various ways.  Mocks are used in this way.

Second, there are different kinds of crash test dummies, of different levels of complexity, depending on what we need from them.  This is also true of mocks.
  1. Sometimes we just need something of the proper weight to be present in the driver's seat so that the test is realistic.  For this purpose, we might just use a sandbag, or a simple block of wood of the right weight.  Sometimes our mock objects are like this; simply dead “nothing” objects that act as inert placeholders.
  2. Other times we need to conduct various test scenarios with the same crash test dummy.  Perhaps one where the driver’s hands are at “10 and 2” on the steering wheel, then another where one hand is on the wheel while the other is on the stick shift, then yet another where the dummy is taking the place of a passenger with its feet up on the dash, or one sitting in the backseat, or facing backwards in a car seat, etc....  For these kinds of tests we would need an articulated dummy that can be put into different positions for these different scenarios.  We do this with mocks too, if needed, and when we do we say the mock is “conditionable”.
  3. Finally, sometimes we would need to know the lethality of a crash scenario, and thus need to measure what happened to the crash test dummy (and thus what would have happened to an actual person in the same crash).  For this, we would put various sensors in the dummy; perhaps a pressure plate in the chest, shock sensors on all the limbs, an accelerometer in the head, etc… These sensors would all measure these various effects during the crash, and record them into a central titanium-clad storage unit buried deep in the dummy.  After the crash is over the testers could plug into the storage unit and download the data to perform an analysis of the effects of the crash.  We also can do this with mocks, and when we do we say the mock is “inspectable”.

The amount of sophistication and complexity in our mocks needs to be kept at a minimum, as we are not going to test them (they are, in fact, a part of the test, and will be validated by initial failure).  If all we need is a block of wood, then that’s all we’re going to use.
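The software equivalent of the block of wood is a mock that merely satisfies an interface and does nothing at all. As a purely hypothetical sketch (this Logger interface is invented here and is not part of the tractor example below):

// An inert placeholder: it exists only so the dependency is present and harmless.
public class MockLogger implements Logger {
    public void LogMessage(String message) {
        // Deliberately does nothing; no test here cares about logging.
    }
}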

An Example of Software Mocking

Let’s say we’re writing software to automate the movement of a tractor.  Farm equipment today is often highly sophisticated, including microprocessors, touchscreens, GPS, wireless internet connections via cell towers, and so forth.  One behavior that is needed is “turning a red warning light on when the tractor gets too close to the edge of the planting area”.  In specifying this "Boundary Alarm" we would have dependencies on two aspects of the tractor hardware: the GPS unit that tells us our location at any given point in time, and the physical dashboard warning light that we want to turn on.

If we had interfaces already for these two hardware points, the design would likely look something like this:

Interfaces


CheckFieldLocation() is what we want to specify/test.  If the GPS reports that our location is too close to leaving the planting area, then the BoundaryAlarm object should call ActivateDashLight() on the DashLight interface.
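In case the diagram does not come through, the two interfaces and the class under test might look roughly like this (a sketch consistent with the pseudocode later in this post):

public interface GPS {
    Location GetLocation();
}

public interface DashLight {
    void ActivateDashLight();
    void DeactivateDashLight();
}

public class BoundaryAlarm {
    public BoundaryAlarm(GPS aGPS, DashLight aDashLight) {}
    public void CheckFieldLocation() {}
}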

We are not testing the GPS, nor are we testing the light (not that we never would; we are just not doing so at this moment, in this test).  These things are in scope, however, so we must bring them under the control of the test.  Since we are fortunate enough in this case that these dependencies are already represented by interfaces, we can easily create mocks as implementations of those interfaces.  Also, we got lucky in that the BoundaryAlarm constructor takes implementations of GPS and DashLight, allowing us to easily inject our mocks.

We will obviously have to address situations where we don’t have these advantages, and we shall.  But for now let’s just focus on what the mocks do and how they do it, then we’ll examine various techniques for creating and injecting them.

Mock Implementations


Note that the mocks have additional methods added to them that are not defined in the interfaces they mock:

  • In the case of MockGPS, we added SetLocation().  The real GPS, of course, gets this location by measuring signals from the satellites that orbit the earth.  Our MockGPS requires no satellites and in fact does nothing other than return whatever location we tell it to.  This is an example of making the mock “conditionable”.
     
  • In the case of MockDashLight we have added Boolean values (which start at false) to track whether the two methods ActivateDashLight() and DeactivateDashLight() were called or not.  We’ve also added two methods which simply return these values, namely DashLightActivated() and DashLightDeactivated().  This is an example of making the mock "inspectable".

These extra methods for conditioning and inspecting the mocks will not be visible to the BoundaryAlarm object because the mock instances will be implicitly up-cast when they are passed into its constructor (assuming a strongly-typed language).  This is essentially encapsulation by casting.

Let’s look at some pseudocode:

[TestClass]
public class BoundaryAlarmTest {
    [TestMethod]
    public void TestBoundaryAlarmActivatesDashLightWhenNeeded() {
        // Setup
        // Declared as the mock types so the test can condition and inspect them;
        // they are up-cast to GPS and DashLight when passed into the constructor.
        MockGPS mockGPS = new MockGPS();
        MockDashLight mockDashLight = new MockDashLight();
        BoundaryAlarm testBoundaryAlarm =
             new BoundaryAlarm(mockGPS, mockDashLight);
        Location goodLocation = /* a location inside the planting area */;
        Location badLocation = /* a location in danger of leaving it */;
       
        // Trigger lower boundary
        mockGPS.SetLocation(goodLocation);
        testBoundaryAlarm.CheckFieldLocation();

        // Verify lower boundary
        Assert.IsFalse(mockDashLight.DashLightActivated());

        // Trigger upper boundary
        mockGPS.SetLocation(badLocation);
        testBoundaryAlarm.CheckFieldLocation();

        // Verify upper boundary
        Assert.IsTrue(mockDashLight.DashLightActivated());
    }
}

public class MockGPS implements GPS {
    private Location testLocation;
    public Location GetLocation() {
        return testLocation;
    }
    public void SetLocation(Location aLocation) {
        testLocation = aLocation;
    }
}

public class MockDashLight implements DashLight {
    private boolean activateGotCalled = false;
    private boolean deactivateGotCalled = false;
    public void ActivateDashLight() {
        activateGotCalled = true;
    }
    public void DeactivateDashLight() {
        deactivateGotCalled = true;
    }
    public boolean DashLightActivated() {
        return activateGotCalled;
    }
    public boolean DashLightDeactivated() {
        return deactivateGotCalled;
    }
}

public class BoundaryAlarm {
    public BoundaryAlarm(GPS aGPS, DashLight aDashLight){}
    public void CheckFieldLocation(){}
}

(Obviously we have skipped over what a Location is and how it works.  One can easily imagine it might contain latitude and longitude members, something like that.)

The test will obviously fail since CheckFieldLocation() does nothing at all… so we watch it fail, which validates the test, and only then do we put in the logic that causes BoundaryAlarm to turn on the light when it should.  The test drives the development of the behavior.
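To give a sense of where the production code might end up once we have seen that failure, here is a rough sketch. The isTooCloseToEdge() helper is invented here as a stand-in for whatever boundary calculation the real system would use; it is not part of this post's example.

public class BoundaryAlarm {
    private GPS myGPS;
    private DashLight myDashLight;

    public BoundaryAlarm(GPS aGPS, DashLight aDashLight) {
        myGPS = aGPS;
        myDashLight = aDashLight;
    }

    public void CheckFieldLocation() {
        // Ask the GPS (the mock, when under test) where we are...
        Location currentLocation = myGPS.GetLocation();
        // ...and turn on the warning light if we are in danger of leaving the field.
        if (isTooCloseToEdge(currentLocation)) {
            myDashLight.ActivateDashLight();
        }
    }

    private boolean isTooCloseToEdge(Location aLocation) {
        // Placeholder: the real calculation depends on the field's geometry.
        return false;
    }
}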

One thing we need to point out here is that, as simple and straightforward as this is, we’ve actually gone a bit too far.  There is nothing in our test that has anything to do with when and how the dashboard light should be deactivated.  In fact, we might not (at this point) even know the rules about this.  Our mock, therefore, really should not contain any capability regarding the DeactivateDashLight() method; at least, not yet.  We never want to make mocks more complicated than necessary, and we also have no failing test to prove that this part of the mock is valid and accurate.  This is all we should do for now:

Minimal Mocking
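In case the diagram is not visible: the minimal mock tracks only the activation, something like this (same pseudocode style as above):

public class MockDashLight implements DashLight {
    private boolean activateGotCalled = false;
    public void ActivateDashLight() {
        activateGotCalled = true;
    }
    public void DeactivateDashLight() {
        // Required by the interface, but not yet specified by any test,
        // so the mock records nothing about it.
    }
    public boolean DashLightActivated() {
        return activateGotCalled;
    }
}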


Adding more capability to this mock later, or even creating another mock for the purpose of specifying the deactivation of the light, will not be hard to do.  What is hard is keeping track of capabilities we build in anticipation of a need, when we do not know if or when that need will actually arise.  Also, if this “just in case” capability is wrong or non-functional, we can easily fail to notice it or lose track of it.

Whenever we add anything to our test suite, whether it be a mock or a test or an assertion or whatever, we want to see a failing test as soon as possible to prove the validity of that thing.  Remember, a test is only valid if it can fail and, specifically, if it fails for the reason we intended when writing it.

...To Be Continued...


3 comments:

  1. Nice example. It inspires a couple of suggestions.

    I’d prefer one more abstraction in this example, such as:

    class Field {
        Field(Location[] corners) {}
        boolean isInField(Location aLocation) {}
    }

    Note that isInField() is assertive, rather than inquisitive. Field should not divulge its boundaries. The boundaries of the field might alternatively be defined with line segments. So the code might look like:

    class BoundaryAlarm {
        void CheckFieldBoundary() {
            Location p = aGPS.getLocation();
            if (aField.isInField(p))
                aDashLight.deactivateDashLight();
            else
                aDashLight.activateDashLight();
        }
    }

    Field does not need to be mocked, since it is purely computational.

    The example also shows that, when BoundaryAlarm is being designed, there is a need to consider how it will be executed. For example, it could be scheduled by a timer, it could respond to a message, or it could be called by the GPS as an observer of new locations. The form you showed, with it calling the GPS to get the location, is appropriate for the scheduled version.

    However, with a message or observer, BoundaryAlarm might simply receive the location as part of the message or the observed event. So there would be no need to mock the GPS, but rather to create a message or event object.
    In this case, by not having the alarm need to know about GPS, we have decreased coupling. Decreased coupling is almost always good. The class might look like:

    void listener(Event e) {
        CheckFieldBoundary(e.getLocation());
    }

    void CheckFieldBoundary(Location p) {
        if (aField.isInField(p))
            aDashLight.deactivateDashLight();
        else
            aDashLight.activateDashLight();
    }

    The need for mocking the dashlight can go away if the BoundaryAlarm uses either a message or observer. However, you would need a general purpose messaging/observer mock that can check that the appropriate message or event has been generated.

    As a side note, I consider designing to eliminate the need for mocks easier than keeping mocks in sync with a production implementation.

  2. Did part 2 ever get written? If so, a link would be helpful. If not, :-(

    1. Part Two:
      http://www.sustainabletdd.com/2012/06/mock-objects-part-2.html

      Part Three:
      http://www.sustainabletdd.com/2012/06/mock-objects-part-3.html

      You can also use the navigation tree on the right.

      Cheers!
