Net Objectives


Friday, August 17, 2018

Test-Driven Development in the Larger Context: Pt 4. Synergy

Having established the differences between ATDD and UTDD, how are they the same? Why are they both called “TDD” collectively?

Tests ensure a detailed and specific, shared understanding of behavior.

If properly trained, those who create these tests will subject their knowledge to a rigorous and unforgiving standard; it is hard if not impossible to create a high-quality test about something you do not understand well enough.  It is also unwise to try to implement behavior before you have that understanding.  TDD ensures that you have sufficient and correct understanding by placing tests in the primary position.  This is as true for acceptance tests as it is for unit tests; only the audience for the conversation differs.

Tests capture knowledge that would otherwise be lost.

Many organizations we encounter in our consulting practice have large complex legacy systems that no one understands very well. The people who designed and built them have often retired or moved on to other positions, and the highly valuable knowledge they possessed left with them. If tests are written to capture this knowledge (which, again, requires that those who write them are properly trained in this respect) then not only is this knowledge retained, but also its accuracy can be verified at any time in the future by simply executing the tests. This is true whether the tests are automated or not, but obviously the automation is a big advantage here. This leads us to view tests, when written up-front, as specifications. They hold the value that specifications hold, but add the ability to verify accuracy in the future.

Furthermore, if any change to the system is required, TDD mandates that the tests are updated before the production code as part of the cadence of the work. This ensures that the changes are correct, and that the specification never becomes outdated. Only TDD can do this.

Tests ensure quality design.

As anyone who has tried to add tests to a legacy system after the fact can tell you, bad design is notoriously hard to test.  If tests precede development, then design flaws are made clear early, in the painful process of trying to test them.  In other words, TDD will tell you early if your design is weak, because the pain you’ll feel is a diagnostic tool, as all pain is.  Adequate training in design (design patterns training, for example) will ensure that the team understands what the design should be, and the tests will confirm when this has happened.  Note that this is true whether the tests are actually written or not; it is the point of view that accompanies testability that drives toward better design.  In this respect the actual tests become an extremely useful side product.  That said, once it is determined how to test something, which ensures that it is indeed testable, the truly difficult work is done.  One might as well write the tests…

What TDD does not do, in either its ATDD or UTDD form, is replace traditional testing.  The quality control/quality assurance process that has traditionally followed development is still needed, as TDD will not test all aspects of the system, only those needed to create it.  Usability, scalability, security, and so on still need to be ensured by traditional testing.  What TDD does do is contribute some of the tests needed by QA, but certainly not all of them.

There is another benefit to the adoption of TDD, one of healthy culture. In many organizations, developers view the testing effort as a source of either no news (the tests confirm the system is correct) or bad news (the tests point out flaws). Similarly, testers view the developers as a source of myriad problems they must detect and report.

When TDD is adopted, developers have a clearer understanding of the benefits of testing from their perspective. Indeed, TDD can become a strongly-preferred way to work by developers because it leads to a kind of certainty and confidence that they are unaccustomed to, and crave. On the other hand, testers begin to see the development effort as a source of many of the tests that they, in the past, had to retrofit onto the system. This frees them up to add the more interesting, more sophisticated tests (that require their experience and knowledge) which otherwise often end up being cut from the schedule due to lack of time. This, of course, leads to better and more robust products overall.

Driving development from tests initially seemed like an odd idea to most who heard of it.  The truth is, it makes perfect sense.  It’s always important to understand what you are going to build before you build it, and tests are a very good way to ensure that you do, and that everyone is on the same page.  But tests deliver more value than this; they can also be used to update the system efficiently, and to preserve the knowledge that existed when the system was created so that it remains available months, years, even decades in the future.  They ensure that the value of the work done will be persistent value, in complete alignment with the forces that make the business thrive.

TDD helps everyone.

Part 1
Part 2
Part 3
Part 4

Monday, August 13, 2018

Test-Driven Development in the Larger Context: Pt 3. Automation

Both ATDD and UTDD are automatable. The difference has to do with the role of such automation, how critical and valuable it is, and when the organization should put resources into creating it.

ATDD

ATDD’s value comes primarily from the deep collaboration it engenders, and the shared understanding that comes from this effort.  The health of the organization in general, and specifically the degree to which the development effort is aligned with business value, will improve dramatically once the process is well understood and committed to by all.  Training your teams in ATDD pays back in the short term.  Excellent ATDD training pays back before the course is even over.

Automating your acceptance test execution is worthwhile, but not an immediate requirement.  An organization can start ATDD without any automation and still get profound value from the process itself.  For many organizations automation is too tough a nut to crack at the beginning, but this should not dissuade anyone from adopting ATDD and making sure everyone knows how to do it properly.  The automation can be added later if desired, but even then acceptance tests will not run particularly quickly.  That is acceptable because they are not run very frequently, perhaps as part of a nightly build.

Also, it will likely not be at all clear, in the beginning, what form of automation should be used. There are many different ATDD automation tools and frameworks out there, and while any tool could be used to automate any form of expression, some tools are better than others given the nature of that expression. If a textual form, like Gherkin, is determined to be clearest and least ambiguous given the nature of the stakeholders involved, then an automation tool like Cucumber (Java) or Specflow (.Net) is a very natural and low-cost fit. If a different representation makes better sense, then another tool will be easier and cheaper to use.
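As a sketch of that textual form, a hypothetical acceptance test in Gherkin might read as follows (the feature, amounts, and business rule here are invented for illustration; any stakeholder can read, discuss, and correct such a test, and a tool like Cucumber or Specflow can automate it later):

```gherkin
Feature: Loyalty discount
  Returning customers receive a discount as a reward for repeat business.

  Scenario: Returning customer receives a loyalty discount
    Given a customer who has placed at least 5 previous orders
    When the customer places a new order totaling $100
    Then the order total is reduced to $90
```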

The automation tool should never dictate the way acceptance tests are expressed.  It should follow.  This may require the organization to invest in the effort to create its own tools, or enhancements to existing tools, but this is a one-time cost that will return on the investment indefinitely.  In ATDD the clarity and accuracy of the expression of value is paramount; the automation is beneficial and "nice to have."

UTDD

UTDD requires automation from the outset.  It is not optional and, in fact, without automation UTDD could scarcely be recommended.

Unit tests are run very frequently, often every few minutes, and thus if they are not efficient it will be far too expensive (in terms of time and effort) for the team to run them.  Running a suite of unit tests should appear to cost the team nothing; this is obviously not literally true, but the cost should be low enough that the attitude is a reasonable one.

Thus, unit tests must be extremely fast, and much of UTDD training should ensure that developers know how to make them painless to execute.  They must know how to manage dependencies in the system, and how to craft tests in such a way that they execute in the least time possible, without sacrificing clarity.

Since unit tests are intimately connected to the system from the outset, most teams find that it makes sense to write them in the same programming language that the system itself is being written in.  This means that the developers do not have to do any context-switching when moving back and forth between tests and production code, which they will do in a very tight loop.
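As an illustrative sketch of what "managing dependencies" can mean in practice (the class and interface names here are invented): a unit test stays fast when the production class depends on an abstraction rather than on a real database or network resource, because the test can then substitute a cheap in-memory implementation.

```java
import java.util.List;

// The production class depends on an abstraction, not on a concrete
// database, so a test can substitute a fast in-memory implementation.
interface OrderSource {
    List<Integer> amounts();
}

class OrderTotaler {
    private final OrderSource source;

    OrderTotaler(OrderSource source) {
        this.source = source;
    }

    // The behavior under test: sum all order amounts.
    int total() {
        int sum = 0;
        for (int amount : source.amounts()) {
            sum += amount;
        }
        return sum;
    }
}

// Test-side fake: runs entirely in memory, so executing the test
// appears to cost nothing.
class InMemoryOrderSource implements OrderSource {
    public List<Integer> amounts() {
        return List.of(10, 20, 12);
    }
}
```

The same totaling logic is exercised exactly as in production; only the expensive dependency has been replaced.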

Unlike ATDD, the unit-testing automation framework must be an early decision in UTDD, as it will match/drive the technology used to create the system itself. One benefit of this is that the skills developers have acquired for one purpose are highly valuable for the other. This value flows in both directions: writing unit tests makes you a better developer, and writing production code makes you a better tester.

Also, if writing automated unit tests is difficult or painful to the developers, this is nearly always a strong indicator of weakness in the product design. The details are beyond the scope of this article, but suffice it to say that bad design is notoriously hard to test, and bad design should be rooted out early and corrected before any effort is wasted on implementing it.


Tuesday, August 7, 2018

Test-Driven Development in the Larger Context: Pt 2. Cadence

Another key difference between ATDD and UTDD is the pace and granularity, or “cadence” of the work.  This difference is driven by the purpose of the activity, and how the effort can drive the maximum value with the minimum delay.

ATDD

Acceptance tests should be written at the start of the development cycle: during sprint planning in Scrum, as an example.  Enough tests are written to cover the entire upcoming development effort, plus a few more in case the team moves more quickly than the estimates predict.

If using a pull system, like Kanban, then the acceptance tests should be generated into a backlog that the team can pull from.  The creation of these tests, again, is part of the collaborative planning process and should follow its pace exactly.

Acceptance tests start off failing, as a group, and then are run at regular intervals (perhaps as part of a nightly build) to allow the business to track the progress of the teams as they gradually convert them to passing tests.  This provides data that allows management to forecast completion (the “burn down curve”), which aids in planning.

The primary purpose of creating acceptance tests is the collaboration this engenders.  The tests are a side-effect (albeit an enormously beneficial one) of the process, which is engaged to ensure that all stakeholders are addressed, all critical information is included, and that there is complete alignment between business prioritization and the upcoming development effort.

When training stakeholders to write such tests, the experience should be realistic; work should be done by teams that include everyone mentioned in the “Audience” blog that preceded this one, and they should ideally work on real requirements from the business.

UTDD

A single unit test is written and proved to fail by running it immediately.  Failure validates the test in that a test that cannot fail, or fails for the wrong reason, has no value.  The developer does not proceed without such a test, and without a clear understanding of why it is failing.

Then the production work is done to make this one test pass, immediately.  The developer does not move on to write another test until it and all previously-written tests are “green.” The guiding principle is that we never have more than one failing test at a time, and therefore the test is a process gate determining when the next user story (or similar artifact) can begin to be worked on.
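A minimal sketch of this cadence in code (the `PriceCalculator` example and the discount rule are invented for illustration):

```java
// Step 1 (red): write ONE test first, e.g. the single assertion
//   assert new PriceCalculator().discounted(100) == 90;
// and run it immediately.  With no production code yet, it fails,
// which validates the test itself.
// Step 2 (green): write just enough production code to make that one
// test pass before writing any further tests.
class PriceCalculator {
    // Minimal implementation, written only after the test above
    // existed and was seen to fail: a 10% discount.
    int discounted(int price) {
        return price - (price / 10);
    }
}
```

Only once this test (and every previously-written test) is green does the developer write the next failing test.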

When training developers to write these tests properly, we use previously-derived examples to ensure that they understand how to avoid the common pitfalls that can plague this process: tests can become accidentally coupled to each other, tests can become redundant and fail in groups, one test added to the system can cause other, older tests to fail.  All of this is avoidable but requires that developers who write them are given the proper set of experiences, in the right order, so that they are armed with the necessary understanding to ensure that the test suite, as it grows large, does not become unsustainable.
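One of those pitfalls, sketched in code (the `Counter` example is invented): tests become accidentally coupled when they share a single fixture instance, because the second test's result then depends on whether the first has already run.

```java
// A small class under test.
class Counter {
    private int count = 0;

    void increment() {
        count++;
    }

    int value() {
        return count;
    }
}

// Coupled (bad): if two tests both mutated one shared instance, e.g.
//   static Counter shared = new Counter();
// then the second test's expected value would depend on execution
// order -- a hidden coupling between tests.
//
// Independent (good): each test constructs its own fresh fixture, so
// it can fail only for the reason it was written.
```

The remedy is discipline in fixture construction, not anything exotic: each test builds the state it needs.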


Thursday, July 26, 2018

Test-Driven Development in the Larger Context: Pt 1. Audience

One difference between ATDD and UTDD becomes clear when you examine who is involved in the process.  We will call this the “Audience” for the process in this blog.

ATDD

Acceptance tests should be created by a gathering of representatives from every aspect of the organization: business analysts, product owners, project managers, legal experts, developers, marketing people, testers, end-users (or their representatives), etc.  

ATDD is a framework for collaboration that ensures complete alignment of the development effort that is to come with the business values that should drive it.  These business values are complex and manifold, and so a wide range of viewpoints must be included.  Acceptance tests should be expressed in a way that can be written, read, understood, and updated by anyone in the organization.  They should require no technical knowledge, and only minimal training.

The specific expression of an acceptance test should be selected based on the clarity of that form given the nature of the organization and its work.  Many people find the “Given, When, Then” textual form (often referred to as “Gherkin”) to be the easiest to create and understand.  Others prefer tables, or images, or other domain-specific artifacts.

We once worked with an organization that did chemical processing.  We noted that in all their conversations and meetings, they used chemical-process diagrams in their PowerPoint slides, on their whiteboards, and so forth.  To most people (including myself) such diagrams would not be easy to understand, but for this organization they were obvious and clear.  For them, expressing their acceptance tests this way lowered the bar of comprehension.  Why make them convert this into some textual or other representation?  Use their language, always.

Typical business analysts spend much of their time looking at spreadsheets, or Gantt charts, or Candlestick charts, etc…  The point is, once the stakeholders of a given piece of software are identified, then the form of the expression chosen should be whatever is clearest for them.  They should be able to write the tests, read the tests, and update the tests as needed, without any understanding of computer code (unless the only stakeholders are literally other developers).

The notion of automating these tests should never drive their form.  Any representation of acceptance can be made executable given the right tools, even if those tools must be created by the organization.  Choosing, say, Robot or Fit or Specflow to automate your acceptance tests before you identify your stakeholders is putting the cart before the horse.

UTDD

Unit tests should be written by technical people: developers and sometimes testers as well.  They are typically written in the computer language that will be used to develop the actual system, though there are exceptions to this.  But in any case, only people with deep technical knowledge can write them, read them, and update them as requirements change. 

To ensure the suite of tests itself does not become a maintenance burden as it grows, developers must be trained in the techniques that make the UTDD effort sustainable over the long haul.  This includes test structure, test suite architecture, what tests to write and how to write them, tests that should be avoided, etc.  Training of the development team must include these critical concepts, or the process rapidly becomes too expensive to maintain.


Friday, July 20, 2018

Test-Driven Development in the Larger Context: TDD & ATDD (Introduction)


A question that often arises in our consulting and training practices concerns the relationship between Test-Driven Development (TDD) and Acceptance-Test-Driven Development (ATDD).  Which is “really” TDD?  Which should an organization focus on, in terms of which comes first and which should receive the most attention and resources?

It’s not uncommon for someone to refer to TDD as “developers writing unit tests before the production code”.  That is one form of TDD, certainly, but is a subset of a larger set of ideas and processes.

Similarly, it’s not uncommon for someone to refer to ATDD as “acceptance tests being written before the development cycle begins”.  Again, this is not incorrect, but it is usually seen as a separate thing from what the developers do which is typically thought of as “TDD”.

The truth is they are both TDD.  They are not conducted in the same way nor by the same people (entirely), and the value they provide is different (but compatible) … but at the end of the day TDD is the umbrella that they both fall under.  TDD is a software development paradigm.

This blog series will begin by investigating the differences between these two forms of the overall “Test-Driven” process, which we will call (for clarity) ATDD and UTDD (or “Unit-Test-Driven Development”, which most people think of as TDD).  We will see how they differ in terms of:
  1. Who is involved?  Who creates these “tests”, updates them, and can ultimately read and understand them in the future?  We will use the term “Audience” to describe the different groups that conduct ATDD and UTDD.
  2. When they are done, and the granularity of their activities.  This “Cadence”, as we shall see, is quite different and for good reason.
  3. Finally, we will examine the difference in the value and the necessity of the effort to automate these two different processes.  Both certainly can be automated, but how critical is such automation, what value is derived with and without it, and where should the effort to automate fall in the adoption process?  We will call this section “Automation”.
After having examined the differences between ATDD and UTDD, we will then address how they are the same, why they both fit under the overall rubric “TDD” and how, ultimately, they synergize into a very powerful way to create alignment around business value in software development.  These similarities include:
  1. Shared understanding of the valuable behaviors that make up a requirement, and the way those behaviors satisfy the expectations of the stakeholders.
  2. The preservation of high-worth enterprise knowledge in a form that can retain its value over time.
  3. The creation of a high-quality architecture and design, and all the value that this brings in terms of the ability of the organization to respond to market challenges, opportunities, and changes.
Regardless of your process, every activity conducted by every individual in a software development organization should be ultimately traceable back to business value: every bit of code that’s written, every test that is defined and executed, every form that’s filled out, every meeting that’s called, everything.  We want to show that TDD is in complete alignment with this concept, and, when properly conducted, increases the value of every investment of time and effort, and its ultimate return to the bottom line.

Links to the parts of the blog:

Part 1
Part 2
Part 3
Part 4

Thursday, August 3, 2017

TDD: Testing Behavior in Abstract Classes

Interfaces vs. Abstract Classes

In languages like Java and C#, developers can use either an interface or an abstract class to create object polymorphism.  It’s a common question in technical training: “Is it best to use an interface, or an abstract class?”

Furthermore, many teams adopt the “I” naming convention for interfaces; namely, that an interface’s name should start with a capital I, whereas other classes (including abstract classes) should not.  The problem with this convention is that it creates design coupling.  If an interface is used to model the abstraction, then client objects that hold references to a service must be changed when a simple, concrete class evolves into an abstraction, because the convention renames the type.  Should client objects care whether a service is a concrete class, abstract class, or interface?  No.  This would seem to argue against the naming convention in the first place.

But the real problem stems from the fact that the “interface” type is commonly used for two very different purposes: to create polymorphism, and to mark a class as a valid participant in a framework process.  For example, a class can implement “ISerializable”, not for casting purposes per se, but so it can be serialized by .Net or a similar framework.  This may be a tangential issue to the class’ core responsibility.  On the other hand, 10 different versions of a tax calculation algorithm, implemented by 10 different tax calculation classes, can all implement “ITaxCalc” so that they can be upcast and dealt with in the same way by various client classes.  This creates polymorphism around the central responsibility of all the classes involved: calculating taxes.  But if we had started with a single algorithm, a concrete class called TaxCalc, referred to across the system by that name, then when the system evolved to support different algorithms and the class became an interface, the type name would change (if the “I” convention is used) and all client code would have to be maintained.

Different Purposes, Different Approaches

It seems like a bad idea to use one idiom for two unrelated purposes.
Personally, I prefer to create polymorphism using abstract classes, and to mark a class for participation in a framework process using interfaces.

Part of my argument is this:  when many different classes have a conceptual relationship, such as the tax calculators mentioned above, then it is likely they will also contain some code in their implementation that is the same.  This yields redundancy that creates maintenance problems when requirements due to tax laws and regulations change (for example).  An abstract class can implement common functionality, whereas interfaces cannot.  Even if a set of related classes contains no redundant implementation today, redundancies can emerge over time.  Abstract classes make this problem easy to solve whenever it arises.
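A brief sketch of this argument using the tax-calculator example from above (the subclass names and rates are invented for illustration): the shared logic lives once in the abstract base, while each concrete subclass supplies only its own algorithm.

```java
// Common functionality is implemented once, in the abstract base.
abstract class TaxCalc {
    // Shared by every calculator; a change to rounding rules is made here,
    // in one place, rather than in every concrete class.
    protected long roundToCents(double amount) {
        return Math.round(amount * 100.0);
    }

    // The varying algorithm each subclass must supply.
    abstract long taxInCents(double subtotal);
}

class FlatTaxCalc extends TaxCalc {
    long taxInCents(double subtotal) {
        return roundToCents(subtotal * 0.05);  // 5% flat rate
    }
}

class LuxuryTaxCalc extends TaxCalc {
    long taxInCents(double subtotal) {
        return roundToCents(subtotal * 0.12);  // 12% luxury rate
    }
}
```

An interface could express the polymorphism here, but could not host the shared `roundToCents` implementation; the abstract class does both.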

Also, if I limit the use of interfaces to process flags, then the “I” convention is less of an issue.  I do not create design coupling within my system if I use it, because, for example, “IComparable” is not my interface, it allows a collection of classes to be sorted by a framework.  It belongs to that framework and is highly unlikely to be changed, due to the chaos this would create in everyone’s code if it were to be.  In any case, I don’t control its name.

TDD and Common Behaviors

If an abstract class is used to create polymorphism, and if there is indeed some common functionality in the base class, then the question arises: how do I test that behavior?  One cannot instantiate an abstract class, and thus its behavior cannot be triggered by a test unless that behavior is in a static method.  Static methods are disfavored for a number of reasons (I’ll deal with those in another blog), and I certainly would not make the behavior static just for testing purposes.  So what should a TDD practitioner, or a traditional tester, do about testing instance behavior that is implemented in an abstract class?

Here is a completely generic example:
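In code, the generic example might be sketched like this (the method names follow the discussion; the string return values are invented for illustration):

```java
// An abstract base with one common (inherited) behavior and one
// varying (abstract) behavior.
abstract class Service {
    // Common behavior, implemented once in the abstract base class.
    protected String CommonFunction() {
        return "common result";
    }

    // Varying behavior, implemented differently by each subclass.
    abstract String VaryingFunction();
}

class ConcreteService1 extends Service {
    String VaryingFunction() {
        return "result 1";
    }
}

class ConcreteService2 extends Service {
    String VaryingFunction() {
        return "result 2";
    }
}
```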

Each “ConcreteService” version would have its own test, for each implementation version of the “VaryingFunction()” method.  But how would one write a test for the “CommonFunction()” method if it were an instance method, and one cannot create an instance?

Initially you might say “well, just pick any of the subclasses, create an instance of it and test the common function there, as they all have access to it.”  The problem is that this creates coupling in the test to the concrete service class that you arbitrarily chose.  If you happened to pick “ConcreteService1”, for instance, and later that class were to be retired/eliminated due to changing requirements, then the test of the common function would break even though that function is working fine.  Similarly, if "ConcreteService1" at some point in the future were to be changed to override the "CommonFunction()" method, this will also break the test.  We want tests that fail only for the reason we wrote them to.

Another Use for a Mock Object

Mock objects[1] are used to break and control dependencies in testing.  Here we can use a mock to eliminate coupling from the test of the common function to any of the concrete production classes.

This mock, like any subclass, has access through inheritance to the common function, but unlike other subclasses the mock:
  1. Is not part of the production code, but actually part of the test namespace/package/etc…
  2. Is never eliminated due to a changing requirement.  It is really part of the test.
  3. Is not a public class.  It is only visible to the tests.
Another advantage of this approach is that it makes it easier to test base-class behavior that is not exposed to the system in general (not public).

This is a pattern, a “Testing Class Adapter”[2].  It works because the test will hold the “Mock Service” by its concrete type, not in an upcast, and thus this new accessor method, which is public, can be called to access the protected method in the base class.  Again, this mock is not part of production, and thus does not break encapsulation in general, only for testing.
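A sketch of this Testing Class Adapter in code (class names invented, following the description above): the mock lives with the tests, extends the abstract base, and widens access to the protected common behavior through a public accessor.

```java
// Production code: the abstract base keeps its common behavior protected.
abstract class BaseService {
    protected String CommonFunction() {
        return "common result";
    }

    abstract String VaryingFunction();
}

// Test code only: never shipped, never retired by a requirements change,
// and visible only to the tests.
class MockService extends BaseService {
    String VaryingFunction() {
        return "";  // irrelevant; exists only to make the class concrete
    }

    // The adapter: a public accessor for the protected base-class method.
    public String callCommonFunction() {
        return CommonFunction();
    }
}
```

The test holds `MockService` by its concrete type (no upcast), so removing or changing any production subclass cannot break this test of the common behavior.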


I prefer to use abstract classes to create polymorphism, and interfaces to flag classes as participants in framework services.  When you do this, you can easily eliminate functional redundancies in derived classes by pushing them up into the base class.  To test this otherwise-redundant functionality, use a mock object/testing class adapter to access it.
[1] For more on Mock Objects see:

[2] For more on the Adapter Pattern see: