Net Objectives

Friday, August 17, 2018

Test-Driven Development in the Larger Context: Pt 4. Synergy

Having established the differences between ATDD and UTDD, we can now ask how they are the same. Why are they collectively called “TDD”?

Tests ensure a detailed and specific, shared understanding of behavior.

If properly trained, those who create these tests will subject their knowledge to a rigorous and unforgiving standard; it is hard, if not impossible, to create a high-quality test about something you do not know enough about. It is also unwise to try to implement behavior before you have that understanding. TDD ensures that you have sufficient and correct understanding by placing tests in the primary position. This is as true for acceptance tests as it is for unit tests; only the audience for the conversation is different.

Tests capture knowledge that would otherwise be lost.

Many organizations we encounter in our consulting practice have large, complex legacy systems that no one understands very well. The people who designed and built them have often retired or moved on to other positions, and the highly valuable knowledge they possessed left with them. If tests are written to capture this knowledge (which, again, requires that those who write them be properly trained in this respect), then not only is this knowledge retained, but its accuracy can also be verified at any time in the future simply by executing the tests. This is true whether the tests are automated or not, though automation is obviously a big advantage here. This leads us to view tests, when written up front, as specifications. They hold the value that specifications hold, but add the ability to verify accuracy in the future.
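
To make this concrete, here is a minimal sketch of what such a knowledge-capturing (“characterization”) test might look like in Java with JUnit 5. The LegacyPricer class and its discount rule are invented for illustration:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A minimal stand-in for the legacy code whose behavior we want to capture.
    class LegacyPricer {
        double priceFor(int units, double unitPrice) {
            double total = units * unitPrice;
            return units >= 100 ? total * 0.9 : total; // 10% bulk discount
        }
    }

    // Characterization tests: they pin down what the system actually does
    // today, so the knowledge survives personnel changes and can be
    // re-verified at any time simply by running the tests.
    class LegacyPricerCharacterizationTest {
        @Test
        void oneHundredUnitsReceiveTheBulkDiscount() {
            assertEquals(450.00, new LegacyPricer().priceFor(100, 5.00), 0.001);
        }

        @Test
        void ninetyNineUnitsDoNot() {
            // The boundary is exactly 100 units -- knowledge that would
            // otherwise live only in a departed developer's head.
            assertEquals(495.00, new LegacyPricer().priceFor(99, 5.00), 0.001);
        }
    }

Each such test is a small, executable specification: if the documented behavior ever drifts, running the suite says so immediately.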

Furthermore, if any change to the system is required, TDD mandates that the tests are updated before the production code as part of the cadence of the work. This ensures that the changes are correct, and that the specification never becomes outdated. Only TDD can do this.

Tests ensure quality design.

As anyone who has tried to add tests to a legacy system after the fact can tell you, bad design is notoriously hard to test. If tests precede development, then design flaws are made clear early, in the painful process of trying to test them. In other words, TDD will tell you early if your design is weak, because the pain you feel is a diagnostic tool, as all pain is. Adequate training in design (design patterns training, for example) will ensure that the team understands what the design should be, and the tests will confirm when this has happened. Note that this is true whether the tests are actually written or not; it is the point of view that accompanies testability that drives toward better design. In this respect the actual tests become an extremely useful side product. That said, once it is determined how to test something, which ensures that it is indeed testable, the truly difficult work is done. One might as well write the tests…
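
As a sketch of how that pain shows up in practice (all names here are invented), compare a class that hides its dependencies with one that exposes them:

    // Hard to test: the class constructs its own collaborator, so a test
    // cannot run without a real mail server. The pain is the diagnosis.
    class SmtpMailer {
        SmtpMailer(String host) { /* stand-in for a real SMTP client */ }
        void send(String message) { /* ... */ }
    }

    class HardToTestReminderService {
        void sendReminders() {
            new SmtpMailer("smtp.example.com").send("Your appointment is tomorrow.");
        }
    }

    // Easier to test -- and better designed: the dependency is injected,
    // so a test can substitute a fake, and the class no longer knows or
    // cares how mailers are constructed.
    interface Mailer {
        void send(String message);
    }

    class ReminderService {
        private final Mailer mailer;

        ReminderService(Mailer mailer) {
            this.mailer = mailer;
        }

        void sendReminders() {
            mailer.send("Your appointment is tomorrow.");
        }
    }

The effort of making the first version testable forces exactly the design improvement the second version shows.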

What TDD does not do, in either its ATDD or UTDD form, is replace traditional testing. The quality control/quality assurance process that has traditionally followed development is still needed, as TDD will not test all aspects of the system, only those needed to create it. Usability, scalability, security, and so on still need to be ensured by traditional testing. What TDD does do is contribute some of the tests needed by QA, but certainly not all of them.

There is another benefit to the adoption of TDD, one of healthy culture. In many organizations, developers view the testing effort as a source of either no news (the tests confirm the system is correct) or bad news (the tests point out flaws). Similarly, testers view the developers as a source of myriad problems they must detect and report.

When TDD is adopted, developers gain a clearer understanding of the benefits of testing from their own perspective. Indeed, TDD can become a strongly preferred way of working for developers because it leads to a kind of certainty and confidence that they are unaccustomed to, and crave. Testers, on the other hand, begin to see the development effort as a source of many of the tests that they previously had to retrofit onto the system. This frees them up to add the more interesting, more sophisticated tests (those that require their experience and knowledge) which otherwise often end up being cut from the schedule for lack of time. This, of course, leads to better and more robust products overall.

Driving development from tests initially seemed like an odd idea to most who heard of it. The truth is, it makes perfect sense. It is always important to understand what you are going to build before you build it, and tests are a very good way to ensure that you do, and that everyone is on the same page. But tests deliver more value than this: they can also be used to update the system efficiently, and to preserve the knowledge that existed when the system was created for months, years, even decades into the future. They ensure that the value of the work done will be persistent value, in complete alignment with the forces that make the business thrive.

TDD helps everyone.

Intro
Part 1
Part 2
Part 3
Part 4

Monday, August 13, 2018

Test-Driven Development in the Larger Context: Pt 3. Automation

Both ATDD and UTDD are automatable. The difference has to do with the role of such automation, how critical and valuable it is, and when the organization should put resources into creating it.

AUTOMATION

ATDD


ATDD’s value comes primarily from the deep collaboration it engenders, and the shared understanding that emerges from that effort. The health of the organization in general, and specifically the degree to which the development effort is aligned with business value, will improve dramatically once the process is well understood and committed to by all. Training your teams in ATDD pays back in the short term. Excellent ATDD training pays back before the course is even over.

Automating your acceptance test execution is worthwhile, but not an immediate requirement. An organization can start ATDD without any automation and still get profound value from the process itself. For many organizations automation is too tough a nut to crack at the beginning, but this should not dissuade anyone from adopting ATDD and making sure everyone knows how to do it properly. The automation can be added later if desired, but even then, acceptance tests will not run particularly quickly. That is acceptable because they are not run very frequently, perhaps as part of a nightly build.

Also, it will likely not be at all clear, in the beginning, which form of automation should be used. There are many different ATDD automation tools and frameworks out there, and while any tool could be used to automate any form of expression, some tools are better than others given the nature of that expression. If a textual form, like Gherkin, is determined to be clearest and least ambiguous given the stakeholders involved, then an automation tool like Cucumber (Java) or SpecFlow (.NET) is a very natural and low-cost fit. If a different representation makes better sense, then another tool will be easier and cheaper to use.
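
For example, a Gherkin scenario binds to plain methods in the automation layer. The following is a hedged sketch using Cucumber's Java bindings; the scenario, step text, and class names are all invented:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Step definitions for a hypothetical scenario:
    //
    //   Scenario: Withdrawal reduces the balance
    //     Given an account with a balance of 100 dollars
    //     When the customer withdraws 40 dollars
    //     Then the balance is 60 dollars
    //
    public class WithdrawalSteps {
        private int balance;

        @Given("an account with a balance of {int} dollars")
        public void anAccountWithABalanceOf(int amount) {
            balance = amount;
        }

        @When("the customer withdraws {int} dollars")
        public void theCustomerWithdraws(int amount) {
            balance -= amount; // a real binding would call production code here
        }

        @Then("the balance is {int} dollars")
        public void theBalanceIs(int expected) {
            assertEquals(expected, balance);
        }
    }

Note that the Gherkin text comes first and the bindings follow from it, not the other way around.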

The automation tool should never dictate the way acceptance tests are expressed. It should follow. This may require the organization to invest in the effort to create its own tools, or enhancements to existing tools, but this is a one-time cost that will return on the investment indefinitely. In ATDD the clarity and accuracy of the expression of value is paramount; the automation is beneficial, but "nice to have."

 

UTDD


UTDD requires automation from the outset.  It is not optional and, in fact, without automation UTDD could scarcely be recommended.

Unit tests are run very frequently, often every few minutes, and thus if they are not efficient it will be far too expensive (in terms of time and effort) for the team to run them. Running a suite of unit tests should appear to cost the team nothing; this is obviously not literally true, but it should be close enough to true that the team can treat it as such.

Thus, unit tests must be extremely fast, and many aspects of UTDD training should ensure that the developers know how to make them extremely painless to execute. They must know how to manage dependencies in the system, and how to craft tests in such a way that they execute in the least time possible, without sacrificing clarity.
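
One common way to achieve this speed, sketched below with invented names, is to put slow collaborators (databases, web services, clocks) behind an interface so the test can substitute an instant, in-memory fake:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // The slow dependency (e.g., a currency-rate web service) hidden
    // behind an interface the production code depends on.
    interface RateSource {
        double currentRate(String currency);
    }

    class Converter {
        private final RateSource rates;

        Converter(RateSource rates) {
            this.rates = rates;
        }

        double toUsd(double amount, String currency) {
            return amount * rates.currentRate(currency);
        }
    }

    class ConverterTest {
        @Test
        void convertsUsingTheCurrentRate() {
            // A fake rate source: answers instantly, no I/O involved.
            RateSource fixedRates = currency -> 1.25;

            Converter converter = new Converter(fixedRates);

            assertEquals(125.0, converter.toUsd(100.0, "EUR"), 0.001);
        }
    }

Because nothing in the test touches the network, thousands of tests like this can run in seconds.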

Since unit tests are intimately connected to the system from the outset, most teams find that it makes sense to write them in the same programming language that the system itself is written in. This means that the developers do not have to do any context switching when moving back and forth between tests and production code, which they will do in a very tight loop.

Unlike ATDD, the unit-testing automation framework must be an early decision in UTDD, as it will match/drive the technology used to create the system itself. One benefit of this is that the skills developers have acquired for one purpose are highly valuable for the other. This value flows in both directions: writing unit tests makes you a better developer, and writing production code makes you a better tester.

Also, if writing automated unit tests is difficult or painful to the developers, this is nearly always a strong indicator of weakness in the product design. The details are beyond the scope of this article, but suffice it to say that bad design is notoriously hard to test, and bad design should be rooted out early and corrected before any effort is wasted on implementing it.

Intro
Part 1
Part 2
Part 3
Part 4

Tuesday, August 7, 2018

Test-Driven Development in the Larger Context: Pt 2. Cadence


Another key difference between ATDD and UTDD is the pace and granularity, or “cadence,” of the work. This difference is driven by the purpose of the activity, and by how the effort can deliver the maximum value with the minimum delay.

CADENCE

 

ATDD


Acceptance tests should be written at the start of the development cycle: during sprint planning in Scrum, as an example.  Enough tests are written to cover the entire upcoming development effort, plus a few more in case the team moves more quickly than estimates expect.  

If using a pull system, like Kanban, then the acceptance tests should be generated into a backlog that the team can pull from. The creation of these tests, again, is part of the collaborative planning process and should follow its pace exactly.

Acceptance tests start off failing, as a group, and then are run at regular intervals (perhaps as part of a nightly build) to allow the business to track the progress of the teams as they gradually convert them to passing tests. This provides data that allows management to forecast completion (the “burn-down curve”), which aids in planning.

The primary purpose of creating acceptance tests is the collaboration this engenders. The tests are a side effect (albeit an enormously beneficial one) of the process, which is engaged to ensure that all stakeholder concerns are addressed, all critical information is included, and that there is complete alignment between business prioritization and the upcoming development effort.

When training stakeholders to write such tests, the experience should be realistic; work should be done by teams that include everyone mentioned in the “Audience” blog that preceded this one, and they should ideally work on real requirements from the business.

 

UTDD


A single unit test is written and proved to fail by running it immediately.  Failure validates the test in that a test that cannot fail, or fails for the wrong reason, has no value.  The developer does not proceed without such a test, and without a clear understanding of why it is failing.

Then the production work is done to make this one test pass, immediately. The developer does not move on to write another test until it and all previously written tests are “green.” The guiding principle is that we never have more than one failing test at a time, and therefore the test is a process gate determining when the next user story (or similar artifact) can begin to be worked on.
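
A minimal sketch of one such red/green cycle, with invented names, might look like this in Java with JUnit 5:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Step 1 (red): write a single test and run it immediately. Before
    // Stack exists, this does not even compile; with a wrong size() it
    // fails for a known, understood reason -- which validates the test.
    class StackTest {
        @Test
        void newStackIsEmpty() {
            assertEquals(0, new Stack().size());
        }
    }

    // Step 2 (green): write just enough production code to make that one
    // test pass, then run the entire suite before writing the next test.
    class Stack {
        int size() {
            return 0; // the simplest thing that makes the current test pass
        }
    }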

When training developers to write these tests properly, we use previously derived examples to ensure that they understand how to avoid the common pitfalls that can plague this process: tests can become accidentally coupled to each other, tests can become redundant and fail in groups, and one test added to the system can cause other, older tests to fail. All of this is avoidable, but it requires that the developers who write the tests be given the proper set of experiences, in the right order, so that they are armed with the understanding necessary to ensure that the test suite, as it grows large, does not become unsustainable.
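
The first of those pitfalls, accidental coupling, is sketched below (all names invented): two tests share static state, so one can pass or fail depending on whether the other ran first:

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Shared mutable state: every test that touches this list is silently
    // coupled to every other test that touches it.
    class Registry {
        static final List<String> users = new ArrayList<>();
    }

    class AccidentallyCoupledTests {
        @Test
        void addingAUserGrowsTheRegistry() {
            Registry.users.add("alice");
            assertEquals(1, Registry.users.size()); // passes in isolation...
        }

        @Test
        void registryStartsEmpty() {
            assertEquals(0, Registry.users.size()); // ...fails if the test above ran first
        }
    }

The cure is fresh state for every test: build the registry in a @BeforeEach method, or avoid shared statics entirely.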

Intro
Part 1
Part 2
Part 3
Part 4