ATDD’s value comes primarily from the deep collaboration it engenders and the shared understanding that collaboration produces. The health of the organization in general, and specifically the degree to which development effort is aligned with business value, will improve dramatically once the process is well understood and committed to by all. Training your teams in ATDD pays back in the short term; excellent ATDD training pays back before the course is even over.
Automating your acceptance test execution is worthwhile, but not an immediate requirement. An organization can start ATDD without any automation and still get profound value from the process itself. For many organizations automation is too tough a nut to crack at the beginning, but this should not dissuade anyone from adopting ATDD and making sure everyone knows how to do it properly. The automation can be added later if desired, but even then acceptance tests will not run particularly quickly. That is acceptable because they are not run very frequently, perhaps as part of a nightly build.
Also, it will likely not be at all clear, in the beginning, what form of automation should be used. There are many different ATDD automation tools and frameworks out there, and while any tool could be used to automate any form of expression, some tools are better than others given the nature of that expression. If a textual form, like Gherkin, is determined to be the clearest and least ambiguous given the nature of the stakeholders involved, then an automation tool like Cucumber (Java) or SpecFlow (.NET) is a very natural and low-cost fit. If a different representation makes better sense, then another tool will be easier and cheaper to use.
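As an illustration of the textual form, a hypothetical acceptance criterion for a loyalty-discount rule might be expressed in Gherkin like this (the feature, statuses, and amounts are invented for the example):

```gherkin
Feature: Loyalty discount
  Scenario: Gold member receives a ten percent discount
    Given a customer with Gold status
    When the customer purchases an item priced at $100
    Then the total charged is $90
```

The value of this form is that every stakeholder can read it and dispute it before any code exists; a tool like Cucumber can later bind each step to automation without changing the words.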
The automation tool should never dictate the way acceptance tests are expressed. It should follow. This may require the organization to invest in the effort to create its own tools or enhancements to existing tools, but this is a one-time cost that will return on the investment indefinitely. In ATDD the clarity and accuracy of the expression of value is paramount; the automation is beneficial but, ultimately, "nice to have."
UTDD requires automation from the outset. It is not optional; in fact, without automation UTDD could scarcely be recommended.
Unit tests are run very frequently, often every few minutes, and thus if they are not efficient it will be far too expensive (in terms of time and effort) for the team to run them. Running a suite of unit tests should appear to cost nothing to the team; this is obviously not literally true, but the cost should be low enough that the team can reasonably behave as if it were.
Thus, unit tests must be extremely fast, and UTDD training should ensure that developers know how to make them painless to execute. They must know how to manage dependencies in the system, and how to craft tests in such a way that they execute in the least time possible, without sacrificing clarity.
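One common dependency-management technique is to put a slow collaborator (a remote service, a database) behind an interface and substitute an in-memory fake in the test, so the test runs in microseconds. A minimal sketch, with all names hypothetical:

```java
// Hypothetical example: isolating a slow dependency behind an interface
// so the unit test never touches the network.
interface RateProvider {
    double currentRate(String currency); // the real version might call a remote service
}

// In-memory test double: returns a canned value instantly.
class InMemoryRateProvider implements RateProvider {
    public double currentRate(String currency) { return 1.25; }
}

// The class under test depends on the abstraction, not the slow implementation.
class PriceCalculator {
    private final RateProvider rates;
    PriceCalculator(RateProvider rates) { this.rates = rates; }
    double priceIn(String currency, double basePrice) {
        return basePrice * rates.currentRate(currency);
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator(new InMemoryRateProvider());
        double price = calc.priceIn("GBP", 100.0);
        if (price != 125.0) throw new AssertionError("expected 125.0, got " + price);
        System.out.println("ok");
    }
}
```

Because the fake is deterministic and in-process, the test is both fast and unambiguous about what it verifies: the calculation, not the network.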
Since unit tests are intimately connected to the system from the outset, most teams find that it makes sense to write them in the same programming language that the system itself is being written in. This means that the developers do not have to do any context-switching when moving back and forth between tests and production code, which they will do in a very tight loop.
Unlike ATDD, the unit-testing automation framework must be an early decision in UTDD, as it must match, and will partly drive, the technology used to create the system itself. One benefit of this is that the skills developers acquire for one purpose are highly valuable for the other. This value flows in both directions: writing unit tests makes you a better developer, and writing production code makes you a better tester.
Also, if writing automated unit tests is difficult or painful for the developers, this is nearly always a strong indicator of weakness in the product design. The details are beyond the scope of this article, but suffice it to say that bad design is notoriously hard to test, and it should be rooted out early and corrected before any effort is wasted on implementing it.
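A small, hypothetical illustration of this signal: code that reaches out and reads the system clock directly cannot be tested deterministically, and the pain of testing it exposes the hidden dependency. Making the dependency explicit fixes both the test and the design:

```java
import java.time.LocalTime;

// Hard to test: the result depends on when the test happens to run.
class GreeterCoupled {
    String greeting() {
        return LocalTime.now().getHour() < 12 ? "Good morning" : "Good afternoon";
    }
}

// Testable: the dependency on time is explicit, so a test can pin it.
class Greeter {
    String greeting(LocalTime now) {
        return now.getHour() < 12 ? "Good morning" : "Good afternoon";
    }
}

public class GreeterTest {
    public static void main(String[] args) {
        Greeter greeter = new Greeter();
        if (!greeter.greeting(LocalTime.of(9, 0)).equals("Good morning"))
            throw new AssertionError("expected morning greeting");
        if (!greeter.greeting(LocalTime.of(15, 0)).equals("Good afternoon"))
            throw new AssertionError("expected afternoon greeting");
        System.out.println("ok");
    }
}
```

The untestable version was not merely hard to test; it was hiding a design decision. The testable version is also the better design, which is precisely the point.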