Having established the differences between ATDD and UTDD, how are they the same? Why are they both called “TDD” collectively?
Tests ensure a detailed and specific, shared understanding of behavior.
If properly trained, those who create these tests will subject their knowledge to a rigorous and unforgiving standard; it is hard, if not impossible, to create a high-quality test about something you do not know enough about. It is also unwise to try to implement behavior before you have that understanding. TDD ensures that you have sufficient and correct understanding by placing tests in the primary position. This is equally true for acceptance tests as it is for unit tests; only the audience for the conversation is different.
Tests capture knowledge that would otherwise be lost.
Many organizations we encounter in our consulting practice have large, complex legacy systems that no one understands very well. The people who designed and built them have often retired or moved on to other positions, and the highly valuable knowledge they possessed left with them. If tests are written to capture this knowledge (which, again, requires that those who write them are properly trained in this respect), then not only is this knowledge retained, but its accuracy can also be verified at any time in the future simply by executing the tests. This is true whether the tests are automated or not, but obviously the automation is a big advantage here. This leads us to view tests, when written up front, as specifications. They hold the value that specifications hold, but add the ability to verify accuracy in the future.
Furthermore, if any change to the system is required, TDD mandates that the tests are updated before the production code as part of the cadence of the work. This ensures that the changes are correct, and that the specification never becomes outdated. Only TDD can do this.
Tests ensure quality design.
As anyone who has tried to add tests to a legacy system after the fact can tell you, bad design is notoriously hard to test. If tests precede development, then design flaws are made clear early, in the painful process of trying to test them. In other words, TDD will tell you early if your design is weak, because the pain you feel is a diagnostic tool, as all pain is. Adequate training in design (design patterns training, for example) will ensure that the team understands what the design should be, and the tests will confirm when this has happened. Note that this is true whether tests are written or not; it is the point of view that accompanies testability that drives toward better design. In this respect the actual tests become an extremely useful side product. That said, once it is determined how to test something, which ensures that it is indeed testable, then the truly difficult work is done. One might as well write the tests…
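To make the "pain as diagnostic" point concrete, here is a minimal sketch with invented class names (they do not come from any particular system). The first class hides a concrete dependency and is painful to test; the second accepts the dependency as an abstraction, which makes it both testable and more loosely coupled.

// Painful to test: the class constructs its own collaborator, so a test cannot
// run without the real (slow, external) payment gateway. That pain is the signal.
public class OrderProcessor
{
    public bool Process(decimal amount)
    {
        var gateway = new LivePaymentGateway(); // hidden, concrete dependency
        return gateway.Charge(amount);
    }
}

public class LivePaymentGateway
{
    public bool Charge(decimal amount) { return true; } // imagine a network call here
}

// Easier to test: the dependency is an abstraction supplied from outside, so a
// test can hand in a fake, and the production design is better for it.
public interface PaymentGateway
{
    bool Charge(decimal amount);
}

public class TestableOrderProcessor
{
    private readonly PaymentGateway gateway;

    public TestableOrderProcessor(PaymentGateway gateway)
    {
        this.gateway = gateway;
    }

    public bool Process(decimal amount)
    {
        return gateway.Charge(amount);
    }
}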
What TDD does not do, neither in terms of ATDD nor UTDD, is replace traditional testing. The quality control/quality assurance process that has traditionally followed development is still needed as TDD will not test all aspects of the system, only those needed to create it. Usability, scalability, security, and so on still need to be ensured by traditional testing. What TDD does do is contribute some of the tests needed by QA, but certainly not all of them.
There is another benefit to the adoption of TDD, one of healthy culture. In many organizations, developers view the testing effort as a source of either no news (the tests confirm the system is correct) or bad news (the tests point out flaws). Similarly, testers view the developers as a source of myriad problems they must detect and report.
When TDD is adopted, developers have a clearer understanding of the benefits of testing from their perspective. Indeed, TDD can become a strongly-preferred way to work by developers because it leads to a kind of certainty and confidence that they are unaccustomed to, and crave. On the other hand, testers begin to see the development effort as a source of many of the tests that they, in the past, had to retrofit onto the system. This frees them up to add the more interesting, more sophisticated tests (that require their experience and knowledge) which otherwise often end up being cut from the schedule due to lack of time. This, of course, leads to better and more robust products overall.
Driving development from tests initially seemed like an odd idea to most who heard of it. The truth is, it makes perfect sense. It’s always important to understand what you are going to build before you build it, and tests are a very good way to ensure that you do and that everyone is on the same page. But tests deliver more value than this; they can also be used to efficiently update the system, and to capture the knowledge that existed when the system was created, months, years, even decades in the future. They ensure that the value of the work done will be persistent value, and in complete alignment with the forces that make the business thrive.
TDD helps everyone.
Wednesday, August 15, 2018
TDD Mark 3, part 2
I realized recently that this had been written, but never published. Part 1 was, but never this second part. Not sure how that happened. Maybe I needed a test. :)
Anyway, here it is. Part three is still pending.
-Scott-
Expanding the thesis
Our central thesis thus far has centered on the notion that TDD is not really about testing, it is really about specification. But we must also make a distinction between what TDD is and what it does. Test-Driven Development is definitely a phrase that describes an action, if one focuses on the word “driven”.
What does TDD drive? It drives development. What is development?
Traditionally we have considered the creation of software to consist of a sequence of phases, usually something like:
- Analysis
- Design
- Construction (coding)
- Inspection (testing)
TDD, being a distinctly agile methodology, must therefore concern itself with all aspects of development.
The analysis aspect of TDD is the reason we can consider the test suite to form a technical specification, and we can certainly say TDD drives us toward this by the simple fact that you cannot write a test about something you do not understand. Automated tests are very unforgiving, and require a level of detailed understanding in order to create them. Thus, they require rigorous analysis.
We like to say that the best specification “forces” you to write the correct code. In investigating this fully (which we will do in a future blog) we’ll see that the tests we write, if done in the proper and complete way, do exactly this. You cannot make the tests pass unless you write the right code. Thus TDD leads to construction.
Also, while we do not write our tests for testing purposes, but rather as the spec that leads to the implementation code, we do not discard the tests once the code is complete. They have, in essence, a second life where they provide a second value. They become tests once we are done using them to create the system. So TDD does apply to testing as well. There may be other tests we write, but the TDD suite does contribute to the testing needs of the team.
That leaves design. Can TDD also be said to apply to design? Could TDD also be “Test-Driven Design”, in other words? We say yes, decidedly so. Much of what will follow in future blogs will demonstrate this.
But this integration of the test-writing activity into all aspects of software development means that the test suite itself becomes essentially part of the source code. We must consider the tests to be first class citizens of the project, and thus we must also address ourselves to the design of the tests themselves. They must be well-designed in order to be maintainable, and this is a critical issue when it comes to conducting TDD in a sustainable way, which is a clear focus of this blog series.
“Good” design
How does one define a good design? This is not a trivial question. Some would say that looking to the Design Patterns can provide excellent examples of good design. Some would say that attending to a rubric like SOLID (Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion) can provide the guidance we need to produce high-quality designs. We agree with these ideas, but also with the notion of the separation of concerns.
Martin Fowler, in his book “UML Distilled”, suggested that one way to approach this is to fundamentally ensure that the abstract aspects (what he called the “conceptual perspective”) of the system should not be intermixed with the specific way those concepts are executed (what he called the “implementation perspective”).
Let’s examine a counter example, where we do not follow this advice and mix these two perspectives.
Let’s say we have an object that allows us to communicate via a USB port. We’ll call it USBConnection, and we’ll give it send() and receive() methods. Let’s furthermore say that, sometime after this object has been developed, we have a new requirement to create a similar object, but that we need to also ensure that any packet sent over the port is verified to be well-formed; otherwise we throw a BadPacketException. In the past, when we considered OO to be primarily focused on the notion of object reuse, we might have suggested something like this:
Figure 1: “Reusing” the USBConnection by deriving a new type from it
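In code, the Figure 1 design looks roughly like this. It is an illustrative sketch only: the article gives just the class names, so the Packet type, the IsWellFormed() check, and the method bodies are invented here, and C# naming conventions are used for the send/receive operations.

using System;

public class Packet
{
    public bool IsWellFormed() { return true; } // placeholder for real validation
}

public class BadPacketException : Exception { }

// USBConnection is both the concept of a connection and a concrete implementation.
public class USBConnection
{
    public virtual void Send(Packet packet) { /* write to the port */ }
    public virtual Packet Receive() { return new Packet(); /* read from the port */ }
}

// "Reuse" by inheritance: the verified behavior is bolted onto the existing implementation.
public class VerifiedUSBConnection : USBConnection
{
    public override void Send(Packet packet)
    {
        if (!packet.IsWellFormed())
        {
            throw new BadPacketException();
        }
        base.Send(packet);
    }
}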
This can produce problems.
First, any change to USBConnection can also propagate down to VerifiedUSBConnection, whether that is appropriate/desired or not. The opposite, however, is not true. We can make changes to the verified version with complete confidence that these changes will have no effect on the original class.
Second, one can create an instance of VerifiedUSBConnection and, perhaps accidentally, cast it to the base type. It will appear, in the code, to be the simple USBConnection, which never throws an exception, but this will not be true. The reverse, however, is impossible. We cannot cast an instance of USBConnection to type VerifiedUSBConnection and then compile the code successfully.
If we do this very much, we end up with a vague, muddy, confusing architecture, where changes and errors propagate in hard-to-predict ways, and where we simply have to remember that certain issues are of concern where others are not, because the design does not explicitly control coupling.
But Fowler’s guidance would also lead us away from using inheritance like this, because the class USBConnection is essentially forming an interface which is implemented by VerifiedUSBConnection, while also being an implementation itself. It is both conceptual and an implementation; we have not separated these perspectives in this object. If we want to completely separate the conceptual part of the system from its implementation, we would be forced to design it differently:
Figure 2: Two ways of separating concept from implementation
In the first design, USBConnection is a conceptual type (interface, abstract class, pure virtual class, something along those lines) with two different implementing versions. The conceptual type is only conceptual and the implementing types are only implementations; there is no mixing.
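The first design might be sketched as follows. Again this is only illustrative, reusing the Packet and BadPacketException types from the earlier sketch; the article specifies only that USBConnection is purely conceptual with two implementing types.

// The concept: purely abstract, nothing but the idea of a USB connection.
public interface USBConnection
{
    void Send(Packet packet);
    Packet Receive();
}

// One implementation: sends without verification.
public class NonVerifiedUSBConnection : USBConnection
{
    public void Send(Packet packet) { /* write to the port */ }
    public Packet Receive() { return new Packet(); /* read from the port */ }
}

// The other implementation: verifies the packet, then sends.
public class VerifiedUSBConnection : USBConnection
{
    public void Send(Packet packet)
    {
        if (!packet.IsWellFormed())
        {
            throw new BadPacketException();
        }
        /* then write to the port */
    }

    public Packet Receive() { return new Packet(); /* read from the port */ }
}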
In the second design (which, if you are familiar with patterns, is a combination of the Strategy Pattern with the Null Object Pattern), the concept of PacketVerifier is represented by a type that is strictly conceptual, whereas the two kinds of verifiers (one which performs the verification and one which does nothing at all) are only implementations; there is no mixing.
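And the second design, in the same sketch form, as an alternative to the one above. The names of the two concrete verifiers are invented here; the article names only the PacketVerifier concept.

// The concept of verification, strictly conceptual.
public interface PacketVerifier
{
    void Verify(Packet packet); // throws BadPacketException if the packet is malformed
}

// One verifier that actually performs the check...
public class WellFormedPacketVerifier : PacketVerifier
{
    public void Verify(Packet packet)
    {
        if (!packet.IsWellFormed())
        {
            throw new BadPacketException();
        }
    }
}

// ...and one that deliberately does nothing at all (the Null Object).
public class NullPacketVerifier : PacketVerifier
{
    public void Verify(Packet packet) { }
}

// A single connection class delegates to whichever verifier it is given (the Strategy).
public class USBConnection
{
    private readonly PacketVerifier verifier;

    public USBConnection(PacketVerifier verifier)
    {
        this.verifier = verifier;
    }

    public void Send(Packet packet)
    {
        verifier.Verify(packet);
        /* write to the port */
    }
}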
Either way (and we will examine which of these we prefer, and why, in a later blog) we have created this separation of concerns. In the first design, a change to NonVerifiedUSBConnection will never propagate to VerifiedUSBConnection, and the same is true in the reverse. Instances of neither of the implementing types can be accidentally cast to the other. In the second design, these qualities are the same for the PacketVerifier implementations.
Design quality is all about maintainability, about the ability to add, modify, change, scale, and extend without excessive risk and waste. If our tests are first-class citizens in our system, they must be well-designed too.
Let’s look back at a piece of code from TDD Mark 3 Introduced:
[TestMethod]
public void TestLoadingLessThanMinimalFundsThrowsException()
{
    // Loading the minimal amount succeeds; one whole dollar less must be rejected.
    LoadInitialFunds(MinimalFunds());
    uint insufficientFunds = MinimalFunds() - 1;
    try
    {
        LoadInitialFunds(insufficientFunds);
        Assert.Fail("Card should have thrown a " +
            typeof(Account.InsufficientFundsException).Name);
    }
    catch (Account.InsufficientFundsException exception)
    {
        Assert.AreEqual(insufficientFunds, exception.Funds());
    }
}

private uint MinimalFunds()
{
    return Account.MINIMAL_FUNDS;
}

private void LoadInitialFunds(uint funds)
{
    Account account = new Account(funds);
}
The public method (marked [TestMethod]) expresses the specification conceptually: the concept of loading funds, the notion of “minimal funds,” and the idea that a whole dollar is the epsilon of the behavioral boundary together comprise the conceptual perspective. The fact that “minimal funds” is a constant on the Account class, and the fact that the fund-loading behavior is done by the constructor of Account, are implementation details that could be changed without the concepts being affected.
For example, we may later decide to store the minimal funds in a database, to make it configurable. We may decide to validate the minimum level in a service object that Account uses, or we could build Account in a factory and allow the factory to validate that the funds are sufficient. These changes would impact, in each case, a single private method on the test, and the conceptual public method would be unchanged.
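For instance, if we took the factory option, only the private helper would change. The AccountFactory here is hypothetical, used purely to illustrate that the conceptual test method above stays exactly as it is.

// Hypothetical: validation has moved into a factory; only this helper changes,
// and TestLoadingLessThanMinimalFundsThrowsException is untouched.
private void LoadInitialFunds(uint funds)
{
    Account account = AccountFactory.Create(funds); // was: new Account(funds)
}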
This is the next step in sustainability, and we will be investigating many aspects of it. How will it change the way we write tests? How will it change dependency management? Should these private methods actually be extracted into a separate class? If so, when and why would we decide to do that?
We’d love to hear from you….
Monday, August 13, 2018
Test-Driven Development in the Larger Context: Pt 3. Automation
Both ATDD and UTDD are automatable. The difference has to do with the role of such automation, how critical and valuable it is, and when the organization should put resources into creating it.
AUTOMATION
ATDD
ATDD’s value comes primarily from the deep collaboration it engenders, and the shared understanding that comes from this effort. The health of the organization in general, and specifically the degree to which the development effort is aligned with business value, will improve dramatically once the process is well understood and committed to by all. Training your teams in ATDD pays back in the short term. Excellent ATDD training pays back before the course is even over.
Automating your acceptance test execution is worthwhile, but not an immediate requirement. An organization can start ATDD without any automation and still get profound value from the process itself. For many organizations automation is too tough a nut to crack at the beginning, but this should not dissuade anyone from adopting ATDD and making sure everyone knows how to do it properly. The automation can be added later if desired, but even then acceptance tests will not run particularly quickly. That is acceptable because they are not run very frequently, perhaps as part of a nightly build.
Also, it will likely not be at all clear, in the beginning, what form of automation should be used. There are many different ATDD automation tools and frameworks out there, and while any tool could be used to automate any form of expression, some tools are better than others given the nature of that expression. If a textual form, like Gherkin, is determined to be clearest and least ambiguous given the nature of the stakeholders involved, then an automation tool like Cucumber (Java) or Specflow (.Net) is a very natural and low-cost fit. If a different representation makes better sense, then another tool will be easier and cheaper to use.
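To give a sense of how low the cost can be when Gherkin fits, here is a rough sketch of what a SpecFlow-style binding in .NET typically looks like. The scenario wording, the step class, and the reuse of the Account example from the TDD Mark 3 posts are all invented for illustration, not taken from any real project.

using System;
using TechTalk.SpecFlow;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Feature file text, readable by any stakeholder:
//   Given a card loaded with the minimal initial funds
//   When the customer tries to load one dollar less than the minimum
//   Then the load is rejected

[Binding]
public class LoadFundsSteps
{
    private Exception thrown;

    [Given("a card loaded with the minimal initial funds")]
    public void GivenACardLoadedWithTheMinimalInitialFunds()
    {
        new Account(Account.MINIMAL_FUNDS);
    }

    [When("the customer tries to load one dollar less than the minimum")]
    public void WhenTheCustomerTriesToLoadOneDollarLess()
    {
        try { new Account(Account.MINIMAL_FUNDS - 1); }
        catch (Exception e) { thrown = e; }
    }

    [Then("the load is rejected")]
    public void ThenTheLoadIsRejected()
    {
        Assert.IsInstanceOfType(thrown, typeof(Account.InsufficientFundsException));
    }
}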
The automation tool should never dictate the way acceptance tests are expressed. It should follow. This may require the organization to invest in the effort to create its own tools or enhancements to existing tools, but this is a one-time cost that will return on the investment indefinitely. In ATDD the clarity and accuracy of the expression of value is paramount; the automation is beneficial and "nice to have."
UTDD
UTDD requires automation from the outset. It is not optional and, in fact, without automation UTDD could scarcely be recommended.
Unit tests are run very frequently, often every few minutes, and thus if they are not efficient it will be far too expensive (in terms of time and effort) for the team to run them. Running a suite of unit tests should appear to cost nothing to the team; this is not literally true, of course, but it should be a reasonable attitude to hold.
Thus, unit tests must be extremely fast, and much of UTDD training should ensure that developers know how to make them painless to execute. They must know how to manage dependencies in the system, and how to craft tests in such a way that they execute in the least time possible, without sacrificing clarity.
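A minimal sketch of the kind of dependency management this implies, with invented names: the slow collaborator is hidden behind an abstraction, and the test substitutes a hand-rolled fake so the whole thing runs in memory.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Production code depends on an abstraction rather than on a real, slow resource.
public interface ExchangeRateSource
{
    decimal RateFor(string currency);
}

public class PriceConverter
{
    private readonly ExchangeRateSource rates;

    public PriceConverter(ExchangeRateSource rates) { this.rates = rates; }

    public decimal ToLocal(decimal amount, string currency)
    {
        return amount * rates.RateFor(currency);
    }
}

[TestClass]
public class PriceConverterTest
{
    // A hand-rolled fake: no network, no database, so the test stays fast.
    private class FixedRateSource : ExchangeRateSource
    {
        public decimal RateFor(string currency) { return 2.0m; }
    }

    [TestMethod]
    public void ConvertsUsingTheProvidedRate()
    {
        var converter = new PriceConverter(new FixedRateSource());
        Assert.AreEqual(20.0m, converter.ToLocal(10.0m, "EUR"));
    }
}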
Since unit tests are intimately connected to the system from the outset, most teams find that it makes sense to write them in the same programming language that the system itself is being written in. This means that the developers do not have to do any context switching when moving back and forth between tests and production code, which they will do in a very tight loop.
Unlike ATDD, the unit-testing automation framework must be an early decision in UTDD, as it will match/drive the technology used to create the system itself. One benefit of this is that the skills developers have acquired for one purpose are highly valuable for the other. This value flows in both directions: writing unit tests makes you a better developer, and writing production code makes you a better tester.
Also, if writing automated unit tests is difficult or painful to the developers, this is nearly always a strong indicator of weakness in the product design. The details are beyond the scope of this article, but suffice it to say that bad design is notoriously hard to test, and bad design should be rooted out early and corrected before any effort is wasted on implementing it.
Tuesday, August 7, 2018
Test-Driven Development in the Larger Context: Pt 2. Cadence
Another key difference between ATDD and UTDD is the pace and granularity, or “cadence,” of the work. This difference is driven by the purpose of the activity, and by how the effort can drive the maximum value with the minimum delay.
CADENCE
ATDD
Acceptance tests should be written at the start of the development cycle: during sprint planning in Scrum, as an example. Enough tests are written to cover the entire upcoming development effort, plus a few more in case the team moves more quickly than estimates expect.
If using a pull system, like Kanban, then the acceptance tests should be generated into a backlog that the team can pull from. The creation of these tests, again, is part of the collaborative planning process and should follow its pace exactly.
Acceptance tests start off failing, as a group, and then are run at regular intervals (perhaps as part of a nightly build) to allow the business to track the progress of the teams as they gradually convert them to passing tests. This provides data that allows management to forecast completion (the “burn down curve”), which aids in planning.
The primary purpose of creating acceptance tests is the collaboration this engenders. The tests are a side effect (albeit an enormously beneficial one) of the process, which is engaged to ensure that all stakeholders are addressed, all critical information is included, and that there is complete alignment between business prioritization and the upcoming development effort.
When training stakeholders to write such tests, the experience should be realistic; work should be done by teams that include everyone mentioned in the “Audience” blog that preceded this one, and they should ideally work on real requirements from the business.
UTDD
A single unit test is written and proved to fail by running it immediately. Failure validates the test in that a test that cannot fail, or fails for the wrong reason, has no value. The developer does not proceed without such a test, and without a clear understanding of why it is failing.
Then the production work is done to make this one test pass, immediately. The developer does not move on to write another test until it and all previously written tests are “green.” The guiding principle is that we never have more than one failing test at a time, and therefore the test is a process gate determining when the next user story (or similar artifact) can begin to be worked on.
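Here is that rhythm in miniature, with an invented Discount example (not from the blog). The test is written and run first; at that point the production class below does not yet exist, so it fails for the right reason, and only then is just enough code written to turn it green.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DiscountTest
{
    // Step 1: written and run before the Discount class below exists,
    // so it fails first and validates itself.
    [TestMethod]
    public void TenPercentDiscountReducesOneHundredToNinety()
    {
        Assert.AreEqual(90m, new Discount(10).Apply(100m));
    }
}

// Step 2: just enough production code to make that single failing test pass,
// before moving on to the next test.
public class Discount
{
    private readonly int percent;

    public Discount(int percent) { this.percent = percent; }

    public decimal Apply(decimal amount)
    {
        return amount - (amount * percent / 100);
    }
}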
When training developers to write these tests properly, we use previously derived examples to ensure that they understand how to avoid the common pitfalls that can plague this process: tests can become accidentally coupled to each other, tests can become redundant and fail in groups, one test added to the system can cause other, older tests to fail. All of this is avoidable, but it requires that developers who write them are given the proper set of experiences, in the right order, so that they are armed with the necessary understanding to ensure that the test suite, as it grows large, does not become unsustainable.
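The first of those pitfalls in miniature, with invented names: two tests coupled through shared mutable state, so one of them passes only when the other has already run.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AccidentalCouplingExample
{
    // Shared mutable state is the source of the coupling.
    private static int counter = 0;

    [TestMethod]
    public void FirstTestIncrementsTheCounter()
    {
        counter++;
        Assert.AreEqual(1, counter);
    }

    [TestMethod]
    public void SecondTestSilentlyAssumesTheFirstAlreadyRan()
    {
        // Passes only if the first test ran before it; run alone, or in a
        // different order, it fails. Each test should arrange its own state.
        counter++;
        Assert.AreEqual(2, counter);
    }
}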
Thursday, July 26, 2018
Test-Driven Development in the Larger Context: Pt 1. Audience
One difference between ATDD and UTDD becomes clear when you examine who is involved in the process. We will call this the “Audience” for the process in this blog.
Audience
ATDD
Acceptance tests should be created by a gathering of representatives from every aspect of the organization: business analysts, product owners, project managers, legal experts, developers, marketing people, testers, end-users (or their representatives), etc.
ATDD is a framework for collaboration that ensures complete alignment of the development effort that is to come with the business values that should drive it. These business values are complex and manifold, and so a wide range of viewpoints must be included. Acceptance tests should be expressed in a way that can be written, read, understood, and updated by anyone in the organization. They should require no technical knowledge, and only minimal training.
The specific expression of an acceptance test should be selected based on the clarity of that form given the nature of the organization and its work. Many people find the “Given, When, Then” textual form (often referred to as “Gherkin”) to be the easiest to create and understand. Others prefer tables, or images, or other domain-specific artifacts.
We once worked with an organization that did chemical processing. We noted that in all their conversations and meetings, they used images like this in their PowerPoint slides and on the whiteboards, etc.:

To most people (including myself) these would not be easy to understand, but for this organization it was obvious and clear. For them, expressing their acceptance tests this way lowered the bar of comprehension.

Why make them convert this into some textual or other representation? Use their language, always.
Typical business analysts spend much of their time looking at spreadsheets, or Gantt charts, or candlestick charts, etc. The point is, once the stakeholders of a given piece of software are identified, then the form of the expression chosen should be whatever is clearest for them. They should be able to write the tests, read the tests, and update the tests as needed, without any understanding of computer code (unless the only stakeholders are literally other developers).
The notion of automating these tests should never drive their form. Any representation of acceptance can be made executable given the right tools, even if those tools must be created by the organization. Choosing, say, Robot or Fit or Specflow to automate your acceptance tests before you identify your stakeholders is putting the cart before the horse.
UTDD
Unit tests should be written by technical people: developers and sometimes testers as well. They are typically written in the computer language that will be used to develop the actual system, though there are exceptions to this. But in any case, only people with deep technical knowledge can write them, read them, and update them as requirements change.
To ensure the suite of tests itself does not become a maintenance burden as it grows, developers must be trained in the techniques that make the UTDD effort sustainable over the long haul. This includes test structure, test suite architecture, what tests to write and how to write them, tests that should be avoided, etc. Training of the development team must include these critical concepts, or the process rapidly becomes too expensive to maintain.