If you don’t automate acceptance tests?
Amr Elssamadisy reports on InfoQ that automated acceptance tests are “only used by a small minority of the community.” Is this true? If you and your team don’t use automated acceptance tests, please let me know how you handle regression tests as the application grows larger. You can leave a comment here or, if you’d rather not say it in public, email me directly.
OK, I know that “acceptance tests” is something of a misnomer. While they may provide a go/no-go indicator for the functionality of a user story, we all know that it’s possible for the application to pass the test and still not be what the Product Owner or Customer wants. You still need to show it to the Product Owner/Customer to get their acceptance. Bear with me, though, and let’s use this common term.
So, a Product Owner, a Developer, and a Tester walk into a bar and sit down to talk about something that the system under development should do. The Product Owner describes the user story. The Developer and Tester ask questions (and make suggestions) until they think they can answer the basic question, “How will I know that this story has been accomplished?”
No matter how or when it’s done, these three amigos (to borrow a term from my friends at Nationwide) must agree on this basic criterion or things will go wrong. Turning this agreement into an automated acceptance test (or three) gives it a precision that often tests the agreement itself, uncovering fuzziness or conflicting definitions in the words we use. Automated acceptance tests help us express our expectations.
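To make that concrete, here’s a minimal sketch of what such a test might look like in Python with pytest. The “orders of $50 or more ship free” story, the Order class, and the dollar amounts are all hypothetical illustrations, not from any real project:

```python
# A minimal sketch, assuming a hypothetical "orders of $50 or more
# ship free" story; Order and the amounts are illustrative inventions.

class Order:
    def __init__(self, subtotal):
        self.subtotal = subtotal

    def shipping_cost(self):
        # The precise criterion the three amigos agreed on:
        # exactly $50.00 qualifies for free shipping.
        return 0.00 if self.subtotal >= 50.00 else 5.95


def test_order_at_threshold_ships_free():
    assert Order(subtotal=50.00).shipping_cost() == 0.00


def test_order_below_threshold_pays_shipping():
    assert Order(subtotal=49.99).shipping_cost() == 5.95
```

Notice how writing the boundary case forces the question: does “over $50” include exactly $50.00? That’s exactly the kind of fuzziness a conversation alone can leave unresolved.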
If you don’t use automated acceptance tests, how do you clearly communicate desires (“requirements”) between the business, the developers, and the testers?
If your testing is manually executed, or automated using record-and-playback, then the testers have to wait until the developers think they’re done before they can start verifying the functionality. This puts the testers behind from the very beginning. It also delays feedback to the developers when the functionality doesn’t behave as expected, resulting in bug-fix cycles on code thought to be complete. Together, these delays slow down the pace of development.
It’s more valuable to automate those tests while the code is still being written. As development proceeds, you can see those tests start to pass, providing a clear indication of progress. If a developer writes code expected to make a particular test scenario work, but the test fails, then you can delve into the issue right away. Is there a mistake in the code, in the test, or just a lingering disagreement about what we intended to do? Automated acceptance tests express the growth of functionality in our application.
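Here’s a sketch of how that progress indicator can look, again assuming pytest and two hypothetical stories; login() is stubbed inline so the example is self-contained, and myapp.auth is an invented module standing in for code that hasn’t been written yet:

```python
# A sketch of acceptance tests as a progress indicator, assuming pytest
# and two hypothetical stories.
import pytest


def login(username, password):
    # Story: registered users can log in (already implemented;
    # stubbed here so the sketch runs on its own).
    return username == "pat" and password == "secret"


def test_registered_user_can_log_in():
    assert login("pat", "secret")


@pytest.mark.xfail(reason="password-reset story not yet implemented")
def test_user_can_reset_forgotten_password():
    from myapp.auth import reset_password  # hypothetical, not yet written
    assert reset_password("pat") is not None
```

A run reporting “1 passed, 1 xfailed” tells everyone at a glance which stories are done. When an expected failure unexpectedly passes, or a green test goes red, that’s the cue to look for a mistake in the code, the test, or the shared understanding.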
If you don’t use automated acceptance tests, how do you monitor the progress of development?
Once the functionality works, we want it to continue working, unless we expressly decide it should work a different way. If we want to know that it continues to work, we need to verify that. That means that we need to continue to check a growing amount of functionality. If that checking requires significant human effort, we’ll soon be overwhelmed by it and our progress will get slower and slower.
Computers are great at doing our repetitive grunt work. Yes, it’s a continually increasing job for them, too, but they’re usually faster than people, they can work longer hours, and they can easily scale to handle more work by adding hardware. If computers check that all of our tests still pass on a daily or more frequent basis, then we get rapid notification when we’ve accidentally broken an old feature. Automated acceptance tests express our confidence that the system continues to work.
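The scheduled check this implies can be as small as the following sketch, which assumes the acceptance suite lives under tests/acceptance/ and runs with pytest (both assumptions); a CI server or cron job would invoke it on whatever cadence you choose:

```python
# A minimal sketch of a scheduled regression run, assuming the
# acceptance suite lives under tests/acceptance/ and uses pytest.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "tests/acceptance/", "-q"]
)

# A nonzero exit code means a previously working feature may have
# broken; propagating it lets CI or cron alert the team right away.
sys.exit(result.returncode)
```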
If you don’t use automated acceptance tests, how do you maintain confidence that the system still works?
If you don’t use automated acceptance tests, please let me know the answers to these questions, especially the last one.