If you don’t automate acceptance tests?
Amr Elssamadisy reports on InfoQ that automated acceptance tests are “only used by a small minority of the community.” Is this true? If you and your team don’t use automated acceptance tests, please let me know how you handle regression tests as the application grows larger. You can leave a comment here or, if you’d rather not say it in public, email me directly.
OK, I know that “acceptance tests” is somewhat of a misnomer. While they may provide a go/no-go indicator for the functionality of a user story, we all know it’s possible for the application to pass the test and still not be what the Product Owner or Customer wants. You still need to show it to the Product Owner/Customer to get their acceptance. Bear with me, though, and let’s use this common term.
So, a Product Owner, a Developer, and a Tester walk into a bar and sit down to talk about something that the system under development should do. The Product Owner describes the user story. The Developer and Tester ask questions (and make suggestions) until they think they can answer the basic question, “How will I know that this story has been accomplished?”
No matter how or when it’s done, these three amigos (to borrow a term from my friends at Nationwide) must agree on these basic criteria or things will go wrong. Turning this agreement into an automated acceptance test (or three) gives it a precision that often tests the agreement and uncovers fuzziness or conflicting definitions in the words we use. Automated acceptance tests help us express our expectations.
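For instance, an agreed criterion might become executable tests like the following sketch, written here in pytest. The “withdraw cash” story, the Account stand-in, and the numbers are all hypothetical, invented purely for illustration:

```python
# A hypothetical "withdraw cash" story turned into executable
# acceptance criteria. The Account class is a stand-in defined here
# so the example runs on its own.
import pytest


class Account:
    """Stand-in domain object for the hypothetical story."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance():
    # Agreed criterion: withdrawing 30 from an account holding 100
    # leaves a balance of 70.
    account = Account(balance=100)
    account.withdraw(30)
    assert account.balance == 70


def test_overdraft_is_refused():
    # Agreed criterion: the balance never goes negative.
    account = Account(balance=100)
    with pytest.raises(ValueError):
        account.withdraw(150)
    assert account.balance == 100
```

Each test is one sentence of the three amigos’ agreement made precise enough to argue about.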
If you don’t use automated acceptance tests, how do you clearly communicate desires (“requirements”) between the business, the developers, and the testers?
If your testing is executed manually, or is automated using record-and-playback, then the testers have to wait until the developers think they’re done before they can start verifying the functionality. This puts the testers behind from the very beginning. It also delays feedback to the developers when the functionality doesn’t behave as expected, and results in bug-fix cycles on code thought to be complete. These things combine to slow down the pace of development.
It’s more valuable to automate those tests while the code is still being written. As development proceeds, you can see those tests start to pass, providing a clear indication of progress. If a developer writes code expected to make a particular test scenario work, but the test fails, you can delve into the issue right away. Is there a mistake in the code, in the test, or just a lingering disagreement about what we intended to do? Automated acceptance tests express the growth of functionality in our application.
If you don’t use automated acceptance tests, how do you monitor the progress of development?
Once the functionality works, we want it to continue working, unless we expressly decide it should work a different way. If we want to know that it continues to work, we need to verify that. That means that we need to continue to check a growing amount of functionality. If that checking requires significant human effort, we’ll soon be overwhelmed by it and our progress will get slower and slower.
Computers are great at doing our repetitive grunt work. Yes, it’s a continually increasing job for them, too, but they’re usually faster than people, they can work longer hours, and they can easily scale to handle more work by adding hardware. If computers are checking that all of our tests still pass on a daily or more frequent basis, then we get rapid notification when we’ve accidentally broken an old feature. Automated acceptance tests express our confidence that the system continues to work.
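As a minimal sketch of that kind of scheduled checking (the pytest command and the tests/acceptance directory are assumptions for illustration, not anything prescribed here):

```python
# A sketch of a scheduled regression check: run the acceptance suite
# and report loudly when an old feature breaks. The "pytest" command
# and the tests/acceptance directory are assumptions for illustration.
import subprocess
import sys
from datetime import datetime


def run_regression_suite():
    print(f"Regression run started {datetime.now():%Y-%m-%d %H:%M}")
    result = subprocess.run(
        ["pytest", "tests/acceptance", "-q"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        # A nonzero exit is the rapid notification that something
        # previously working has broken.
        print("Regression detected; see failures above.", file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_regression_suite())
```

Wired to a nightly scheduler or a continuous integration server, this is the machine doing the grunt work for us.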
If you don’t use automated acceptance tests, how do you maintain confidence that the system still works?
If you don’t use automated acceptance tests, please let me know the answers to these questions, especially the last one.
Good day George,
It has been my experience that there are a few different situations and reasons (not necessarily good or bad) why automated acceptance tests are not widely used.
* Automated acceptance testing tools have not had the exposure that xUnit and commercial test automation tools have had. I teach students how to use tools such as Fit and StoryTestIQ all the time. Before I showed them that these tools exist, they had no idea.
* Some teams automate acceptance-like tests through the use of xUnit tools and put less emphasis on customer involvement (see the sketch after this list). Unfortunately, many organizations I have worked with do not emphasize customer collaboration, or don’t know how it can work effectively. The thought of being involved in something so technical seems scary to them until they see how it provides them value.
* The gap between coders and testers is still quite wide outside of the small percentage of hard-core Agile teams. It is common for me to work with clients who have isolated silos of testers away from coders. Coders do not believe they should have anything to do with testing and the feeling is usually mutual from the tester’s point of view. Closing this gap is essential to taking on an acceptance testing mindset, in my opinion.
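As a sketch of the second bullet, an “acceptance-like test” written with an xUnit tool might look like the following; the shopping-cart story and every name in it are hypothetical:

```python
# A hypothetical "shopping cart" story checked through an xUnit-style
# tool (unittest). The check is automated, but it reads as code, so a
# non-technical customer is unlikely to review or extend it.
import unittest


class ShoppingCart:
    """Stand-in for the system under test."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class CartAcceptanceLikeTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("book", 10)
        cart.add("pen", 2)
        self.assertEqual(cart.total(), 12)


if __name__ == "__main__":
    unittest.main()
```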
The idea of confidence in work is a question of degree, in my experience working with teams. Some teams’ notion of what is possible in terms of increased “confidence” sets a much lower bar than that of teams I work on who are hard-core Scrum with XP technical practices and have been doing it for some time. It is easy to forget what it was like before I worked this way, and there is a large gap between those who have internalized Agile as a way of doing their work, those who are stumbling into it, and those who are unaware of it or resistant to it.
I have started using the term “Executable Design” for TDD plus additional tooling to guide design incrementally, and “Executable Specifications” to describe automated tests such as those created with Fit, RobotFramework, and StoryTestIQ. This helps me move past much of the baggage that “test” carries from dysfunctional industry views of the testing function. I then direct discussions toward the idea of “quality” using this terminology, and it seems to gain a foothold even in very traditional organizations. It is too bad that these kinds of word-plays matter so much, but sometimes change involves terminology, too.
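As a rough illustration of the shape of an “Executable Specification” (not how Fit, RobotFramework, or StoryTestIQ actually work, just the idea rendered in plain pytest, with a hypothetical discount rule):

```python
# A rough imitation of an executable specification in plain pytest:
# the EXAMPLES table is the customer-readable part, and one test turns
# each row into a check. The discount rule is hypothetical.
import pytest

# (order total, expected discount) -- rows a customer could review
EXAMPLES = [
    (50, 0),
    (100, 5),
    (500, 50),
]


def discount_for(total):
    """Stand-in implementation of the hypothetical discount rule."""
    if total >= 500:
        return total * 0.10
    if total >= 100:
        return total * 0.05
    return 0


@pytest.mark.parametrize("total,expected", EXAMPLES)
def test_discount_matches_agreed_examples(total, expected):
    assert discount_for(total) == pytest.approx(expected)
```

The table of examples carries the specification; the surrounding code is plumbing the customer never needs to read.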
Can’t wait to hear other people’s responses to this topic. I think it is quite interesting that automated acceptance testing is not more popular.
Perhaps one answer to this question may be found by taking a step back from acceptance testing and looking at all the work items that impact the overall Rate of Development.
The Rate of Development is a function of several key steps, of which acceptance testing is only one piece; see the approximation (A.1 through D.1) below. [This model may not match everyone’s experience, but it can work for a mature product undergoing typically bite-size changes, so as not to lose hard-core users or take too long between releases. New products would typically combine many steps, since they are given a broader mandate from management and have minimal legacy code and users.]
Rate of Development = Function of
[(A.1 rate of new marketable idea creation + buy-in from POs, CEOs, or the man/woman in the street) +
(A.2 rate of resource allocation – Scrum team(s) or other) +
(B.1 rate of requirements generation – user stories, formal spec, or other) +
(B.2 rate of coding and manual unit-test passing and/or automated acceptance-test passing – Agile or other means) +
(B.3 rate of help updates) +
(C.1 rate of combined build test passing) +
(C.2 rate of release test passing – mostly automated) +
(D.1 rate of product release – determined by business calendar and contractual obligations)]
Possible reasons why folks are not automating acceptance tests to cover the current release content:
1. Because workflow items other than manual unit testing take up so much organizational bandwidth and time. Perhaps those items play a far greater role in determining the Rate of Development, so there is no motivation to change.
2. Manually unit-testing the fresh code you just worked on can be quick and efficient. Existing automated tests are used for stale code, worked on earlier by you or others, that may not be well understood or is too vast. [Whether a mix of manual and automated acceptance testing is better than all-automated is open to debate.]
3. Automation is added after release “1.1”, to the acceptance tests for release “1.2”. This would cover only the major areas of user stories from release “1.1”. The work is done while activities A.1-B.1 are in progress for “1.2”, so no critical-path item is introduced.
I recommend checking out Elisabeth Hendrickson’s Lightning Talk from the Agile Alliance Functional Testing Tools (AAFTT) visioning workshop, October 2007. http://video.google.com/videoplay?docid=4949576318072329085
Hi George,
In cases where the team have only had automated recording available – or perhaps only the skills to use automated recording tools – we’ve performed the scenarios manually and watched them fail (for the right reasons) before we start coding. This helps us get the precision you talk about and gets the conversations going – and it can start as simply as going to a web page and getting a 404 where we “expected” our new screen (see my blog post about this: http://lizkeogh.com/2007/11/16/bdd-bug-driven-development/ )
After the code has been written, the recording tools can be used to record the pages in action.
In some ways I’ve found this even better than using automated frameworks, because it keeps the code easy to change throughout the development cycle, and even tests that are written rather than recorded are often easier to create once the code’s there.
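As a minimal sketch of that “fail for the right reason” step, with a hypothetical URL and the third-party requests library assumed:

```python
# Before any code exists, asking for the new screen should yield a
# 404 -- the "right reason" to fail. Once the screen is built, the
# same check passes. The URL is hypothetical; "requests" is the
# third-party HTTP library.
import requests


def test_new_report_screen_exists():
    response = requests.get("http://localhost:8000/reports/monthly")
    assert response.status_code == 200
```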
Note: TechWell published an article of mine describing the “Three Amigos” approach in more detail. See http://manage.techwell.com/articles/weekly/three-amigos
Edit: Now available at https://web.archive.org/web/20121126190702/http://manage.techwell.com/articles/weekly/three-amigos
Here we are, about 11 years later, and all I can say is: if you don’t automate acceptance tests, you’re f*cked…
Great article!
It’s interesting to explore different perspectives on automated acceptance tests. While some argue that they are used by only a small minority, your points about their importance in expressing expectations, facilitating communication among team members, and ensuring ongoing system reliability make a compelling case for their value. Your insights could provide valuable perspectives for others in the software development community.
It’s undeniable that automated acceptance tests offer a significant advantage in ensuring clarity and precision in expressing expectations between business stakeholders, developers, and testers. As you rightly pointed out, these tests serve as a beacon of progress, providing rapid feedback loops and helping maintain the pace of development.
Without automated acceptance tests, the challenge of clearly communicating desires and requirements becomes more pronounced, potentially leading to misunderstandings, delays, and a slower development cycle. The reliance on manual testing or record-and-playback automation not only slows down the verification process but also hampers the ability to monitor progress effectively.
The importance of automated acceptance tests in modern software development cannot be overstated. Not only do they serve as a precise indicator of whether a user story has been accomplished, but they also facilitate clear communication between the business, developers, and testers. Without automated acceptance tests, the process of verifying functionality becomes manual and prone to delays, hindering the pace of development.
Furthermore, automated acceptance tests enable continuous monitoring of development progress, providing immediate feedback when issues arise. They are essential for maintaining confidence in the system’s functionality, as they ensure that previously implemented features continue to work as expected.
In essence, automated acceptance tests streamline the development process, enhance communication, and bolster confidence in the system’s integrity. If automated acceptance tests are not utilized, it raises significant concerns regarding efficiency, reliability, and overall development velocity.