More on Automated Acceptance Testing
Jim Shore has posted a response to the reactions about his previous post on Acceptance Testing in which he defends the way he and the teams he coaches are working. About the same time, Lisa Crispin posted her thoughts on the topic.
As Lisa says,
I can’t tell you the one right way to test and develop software…. The one right way for your team to code and test will continually evolve over the years. In fact, I’m sorry to tell you that you’ll never find the one right way; it’s a moving target, but you’ll get closer and closer to it.
This is an incredibly important point! There may be many “wrong” ways—wrong in that they fail to achieve your objectives—but there is no “right way.” So I’m happy that Jim and his teams are able to achieve the results they want. I’m not saying they’re doing it wrong.
At the same time, I’ve seen teams struggle with the increasing burden of acceptance testing and the divergence of understanding between the Customer, the Developers, and the Testers. And so I continue to explore ways to alleviate that struggle.
Jim describes how the teams discuss concrete examples with the customer. They then automate those examples (or not) in their own unit and integration tests. Jim reports that these tests give the programmers confidence that the system is working as desired. For the customer’s confidence, Jim reports relying on “a reliable track record of shipping defect-free software.”
Jim’s argument is that the customer won’t trust the test results before trusting the team. This is true. If the customer thinks the developers will fake the test results, then it doesn’t matter how the tests are expressed; you’ve got a serious problem.
I’ve not experienced this level of distrust. Instead, I’ve experienced customers who trust that the developers are working in good faith, but still don’t trust that the developers will ship defect-free software. And perhaps the developers won’t ship defect-free software. They’re fallible, just like the rest of us humans. Maybe they misunderstood the customer’s example. Or they made an error converting it to code. Or they didn’t automate that example because they thought it was the same as another that they did automate. Or…
Mistakes happen. Miscommunication happens. It’s great that Jim’s developers have that zero bugs attitude, but the customer has the right and obligation to participate in achieving zero bugs. It can be condescending to take all the responsibility for correctness.
Yes, the customer gets to review the completed software, but we can detect misunderstandings sooner if the customer can read the acceptance tests. Yes, the developers may produce defect-free software yet miss a test case, so that nobody notices right away when a defect is introduced into this feature later. I want the customer to be able to read the test and say, “Yes, that’s what I meant.” I want the customer to be able to say, “I’ve thought about another case we don’t have covered.” Or “We’re changing our business rules and this example is no longer correct.” Or anything else that’s appropriate. I want the customer to share that zero bugs attitude.
But is this practice essential? Obviously not, as Jim and his teams are successful without it. Many teams transitioning to agile practices probably have too many other, more fundamental practices to learn, and don’t yet have the energy to tackle this one.
Is this practice too expensive in terms of effort? Maybe. A lot of people are working on making it less expensive. If you don’t make it an integral part of the conversations when you first start discussing the story, then it’s likely to seem too expensive. Then it’s like writing unit tests after the code rather than doing TDD. To make it work, I think you need to make the customer examples flow from the initial conversation through signaling “done” for the developer. You’ve got to work at making them maintainable, just like the code in the system you’re delivering.
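To make the idea concrete, here is a minimal sketch of what “customer examples flowing through to done” might look like in code. All of the names and values (the shipping rule, the dollar amounts) are invented for illustration; the point is that the examples table reads the way the customer stated them in conversation, and the automated check simply replays that table.

```python
# Hypothetical sketch: customer examples captured in a plain table that
# both the customer and the automated test read. The shipping rule and
# all example values are invented for this illustration.

# Examples as the customer stated them:
# (order total, destination, expected shipping charge)
CUSTOMER_EXAMPLES = [
    (25.00, "domestic", 4.95),        # small domestic order pays flat rate
    (100.00, "domestic", 0.00),       # orders of $100 or more ship free
    (25.00, "international", 19.95),  # international orders pay more
]

def shipping_cost(order_total, destination):
    """Production rule under test (simplified for the sketch)."""
    if destination == "domestic":
        return 0.00 if order_total >= 100.00 else 4.95
    return 19.95

def check_customer_examples():
    """Replay every customer example against the production rule."""
    for total, destination, expected in CUSTOMER_EXAMPLES:
        assert shipping_cost(total, destination) == expected, (
            f"Example failed: {total}, {destination}, expected {expected}"
        )

check_customer_examples()
```

Because the examples live in one readable table rather than being scattered through test methods, the customer can scan it and say “that row is wrong now, our rules changed,” and the change flows straight into the definition of done.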
This is just one small practice in a forest of useful practices. The forest can survive without any single one. We can almost always find ways to compensate for things we’re not doing. That’s a double-edged sword, however. It can also allow us to fritter away our advantages until we’re working in the same old way we’ve always done. It can blind us to some of the process improvements we could make. Jim’s description of “fix the process” is:
Every escaped defect, whether found in exploratory testing or found by end-users, is an indication of a flaw in the process. When we find one, we fix our process.
The first thing is to analyze the defect, write a test that reproduces it, and fix it. While we’re fixing it, we look at the design of the code and see if it needs improvement, too.
Next, we conduct Root-Cause Analysis. We ask ourselves, “What about our process allowed this defect to escape?” We continue asking “why” until we get to the root cause. Once we find it, we make changes to prevent that entire category of defects from happening again.
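The first step above, writing a test that reproduces the escaped defect, might look something like this sketch. The scenario (a 100% discount that once produced a slightly negative price) and all names are hypothetical, not taken from Jim’s teams.

```python
# Hypothetical sketch of "write a test that reproduces the defect."
# Suppose an end-user reported that a 100% discount yielded a negative
# price. The function and scenario are invented for illustration.

def discounted_price(price, percent_off):
    # Fixed version: clamp so rounding error can never drive the
    # result below zero, which was the escaped defect.
    discounted = price * (1 - percent_off / 100)
    return max(round(discounted, 2), 0.00)

def test_full_discount_never_goes_negative():
    # This test failed before the fix and now guards the fix forever.
    assert discounted_price(19.99, 100) == 0.00

def test_partial_discount_still_works():
    # Regression guard: the fix must not change ordinary cases.
    assert discounted_price(100.00, 25) == 75.00

test_full_discount_never_goes_negative()
test_partial_discount_still_works()
```

The test stays in the suite permanently, so the same defect cannot escape a second time without being noticed; the root-cause analysis then asks why no such test existed in the first place.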
Note that “fix the process” could include adding a process of creating Customer Readable Automated Acceptance Tests. They’re a tool that I want to keep in my toolbox.