Testing that no errors remain

On the Test Driven Development YahooGroup, Alan Baljeu asked, “It is my impression that whenever a feature is added, there may be many things which are affected by adding that feature…. And by not considering those things, you introduce bugs into the software.” In response, he got two types of advice. The first was specific software-construction techniques that reduce the occurrence of errors by reducing the number of places that must change when a new feature is added.

The second was general testing advice. Alan admitted, “I don’t see a way to properly identify what new behaviour will need to be covered. I want to be sure I’m not forgetting cases, but currently I’m overlooking too many of these effects.” In other words, his current test coverage is not catching all the errors.

It is a truism that testing cannot guarantee the absence of errors, in spite of the title of this post. It also seems apparent, to me and to Alan, that Alan’s testing has gaps in it. I suspect that he’s trying to cover all the cases with a single set of tests. As he says, “I have acceptance tests, but it is extremely hard to be thorough. I find them difficult because they tend to follow this pattern:
Do A
Do B
Do C
The following 1000 things should now be true. (In practice I check about 6, which is not enough.)”
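
In code, that pattern might look like the sketch below. Everything in it is invented for illustration (a stub `Workflow` stands in for Alan’s system), but the shape is the point: a few steps, then a handful of assertions standing in for the thousand things that should now be true.

```python
class Workflow:
    """Invented stand-in for the system Alan describes."""
    def __init__(self):
        self.state = "new"
        self.pending_items = ["setup"]

    def do_a(self):
        self.state = "configured"

    def do_b(self):
        self.pending_items.clear()

    def do_c(self):
        self.state = "ready"


def test_feature_after_a_b_c():
    # "Do A, Do B, Do C"
    system = Workflow()
    system.do_a()
    system.do_b()
    system.do_c()
    # "The following 1000 things should now be true," but only a few
    # get asserted; that is exactly the coverage gap Alan describes.
    assert system.state == "ready"
    assert system.pending_items == []
```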

Peter Warshall, a master builder of watertight geodesic domes, once stated in The CoEvolution Quarterly that domes leak because they depend on a single technological skin that must be perfect. Conventional buildings have multiple layers of protection, so that the water that gets past the outer covering is caught by the next. Bruce Schneier says, “One of the basic philosophies of security is defense in depth: overlapping systems designed to provide security even if one of them fails.” The same approach seems to be the best for preventing bugs from reaching production.

First of all, Test Driven Development [TDD] is principally a design technique rather than a testing technique, despite its name. The fact that it leaves behind a suite of tests that may be used for regression testing is a bonus. The bare minimum of tests needed to drive the design, however, is unlikely to be sufficient to detect unwanted changes.
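
As a small, entirely hypothetical illustration (Python, with pytest as the assumed test runner): in TDD the test is written first and drives the design of the API; the minimal implementation follows. That one test is a welcome regression check afterward, but notice how little behavior it actually pins down.

```python
# Written first: this test drives the design of the API (the function's
# name, its argument, and its return value) before any code exists.
# Prices are in integer cents to keep the arithmetic exact.
def test_discount_applies_only_above_threshold():
    assert discounted_total(12000) == 10800   # 10% off orders over $100
    assert discounted_total(8000) == 8000     # no discount at or below $100


# Written second: the minimal implementation that makes the test pass.
def discounted_total(cents: int) -> int:
    return cents * 9 // 10 if cents > 10000 else cents
```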

The next step is to add “negative tests” to your unit test suites. In other words, add tests that verify that things that shouldn’t happen, don’t. This is more of a testing frame of mind than a design one, though it may drive the creation of code that prevents bad things. A careful developer will check edge conditions and perhaps some invalid input, depending on the circumstances.
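
In unit-test form, negative tests might look like this sketch. The `withdraw` function and its rules are invented for the example, and pytest is assumed:

```python
import pytest


def withdraw(balance: int, amount: int) -> int:
    """Invented example operation; amounts are in cents."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


# Negative tests: verify that things that shouldn't happen, don't.
def test_overdraw_is_rejected():
    with pytest.raises(ValueError):
        withdraw(5000, 10000)


def test_zero_and_negative_amounts_are_rejected():
    for bad_amount in (0, -100):
        with pytest.raises(ValueError):
            withdraw(5000, bad_amount)


# Edge condition: withdrawing the exact balance leaves zero, not an error.
def test_exact_balance_withdrawal_leaves_zero():
    assert withdraw(5000, 5000) == 0
```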

  • Aside: Testers will generally think that the developer didn’t do enough of this, and they’ll probably be right. I find that testers tend to do a better job of testing and developers tend to do a better job of writing code. Developers are rarely in the position of evaluating the code of testers, however, while testers often evaluate the missed tests of developers. I’m not down on testers (some of my best friends are testers), but I do think they sometimes under-appreciate what developers do when they bemoan the errors they find. I also think that developers often under-appreciate the defensive mind-set of the testers. People who are really good at both jobs seem to be rare, however. And I don’t know anybody who’s good at both viewpoints simultaneously.

Then you want some tests to ensure that the system works together as a whole. These are termed system, or integration, or acceptance, or customer tests. Certainly you want the correct operation of the main features to be tested with automated scripts. The negative tests at this level, however, would lead to combinatorial explosion if you tried to cover them exhaustively.
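
The arithmetic behind that explosion is easy to demonstrate. A sketch with made-up numbers for a hypothetical system:

```python
from itertools import product

# Made-up dimensions: every combination is a distinct scenario that
# exhaustive system-level testing would have to cover.
roles = ["admin", "manager", "member", "guest"]
locales = ["en", "fr", "de"]
feature_flags = list(product([True, False], repeat=5))  # 32 on/off combinations
input_kinds = ["valid", "empty", "too-long", "malformed", "duplicate", "stale"]

scenarios = list(product(roles, locales, feature_flags, input_kinds))
print(len(scenarios))  # 4 * 3 * 32 * 6 = 2304 scenarios, for a small system
```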

Hence the need for exploratory testing. Exploratory testing is interactive testing by a human tester who is actively trying to surface problems. Its proponents like to point out that it’s the best way to find bugs. I believe them, because any bugs the regression tests could surface would already have been fixed. At this point, if things are done well, there’s little need to test that things work correctly. Instead, a good tester will concentrate on things that are possible, but likely not anticipated by the developers. Uncovering the implicit assumptions, and finding the holes in them, is the value of a good interactive tester.

Testers are people, too, and there’s always the possibility that some misfeature will evade all of these levels. Care and skill are needed by both testers and developers to minimize such surprises. From the developers’ viewpoint, one of the things I’ve found is that good, simple design helps to minimize the places where such problems can hide.

By minimizing coupling and maximizing cohesion, the things that change together tend to be visible together, where it’s obvious what needs to change. In many designs I see, it’s as you say: a change here ripples through the system and lots of places need to accommodate a new parameter or a new property of an object. To me, this is a point of pain, and I’ll want to clean it up. Perhaps I’ll create a parameter object that encapsulates the tuple that needs to be passed around. Perhaps I’ll move behavior depending on the object’s properties into the object, so the object’s collaborators don’t have to worry about the internal changes. These things are the right thing to do, not because someone said so, but because they reduce the incidence of errors.
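
Here is a minimal sketch of those two refactorings, with invented names: a parameter object replaces the tuple of values that ripples through every signature, and behavior that depends on those values moves onto the object, out of its collaborators.

```python
from dataclasses import dataclass

# Before: three related values ripple through every signature, so adding
# a fourth means touching every caller:
#     def render(width, height, dpi): ...

# After: a parameter object encapsulates the tuple, so adding a field
# changes one place.
@dataclass(frozen=True)
class PageLayout:
    width: int
    height: int
    dpi: int = 300

    # Behavior that depends on these properties moves onto the object,
    # so collaborators need not know about its internal changes.
    def pixel_dimensions(self) -> tuple[int, int]:
        return (self.width * self.dpi, self.height * self.dpi)


def render(layout: PageLayout) -> str:
    w, h = layout.pixel_dimensions()
    return f"rendering {w}x{h} page"


print(render(PageLayout(width=8, height=11)))  # rendering 2400x3300 page
```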

Ultimately it all comes down to skill and thinking about the problem. Errors must be attacked at different levels: design, implementation, and integration. Errors must be attacked from different viewpoints: what’s intended and what’s not intended. Like most things, attaining that skill requires practice.

Kudos to Alan Baljeu for asking these questions, and addressing the issues. That’s the necessary first step. When you’ve got a lot of areas where improvement could be made, try to focus on one or two that seem like they would offer immediate benefit. Try those, and monitor the situation for benefit. Also be alert for other aspects, and the techniques that would address them, that you may not have fully considered before.

A fundamental part of agility is to try something and pay attention to what happens. That’s called feedback.

Edit 3/29/2021: Updated yahoogroup links to groups.io


4 Replies to “Testing that no errors remain”

  1. And… if exploratory testing finds a bug, see if you can write automated acceptance tests and unit tests that fail because of the bug. Then fix the bug. Then the tests should pass. If the bug comes back, those tests should start failing again. (A sketch of this workflow appears after the comments.)

  2. When automated testing is done correctly by both development and testing, finding bugs in this manner should become increasingly difficult. It will require better and better testers, with more and more information and understanding of the program. I think that all automation pushes us towards needing better-educated people. Those who run the robots at GM are not the same caliber as their grandfathers who turned wrenches for Henry Ford. Similarly, testers of the future will have to have higher qualifications than those of today. The days of “monkeys on keyboards” testing may be coming to an end.

  3. Thanks for the comments.

    Keith, I agree 100% on capturing the bug in an automated test in preparation for fixing it.

    Kelly, I’m not sure I’d push for a linear measure along any axis, even education. What’s really important for finding bugs is a different set of assumptions, so you don’t have the same blind spots.

    Merlyn, mutation testing is another useful technique (sketched in miniature below). I’m not sure it’s as cost-effective as exploratory testing by a good tester, however. Perhaps if a good tester isn’t available, it would be a reasonable substitute. And I think 100% test coverage is a false goal. It’s not a bad thing to have, but chasing after it can lead you away from more important issues.
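
Keith’s capture-the-bug workflow, in miniature (names invented, pytest assumed): pin the bug down with a failing test first, then fix it.

```python
def average(values: list[float]) -> float:
    # The fixed version. Suppose the buggy version divided by len(values)
    # unconditionally and crashed on an empty list, a case that an
    # exploratory session, not the existing suite, turned up.
    if not values:
        return 0.0
    return sum(values) / len(values)


def test_average_of_empty_list_is_zero():
    # Written to fail against the buggy version. It passes once the bug
    # is fixed, and it will start failing again if the bug ever returns.
    assert average([]) == 0.0
```

And for readers unfamiliar with the mutation testing Merlyn raised: the idea is to make a small deliberate change (a “mutant”) to the code and see whether the test suite notices. A hand-rolled miniature just to show the idea; real tools such as PIT or mutmut automate the mutation step.

```python
def is_adult(age: int) -> bool:
    return age >= 18


def is_adult_mutant(age: int) -> bool:
    return age > 18  # the mutation: >= became >


def weak_suite(fn) -> bool:
    # A suite that only checks ages 30 and 10 cannot tell the two apart.
    return fn(30) is True and fn(10) is False


def strong_suite(fn) -> bool:
    # Adding the boundary case, age 18, kills the mutant.
    return weak_suite(fn) and fn(18) is True


assert weak_suite(is_adult) and weak_suite(is_adult_mutant)          # mutant survives
assert strong_suite(is_adult) and not strong_suite(is_adult_mutant)  # mutant killed
```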
