It’s not the script, it’s how you do it.
I’ve had numerous discussions with Michael Bolton in which he claims that scripted testing (whether via automation or a person following written directions) is not testing but checking. He quotes Cem Kaner’s definition of testing: “testing is an empirical, technical investigation of a product, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek.” Running a script that validates certain desired behavior certainly fits this definition.
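To make that concrete, here’s a minimal sketch of what such a script might look like. It’s written in pytest style, and the Cart class is invented for illustration; it stands in for whatever real product code a scripted test would exercise:

```python
# A deliberately tiny sketch of a scripted check, runnable with pytest.
# Cart is a made-up stand-in for the product under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, quantity):
        # Record a line item as a (price, quantity) pair.
        self.items.append((price, quantity))

    def total(self):
        return sum(price * qty for price, qty in self.items)


def test_adding_items_updates_the_total():
    cart = Cart()
    cart.add(price=250, quantity=2)
    # The script encodes one specific expectation about the product...
    assert cart.total() == 500
    # ...and a failure here is new information for a person to act on.
```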
Michael says that checks “confirm existing beliefs” while tests “find new information.” If a “check” that was passing now fails, isn’t that new information?
He says that checks are machine-decidable but that tests require sapience. I think both require sapience: it takes sapience to create the script. Michael admits as much in his posting, “Merely” Checking or “Merely” Testing. He also admits that there is skill required in reporting and interpreting the results.
What he doesn’t mention is the action we might take in response to the results. If we’ve got a failing test, we might explore the application manually to see what’s going on. We might write more tests to understand what happened. We might check the log files for information. We might call someone to find out whether other systems that ours depends on are working correctly. We might do any of a large number of things.
And really, this is no different from exploratory testing except for the time delays between the various actions. In exploratory testing, we decide how we want to exercise the system, we observe what happens when we do, and we use the information we gain to decide what other ways we might want to exercise the system. We do the same thing with automated tests. The difference is that there are delays between these steps (and we’ll likely do many of the “exercise and observe” steps before going back to the “decide what to do next” step).
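To make the parallel explicit, here’s an abstract sketch of that loop. The function names are placeholders I’ve invented, not anyone’s API; the point is only that exploratory and automated testing share the same structure, differing in who runs each step and when:

```python
# An abstract sketch of the shared loop. In exploratory testing a person
# runs the whole loop continuously; with automation, many
# exercise-and-observe iterations happen before a human performs the
# "decide what to do next" step on the accumulated results.
def testing_loop(exercise, decide_next, initial_ideas):
    ideas = list(initial_ideas)
    observations = []
    while ideas:
        idea = ideas.pop(0)
        outcome = exercise(idea)                  # exercise the system
        observations.append((idea, outcome))      # observe what happens
        ideas.extend(decide_next(idea, outcome))  # use what we learned
    return observations


# Toy usage: "exercising" squares a number; we keep probing while the
# results stay small, then stop once we see something large enough.
if __name__ == "__main__":
    results = testing_loop(
        exercise=lambda n: n * n,
        decide_next=lambda n, out: [n + 1] if out < 10 else [],
        initial_ideas=[1],
    )
    print(results)  # [(1, 1), (2, 4), (3, 9), (4, 16)]
```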
It’s that “exercise and observe” step (along with the decision about whether the observation is cause for concern) that Michael calls a “check.” But that’s only a part of scripted testing, at least if you’re doing it right.
To be sure, there’s great value in speeding up the cycle between observation and deciding what to do next. And there’s great value in noticing things along the way. There’s also great value in reducing the time it takes to observe a lot of details. And there’s great value in not forgetting some of those details.
There are other relative strengths and weaknesses between exploratory testing and automated scripted testing, of course. And both can be done badly. But both are testing, both in terms of Cem Kaner’s definition, in terms of finding new information, and in terms of requiring sapience. If you’re not thinking about it, you’re doing it wrong.