It’s not the script, it’s how you do it.

I’ve had numerous discussions with Michael Bolton in which he makes the claim that scripted testing (whether via automation or a person following written directions) is not testing but checking. He quotes Cem Kaner’s definition of testing: “testing is an empirical, technical investigation of a product, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek.” Running a script that validates certain desired behavior certainly fits this definition.
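
As a concrete illustration, here’s a minimal sketch of what such a scripted check might look like, written as a pytest-style test in Python. Everything in it is hypothetical: the function under test is a stand-in for “the product,” and the names are invented for illustration.

    # Hypothetical production code, standing in for "the product."
    def apply_discount(total, percent):
        """Apply a percentage discount to an order total."""
        return round(total * (1 - percent / 100.0), 2)

    # The scripted check: exercise the system, observe the result, and
    # compare it with the desired behavior. A runner such as pytest
    # reduces the comparison to a pass/fail bit.
    def test_ten_percent_discount_is_applied():
        assert apply_discount(200.00, 10) == 180.00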

Michael says that checks “confirm existing beliefs” and tests “find new information.” If a “check” that was passing now fails, is that not new information?

He says that checks are machine-decidable, but tests require sapience. I think that both require sapience. It takes sapience to create the script; Michael admits this much in his posting, “Merely” Checking or “Merely” Testing. He also admits that there is skill required in reporting and interpreting the results.

What he doesn’t mention is the action that might be taken in response to the results. If we’ve got a failing test, we might explore the application manually to see what’s going on. We might write more tests to understand what happened. We might check the log files for information. We might call someone to see whether other systems, on which ours depends, are working correctly. We might do any of a large number of things.
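
For example, here is a hedged sketch of one small, automatable piece of such a response: pulling the tail of a log file into a failing check’s message, so that the human investigation has somewhere to start. The log location and the health probe are assumptions invented for illustration.

    from pathlib import Path

    LOG_FILE = Path("/var/log/myapp/app.log")  # hypothetical location

    def tail(path, lines=20):
        """Return the last few lines of a log file, if it exists."""
        if not path.exists():
            return "(no log file found)"
        return "\n".join(path.read_text().splitlines()[-lines:])

    def check_health():
        """Hypothetical probe of the system under test."""
        return "healthy"

    def test_service_reports_healthy():
        # On failure, attach recent log output to the report so that
        # the follow-up investigation can begin from the report itself.
        assert check_health() == "healthy", "Recent log output:\n" + tail(LOG_FILE)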

And really, this is no different from exploratory testing, except in the time delays between the various actions. In exploratory testing, we decide how we want to exercise the system, we observe what happens when we do so, and we use the information we gain to decide what other ways we might want to exercise the system. We do the same thing with automated tests. The difference is that there are delays between these steps (and we’ll likely do many of the “exercise and observe” steps before going to the “decide what to do next” step).

It’s that “exercise and observe” step (and making a decision as to whether the observation is cause for concern or not) that Michael calls a “check.” But that’s only part of scripted testing, at least if you’re doing it right.

To be sure, there’s great value in speeding up the cycle between observation and deciding what to do next. And there’s great value in noticing things along the way. There’s also great value in shortening the time it takes to observe a lot of details. And there’s great value in not forgetting some of those details.

There are other relative strengths and weaknesses between exploratory testing and automated scripted testing, of course. And both can be done badly. But both are testing: in terms of Cem Kaner’s definition, in terms of finding new information, and in terms of requiring sapience. If you’re not thinking about it, you’re doing it wrong.

4 Replies to “It’s not the script, it’s how you do it.”

  1. Hi, George…

    I’ve been explicit that checks themselves do not require sapience, but that the preparation and analysis of them do. That’s not an “admission”.

    “What he doesn’t mention is the action that might be taken in response to the results.”

    How far did you read in the very blog post that you cite? In that post, I said (and I’m quoting): “Sapient activity by someone—a tester, a programmer, or a product owner—is needed to determine the response. Upon deciding on significance, more sapient action is required—fixing the application being checked (by the production programmer); fixing or updating the check (by the person who designed or programmed the check); adding a new check (by whomever might want to do so) or getting rid of the check (by one or more people who matter, and who have decided that the check is no longer relevant).” I therefore object to the suggestion that I didn’t mention that action might be taken in response to the results. I mentioned it, clearly and explicitly.

    Then you go on:

    “It’s that ‘exercise and observe’ step (and making a decision as to whether the observation is cause for concern or not) that Michael calls a ‘check.’”

    The main sentence is correct, but the parenthetical is something that you just made up. Again, read the blog post that you cite. It says exactly this:

    “By definition, the observation, the decision rule, and the setting of the bit all happen without the cognitive engagement of a skilled human.

    “Once the check has been performed, though, skill comes back into the picture for reporting. Checks are rarely done on their own, so they must be aggregated. The aggregation is typically handled by the application of programming skill. To make the outcome of the check observable, the aggregated results must be turned into a human-readable report of some kind, which requires both testing and programming skill. The human observation of the report, intake, is by definition a sapient process. Then comes interpretation. The human ascribes meaning to the various parts of the report, which requires skills of testing and of critical thinking. The human ascribes significance to the meaning, which again takes testing and critical thinking skill.”

    I don’t know how much more explicit or clear I could be, nor can I understand why you would misrepresent what I’ve said in the way you’ve done above.

    As James Bach and I have pointed out in various public forums, testing activity dominates checking. That is, the weaker your testing (and programming) skill, the less value your checks will deliver. So I strongly agree with you when you say that “If you’re not thinking about it, you’re doing it wrong.”

    —Michael B.

  2. Thanks for the comments.

    WRT the charge of not reading all of your post: I noted that you didn’t mention these actions as activities of the tester. I don’t think the tester’s job properly ends with reporting and analysis.

    WRT the decision rule: This is done with cognitive engagement. It’s just not done in “real time.” The cognitive engagement happens when the decision rule is written.

    My point is that exploratory testing and scripted testing are both doing the same sorts of things, but in different orders and with different timings. When you say scripted testing is just “checking,” you’re leaving out part of the picture. Part of exploratory testing is just checking, also.

    I understand that you’re trying to combat some practices that are chronically ineffective and low-value, but this distinction you’re making is, in my opinion, the wrong one. And I think it makes some people dismiss other things that you say, things I’d rather they didn’t dismiss.

  3. Oh, I’m sorry. Did I suggest somewhere that the analysis, the imparting of meaning and significance, would require no further work on the part of the tester? When I said that would require testing skill, do you think I was suggesting that it would only be a programmer’s testing skill? When I say, “Sapient activity by someone—a tester, a programmer, or a product owner—is needed to determine the response,” does that suggest no further testing by a tester? Please. If you’re that dismayed by your inferences, and if you still have any respect for me, why not ask for clarification instead of imagining—incorrectly—what I might say?

    “This is done with cognitive engagement. It’s just not done in ‘real time.’ The cognitive engagement happens when the decision rule is written.”

    Yeah, well, there’s another part that either you didn’t read or that you’ve deliberately ignored: the whole paragraph that precedes it. Here THAT is again:

    “It takes sapience to recognize the need for a check—a risk, or a potential vulnerability. It takes sapience—testing skill—to express that need in terms of a test idea. It takes sapience—more test design skill—to express that test idea in terms of a question that we could ask about the program. Sapience—in terms of testing skill, and probably some programming skill—is needed to frame that question as a yes-or-no, true-or-false, pass-or-fail question. Sapience, in the form of programming skill, is required to turn that question into executable code that can implement the check (or, far more expensively and with less value, into a test script for execution by a non-sapient human). We need sapience—testing skill again—to identify an event or condition that would trigger some agency to perform the check. We need sapience—programming skill again—to encode that trigger into executable code so that the process can be automated.”

    That is the essence, George, of saying that the cognitive engagement happens when the rule is written. It’s at the moment that the check is performed, by a machine, that sapience is absent, by definition.

    I understand you’re really, really enthusiastic about checking, so here’s more: the issue about “which we are going to value: the checks (‘we have 50,000 automated tests’) or the checking. Mere checks aren’t important; but checking—the activity required to build, maintain, and analyze the checks—is. To paraphrase Eisenhower, with respect to checking, the checks are nothing; the checking is everything. Yet the checking isn’t everything; neither is the testing. They’re both important, and to me, neither can be appropriately preceded with ‘mere’, or ‘merely’.”

    Beware: the next step is for me to visit your home and read it to you.

    Can we agree, please, that manual scripted checking is an expensive waste of time, and that (yet another quote from that same blog post) automated “checks can be hugely important”?

    —Michael B.

  4. I think it can’t be reduced to “exercise and observe” alone. The biggest value of exploratory testing is in the instant reaction to small hints that appear along the way. If something looks strange, you dig deeper until you find a bug or decide to move on.

    With any kind of automated testing, you get overall results in which all these hints disappear into loads of data.

    Another thing: automated testing always checks exactly the same path, while a human will most likely do things a bit differently because of curiosity, distraction, or fatigue.

    On the one hand, this gives you a broader range of tests; on the other, it doesn’t assure you that you have gone exactly through the predefined list of scenarios.

    That’s why exploratory and automated testing complement each other well.

    And one more comment on preparing test cases: the biggest value of test cases is not in executing them but in preparing them. That’s where all the creative thinking happens, and where you catch a number of issues.
