Avoiding Mini-Waterfalls

A lot of people and organizations, when transitioning from a serial software development lifecycle toward an Agile one, fall into the pattern of mini-waterfalls. They start doing iterations, but each iteration resembles the development lifecycle they already know. The programmers do some design work, then they write the code to implement the design, then unit test the code, and then they pass it to the testers for testing. To many people, this is the only way it can work. Their mental model only admits this series of phases.

And they run into typical problems. Sometimes the design doesn’t fit the problem well, and patches are needed because there isn’t time to go back to design. The testers get squeezed for time at the end of the iteration, and no one knows how to accommodate the rework when a problem is found. More patches are added, because there isn’t time to redesign. And the next iteration starts the cycle over again.

Sure, doing this in two to four week cycles beats doing it in six to twelve month cycles. But only a little. Most of the time, it starts to fall apart if the team doesn’t learn to work differently.

But it’s inevitable, they say.

No, it’s not inevitable. Some teams don’t work in a serial fashion. That’s a simple existence proof. A single counter-example should be sufficient. Of course, people are not logical beings, so it’s not. They cling to what they know, deny what they don’t know, and sometimes get angry with people who disagree.

A first step toward finding an alternative is to think about all the individual details of designing, coding, and various sorts of testing. If you break these activities up into little mini-activities, you’ll quickly notice that a lot of them do not depend on each other, and can be done in arbitrary order. You can remove a lot of the serial nature without dropping the serial mindset.

A second alternative is more radical. You invert the entire sequence. Rather than starting with design, you start with testing. While you’re first discussing what functionality needs to be produced, think of some examples that illustrate what it needs to do. Think deeply, and look for situations that your examples don’t cover. Create more examples for those. This questioning of the requirements is, itself, a form of testing. It’s also a great communication boon. I like to use a process I call The Three Amigos. And having those examples in mind helps the programmers hit their target more accurately.
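To make that concrete, suppose the conversation is about shipping fees. This is purely a hypothetical illustration of mine, with made-up numbers, not a prescription. The examples that come out of a Three Amigos conversation might be nothing fancier than a few concrete cases written down where everyone can see them:

```python
# Hypothetical examples captured during a Three Amigos conversation.
# The feature (free shipping over a threshold) and the numbers are
# assumptions for illustration only.
# Each row is (order_total, expected_shipping_fee, why it matters).
SHIPPING_EXAMPLES = [
    (100.00, 0.00, "at the free-shipping threshold"),
    (99.99,  5.95, "just below the threshold"),
    (0.00,   0.00, "empty cart -- a case the first round of examples missed"),
]
```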

Now it’s relatively trivial to turn those examples into automated tests for that functionality, even before the code is written. Just automate the examples, expressing your expectations at the end of each. Of course, they won’t pass until the code is completed, but that’s OK. Once they’re automated, running them is trivial.
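Sticking with the hypothetical shipping examples, a minimal sketch in pytest might look like the following. The module and function names (shop.pricing, calculate_shipping) are assumptions; the function doesn’t exist yet, which is exactly the point.

```python
# A sketch of the shipping examples automated before any production code
# exists. Names are hypothetical; the tests will fail until the code is written.
import pytest

from shop.pricing import calculate_shipping  # not written yet


@pytest.mark.parametrize("order_total, expected_fee", [
    (100.00, 0.00),  # at the free-shipping threshold
    (99.99,  5.95),  # just below the threshold
    (0.00,   0.00),  # empty cart
])
def test_shipping_fee_examples(order_total, expected_fee):
    # The expectation is expressed at the end of each example.
    assert calculate_shipping(order_total) == pytest.approx(expected_fee)
```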

The developers don’t have to start with designing before coding, either. Again, start with a test. In this case, it’s a unit test. The examples that illustrate the requirements will surely give some ideas for a starting point. Write a unit test (or microtest as GeePaw Hill calls it; these aren’t your father’s unit tests) and then write just enough code to make it pass. Once it passes, then consider the design, refactoring the code to eliminate duplication and push bits of functionality into a shape that makes a good design. It sounds backwards and impossible until you try it. At least, it did for me. But it does, in fact, work to reverse the waterfall flow. (I don’t always proceed without a design in mind. Sometimes I can’t stop myself from thinking of a design before I start. But I don’t delay starting while I think about the design. I do that thinking as I code and refactor.)
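One microtest cycle, again using the hypothetical shipping fee, might leave behind something like the sketch below. The order of events lives in the comments: first a failing microtest, then just enough code to pass it, then refactoring to let the design emerge.

```python
# A hypothetical result of a few microtest cycles (tests shown alongside
# the code for brevity; they would normally live in their own test file).

# These constants were named during refactoring, not designed up front.
FREE_SHIPPING_THRESHOLD = 100.00
FLAT_SHIPPING_FEE = 5.95


def calculate_shipping(order_total):
    # The first version was simply "return 0.00" -- just enough to pass
    # the first microtest; the second microtest forced the conditional.
    return 0.00 if order_total >= FREE_SHIPPING_THRESHOLD else FLAT_SHIPPING_FEE


# The microtests, each written before the code that makes it pass.
def test_no_fee_at_threshold():
    assert calculate_shipping(100.00) == 0.00


def test_flat_fee_below_threshold():
    assert calculate_shipping(99.99) == 5.95
```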

As the code starts to take shape, I may need to add a little glue code to connect the already-written tests to the code I’ve just written. At some point, the tests pass. Is it “test-after-code” if the code is the last thing written? I wouldn’t say so. Of course you will want to do some exploratory testing as each part demonstrates it’s met the explicit criteria of the examples.
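In the hypothetical example, that glue might be as small as a one-line module: say the acceptance tests import from shop.pricing while the microtested function happened to grow in a module called shop.shipping, then a thin re-export connects the two.

```python
# Hypothetical glue: the already-written acceptance tests import from
# shop.pricing, while the microtested function lives in shop.shipping,
# so a thin re-export connects them.
from shop.shipping import calculate_shipping  # noqa: F401
```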

As written above, this sounds very mechanical. In actual practice, it’s a lot more fluid. People are talking with each other all the time, noticing loose ends that have been missed, and taking looks from multiple points of view throughout the process. No one is too fussed about the order of the activities. They’re fluid enough that, to a first approximation, everything is happening all the time. And it’s a far cry from mini-waterfalls each iteration.

5 Replies to “Avoiding Mini-Waterfalls”

  1. Thanks, George, for writing something that has been wandering among my random thoughts for a while; I was not brave nor wise enough to put it down in words.

    As a tester in an agile team, I’ve been suffering from mini-waterfalls for some time; that’s when I started thinking that agile is no “silver bullet” for the software development process unless you make it so in your own organisation, by your own means and with your own sweat and blood. Developers worked in an agile way, but testing waited until the code was mature enough to start testing… My conclusion became: if your developers work agile but you wait for them to be finished before starting testing, you are not agile, you are the last waterfall phase!

    From that point on, we started to agilize our testing process, in two directions:

    1) Advance the testing effort as much as possible: be present and visible in early development stages like documentation, requirements gathering, analysis, specification… Nothing but great things come from this: bugs are prevented on paper or at the whiteboard, customer advocacy appears before anything is coded, and test ideas and test planning can start on day one of the iteration, so when the code is available you’ll be ready to start testing hands-on!

    2) Work with development to get partial releases, prototypes, and consolidated features before the final release… whatever may help you advance your “unit-functional” testing, leaving the integration / system tests for the final phase of the iteration, but not all the testing effort.

    Thanks again, really inspiring.

    Cheers!

    Mauri

  2. Thank you, George, for sharing.
    In general I agree with you that mini waterfoolish approaches suffer from the very same problems as big waterfoolish approaches, just on a smaller scale.

    Your test-driven approach tackles the typically fuzzy and blurry approach to requirements management, where the requirements are not specific enough to find the right design (which is a fair statement in itself).

    The design itself is, according to your statement, supposed to be as minimalistic as possible (and I fully agree with that), but looking closely, that is indeed also a valid design. Having said that, your order is still:

    Requirements (writing test case and executing test case) -> Design (principle 1: just enough to pass the test, principle 2: refactor till good enough) -> Build (following the design principles) -> Operate (no way to test before you operate the solution) -> Test (execute test cases till they succeed).

    So from an IT value-flow perspective it is the same principles as always; the difference (and that is a good approach if you ask me) lies in HOW you tackle the problem: by making it as concrete as possible while at the same time following the principle of fail fast and fail often.

    Inspiring, whether it sounds like a critique or not. 🙂

    Kai

  3. So there is a serial nature to the work we do; hence the key is to exploit areas where development activities can (and should) occur concurrently, and this must be “learned”. I do wish we could retire ‘waterfall’ from our lingo, as I don’t think it makes the point we are all trying to make.

  4. I’ll take this by heart: “Write a unit test (or microtest as GeePaw Hill calls it; these aren’t your father’s unit tests) and then write just enough code to make it pass.” Thank you!
