Test-driving those “non-functional” stories
The term “non-functional requirements” has always bothered me. “Do you mean the software isn’t supposed to work?” 😉
This post is about how I approach test-driving the solution to non-functional requirements such as performance, and how that differs from test-driving a functional user story.
Some questions came up on the scrumdevelopment yahoogroup about “technical stories” when developing frameworks. Well, I’ve been test-infected for quite some time, and I don’t find much need for “technical stories.” I find that driving the development with user stories keeps things on track much better. So, for developing a framework, I suggest developing a rudimentary client for that framework in parallel. It’ll help you drive things from an end-user point of view. It will also help your framework become usable by and useful for client code.
Generally I start with the story test (or part of it, if the story is a little big). I’ll make that test pass in a trivial way, and then use unit tests to drive completion of the story.
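To make that concrete, here's a minimal sketch in Python (pytest-style; the order-total example and all its names are invented for illustration, not taken from any real project):

```python
# Hypothetical illustration: a story test written first, made to pass trivially.

def order_total(order_id):
    # Trivial implementation: just enough to make the story test pass.
    return "$17.50"

def test_story_displays_order_total():
    # The story-level acceptance test, written before the implementation.
    assert order_total(42) == "$17.50"

# From here, unit tests drive out the real behavior: fetching line items,
# summing prices, formatting currency, and so on, until the trivial
# implementation is replaced by a genuine one.
```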
The original poster was having trouble splitting stories, and thought that perhaps they could be split on the basis of performance. Techniques for splitting stories are a topic for another post, but I mentioned that I find non-functional stories, such as performance, aren't good candidates for directly driving the development of code. Tobias Mayer asked for details of how I'd approach such a story, and here is my answer:
I would write a story for the performance criteria and an acceptance test checking those criteria. That doesn't mean the story is a good one for driving development; it just captures a business need. "Non-functional" requirements don't make for easy TDD.
I would profile the system and see where the time is being spent. Most of the time, the culprit is rather well defined. Often the solution (well, a solution) is pretty obvious. Sometimes it’s damn difficult. A new algorithm might have to be developed.
In any event, I would use TDD to drive the new solution. That TDD would drive from a technical basis, however, not from the story test. The story test would just verify whether the target performance has been met.
For a simple example, the story might be “Display the sorted list of froobles in less than 1 second.” A performance test written for the frooble display might show it takes 2.5 seconds. Profiling the application shows that 1.9 seconds of that time is spent in the bubble sort routine.
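A performance check like that might look something like this sketch (Python, timing with time.perf_counter; load_test_froobles and display_sorted_froobles are hypothetical stand-ins for the real application code):

```python
import time

# Hypothetical stand-ins for the real application code.
def load_test_froobles():
    # A realistically sized, worst-case (reverse-ordered) data set.
    return list(range(100_000, 0, -1))

def display_sorted_froobles(froobles):
    # The real version would sort and render; here we just sort.
    return sorted(froobles)

def test_sorted_froobles_display_under_one_second():
    froobles = load_test_froobles()
    start = time.perf_counter()
    display_sorted_froobles(froobles)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"took {elapsed:.2f}s, but the story requires < 1s"
```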
Perhaps the decision is made to use a merge sort. I would test drive the writing of the merge sort routine. Then I would substitute the merge sort for the bubble sort and run the acceptance test. Great, it now takes 0.9 seconds to display the sorted list of froobles. We’re done, for now.
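For what that test-driven merge sort might end up looking like, here's a plain-Python sketch with the kind of unit tests that would drive it, assuming a standard top-down merge sort rather than anything from an actual codebase:

```python
def merge_sort(items):
    """Recursively split, sort each half, and merge the results."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    return _merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

def _merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# The unit tests that drove it, one small step at a time:
def test_empty_list():
    assert merge_sort([]) == []

def test_single_element():
    assert merge_sort([5]) == [5]

def test_already_sorted():
    assert merge_sort([1, 2, 3]) == [1, 2, 3]

def test_reversed():
    assert merge_sort([3, 2, 1]) == [1, 2, 3]

def test_duplicates():
    assert merge_sort([2, 3, 2, 1]) == [1, 2, 2, 3]
```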
As Dave Smith points out, non-functional requirements have a way of popping up again in the future.
Yes, this is exactly how the teams I coach do it. One of them is an Internet startup that has gone from 0 users to millions in under 2 years, and they have had quite a number of stories like this.
The nice thing about phrasing it in end-user terms is that it doesn't lock people into one solution. Many's the time they've ended up optimizing something different from what the engineers initially expected, because they found a better cost/value payoff.
I also agree that the “non-functional requirements” distinction is a dubious one. If nobody on a team can figure out why something matters to a real person and express it in a way an executive can understand, then I don’t think they know enough to start work on it.