Long-Range Planning with User Stories

I frequently hear or read people suggesting the use of User Stories for relatively long-range planning. Sometimes they mean something as short as a release a few months out. Sometimes they’re talking about multiple releases over a year or two. In all of these cases, they’re talking about breaking the work to be done into small slices so that they can better measure its perceived size for predicting the future.

What are the implications for doing this?

First of all, I think of a User Story as a small slice of functionality that adds “one thing” to the system being developed. I find it a useful tool for Getting Things Done on a development project. To be good as a tool for GTD, stories need to be pretty small—something you can accomplish in a short period of time before moving on to the next small thing. As a rule of thumb, I recommend that teams size User Stories to take a day or two of calendar time, no matter how many team members work on a story to get it done in that time. And I don’t mind if they’re smaller. When I’m working by myself, I generally prefer them much smaller.

If a three-month release (13 weeks of 5 days, or 65 working days) is broken into User Stories that take two team-days each, that’s about 32 stories. That’s a lot of stories, and the count grows if multiple stories are in progress simultaneously, or if some of the stories are smaller. If half the team works on each story, so that two stories are in flight at once, and half of the stories take only one day, then our story count balloons to 96. Imagine the team churning through a list of 96 stories at the start of the project so that we can know what fits into three months, or how long it will take to do what we want. Sounds like a lot of effort, doesn’t it? (And this is for a small project.)
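
To make that arithmetic concrete, here’s a back-of-the-envelope sketch in Python. Every input is an assumption taken from the scenario above, not a measurement of any real team:

```python
# Rough arithmetic behind the story counts above. All inputs are the
# scenario's assumptions, not measurements from a real team.

calendar_days = 13 * 5          # 65 working days in a three-month release

story_days = 2                  # rule-of-thumb story size, in calendar days
one_at_a_time = calendar_days // story_days
print(one_at_a_time)            # ~32 stories if the whole team swarms each one

# Half the team per story puts two stories in flight at once,
# roughly doubling the number of two-day story "slots".
slots = one_at_a_time * 2       # ~64 slots

# If half of those slots hold one-day stories, each such slot fits
# two stories instead of one, and the list balloons.
total = slots // 2 + (slots // 2) * 2
print(total)                    # 96 stories to enumerate before starting
```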

As we expend that effort, we learn a lot of details about those stories. We’ll want to record what we learn so we don’t have to re-learn it later. That will take more effort, especially to record it in a way that won’t be misinterpreted later.

As we learn these details, some build on details we’ve learned before. But since we’ve just learned them in a short time, we often forget some of them. It takes time for learning to “soak in.” So far, in this hypothetical scenario, we’ve only talked about the system. We haven’t actually built anything with which we can interact. That’s a clue that some of the details we’ve learned are inevitably wrong, but we don’t yet know which ones. As we build the system and interact with it, we’ll learn more, and we’ll learn more deeply. Do we abandon the early pre-start learning, or do we try to maintain it, fixing it where it’s incorrect and incorporating the new things we’ve learned? Either represents extra work.

All in all, creating a long product backlog of User Stories is very reminiscent of creating a detailed Work Breakdown Structure at the start of the project. The details are typically oriented more in functional terms than construction terms, but it’s still a difficult and error-prone way of defining the work. It does, of course, let us come up with numerical estimates.

In my experience, those numerical estimates are not the ultimate goal, but are intermediate goals toward accomplishing some higher goal. What is that higher goal?

Let’s assume that our goal is to build a new system. And let’s assume that we have good reasons to project into the future, as not every situation really needs this. In that case, we might have little historical data on which to base our projections. I would list the features (which some people call big stories, or epics) that were known, sort them into piles of approximate perceived size, and estimate those perceived sizes as best I could with whatever historical experience I had. Then I could test those perceptions when implementing the first of those features and adjust my expectations to the empirical results. As things proceeded, I could continue to refine my expectations. I could also readily change my mind about what features were really needed, or how much effort to put into a feature, because I hadn’t yet invested a lot of energy in it.
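
As a minimal sketch of what that projection-and-refinement loop might look like, with made-up pile sizes, hypothetical feature names, and an invented first result (none of these numbers come from a real project):

```python
# Sketch of projecting from feature "piles" and recalibrating against
# the first empirical result. All names and numbers are illustrative.

# Perceived sizes, in ideal team-weeks, guessed before starting.
piles = {"small": 1, "medium": 3, "large": 8}

# Hypothetical feature list, each sorted into a pile by perceived size.
perceived = {"login": "small", "reporting": "medium", "billing": "large",
             "search": "medium", "admin": "small"}

initial = sum(piles[size] for size in perceived.values())
print(f"initial projection: {initial} team-weeks")

# After implementing the first feature, compare perception to reality
# and scale the remaining projection by the observed ratio.
perceived_weeks = piles[perceived["login"]]   # we guessed 1 team-week
actual_weeks = 2.0                            # it took 2 (hypothetical)
ratio = actual_weeks / perceived_weeks

remaining = sum(piles[perceived[f]] for f in perceived if f != "login")
print(f"adjusted projection: {remaining * ratio:.0f} team-weeks remaining")
```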

If we were extending an existing system with an existing team, we’d have a leg up: we’d already have data about how fast this team created similar features. We could continue to refine our expectations, as before.

Estimating big chunks of work, like features, is different from estimating small chunks of work, like User Stories. We, as a species, seem poor at giving consistent estimates over a broad range of sizes. I’ve never known the sum of the estimates for a feature’s User Stories to bear much relationship to the estimate of the feature itself. For that reason, I suggest treating them separately. I typically use T-shirt sizes for features, since so little is known about them that numerical estimates always seem unduly precise. For User Stories, I prefer to just count them. Yes, they are different sizes, but they should not span such a broad range of sizes that it matters much. And in my experience, it’s easier to get them to similar sizes by examining the acceptance scenarios needed to verify them than to estimate them with numerical consistency.
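
Counting makes the forecast arithmetic trivial. A sketch with assumed numbers (a real forecast would use the team’s own recent throughput):

```python
# Forecasting by counting stories rather than summing estimates.
# All figures here are placeholders, not real data.

stories_remaining = 40

# Observed throughput: stories finished in each of the last few weeks.
recent_throughput = [4, 6, 5, 5]
avg_per_week = sum(recent_throughput) / len(recent_throughput)  # 5.0

weeks_to_finish = stories_remaining / avg_per_week
print(f"roughly {weeks_to_finish:.0f} weeks at the current pace")
```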

If you need precise release dates, you can get those much more reliably by adjusting scope than by predicting with more precision. If you want to slice and dice the numbers, you can do that, but be aware that you are likely increasing the cost of your project by doing so.
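
Concretely, the same counting arithmetic runs in reverse for a fixed date: instead of predicting when the full list will be done, compute how many stories fit and trim scope to match. Again, the numbers are placeholders:

```python
# Fitting scope to a fixed date, using the same throughput idea.
# Both figures are assumptions for illustration.

weeks_until_release = 6
avg_per_week = 5.0                     # observed throughput, as above

stories_that_fit = int(weeks_until_release * avg_per_week)
print(f"about {stories_that_fit} stories fit; defer or drop the rest")
```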

My advice is to use simpler techniques for predictions, and spend your energy on the best quality development you can muster. Focusing on quality in the small is one thing that can help you produce faster and deliver more reliably.
