A Big Estimate is Not a Sum of Small Estimates

I’m working with a client that has multiple, non-collocated component teams working on one project. It’s not my ideal situation, but we’re making the best of it.

We built a story map of business-oriented, project-level “epics.” These have been prioritized within business themes and tentatively scheduled for development releases. The early ones have been estimated with a level of effort (LOE). These LOEs are basically Small, Medium, and Large, but are given numeric scores so that progress toward development releases can be tracked from a business point of view on a burnup chart.
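
To make the scoring concrete, here’s a minimal sketch in Python. The Small = 5 value matches the fruit-salad example later in this post; the Medium and Large scores, and the epics themselves, are placeholders I’ve invented for illustration:

    # Hypothetical LOE-to-score mapping. "Small" = 5 matches the fruit-salad
    # example later in this post; Medium and Large are made-up placeholders.
    LOE_SCORES = {"Small": 5, "Medium": 10, "Large": 20}

    epics = [
        {"name": "Epic A", "loe": "Small", "done": True},
        {"name": "Epic B", "loe": "Medium", "done": False},
        {"name": "Epic C", "loe": "Large", "done": False},
    ]

    total = sum(LOE_SCORES[e["loe"]] for e in epics)
    done = sum(LOE_SCORES[e["loe"]] for e in epics if e["done"])
    print(f"Burnup: {done} of {total} points toward the release")

The burnup line climbs as epics are accepted, which is all the business side needs to see.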

These project-level “epics” are broken down into component-level “stories” for development. The component stories have their own acceptance criteria at the component boundaries, and are estimated by the component team doing the work. These estimates necessarily don’t use the same “units” as the business-level estimates; there’s no way to make a “point” mean the same thing from team to team, much less make it comparable to the high-level scores. The component story estimates are used for tracking progress within each team’s sprint.

It’s not the most highly tuned Agile process, but it’s pretty darn good for a project transitioning to Agile in a large organization used to a highly controlled, serial lifecycle. It’s reasonable, and it’s theirs.

So where’s the rub? They’re also using a well-known “Agile Lifecycle Management” tool. Remember, this is a distributed project. Also, the Quality Office, accustomed to that highly controlled, serial lifecycle, demands lots of documentation.

We started putting the epics and stories into this tool. Determined not to let the tool dictate the process, we ignored that it wanted us to estimate task-hours. We assigned the stories as children of the epics. When we did so, the tool deleted our epic estimate and replaced it with the sum of the story estimates. This gives us a lot more precision (we’ve got way more sizes than Small, Medium, and Large now) but much less accuracy.

We estimated the fruit salad as a Small and called it 5 points. The tool saw that we were putting 2 pineapples, 6 apples, 3 grapefruit, and 120 blueberries into the salad. Therefore the fruit salad is now sized at 131 fruit. How useful is that?
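
In code, the tool’s rollup amounts to something like this (a sketch; the ingredient counts are from the example above, everything else is mine):

    # What the tool does, in miniature: sum counts of incommensurable things.
    salad = {"pineapples": 2, "apples": 6, "grapefruit": 3, "blueberries": 120}

    epic_estimate = 5                # our LOE: a Small, 5 points
    rollup = sum(salad.values())     # the tool's answer: 131 "fruit"
    print(rollup)                    # precise to the single fruit, and useless

The sum is perfectly precise and says nothing about how big the salad is.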

It reminds me of Dave Nicolette’s classic post:

How many Elephant Points are there in the veldt? Let’s conduct a poll of the herds. Herd A reports 50,000 kg. Herd B reports 84 legs. Herd C reports 92,000 lb. Herd D reports 24 head. Herd E reports 546 elephant sounds per day. Herd F reports elephant skin rgb values of (192, 192, 192). Herd G reports an average height of 11 ft. So, there are 50,000 + 84 + 92,000 + 24 + 546 + 192 + 11 = 142,857 Elephant Points in the veldt. The average herd has 20,408.142857143 Elephant Points. We know this is a useful number because there is a decimal point in it.

So far, we haven’t found a way around this. (Nor around the fact that we can’t set the release for an epic once it has any children attached.) It’s a classic case of the tool trying to dictate the process rather than supporting it.

3 Replies to “A Big Estimate is Not a Sum of Small Estimates”

  1. In one of the well-known tools, I simply stored the story estimates in a custom field that the tool didn’t use for anything. When we needed to burn down a sprint, I did it by hand (we didn’t need that very often).

    That way, the tool can be used for release planning and business communication.
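
For what it’s worth, that workaround can be pictured like this. It’s a sketch against a hypothetical data model, not any real tool’s API: leave the stories’ official estimate fields empty so the rollup has nothing to sum, and park the team’s numbers in a custom field the tool ignores.

    # Hypothetical data model: "estimate" is the field the tool rolls up;
    # "team_points" stands in for a custom field the tool ignores.
    epic = {
        "name": "Fruit salad",
        "estimate": 5,  # the business-level LOE we want to keep
        "children": [
            {"name": "Chop pineapple", "estimate": None, "team_points": 2},
            {"name": "Wash blueberries", "estimate": None, "team_points": 1},
        ],
    }

    def effective_estimate(epic):
        # Mimics the rollup: child estimates, when present, replace the epic's own.
        child = [c["estimate"] for c in epic["children"] if c["estimate"] is not None]
        return sum(child) if child else epic["estimate"]

    print(effective_estimate(epic))  # 5 -- the business LOE survives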
