Validation and Uncertainty

What an extraordinary conversation I had recently on Twitter. It started with Neil Killick’s statement that we should not consider our stories truly done until validated by actual use. This is a lovely thing, if we can manage it. While I’ve not made such a bold declaration of “done,” I’ve certainly advocated for testing the benefit of what we build in actual use. Deriving user stories from hypotheses of what the users will do, and then measuring actual use against those hypotheses, is a very powerful means of producing the right thing, more powerful than any Product Owner or Marketing Manager could otherwise dream up. (Continued)

Setting Expectations

Karl Scotland wrote an excellent blog post on Estimates as Sensors. In it, he extols the use of estimates to “sense capability and create feedback for yourself.” This is similar to my point in Estimation as Hypothesis.

At the end of that post, it now says, “I don’t recommend using them to make promises and give guarantees to others.” Originally, it said something like “I don’t recommend using them to make promises and setting expectations of others.” I asked him what he did use for setting expectations. He responded in two ways: by saying, “I’d say expectations are set from understanding our capability, refined through sensing, and with a confidence range,” and by changing the post to reflect his original intent.
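To make that idea concrete, here is a minimal sketch (my own illustration, not Karl’s) of stating capability as a confidence range derived from observed throughput; all the sprint numbers are hypothetical:

    # Express delivery capability as a confidence range rather than a
    # single-point promise. Sprint throughput numbers are hypothetical.
    import statistics

    # Stories completed in each of the last ten sprints (observed capability)
    throughput = [6, 9, 7, 5, 8, 7, 10, 6, 8, 7]

    mean = statistics.mean(throughput)
    stdev = statistics.stdev(throughput)

    # Rough ~90% range assuming roughly normal variation; quantiles of the
    # raw history also work, and need no distribution assumption.
    low, high = mean - 1.645 * stdev, mean + 1.645 * stdev

    print(f"Per-sprint capability: {mean:.1f} stories "
          f"(roughly {low:.1f} to {high:.1f} at ~90% confidence)")

A range like that sets an expectation that can be refined, sprint by sprint, as more throughput data arrives.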

There’s more to this business of setting expectations. (Continued)

Short Overview of Estimation

Back in 2008, Bob Payne and I were working with a team learning to practice Agile Software Development. They were doing quite well at a lot of things, but their sprint velocity bounced up and down like a yo-yo. Since they were clearly delivering value at a pretty steady pace, the sawtooth velocity chart indicated that they were not very good at estimating stories. Bob suggested to me that they might do as well, with less effort, by counting the stories instead of estimating them. (Continued)
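As a rough check of Bob’s suggestion, one can compare a forecast built from summed story points against one built from a plain story count; the numbers below are made up for illustration:

    # Compare two forecasts of sprints remaining: one from story points,
    # one from a plain story count. All numbers are hypothetical.

    # Per-sprint history: (story points completed, stories completed)
    history = [(21, 7), (34, 9), (18, 6), (29, 8), (25, 7)]

    points_velocity = sum(p for p, _ in history) / len(history)
    count_velocity = sum(c for _, c in history) / len(history)

    backlog_points = 250   # estimated points remaining in the backlog
    backlog_stories = 75   # stories remaining; no estimation needed

    print(f"By points: {backlog_points / points_velocity:.1f} sprints")
    print(f"By count:  {backlog_stories / count_velocity:.1f} sprints")

When the two forecasts land close together, the effort spent estimating each story isn’t buying much additional predictive power.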

Definition of Ready

Many times, in the middle of developing a user story, the programmer discovers a question about how it’s intended to work. Or the tester, when looking at the functionality that’s been developed, questions whether it’s really supposed to work that way. I once worked with a team that too often found that, when the programmer picked up the card, there were questions that hadn’t been thought out. The team created a new column on the sprint board, “Needs Analysis,” to the left of “Ready for Development,” for these cards that had been planned without being well understood. (Continued)

Taking the Long View in Software Development

My article, Taking the Long View in Software Development, has been published on ProjectManagement.com. Free registration is required on that site.

What does it mean for an estimate to be right?

What does it mean for an estimate to be right? Does that mean that later actuals had the same numerical value? That the project length, or cost, or end date was the same as the estimate? Is it an indication of the “correct length of the project,” whether or not the project is done in that time? Or is there some better definition of “correct estimate” that we can use? (Continued)

Tracking velocity

It’s very common for organizations to track the velocity of their Agile teams over time. This is quite a reasonable data point to plot. Combined with other data, it might give you some insights when you look back, and insights based on data are typically more useful than insights based on opinion. Remember, though, to keep in mind what the data is, and what it is not. (Continued)
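For instance, a small sketch (with hypothetical data) that plots velocity alongside a rolling mean, so any trend is visible through the sprint-to-sprint noise:

    # Plot sprint velocity over time with a rolling mean, making trends
    # visible against sprint-to-sprint noise. Data is hypothetical.
    import matplotlib.pyplot as plt

    velocity = [23, 31, 18, 27, 25, 33, 22, 28, 26, 30]
    sprints = range(1, len(velocity) + 1)

    window = 3
    rolling = [sum(velocity[max(0, i - window + 1):i + 1]) /
               len(velocity[max(0, i - window + 1):i + 1])
               for i in range(len(velocity))]

    plt.plot(sprints, velocity, "o-", label="velocity")
    plt.plot(sprints, rolling, "--", label=f"{window}-sprint rolling mean")
    plt.xlabel("Sprint")
    plt.ylabel("Story points completed")
    plt.legend()
    plt.show()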

Getting so much better all the time!

I’ve got to admit it’s getting better (Better)
It’s a little better all the time (It can’t get no worse)
— The Beatles

Why do we expect productivity to increase? This goal seems to be a common expectation for management-driven Agile adoptions. Productivity is like motherhood and apple pie; who wouldn’t want more? (Continued)

How easy is it for your programmers to fix problems?

A programmer, writing some new code, looks into some existing code that she needs to use. Something doesn’t look quite right. In fact, there’s a bug. Whether no one’s triggered it, or they have but their complaints haven’t reached anyone who will do something about it, is hard to say. Can she fix this code now and keep working? Or does something prevent that?

In such a situation, I would prefer to write a new test illustrating the bug, fix it, and check both the test and the fix into source control. This might take five minutes or an hour. It’s a small detour, but I feel better knowing that the code is now safer for the future.
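Purely as an illustration of that detour (a hypothetical helper, not code from any real project), the test-then-fix might look like this in pytest style:

    # The "new test illustrating the bug, then the fix" detour, pytest style.
    # days_between is a hypothetical helper whose original version had an
    # off-by-one bug: it returned (end - start).days + 1.
    from datetime import date

    def days_between(start, end):
        """Number of whole days from start to end."""
        return (end - start).days  # fixed; the buggy version added 1

    def test_days_between_one_day_apart():
        # Written first, this test fails against the buggy version,
        # demonstrating the defect; it passes once the fix is in.
        assert days_between(date(2024, 1, 1), date(2024, 1, 2)) == 1

Checked in together, the test documents the defect and guards against its return.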

Maybe, however, there are policies, either explicit or tacit, that prevent such quick resolution. (Continued)

Long-Range Planning with User Stories

I frequently hear or read people suggesting using User Stories for relatively long-range planning. Sometimes they mean something as short as a release in a few months. Sometimes they’re talking about multiple releases over a year or two. In all of these cases, they’re talking about breaking down the work to be done into small slices so that they can better measure its perceived size for predicting the future.

What are the implications of doing this? (Continued)
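As a rough sketch of what such slice-based forecasting can look like (my own illustration; every number below is made up), one can bracket the projection with the best and worst observed throughput:

    # Long-range projection from a sliced-up backlog, bracketed by the
    # optimistic and pessimistic observed throughput. Numbers are made up.
    remaining_slices = 180                   # stories left after slicing
    weekly_done = [4, 6, 5, 7, 4, 5, 6, 5]   # slices finished per week

    best = max(weekly_done)
    worst = min(weekly_done)
    typical = sum(weekly_done) / len(weekly_done)

    for label, rate in [("optimistic", best), ("typical", typical),
                        ("pessimistic", worst)]:
        print(f"{label:>11}: about {remaining_slices / rate:.0f} weeks")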