Most Scrum teams estimate their top-priority stories and select, for their sprint backlog, those stories whose estimates add up to their historical velocity. Some teams simplify this by merely counting the stories, or by using the mathematical reciprocal, cycle time. Others make it more complicated, calculating the effect of days off and other known distractions from the work.
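That selection process can be sketched in a few lines. This is a minimal illustration with made-up story names and point values, not a prescribed algorithm; it just walks the prioritized list and stops before the estimates exceed the historical velocity.

```python
# Minimal sketch of velocity-based sprint planning (hypothetical data).
# Stories are (name, estimate-in-points) tuples, already ordered by priority.
def fill_sprint_backlog(prioritized_stories, historical_velocity):
    """Select top-priority stories whose estimates fit within historical velocity."""
    backlog, total = [], 0
    for name, points in prioritized_stories:
        if total + points > historical_velocity:
            break  # stop at the first story that would overflow the sprint
        backlog.append(name)
        total += points
    return backlog, total

stories = [("login", 5), ("search", 8), ("export", 3), ("audit log", 13)]
print(fill_sprint_backlog(stories, historical_velocity=18))
# selects login, search, export (16 points); audit log (13) would overflow
```

The point of the article, of course, is that this calculation should be a starting point for judgement, not a substitute for it.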
However they calculate it, some people put a lot of faith in the historical data to guide the future. “It’s data,” they say, “it’s better than guesses and not subject to cognitive bias.” Not all data is easily measured and converted to numbers, though. Limiting yourself to this initial calculation is, itself, an example of anchoring bias.
After planning the work for the next short interval of time, it’s helpful if the members of a team look one another in the eye and ask themselves, “Can we do this much; could we do more?” If a group can’t do this and give itself an honest answer, it’s probably not yet a true team. A true team will be able to make a gut judgement of what the team as a whole can or cannot do. This judgement will take into account data that may not yet be identified and articulated, much less converted to numbers.
I’ve observed teams ask themselves this question and then get really quiet. No, they don’t feel they can do this work. Eventually someone speaks their doubt, and everyone else chimes in. They may identify reasons why this time period is different from the past. “There are a lot of distractions around the upcoming acquisition of another company.” They may identify reasons why this body of work is different from the past. “This is a new technology. We’ve investigated it, but never used it in production.” “Much of this work involves the skills of only one person. It’s going to be less efficient having others come up to speed on this work.” They may not identify the reasons at all, but still know in their hearts that their rational process is giving them the wrong answer.
Pay attention to that gut feel. It’s data, just as much as the numbers. Judgement is what turns data into choices. Making automatic decisions based on the numbers is an abdication of that judgement.
While I believe that the gut check should be done by every team, I don’t think their inability to answer it is a reflection of their “team-ness.” It might be a reflection of the atmosphere, the team’s comfort with the domain/tech/situation, or any number of things.
That said, a mix of gut checks and empiricism is useful.
Unwillingness to answer might be a reflection of the general atmosphere. A lack of team comfort with the domain, technology, or situation might mean the honest answer is “I don’t know.” But I stand by my assertion that an inability to answer is likely due to the members not knowing themselves as a team, which is a strong indicator of a lack of “team-ness.”
I disagree that looking at the historical data is in itself a form of anchoring bias. On the contrary, it’s a (simple) form of reference-class forecasting (http://en.wikipedia.org/wiki/Reference_class_forecasting), which is used to correct for cognitive biases that occur during the estimating/forecasting process. If anything, I think we don’t use the historical data enough, that is, with sufficient regard to statistical variability and the probability distribution, so that we don’t just blindly use a single number.
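The commenter's point about variability can be made concrete. A minimal sketch, using made-up past velocities: rather than committing to the mean, look at the spread of the historical distribution and pick a conservative value from the low end of the sample.

```python
# Simple reference-class-style look at velocity history (hypothetical numbers).
# Rather than a single average, examine the distribution of past sprints.
import statistics

past_velocities = [22, 18, 25, 20, 15, 23, 19, 21]  # made-up sprint history

mean = statistics.mean(past_velocities)
stdev = statistics.stdev(past_velocities)

# A conservative commitment: a value near the low end of the observed sample,
# acknowledging that roughly 1 sprint in 8 came in below it.
conservative = sorted(past_velocities)[1]

print(f"mean {mean:.1f}, stdev {stdev:.1f}, conservative commitment {conservative}")
```

With this made-up history the mean is about 20 points but the spread is wide, which is exactly the information a single velocity number hides.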
George did not say that LOOKING AT historical data was an example of anchoring bias. He said that making decisions automatically based on the data without considering other factors was an example of it.
Dave: You’re right, he didn’t quite say that, though he did say “Limiting yourself to this initial calculation is, itself, an example of anchoring bias.” That’s wrong: it’s not anchoring bias. There’s no bias, since it’s based on data. Bias only comes into play if someone is relying on their intuition.
Ted, there’s infinite data. The bias is in what data you choose and what you ignore. Anchoring bias is choosing the first data you find. Rejecting the first data and selecting other data would also be a bias, though probably one without a name.
Of course there’s always loads of data to be found, but when you use data to inform a decision, that’s not cognitive bias.
Numbers can be used to create anchoring bias but that’s not what you seem to be talking about. If we use classes of service and work types to help inform an expectation around what amount of variability can be expected from work to be done then we’re actually combating cognitive bias using data.
If you mean using historical velocity to inform our gut when we guess what we can do next time, then yes, that’s subject to anchoring bias. But if one is not using intuition, then one cannot be subject to cognitive bias.
When teams are required to set sprint commitments, there is value in using historical velocity to anchor their guts. It’s not as valuable as a pull system, where flow is managed rather than pushed, but it’s still valuable IMO.
If I fill a gallon jug with water, then pour that water into a graduated cylinder to confirm the amount, and then forecast that the jug can hold precisely that amount based on the level recorded in the cylinder, I am using data to inform my theory. I am not being anchored by the data; it’s empirical, and therefore I can expect much less variability in the result.
Someone I respect once said “…because empirical evidence trumps speculation every single time.” and I agree with her completely. 🙂
You’re fooling yourself if you think that you’re not subject to cognitive bias if you use some data. And you’re neglecting the fact that human systems are a lot less predictable than your gallon jug of water.
I think George is 100% right: if teams are starting out with velocity as their metric, they should certainly run entirely by gut feel.
Velocity is a guess divided by an arbitrary number and therefore requires frequent gut-feel redirects.
Also, cognitive bias is something we are subject to in everything we do. The best we can do is use numbers to mitigate its impact. The more accurate and meaningful our numbers are, the better our cognitive-bias mitigation is likely to be.
Kanban has done a pretty good job of enhancing our internal team communication. So far it’s the best Agile method I’ve used in my organization.
Having observed many teams, I note that there is nothing inherent in kanban to enhance internal team communication. Good internal team communication will, however, enhance the success of kanban. I’m glad it’s working for you.