Managing Risk With Estimates

Used with care, software development estimates can help you manage project risks. They let you peer into the future, though only as well as your current understanding allows. Estimating based on what you know is easy. Estimating based on what you know you don’t know is possible. Allowing for what you don’t know you don’t know is prudent.

Managing risk is a dynamic process. I’ve seen people record a risk in a “Risk Register” and promptly ignore it. That’s not management. Instead, consider different ways a risk might be reduced in likelihood or consequence. When time or cost is of the essence, think about how you’ll determine what you can afford, and when you need to take a second-choice approach.

I once worked on a project where we had to remove the requirement to log in for viewing certain documents, such as a Form 10-K. The system had been built with the assumption that the user would always be logged in, though anyone could create a login that would give them immediate access to such open documents. A change in an SEC ruling meant that the organization would be liable for expensive daily fines if the new behavior was not deployed by December 31.

The existing code checked the access rights in numerous places. In some places, a user would be permitted or denied access to the document. In other places, a user might or might not be able to see the existence of a document to which they did not have access. Access rights might be granted on an individual basis, a company basis, or a group basis. There were similar, though slightly more complicated, business rules governing the access required for the documents themselves.

There were at least two possible approaches. We could patch the logic in each of those places. Or we could refactor to remove the duplication and implement the logic in one area. The latter approach was clearly superior for the long term, but was also more work, touching more code. It was only considered feasible if it could meet the short-term deadline. Missing the SEC deadline was not considered a successful outcome.
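
To make the difference concrete, here is a rough sketch in Java. The names are invented for illustration; the real system’s classes and rules were more involved. The point of the refactoring was that every call site stopped repeating its own variation of the access check, and one policy object answered the question instead, so the new “open documents need no login” rule would have a single home:

    // Illustrative sketch only; these names are invented, not the project's code.
    interface User {
        boolean isLoggedIn();
        Object company();
        java.util.List<Object> groups();
    }

    interface Document {
        boolean isOpenToPublic();               // e.g., a Form 10-K after the rule change
        boolean isGrantedTo(Object principal);  // individual, company, or group grants
    }

    // One place that answers "can this user view this document?"
    // The new rule for open documents is added here, not patched into every caller.
    class AccessPolicy {
        boolean canView(User user, Document doc) {
            if (doc.isOpenToPublic()) {
                return true;                    // open documents no longer require a login
            }
            return user.isLoggedIn() && hasGrant(user, doc);
        }

        private boolean hasGrant(User user, Document doc) {
            return doc.isGrantedTo(user)
                || doc.isGrantedTo(user.company())
                || user.groups().stream().anyMatch(doc::isGrantedTo);
        }
    }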

Estimates predicted the refactoring could be done in time, by a small margin. Work started in an order that touched one area of functionality at a time. If the estimates proved optimistic, there would still be time to switch approaches as long as we’d left the other areas of functionality untouched and working. The first couple of areas, of course, took longer than later ones. They required the initial implementation of functionality that would be extended and shared by the other areas. By the time these were completed, the safety margin had shrunk. Should we proceed with the refactoring?

Since these two areas were completed and working, the time needed to patch all of the duplicate logic was now reduced; there were two fewer places to patch. I considered that enough leeway to support continuing with the refactoring. The project manager was nervous. The estimates of the remaining areas looked optimistic to her, given the time the first two had taken. As it turned out, the remaining estimates were a bit pessimistic, and the safety margin started to grow again with each area that was refactored to the new approach. The reduced time to implement and test each remaining area allowed the deadline to be met, and a few customer annoyances to be fixed, as well.

This story has a happy ending, but it’s instructive to look at places where things might have turned out differently. The initial estimates might have shown that there was insufficient time for the refactoring, or that patching the code itself had little margin for safety. Either of these might have led to choosing what was considered the simpler, safer approach: patching the code.

This team was not experienced at refactoring legacy code, and might have had difficulty either estimating or performing the refactoring without me. Left to themselves, they would likely have estimated the refactoring very pessimistically, but I’ve seen other teams who were overly optimistic about their ability to refactor an existing code base. Either way, the confidence that the project manager placed in the estimates would likely have been lower. I think she would not have agreed to the refactoring without my confidence, and the ability to double-check the progress along the way.

I was not new to this code. I had been working with it for a few months, and had already made some progress in finding seams to isolate sections for testability. I had put characterization tests around those sections that were substantially unchanged. These tests had proven pretty good at alerting me when I stumbled across a new assumption in the code about which I was ignorant. This knowledge, and the fact that I was finding the code more flexible for new requirements, gave me confidence that I could successfully refactor within the time constraint. Without knowledge of the code base and some past experience developing in it, my estimates would have had a wider margin of error.
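
For readers unfamiliar with the term, a characterization test simply records what the legacy code actually does today, right or wrong, so that a refactoring that accidentally changes behavior fails immediately. A minimal sketch in JUnit might look something like this; the checker and its parameters are stand-ins I’ve invented, not the project’s code:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Characterization tests pin down current behavior before refactoring.
    // Everything here is a stand-in for illustration, not the project's code.
    public class AccessCharacterizationTest {

        // Imagine this is a seam exposed from the legacy access-checking logic.
        static boolean legacyCanView(boolean loggedIn, boolean individualGrant,
                                     boolean companyGrant) {
            return loggedIn && (individualGrant || companyGrant);
        }

        @Test
        public void anonymousUsersAreDeniedEvenWithAGrant() {
            // Records today's behavior, before the "no login for open documents" change.
            assertFalse(legacyCanView(false, true, false));
        }

        @Test
        public void loggedInUserWithACompanyGrantCanView() {
            assertTrue(legacyCanView(true, false, true));
        }
    }

When a test like this fails during a refactoring, it means either I’ve broken something or I’ve just learned about an assumption I didn’t know was there. Both are worth discovering before the deadline, not after.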

The first couple of areas had taken longer than I expected. I ran into a few of those unknown assumptions in the code. Had they taken longer than they did, we would likely have switched gears to patching the code. In spite of these estimates having been a bit optimistic, I still felt confident in the remaining estimates. There might still be a few special cases in the access logic, but I now had a module with good test coverage, and had solved an infrastructure problem with testing the database access. If subsequent implementation had not supported my confidence, we still would have switched gears at that point.

When estimates are used for managing risk, they can’t be performed one time and then forgotten. The confidence placed in them needs to be examined for flaws in the assumptions. They need to be tested by comparison with actuals. When actuals and estimates disagree, believe the actuals. Consider what that may mean for other estimates. Do they need to be adjusted based on new knowledge or adjusted assumptions?

Estimates are never guarantees. They are, however, a useful tool for achieving the best future we can while minimizing the downside of risk.
