Visual Management Tools

Sometimes we intentionally make our work more visible so that we can more easily see what’s going on. We do this so that, as a group, we get a better picture of the whole of the group’s effort. At its best, this is more than a dashboard that displays information. Instead, it’s a tool that’s used by the people doing the work in the process of doing that work. (Continued)

Mocking External Services

Should your tests mock outside services or not? I keep seeing discussions on this topic. On the one hand, mocking lets your tests be in control of the testing context. Otherwise it may be very difficult to create a reliable automated test.

  • The external service might return different results at different times.
  • The external service might be slow to respond.
  • Using the external service might require running the test in a particular environment.
  • It may be impossible to generate certain error conditions with the real service.
  • There may be side-effects of using the real service.
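To make that first point concrete, here’s a minimal sketch in Python using unittest.mock. The currency-conversion function and the rate service it calls are hypothetical stand-ins for whatever external service your code depends on; the point is that the mock, not the remote service, decides what the test sees.

    import unittest
    from unittest.mock import Mock

    # Hypothetical code under test: it depends on an external rate service.
    def convert(amount, rate_client):
        rate = rate_client.get_rate("USD", "EUR")   # normally a network call
        return round(amount * rate, 2)

    class ConvertTest(unittest.TestCase):
        def test_convert_uses_current_rate(self):
            # The mock stands in for the real service, so the test controls
            # the returned rate and never touches the network.
            rate_client = Mock()
            rate_client.get_rate.return_value = 0.5
            self.assertEqual(convert(10, rate_client), 5.0)
            rate_client.get_rate.assert_called_once_with("USD", "EUR")

    if __name__ == "__main__":
        unittest.main()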

On the other hand, our mock of the external service might differ from it in important ways.

  • Our mock service might have a slightly different interface than the real one.
  • Our mock service might accept slightly different parameters than the real one.
  • Our mock service might return slightly different results than the real one.
  • The real service might change behavior, and we won’t notice until we deploy to production.

This leaves us in a quandary. What to do? (Continued)

Relationship of Cycle Time and Velocity

I sometimes see clashes between Kanban proponents and their Scrum counterparts. Despite the disagreements, both of these approaches tend toward the same goals and use many similar techniques. Karl Scotland and I did some root cause analysis of practices a few years back and came to the conclusion that there were a lot of similarities between Kanban and Scrum [as the poster-child of iterative agile] when viewed through that lens. I also noticed that while Scrum explicitly focuses on iterations as a control mechanism, Scrum teams tend to get into trouble when they ignore Work In Progress (WIP). Conversely, while Kanban explicitly focuses on WIP, Kanban teams tend to get into trouble when they ignore Cadence.

A Twitter conversation I was in revolved around Cycle Time and Velocity. Since this is a topic that’s come up before, I thought it would be valuable to describe it more fully. Again, I find there to be more similarities than differences between Kanban (which uses Cycle Time) and Scrum (which uses Velocity) in terms of predicting when a given amount of work will be done, or how much work will be done by a given time. (Continued)
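As a quick preview of the arithmetic, here’s a tiny sketch with illustrative numbers (the cycle time, WIP, and velocity below are made up, and velocity is measured in stories rather than points so the units match). Both routes answer “when will these 30 stories be done?” and, for a reasonably stable team, they land in roughly the same place.

    # Forecasting the same 30-story backlog two ways; all numbers are illustrative.
    backlog = 30

    # Kanban-style: WIP and average cycle time give throughput (Little's Law).
    avg_cycle_time_days = 4                          # days per story, on average
    wip = 6                                          # stories in progress at once
    throughput_per_day = wip / avg_cycle_time_days   # 1.5 stories per day
    days_to_done = backlog / throughput_per_day      # 20 working days

    # Scrum-style: velocity per two-week sprint.
    velocity_per_sprint = 15                         # stories completed per sprint
    sprints_to_done = backlog / velocity_per_sprint  # 2 sprints, about 20 working days

    print(days_to_done, sprints_to_done)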

Another Approach to the Diamond Kata

I saw that Alistair Cockburn had written a post about Seb Rose’s post on the Diamond Kata. I only read the beginning of both of those because I recognized the problem that Seb described with the “Gorilla” approach upon reaching the ‘C’ case:

“The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once. That’s hard, because we’ll need to cope with multiple lines, varying indentation, and repeated characters with a varying number of spaces between them.”

I’ve run into such situations before, and it’s always been a clue for me to back up and work in smaller steps. Seb describes the ‘B’ case as “easy enough to get this to pass by hardcoding the result.” Alistair describes the strategy as “shuffle around a bit” for the ‘B’ case. I’m not sure what “shuffling around a bit” means, and I don’t think it would be particularly easy to get both ‘A’ and ‘B’ cases working with constants without heading down a silly “if (letter == 'A') … elseif (letter == 'B') …” implementation. I was curious how I would approach it, and decided to try. (Ron Jeffries also wrote a post on the topic.) I didn’t read any of these three solutions before implementing my own, just so I could see what I would do. (Continued)
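As a quick illustration of where hardcoding leaves you (this is only a sketch of the starting point, not anyone’s finished solution), the ‘A’ and ‘B’ cases can pass with constants, and the branch-per-letter shape is exactly the silly implementation in question:

    # A hardcoded starting point for the Diamond Kata, in Python, just to show
    # where the 'A' and 'B' cases leave you before 'C' forces a real design.
    def diamond(letter):
        if letter == 'A':
            return "A"
        if letter == 'B':             # the branch-per-letter shape in question
            return " A \nB B\n A "
        raise NotImplementedError("the 'C' case is where the real work starts")

    assert diamond('A') == "A"
    assert diamond('B') == " A \nB B\n A "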

When Klunky Component APIs Show in the UX

At various times, the behavior of Facebook reveals to me that it’s a loosely coupled system with long latencies in synchronizing its various data stores. Just today, the iOS app asked me to confirm a friend request. When I did so, it gave me a pretty generic error—something about not being able to do this action at this time. That’s when I realized that I’d already accepted that particular friend request, days ago. That tells me that there are often inconsistencies between the database used with one interaction, and the one used with another interaction on another interface.

It also tells me that there are likely teams working on components, rather than features. If it were a feature team, the need for an error message would have driven a different underlying implementation. A feature team would want to differentiate between not being able to accomplish an action because it was already done, and because some problem was blocking it. For the former, there’s no need to show an error at all. (Continued)
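As a sketch of that difference (the names and shapes here are hypothetical, not Facebook’s actual API), a feature-oriented implementation might treat “already accepted” as a successful no-op and keep the error path for genuine failures:

    # Hypothetical handler for accepting a friend request; illustrative only.
    class StorageUnavailable(Exception):
        pass

    def accept_friend_request(request, store):
        if request.accepted:
            # Already done, perhaps from another device: confirm rather than complain.
            return {"status": "ok", "already_accepted": True}
        try:
            store.mark_accepted(request.id)
        except StorageUnavailable:
            # A genuine problem blocking the action deserves a specific error.
            return {"status": "error", "message": "We couldn't save that right now. Please try again."}
        return {"status": "ok", "already_accepted": False}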

Short Overview of Estimation

Back in 2008, Bob Payne and I were working with a team learning to practice Agile Software Development. They were doing quite well at a lot of things, but their sprint velocity bounced up and down like a yo-yo. Since they were clearly delivering value at a pretty steady pace, the sawtooth velocity chart indicated that they were not very good at estimating stories. Bob suggested to me that they might do as well, with less effort, by counting the stories instead of estimating them. (Continued)

Why I Practice TDD

I was reading Laurent Bossavit’s book, “The Leprechauns of Software Engineering—How folklore turns into fact and what to do about it,” and came across his mention of “Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development” by Jerod W. Wilkerson, Jay F. Nunamaker, & Rick Mercer. This struck me as an odd thing to study. Not only is Test-Driven Development not primarily about defect reduction, but the population of defects it might reduce is likely to be very different from the population of defects reduced by code inspection.

I then took a look at my own list of TDD studies and noted that most of these studies were focused on external quality as measured by the absence of known defects, and the time it took to develop the functionality. Keith Braithwaite, at Agile 2007, reported on internal quality, specifically Cyclomatic Complexity.

Quality and productivity are, of course, important things. And they’re easy to sell to some managers. Who could be against them? And I certainly wouldn’t continue to practice Test-Driven Development if it added defects or took a significantly longer time to create functionality. But that’s not why I practice TDD. (Continued)

Estimation as Hypothesis

Experimentation is a powerful learning tool. When I was young, I performed scientific experiments by mixing chemicals together to see what they would do. I learned that most random concoctions from my chemistry set would make a brown liquid that was often hard to clean out of a test tube. I learned that sometimes they would create very smelly brown liquids. These were not really experiments, however, and I didn’t really learn from them. Instead, these were activities and I collected anecdotes and experiences from them.

The scientific method rests on the performance of experiments to confirm or deny a proposed hypothesis. Unless you can propose a hypothesis in advance, you cannot design an experiment to test it. Until you test the hypothesis, you haven’t really learned anything. (Continued)

How do we estimate?

There have been some web posts and twitter comments lately that suggest some people have a very narrow view of what techniques constitute an estimate. I take a larger view, that any projection of human work into the future is necessarily an approximation, and therefore an estimate.

I often tell people that the abbreviation of “estimate” is “guess.” I do this to remind people that they’re just estimates, not data. When observations and estimates disagree, you’d be prudent to trust the observations. When you don’t yet have any confirming or disproving observations, you should think about how much trust you put into the estimate. And think about how much risk you have if the estimate does not predict reality.

This does not mean, however, that you have to estimate by guessing. There are lots of ways to make an estimate more trustworthy. (Continued)

A Big Estimate is Not a Sum of Small Estimates

I’m working with a client that has multiple, non-collocated component teams working on one project. It’s not my ideal situation, but we’re making the best of it.

We built a story map of business-oriented, project-level “epics.” These have been prioritized within business themes, and have been tentatively scheduled for development releases. The early ones have been estimated with level of effort (LOE). Basically, these LOEs are Small, Medium, and Large, but they’re given numeric scores to allow tracking project progress toward development releases from a business point of view using a burnup chart. (Continued)
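To make the arithmetic concrete (the point values and epics below are illustrative assumptions, not the client’s actual scale), the scoring and burnup math might look something like this:

    # Illustrative LOE scoring and burnup arithmetic; all values are made up.
    LOE_POINTS = {"Small": 1, "Medium": 3, "Large": 9}

    epics = [
        {"name": "Epic 1", "loe": "Medium", "done": True},
        {"name": "Epic 2", "loe": "Large",  "done": False},
        {"name": "Epic 3", "loe": "Small",  "done": True},
    ]

    total_scope = sum(LOE_POINTS[e["loe"]] for e in epics)             # top line of the burnup
    completed = sum(LOE_POINTS[e["loe"]] for e in epics if e["done"])  # burnup line so far

    print(f"{completed} of {total_scope} LOE points complete")         # "4 of 13 LOE points complete"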