On Models

Brian Marick has written a tantalizing post, The Stance of Reaction. In it he says

At this point, Sr. Naveira has at least four reasonable choices. He can step forward, thereby “asking” Srta. Muzzopappa to move backwards. He can step backwards (typically reversing the sweep). He can turn toward her so that they’re facing, causing her to “collect” (bring her feet together). He can take no step at all, but turn his chest to the left, causing her to begin moving counterclockwise around him.

The important thing about Argentine tango (danced recreationally, though perhaps not in a performance like this) is that she does not know which choice he’ll make. She must be balanced in a stance—physical and mental—that allows her to react gracefully to any legitimate move.

I truly hope he’ll expand on this, and on how he applies it to the business of software development. I have great admiration for Brian’s intellect and inventiveness. I suspect what he says will help me work on some half-baked ideas I have about effective TDD keeping the code in a state in which it’s prepared to go in any direction, and about Pair Programming being most effective when we work to increase the possibilities open to our partner (à la improv acting).

So far, Brian seems to be describing the concept of Reaction by saying what it is not: that it is not a reduction to a model. His description of this dichotomy does not match my understanding of how we use models, and online conversation has not clarified it for me. I suspect that the difficulty stems from our looking at the situation through different models. The appropriate next step seems (to me) to be clarifying my own model of how models work and how they are useful to me.

I wholeheartedly agree with Brian’s assertion that we have “a readiness to reduce (abstract away, simplify, generalize) the world’s complexity into something simpler that you can work with and think about.” This is true not just of knowledge workers, but of people in general. The universe’s assault on our senses is far too much to take in without such reduction. As Brian says, “By default, we mostly act unconsciously, with the unconscious mind forwarding only anomalies to the rational part of the mind.” In other words, when the world around us appears to conform to the models most ingrained within us, we can react according to those models without making a conscious choice.

Other models are less ingrained, and we apply them more consciously. Brian’s examples of “homo economicus” and the “V-model of the Systems Engineering Process” are good illustrations of models we’ve built specifically to help us with situations that our subconscious doesn’t handle.

Is there a fundamental difference between our unconscious and conscious models? As far as I can tell, it lies only in our awareness of them. The study Brian uses to illustrate the power of the unconscious mind describes a detectable difference in response between a cookie sheet and a chess board. The unconscious model accounts for putting a cookie sheet into the oven, but not for putting a chess board into the oven. Certainly both cookie sheets and chess boards are too modern to be innate to the species. Instead, the pattern of putting a cookie sheet into the oven is one to which we’ve become accustomed through familiarity. Or, at least, most of us have. Trying the same experiment with people unfamiliar with ovens, cookie sheets, and chess boards would assuredly give different results.

Reductionist models are the way that both the conscious and the unconscious mind deal with this sensory overload. But the models we use can lead us astray. As Brian says,

So sticky is the “homo economicus” reduction that economists face the occupational hazard of treating it as the only model of human behavior, which can make them say awfully silly things. Similarly, elegant and simple software development models like the V-model are so elegant, so simple, so pleasingly linear that their failure to work with real human behavior and limitations is commonly seen as the fault of the people, not the model.

H.L. Mencken put it this way: “For every complex problem, there is a solution that is simple, neat, and wrong.” We are more likely to jump to an easy answer when we know only one model that fits the situation, or when we apply a model that’s so deep in our unconscious that we don’t notice it’s there.

If we are to avoid the trap of being limited by an inappropriate model, then we need to know more than one model.  And for the unconscious models, we need to find ways to rethink the situation according to conscious models.

In fact, this latter use is the value I find in the Myers-Briggs Type Indicator model that so irks Brian. Brian would like for people to discard MBTI because it’s not “true” and doesn’t predict people’s behavior well. I, on the other hand, expect neither of these things from MBTI. It aspires only to be a model of preference, not of behavior. While it often gets misapplied, for purposes such as predicting what career path a person should take, that’s not, as far as I know, an intended use. And I don’t believe it’s “true” in the sense of corresponding to entities in the human psyche. Instead, it’s “true” in the sense that it corresponds to human observations, much as putting a cookie sheet in the oven does. I’m not even convinced that the preferences indicated by the MBTI are constant. I wonder if my preferences change slowly over time, and if they sometimes change rapidly and suddenly in response to the situation.

Whatever the faults of the MBTI as a model for people, it helps me to rethink the application of my unconscious models. As an introvert, my unconscious model might label an extravert as pushy and self-absorbed. Re-evaluating the situation in the light of MBTI, however, may suggest to me that they’re merely thinking out loud. As an NT (iNtuitive-Thinker), it’s easy for me to jump to the conclusion that the solution in my head is both correct and obvious. Recognizing my preferences helps me realize that it is unlikely to be obvious to others, and that I should test its correctness with some data.

We would do best always to beware of our implicit trust in our models, especially our deepest and most closely held ones. These are the models that allow us to act immediately and “instinctively.” But it is the model we don’t question that is most likely to get us into trouble. From time to time, especially when things are not playing out to our liking, we can view the same situation in light of multiple models and see whether they give us different insights.


Comments (2) to “On Models”

  1. I think what you’re overlooking is that I’m talking about reactions that don’t depend on models at all. When we jerk our hands back from a hot stove, we don’t have a model, we have a reflex.

    Phil Agre’s /Computation and Human Experience/ goes into great detail about routine. A thing (program, person) can exhibit perfectly competent behavior without having any “world model” at all. It simply reacts appropriately to local stimuli.

    My oft-repeated story of how vet students learn to perceive whether a cow is bright or dull is along those lines. http://www.exampler.com/testing-com/writings/tacit-knowledge.html The student believes there’s a model behind the perception. The teacher humors her. By the time the student becomes competent, she’s lost the model. Is it really lost? Or can she simply no longer express it? Hard to say, but there’s neurological evidence that the brain tries to avoid model-making when it can. Some teacher (I forget the context) said that explanations are something you give to keep the rational mind occupied while the rest of the mind does the *real* learning.

    Note: I do not claim that everything can be accomplished through learned habit. But I think it’s interesting that XP placed an uncommon emphasis on learned habit. That early radical departure from the way we’re supposed to think hasn’t really been followed up on. I think it’s time.

  2. Here are a few resources that might be useful for this exploration:

    Jens Rasmussen has put forward the SRK (Skills, Rules, & Knowledge) framework as a way of looking at human performance. One of the ideas in the framework, as I recall, is that the more novel the situation you face, the less you’ll be able to deploy your automatic/trained responses and the more you’ll have to reason about the situation.
    See http://www.carlosrighi.com.br/177/Ergonomia/Skills%20rules%20and%20knowledge%20-%20Rasmussen%20seg.pdf

    Also, I recently finished Matthew Syed’s /Bounce/, which is a book about expertise with lots of good examples from sport. In particular, one of the later chapters is about choking, where conscious attention gets in the way of automatic, learned, habitual performance.

    I recently listened to David A. Black being interviewed on the ACM podcast. He opined that technology, and particularly the software development community/ecology, is perhaps overly fond of change. His counterexample is Knuth’s TeX.

    I wonder if there’s a tension between achieving high levels of learned habit and the agile principle of embracing change. My quick analysis is that they are in tension, though it could be that this stems from an overly facile notion of what change the principle suggests we embrace.

    I suspect there’s a link between automaticity in behavior and tacit knowledge, but I’m currently unconvinced they’re the same thing.

    I also feel that teasing out the distinction between, as it were, compile time (i.e., learning) and run time would help clarify whether there’s any interesting disagreement here. My understanding of the Dreyfus model of skill acquisition suggests that models, or at least rules, are needed during learning, but may not be directly, consciously used during expert performance.
