All Life Is Problem Solving

Joe Firestone’s Blog on Knowledge and Knowledge Management


On Cynefin as a Sensemaking Framework: Part Three

May 29th, 2008 · No Comments

[Header image: J. M. W. Turner, "Keelmen Heaving in Coals by Moonlight," 1835]

There are three interesting questions we’d like to take up in this part.

– First, assuming that Cynefin's approach, which requires sensemaking to begin by selecting the type of context one is dealing with, is appropriate, is the Cynefin framework complete enough as it stands, or does it fail to identify important types of decision making contexts?

– Second, what is the place of causality, predictability, sensemaking, and rationality across the Cynefin domains?

– Third, is it a good idea to separate the sensemaking process into a step of classifying contexts followed by the rest of sensemaking, or is it better to address the more specific question of the best decision given a particular situational context, one that may or may not fall into one of the Cynefin contexts?

Other Context Types

The Cynefin framework distinguishes only four primary sensemaking contexts as relevant to decision making. It doesn’t claim that these are the only important ones, but nevertheless the question of whether they are is unavoidable. I think there are at least two more such contexts that spring quickly to mind. First, there are randomness/chance contexts, and second, there are complex, yet relatively predictable contexts.

  • Randomness/Chance

Sometimes we need to decide what to do in contexts where we make decisions in relation to random or close to random interactions or data. Component behavior arising from interactions characterized by chance or randomness is indeterministic, patternless, and unpredictable. However, statistical aggregates of such behavior form patterns we can understand. In fact, it can be said of random processes that though they are unpredictable and highly uncertain in the short run, they are highly stable and predictable in the long run. Processes such as dice throwing, card shuffling, and roulette wheel spinning can be the basis of professional gambling, and of businesses that profit from gambling, just because such stability exists.
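
To make the short-run/long-run contrast concrete, here is a minimal sketch, my own illustration rather than anything from Cynefin or the original argument, simulating fair die rolls in Python: individual rolls are unpredictable, but the running average settles near the expected value of 3.5 as the number of rolls grows.

# A minimal sketch (illustrative only) of the law of large numbers for dice.
import random

def mean_of_dice_rolls(n_rolls, seed=0):
    """Average value of n fair six-sided die rolls."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Short run: averages bounce around; long run: they settle near 3.5.
for n in (5, 50, 5000, 500000):
    print(f"{n:>7} rolls -> mean {mean_of_dice_rolls(n):.3f}")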

Historically, knowledge about chance processes was sought and used by those who wanted to improve the rewards and minimize the losses they experienced in gambling. But other applications of such knowledge are very well known and widespread. Life insurance and other forms of insurance are based on the analysis of randomness, of odds and proportions, and there is enough long-run stability in human populations to support such businesses. In addition, business processes, including manufacturing processes, have variable outputs. Statistical quality control is based on the idea that variation can be held within certain limits by finding and manipulating the causes of such variability. By applying cause-and-effect knowledge appropriately, statistical quality control seeks to minimize variability in outcomes while retaining a stable chance process of output variability. These days, every manufacturing company uses statistical quality control, so there are certainly many decision making contexts in business where we deal with chance and randomness and attempt to limit their scope. Survey research, polling, political campaigning, population forecasting and policy, time-series analysis, and epidemiology are a small sample of the fields in which statistical models analyzing random error, and providing a basis for its control, inform the context of decision making. The decision making may involve cause and effect, where the target of action is a parameter affecting the variability of a statistical process. But sometimes decision making will involve action that doesn't affect a statistical process, but instead tries to operate within it and to benefit from the fact that others acting in the process are less knowledgeable about chance variations than oneself.
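
As an illustration of holding variation within limits while the chance process itself remains, here is a hedged sketch of Shewhart-style three-sigma control limits; the function name and the sample measurements are my own assumptions, not anything taken from the post or from any particular quality control system.

# Illustrative sketch of Shewhart-style control limits; data are hypothetical.
import statistics

def control_limits(samples, sigma_multiple=3.0):
    """Return (lower, upper) control limits around the sample mean."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - sigma_multiple * sd, mean + sigma_multiple * sd

# Hypothetical measurements from a stable manufacturing process.
measurements = [10.02, 9.97, 10.05, 9.99, 10.01, 10.03, 9.96, 10.00]
lower, upper = control_limits(measurements)
out_of_control = [x for x in measurements if not (lower <= x <= upper)]
print(f"limits: ({lower:.3f}, {upper:.3f}); out-of-control points: {out_of_control}")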

All this is by way of posing the question of whether chance contexts should be included within Cynefin. It seems to me an obvious oversight that they are not. Perhaps Dave meant to classify them as complicated systems, since there are rules and cause and effect relations involved in controlling processes with a statistical component. However, the aspect of the processes being controlled is random in character, and our understanding of process variability is in terms of probability models. These models are different from ordinary causal models, even complicated ones, and the differences involved, together with the impossibility of predicting and controlling individual outcomes of decision making, argue for the recognition of an alternative context – specifically a "randomness/chance" context.

  • Complex, More Predictable

Dave Snowden has said in a number of contexts that human complex systems are different from others. Here's a quote from the HBR article (p. 3):

“More recently, some thinkers and practitioners have started to argue that human complex systems are very different from those in nature and cannot be modeled in the same ways because of human unpredictability and intellect. Consider the following ways in which humans are distinct from other animals:

– They have multiple identities and can fluidly switch between them without conscious thought. (For example, a person can be a respected member of the community as well as a terrorist.)

– They make decisions based on past patterns of success and failure, rather than on logical, definable rules.

– They can, in certain circumstances, purposefully change the systems in which they operate to equilibrium states (think of a Six Sigma project) in order to create predictable outcomes.”

I agree that human complexity is different from that found in nature. Ant hills, for example, work without the aid of explicit central controls. In contrast, human systems combine emergent self-organization with the efforts of system agents to direct and control their systems in accordance with their own intentions. I call the second type, human complexity, Promethean Complexity because it frequently tries to impose non-complex order on tendencies toward self-organization. Complexity science, at present, is mostly based on research about Natural Complexity (N-Complexity), and not on research focused on Promethean Complexity (P-Complexity), and we are in a phase now where we are trying to apply constructs, knowledge, and methods developed for N-Complexity to P-Complexity. It is likely that such an effort will succeed only partly, and that we will have to broaden complexity theories and approaches to be successful with P-Complexity.

I propose that we distinguish N-Complexity contexts from P-Complexity contexts, because I think that outcomes of action in N-Complexity contexts are likely to be more predictable than outcomes in P-Complexity contexts. Dave's framework seems oriented to P-Complexity, so perhaps we should define another type of context for more predictable complexity. The addition of randomness/chance and more predictable complex contexts would leave Cynefin with six primary contexts.

Causality, Predictability, Sensemaking, and Rationality Across the Domains

Earlier, I pointed out that Cynefin classification of contexts into one of the four primary domains involves a KLC (Knowledge Life Cycle). The problem in the Cynefin KLC is the question of which domain the context falls into. The first step in many Cynefin projects is information acquisition and individual and group learning through aggregation of sensemaking items. That first step then provides a basis for a second step of knowledge claims about classification, which is then followed by exchanges among the sensemakers evaluating such claims and developing a consensus. Thus, Knowledge Claim Evaluation in Cynefin is communitarian, though every effort is made to partial out the influences of organizational hierarchy on the process of arriving at consensus. Communication of the results of a Cynefin project provides the knowledge integration aspect of the KLC.

I made the point earlier that classification into the simple/known domain doesn't require the step of using sensemaking items to separate the known from other domains. That is, you know you have a problem when your previous knowledge about the effects of your decisions either seems to be, or proves to be, wrong or unreliable. When you see that, you know immediately that you're not in the known domain, and you also know that you don't know what to do to get where you want to go. Given such a judgment, the problem presented to a decision maker is that what they ought to do is unknown, and they have to "make sense" of that problem. Now, part of solving that problem may be successful classification of the decision context into one of the other domains. But that classification task is secondary compared to the primary problem of deciding on the best action we can take to control the instrumental behavior gap.

In a sense, what Cynefin seems to be implying, or suggesting, or inferring (take your choice of the appropriate term here) is that to arrive at that best decision, or at least at a good one, it is better to begin by classifying the context as complicated, complex, or chaotic, and, having done that, to follow the further recommendations of Cynefin to Sense-Analyze-Respond, Probe-Sense-Respond, or Act-Sense-Respond, as the case may be, than it is to directly address the problem of what the best decision is in the context of the situation.

Cynefin seems to justify this by pointing to our relative inability in the unordered contexts (complexity and chaos), to understand cause and effect and to predict the consequences of our actions before a decision is made. We cannot hope to predict decision outcomes before we act since, in the unordered domains, we cannot understand cause and effect before we act, and in the chaotic domain we cannot understand cause and effect at all. Thus, we must, in the unordered domains, rely on trial and error to reveal patterns that we can understand, and use to inform our decisions, and eventually to drive change toward satisfactory outcomes. We must know which domain we’re in before we consider the problem of what a good decision is, because the only domain where we can know enough about cause and effect to be able to predict outcomes and so know what the best decision is, is the complicated domain. In the other two domains, due to the impossibility of knowing cause and effect relations before the fact, finding a good or the best decision through prediction is also impossible, so we have to content ourselves with recognizing patterns, pursuing trial and error, running parallel safe-fail experiments, and reinforcing favorable patterns that result from our actions.
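
The trial-and-error logic just described can be made concrete with a toy model. The following sketch is entirely my own illustration, not anything specified by Cynefin: several cheap safe-fail probes run in parallel, the observed pattern determines which probe to reinforce, and, as I argue below, that choice itself embodies an expectation about future outcomes.

# A hypothetical sketch of parallel safe-fail probes with reinforcement.
# The "true" effectiveness values are assumptions used only to simulate outcomes.
import random

rng = random.Random(42)
true_effect = {"probe_A": 0.2, "probe_B": 0.55, "probe_C": 0.4}  # unknown to the decision maker

def run_probe(name, trials=10):
    """Observed success rate of a small, cheap experiment."""
    return sum(rng.random() < true_effect[name] for _ in range(trials)) / trials

observed = {name: run_probe(name) for name in true_effect}
best = max(observed, key=observed.get)
print(f"observed patterns: {observed}")
print(f"reinforce {best}: an expectation (i.e., a prediction) that scaling it up will keep working")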

If this reconstruction of what Cynefin is saying to us is correct, the argument is very plausible, but nevertheless, I think, for a number of reasons, that it is wrong in saying that we cannot predict in the unordered domains. First, it is inherent in decision making to assume that voluntary acts we undertake either provide immediate gratification, or have an instrumental purpose. Looking at acts that have an instrumental purpose, we would not decide to undertake them at all if we believed that they did not do something to bring about our purposes and goals. So if we select one decision rather than another, and we do not do so using a random or chance rule, then we do so because we expect the decision we select to have a more favorable outcome than the other decision alternatives available to us. Now, this very general consideration is not restricted to any specific Cynefin contexts, but it suggests that all contexts involving instrumental decisions require us to have expectations about decision outcomes before we act and that these expectations can be true or false.

However, an expectation that a decision will bring about a result is not the same as a simple cause and effect rule. It may be more complex than that, and may suggest only that something we do will create a greater propensity for an aspect of reality to change in a direction we favor. But there isn't much doubt that such an expectation is a prediction about the future, given the contextual conditions of the decision. So, however difficult it may be to predict the future in complex and chaotic contexts, our decisions in those contexts do imply expectations and predictions. Perhaps we cannot predict the future in these contexts as well as we can in simple or complicated contexts. Nevertheless, when we make decisions we must have expectations and make predictions, and this means that acting in complex and chaotic contexts must involve such predictions, whether they are likely to be correct or not.

Second, I have pointed out above that Promethean Complexity is likely to differ from Natural Complexity in its level of unpredictability. This suggests that “unpredictability” is not the whole story where complexity is concerned. Instead, Promethean Complexity can be more unpredictable than Natural Complexity, or what is the same thing, Natural Complexity can be more predictable than Promethean Complexity.

My third reason, related to my second one, is that the dichotomy predictable vs. unpredictable inaccurately characterizes all the contexts. All are predictable or unpredictable to some degree. The ordered domains may be more predictable, but the unordered domains offer some predictability as well. This point is actually supported in Dave’s own account of Cynefin’s unordered domains. That is, he indicates that probes involving safe-fail experiments in the complex domain may produce results we want to reinforce. But that is an expectation (a prediction) that such experiments will work in this way. And if safe-fail experiments do produce patterns we like, then when we set out to reinforce them, don’t we have expectations that certain actions will reinforce them, and aren’t these also predictions?

That the answer to this question is yes is suggested by the view that in the unordered domains we must rely on trial and error to reveal patterns that we can understand and use to inform our decisions. For how can patterns and their understanding be useful to us in informing our decisions? I suggest they can only be useful if understanding them helps us to form expectations about the future, about the consequences our decisions are likely to have. That is, patterns providing an understanding of the decision context also provide an understanding of the constraints that apply to both ourselves and our contexts. And these constraints tell us something about what kinds of future we can expect if we do "X," "Y," or "Z." I'm not suggesting that the patterns will provide prediction rules, or laws as in the known domain, but they do allow and support us in formulating ideas about whether particular decisions are more likely to have better results than the alternatives.

In the chaotic domain, we may not be able to predict with any accuracy how what we do will affect the chaotic attractor while the context remains chaotic, but nevertheless, as Dave recommends, we can take action or do safe-fail experiments to try to learn how to change the context from a chaotic one to one that is known or one that is complex. Again, in doing those experiments or taking those actions, we will have expectations about how the experiments will work and the outcomes they will provide. Some actions will clearly destroy the chaotic regimes and create order where there was none before. Others will reduce turbulence enough so that chaos will be able to spawn self-organization and complexity. When we make decisions leading to such actions, again, we have expectations and make predictions, even though these are not as reliable as the predictions we make in the ordered (simple and complicated) domains.
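
The claim that some short-horizon predictability survives even in chaos can be illustrated with a standard toy model. The sketch below is my own illustration (the logistic map is not mentioned in the post): two trajectories that start a millionth apart remain nearly indistinguishable for a few steps, so near-term expectations are meaningful, even though the trajectories diverge completely thereafter.

# Illustrative only: sensitivity to initial conditions in the chaotic logistic map.
def logistic_trajectory(x0, r=4.0, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # perturbed by one part in a million
for t in (1, 5, 10, 20):
    print(f"step {t:>2}: difference between trajectories = {abs(a[t] - b[t]):.6f}")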

If some predictability in the unordered domains of Cynefin exists, then that bears strongly on the issue of whether we should classify first and then follow the actions recommended for those domains, or whether we should directly address what the best decision is in a particular situation. For the meaning of the Cynefin distinctions among the sense-analyze-respond, probe-sense-respond, and act-sense-respond action sequences is this: in the first sequence, rational inquiry is recommended ("analyze") because it can yield tested rules that allow us to predict the consequences of our decisions; in the last two sequences, the appearance of "sense" alone, without "analyze," is meant to signal pattern recognition, grasping, and selection rather than rational inquiry, because Cynefin assumes that rational inquiry and analysis are of no help in arriving at patterns, and also that patterns are of no help in generating expectations and predictions.

Thus, in complex unorder we "probe" and wait for patterns to appear that we can recognize and respond to, and in chaotic unorder we "act" and then do the same thing. The difference between "probing" in complexity and "acting" in chaos is not logically sharp. Both are forms of acting, but "probing" has the connotation of a more experimental, tentative, information-seeking form of activity, while "acting" in chaos has the connotation of an intention to leave the chaotic context in favor of the known or complex context.

However, does it really make sense to suggest that "probing" in complexity and "acting" in chaos are divorced from "analysis," and that "sensing" and "responding" in these domains also exclude "analysis"? I don't think so, and I also think that the only reason this seems plausible is that the starting point for action is not equalized among the three domains; rather, it is simply assumed that sense-analyze-respond doesn't apply to domains other than the complicated, and it is also assumed that the complex and chaotic sequences don't apply to the complicated domain. But we can also view things in the following way:

(1) There’s no difference between “acting” and “responding,” except for temporal perspective. That is, in any cycle involving “acting-sensing-responding,” “responding” is another word for “acting” in the cycle just following the initial cycle. Thus we could as well be talking about “acting-sensing-acting” as “acting-sensing-responding,” since “acting” is always preceded by “sensing” and “sensing” always follows “acting.”
(2) "Probing" is a type of "acting."
(3) The exception to "acting" – "sensing" – "acting" occurs when "sensing" doesn't make sense and we have to make new knowledge in order to make sense. Then we have "acting" – "sensing" – "analyzing" – "acting."
(4) If we take up the Cynefin process at the point where we've decided we're dealing with a complicated context, we next have to decide what to do, so we go through "sensing" – "analyzing" – "acting" (as Dave says); however, this may be followed by "sensing" – "acting," or by another round of "sensing" – "analyzing" – "acting" if we still can't make sense of the situation.
(5) If we take up the process at the point where we've decided on complexity, then do we really "act" – "sense" – "act," as Dave suggests? That is, can we really follow Dave's advice and act without "sensing," and, if the "sensing" fails to make sense, as will be true in both complexity and chaos, without "analyzing" (inquiring) before we "act"? I think the general answer to this question is no, though, of course, our process of analyzing can be cut very short by the perceived need to "act" in a situational context.
(6) But if, after deciding what type of context we are dealing with, we must sense and analyze before we act in both "complex" and "chaotic" contexts, then we must revise Dave's recommendation to read: sense-analyze-act-sense (perhaps analyze)-act in all domains.
(7) Dave may think the above is in error because one of the main points of distinction between "order" and "unorder" is that "unorder" is not amenable to "analysis" or "rational inquiry," due to the absence of causal order and predictability, so that it makes no sense to add the sense-analyze steps to his sequences for "complexity" and "chaos." However, I've shown earlier that predictability is not completely absent from "unorder," and I have also just emphasized the point that "acting" requires prior "sensing" in the "complex" and "chaotic" domains as well as in the "complicated" domain.

The type and extent of analysis performed in the different domains may differ, but except in the simple domain, once a knowledge gap is recognized, "analysis" will always involve developing new ideas (however lacking in variety) and evaluating them (however arbitrarily and perfunctorily), and it may also involve communicating new knowledge before the focus of the decision maker leaves analyzing and returns to acting. To make this point in the vocabulary of the DEC-KLC-KM framework, one cannot consciously respond to a context with an "act" unless one already knows what to do, and we can only know what to do if the "act" is "routine." But, as is true in the complicated, complex, and chaotic domains, acting is not routine; it always involves sensing a knowledge gap, and therefore a KLC (analysis) is necessary to remove that gap before acting is possible.
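
A toy rendering of the revised sequence may help fix ideas. The following loop is my own hypothetical sketch, with placeholder function names, not a formal part of Cynefin or of the DEC-KLC-KM framework: sensing always precedes acting, and analysis is inserted whenever sensing reveals a knowledge gap.

# Hypothetical sketch: a generic sense-(analyze)-act loop applied uniformly across domains.
def decision_cycle(context, sense, makes_sense, analyze, act, max_rounds=5):
    for _ in range(max_rounds):
        observation = sense(context)
        if not makes_sense(observation):        # knowledge gap detected
            observation = analyze(observation)  # a KLC: make new knowledge before acting
        context = act(context, observation)     # respond, then cycle again
    return context

# Trivial stand-ins just so the loop runs; real implementations are domain-specific.
final = decision_cycle(
    context={"gap": 3},
    sense=lambda ctx: ctx["gap"],
    makes_sense=lambda obs: obs == 0,
    analyze=lambda obs: obs,                      # placeholder for inquiry
    act=lambda ctx, obs: {"gap": max(0, ctx["gap"] - 1)},
)
print(final)  # {'gap': 0}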

Should We Classify the Context First or Is It Better To Address the Question of What The Best Decision Is Given a Particular Situation?

To summarize the above analysis: rational inquiry and predictability, contrary to Cynefin, have their place in decisions in all three domains (complicated, complex, and chaotic), and since they do, highlighting the place of analysis in the complicated domain while denying it a place in the other two is mistaken, and ignores what humans can actually do in complex and chaotic sensemaking/decision making contexts. But if denying analysis its place in the complex and chaotic contexts is a mistake, then the primary reason to approach decision making through prior classification of sensemaking items into five contexts (simple, complicated, complex, chaotic, and disordered) is gone, and we can ask whether it is better to approach decision making the Cynefin way, or whether a more direct approach to selecting the best decision alternative available in a particular context is preferable.

A more direct approach to sensemaking for decision making in contexts with a knowledge gap is to specify decision alternatives and compare them in terms of a cost-benefit evaluation, taking into account expectations about the consequences of the decision alternatives. Such a "rationalistic" and "predictive" approach to decision making is less popular these days than it once was, because research has shown that humans prefer to "first-pattern-match" in decision making, and then proceed by what is, essentially, sequential trial and error if the first pattern doesn't match post-decision experience. I think this second form, often called Recognition Primed Decision Making (RPD), or Naturalistic Decision Making (NDM), is dominant in the simple or "known" domain, where, by the way, we are confident in our expectations. However, I'm suggesting that the decision alternatives approach would apply more frequently to contexts that are not "known," where "first-pattern-matches" often don't work as well as they do in the known domain, and new knowledge needs to be created. Even in these "unknown" domains, however, NDM can still be, and often is, used in preference to rationalistic decision making, so the question is: when should each of these contrasting approaches be used?
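
Here is a minimal sketch of what such a comparison of decision alternatives might look like in code; the alternatives, probabilities, benefits, and costs below are purely hypothetical numbers chosen for illustration.

# Illustrative only: compare decision alternatives by probability-weighted net benefit.
def expected_net_benefit(outcomes):
    """outcomes: list of (probability, benefit, cost) triples for one alternative."""
    return sum(p * (benefit - cost) for p, benefit, cost in outcomes)

alternatives = {
    "pilot_project": [(0.6, 100.0, 30.0), (0.4, 10.0, 30.0)],
    "full_rollout":  [(0.3, 400.0, 200.0), (0.7, 50.0, 200.0)],
    "do_nothing":    [(1.0, 0.0, 0.0)],
}

ranked = sorted(alternatives, key=lambda a: expected_net_benefit(alternatives[a]), reverse=True)
for name in ranked:
    print(f"{name:>14}: expected net benefit = {expected_net_benefit(alternatives[name]):.1f}")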

Now, let’s keep firmly in mind what results from a successful Cynefin classification of the “unknown” domains. Specifically, if the context is complicated we know that we can follow classification with Sense-Analyze-Respond, and we also know that the term “analyze” is meant to include approaches to inquiry such as the decision alternatives approach, as well as first-pattern-match approaches. Moving to the complex domain, however, we follow classification with Probe-Sense-Respond, and it seems clear from Dave’s account that he means to say that the decision alternatives approach ought not to be followed in the complex domain since we cannot predict decision outcomes before the fact, and therefore we ought just to use first-pattern-match after the fact. Similarly, in the chaotic domain, where we follow classification with Act-Sense-Respond, it is again contended that since we cannot predict decision outcomes before the fact, and since we cannot really understand cause-and-effect at all, all we can really do is to first-pattern-match after the fact and respond accordingly.

But, I think I’ve shown in an earlier section that prediction is not entirely impossible in either the complex or chaotic domains and that Cynefin is mistaken in contending that it is. And if my arguments there are correct, I think it follows that the need to classify the unknown domains according to Cynefin, in order to evaluate whether one should use an NDM or a decision alternatives approach in developing the new knowledge we need to make a decision, is gone.

An evaluation of that kind may very well be necessary and appropriate for unknown domains, but as long as some level of prediction, however probabilistic, is possible, a decision alternatives approach cannot be rejected on the grounds that it is impossible to implement. Therefore, classifying contexts into "complicated," "complex," and "chaotic" is not useful in helping us to decide whether we ought to follow an NDM or a decision alternatives approach in inquiry seeking a good solution for a decision problem. On the other hand, it is helpful to classify contexts into "known" and "unknown," since decision making directly applying previous knowledge to new specific conditions, circumstances, and situational contexts is certainly where NDM shines, while the decision alternatives approach is much more important in "unknown" domains, though I think that it does not supplant NDM in these domains, but rather overlays it.

Going further, in evaluating whether one ought to follow an NDM or a decision alternatives approach to making new knowledge in unknown domains, one depends on considerations such as the relative cost of each approach, the resources available to use one or the other, the time frame in which one needs one's best guess, and, of course, the importance of the decision problem being addressed. These parameters may be affected by whether the domain context of a decision is complicated, complex, or chaotic, but there is not enough of a correspondence between these factors and domain context to make it useful to classify a context into one of the unknown domains before one seeks to directly investigate the problem of arriving at a good decision. So, to answer the primary question raised in this section, it is preferable to address the question of what the best decision one can make is, or at least what a good decision is, rather than to try first to answer the question of classification before addressing that question.

Tags: Complexity · Epistemology/Ontology/Value Theory · KM Software Tools · Knowledge Making · Knowledge Management