
This post is about “safe-fail experiments.” The essential idea in safe-fail experiments was expressed well by Dave Snowden in this way: “I can afford them to fail and critically, I plan them so that through that failure I learn more about the terrain through which I wish to travel.”
And again, in another place, he adds:
“One of the main (if not the main) strategies for dealing with a complex system is to create a range of safe-fail experiments or probes that will allow the nature of emergent possibilities to become more visible.”
I like this emphasis on safe-fail experiments: first because of their low risk character, and second, because the emphasis on them is about our “learning from failure,” or, put another way, about our “learning from error.” Learning from error is what Critical Rationalism is all about.
In outlining the way one should use safe-fail experiments Dave offers the following:
“– Before opinions harden you create a very simple decision rule. Everyone with an idea that has even the remotest possibility of being true or useful creates a safe fail experiment based on the idea. Critically this does not have to be one that would prove the issue, just consistent with the position adopted.
— Next each proposal is fleshed out, costed and subject to challenge and review, but nothing is ruled out unless rationing of resource is required. This is rarely the case by the way as you keep the experiments small, designed for fast feedback/evolution.
— For each experiment to be valid its outcome must be observable, not to measure necessarily but to allow the simple rule of amplification or dampening of good or bad patterns to be put into operation. There is no point in an experiment where you can not observe what is happening.
— The experiments are then reviewed for common elements and resourced along with set up of monitoring and review processes.”
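The amplify-or-dampen rule in the outline above can be sketched as a simple loop. This is a minimal illustration, not Dave's method: the probe names, the coin-flip stand-in for observation, and the decision labels are all hypothetical.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def run_probe(name):
    """Stand-in for observing a probe's outcome; here just a coin flip."""
    return random.random() < 0.5  # True = a pattern we want more of

def review(probe_names):
    """Apply the simple decision rule: amplify good patterns, dampen bad ones."""
    return {name: ("amplify" if run_probe(name) else "dampen")
            for name in probe_names}

print(review(["idea-A", "idea-B", "idea-C"]))
```

The point of the sketch is only that the rule requires an observable outcome per probe; with no observation, `run_probe` has nothing to return and the rule cannot operate.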
I find some of the wording used in this outline of great interest. First, Dave views ideas as “true” or “useful.” While it’s not clear what “true” or “useful” mean in this context, it is clear that the kind of ideas Dave is talking about can be false. Since the experiments in question don’t have to prove an idea, but do have to be consistent with it, and also have to provide us with something we can learn, doesn’t it follow that what we can learn from the experiments is whether or not the knowledge claims or ideas underlying them are false? And, in turn, doesn’t that mean that safe-fail experiments are about testing the ideas underlying them?
But if this is true then why are these experiments viewed by Dave as mere “probes”? Why shouldn’t they be viewed as tests of the new ideas (conjectures) of their formulators? Moreover, in requiring that experimental outcomes be observable, isn’t Dave completing a pattern of activity that Popper specified for science? Specifically, isn’t he recommending that participants in Cynefin applications develop conjectures, to be refuted, if possible, by safe-fail experiments yielding observational outcomes, as his method for learning what direction to take in acting in complex (and sometimes also in chaotic) domains?
Exchange on Safe-Fail Experiments
Raymond Salzwedel at the narrative lab has also offered some ideas on safe-fail experiments in the form of 9 principles. And Dave Snowden has commented on these principles. Below I’ll add my comments to the views of Raymond and Dave (sourced from Dave’s blog).
Raymond: “Don’t be afraid to experiment – some will fail.”
Dave: “I would go further than this and say that experiments should be designed with failure in mind. We often learn more from failure than success anyway, and this is research as well as intervention. We want to create an environment in which success is not privileged over failure in the early stages of dealing with complex issues.”
Joe: I strongly agree with this comment of Dave’s. This is one of the central emphases of Critical Rationalism. We ought to try our best to design the experiment to test the idea underlying it as severely as possible. The game is about making it fail if we can. That way if it doesn’t fail, we’ll really be able to say that it may be true.
Raymond: “Every experiment will be different – don’t use the cookie-cutter approach when designing interventions.”
Dave: “Yes and no. You might want the same experiment run by different people or in different areas. Cookie-cutter approaches tend to be larger scale than true safe-fail probes so this may or may not be appropriate.”
Joe: There’s value in repeating safe-fail experiments to ensure that the results weren’t a fluke or due to accident. However, there’s a problem here. One can’t claim that complex contexts are different from others in the sense that their outcomes are never repeated and also claim that one can repeat the same experiment at a later time and still get the same result. So, I think that one of these two views will have to give way, and if we think that safe-fail experiments are possible, useful, and repeatable, then we will have to grant that, at least in the respects essential to the safe-fail experiments, the complex context is unchanged between the time we do the first experiment and its replication.
Raymond: “Don’t learn the same lesson twice – or maybe I should say, don’t make the same mistake twice.”
Dave: Disagree, you can never be sure of the influence of context. Often an experiment which failed in the past may now succeed. Your competitor may well learn how to make your failures work. Obviously you don’t want to be stupid here, but many a good initiative has been destroyed by the “We have tried that” argument.
Joe: I couldn’t agree more with Dave’s sentiment here, and with the view that the “We have tried that” argument is often way off-base; I’ll also add that it is frequently both superficial and motivated by political considerations of one kind or another. However, I also have to say that, even though I agree with Dave that one can never be sure of the influence of context, I agree with Raymond’s admonition about not making the same mistake twice, since I think that his admonition assumes that all essential conditions, including context, are the same in both cases. But finally, having said the above, I’m also acutely aware of another possible problem here.
What if the relationships in complex domains between experimental treatments and outcomes are relationships of propensity expressible as probabilities? In that case, we couldn’t conclude that an idea was refuted based on the results of one or a few safe-fail experiments. If this were the case, we’d have to make the same mistake a number of times before we were sure it was a mistake.
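A little binomial arithmetic makes this concrete. The 70% success propensity below is an assumed, illustrative figure, not anything from Dave's or Raymond's accounts.

```python
from math import comb

def prob_failures(p_success, n, k):
    """Probability of exactly k failures in n independent trials,
    given a success propensity p_success (binomial model)."""
    q = 1 - p_success
    return comb(n, k) * (q ** k) * (p_success ** (n - k))

# An idea that genuinely succeeds 70% of the time still fails a single
# trial 30% of the time...
print(round(prob_failures(0.7, 1, 1), 2))  # 0.3
# ...but fails all five of five independent trials far more rarely:
print(round(prob_failures(0.7, 5, 5), 5))  # 0.00243
```

On a propensity reading, then, one failed trial is weak evidence of refutation, while a run of failures is much stronger, which is exactly the worry about "making the same mistake a number of times."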
Raymond: Start with a low-risk area when you begin to experiment with a system.
Dave: Again yes and no. If you are talking about the whole system yes, but normally complex issues are immediate and failure is high risk. The experiment is low risk (the nature of safe-fail is such that you can afford to fail) but the problem area may well be high risk. In my experience complexity based strategies work much better in these situations.
Joe: I agree with Dave on this point.
Raymond: Design an experiment that can be measured. That is, know what the success and failure indicators of each experiment are.
Dave: Change “measure” to “monitor” and I can agree with it. The second sentence I would delete.
Joe: I agree with Dave about changing “measure” to “monitor.” However, I’m not sure about the second sentence. That is, it seems to me that the experimental design should be quite clear about the observational outcomes that would constitute a refutation of the idea underlying a safe-fail experiment.
Raymond: Try doing multiple experiments on the same system – even at the same time. Some will work, some will fail – good. Shut down the ones that fail and create variations on the ones that work.
Raymond: Introduce dissent. Maximize diversity in the experiment design process by getting as many inputs as possible.
Dave: In the main agree, but see the above process. I generally don’t like the failure and success words as they seem inappropriate to probes.
Joe: I agree with Dave’s remark about the failure and success words, provided these words aren’t tied specifically to the idea of failure or survival of the underlying ideas. But, if they are, then I have no problem with the words. Going further, however, I think there is a problem with parallel safe-fail experiments because it may be difficult to associate the results of a particular experiment with that experiment, rather than with some portion of or the entire configuration of ongoing experiments. Of course, depending on the details, there may be no possibility of confounding the effects of one experiment with that of another. But if the interaction clusters within the complex context are highly interdependent, then simultaneous experiments on two or more of them will confound one another’s effects, and the results of each individual experiment won’t be attributable to that experiment.
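The confounding worry can be shown with a toy model. Everything here is hypothetical: the base level, the effect sizes, and the interaction term merely stand in for two interdependent interaction clusters being probed at once.

```python
def outcome(a_on, b_on):
    """Hypothetical system response in which probes A and B interact."""
    base = 10.0
    effect_a = 2.0 if a_on else 0.0
    effect_b = 3.0 if b_on else 0.0
    interaction = -4.0 if (a_on and b_on) else 0.0  # interdependence term
    return base + effect_a + effect_b + interaction

# Each probe looks beneficial when run in isolation...
print(outcome(True, False) - outcome(False, False))  # 2.0
print(outcome(False, True) - outcome(False, False))  # 3.0
# ...but run simultaneously the joint effect (1.0) is not the sum (5.0),
# so neither probe's contribution can be read off the combined result.
print(outcome(True, True) - outcome(False, False))   # 1.0
```

When the interaction term is zero the effects add cleanly and parallel probes are unproblematic; it is precisely the interdependent case that confounds attribution.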
Raymond: Learn from the results of other people’s experiments.
Dave: Yep, but remember your context is different
Joe: Agree
Raymond: Teach other people the results of your experiments.
Dave: Yep, but remember your context is different
Joe: Of course. This is just knowledge integration.
Safe-Fail Experiments for Unknown Domains
Dave and Raymond emphasize safe-fail experiments as appropriate for use in complex contexts, or in certain chaotic contexts where there is time for safe-fail experiments. However, it’s hard to see why safe-fail experiments can’t be applied to “complicated” contexts, and to others I’ve identified in a previous blog. The general principle here is that safe-fail experiments are a way of testing ideas and also acquiring new information that can help in developing more new ideas. These things are needed in any context where a knowledge gap exists. Of course, in many complicated contexts, laboratory experiments may be used to test new ideas. However, laboratory experiments are clearly a type of safe-fail experiment, in that we can afford to have them “fail.”
Conclusion
Finally, safe-fail experiments are viewed as “probes” or “acts” in Cynefin. Since everything we do with conscious intention is an “act” of ours, we can certainly characterize them this way. However, safe-fail experiments are not “acts” that we carry out to have a specific operational impact as part of organizational routine. Instead, they are tests of ideas formulated as knowledge claims. So, rather than being part of routine organizational decision making and learning, they are an aspect of the creative learning process, which I have elsewhere called the Knowledge Life Cycle or Double-Loop Learning.
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management
May 29th, 2008 · On Cynefin as a Sensemaking Framework: Part Three

There are three interesting questions we’d like to take up in this part.
— First, assuming that the approach taken by Cynefin, which requires sensemaking through first selecting the context type one is dealing with, is appropriate: is the Cynefin framework complete enough as it stands, or does it fail to identify important types of decision making contexts?
— Second, what is the place of causality, predictability, sensemaking, and rationality across the Cynefin domains?
— Third, is it a good idea to separate the sensemaking process into the steps of classification of contexts, and the rest of sensemaking, or is it better to address the more specific question of the best decision given a particular situational context that may or may not fall into one of the Cynefin contexts?
Other Context Types
The Cynefin framework distinguishes only four primary sensemaking contexts as relevant to decision making. It doesn’t claim that these are the only important ones, but nevertheless the question of whether they are is unavoidable. I think there are at least two more such contexts that spring quickly to mind. First, there are randomness/chance contexts, and second, there are complex, yet relatively predictable contexts.
Sometimes we need to decide what to do in contexts where we make decisions in relation to random or close to random interactions or data. Component behavior arising from interactions characterized by chance or randomness is indeterministic, patternless, and unpredictable. However, statistical aggregates of such behavior form patterns we can understand. In fact, it can be said of random processes that though they are unpredictable and highly uncertain in the short run, they are highly stable and predictable in the long run. Processes such as dice throwing, card shuffling, and roulette wheel spinning can be the basis of professional gambling and of businesses that profit from gambling just because such stability exists.
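The short-run/long-run contrast is easy to demonstrate with a quick simulation of dice throwing; the seed and the run lengths are arbitrary choices for the illustration.

```python
import random

random.seed(0)

def mean_of_rolls(n):
    """Average of n fair six-sided die rolls (stand-in for any chance process)."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# Short runs swing widely; long runs settle near the expected value of 3.5.
for n in (10, 100, 100_000):
    print(n, round(mean_of_rolls(n), 3))
```

Individual rolls stay unpredictable throughout; it is only the aggregate that stabilizes, which is the stability that casinos and insurers depend on.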
Historically, knowledge about chance processes was sought and used by those who wanted to improve the rewards and minimize the losses they experienced in gambling. But other applications of such knowledge are very well-known and widespread. Of course, life and other forms of insurance are based on the analysis of randomness, of odds and proportions. And there is enough long-run stability in human populations to support such businesses. In addition, business processes, including manufacturing processes, have variable outputs. Statistical quality control is based on the idea that variation can be held within certain limits by finding and manipulating the causes of such variability. By applying cause and effect knowledge appropriately, statistical control seeks to minimize variability in outcomes while retaining a stable chance process of output variability. Of course, these days, every manufacturing company uses statistical quality control, so there are certainly many decision making contexts in business where we deal with chance and randomness and attempt to limit its scope. Survey research, polling, political campaigning, population forecasting, policy analysis, time-series analysis, and epidemiology are a small sample of fields in which statistical models analyzing random error and providing a basis for its control inform the context of decision making. The decision making may involve cause and effect, where the target of action is a parameter affecting the variability of a statistical process. But sometimes decision making will involve action that doesn’t affect a statistical process, but instead tries to operate within it and to benefit from the fact that others acting in the process are less knowledgeable about chance variations than oneself.
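The core idea of statistical quality control, holding variation within limits, can be sketched as a basic Shewhart-style control rule. The width data and the two-sigma limit below are illustrative only, not drawn from any real process.

```python
from statistics import mean, stdev

def out_of_control(samples, k=3):
    """Indices of points outside mean ± k standard deviations,
    a basic Shewhart-style control-chart rule."""
    m, s = mean(samples), stdev(samples)
    lo, hi = m - k * s, m + k * s
    return [i for i, x in enumerate(samples) if not (lo <= x <= hi)]

# Illustrative part widths; the last measurement is a clear outlier.
widths = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 14.0]
print(out_of_control(widths, k=2))  # [7]
```

Points flagged by the rule prompt a search for an assignable cause; points inside the limits are treated as ordinary chance variation, which is exactly the "stable chance process" the paragraph describes.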
All this is by way of posing the question of whether chance contexts should be included within Cynefin. It seems to me an obvious oversight that they are not included. Perhaps Dave meant to classify them as complicated systems since there are rules and cause and effect relations involved in controlling processes with a statistical component. However, the aspect of the processes being controlled is random in character, and our understanding of process variability is in terms of probability models. These models are different from ordinary causal models, even complicated ones, and the differences involved, and the impossibility of predicting and controlling individual outcomes of decision making, argue for the recognition of an alternative context – specifically a “randomness/chance” context.
Complex, More Predictable
Dave Snowden has said in a number of contexts that human complex systems are different from others. Here’s a quote from the HBR article (p. 3):
“More recently, some thinkers and practitioners have started to argue that human complex systems are very different from those in nature and cannot be modeled in the same ways because of human unpredictability and intellect. Consider the following ways in which humans are distinct from other animals:
— They have multiple identities and can fluidly switch between them without conscious thought. (For example, a person can be a respected member of the community as well as a terrorist.)
— They make decisions based on past patterns of success and failure, rather than on logical, definable rules.
— They can, in certain circumstances, purposefully change the systems in which they operate to equilibrium states (think of a Six Sigma project) in order to create predictable outcomes.”
I agree that human complexity is different from that found in nature. Ant hills, for example, work without the aid of explicit central controls. In contrast, human systems combine emergent self-organization with the efforts of system agents to direct and control their systems in accordance with their own intentions. I call the second type, human complexity, Promethean Complexity because it frequently tries to impose non-complex order on tendencies toward self-organization. Complexity science, at present, is mostly based on research about Natural Complexity (N-Complexity), and not on research focused on Promethean Complexity (P-Complexity), and we are in a phase now where we are trying to apply constructs, knowledge, and methods developed for N-Complexity to P-Complexity. It is likely that such an effort will succeed only partly, and that we will have to broaden complexity theories and approaches to be successful with P-Complexity.
I propose that we distinguish N-Complexity from P-Complexity contexts, because I think that outcomes in N-Complexity contexts are likely to be more predictable than outcomes in P-Complexity contexts. Dave’s framework seems oriented to P-Complexity, so perhaps we should define another type of context for more predictable complexity. The addition of randomness/chance and more predictable complex contexts would leave Cynefin with six primary contexts.
Causality, Predictability, Sensemaking, and Rationality Across the Domains
Earlier, I pointed out that the Cynefin classification of contexts into one of the four primary domains involves a KLC (Knowledge Life Cycle). The problem in the Cynefin KLC is the question of which domain the context falls into. The first step in many Cynefin projects is information acquisition and individual and group learning aggregation from sensemaking items. The first step then provides a basis for a second step of knowledge claims about classification, which is then followed by exchanges among the sensemakers evaluating such claims and developing a consensus. Thus, Knowledge Claim Evaluation in Cynefin is communitarian, though every effort is made to partial out the influences of organizational hierarchy on the process of arriving at consensus. Communication of the results of a Cynefin project provides the knowledge integration aspect of the KLC.
I made the point earlier that classification into the simple/known domain doesn’t require the step of using sensemaking items to separate the known from other domains. That is, you know you have a problem when your previous knowledge about the effects of your decisions either seems to be or proves wrong or unreliable. When you see that, you know immediately that you’re not in the known domain, and you also know that you don’t know what to do to get where you want to go. Given such a judgment, the problem presented to a decision maker is that what they ought to do is unknown, and they have to “make sense” of that problem. Now part of solving that problem may be successful classification of the decision context into one of the other domains. But that classification task is secondary compared to the primary problem of deciding on the best action we can take to close the instrumental behavior gap.
In a sense, what Cynefin seems to be implying, or suggesting, or inferring (take your choice of the appropriate term here) is that to arrive at that best decision, or at least to arrive at a good one, it is better to begin by classifying the context as complicated, complex, or chaotic, and, having done that, to follow the further recommendations of Cynefin to Sense-Analyze-Respond, Probe-Sense-Respond, or Act-Sense-Respond, as the case may be, than it is to directly address the problem of what the best decision is in the context of the situation.
Cynefin seems to justify this by pointing to our relative inability in the unordered contexts (complexity and chaos), to understand cause and effect and to predict the consequences of our actions before a decision is made. We cannot hope to predict decision outcomes before we act since, in the unordered domains, we cannot understand cause and effect before we act, and in the chaotic domain we cannot understand cause and effect at all. Thus, we must, in the unordered domains, rely on trial and error to reveal patterns that we can understand, and use to inform our decisions, and eventually to drive change toward satisfactory outcomes. We must know which domain we’re in before we consider the problem of what a good decision is, because the only domain where we can know enough about cause and effect to be able to predict outcomes and so know what the best decision is, is the complicated domain. In the other two domains, due to the impossibility of knowing cause and effect relations before the fact, finding a good or the best decision through prediction is also impossible, so we have to content ourselves with recognizing patterns, pursuing trial and error, running parallel safe-fail experiments, and reinforcing favorable patterns that result from our actions.
If this reconstruction of what Cynefin is saying to us is correct, the argument is very plausible, but nevertheless, I think, for a number of reasons, that it is wrong in saying that we cannot predict in the unordered domains. First, it is inherent in decision making to assume that voluntary acts we undertake either provide immediate gratification, or have an instrumental purpose. Looking at acts that have an instrumental purpose, we would not decide to undertake them at all if we believed that they did not do something to bring about our purposes and goals. So if we select one decision rather than another, and we do not do so using a random or chance rule, then we do so because we expect the decision we select to have a more favorable outcome than the other decision alternatives available to us. Now, this very general consideration is not restricted to any specific Cynefin contexts, but it suggests that all contexts involving instrumental decisions require us to have expectations about decision outcomes before we act and that these expectations can be true or false.
However, an expectation that a decision will bring about a result, is not the same as a simple cause and effect rule. It may be more complex than that and may suggest only that something we do will create a greater propensity for an aspect of reality to change in a direction we favor. But, there isn’t much doubt that such an expectation is a prediction about the future, given the contextual conditions of the decision. So, however difficult it may be to predict the future in complex and chaotic contexts, our decisions in those contexts do imply expectations and predictions. Perhaps we cannot predict the future in these, as well as we can in simple or complicated contexts. Nevertheless, when we make decisions we must have expectations and make predictions, and this means that acting in complex and chaotic contexts must involve such predictions, whether they are likely to be correct or not.
Second, I have pointed out above that Promethean Complexity is likely to differ from Natural Complexity in its level of unpredictability. This suggests that “unpredictability” is not the whole story where complexity is concerned. Instead, Promethean Complexity can be more unpredictable than Natural Complexity, or what is the same thing, Natural Complexity can be more predictable than Promethean Complexity.
My third reason, related to my second one, is that the dichotomy predictable vs. unpredictable inaccurately characterizes all the contexts. All are predictable or unpredictable to some degree. The ordered domains may be more predictable, but the unordered domains offer some predictability as well. This point is actually supported in Dave’s own account of Cynefin’s unordered domains. That is, he indicates that probes involving safe-fail experiments in the complex domain may produce results we want to reinforce. But that is an expectation (a prediction) that such experiments will work in this way. And if safe-fail experiments do produce patterns we like, then when we set out to reinforce them, don’t we have expectations that certain actions will reinforce them, and aren’t these also predictions?
That the answer to this question is yes is suggested by the view that, in the unordered domains, we must rely on trial and error to reveal patterns that we can understand and use to inform our decisions. For how can patterns and their understanding be useful to us in informing our decisions? I suggest they can only be useful if understanding them helps us to form expectations about the future, about the consequences our decisions are likely to have. That is, patterns providing an understanding of the decision context also provide an understanding of the constraints that apply to both ourselves and our contexts. And these constraints tell us something about what kinds of future we can expect if we do “X,” “Y,” or “Z.” I’m not suggesting that the patterns will provide prediction rules, or laws as in the known domain, but they allow and support us in formulating ideas about whether particular decisions are more likely to have better results than alternatives.
In the chaotic domain, we may not be able to predict with any accuracy how what we do will affect the chaotic attractor while maintaining a chaotic context, but nevertheless, as Dave recommends, we can take action or do safe-fail experiments to try to learn how to change the context from a chaotic one to one that is known or one that is complex. Again, in doing those experiments or taking those actions, we will have expectations about how the experiments will work and the outcomes they will provide. Some actions will clearly destroy the chaotic regimes and create order where there was none before. Others will reduce turbulence enough so that chaos will be able to spawn self-organization and complexity. When we make decisions leading to such actions, again, we have expectations, and make predictions, even though these are not as reliable as the predictions we make in the ordered or complicated domains.
If some predictability in the unordered domains of Cynefin exists, then that bears strongly on the issue of whether we should classify first and then follow the actions recommended for those domains, or whether we should directly address what the best decision is in a particular situation. For the meaning of the Cynefin distinctions among sense-analyze-respond, probe-sense-respond, and act-sense-respond action sequences is this: in the first sequence, rational inquiry is recommended (“analyze”), because it can yield tested rules that will allow us to predict the consequences of our decision; in the last two sequences, the appearance of “sense” alone without “analyze” is meant to signal “pattern recognition,” grasping, and selection, rather than rational inquiry, because Cynefin assumes both that rational inquiry and analysis are of no help in arriving at patterns and that patterns are no help in generating expectations and predictions.
Thus, in complex unorder we “probe” and wait for patterns to appear that we can recognize and respond to, and in chaotic unorder we “act” and then do the same thing. The difference between “probing” in complexity and “acting” in “chaos” is not logically sharp. Both are forms of acting, but “probing,” has the connotation of a more experimental, tentative information seeking form of activity, while “acting” in chaos, has the connotation of an intention to leave the chaotic context in favor of the known or complex context.
However, does it really make sense to suggest that “probing” in complexity and “acting” in chaos are divorced from “analysis,” and that “sensing” and “responding” in these domains also exclude “analysis”? I don’t think so, and I also think that the only reason this seems plausible is that the starting point for action is not equalized among the three domains; rather, it is just assumed that sense-analyze-respond doesn’t apply to domains other than the complicated, and it is also assumed that the complex and chaotic sequences don’t apply to the complicated domain. But we can also view things in the following way:
(1) There’s no difference between “acting” and “responding,” except for temporal perspective. That is, in any cycle involving “acting-sensing-responding,” “responding” is another word for “acting” in the cycle just following the initial cycle. Thus we could as well be talking about “acting-sensing-acting” as “acting-sensing-responding,” since “acting” is always preceded by “sensing” and “sensing” always follows “acting.”
(2) “Probing” is a type of “acting.”
(3) The exception to “acting” – “sensing” – “acting” occurs when “sensing” doesn’t make sense and we have to make new knowledge in order to make sense. Then we have “acting” – “sensing” – “analyzing” – “acting.”
(4) If we take up the Cynefin Process at the point where we’ve decided we’re dealing with a complicated context, we next have to decide what to do, so we go through “sensing” – “analyzing” – “acting” (as Dave says). However, this may be followed by “sensing” – “acting,” or by another round of “sensing” – “analyzing” – “acting” if we still can’t make sense of the situation.
(5) If we take up the Process at the point where we’ve decided on complexity, then do we really “act – sense – act” as Dave suggests? That is, can we really follow Dave’s advice and act without “sensing,” and, if the “sensing” fails to make sense, as will be true in both complexity and chaos, without “analyzing” (inquiring), before we “act”? I think the general answer to this question is no, though, of course, our process of analyzing can be cut very short by the perceived need to “act” in a situational context.
(7) Dave may think the above is in error because one of the main points of distinction between “order” and “unorder” is that “unorder” is not amenable to “analysis” or “rational inquiry,” due to the absence of causal order and predictability, so that it makes no sense to add the sense-analyze steps to his sequences for “complexity” and “chaos.” However, I’ve shown earlier that predictability is not completely absent from “unorder,” and have also just emphasized the point that “acting” requires prior “sensing,” in the “complex” and “chaotic” domains as well as in the “complicated” domain.
The type and extent of analysis performed in the different domains may differ, but except in the simple domain, once a knowledge gap is recognized, “analysis” will always involve developing new ideas (however lacking in variety) and evaluating them (however arbitrarily and perfunctorily), and it may also involve communicating new knowledge before the focus of the decision maker leaves analyzing and returns to acting. To make this point in the vocabulary of the DEC-KLC-KM framework, one cannot consciously respond to a context with an “act” unless one already knows what to do, and we can only know what to do if the “act” is “routine.” But, as is true in the complicated, complex, and chaotic domains, acting is not routine; it always involves sensing a knowledge gap, and therefore a KLC (analysis) is necessary to remove that gap before acting is possible.
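The revised sequence argued for in (5) and (6) can be sketched as a loop. The functions and the KNOWN_CONTEXTS set below are hypothetical stand-ins for sensing, the KLC, and acting; the point is only the control flow, not any real implementation.

```python
KNOWN_CONTEXTS = {"routine"}  # hypothetical store of what we already know

def sense(context):
    """Stand-in: does our existing knowledge make sense of this context?"""
    return context in KNOWN_CONTEXTS

def analyze(context):
    """Stand-in for a KLC: make the new knowledge that closes the gap."""
    KNOWN_CONTEXTS.add(context)

def act(context):
    """Stand-in for the act/response itself."""
    return f"acted on {context}"

def sense_analyze_act(context):
    if not sense(context):  # a knowledge gap is detected
        analyze(context)    # new knowledge must be made before acting
    return act(context)

print(sense_analyze_act("routine"))  # no gap: sense, then act
print(sense_analyze_act("novel"))    # gap: sense, analyze, then act
```

The analyze step may be cut very short under time pressure, as in (5), but it is never skipped once sensing fails to make sense; acting without it would only happen on the "routine" branch.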
Should We Classify the Context First or Is It Better To Address the Question of What The Best Decision Is Given a Particular Situation?
To summarize the above analysis, rational inquiry and predictability, contrary to Cynefin, have their place in decisions about all three — complicated, complex, and chaotic — domains, and since they do, highlighting the place of analysis in the complicated domain, while denying it a place in the other two domains, is mistaken, and ignores what humans can actually do in complex and chaotic sensemaking/decision making contexts. But if to deny analysis its place in the complex and chaotic contexts is a mistake, then the primary reason to approach decision making through prior classification of sensemaking items into five contexts (simple, complicated, complex, chaotic, and disordered) is gone, and we can ask whether it is better to approach decision making the Cynefin way, or whether a more direct approach to selecting the best decision alternative available in a particular context is preferable.
A more direct approach to sensemaking for decision making in contexts with a knowledge gap is to specify decision alternatives and compare them in terms of a cost-benefit evaluation, taking into account expectations about the consequences of the decision alternatives. Such a “rationalistic” and “predictive” approach to decision making is less popular these days than it once was, because research has shown that humans prefer to “first-pattern-match” in decision making, and then proceed by what is, essentially, sequential trial and error if the first pattern doesn’t match post-decision experience. I think this second form, often called Recognition Primed Decision Making (RPD), or Naturalistic Decision Making (NDM), is dominant in the simple or “known” domain, where, by the way, we are confident in our expectations. However, I’m suggesting that the decision alternatives approach would apply more frequently to contexts that are not “known,” where “first-pattern-matches” often don’t work as well as they do in the known domain, and new knowledge needs to be created. Even in these “unknown” domains, however, NDM can still be and often is used in preference to rationalistic decision making, so the question is when should each of these contrasting approaches be used?
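To make the contrast concrete, here is a minimal Python sketch of what a “decision alternatives” evaluation might look like: each alternative carries a cost and probabilistic expectations about its outcomes, and alternatives are ranked by expected net benefit. The alternatives, numbers, and scoring scheme are hypothetical illustrations of my own, not part of Cynefin or any published decision framework.

```python
def expected_net_benefit(alternative):
    """Expected benefit over possible outcomes, minus the alternative's cost."""
    expected = sum(p * benefit for p, benefit in alternative["outcomes"])
    return expected - alternative["cost"]

# Hypothetical decision alternatives; "outcomes" pairs are
# (probability, benefit) expectations, which need only be probabilistic.
alternatives = [
    {"name": "pilot program", "cost": 10, "outcomes": [(0.6, 40), (0.4, 0)]},
    {"name": "full rollout", "cost": 50, "outcomes": [(0.3, 120), (0.7, 10)]},
    {"name": "do nothing", "cost": 0, "outcomes": [(1.0, 5)]},
]

# Rank the alternatives by expected net benefit: no deterministic
# prediction of outcomes is required, only expectations about them.
ranked = sorted(alternatives, key=expected_net_benefit, reverse=True)
for alt in ranked:
    print(alt["name"], expected_net_benefit(alt))
```

The point of the sketch is only that such a comparison remains implementable as long as some probabilistic expectation about outcomes is available, which bears directly on the argument that follows.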
Now, let’s keep firmly in mind what results from a successful Cynefin classification of the “unknown” domains. Specifically, if the context is complicated we know that we can follow classification with Sense-Analyze-Respond, and we also know that the term “analyze” is meant to include approaches to inquiry such as the decision alternatives approach, as well as first-pattern-match approaches. Moving to the complex domain, however, we follow classification with Probe-Sense-Respond, and it seems clear from Dave’s account that he means to say that the decision alternatives approach ought not to be followed in the complex domain since we cannot predict decision outcomes before the fact, and therefore we ought just to use first-pattern-match after the fact. Similarly, in the chaotic domain, where we follow classification with Act-Sense-Respond, it is again contended that since we cannot predict decision outcomes before the fact, and since we cannot really understand cause-and-effect at all, all we can really do is to first-pattern-match after the fact and respond accordingly.
But, I think I’ve shown in an earlier section that prediction is not entirely impossible in either the complex or chaotic domains and that Cynefin is mistaken in contending that it is. And if my arguments there are correct, I think it follows that the need to classify the unknown domains according to Cynefin, in order to evaluate whether one should use an NDM or a decision alternatives approach in developing the new knowledge we need to make a decision, is gone.
An evaluation of that kind may very well be necessary and appropriate for unknown domains, but as long as some level of prediction, however probabilistic, is possible, a decision alternatives approach cannot be rejected on grounds that it is impossible to implement. Therefore, classifying contexts into “complicated,” “complex,” and “chaotic” is not useful in helping us to decide whether we ought to follow an NDM or decision alternatives approach in inquiry seeking a good solution for a decision problem. On the other hand, it is helpful to classify contexts into “known” and “unknown,” since decision making directly applying previous knowledge to new specific conditions, circumstances, and situational contexts is certainly where NDM shines, while the decision alternatives approach is much more important in “unknown” domains, though I think that it does not supplant NDM in these domains, but rather, overlays it.
Going further, in evaluating whether one ought to follow an NDM, or a decision alternatives approach to making new knowledge in unknown domains, one is dependent on considerations such as the relative cost of each approach, the resources available to use one or another approach, the time frame in which one needs one’s best guess, and, of course, the importance of the decision problem being addressed. These parameters may be affected by whether the domain context of a decision is complicated, complex, or chaotic, but there is not enough of a correspondence between these factors and domain context to make it useful to classify a context into one of the unknown domains before one seeks to directly investigate the problem of arriving at a good decision. So, to answer the primary question raised in this section, it is preferable to address the question of what is the best decision one can make, or, at least, what is a good decision, rather than to try first to answer the question of classification, before addressing that question.
Tags: Complexity · Epistemology/Ontology/Value Theory · KM Software Tools · Knowledge Making · Knowledge Management
May 29th, 2008 · Comments Off on On Cynefin as a Sensemaking Framework: Part Two

It’s now time to review Dave’s characterizations of the three remaining contexts and to comment on them. Again using the HBR article as the primary source for my discussion, the “complicated” domain is characterized as follows.
Complicated
— Expert diagnosis required
— Cause-and-effect relationships discoverable but not immediately apparent to everyone;
— More than one right answer possible
— Known unknowns
— Fact-based management
— Sense-Analyze-Respond
This is the second domain in Dave’s global category of “order.” It is the domain of experts. Cause and effect relating our decisions to anticipated outcomes still applies, and once analysis is completed, accurate predictions based on causal and other predictive rules are possible. But the problems in this domain require effort, and often experts, to solve, and the context of decision making relates to influencing or controlling a complicated construct in which multiple and “knowable” causes and effects are at work. Such systems may involve feedback and cybernetics, in addition to simpler causal relationships. Systems analysis can be used here, but the context involved is not one of emergence. Finally, the mode of action, once the context is deemed to be “complicated,” is Sense-Analyze-Respond.
Looking at complicated contexts from the viewpoint of the DEC-KLC-KM framework, Dave’s characterization raises a number of questions. First, the context seems to be limited to knowledge gaps related to complicated situations where some of the cause and effect patterns are unknown. On the other hand, the “simple” context contained only known “simple” cause and effect relationships, which raises a question about which of the contexts deals with simple cause and effect relations that are “knowable,” but as yet unknown. Dave’s exposition implies that such “known unknowns” are not in the complicated context, because they are too simple; and, according to Cynefin, they’re not in the “simple” context, either, because, once again, they are “unknown,” and “knowable,” “ordered” relationships.
The second question relates to the meaning of “analyze” in the recommended Sense-Analyze-Respond sequence. Earlier, I identified “sense” with monitoring and evaluating in the DEC, and “responding” with planning and acting. However, what does “analyze” correspond to? I think that at the level of a collective it can be identified with the Knowledge Life Cycle itself. In other words, “analyze” summarizes “Acquiring External Information/and farming the results of previous individual and group learning in one’s organization-creating new ideas-eliminating errors in those ideas-integrating ideas.” In short, from the viewpoint of the DEC-KLC-KM framework, “analyze” in the complicated context refers to implementing KLCs to make the unknowns knowns.
Looking at these two questions together, and keeping in mind that even simple cause and effect relationships, if unknown, may well require the efforts of experts to make them known, one resolution of the question of where unknown “simple” cause-and-effect relationships belong in the framework is that they belong in the “complicated” category, not because they’re complicated but because they’re unknown but “knowable” cause-and-effect relationships. And that, in turn, suggests that this category is perhaps more fundamentally about creating new knowledge about causally ordered relationships than it is about “complicated” contexts.
Moreover, the above problem suggests that an analogous, though reverse, problem exists with Dave’s “simple” contexts as well. That is, in the Cynefin “simple” context, no “known” complicated causally ordered systems are included. Since these are also not included in the complicated context, where are they in Cynefin? This problem can be solved by making the Cynefin “simple” context about “known” cause-and-effect and rules-based ordered relationships of whatever degree of complication. In short, the Cynefin framework would be clearer if the “simple” domain were the “known” ordered domain, and the complicated domain were the “unknown” but “knowable” ordered domain.
In earlier versions of Cynefin, the framework did describe the “simple” context as the “known” domain, and the “complicated” context as the domain of the “knowable.” But even in these versions, it was still true that the “known” domain contained no “complicated” webs of relationships, and the “knowable” domain contained no simple relationships, so the shift to “known” and “knowable” just proposed would still represent a change from Dave’s earlier Cynefin constructs.
The Unordered Contexts
In addition to the two “ordered” contexts, the Cynefin framework also specifies two “unordered” contexts, the “complex” and “chaotic” contexts. These contexts are “unordered” in the sense that “there is no immediately apparent relationship between cause and effect, and the way forward is determined based on emerging patterns.” If this is to serve as a viable distinction between the “ordered” and “unordered” contexts, we must take it to mean that even after investigation by experts, cause and effect relationships between our decisions and actions and outcomes, that will withstand our evaluations, can’t be formulated; because if they could, we would be dealing with a complicated and not a complex decision making context. The primary characteristics of “complex” contexts according to Cynefin follow.
Complex
— Flux and unpredictability
— No right answers;
— emergent instructive patterns
— Unknown unknowns
— Many competing ideas
— A need for creative and innovative approaches
— Pattern-based leadership
— Probe-Sense-Respond
I think there are a number of difficulties with this characterization of complex contexts. First, does the claim that there are “no right answers” mean that we can’t make knowledge claims that are true, i.e., that correspond to reality? If so, how can we know that? How can it be proved?
The answer is that it can’t. It is fine to say, as Dave does, that a complex system is in constant flux and that the whole is more than the sum of its parts. But this is not enough to imply or even suggest the claim that our descriptive statements about such systems cannot be true, or that we cannot formulate true statements relating our decisions to act on such systems to outcomes. We may, in complex contexts, come up with right answers, as far as we know. We may come up with answers that work. Of course, in saying that there are no right answers, Dave may mean that there is more than one solution to a problem and more than one way to affect such a system; but if that’s the case, then, in this respect, the complex context is not different from the complicated context, where Dave also indicates that there may also be more than one solution to a problem.
Second, emergent patterns appearing in complex contexts may be very instructive, but why would they be instructive if they didn’t guide us toward decisions that are more right than others? In particular, emergent patterns involve the emergence of new higher level structures that, in turn constrain agents in complex systems. But don’t such structures introduce a measure of predictability into complex contexts and into the results of our decisions acting upon such contexts? That is, can’t we formulate predictive rules relating new constraining structures to probable behaviors of agents arising, in part, from the constraints imposed by the structures, and, in part, from our decisions? I think we can and often do just that in everyday life.
Third, regarding “flux and unpredictability,” certainly the details of emergent patterns following upon our decisions can’t be predicted in detail in complex contexts, but is the response to our decision entirely unpredictable in such contexts? Can’t our decisions create greater or lesser propensities for certain outcomes to occur? In both the “simple” and “complicated” contexts of “order” in which cause and effect relationships and rules apply, prediction is possible; but does “unorder” imply that predictions of lesser probability are impossible, or just that predictions that are too detailed, or that we can be fairly certain about, are impossible?
Fourth, complexity is considered to be the domain of “unknown unknowns.” That is, it is a domain in which we don’t know what we don’t know. In Dave’s discussion of complicated contexts, these are characterized as the domain of “known unknowns.” But is this really the case? If the complicated context is one where we must produce new knowledge, then how can we know beforehand what this knowledge is? We may know what our knowledge gaps are, alright, and we may even develop ideas about the kind of knowledge we need in general. But how can we know what the unknowns are beforehand, without those unknowns being known?
What if the unknowns that will solve a problem can’t be known without developing knowledge that expresses an entirely novel point of view or theory? Would we then have a “known unknown?” So, in the end, is the “unknown unknown” state of the complex domain really that different from the state of needing to make new knowledge in the “complicated” context? Perhaps it is somewhat different, but I think it’s doubtful that the difference is as great as Dave indicates.
Fifth, though I agree that complex contexts are characterized by many competing ideas, I also think that complicated contexts can be characterized by many competing ideas, as is perhaps suggested by Dave’s statement that in such contexts there can be more than one right answer to a question. So, again, I doubt that this characteristic of complex contexts distinguishes it from the complicated context.
Sixth, a similar comment applies to the need for creative and innovative approaches. Sure those are very important for complexity, but new ideas are important in any situation where there’s a knowledge gap. Since, by definition, complicated contexts require closing knowledge gaps and problem solving, they require creative and innovative approaches, as much as any of the other contexts.
Seventh, complex contexts require “pattern-based leadership” in the sense that leaders must learn which emergent patterns ought to be reinforced and stabilized and which should be discouraged, and this is different from “complicated” contexts because in those there is no emergence. So, considering all of the above points, it seems that the characteristics that really distinguish the “complex” from the “complicated” domains most sharply are (a) those having to do with emergence and (b) those having to do with predictability. Even if one is less inclined to accept the idea that predictability is entirely absent from the “complex” domain, it certainly seems easy to accept the idea that there is less predictability and more uncertainty in complexity than there is in complicated contexts.
Eighth, the recommended mode of action for “complex” contexts is Probe-Sense-Respond. From the point of view of the DEC-KLC-KM framework, however, we can place a slightly different interpretation on this recommendation. Specifically, prior to the classification of a context as “complex,” there is the determination that it is not “simple,” and that a KLC is required to decide what the context is and what decision is appropriate. Next, determining which of the other contexts a case falls into involves acquiring information either externally or internally as appropriate, and for Cynefin projects involves assembly and analysis by sensemakers of sensemaking items from narrative databases, alternative histories, fables, etc.
Now, this process of classification, from a DEC-KLC-KM point of view, means that the alternative knowledge claims that a given domain is “complicated,” “complex,” “chaotic,” or “disordered,” all get considered and evaluated during the sensemaking process, before Probe-Sense-Respond becomes relevant. In fact, once the decision is made that the domain in question is not “simple,” and it’s recognized that there’s a gap in what is “known,” then in the sensemaking process we actually have: acquiring external information/and farming the results of previous individual and group learning in one’s organization-creating new ideas-eliminating errors in those ideas-integrating ideas, all occurring before Probe-Sense-Respond. So, from the DEC-KLC-KM point of view, we have the following pattern of knowledge processing: problem-acquiring external information/and farming the results of previous individual and group learning in one’s organization-creating new ideas-eliminating errors in those ideas-integrating ideas-Probe-Sense-Respond.
And then, when we look at “Probe” more closely, what do we see? To “probe” is, after all, to “act.” The act may be a safe-fail experiment, or a number of parallel safe-fail experiments. Now, we do not undertake such experiments randomly and without expectations. In the case of complex domains, specifically, we are experimenting by “seeding” the domain in the hopes of obtaining patterns whose effects merit reinforcement.
So, one way to look at this is as a test of our “cause and effect” expectations that at least some of the safe-fail experiments we choose to conduct will provide results we will want to reinforce. The experiments that don’t have such results are errors precisely in the sense that the expectation that they would have desirable effects is false, and the patterns they produce will be eliminated. So, we can look at “probes,” and safe-fail experiments, as error elimination activities testing our expectations that they will produce outcomes we may wish to reinforce.
Alternatively, we can look at “probe” from the DEC point of view as involving a plan-act sequence. When we then move on to “sense” the results, we become involved in monitoring and evaluating them, and then, if necessary, in acquiring information/and farming previous organization learning-creating new ideas-eliminating errors in ideas-integrating ideas. That is, we become involved in engaging in a KLC focused on determining the effects of the safe-fail experiments. Once that KLC is done, then to “respond,” we plan and act again in the next DEC round in accord with new expectations about the effects our actions will have. In other words, I think it’s quite easy to interpret the Cynefin view of what one would do in the complex domain from the viewpoint of the DEC-KLC-KM framework.
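The probe-as-safe-fail-experiment logic just discussed (run small parallel probes, observe each outcome, then amplify the desirable patterns and dampen the rest) can be sketched roughly as follows. All probe names, simulated outcomes, and the threshold below are hypothetical illustrations of my own, not anything specified by Cynefin.

```python
import random

def run_safe_fail_probes(probes, observe, threshold=0.0):
    """Run each probe, observe its outcome, and sort into amplify/dampen."""
    amplify, dampen = [], []
    for probe in probes:
        outcome = observe(probe)  # an experiment is valid only if observable
        if outcome > threshold:
            amplify.append(probe)  # reinforce the desirable pattern
        else:
            # error elimination: the expectation of a desirable effect
            # proved false, so this pattern gets dampened
            dampen.append(probe)
    return amplify, dampen

# Toy usage: observed outcomes are simulated as noisy readings around each
# probe's real effect, which is unknown to the decision maker in advance.
random.seed(1)
probes = [{"name": f"probe-{i}", "true_effect": e}
          for i, e in enumerate([0.5, -0.3, 0.1])]
amplify, dampen = run_safe_fail_probes(
    probes, observe=lambda p: p["true_effect"] + random.gauss(0, 0.05))
print([p["name"] for p in amplify], [p["name"] for p in dampen])
```

The sketch is just the error-elimination reading of probes argued for above: each probe tests the expectation that it will produce an outcome worth reinforcing, and probes whose outcomes fail the test are treated as refuted expectations.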
In talking about the chaotic context, Dave Snowden had this to say:
“In a chaotic context, searching for right answers would be pointless: The relationships between cause and effect are impossible to determine because they shift constantly and no manageable patterns exist-only turbulence. This is the realm of unknowables. . . .”
Here are the characteristics of the chaotic context in the Cynefin framework.
Chaotic
— High turbulence
— No clear cause-and-effect relationships, so no point in looking for right answers
— Unknowables
— Many decisions to make and no time to think
— High tension
— Pattern-based leadership
— Act-Sense-Respond
I think it will help to understand these statements if we look for a moment at some of the ideas of “chaotic dynamics.” First, these dynamics are of two types: “deterministic chaos” and “stochastic chaos.” Both types of dynamics appear to be indistinguishable from random motion, but can be distinguished from randomness by the use of certain tests. Deterministic chaos is governed by dynamical laws, and it turns out that these laws are causal in nature. Stochastic chaos is not governed by such laws, but has probabilistic generative mechanisms that account for chaotic dynamics. This kind of chaos is also non-random, but incorporates a random component in its probabilistic framework.
Second, even though chaotic dynamics is either governed by law or by probabilistic generative mechanisms, it is correct to say that chaotic dynamics is in large measure unpredictable. However, it is important to note that chaotic dynamics is not always unpredictable. Its unpredictability arises from (a) the sensitivity of the dynamics of chaotic processes to the initial starting point of those processes, and also from (b) sensitivity of the dynamics to small differences in the form of the laws or probabilistic models used to make predictions. Sensitivity to starting points means that even if one has a good model of dynamics, one’s predictions will diverge from reality very quickly if one’s measurement of the starting conditions is imprecise. Since it is never possible to get a precise enough measurement of the starting conditions in chaotic dynamics, it will always be the case that our predictions will diverge from reality within some period of time, normally a short one. However, short-term predictions may be possible if the time intervals used in compiling a time series are the right size, so it is not quite correct to say that chaotic dynamics is unpredictable. Moreover, analysts of financial data can use this limited predictability of chaotic series to good advantage, provided they repeat their analyses often enough to account for the divergence of the predicted from the actual which will inevitably occur. Now, a similar difficulty in prediction will arise from small differences in the dynamical laws governing a system, so once again, we are talking about very limited, but at least some, predictability of chaotic dynamics.
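As a concrete illustration of point (a), sensitivity to starting conditions, here is a small Python sketch using the logistic map, a standard textbook example of deterministic chaos (the map itself is my illustration, not something discussed in the text). Two trajectories that begin a tiny measurement error apart agree in the short term, exactly the window in which prediction is possible, and then diverge.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # the "true" starting condition
b = logistic_trajectory(0.2 + 1e-10)  # the same, with a tiny measurement error

# Early on the trajectories are nearly identical (short-term prediction
# works); after enough steps they diverge completely, even though the
# dynamics is fully deterministic and governed by a known causal law.
for t in (1, 5, 50):
    print(t, abs(a[t] - b[t]))
```

This is the sense in which chaotic dynamics is knowable and briefly predictable, yet unpredictable over longer horizons: the generating law is perfectly known here, and divergence still follows from the imprecise starting measurement alone.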
Third, I’ve just said enough, I hope, to indicate that chaotic dynamics is not “unknowable,” in the sense that we must remain ignorant of the mechanisms generating the dynamics of chaotic interactions. Moreover, the fact that we can come to know what the generating mechanisms of chaotic behavior are can both help our understanding and provide us with a capability for short-term prediction of such dynamics.
Fourth, these first three points may be taken as a corrective to one interpretation of Dave’s characterization of the chaotic context. However, Dave may also be taken as asserting that cause and effect relations between our decisions and chaotic dynamics are unknowable, and that predictability in the area of the impact of our decisions on the chaotic context is absent. If this interpretation of Dave’s characterization of the chaotic context is correct, then I largely agree with Dave, so long as it’s clearly stated that what we can’t predict is the impact of our decisions on the future course of the strange attractor describing chaotic dynamics in a phase space.
Fifth, however, this doesn’t mean that we can’t predict the impact of decisions intended to end chaotic dynamics and to shift an area of interaction out of a chaotic regime. In fact, Dave’s “Act-Sense-Respond” action recommendation for chaotic contexts, is aimed at ending chaotic dynamics, by shifting the context either to a simple or a complex one. Looking at that recommendation from a DEC-KLC-KM point of view, the recommended pattern looks like this: Act-Monitor-Evaluate-Acquiring Information/and farming previous organization learning-creating new ideas-eliminating errors in ideas-integrating ideas-Plan-Act.
And again, acts performed in a chaotic context carry with them expectations of outcomes and ideas about cause and effect, as Dave and Cynthia Kurtz make clear in their article on “The New Dynamics of Strategy . . . “ There, action involving the chaotic context is mostly about shifting dynamics away from the chaotic context into the “complex” or “known” (simple) contexts. More specifically, exerting coercive authority to move dynamics into a rule-governed, “known,” domain creates a transition to a simple context called “imposition.” That is, exerting coercive authority (the cause) creates a transition to a simple ordered system (the effect). And acting to create multiple attractors to stimulate self-organization (the cause) creates a transition called “swarming,” whose effect is relocation to a complex context.
In short, while chaotic dynamics, whether deterministic or stochastic, may be unknowable and unpredictable within the chaotic domain over the long term, we can and do formulate cause and effect relationships expressing our expectations about getting out of chaos and into the simple or complex domains.
End of Part Two
Tags: Complexity · Epistemology/Ontology/Value Theory · KM Software Tools · Knowledge Making · Knowledge Management
May 29th, 2008 · Comments Off on On Cynefin as a Sensemaking Framework: Part One

In earlier posts, I discussed Dave Snowden’s Cynefin framework from the viewpoint of systems classification, offered an alternative to it, and then offered some critical comments. I did this because (a) Dave sometimes used the term “system” in describing one or another Cynefin “domain” and (b) a lot of the recent discussion on Cynefin in the act-km group focused on the issue of Cynefin as a framework for systems classification.
However, looking at Cynefin from the above perspective is not consistent with Dave’s primary application of the framework as a model and tool for aiding “sensemaking” for decision making. Using Cynefin is mostly about sensemaking for decision making and isn’t focused on categorizing types of systems for purposes of studying them, or developing general knowledge about them, or better understanding the holistic character of a system. In short, using Cynefin is not about systems analysis. Instead, it’s about context analysis, and about helping people and groups to decide on and take actions that are appropriate in an immediate situational context.
In order to develop my view of Cynefin as a sensemaking framework, I will need to rely on my own way of seeing the world, namely a framework that combines decision cycles, processes, knowledge life cycles, and, at least, an abstract notion of KM. So, I’ll first provide an abbreviated version of that framework, and then move to Cynefin and my critique of it.
The DEC-KLC-KM Framework
Let’s begin with routine decision making. When we see a gap between the way the world is and the way we want it to be, we typically plan what we have to do to close the (instrumental behavior) gap between the two. Once we develop a plan we then act. After acting, we monitor the results of our actions. Finally we evaluate the results we’ve monitored, and then if we haven’t reached our goal, and sometimes even if we have, we begin the cycle of decision making again with a new round of planning. Let’s call this pattern the Decision Execution Cycle (DEC).

The Decision Execution Cycle
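For readers who think more easily in code, the cycle just described, plan-act-monitor-evaluate repeated until the gap is closed, can be sketched as a simple loop. The functions passed in are hypothetical stand-ins for whatever planning, acting, monitoring, and evaluating mean in a given context; nothing about them is part of the framework itself.

```python
def decision_execution_cycle(state, goal, plan, act, monitor, evaluate,
                             max_rounds=100):
    """A minimal sketch of one DEC: plan, act, monitor, evaluate, repeat."""
    for _ in range(max_rounds):
        intended = plan(state, goal)   # plan what to do to close the gap
        state = act(state, intended)   # act on the plan
        observed = monitor(state)      # monitor the results of the action
        if evaluate(observed, goal):   # evaluate: has the gap been closed?
            return state
    return state  # goal not reached; a new round of planning would follow

# Toy usage: close a numeric gap between state and goal one step at a time.
result = decision_execution_cycle(
    state=0, goal=5,
    plan=lambda s, g: 1 if s < g else 0,
    act=lambda s, step: s + step,
    monitor=lambda s: s,
    evaluate=lambda obs, g: obs >= g,
)
print(result)
```

The sketch also makes visible the point made below: each pass through the loop only makes sense on the expectation that acting will influence the state, i.e., that some cause and effect relationship connects decisions to outcomes.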
DECs, of course, produce decisions and decisions, actions. And actions – activities – are the stuff that social processes, social networks, and (complex adaptive) social systems are made of. These are all built up from activities that are inter-related by their objectives, goals, effects, and the values that are associated with them.
Another very important aspect of DECs and decisions that I need to emphasize here is that they are undertaken in the expectation that they will have some influence on ourselves and/or the world around us. That is, decisions are viewed by us as “causes” that will, directly or indirectly, affect, or at least influence, something in the world. This relates to the idea that DECs are about closing instrumental behavior gaps. They can’t do that unless our decisions and actions can change the reality that we see. And, of course, if our decisions are expected to produce effects, then it also follows that our decisions are undertaken in the expectation that cause and effect relationships exist that our decisions are in accord with. Otherwise, we could not expect that our decisions would produce the specific outcomes we expect.
Now, in saying the above, I am not committing to an assumption that we expect our actions to necessarily cause specific and precise effects. That is, I am not saying that we assume determinate relationships between our decisions and their outcomes. No such strong assumption is needed here, only the much looser requirement that we make decisions in the expectation that they will help to bring about the effects or outcomes that we seek, or contribute to those outcomes in some way. Far from assuming determinism, we may only be assuming that our actions make the outcomes we seek more, rather than less, probable or likely.

Business Processes are Networks of DECs
Now let’s distinguish three categories of business processes: operational business processes, knowledge processes, and KM processes. Operational processes are those that are composed of routine DECs. Examples are Sales, Marketing, Logistics, Accounting, etc. They use knowledge, and also make new knowledge about specific events and conditions that are important aspects of situations, but they do not produce or integrate new general knowledge.
Knowledge processes are also composed of DECs. But these DECs are primarily motivated by the need to guard against and solve problems that arise in operational business processes; and while there are some routine DECs in knowledge processing, there are also creative DECs in which new ideas are created. There are three primary knowledge processes: problem seeking, recognition, and formulation, the process that transitions processing from operational processing to knowledge processing and produces the problems that drive other knowledge processes; knowledge production, the process an agent (individual or collective) executes that produces new general knowledge; and knowledge integration, the process that presents new knowledge claims to storage containers and agents comprising the system.
Knowledge production is a process made up of four sub-processes:
— information acquisition,
— individual and group learning,
— knowledge claim formulation, and
— knowledge claim evaluation.
Knowledge integration is made up of four more sub-processes, all of which may use interpersonal, electronic, or both types of methods in execution:
— Knowledge and Information Broadcasting (KIB),
— Searching/Retrieving,
— Knowledge Sharing (peer-to-peer presentation of previously produced knowledge), and
— Teaching (hierarchical presentation of previously produced knowledge).
Knowledge processes, of course, produce outcomes. Chief among these is knowledge, which I’ve defined and specified at length elsewhere (For example, here). The various outcomes of knowledge processes may be viewed as part of an abstraction called the Distributed Organizational Knowledge Base (DOKB). The DOKB has electronic storage components. But it is more than that, because it contains all of the outcomes of knowledge processing in electronic, and non-electronic media. And since it includes beliefs and belief predispositions, and memories as well, it also includes all of the mental knowledge in the organization, as well as the changed synaptic structures that result from organizational learning processes.
Keeping the above notions in mind, here is how things work in organizations. Routine DECs and operational business processes are performed by agents who use previous knowledge in the DOKB (synaptic knowledge, mental knowledge, and knowledge in organizational repositories) to make decisions. Sometimes the DOKB and an agent’s perceived situation don’t provide the answers the agent needs, and the agent recognizes that and goes further to formulate the problem that has arisen, consciously or in words. The problem is an epistemic gap between what an agent knows and what it needs to know to participate successfully in the operational business process. Such a problem initiates a new knowledge production process. Once the problem is perceived, there is a need to formulate tentative solutions. These can come from new individual and group learning addressing the problem, or they can come from external sources through information acquisition, or they can come from entirely creative knowledge claim formulation, or, of course, they can come from all three.
Where the tentative solutions come from, and in what sequence, is of no importance to the self-organizing pattern of knowledge production. The only important thing about sequence here is that knowledge is not produced until the tentative solutions, the previously formulated knowledge claims, have been tested and evaluated in the knowledge claim evaluation sub-process. And that sub-process, Knowledge Claim Evaluation (KCE), is the way in which agents select among tentative solutions, competing alternatives, by comparing them against each other in the context of perspectives, criteria, or newly created ideas for selecting among them, to arrive at the solution to the problem motivating knowledge production.
KCE is at the very center of knowledge processing and knowledge management. Think about it. Without KCE, what is the difference between information and knowledge? How do we know that we are integrating (broadcasting, searching/retrieving, sharing, or teaching) knowledge rather than just information? And finally, how do we know that we are doing knowledge management and not just information management?
Once knowledge and other tested and evaluated information is produced by KCE, the process of knowledge integration of the solution begins. There is no particular sequence to the integration sub-processes listed earlier. One or all of them may be used to present what has been produced to the enterprise’s agents, or to store what has been produced in the various repositories in the enterprise.
Those agents receiving knowledge or information don’t receive it passively. For them, it represents an input that may create a knowledge gap and initiate a new round of knowledge production at the level of the agent receiving it. Integration of the knowledge, therefore, doesn’t signal its acceptance. It only signals that the instance of knowledge processing initiated by the first problem is over, and that, for some, new problems have been initiated by the solution, while for others the knowledge integrated is knowledge to be used: either to continue executing the business process that initiated the problem, or at a later time, when the situation calls for it.
Either way, the original problem that motivated knowledge processing is gone. It was born in the operational business process, solved in the knowledge production process, and its solution was spread throughout the organization during knowledge integration, and in this way, it ceased to be a problem — i.e. it died. This pattern is a life cycle, a birth-and-death cycle for problems arising from business processes.
The life cycle gives rise to knowledge, synaptic, mental and cultural (linguistic), and so I call it the Knowledge Life Cycle (KLC). Every organization produces its knowledge through the myriad KLCs that respond to its problems: KLCs at the organizational level, and KLCs at every level of social interaction and individual functioning in the organization. It is through these KLCs that new general and deep specific knowledge is produced, and the organization acquires the solutions it needs to adapt to its environment.
Organizations differ in the profile of their KLCs. They acquire information in different ways. They formulate solutions in different ways. They integrate them in different ways. And, above all, they evaluate tentative solutions in different ways. Organizations also differ in the patterning of their knowledge outcomes. They have different procedures for doing things, different software capabilities, different sales forecasting models, different performance monitoring schemes.
Knowledge Management is the set of activities and/or processes that seeks to change the organization’s present pattern of knowledge processing to enhance both it and its knowledge outcomes. So, KM doesn’t directly manage knowledge outcomes, but only impacts processes, which in turn impact such outcomes. For example, if one changes the rules affecting knowledge production, the quality of knowledge claims may improve, or if a KM intervention supplies a new search technology based on semantic analysis of knowledge bases, then that may result in improvement in the quality of models. There are at least 10 types of knowledge management activities, which, however, need not be listed here. The relationships among operational business processing, knowledge processing, and knowledge management are summarized in the three-tier model.

The Three-tier Model
Analysis of Cynefin as a Sensemaking Framework
Cynefin implicitly asserts that we will improve our decision making if, in the process of sensemaking, we gather information about a decision making context and then determine whether the context may be described by one of four ontological constructs: the simple context (Dave Snowden refers to this one and the others mostly as “domains”), the complicated context, the complex context, or the chaotic context. If we can’t describe the decision context as any of these, then, according to Cynefin, we can say the context is “disordered,” and we must try to break it down into more concrete contexts that can be described as one of the four primary contexts or domains.
Before describing and commenting on the four primary contexts, please note that, from the viewpoint of the DEC-KLC-KM framework, Cynefin is asserting that when we approach any decision making situation, the first and second things we must do are to gather information and then to address an initial sensemaking task: the classification task of placing the context into one of the Cynefin categories. Again, from the viewpoint of the DEC-KLC-KM framework, such a task is a creative problem-solving task that requires us to develop new general or deep specific knowledge relating to the correct classification of the decision making context, or a further investigation breaking down the original context into more concrete contexts that can be classified.
However, is this implicit assumption of Cynefin correct? In the DEC-KLC-KM framework, the opening assumption is that when one approaches a decision making context, the first thing to do in sensemaking is to decide whether one has a routine decision making situation, in which the knowledge needed to make a decision is at hand or easily accessible, or whether a knowledge gap exists and it is necessary to go through a KLC to produce new knowledge. Translated into Cynefin terms, I think this conflicts a bit with the Cynefin procedure. It suggests that the first move should not be to gather information in the form of sensemaking items or other pieces of information that require appreciable effort to gather, but instead to use knowledge and information already at hand, or easily accessible, to decide whether the situational context is one for routine sensemaking and decision making, or one for creative problem solving that produces new knowledge prior to the decision.
Using Cynefin terminology, the first move would be to decide on whether the context is a “simple” one where well-known predictive rules and/or cause and effect relationships exist, or whether it is not a simple context. If it is not, then, my alternative framework suggests that a KLC would be needed to resolve the classification question and to move further with sensemaking.
If a KLC is necessary, then the next step would be to seek and acquire information (sensemaking items or other sensemaking information) about a context that would help us to make sense of it. In short, I think Cynefin is in error in suggesting that the first step should be acquiring new contextual information other than that which is at hand or easily available. Rather, I think the first step should be to distinguish “simple” contexts from all others.
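The alternative first move argued for here can be sketched schematically. This is my own illustration, not code from the post; the function name and its boolean inputs are hypothetical simplifications of the decision described above.

```python
# A schematic sketch (my own, not from the post) of the proposed first move:
# use knowledge already at hand to decide between a routine DEC and initiating
# a KLC, before undertaking any costly information gathering.

def first_move(rules_known: bool, knowledge_gap: bool) -> str:
    """Decide the first sensemaking move for a decision context."""
    if rules_known and not knowledge_gap:
        # "Simple" context: well-known predictive rules and/or cause-and-effect
        # relationships exist, so the routine decision cycle applies.
        return "routine decision cycle"
    # Otherwise a knowledge gap exists: launch a KLC, whose own first step is
    # seeking and acquiring sensemaking information about the context.
    return "initiate KLC: acquire sensemaking information"

print(first_move(rules_known=True, knowledge_gap=False))
print(first_move(rules_known=False, knowledge_gap=True))
```

The point of the sketch is only the ordering: classification of the context as “simple” or “not simple” happens with knowledge at hand, and information gathering belongs inside the KLC that follows.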
The Ordered Contexts
Dave Snowden has given a number of slightly differing characterizations of Cynefin since the framework was first introduced. A recent version is in Dave’s 2007 Harvard Business Review article, written with Mary Boone. There, a “Leader’s Guide” table characterizes the “simple” context this way.
Simple
— Repeating patterns and consistent events
— Clear cause-and-effect relationships evident to everyone; right answer exists
— Known knowns
— Fact-based management
— Sense-Categorize-Respond
The “simple” context is one of the two in the realm of “order,” “where cause-and-effect relations are perceptible, and right answers can be determined based on the facts.” The context is “simple” because, within it, we know what to do. We can get facts. We can decide according to rules embodying “cause and effect” relationships between our decisions and their expected outcomes, including such relationships created by social or political norms, legal rules, cultural or economic imperatives and other connections which ensure that if we do “x,” then “y” results almost all the time, or at least enough of the time that we can count on such an expectation. In other words, “simple” contexts are the domain of “best practices.” In terms of the DEC-KLC-KM framework, simple contexts are “routine” contexts in which our learning about specific conditions and circumstances of the context is “routine,” along with our decisions relating to the context. It is when we act in such a routine context and find that our routine knowledge and expectations do not match reality, that we begin to recognize that our expectations about cause and effect are false, that we have a knowledge gap, and that we may have to create “non-routine” knowledge.
In terms of the DEC framework, Dave Snowden’s prescription that we should “sense-categorize” in this context corresponds to monitoring and evaluating in the DEC, while his “respond” corresponds to planning and acting in the DEC. In short, I think that “sense-categorize-respond” is a routine DEC cycle uninterrupted by a KLC.
Sometimes, people don’t want to believe that their routine knowledge is not working, and sometimes it is not obvious that it isn’t working. In both cases people match their expected patterns with results and don’t see a discrepancy even though there is one. They sense and categorize incorrectly, seeing a match when there is really a mismatch. To avoid such errors, one thing we can do is to be habitually critical when we are in simple or routine decision contexts. That is, we can be careful to ask ourselves whether reality really does match our expectations, and be as ready to accept that it does not as we are to see a match. Though this critical attitude is difficult to cultivate, it is a secret of the success of adaptive individuals, since they see problems before others do, and move to make new and more effective knowledge faster than others.
From the viewpoint of the DEC-KLC-KM framework, the decision to view a context as routine or non-routine itself solves a problem, the problem of whether further progress requires only routine learning or a KLC to make new knowledge. If it requires a KLC, then the next step is to acquire information from external sources and from the results of previous individual and group learning prior to arriving at new ideas about the decision context and how to cope with it. This next step may involve all manner of activities and certainly could easily include the use of narrative databases, anecdotes, alternative histories or any other information gathering techniques that might help “sensemaking.”
Looking again at Cynefin, the orientation to sensemaking it provides suggests, however, that rather than following up this information gathering phase of sensemaking with the specific formulation of competing knowledge claims focused on providing new knowledge that might specifically inform a decision, what we ought to do is to divide the problem into two distinct KLC steps. The first step would be to continue to classify the decision/problem solving context and decide whether we are dealing with a “complicated,” “complex,” or “chaotic” context. Then, having decided which kind of context we are dealing with, we might proceed to a second KLC step to develop solutions about what to do, and then follow that with action. Now, this construal is, of course, relative to looking at Cynefin through the lens of the DEC-KLC-KM framework itself. It’s doubtful that Dave Snowden would look at Cynefin as involving linked KLCs; he might instead view activity after the decision to classify one’s context as action uninformed by further problem solving thought, followed by more sensemaking of any changes in context, followed by more actions, and so on.
End of Part One
Tags: Complexity · Epistemology/Ontology/Value Theory · KM Software Tools · Knowledge Making · Knowledge Management
May 9th, 2008 · On Classifying “Systems:” Part Two

Types of Systems
The very circumscribed, partial, and incomplete take on the history of General Systems Theory I provided in my last post leaves us in the following position with respect to the problem of classifying systems. Three very important dichotomies have emerged out of the history of General Systems Theory: deterministic vs. indeterministic, patterned vs. patternless, and predictable vs. unpredictable. Combining these three dichotomies gives 8 logically possible types of systems:
1. Deterministic-Patterned-Predictable
This category includes systems governed by deterministic laws, exhibiting understandable patterns, that are predictable in their details. It includes mechanical and teleological deterministic systems. Dave Snowden’s distinction between simple and complicated systems can also be applied here, though his own use of the distinction doesn’t suggest that his simple and complicated systems are deterministic-patterned-predictable, but rather only ordered and predictable. Also, his distinction between known and knowable systems is based on the idea that simple systems are “known,” and complicated systems are “knowable.” While this may be a valid distinction, it is not a distinction about the systems themselves, but is a distinction based on our state of ignorance about deterministic, or causally ordered systems. Since the present classification is about ontology, and not about the state of our psychological beliefs about systems, or even about the epistemological classification of systems, I think the distinction between “known” and “knowable” causally ordered systems is less important here.
2. Deterministic-Patterned-Unpredictable
This combination of the dichotomies identifies no systems that have been discovered in nature or in human experience. It suggests that if a system is both deterministic and patterned, it will be predictable.
3. Deterministic-Patternless-Predictable
This is another combination of the dichotomies that identifies no known systems; it suggests that if a deterministic system exhibits dynamics that have no pattern, it will be unpredictable.
4. Deterministic-Patternless-Unpredictable
Chaotic Systems are deterministic, patternless and unpredictable in their details. In Dave Snowden’s discussion of Chaotic Systems in Cynefin, such systems are among those called “unordered,” and he also characterizes them as systems in which the agents are unconstrained by a higher level system. However, if agents are unconstrained by a higher level system, then there is no higher level system. So if we talk about chaotic systems at all, we must be talking about systems whose phase space dynamics are deterministic, patternless, and unpredictable in their details, since if this were not the case, the systems in question would be either chance systems, or complex systems at the component level of analysis.
5. Indeterministic-Patterned-Predictable
Some Complex Adaptive Systems are indeterministic, patterned, and predictable. These are human intelligent agent-based PCASs whose patterns of behavior involve appreciable coercive control efforts by central authorities, or by cohesive factions engaged in long-standing and inconclusive political conflicts with one another. The coercive control structures in such systems make human behavior more predictable than it is in NCASs, which lack such structures. But such predictability is produced not by deterministic relationships, but by humans choosing not to resist coercive authority. And the cost of implementing such structures is a severe restriction in problem solving capability and in the variety of new ideas generated to meet challenges from the system’s environment. Indeed, the system is more predictable because its reactions to the environment are less creative than would be the case if its agents were engaged in more autonomous problem-solving efforts.
At the level of human organizations, we can identify three types of PCASs: the Closed Organization, the Mobilized Organization, and the Frozen Organization.
The Closed Organization is a PCAS in which authority to recognize, formulate, solve problems, and disseminate solutions is restricted by high level management to a small elite, while the mass of employees contributes only to operational business processing. Examples include American Automobile Manufacturers of the 1950s and 1960s.
The Mobilized Organization is one in which many employees are enlisted in problem solving and solution dissemination, but also one in which problem solving efforts and dissemination are closely managed and directed by a small elite, so that only certain methods and processes of problem solving are implemented. An example of such an organization is General Electric, with its centrally directed imposition of Six Sigma-based approaches to problem solving.
The Frozen Organization is one in which hierarchical stove-piped structures have formed to deal with both operational business processing and problem solving. Within the stove pipes, the pattern is one of the closed or mobilized organization, but communication across stove pipes is prevented by organizational structures, or culture with the result that organizational problems that are broader in scope than the stove pipes cannot be solved.
6. Indeterministic-Patterned-Unpredictable
This is the category of Natural Complex Adaptive Systems and of PCASs whose behavior is relatively unpredictable. Indeterminism exists because laws that govern the evolution of complex systems can’t be formulated. Nevertheless, these systems do exhibit a pattern of change over time. Their dynamics can be understood in retrospect, while these same dynamics are relatively unpredictable, because both environmental challenges and the complex system’s reaction to them are relatively unpredictable.
Among PCASs, there are two system types: The Open Organization, and the Violently Conflictful system. Open organizations are characterized by widely distributed authority to seek, recognize and formulate problems, arrive at new solutions and disseminate those solutions to others. Structural barriers to self-organization are at a minimum and enabling structures for self-organization are at a maximum. Also, internal transparency in knowledge processing and trust in related interactions is high. Indeterminism exists because laws governing creativity in problem solving can’t be developed, and also because choice has a big role to play in self-organization. Complex system interaction can’t be predicted in detail, because of the role of human choice and creativity in the system. On the other hand, the system pattern can be understood in retrospect, even though detailed prediction is impossible.
Violently Conflictful Systems are not found at the organizational level, due to policing, but they can be found at other levels of society, say in residential, neighborhood, or regional settings, where there is hostility and escalating conflict between or among social groups. In these settings, individuals self-organize to support contending groups, which can have coercive communitarian structures. Violent patterns of interaction are neither determined nor random, but complex. They can’t be predicted, because violent outbreaks involve individual choices magnified by self-organizing patterns that sometimes reflect the kinds of chaotic escalations one sees in arms races. On the other hand, patterns of interaction can be understood after the fact.
7. Indeterministic-Patternless-Predictable
Systems of this type do not exist, and their absence suggests that systems cannot be both indeterministic and patternless and still be relatively predictable.
8. Indeterministic-Patternless-Unpredictable
This is the realm of Chance Systems — systems in which elementary and irreducible chance events occur. Chance Systems exist at a very small scale according to Quantum Theory, and humans can engage in games, using tools whose behavior can, more or less, approximate results that we would expect from true chance systems. However, human social behavior outside of such contexts is non-random, even though it can sometimes be modeled as reflecting randomness.
Summary and Conclusions
Even though there are 8 logically possible combinations of the three dichotomies, only 5 of the categories exist in reality, and the remaining 3 are probably empirically impossible. The classical deterministic systems of Category 1, whether mechanical or teleological, are no surprise, and the systems of deterministic chaos falling into Category 4, as well as the Chance Systems of Category 8, are also well-known. However, the complex systems of Categories 5 and 6 are characterized differently than in other discussions of system types.
In particular, the indeterministic, patterned, and predictable combination of Category 5 includes human-based PCASs involving coercive controls. These systems are relatively predictable because the structures and operation of coercive control within them, reinforced by authoritarian culture, simulate the mechanical and teleological systems of Category 1. Thus, human behavior in closed, mobilized, or frozen organizations is relatively predictable compared to other types of NCASs and PCASs.
On the other hand, the indeterministic, patterned, and unpredictable combination of Category 6 includes both commonly observable NCASs, such as ant hills and beehives, and the two types of human-based PCASs whose behavior is relatively unpredictable: open organizations and violently conflictful systems. Open organizations and NCASs are less predictable because agents within these systems are free to self-organize and to engage in distributed problem solving. Finally, violently conflictful PCASs are also unpredictable because, in the case of such systems, violent outbreaks involve individual choices magnified by self-organizing patterns that sometimes reflect the kinds of chaotic escalations one sees in arms races. The truth is that, even if we don’t like them, violently conflictful systems also involve a great deal of creativity and distributed problem solving, as humans enmeshed in such systems know very well.
The emptiness of Categories 2, 3, and 7 suggests three natural laws expressed in negative form. Specifically:
— There are no deterministic, patterned, and unpredictable systems;
— There are no deterministic, patternless, and predictable systems; and
— There are no indeterministic, patternless, and predictable systems.
These propositions, arising out of the categorization, can be refuted by finding one real system of each of the three types.
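Since the eight categories are just the Cartesian product of the three dichotomies, they can be enumerated mechanically. A minimal sketch (the ordering of the dichotomies and the `empty` set simply encode the classification above; the variable names are my own):

```python
from itertools import product

# The three dichotomies discussed in this post; ordering matches the
# category names used above (e.g. "Deterministic-Patterned-Predictable").
dichotomies = [
    ("deterministic", "indeterministic"),
    ("patterned", "patternless"),
    ("predictable", "unpredictable"),
]

# The three combinations argued above to be empty (Categories 2, 3, and 7).
empty = {
    ("deterministic", "patterned", "unpredictable"),
    ("deterministic", "patternless", "predictable"),
    ("indeterministic", "patternless", "predictable"),
}

# Enumerate all 2 x 2 x 2 = 8 combinations in the post's numbering.
categories = {
    i: ("-".join(combo), combo not in empty)
    for i, combo in enumerate(product(*dichotomies), start=1)
}

for i, (name, populated) in categories.items():
    print(f"{i}. {name}: {'populated' if populated else 'empty'}")
```

Enumerating in this order reproduces the numbering used in the post, with Categories 2, 3, and 7 flagged as the empty ones.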
Comparison with Snowden’s Types
Dave Snowden’s three “physical” types, order, chaos, and complexity, appear to map to Types 1, 4, and both 5 and 6 above, respectively. Type 8, Chance Systems, isn’t represented in his classification. Also, I said “appear” above because what Dave means by “order” isn’t entirely clear from his writings. But I think that some of his examples, at least, suggest that he means “causally” ordered systems characterized by causal laws, the sort of order we find in deterministic systems and in classical physics. However, other examples, involving things like the legal system, also suggest that he may not be talking about order characterized by causal laws at all. From my point of view this aspect of Cynefin certainly needs clarification.
It’s also not entirely clear that his “chaos” and my “chaos” match entirely. Dave has stated that his category includes deterministic chaos, but he’s also indicated in act-km correspondence that “chaotic systems” may include other kinds of systems as well. He has characterized “chaotic systems” as those in which agents are “unconstrained.” This doesn’t clarify things for me, however, because when agents within a system are unconstrained by a higher level system, there is no such system. Of course, there’s no problem with defining a “no system” type from a logical point of view. But if one does that, then the idea would not include “deterministic chaos,” which certainly implies (as in the famous example of the “butterfly effect”) causal interdependence of the components and agents in a system, and therefore constraints on the agents, whether we can see the patterning of these constraints or not.
In the area of “complexity,” Dave recognizes that human complex systems are different from those found in nature; but he doesn’t make the NCAS/PCAS distinction in detail. He also doesn’t recognize that there are more and less predictable CASs, so his classification doesn’t distinguish between Types 5 and 6 above.
Within the human realm Dave’s order category is broken down according to what he calls an “epistemological” criterion: whether systems are “known” or “knowable.” I don’t make that distinction myself because my classification is intended to be ontological, that is, it’s intended to be about the systems. It’s not intended to be about what we know, or to be about “us.” That’s another subject.
I feel the same way about Dave’s fifth type of system, “disorder.” Dave says very little about this type of system and its real characteristics, and the impression I get is that he really means to characterize it almost from a social psychological point of view, as referring to systems about which we have no consensus relative to whether we are dealing with a “known,” “knowable,” “chaotic,” or “complex” system. He’d probably respond that this is a “sensemaking” point of view, rather than a social psychological point of view. If so, I wouldn’t argue over terms, since in neither case are disordered systems ontological in character.
In future blogs, I’ll take up other Cynefin topics, including an analysis of Cynefin viewed not as a framework for classifying systems, but as a sensemaking framework for analyzing and deciding about what to do in specific situations or contexts; and also an analysis of Dave’s view of safe-fail experiments.
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Management
May 9th, 2008 · On Classifying “Systems:” Part One

Introduction
One of the aspects of Dave Snowden’s Cynefin approach is the identification of three physical and five human “domains,” or “systems.” The physical systems are called “order,” “chaos,” and “complexity.” In the area of human systems, Dave breaks “order” down into known (simple) and knowable (complicated) systems, and also adds a fifth “domain” called “disorder.” In the act-km group, there has recently been considerable critical discussion of this framework, with Stephen Bounds, Richard Vines, and myself all engaging in exchanges with Dave. I don’t want to discuss these exchanges in this post, though I will bring up a number of other aspects of the Cynefin approach in future blog installments. Here, however, I will focus only on the question of what a system is, and on the question of how we should classify systems.
Before I start this examination, however, I need to emphasize that this post is about “systems.” It is not about “domains,” “coalescences,” “contexts,” or other terms that Dave Snowden has used, in addition to “systems,” to describe the three physical and five human “things” named above. The question I’m addressing here is whether there is an alternative framework that addresses the question of “how ought we to classify real world systems, both physical and human, in a way that best reflects the nature of reality?” In other words, I am addressing questions of ontology related to systems classification, not questions of ontology related to “domain,” “context,” or “situational” classification. I’ll take up these other questions in the future.
Systems
A system is a conceptually isolable unit composed of components and their interactions, both having properties. That is, it is a collective of interacting components. Components, in turn, are individual units of which properties may be predicated. Interaction consists of the contact, or exchange, components have with one another. As with components, properties may also be predicated of the interactions. Among the properties of interactions are global properties of the collectives we call systems.
Since complete descriptions of phenomena are logically impossible, when we analyze or describe systems, we never deal with the whole of a system’s reality. We always abstract and select from infinitely rich concrete reality a set of components, properties, and interactions which have significance for us.
To analyze system change we have to make the process/product distinction. Ontologically, only process may exist. But to view change, we have to distinguish time intervals or time slices from changes across them. Within any time interval or time slice, we can describe the state of components, interactions, and properties.
We can also distinguish properties of components, and interactions from collective properties of a system. There are three types of such collective properties: aggregate properties are mathematical aggregations of the values of properties of individual components; structural properties are relations between or among individual components; and global properties are properties of the system itself that can’t be derived mathematically from either aggregate or structural properties.
Finally, “system” implies the idea of a “boundary”: there is an inside and an outside of any system (a system and its environment), there are conceptual criteria that allow us to distinguish that boundary, and there is the possibility of exchanges coming into and going out of the system; that is, inputs and outputs.
General Systems Theory from 100,000 Feet
In the early days of General Systems Theory, in the 1940s, ’50s, and ’60s, classification of systems was simple. They were all classified as either mechanical (or closed) systems or teleological (or open) systems. Both types were viewed as deterministic systems subject to natural laws. If a system was generally not subject to causally relevant inputs from its environment, then it was viewed as a closed, mechanical system. Typical examples of such systems are clockworks and the solar system.
In teleological or open systems, system dynamics is subject to continual causally relevant inputs, and the system has to self-regulate its reactions to these (use feedback) in such a way that it maintains its internal conditions within certain limits, and also maintains its goal-directedness over time. It is because of this goal-directedness that these systems are called teleological. That is, through self-regulation and feedback, they operate in such a way that they tend towards particular states over time; the products of system processes. Teleological systems are different from mechanical systems in a very important respect. While mechanical system laws relate one system product to another system product from one time slice to another, teleological system laws relate one system product to a specified class or range of system products from one time slice to another. Again, however, both of these system types are deterministic.
As General Systems Theory developed over time, this simple classification of systems was shown to be inadequate. One development occurred in chaos theory. There, people studying the dynamics of certain deterministic systems discovered a class of systems they called “chaotic systems” which, though governed by deterministic laws, were nevertheless unpredictable in principle because (a) the future state of these systems is extremely sensitive to their initial starting conditions; and (b) in both theory and practice, we are never able to measure such starting conditions with perfect accuracy. Even a reasonable degree of accuracy in measuring starting conditions would not be enough to overcome this because in classical deterministic systems, the divergence between the actual and projected course of such a system becomes exponentially greater over time, and the initial conditions, even though measured fairly accurately, along with the system’s laws, cannot guide us to the future, determined, but unpredictable, state of the system.
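This exponential divergence is easy to see in a toy system. Here is a minimal sketch (my own illustration in Python, not anything from the systems literature discussed here) using the logistic map in its chaotic regime; the starting values and step count are arbitrary choices:

```python
# Logistic map x_{n+1} = r * x * (1 - x) in its chaotic regime (r = 4.0).
# Two trajectories that start a hair apart diverge until they are
# effectively uncorrelated: sensitive dependence on initial conditions.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # starting point differs by 10^-10

gaps = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gaps[0]:.1e}, gap after 50 steps: {gaps[-1]:.3f}")
```

The gap roughly doubles each step, so even a measurement of the starting condition accurate to ten decimal places tells us essentially nothing about where the system will be fifty steps later.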
Systems characterized by such “deterministic chaos” turn out to be much more common than the typical examples normally given of classical mechanical systems. That is, it turns out that, in reality, systems like the solar system and classical clockworks are atypical systems, their frequency of occurrence dwarfed by the frequency of systems subject to deterministic chaos.
Even at the time, in the 1940s and ’50s when General Systems Theory first became popular, it was widely known that random or chance systems (systems in which elementary and irreducible chance events occur) existed, since Quantum Mechanics was already well-established. So, given the progress of work on chaos theory and chaotic dynamics, it was apparent by the 1970s that there were two classes of systems, deterministic and indeterministic, and that the deterministic category included classical mechanical, teleological, and deterministically chaotic systems, while the indeterministic category included random or chance systems.
It was also apparent that these two classes of systems no longer successfully distinguished systems that were alike from others that were different in essential respects. In particular, deterministic systems included both classical clockworks, whose behavior was highly predictable, and also chaotic systems, which, though deterministic, exhibited behavior that was unpredictable. This suggested that the early two-category classification of systems might be expanded into a four-category classification based on the deterministic-indeterministic and predictable-unpredictable dichotomies.
During the 1960s, and certainly by the middle ’70s, another development in the evolution of General Systems Theory was clearly visible. Work in Biology and General Systems Theory by Ilya Prigogine, Manfred Eigen, Humberto Maturana, and Francisco Varela focused on the idea of “complexity,” and also on the closely associated ideas of “dissipative structures,” “emergence,” “self-organization,” “identity,” “self-making,” “autopoiesis,” and “cognition.” While I don’t have space here to discuss these by now well-known ideas, I want to make the point that this first phase of complexity research established another key variant of the notion of system. Specifically, this idea views a complex system as a “pattern” or network of interactions, an “organization” that can be understood and explained in retrospect, but that is both non-random and indeterministic. It is non-random just because it is a pattern that persists through time. And it is indeterministic because (a) the pattern emerges out of interactions among its components in a way that cannot be accounted for by our theories and models, i.e., as a matter of fact we cannot specify laws that govern the detailed behavior of such systems, (b) we cannot know all the initial conditions to which the behavior of the system is sensitive, and (c) the details of the behavior of the system cannot be predicted by our theories and models.
This characterization of complex systems as indeterministic is a point that is not generally agreed upon. What is agreed is that “linear models” cannot account for or predict the details of complex system behavior. Some systems practitioners subscribe to determinism as a metaphysical doctrine, and assert the possibility that non-linear deterministic laws for such systems may always be found and that, in any case, it is good to proceed on such an assumption. I accept that this view may be right in individual cases. But I think it’s also possible that there are systems that are intrinsically complex and for which it may never be possible to develop either linear or non-linear deterministic laws.
In any event, here it is important to distinguish a number of different claims. First, there’s the view that all systems are really deterministic and that indeterminism, both random and complex, is an appearance arising out of our ignorance. This view suggests that neither random nor complex systems really exist and that all systems are deterministic. Second, there’s the claim that some particular system is deterministic or indeterministic, as the case may be. And third, there’s the claim that all systems are indeterministic. This is not the place to take up the first or third claims, and Karl Popper has already provided a wonderful discussion of the issues in The Open Universe, 1982. Here, I think we can focus on the second claim and simply point out that at any point in time and in any problem domain, theories and models that view a system as either indeterministic or deterministic can be compared, and we can choose which of these stands up best to our tests, evaluations, and criticisms. So whatever our views are about the ultimate reality of any system, we may still be able to agree that a particular theory, whether deterministic or indeterministic in character, is closer to the truth than another theory that expresses the opposing persuasion. We may also be able to agree that, as far as we know from the present state of scientific research, we can point to deterministic and indeterministic systems, and within the deterministic category to predictable and chaotic systems, and within the indeterministic category to random or chance systems and complex systems, both of which are unpredictable in their behavioral details.
During the 1980s and 1990s the study of complex systems continued in biology and spread to economics and the social sciences. The outstanding work in this phase of complexity research is associated with various individuals affiliated with the Santa Fe Institute, including John Holland, Brian Arthur, Stuart Kauffman, Chris Langton, Doyne Farmer, Murray Gell-Mann, Philip Anderson, and George Cowan. Many others outside the institute have drawn from their work, which by now influences many disciplines and research traditions. The emphasis of earlier research on the self-organization of structures and processes in biological systems began to shift to research on the self-organization of agents into higher level systems. Computer simulation has played a large role in this research, showing, in particular, how interacting agents governed by simple and deterministic rules could self-organize into higher level emergent systems. The self-organization has been marked by the development of higher level order arising out of system interactions. This order has been characterized as “order for free,” since its maintenance doesn’t require any central control. Emergent order is also characterized by “enablers” and “constraints” that the emergent higher level system imposes on self-organizing agents. That is, once agents do self-organize into a higher level complex system, then that system influences their behavior through the imposition of enablers and constraints, what Donald Campbell, some years earlier, called “downward causation,” and what Popper, in the development of his three worlds ontology, called “plastic controls.”
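The flavor of such simulations can be suggested with a toy example. The following sketch (my illustration, not drawn from the Santa Fe work itself) runs Wolfram’s elementary Rule 90 cellular automaton: each cell follows one simple, deterministic, purely local rule, yet a persistent higher level pattern (a Sierpinski triangle) emerges from the interactions:

```python
# Rule 90: each cell's next state is the XOR of its two neighbors
# (boundaries fixed at 0) -- a purely local, deterministic rule.
def rule90_step(row):
    return [(row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < len(row) - 1 else 0)
            for i in range(len(row))]

width = 31
row = [0] * width
row[width // 2] = 1     # start from a single "live" cell

# Print 15 generations; a Sierpinski-triangle pattern emerges.
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule90_step(row)
```

No cell “knows about” the triangle; the global pattern is a collective product of local interactions, which is the point the agent-based simulation work was making on a much larger scale.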
Another characteristic of this latest phase of systems research is its focus on the dynamics of systems. In particular, complexity research has emphasized transitions from rule-based order to chaos and complexity. And it has also viewed complex systems as ones that are “far from equilibrium,” and as systems that maintain themselves between the highly predictable equilibrium of static order and the unpredictable dynamics of chaos. Finally, Chris Langton’s metaphor of complexity existing “at the edge of chaos” has been influential in spreading the idea that complexity is not rule-based order, but that nevertheless it is a “pattern” of order that must continually strive to prevent itself from decaying into either deterministic predictable order or deterministic unpredictable chaos. That is, “complexity” stands between two forms of determinism, but is itself indeterministic.
The Diffusion of Complexity Theory
The spread of Complexity Theory into the social sciences is now creating another fault line in General Systems Theory. Clearly, there are complex systems that emerge from self-organizing agent interactions that operate without the aid of explicit central controls. Ant hills are an example of this sort of complexity. On the other hand, there are also complex systems that combine emergent self-organization with the efforts of system agents to direct and control their systems in accordance with their own intentions. In brief, there’s a distinction between Natural Complex Adaptive Systems (NCASs), and organizations composed of self-conscious intelligent agents. Let’s call the second type of complex systems Promethean Complex Adaptive Systems (PCASs). Complexity science, at present, is mostly based on research about NCASs, and not on research focused on PCASs, and we are in a phase now where we are trying to apply constructs, knowledge, and methods developed for NCASs to PCASs. It is likely that such an effort will succeed only partly, and that we will have to broaden complexity theories and approaches to be successful with PCASs.
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

The Relative Risk Intelligence (RRI) of a President, Prime Minister, or other Chief Executive of a Nation State is the relative ability of the Executive to solve problems and reduce the risk of error in the decision models of his/her Government in its various domains of activity or risk, compared to other Chief Executives. Domains of risky activity could be energy policy, foreign policy toward the Middle East or a particular nation, health care policy, etc. You get the idea; a domain of activity is any key policy issue area we think might arise during a Chief Executive’s term, as well as any currently unknown and challenging issues that might arise.
How important is the RRI of a Presidential candidate? Well, if we look at past Presidents of the United States, I think it’s plain that the best ones: Washington, Jackson, Lincoln, Teddy Roosevelt, Woodrow Wilson, Franklin Roosevelt, and Harry Truman, were the best problem solvers. And the ones that really got the US into trouble: John Adams, James Buchanan, Herbert Hoover, and now George W. Bush have been our worst ones. In fact, under Mr. Bush’s leadership, the current administration seems to treat all problems by sweeping them under the rug and refusing to enforce existing laws in area after area.
RRI may be the most important characteristic of a Presidential candidate. After all, if President Bush had been remotely sensitive to the possibility that he might be in error, and to the possible consequences of being wrong, he might have found another way to deal with Iraq, or, even intervening, he might have done it in a much less risky way. He might have had an entirely different attitude about relying on “Brownie” to run FEMA, or about using the phrase “axis of evil” to describe certain of our adversaries, or about relying so heavily on, and expanding US dependence on, external oil, or … Oh, well, I could go on and on about the current administration’s greatest hits. But what I’m getting to here is that we really ought to consider the RRI of the three remaining presidential candidates when deciding whom to select. So how can we assess RRI? Here’s a line of reasoning that can get us there.
What’s the difference between the ability to learn about a type of risk and the ability to solve problems, where a ‘problem’ is defined as a gap between what we know and what we think we need to know about that risk?
And if there’s no difference, or not much difference, then why shouldn’t assessing the ability to learn about risks and reduce the risk of error in one’s decision models be the same thing as assessing the ability to solve problems and integrate the solutions into one’s organization or in this case, political system?
And why shouldn’t this ability be assessed as a candidate’s ability to perform creative learning in various problem areas, and integrate the results of that learning into the knowledge base of the Government and the political system that supports decision making and action?
Answering these rhetorical questions led me to six factors, covering aspects of the ability to perform creative learning and integrate the results, that one can use in doing such an assessment.
-
How does the presidential candidate compare to other candidates in her/his ability to seek out, recognize and formulate problems in her/his knowledge of the various risk areas the Government must deal with? This factor measures the comparative ability to adapt to challenges by transitioning from ineffective routine learning and decision making to creative learning. This is important because creative learning is not the most frequent form of learning. That form is routine learning about the consequences of one’s actions, actions of others, and what we observe about the world around us. Creative learning is, in a particular sense, problem solving. But you can’t solve a problem, if you can’t see that you have one. So, the ability to perform the activities described in this factor is critical to the ability to learn about risks and reduce the risk of error in solutions.
-
How does the candidate compare to the others in ability to acquire information (including experience) that is relevant to helping to develop new ideas about how to solve problems? Acquiring external information is one of the most important responses we can make to a problem, just because something is a problem when the knowledge providing a solution to it is not available within us or within our organizations and also because acquiring information from external sources that we evaluate as having solved our problems is often less expensive in time, resources, and effort, than figuring out the solution to a problem ourselves.
-
How does the candidate compare to the others in ability to understand and use the results of individual and group learning (new knowledge) developed by her/his staff? This is another important feature. Evidently President Bush had a very limited circle of people he looked to for advice, and they, almost without exception, were people who addressed problems in collaboration with relatively small groups of trusted advisors. Their circle of trust was not very large, and the variety of new ideas they had to call upon was apparently exhausted before very much of the Administration’s first term in office had passed.
-
How does the candidate compare to the others in ability to formulate new ideas or solutions in the various risk areas? This is about coming up with new ideas, a dimension that’s essential to problem solving, and to arriving at alternative models that can help in reducing the risk of error. Some important factors related to this dimension include the problem solving capability of one’s projected executive staff, and also, the variety of methods available to members of the candidate’s projected executive staff to help in generating new ideas. Another important factor is the attitude of the candidate to new ideas and how they ought to be introduced. Some candidates severely restrict the communication of new ideas from individuals and groups and also don’t provide much in the way of resources to enable them. Such restrictions will undermine distributed problem solving by restricting the variety of new solutions that can be evaluated, and therefore increasing the likelihood that an effective solution will never make it to the evaluation process.
-
How does the candidate compare to the others in ability to criticize, test and evaluate the new ideas she/he or the Government formulates to solve problems in the various risk areas? This dimension is about the ability to eliminate errors through fair comparison. Without it, evaluation of competing alternative solutions will be biased against arriving at solutions to problems that work and that are more likely to be true than their competitors. Fair comparison is what allows us to select solutions that carry with them the minimum risk of error. If a candidate can’t do this better than his/her competition he/she will be less capable of learning about risks in a way that minimizes the risk of error.
-
How does the candidate compare to the others in her/his ability to integrate the solutions to problems she/he or the Government develops into other areas in the Government and the political system generally? This dimension is about a candidate’s and related staff’s ability to integrate new knowledge once creative learning produces it. The ability to keep records is very important here. But the capabilities to search, share, teach and broadcast are even more important.
We can use these six factors to assess the RRI of presidential candidates in a number of ways. The easiest way is to just ask yourself who’s better on each of the 6 factors, and then ask yourself who has the best RRI on the whole.
Another way is to use a simple scoring system to get the scores on the individual factors. That is, you might ask people to evaluate whether the competing candidates are below average, average, or above average in their ability on each factor, and then assign a score of zero if they are below average, 0.5 if they are average, and 1.0 if they are above average. This is very easy to do and would produce scores varying from 0-6 when you sum the individual ratings across the six factors.
Still another way to improve on the scores is to weight each of the six factors differently according to your view of their importance and then use the 0, 0.5, and 1.0 ratings multiplied by the weight you give to each factor. You can then add up the weighted ratings to give you a total score for RRI. Not too long ago, I developed a model rating the importance of the six factors in another application context. Below are the importance weights adapted for this application. You can see they’re very different from equal weights. You may want to use these ratings as a benchmark and revise them using your own judgment. An easy way to do this is to ask yourself what percent of each importance weight your rating would be. That is, it could be 75% of the importance weight, or 110% of it, or whatever multiplier you wanted to use to change my importance weights to your own.
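To make the arithmetic concrete, here is a minimal sketch of the scoring schemes just described. The factor names are my shorthand for the six questions above, and the ratings and weights shown are made-up placeholders for illustration, not my actual importance weights:

```python
# Minimal sketch of the RRI scoring scheme. Each candidate is rated
# 0.0 (below average), 0.5 (average), or 1.0 (above average) on each
# of the six factors; ratings and weights below are illustrative only.

FACTORS = [
    "problem recognition",
    "information acquisition",
    "using staff learning",
    "formulating new solutions",
    "criticism and evaluation",
    "integrating solutions",
]

def rri_score(ratings, weights=None):
    """Sum the ratings, optionally weighting each factor's rating."""
    if weights is None:
        weights = [1.0] * len(ratings)   # unweighted: scores range 0-6
    return sum(r * w for r, w in zip(ratings, weights))

ratings = [1.0, 0.5, 0.5, 1.0, 0.0, 0.5]        # hypothetical candidate
weights = [0.30, 0.10, 0.10, 0.20, 0.20, 0.10]  # hypothetical weights

print(rri_score(ratings))            # simple sum on the 0-6 scale
print(rri_score(ratings, weights))   # weighted total
```

Scaling my importance weights by your own multipliers, as suggested above, just means replacing the `weights` list before computing the weighted total.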

There are more ways to combine the six factors and to create scores on each factor, including using psychometric methods that produce ratio scale scores derived from human judgments tested for logical consistency. One very well known method for doing this is called the Analytic Hierarchy Process (AHP), which takes a good bit more effort than I’ve outlined above. But one place you can find out about the AHP and software for implementing it is here: www.expertchoice.com. Simple AHP models, such as the one you’d have to develop to do the above ratings, can easily be completed using a spreadsheet.
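For readers curious what such a calculation looks like, here is a rough sketch using the geometric-mean approximation to the AHP’s principal-eigenvector method. The three-factor comparison matrix is a made-up example of my own; a real application would use all six factors and also test the judgments for consistency:

```python
import math

# Derive ratio-scale priority weights from a pairwise comparison matrix
# using the geometric-mean approximation to the AHP eigenvector method.
# Entry m[i][j] says how much more important factor i is than factor j;
# the matrix must be reciprocal (m[j][i] == 1 / m[i][j]).

def ahp_weights(matrix):
    geo = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical 3-factor comparison matrix.
m = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
w = ahp_weights(m)
print([round(x, 3) for x in w])   # normalized weights, summing to 1
```

The resulting weights could then be plugged into the weighted scoring described earlier, in place of directly assigned importance weights.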
So, if you want to, you can compute the RRI of the currently competing presidential candidates using any of the above methods, and then you can vote for the one with the highest Relative Risk Intelligence. But before you do that, do keep one last thing in mind. A few generally very good problem solvers hurt their presidencies grievously because they made key decisions without taking into account the risk of error. They were, of course, Lyndon Johnson, Richard Nixon, and Bill Clinton. So, in rating the candidates, it may be a good idea to take into account whether each one has key areas of activity where, because of their personalities, they are likely not to take account of the risk of error, but, instead, may think: “the devil take the torpedoes, full speed ahead!”
Tags: Epistemology/Ontology/Value Theory · KM Software Tools · Knowledge Making · Knowledge Management
April 21st, 2008 · Potpourri: Categories and Other Issues

This is my second blog entry commenting on Dave Snowden’s “Wave-particle duality” piece. My first reply addressed the strawman, knowledge as thing and flow, and paradox issues. This one will address the other six issues he raised. Once again, they are:
1. “If you think in categories, then the world is presented in categories or a failure to categorize.”
2. “Joe wants to create categories (hierarchical and otherwise) and that such a way of thinking is antithetical in language and form of argument to understanding a world informed by complexity science.”
3. In “Complex Acts of Knowing” Dave “introduced the Cynefin framework and argued that we needed to understand that different approaches to knowledge management, communities etc applied depending on the context and that it was a mistake to argue for one approach over another without first developing an understanding of the nature of the system.”
4. “I also argued for a recognition that We always know more than we can say and we can always say more than we can write down was key to KM and that we had to learn to handle narrative and experience as much as we handled content and information centric views.”
5. “By way of introduction I made reference to three generations of understanding KM. The pre-Nonaka period characterised by data warehousing and decision support, the Nonaka period characterised by attempts to make tacit knowledge explicit and early attempts at collaboration, and then a third or post Nonaka period which would recognise the importance of narrative etc. Joe and Mark spent a considerable amount of time arguing that I had failed to realise that the most important distinction was between Knowledge Processing and Knowledge Management, and that their (Or Mark’s) understanding of this was the fault line between first and second generation KM.”
6. “For Joe categories are important. Thus (as he does in the paper) if he can find examples for Nonaka like thinking in the pre-Nonaka period then my talking about three generations has to be false. Now the whole point about generations is that they overlap – Your father does not have to die so that you can exist. I was creating a way of viewing history as an unfolding and overlapping series of events not a set of categories where things were right or wrong.”
Categories, Categories
Dave seems to think that there is a special problem about using categories in our thinking, or, more precisely about using categories in our descriptions of the world, and three of the six remaining issues he raised have to do with the idea of categories. He appears to object to the very idea of expressing one’s thoughts in terms of categories when he says: “If you think in categories, then the world is presented in categories or a failure to categorize.”
I had a number of reactions to this statement. First, I asked myself: How can the world be presented as failing to categorize? Clearly what Dave must have in mind here is that if one uses categories to express oneself, then one will describe the world in categories, or will criticize others from the viewpoint that they fail to distinguish the same categories as oneself.
While I think there is a lot of truth to this statement, I also found myself asking in response: How can one talk or write about the world without categories? Don’t we need to use categories to describe our discernment of old and new patterns? That is, to describe the distinctions we discern? Don’t we, in fact, have to invent categories when we see or recognize phenomena that are as yet unnamed, in order to describe or explain to others what we have invented or recognized?
And assuming that one cannot describe or explain the world without using categories, then doesn’t Dave use categories to describe and explain things, to tell stories, and to advance theories, frameworks, and models? And if he does, doesn’t he criticize others for failing to distinguish categories that he thinks are important, or for distinguishing categories that he doesn’t wish to recognize, or for using the wrong categories? To answer these questions I took another look at “Complex Acts of Knowing,” and this is some of what I found.
In the paper Abstract, Dave distinguishes between three generations of knowledge management, uses the tacit-explicit knowledge distinction, uses the terms “timely information” and “decision support” (both are categories), uses the term “BPR,” mentions “context,” “narrative,” and “content management” (three categories), refers to “scientific management,” refers to “complex adaptive systems theory,” “sensemaking,” “self-organizing capabilities,” “informal communities,” and “natural flow model of knowledge creation, disruption, and utilisation,” and refers to “the argument from nature of many complexity thinkers,” “human capability,” “order,” “predictability,” “collective acts,” “individual acts,” “thing,” “flow,” and “diverse management approaches.”
The pattern of category use we find in the Abstract of the paper is reflected throughout. On the second page, for example, we find a distinction between three “ages” of knowledge management. We find references to “academics,” “management,” “management science,” “Newtonian Science,” “quantum mechanics,” “phase shift,” “dogma,” “medieval,” “Enlightenment,” “esoteric complication,” “new simplicity,” “meaning,” “missionary enthusiasm,” “consultants,” “pre-existing ‘primitive’ cultures,” “rape,” “pillage,” “disillusionment,” “knowledge gained through experience,” “traditional forms of knowledge transfer,” “apprentice,” “collective knowledge,” “problematic,” “SECI model,” “socialisation,” “externalisation,” “combination,” “internalisation,” “dualistic,” “dialectical,” “knowledge capture,” “collaborative computing,” “Intranets,” “extranets,” “Japanese tradition of ‘Oneness,’” “rational, analytical, and Cartesian,” “innovation,” “manufacturing processes,” “knowledge programmes,” “organisational asset,” and “more holistic and dialectical view.” All of these are categories.
As we move through the rest of the paper, we find categories on every page. We find distinctions between “upper and lower levels of acceptable abstraction in any knowledge exchange,” “teaching and learning cultures,” “Ba” and “Cynefin,” “open spaces or domains of knowledge” or “sensemaking,” “Bureaucratic/Structured: teaching, low abstraction,” “Professional/Logical: teaching, high abstraction,” “Informal/Interdependent: Learning, high abstraction,” “Uncharted/Innovative: Learning, low abstraction,” “complicated systems,” “complex systems,” and “chaotic systems,” “known complicated systems,” “knowable complicated systems,” “knowledge flows,” and “Just in Time Knowledge Management.”
So, Dave certainly does create, and use categories in expressing his views and describing the world. And if he does, doesn’t he criticize others for failing to distinguish categories that he thinks are important, or for distinguishing categories that he doesn’t wish to recognize, or for using the wrong categories? The answer is, of course. In “Complex Acts of Knowing,” he criticizes scientific management for using the language of cause and effect. He criticizes those who assume that knowledge is a “thing,” and those who don’t distinguish knowledge as a thing from knowledge as a flow. In exchanges with me in the actkm.org group he has criticized my distinctions among biological, mental, and cultural knowledge, and has claimed that my classification of system types is less desirable than his because it is “more complex” and also “hierarchical.” He criticizes “quality management” approaches because they refuse to recognize “complexity.” In short, he does all the things I have named above.
Moving on to Dave’s statement that “Joe wants to create categories (hierarchical and otherwise) and that such a way of thinking is antithetical in language and form of argument to understanding a world informed by complexity science”: I think that either this statement is untrue or that it applies equally well to Dave’s own theorizing, since in “Complex Acts of Knowing,” and elsewhere, he uses categories liberally, and also where and when he pleases. He also uses hierarchies, as when he divides “complicated” systems into the “known” and the “knowable,” knowledge into “tacit” and “explicit,” and knowledge, again, into “things” and “flows.” Now, it may be true that my particular set of categories “is antithetical in language and form of argument to understanding a world informed by complexity science.” But, if so, I think Dave needs to make that argument.
Earlier, in actkm exchanges, we approached that discussion. I offered an alternative classification of system types to Dave’s. The exchange did not show that my classification was inferior to Dave’s. Rather, my classification was more complex and richer in detail than his, while his was simpler than mine and non-hierarchical. In the course of the exchange, it became clear that Dave was using his “types” to describe differing system states or strange attractors in phase space. However, when asked, he could not make explicit the dimensions or coordinate axes of his phase space, leaving the conceptual basis for his classification of system types unclear. He suggested instead that my question about coordinate axes was motivated by an attachment to categorical thinking and had nothing to do with what it is necessary to know to track the dynamics of a system in phase space. I leave it to my readers to evaluate how much sense this response makes.
The third “category” issue raised by Dave, once again, is stated this way: “For Joe categories are important. Thus (as he does in the paper) if he can find examples for Nonaka like thinking in the pre-Nonaka period then my talking about three generations has to be false. Now the whole point about generations is that they overlap – Your father does not have to die so that you can exist. I was creating a way of viewing history as an unfolding and overlapping series of events not a set of categories where things were right or wrong.”
This third issue really gets to what this “category” debate is all about. It’s not really that I use categories and Dave doesn’t, but rather that I’ve used categorical distinctions and related analysis to question Dave’s interpretations and claims about facts. In “Generations of KM,” Mark and I criticized Dave’s three ages view of Knowledge Management by juxtaposing our own two generations view, by questioning whether his facts about the first two ages are correct, and, most importantly, by questioning whether his conceptual distinctions between the three ages are important from a theoretical perspective.
Above, he says that our criticism of his three ages view is false, because it doesn’t recognize that his three categories are fuzzy, and that there is considerable overlap of ages or generations. But I think that here Dave is missing the point. We criticized his three ages theory of KM change because we disagreed with his characterization of the facts as well as because we thought we had a better set of categories for describing change in KM. Thus, Dave characterizes the first age as mostly about “information for decision support.” At one level that’s correct, but: (Generations of KM, p. 7)
“First, was there really no more to the first age of KM than “information for decision support”? If so, then why was the term KM used at all? After all, the field of business intelligence provides information for decision support. So do data warehousing and data mining. And so does the still broader category of Decision Support Systems (DSS). So what was the term KM supposed to signify that those other terms do not?
Second, also, if there was no more than information for decision support to the first age, then what were the attempts to distinguish data, information, knowledge and wisdom about? What was the development of Xerox’s community of practice for the exchange of knowledge among technicians about? What was knowledge sharing at Buckman laboratories in 1987 and 1992 (Rumizen, 1998) about? Where does Hubert St. Onge’s work (See Stewart, 1999) on the relationship of customer capital to the learning organization fit? Or Senge’s (1990) work on systems thinking? Or Karl Wiig’s early introductions to KM (1993, 1994, 1995)?
In brief, Snowden’s characterization of the first age of KM as focused on providing information for decision support and implementing BPR schemes suggests much too heavy an emphasis on KM as primarily composed of IT applications to reflect the full reality of the first age. In fact, his failure to take account of the human side of KM during the first age suggests a desire for the same kind of neat distinction we find in Koenig’s analysis. In effect, Snowden, like Koenig, seems to want to say that the first age was about technology and the second age was about the role of people in Nonaka’s four modes of conversion.”
In other words, our factual disagreement with Dave wasn’t just based on some earlier-than-1995 “Nonaka-like thinking.” Rather, there were so many exceptions to Dave’s categorization of the period that one could drive the proverbial truck through the picture of the first age he conjectured. Nor was his conjecture characterizing the second age as essentially about working out the implications of Nonaka’s SECI model any more accurate. As we said just following the above passage (ibid., p. 8):
“Third, in describing the second age of KM, Snowden’s account is, once again, far too spare in its characterization. No doubt, the Nonaka and Takeuchi book has had an important and substantial impact on KM, but the period since 1995 has seen important work done in many areas not explicitly concerned with knowledge conversion.
These areas include semantic network analysis, the role of complex adaptive systems theory in knowledge management, systems thinking, intellectual capital, value network analysis, organizational learning, communities of practice, content management, knowledge sharing, conceptual frameworks for knowledge processing and knowledge management, knowledge management metrics, enterprise information portals, knowledge management methodology, and innovation, to name some, but far from all, areas in which important work has been done.”
Work in these areas was very common during the period 1995–2002, when we wrote our paper, and its existence belies Dave’s interpretation that KM in this age was primarily about working through the SECI model. Further, our theoretical reasons for differing with Dave’s three ages “fuzzy” categorization can be easily stated.
“Boiled down to its essentials, Dave almost seems to be saying:
- The first age was about applying the BPR notions of Hammer and Champy (1993) on a foundation of Taylor (1912);
- The second age was about applying the vision expressed in Nonaka and Takeuchi (1995); and
- The coming third age will be about applying the vision expressed in his own Cynefin model, coupled with Stacey’s notions about the paradoxical character of knowledge, and expanded through its synthesis with the systems typology.
So, Snowden’s story of change is not guided by a transcendent conceptual framework that can provide us with categories to set a context for describing change, but rather is a claim that KM proceeds from vision to vision expressed in great books and/or articles. His view provides no guide about what the next fundamental change in KM will bring, because how can we know what the rest of a story might be?”
On the other hand, Mark McElroy’s two generations theory of change in KM is based on the conceptual framework developed primarily by Mark and myself. Put simply, the framework distinguishes Knowledge Management from Knowledge Processing and Business Processing, and views Knowledge Management as activities enhancing Knowledge Processing. The two key knowledge processes we distinguished were knowledge production and knowledge integration. Proceeding from there, Mark’s theory says that the first generation of KM focused primarily on knowledge integration alone, and particularly on knowledge sharing, while the second generation of KM ADDED a focus on knowledge production. We think the first generation of KM ranged from about 1990 to 1999 or so. We don’t use sharp boundaries for our categories in distinguishing the generations, and we’ve never claimed that knowledge production was entirely ignored before 1999. We just think it wasn’t a primary focus. During and after 1999, on the other hand, it seemed to us that more and more practitioners recognized that making knowledge was a primary concern of KM, and that situation, the second generation of KM, has remained to this day.
Cynefin Alternatives?
In “Complex Acts of Knowing” Dave “introduced the Cynefin framework and argued that we needed to understand that different approaches to knowledge management, communities etc applied depending on the context and that it was a mistake to argue for one approach over another without first developing an understanding of the nature of the system.” My thought on this issue is that it depends on what one means by an “approach,” and also on what one means by “understanding.” If by this, Dave means that one ought to have an understanding of the state of an organizational system before one decides on one’s approach for enhancing knowledge processing, I certainly agree. But if we are to do that then we must have a framework that allows us to describe the state of the system. Dave seems to use the simple, complicated, complex, and chaotic systems framework to decide on what state the system is in, and that again raises the question of the validity of that conceptual framework. Earlier, I pointed out that I had developed an alternative systems framework, and I’m sure that many other alternatives could easily be developed. This is not the place to discuss alternative frameworks for describing the state of organizational systems. But perhaps it is the place to point out that the Cynefin framework has received very little testing against alternative frameworks of change in organizational phase space, and that it can hardly receive any such testing without further development of it to specify the nature of the phase space by defining its coordinate axes.
We Know More Than We Can Tell and We Tell More Than We Can Know
Dave said: “I also argued for a recognition that We always know more than we can say and we can always say more than we can write down was key to KM and that we had to learn to handle narrative and experience as much as we handled content and information centric views.” I agree with this statement and certainly encourage Dave’s efforts to develop narrative-related methods both in gathering and analyzing data and content. In addition, however, I also think it’s important to continue to emphasize developing explicit theories and models, because these are very powerful too.
What’s often not pointed out about “objective knowledge” is that in an important sense, it is, in Bartley’s words, “unfathomed knowledge.” In saying this he was pointing to the well-known idea that the logical content of any non-trivial knowledge claim is open-ended. New logical consequences of any of our theories or models may appear at any time, and we cannot know, in general, what those consequences will be. Neither Newton nor Einstein understood the logical consequences of their theories. Nor does Dave know very much about what logically follows from a commitment to the propositions of Cynefin. It may well be true, as Polanyi said, that: “we know more than we can tell.” But it is just as true that in stating and committing ourselves to a knowledge claim, we also tell more than we can know. So there is mystery in both tacit and explicit knowledge, and if we want to increase our understanding, we need to explore both.
The Fault Line
“By way of introduction I made reference to three generations of understanding KM. The pre-Nonaka period characterised by data warehousing and decision support, the Nonaka period characterised by attempts to make tacit knowledge explicit and early attempts at collaboration, and then a third or post Nonaka period which would recognise the importance of narrative etc. Joe and Mark spent a considerable amount of time arguing that I had failed to realise that the most important distinction was between Knowledge Processing and Knowledge Management, and that their (or Mark’s) understanding of this was the fault line between first and second generation KM.”
This analysis of what Mark and I said is really a bit mixed up. What we said is that the fault line between the first and second generations of KM was that first generation was focused on knowledge integration, while second generation added a focus on knowledge production. The distinction between Knowledge Management and Knowledge Processing is also extremely important because without it, practitioners confuse the scope of KM and become involved in activities that are either knowledge processing or business processing, rather than KM. However, this second distinction is not the basis for our conception of second generation KM.
Tags: Complexity · Epistemology/Ontology/Value Theory · KM Techniques · Knowledge Integration · Knowledge Making · Knowledge Management
April 19th, 2008 · 1 Comment

This week Dave Snowden discussed my views in two of his blog entries. My last blog entry, “Is There A Correct Interpretation of Hamlet?” answered his first entry. This blog installment and at least one, perhaps two, following it will begin to answer his second entitled: “Wave-particle duality.”
I must admit I had a funny feeling reading Dave’s piece. Given the title, I expected it would deal with his view that knowledge is both a thing and a flow, with the wave-particle duality, and with the idea of “paradox,” three ideas that were closely related in his original paper, “Complex Acts of Knowing.” However, instead, it contained at least 9 different issues that I find I want to comment on. Here are the 9 issues:
- Dave charges that Mark McElroy used a “strawman” of his views in our critical paper, “Generations of Knowledge Management” (also Ch. 4 in our book, Key Issues in the New Knowledge Management).
- “If you think in categories, then the world is presented in categories or a failure to categorize.”
- “Joe wants to create categories (hierarchical and otherwise) and that such a way of thinking is antithetical in language and form of argument to understanding a world informed by complexity science.”
- In “Complex Acts of Knowing” Dave “introduced the Cynefin framework and argued that we needed to understand that different approaches to knowledge management, communities etc applied depending on the context and that it was a mistake to argue for one approach over another without first developing an understanding of the nature of the system.”
- “I also argued for a recognition that We always know more than we can say and we can always say more than we can write down was key to KM and that we had to learn to handle narrative and experience as much as we handled content and information centric views.”
- “By way of introduction I made reference to three generations of understanding KM. The pre-Nonaka period characterised by data warehousing and decision support, the Nonaka period characterised by attempts to make tacit knowledge explicit and early attempts at collaboration, and then a third or post Nonaka period which would recognise the importance of narrative etc. Joe and Mark spent a considerable amount of time arguing that I had failed to realise that the most important distinction was between Knowledge Processing and Knowledge Management, and that their (Or Mark’s) understanding of this was the fault line between first and second generation KM.”
- “For Joe categories are important. Thus (as he does in the paper) if he can find examples for Nonaka like thinking in the pre-Nonaka period then my talking about three generations has to be false. Now the whole point about generations is that they overlap – Your father does not have to die so that you can exist. I was creating a way of viewing history as an unfolding and overlapping series of events not a set of categories where things were right or wrong.”
- The knowledge as thing and flow/wave-particle duality issue
- “Recognising ambiguity and its nature through paradox, but avoiding a surrender to relativism and social constructivism (understood as a universal) is essential to making progress in this and related fields” is important for making progress in KM.
In this blog I’ll comment on the strawman (issue 1), knowledge as thing and flow (issue 8), and paradox (issue 9) issues. I’ll take up the other 6 in future blog installments.
Strawman Fallacy?
Dave begins his critique by characterizing “Generations of Knowledge Management” as an example of the strawman fallacy, which he defines as: “describing what I say in a way I do not recognize, and then attacking the representation.” Now, I think it’s very convenient for the recipient of criticism to define “the strawman fallacy” this way, but I also think it’s an instance of both extreme chutzpa and serious distortion of the strawman notion.
A strawman argument represents another’s view in an inaccurate and unfair way that overstates it and makes it easier to refute than the other’s actual position, then attributes that view to the other, and, finally, refutes it. Whether or not someone has created a strawman has nothing whatever to do with whether the recipient of the criticism “recognizes” the description, analysis, or characterization of a critic. Sorry Dave, you can’t be both the author of a theory and the arbiter of whether its critics have “strawmanned” you or not. It’s really up to third parties to render that verdict.
Now, since I’m one of the principals in this critical exchange, I, too, can’t be the one to determine whether the critique Mark and I delivered in “Generations of Knowledge Management” rendered Dave’s views in an inaccurate or oversimplified way. But I will say that we were very mindful of the “strawman” issue when we wrote our original paper, and we worked very hard to represent the views expressed in Dave’s paper fairly and accurately. We quote liberally from his paper, and in detail, and where some concepts were unclear we sought clarification in other papers written by Dave and in some of the publications he referenced. Moreover, I do not believe there is another examination of “Complex Acts of Knowing” or the Cynefin framework, even nearly 6 years after ours was written, that is as detailed and careful in its analysis as our paper is. I’m confident that if readers of this blog read both papers, whether or not they agree with our critique, they will generally find that the “strawman fallacy” is not at issue here.
Knowledge Duality and Paradox?
About the knowledge duality issue Dave’s latest piece says:
“In respect of my saying that knowledge was paradoxically a thing and a flow and the reference to Physics, Joe and Mark say ‘This is all very neat, but it is also very problematic: (1) Philosophers have learned much from paradox, but this doesn’t mean that paradox in the definition of knowledge is necessarily good for KM, especially if there is no paradox. (2) It is not true that physicists have concluded that electrons are both particles and waves. Rather, electrons are things that may be described using a particle model under certain conditions and a wave model under others. The reason why there is no contradiction or paradox in this view is that physicists know enough not to claim that electrons are both waves and particles, but that they are a third thing entirely.’
Again I dismissed this at the time as a failure to understand the nature of paradox. The whole point about paradox is that it allows us to reference a third and as yet not fully understood state. A potential Hegelian synthesis. However if you think in categories this sort of ambiguity has to be removed. Hence the debates on Hamlet.”
This reply of Dave’s to part of “Generations of KM,” questioning his view of knowledge, may seem fair and reasonable to some who read only his blog, because Dave quotes one of its telling passages against his view and then charges that Mark McElroy and I fail to understand “paradox” and “potential Hegelian synthesis,” and also commit the sins of thinking in categories and needing to remove all ambiguity. However, all we really objected to in his paper was his claim about the necessity of paradox and ambiguity in defining knowledge, when neither was necessary.
Also, Dave took the quote from us out of context. And to assess his charges about our lack of understanding of paradox and the rest, I think it’s pretty important to look at more of the context. Dave should have few objections to that since he is always telling others about the importance of context in sharing knowledge.
In “Complex Acts of Knowing” Dave says:
“Some of the basic concepts underpinning knowledge management are now being challenged: “Knowledge is not a “thing”, or a system, but an ephemeral, active process of relating.”
Mark McElroy and I, in response to that, said (p. 20):
“Taken from Stacey (2001), this definition suffers from, or at least creates, a process-product confusion. It is fueled by a desire to focus on the dynamics of knowledge creation, rather than only on explicit codified outcomes or mental beliefs. However, we can do this without becoming confused just by distinguishing knowledge processes from knowledge products or outcomes. Knowledge processes are not any less important because we call them “knowledge processes” rather than “knowledge” (the “ephemeral active process of relating”).
Why should we avoid the process-product confusion? First, if we take the view that knowledge is a process, we can no longer talk about knowledge as embedded in cultural products, or even knowledge as beliefs or predispositions in minds. Or knowledge as “true” or “false,” since processes are neither true nor false, but only existent or non-existent.
Next, if we tolerate the confusion, it doesn’t allow us to account for the content of cultural products or beliefs or predispositions in minds. So we are left with the problem of finding words other than knowledge to describe these very real phenomena. The real question is: what do we gain by calling knowledge “an ephemeral, active process of relating”? What does it do for us? In our view it only adds confusion in a field that is already replete with it, because some people insist on using words for their “halo effect” rather than for their descriptive value.
To us, it seems clear that knowledge is not a process but an outcome of knowledge production and integration processes. In other words, we believe that knowledge should be viewed as a “thing,” not a process. We also believe that as specified elsewhere (Firestone, 2001), knowledge is not a single thing, but is divided into three types: physical, mental, and cultural. All are things, and more specifically are encoded structures in systems that help those systems respond and adapt to changes in their environments.”
Since Mark and I wrote this last in 2002, I’ve changed my mind about the “encoded structures” claim in the above. I now believe that physical and mental knowledge in living systems is emergent, meaning that it is formed through the active interaction of a living system with its environment. Cultural knowledge is created as part of emergent processes, but as a product it is encoded in cultural artifacts.
Next, Dave criticized the notion that knowledge is a thing in this way: “. . . mainstream theory and practice have adopted a Kantian epistemology in which knowledge is perceived as a thing, something absolute, awaiting discovery through scientific investigation.” (ibid.)
As Mark and I say in our paper, however (p. 21):
“To say knowledge is a thing may be Kantian, or sometimes even Platonist for that matter, but to label it in this way is not to criticize the idea on its merits. Furthermore, to say that knowledge is a thing is not to say that it is “absolute,” or that it is “awaiting discovery through scientific investigation.” That is, knowledge can be (a) a thing, (b) produced by social processes of many kinds, and not just processes of scientific investigation, much less awaiting discovery by the latter, and (c) can also be either false or true. So there is nothing “absolute” about it.”
Dave also said: “In the third generation we grow beyond managing knowledge as a thing to also managing knowledge as a flow. To do this we will need to focus more on context and narrative, than on content.” (ibid.)
But what did Dave mean here by “managing knowledge as a flow?” Did he mean managing knowledge processes as one would expect from his earlier statement that knowledge was not a thing but a process? Not really, for he went on to say:
“Properly understood knowledge is paradoxically both a thing and a flow; in the second age we looked for things and in consequence found things, in the third age we look for both in different ways and embrace the consequent paradox.” (ibid., p. 102)
And we replied: (p. 22)
“Here we see a shift in Snowden’s view. As we saw above he begins by characterizing knowledge as a process and creating a process-product confusion, but ends by claiming that it is both a “thing” and a “flow,” thereby creating a process-product redundancy (to wit, flows are things). This he denies is a redundancy, treats as a seeming contradiction, and terms a “paradox.” He then defends paradox, by pointing out that philosophers have learned much from paradox, and also that physicists have had to live for many decades with the paradox that electrons are both particles and waves.
This is all very neat, but it is also very problematic:
(1) Philosophers have learned much from paradox, but this doesn’t mean that paradox in the definition of knowledge is necessarily good for KM, especially if there is no paradox.
(2) It is not true that physicists have concluded that electrons are both particles and waves. Rather, electrons are things that may be described using a particle model under certain conditions and a wave model under others. The reason why there is no contradiction or paradox in this view is that physicists know enough not to claim that electrons are both waves and particles, but that they are a third thing entirely. Indeed, this is the key lesson embodied in the Heisenberg Uncertainty Principle.
And (3), and most importantly, Snowden hasn’t established the need to call knowledge both a thing and a flow and thereby embrace paradox, contradiction or redundancy, much less another age of KM founded on paradox.”
All we need do, instead, is to say that knowledge is an outcome or product (thing) that is produced by human social processes (process). This allows us to deal with both dynamics and outcomes, an ability that has always existed in general systems theory.
So, the effort to establish knowledge first as a process, and then as a “thing” and a “flow,” doesn’t work. It offers no advantages that the process-product view of KM doesn’t. But it does offer the disadvantages of logical contradiction, redundancy, or perhaps (unnecessary) paradox, if one accepts Dave’s assertion.
Given the importance of the view of knowledge as ‘flow’ to the Cynefin model, it is critical to understand what he means by the term, and why he claims it is paradoxical in relation to the view of knowledge as a ‘thing.’ Earlier we noted the confusion caused by this language by pointing out that flows are things. Putting that aside, however, where, exactly, is the claimed contradiction between the terms in this case, or the paradox between them?
To say that knowledge flows and is also a thing, which Dave does in parts of “Complex Acts of Knowing,” is not to invoke a contradiction at all or even a paradox. On the other hand, if Dave had said that knowledge is both a thing that does not flow, on the one hand, and a thing that does flow on the other, then we would indeed have a contradiction or a paradox. But this does not seem to be what he says at all. Rather, what he seems to be saying is that knowledge flows – not that knowledge is flow, but that it (as a ‘thing’) is subject to movement. But where’s the paradox in that?
Later on in “Generations of KM,” Mark and I also say this: (pp. 37-38)
“Another possible interpretation of Snowden’s claims about knowledge as flow is that he’s really not talking about knowledge at all. Rather, he’s talking about a process whose outcomes are knowledge (i.e., learning and innovation). But here we encounter, once again, the product/ process confusion we covered before. The flow of knowledge (process) should not be regarded as knowledge. Both are things, but they are not the same things. The flow of knowledge occurs between various stages (or states) in the processes of knowledge production and integration, but to say that knowledge flows between the stages of a process is not to say that knowledge is a flow.
Turning to other sources for what flow could possibly mean to Snowden in this context, we see the term heavily used in two fields closely related to Knowledge Management. One is complex adaptive systems (CAS) theory, a bedrock of Snowden’s own hypothesis, and the other is system dynamics, a closely related field in which the nonlinearity of complex systems is modeled and studied.
To CAS theorists, flows are movements of things between nodes and across connectors in networks (Holland, 1995, p. 23). In Holland’s treatment of this subject, he states: “In CAS the flows through these networks vary over time; moreover nodes and connections can appear and disappear as the agents adapt or fail to adapt. Thus neither the flows nor the networks are fixed in time. They are patterns that reflect changing adaptations as time elapses and experience accumulates.” (ibid.).
Now, if this is what Snowden (and Stacey) mean by “ephemeral, active process[es] of relating,” (Snowden, 2002, p. 101), again, we fail to see the paradox and see only confusion, instead. Holland and other CAS theorists are not claiming that the things that flow across ephemeral networks are the same things as the ephemeral networks, themselves. A sharp distinction between the two is made with no paradox involved, nor any need for one. And so we fail to see how the use of the term ‘flows’ in the literature on CASs could be used to support Snowden’s claim of a paradox in the view of knowledge, or in the Cynefin model.
In the system dynamics arena, “stocks and flows” are central to the lingua franca of the field. Flows in system dynamics refer to streams of things (which are otherwise held in “stocks”) moving at different rates of speed and with different degrees of frequency, with or without delays. But flows as things are never confused with the things that they carry. And so here again, we fail to see how the historical use of the term ‘flows’ necessarily leads to any sort of contradiction or paradox.
In sum, while Snowden purports to use the term ‘flow’ as a noun (as in, knowledge is flow) in his definition of knowledge, his actual use of the term in his discussion seems confined to its use as a verb (as in, knowledge flows). Thus, he never manages to provide a satisfactory definition for knowledge as flow. On the other hand, to the extent that he implies that flow may be a process, the process he refers to is arguably one that produces and/or transfers knowledge, but which is not the same as knowledge itself. For all of these reasons, we find Snowden’s claim of a paradox in the third age definition of knowledge to be unpersuasive and full of confusions.”
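The stock-and-flow distinction the quoted passage draws on can be made concrete with a small sketch. This is a hypothetical illustration of the system-dynamics idea, not code from either paper; the function name `simulate` and its parameters are invented for the example. The point it demonstrates is only that a “flow” names a rate of movement while a “stock” names the accumulated thing, so the two are different kinds of entities and no paradox arises from using both.

```python
# A minimal Euler-step simulation of one stock with constant inflow and
# outflow. The flow is a rate (things per time step); the stock accumulates
# what the flow carries. The flow is never the same thing as the contents
# it moves, which is the distinction the passage insists on.

def simulate(steps, inflow_rate, outflow_rate, initial_stock=0.0, dt=1.0):
    """Return the stock level at each time step."""
    stock = initial_stock
    history = [stock]
    for _ in range(steps):
        net_flow = inflow_rate - outflow_rate  # a rate, not a quantity of stock
        stock += net_flow * dt                 # the stock integrates the flow
        history.append(stock)
    return history

# With inflow 5 and outflow 2, the stock grows by 3 per step.
print(simulate(steps=3, inflow_rate=5.0, outflow_rate=2.0))
```

In system-dynamics terms, `stock` here plays the role of a thing held over time and `net_flow` the role of a flow; confusing one with the other would be exactly the process-product confusion discussed above.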
So, in my view, Dave’s most recent criticism of “Generations of Knowledge Management” as failing to take account of knowledge duality and “paradox” is quite misplaced. Our analysis shows that there is no paradox, and also no need for a Hegelian synthesis, though there is quite a bit of process/product confusion. Dave says that in this last we are mistaken, and that our mistake is due to our insistence on thinking in terms of categories, which, in turn, prevents our understanding of the “paradox” he has shown us. But I think we have no special category-thinking problem not shared by other human beings, including Dave himself, and I will address this issue, as well as some of the other issues raised by Dave, in a future blog entry.
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management
April 17th, 2008 · Comments Off on Is There a Correct Interpretation of Hamlet?

Turner’s Evening of the Deluge (1843)
In a recent blog entry, Dave Snowden commented on a statement I made in an exchange in the actkm.org listserv group. Here’s the quotation from my post:
“I do believe that there is a “correct” interpretation of Hamlet, and also that we can select among interpretations and find the interpretation that is closer to the truth than its competitors. Of course, however, even if we someday find the “correct” interpretation, we have no way of knowing that we have found it. It is, I’m afraid, our fate to be able to find the truth, sometimes, but, unfortunately, always to be less than certain that we have found it.”
I wasn’t referring to performance interpretations here, but simply to interpretation of Hamlet as a text. So, what I was suggesting above is that we may someday find a “correct interpretation” of what the Hamlet text asserts, and that in the meantime we can select among alternative interpretations and evaluate which one is closer to the truth.
Now, my friend Dave juxtaposed this quote from Gabriele Lakomski:
“The model of the human mind has been assumed to be akin that of a symbol processor, a computer like engine that allows us to manipulate successfully a range of symbols of which language is deemed the most significant. This view of the human mind is very limiting because it assumes that what we know, and are able to know, is expressible in symbolic form only.”
I have to admit that I have no idea what this quote has to do with the quote from my actkm post, or with any other work I’ve done.
- I’ve never said that the mind is “akin to a symbol processor.”
- I’ve never said it is “like a computer engine that allows us to manipulate a range of symbols”, and
- I’ve also never said “that what we know, and are able to know, is expressible in symbolic form only.”
Nothing in my quote above implies that I hold any of these views. Furthermore, I don’t think I’ve ever said anything which corresponds to those assertions. Indeed, I have said, or strongly implied, the opposite of all three in my chapter on “What Knowledge Is.”
In Dave’s blog he followed Lakomski’s statement with this one:
“So what is the false assumption in the idea that there is a correct interpretation of Hamlet? Well Joe is assuming that the text of Hamlet exists in isolation from its performance (which would include a reading) and fails to consider the nature of a play (or other work of art) just as other people have failed to appreciate the role of recipes in the production of Bouillabaisse.”
I have assumed that the text of Hamlet and its content are different from its performances and their content, and that we can distinguish the two. And I have also assumed that one of these, the text, can be interpreted from the viewpoint of what it asserts, and that we can ask of any such interpretation whether it truly expresses the meaning of the text. I have said further that it is possible to discover the “correct” interpretation of the meaning of the text. Where is there a false assumption in any of this?
Dave says that the text of Hamlet doesn’t exist in isolation from its performance. But THIS is clearly the false assumption here. There is the text, and there are its performances. We can talk about the relationship of the two, but they are clearly not the same thing, ontologically speaking.
I have never said that there was a “correct” performance interpretation of Hamlet in the sense that one interpretation provides us with a TRUE account of what the Hamlet text MEANS. This is not even possible, since performance interpretations are not about providing an account of what the language in Hamlet means. Instead they are about providing a performance of the play of the highest aesthetic quality.
Regarding my “failure” to consider the nature of Hamlet in its performance aspects, I don’t agree that I was obligated to do so in the first place, or that, by not doing so, I missed the point I was trying to make. I made a claim about Hamlet as a text. That’s all there is to this story, and Dave’s attempt to interpret what I was saying in terms of performance simply changes the subject.
Dave’s blog entry then goes on to offer three examples including experiencing soccer games, operas, and bouillabaisse. They are all about the idea that experiential knowledge is different from linguistic knowledge or information, and they are certainly good examples. But (a) I have never claimed that experiential knowledge is the same as linguistic knowledge, and I am quite systematic about distinguishing between mental and biological knowledge of all types, including experiential knowledge, and linguistic knowledge; and (b) what do these examples have to do with my view that there is a correct interpretation of what the Hamlet text says that we may someday express, but that we can never “establish” as the correct interpretation beyond doubt? Nothing.
Dave ends his blog with the statement:
“A very large part of what we know, and how we know it is fluid, evolutionary and context dependent. To constantly talk about validation in the sense of symbol manipulation is to impoverish human knowledge. Of course this does not mean that we cannot be objective. My experience of rugby and opera has subjective elements, but parts of it are also objective. But that is a blog for later in the week.”
I certainly agree that his first sentence, just above, applies to our mental knowledge and even that it applies to our cultural knowledge, though in a different way. But, as for the rest of this statement, if I talk about Knowledge Claim Evaluation (KCE) a lot, it is only because others in KM almost never talk about it; and why my talking about it, however excessively from Dave’s point of view, “impoverishes human knowledge,” Dave doesn’t say.
In fact, insofar as my focus on KCE reminds people to question their knowledge and to always look for higher quality knowledge, I think it is likely to help people to grow and enrich human knowledge. I also think that an essential aspect of KM includes enhancing KCE, since that is one of the major areas of knowledge processing that are performed very poorly in today’s enterprises.
As far as Dave’s experience of rugby, or opera, or bouillabaisse being “objective” is concerned, I must simply disagree, however authentic and rich that experience is, and however much it immerses him in reality, since I know of no way to directly share his mental experience among the rest of us, and I believe that one important aspect of objectivity is shareability. Of course, he can describe such experiences to everyone, and I do agree that his description of them, whether true or false, is objective so long as he shares it with others, and all of us can critically evaluate his description if we feel the need to.
The bottom line here is that Dave’s blog post relates only very peripherally to the point I was making in the quote he used, and I can’t begin to understand how he associated the Lakomski quote with the view I expressed about the Hamlet text. That is one helluva stretch indeed.
Tags: Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management