July 21st, 2008 · Comments Off on Remarks on Truth and Theories of Evaluation

First, I think that true and false are terms we should apply to linguistic networks rather than single statements. Networks are necessary because single statements generally assume a good deal of background knowledge illuminating the meaning of those statements. If the background knowledge is also expressed in language, we have a network of statements, and it is this network, not the single statement, that actually gets tested in evaluation. In cases where we’re using observations to test such a network, observations running contrary to the expectations suggested by the network never logically compel us to falsify any individual statement in the network, but they set a problem of inconsistency for the network, and to solve that problem we have to decide which of its statements must be falsified.
So, second, are there knowledge claim networks that are more or less true? Well, yes and no. More specifically, if the network in question is about reality it must be either true or false. However, among false knowledge claim networks, some may be closer to the truth than others. For example, Newton’s Theory of Gravitation is, as far as we now know, false. However, it is closer to the truth than earlier theories about why objects behave as they do in the vicinity of the earth. Also, Einstein never really believed that his own General Theory of Relativity was true, and thought that it would one day be superseded by a better theory. On that day we will view Einstein’s theory as false, but also as closer to the truth than Newton’s Theory.
So, third, the question now arises: how can we compare competing theories in terms of closeness of approach to the truth? Popper developed a theory about this, but Pavel Tichy and David Miller (Popper’s student and later collaborator) each independently showed that Popper’s formal account of closeness of approach to the truth was wrong. Popper never offered an alternative, because he thought it wasn’t fundamental to his work to develop a measurement model for this property (the single term for which is “verisimilitude”). Others since Popper have continued to try to develop such a measurement model. Currently, the leading researcher in this area is Ilkka Niiniluoto, a Professor at the University of Helsinki. In 2003, I developed a model of my own in the Appendix to Chapter 5 of my book with Mark McElroy, Key Issues in the New Knowledge Management. Of course, such a model amounts to a formal theory of knowledge claim evaluation and must include evaluation criteria. Those were presented and discussed in Chapter 5 of Key Issues . . ., where the criteria were viewed in a more qualitative way as perspectives and where knowledge claim evaluation was viewed as a largely qualitative process compared with the view taken in the Appendix.
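For readers unfamiliar with the proposal that Tichy and Miller refuted, Popper’s qualitative, comparative definition of verisimilitude can be sketched as follows. This is my informal summary of Popper’s published definition, not the formal measurement model from the Appendix:

```latex
% Popper's comparative definition of verisimilitude.
% Ct_T(A): the truth content of theory A (the set of its true logical consequences);
% Ct_F(A): the falsity content of A (the set of its false logical consequences).
% Theory A has less verisimilitude than theory B if and only if:
A \prec B \iff Ct_T(A) \subseteq Ct_T(B) \;\wedge\; Ct_F(B) \subseteq Ct_F(A),
% with at least one of the two inclusions being proper.
```

What Tichy and Miller showed is that when B is a false theory, neither inclusion can be proper, so on this definition no false theory can ever be closer to the truth than another theory. Since the whole point was to compare false theories such as Newton’s and Einstein’s, the definition fails at its intended task.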
Fourth, I don’t think our theories of knowledge claim evaluation are true or false, as are our theories about reality. They are theories all right, and as such they are conjectural. But I think they are normative theories, theories about the ethics of inquiry and knowledge production. And I also think that these theories are about fairly comparing competing theories dealing with reality, so that alternative theories of knowledge claim evaluation are also alternative theories of fair comparison. We can compare these alternative theories of fair comparison, but we need yet another theory to do so, this time one for fairly comparing fair comparison theories of knowledge claim evaluation.
Fifth, this is not an infinite regress situation, however. The reason is that at the level of fairly comparing fair comparison theories of knowledge claim evaluation, we are out of levels. That is, at that level, we can only compare theories of fair comparison using the theories of fair comparison we already have, with the exception that we can think of new theories that would apply at both levels at which there are theories of fair comparison. It may seem that this is an infinite regress situation because some will assume that the theories of fair comparison we select as legitimate at each level must be “justified” before we select them. For those requiring “justification,” it will always be possible to ask whether one’s decision to select a theory of fair comparison was justified, and that will drive one to a higher level of evaluation. However, if one assumes that justification is unnecessary, and that the fair comparisons will be carried out through criticism only, then the situation changes, because the best surviving alternative among theories of fair comparison in the face of criticism is the best performer at both levels at which theories of fair comparison are compared.
Sixth, not all reality is a human construct, but I do think that the idea of truth is either a human construct or at least a construct of intelligent creatures having descriptive languages, whether humans or intelligent creatures evolving elsewhere in the multiverse. In my view, truth is correspondence between what our linguistic constructs assert about reality and reality itself. So we can’t talk about such a correspondence without descriptive languages, and such languages are created by societies of intelligent agents like humans.
Seventh, I don’t believe this view represents a paradigm shift for realists like me, since, as Popper showed time and time again, a belief in realism isn’t in contradiction with the idea that we humans construct our knowledge. Realism, instead, contradicts the view that because we construct our knowledge, it cannot be true or “objective.”
Tags: Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management
July 18th, 2008 · Comments Off on Does Partial Constructivism Make Sense?

I don’t think there are empirical truths. The idea that there are such truths is a hangover from positivism and empiricism, now discredited epistemologies, even though many social scientists seem unaware of this.
Also, from my viewpoint one really needs to distinguish between three kinds of knowledge: biological knowledge, mental knowledge, and cultural knowledge. Biological knowledge and mental knowledge are “subjective” in character, in the precise sense that they’re not sharable. In saying that they’re subjective, I am not saying that they are not real, nor that they are unimportant, nor that we cannot have objective knowledge about such subjective knowledge. Indeed, biological and mental subjective knowledge are very important because they are the immediate precursors of our actions.
Nevertheless, the only knowledge that is “objective” is cultural knowledge. It is “objective” when it is both sharable among humans and refutable through criticisms, tests, and evaluations. Let’s now focus specifically on the part of culture called linguistic assertions about the real world. These assert “objective information” provided that they’re sharable. Objective Knowledge is a subset of this information, composed of those assertions, knowledge claims, that have survived our criticism, tests, and evaluations (assuming, of course, that they remain sharable and refutable).
I don’t think there is a philosophical school of thought that is constructivist about some forms of knowledge and objectivist (in my sense) about other forms of knowledge; but I do think there are many people who believe in this. They accept that the methods of knowledge claim evaluation developed in scientific practice in certain scientific fields, and in certain other, practical, areas of life, provide a basis for selecting among false and true knowledge claims. But they do not accept that the same or similar knowledge claim evaluation practices provide a basis for selecting among competing knowledge claims in the social sciences, or in the areas of morals and ethics, or in ontology and epistemology. In these areas, they simply believe that one person’s knowledge claims are as good as another’s, and they are basically relativists in their approach.
The relativism of the constructivists is founded on the idea that our beliefs and knowledge claims are determined by our biological makeup (biological constructivism), or that they are determined by our cultural background and experiences (social constructivism), or both in combination, and also that because they are so determined they cannot correspond to the external world (i.e., cannot be true). Constructivism also holds that since our processes of critical evaluation also rely on knowledge, they too are determined by these same factors and carry a built-in bias that renders them ineffective in eliminating error and selecting true alternatives among our knowledge claims.
Having outlined the basis of constructivist relativism above, we can ask whether those who accept constructivism in the social sciences, and in morals, ethics, ontology and epistemology, but reject it in “the Natural Sciences” and certain practical areas are being consistent. The answer is no: there is nothing in the constructivist argument that suggests any differential application depending on the area of inquiry. According to constructivism, all of our categories, all of our knowledge claims, everything we think is determined by external and subjective factors, and there is absolutely no reason to believe that knowledge claims arising from creatures subject to these factors would or should correspond to reality, or that the evaluation practices of such creatures would work to eliminate errors and arrive at formulations that are close to the truth.
So, for consistent constructivists, there should be no distinctions between areas of inquiry regarding the possibility of attaining objective knowledge, and so from a consistent constructivist point of view, one can’t pick and choose among areas and say, I’ll accept constructivism here but not there. To make such a choice is, in fact, to question the validity of constructivism, unless one can give a special reason why the fact that we construct our knowledge claims under biological and social constraints makes correspondence with reality impossible in some areas of inquiry, but not in others. However, I’ve never seen any arguments that have shown such impossibility. So, it seems to me that Discretionary Constructivism has no basis in reasonable argument, but, in the end, is a kind of fideism saying that we ought to have faith that we can develop objective knowledge in the Natural Sciences, but not in other areas of inquiry.
Tags: Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

Historically, since Plato, the most frequent definition of knowledge has been Justified True Belief (JTB). Until the 20th century, philosophers believed in a foundation for JTB. The Cartesian Rationalists believed that some beliefs were certain because they were self-evident truths that survived Descartes’s method of doubt. The empiricists believed that some beliefs were self-evident truths because they were in agreement with observational experience. The Kantian Idealists believed that some beliefs were synthetic a priori truths and as such were also certain.
For these three philosophical traditions, Justification in JTB meant simply showing that a belief could be deduced from a foundational belief using the rules of logic. In the 20th century the following happened. The Pragmatism of Peirce, James, Dewey, and their successors, and the Critical Rationalism of Karl Popper argued that there were no certain foundational beliefs, and that neither Rationalism, nor Empiricism, nor Idealism nor any other philosophical school could supply such foundations. Thus, fallibilism, the view that no beliefs about the world are or can be certain, became accepted as the dominant position in academic philosophy. Because of this, the notion that knowledge was JTB had to undergo change. Some philosophers have tried to keep the notion of JTB by arguing that our ideas about “Justification” had to change. “Justification” could no longer mean deduction from certain foundations, but had to mean something else. In turn, the group that wanted to change the meaning of Justification split into a number of schools. The two main ones are what I would call the Wittgensteinian approach, and the “good reasons” approach.
The Wittgensteinian approach basically says that any theoretical system has to have “hinges” or basic premises about how to use language, ontological assumptions, and epistemological assumptions. People who don’t accept these premises are just refusing to play the language game within whose context a theory is presented. Knowledge is relative to the language game and its basic premises. It is constituted of these premises and all the conclusions that can be deduced from them. Knowledge is as before: it is still JTB, but we no longer justify it in terms of certain premises, only in terms of the premises of the language game. Obviously, while this position is consistent, it also presents a view of JTB and knowledge that is profoundly relativistic in character, and it begs the question of justification by insisting that the basic premises of the language game don’t need justification. But this is, after all, an empirical question. If one is willing to play the language game, then no justification is needed. But if one is deciding whether to play it or not, then the absence of justification means that the knowledge produced by such systems is not Justified True Belief at all, but rather belief that may not be true, and that is not justified until one decides that the foundational beliefs in such a system don’t need to be justified.
The other main approach retaining “justification” is the good reasons approach. Its basic tenet is that justification should no longer be viewed as deduction from certain foundations, but rather as supplying “good reasons” for accepting the foundations of any theoretical system. While this move may seem quite reasonable, we can’t overlook the fact that “good reasons” don’t provide any logical warrant for accepting the foundations of a theoretical system, and we also can’t overlook the fact that no proponent of the “good reasons” approach has yet been able to develop a coherent way of distinguishing good reasons from bad ones. Finally, having a good reason to believe a foundational belief cannot guarantee the certain truth of that belief. So again we are left with a situation where the belief in question may not be true, and also is not “justified” in the sense that the justification provided can guarantee the truth of the belief. Thus, if we insist on the JTB notion, the consequence is that “knowledge” is either relative, or that no knowledge exists. In terms of set theory, such a definition either gives us relativism or it gives us knowledge as the empty set, an unacceptable consequence if we think that we do have some knowledge.
Moving away from “justificationist” responses to the JTB crisis, Critical Rationalism and Evolutionary Epistemology contend that since justifications for beliefs or knowledge claims that make them certain can’t be provided, the best move is to change the definition of knowledge by getting rid of justification. This leaves true belief. However, because of fallibilism, we can never know for certain if a belief of ours is true. So if we say that knowledge is True Belief, we are left with the problem that something that we think is knowledge, may prove false and therefore is not knowledge at all. If this happens, we can say that the belief never was knowledge in the first place, a counter-intuitive result considering that we’ve had theories like Newton’s which we’ve thought of as knowledge for hundreds of years.
Well, Critical Rationalism (CR) and Evolutionary Epistemology (EE) ask: what if we go the whole way and say that knowledge needs neither to be justified nor to be true, but only needs to survive our best efforts at criticism, testing, and evaluation? If we make this move, then we are led to the position that (1) knowledge exists at any point in time, (2) some of that knowledge may prove false tomorrow, and (3) much more of it may, unbeknownst to us, be false today. So the CR/EE position is that knowledge is not justified, and it is not necessarily true, but it is what has survived our critical experience, viewed very broadly.
At this point you may ask: well, if knowledge can’t be justified, then how do we establish it? And the answer to this question is that we do not “establish” or “prove” it. What we do is test competing knowledge claims by criticizing, testing, and evaluating all reasonably coherent competitors, and then by evaluating which of these stands up best to our criticism. It is the knowledge claims that survive, along with the meta-claims comprising the track record of our attempts to overturn them and their competitors, that comprise our knowledge.
The foregoing view is my interpretation of past trends. Since most philosophers are unreconstructed “justificationists,” most of them would probably ignore the CR/EE theme as a sideline and emphasize variations on the good reasons approach more heavily. However, after almost 50 years of good reasons efforts to find coherent justifications for knowledge claims, I think that this sort of tilting at windmills may have run its course, and perhaps more people may be ready for the anti-justificationist, criticalist approach offered by CR. If, of course, they don’t succumb to constructivist relativism first.
However, in recent years post-modernism also seems to be weakening, so perhaps the constructivist wave is beginning to break on the shoals of reality, and we are not far from a resurgence of realism and a critical approach to evaluation. I hope so, and I am doing all I can to see to it that such an approach is a respectable one in the field of Knowledge Management.
Tags: Epistemology/Ontology/Value Theory · Knowledge Making

The over-riding problem with shifting from a “KM” orientation to a “knowledge sharing” one is that the words don’t mean the same thing, and focusing on one or the other may well lead to different policies, programs, and interventions. Put another way, since “knowledge sharing” and Knowledge Management are not the same thing, it’s possible that enhancing the one may have a negative impact on the quality of the other. This post will lay out some of the consequences of shifting the focus from “KM” to “knowledge sharing,” consequences that introduce additional problems themselves.
Innovation is an increasingly important focus of KM, but the shift to “knowledge sharing” prevents people from focusing on innovation as one of the important value propositions of KM, because “knowledge sharing” focuses attention on only one aspect of the knowledge life cycle, namely the knowledge integration aspect, and ignores problem seeking, recognition, and formulation, as well as the knowledge making aspects of knowledge processing. Note that innovation is a successful traversal of the knowledge life cycle, including its problem, knowledge making, and knowledge integration aspects; and since successful adaptation depends on high quality innovation, innovation is the key to adaptation.
An intense focus on enhancing “knowledge sharing” to the exclusion of the other major aspects of the knowledge life cycle is bad Knowledge Management, because it leaves most of knowledge processing untouched and may have the effect of enhancing the circulation of bad information rather than quality knowledge. Remember, there’s no guarantee that what “knowledge sharing” is doing is actually knowledge sharing. It may, in fact, only be sharing information, and poor quality information at that. So, paradoxically perhaps, those who moved from a KM to a “knowledge sharing” focus thereby guaranteed the continuation of poor quality KM in their organizations, simply by deciding to ignore most of the knowledge processing it ought to target.
Another important focus of KM, enhancing our capability to recognize where our knowledge is lacking and to formulate what the knowledge “problem” is, is also neglected by a knowledge sharing emphasis. As I indicated earlier, a shift to “knowledge sharing” means a shift away from problem seeking, recognition, and formulation, among other things. But, of course, it is vital for organizations to get better and not worse at seeking, recognizing, and formulating problems, because good problem processing is the first step in adapting to any challenge, big or small, that an organization has to meet. Insofar as managers focus on enhancing knowledge sharing, and turn attention away from problem processing, they are hurting their organization’s ability to adapt.
Next, a shift to “knowledge sharing,” and an avoidance of “KM,” tends to facilitate “information sharing,” while ignoring the problem of distinguishing whether it is information or knowledge that is being shared. An emphasis on “knowledge sharing” to the exclusion of “KM,” which certainly is implied by the notion of replacing a “KM” orientation with a “knowledge sharing” one, also tends to make sharing an end in itself, and works against establishing a structure of metrics that connects sharing outcomes to business impacts. It’s not that “knowledge sharing” as an orientation is necessarily hostile to metrics, but rather that “management,” as an orientation, is more likely to be associated with an orientation toward measurement and impact analysis. Historically, one of the major problems of KM has been an absence of metrics allowing evaluation of the knowledge processing impacts of KM. Perhaps the very popular focus on knowledge sharing running through the Best Practices, Community of Practice, and Enterprise Portal movements within KM is one of the reasons why the area of metrics in KM is so under-developed.
A decision to shift from “KM” to “knowledge sharing” will, of course, also prevent enhancing KM itself, since enhancing KM depends on developing new knowledge about how to enhance knowledge processing. At best, a focus on knowledge sharing may involve developing new knowledge about how to enhance knowledge sharing. But new knowledge about the rest of knowledge processing activities will no longer be relevant, so “KM” will not develop such knowledge except through the kind of informal, half-conscious KM interactions that have been characteristic of its history prior to the appearance of formal KM activities in the late 1980s.
If we look at all the above problems, we can see that they all arise because the shift from “KM” to “knowledge sharing” as the focus of a major knowledge-related organizational program, isolates, if you will, “stovepipes,” one part of knowledge processing, and therefore leaves KM undone or done very badly in real organizational contexts. In other words, the decision to shift to “knowledge sharing” isn’t just an innocent decision to move from one complex focus of organizational activity to another, but rather, it is a decision to get only one part of the KM job done and only one part of knowledge processing enhanced in such a way that it may be improving the throughput of low quality information rather than improving that of high quality knowledge.
There is another way, however, of simplifying KM activities apart from isolating only one aspect of knowledge processing. That is, knowledge managers could simplify KM programs by selecting important domains of activity where problems exist and could enhance knowledge processing throughout the knowledge life cycle in these problem-related domains. In other words, let problems drive the simplification of KM program activities, rather than categorizations of particular knowledge functions such as “knowledge sharing.”
In short, I think that new KM programs should target knowledge processing more broadly than just knowledge sharing. But I also think that such programs ought to more narrowly focus on enhancing knowledge processing in specific business domains, so that changes in knowledge processing resulting from KM initiatives, and also changes in business processes and outcomes, indirectly affected by KM can be more easily measured. Thus, accountable and practical KM programs can be created, and “knowledge sharing” itself, can be enhanced in the full context of the knowledge life cycle and its central function of adaptation.
Tags: Knowledge Integration · Knowledge Making · Knowledge Management
July 15th, 2008 · Comments Off on Knowledge Sharing Is Not As Transparent As It Seems

I think that most, if not all, current knowledge sharing programs do not distinguish those knowledge claims that are just information, from those knowledge claims that are knowledge, because they don’t know how to do so. And I also think that the consequence of this is that most, if not all, current knowledge sharing programs, are merely information sharing programs dressed up in the vocabulary of knowledge to give them more status.
Of course, there are many who don’t believe that knowledge inheres in documents, content, or linguistic networks, and who therefore also don’t believe that one can distinguish knowledge from information in such networks, or should try to do so. So, for them, that particular problem doesn’t exist. But, to the extent these executives define knowledge as something in the mind, there is an even greater complication for them, for such mental knowledge can’t be shared, because telepathy is not an option. What can be shared are knowledge claims, linguistic assertions; so if an executive takes the position that these cannot be knowledge because knowledge only exists in the mind, then that executive is tacitly admitting that “knowledge sharing” programs can’t possibly share knowledge. Yet executives who believe that knowledge exists only in the mind most probably don’t understand that they don’t understand how such mental knowledge can be shared. So, again, the “knowledge sharing” idea that seems to them so “transparent” turns out not to be transparent at all.
In the discussion so far, I’ve pointed to two groups of believers in knowledge sharing, those who believe that knowledge is contained in linguistic content, and those who believe that knowledge is some form of belief and that it is mental. Of course, still another group believes in both kinds of knowledge and even in biological knowledge such as synaptic structures and genetic codes. My own view of knowledge is called the Unified Theory of Knowledge (http://www.kmci.org/media/Whatknowledgeis%20(non-fiction%20version).pdf) and it states that biological, mental, and cultural knowledge all exist, and that only cultural knowledge can be shared. It also holds that cultural knowledge is the only kind of knowledge that can be “objective” in the sense that it is both sharable and criticizable.
People who believe only in mental knowledge have a problem with the categories of biological and cultural knowledge. This is relevant for “knowledge sharing,” because their belief that knowledge can only be mental creates, as I have argued above, the problem that there can be no “knowledge sharing.” An example, however, can show how this exclusive belief in non-sharable, subjective, knowledge in the mind, seriously violates intuition.
Suppose we have two physicists, a Professor in a well-known University and Albert Einstein, and we also have the Theory of General Relativity. Einstein’s published expositions of General Relativity offer statements whose content has withstood criticisms, tests, and evaluations for about 95 years now. But the Professor’s beliefs about General Relativity are his alone, and are tested only by his experience, which is not directly sharable, since no one else has that experience. Thus, I think that what Einstein’s statements assert are much closer to what we mean by knowledge, i.e., information of very high intersubjective quality, than are the Professor’s beliefs, which are not directly sharable with others and therefore have no intersubjective quality, either positive or negative, at all.
Are Einstein’s expressions information? Of course they are, but the Professor’s beliefs are information too. Just because these are information doesn’t mean that they are not knowledge. Knowledge is a form of information; specifically, it is information that has withstood criticisms, tests, and evaluations. It can be structured information (data), less structured information, or relatively unstructured information, but in all cases it is not knowledge, in my view, unless it has survived better than its competitors under stress.
It may seem that when the Professor talks about his own beliefs he is sharing his knowledge. Analyzing this from the viewpoint of the Unified Theory, however: first, the Professor isn’t sharing his genetic or synaptic knowledge, i.e., his biological knowledge, since one can’t do that. Second, he’s not sharing his mental knowledge, i.e., either his psychological predispositions or his situational beliefs, since we lack telepathy, and there’s an epistemic gap between his mental beliefs and the statements he is offering.
Third, according to the Unified Theory he would be sharing “his” cultural knowledge only if the knowledge claims about Einstein’s Theory he’s offering, have withstood criticisms, testing, and evaluation in the past. If that’s not the case, then he’s not sharing “his knowledge,” even if he devoutly believes he is doing that. All he’s doing is putting forward some knowledge claims he says he believes in even though neither he nor we have any way of determining the correspondence of his mental beliefs with what he says he believes in.
In sum, though “Knowledge Sharing” is a term that appears to be more transparent than “KM,” it turns out that mental knowledge can’t be shared, and that cultural knowledge, while certainly sharable, needs to be distinguished from information if anyone is to know or measure just what knowledge has been shared in any situation. Further, at this point, those who believe that knowledge may be found in documents, or more generally in networks of knowledge claims, are not distinguishing cultural knowledge from cultural information and don’t appear to know how to do so. To gain this knowledge will require acceptance of a theory of knowledge that distinguishes it from information. But developing and accepting such a theory gets one as deeply into the philosophical weeds as, or more deeply than, the task of defining KM itself. So the supposed advantage of shifting to “knowledge sharing,” specifically, being able to talk to people in terms of a concept that they can easily understand, is an illusion. People think they understand what they mean by “knowledge sharing,” but unless they intend to equate it with cultural information sharing, they are mistaken, and the term “knowledge sharing,” in the end, turns out to be as opaque as KM itself.
Tags: KM Techniques · Knowledge Integration · Knowledge Management

IBM was not the first large organization to decide that “knowledge sharing” is an easier sell than “KM.” The World Bank preceded IBM in this move by more than a decade, long before the advent of Web 2.0 or Enterprise 2.0. The Bank decided to use “knowledge sharing” as the orienting idea in its knowledge-related program because the people trying to get the program adopted thought that “knowledge sharing” was easier to understand than “Knowledge Management,” because it has fewer contradictory conceptual elements. In the event, the decision to turn away from “KM” and toward “knowledge sharing” was successful in “selling” what became a very large program at the Bank.
From FY97 to FY02, this high-profile “knowledge sharing” program spent $280 million. It encouraged development of some 125 thematic groups (Communities of Practice), 24 advisory services, widespread web site use, and training in and use of storytelling. In the field of KM itself, this program was widely viewed as a successful, even a flagship, model, even though it had abandoned the label “KM” for the friendlier “knowledge sharing,” and many other organizations emulated its emphasis on CoPs and storytelling.
However, a World Bank Review of the program in 2003, while providing a perfunctory nod to its success in fostering a new knowledge sharing culture and a wide variety of new activities for aggregating and sharing knowledge, also concluded that “the Bank’s new activities have not been well-integrated into core lending and non-lending processes.” And the report mentions management shortfalls as accounting for this state of affairs. Specifically, management did not define roles and responsibilities for making knowledge sharing a way of doing business; nor did it provide incentives for incorporating knowledge sharing into operational processes. Further, there was “no systematic monitoring and evaluation of knowledge sharing programs and activities.” In other words, no structure of metrics was developed for the project, and no connection between the accomplishments of the program and the bank’s operational activities and day-to-day business could be established.
How could this evaluation occur? How could a program that was so well-funded, so well-staffed, and with access to the world’s top KM and knowledge sharing consultants be repudiated in such plain terms by evaluators? Well, first, it’s easy to believe that internal Bank politics were involved. There was a change in the top leadership of the Bank, and the new management was not friendly to the knowledge sharing outlook. I suspect the Bank evaluation team may have been selected to ensure a tough evaluation of the program.
Even assuming this, however, the fact remains that the overall tenor of the evaluation is hard to deny. There was no coupling of the new activities to operational lending and other day-to-day concerns. There was no structure of metrics to ensure accountability. There was no systematic monitoring and evaluation of program activities, and there were management shortfalls; indeed, it is not too much to say that there were Knowledge Management shortfalls that explain these problems. The question I want to raise is whether the failure to solve KM conceptual problems at the inception of the project, and the ensuing shift to a “knowledge sharing” rather than a more comprehensive KM orientation, may have been a primary factor in the project’s later problems. Unfortunately, I can’t answer this question, because I wasn’t close enough to the program, and the Bank’s evaluation report was definitely not written from a KM point of view. However, I do think that shifting from “KM” to “knowledge sharing” as an orienting concept can bring with it a variety of problems. In future posts, I’ll talk about those.
Tags: KM 2.0 · KM Software Tools · Knowledge Integration · Knowledge Making · Knowledge Management
July 14th, 2008 · Comments Off on “Knowledge Sharing:” IBM’s Change In Philosophy

IBM has placed Knowledge Sharing in the news again by announcing that it has “philosophically repositioned” its Knowledge Management practice around Knowledge Sharing. According to IBM’s Chris Cooper, “Management suggests control: control of process and control of environment. The sharing tag is quite important to us.” Of course, “Management” suggests control, these days, only to those who are being disingenuous because they want to move from one marketing “tag” to another that they think will be more effective; or to those who are totally unacquainted with the history of Management and Organization Theory since the 1930s. That history has shown a continuous movement away from a machine model of Management, toward one that emphasizes people, their self-organization, their processes, their culture, and their complex adaptive systems. Only the most unread, and, I think, naive, still adhere to the machine model of Management, and few of them will admit it.
Furthermore, it is highly debatable that very many Knowledge Management practitioners ever adhered to a model seeking control of processes and environments, since KM appeared as a formal field very late in the game, well after the revolution in Organization Theory and Management Science that invalidated the machine model. The management fads closest to a machine model to appear in recent years were probably Six Sigma, a Quality Management technique, and Business Process Reengineering (BPR), the hot topic in Management that KM was in part a reaction against. Neither of these has developed an appreciable following in the KM field.
In short, I believe that IBM’s shift of orientation has little to do with either the idea of “KM” itself, or with the notion that “KM” is associated with “control,” because that is flatly not true, and everyone in KM knows it. What I think the IBM shift in philosophy is about is its calculation that the idea of “knowledge sharing” can sell more Web 2.0 products and consulting than the idea of “KM” can. The large IT companies are currently in an intense battle for dominance of the Web 2.0 marketplace, and I believe that IBM has all it can handle and more from Oracle, and that its shift in “philosophy” has more to do with its theories about how to market Web 2.0 than with any association of the “KM” concept with “control” out there in management land.
Also, I think we should have, and should always have had, a certain degree of skepticism about the commitment of IT vendors to KM as a field of theory and practice. Such vendors can have no commitment of this kind to a management process. To expect them to be leaders in the “KM” field for very long is tantamount to believing in Santa Claus. They are not leaders in any field of practice other than IT practice. Instead, they will emphasize those aspects of any field of practice that are likely to give them a better justification for their software products. Right now, “KM” support is not as easy a sell as “knowledge sharing” support for Web 2.0 products, and that is the reason for IBM’s change in “philosophy.”
Tags: KM 2.0 · KM Techniques · Knowledge Integration · Knowledge Management

Introduction
In my two previous posts I’ve talked about the OODA loop framework and its relationships to the Decision Execution Cycle (DEC), Single- and Double-loop learning, and Knowledge Life Cycle (KLC) frameworks. Here I want to discuss the relationship of Recognition Primed Decision Making (RPD), a primary type of Naturalistic Decision Making (NDM), to OODA, the DEC, Single- and Double-loop learning, and the KLC.
Recognition Primed Decision Making and Rational Decision Making
The basic notion of RPD is that humans prefer to “first-pattern-match” in decision making, and then proceed by what is, essentially, sequential trial and error if the first pattern doesn’t match either their mental simulation of the likely consequences of their decision, or the actual consequences perceived in their post-decision experience. This is a bit different from animal decision making, since humans mentally simulate the results of their contemplated decisions in much more complex and detailed ways than animals, which appear to be limited to relatively simple expectations about consequences.
In Rational Decision Making (RDM), humans look for a number of plausible decision alternatives, comparatively evaluate them, and select the best option, or according to some notions “the optimal decision.” In the past 25 years, much research has shown that decision makers rarely use RDM, but prefer RPD, and sometimes other forms of NDM. The best-known research of this kind has been performed by Gary Klein and his collaborators at Gary Klein Associates. It is fair to say that this research has shown that RPD is functional in situations where RDM either is not, or is impractical to carry out. It also raises the possibility that RPD is the kind of decision making we ought to employ in most situations, restricting RDM to the relatively rare cases where the time, resources, and possible high benefit/cost ratio of an RDM procedure outweigh its far greater costs to implement.
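The contrast between the two models can be put schematically. The sketch below is purely illustrative: the function names and data structures are my own hypothetical scaffolding, not anything drawn from Klein’s research.

```python
# Illustrative contrast between RPD and RDM (hypothetical scaffolding).

def rpd_decide(situation, known_patterns, simulate_ok):
    """Recognition-Primed: take the first matching pattern whose mentally
    simulated consequences look acceptable; stop searching immediately."""
    for matches, action in known_patterns:
        if matches(situation) and simulate_ok(action, situation):
            return action          # first workable match wins
    return None                    # no pattern works: a knowledge gap

def rdm_decide(situation, generate_options, score):
    """Rational Decision Making: enumerate plausible alternatives,
    evaluate them comparatively, and select the best-scoring one."""
    options = generate_options(situation)
    return max(options, key=lambda a: score(a, situation), default=None)
```

The structural difference shows up in the control flow: `rpd_decide` exits on the first acceptable match, while `rdm_decide` must generate and score the whole option set before choosing.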
Relationships: RPD, RDM, OODA, DEC and the KLC
In earlier posts, I pointed out the distinctions between routine learning and creative problem solving, and between routine DECs and problem-solving DECs. I also pointed out that creative problem solving is performed in organizations through KLCs, and that these are composed of DECs motivated by a learning or problem-solving incentive system, rather than, primarily, by an incentive to close an instrumental behavior gap. I also related OODA to the DEC and the KLC by identifying simple or routine OODA loops with the DEC, and by making the case that, at the organizational level, routine DECs or OODA loops create activities and are organized into goal-directed processes built around the need to close instrumental behavior gaps. Mismatches between expectations and our experience show the existence of knowledge gaps and trigger KLCs whose purpose is to make and integrate new knowledge. Such KLCs are composed of multiple DECs or OODAs. But these are different from routine DECs or OODAs in that they are motivated by the incentive to learn and to solve a specific problem. Once problems are solved by new knowledge and the knowledge is integrated into an organization’s DOKB, it is available for routine decision making and business processing.
Now here is where RPD and RDM fit into this picture. First, in routine learning/decision making DECs/OODAs, RPD is the dominant, if not the sole, pattern of decision making, since in such DECs we always act according to our expectations about what the results of our actions will be, and this, further, implies that we are acting according to a recognized pattern coupling our contemplated actions and those expectations. Second, however, when RPD doesn’t work and our expectations are not fulfilled, we must recognize that our knowledge about the routine decision making situation didn’t work, that we have a knowledge gap, a problem, and that we must seek new knowledge that will work. We acquire this new knowledge by performing double-loop learning (DLL) through KLCs.
Third, it is in performing KLCs that we encounter the choice between RPD and RDM in an acute and complex way. I’ll develop this choice in the context of the KLC framework. In that framework, and assuming a clear formulation of the problem to be solved, we distinguish information acquisition, individual and group learning, knowledge claim formulation, and knowledge claim evaluation as sub-processes within knowledge production; knowledge and information broadcasting, searching and retrieving, teaching, and sharing within knowledge integration; and finally, use of the new knowledge in post-KLC decision making. The first thing to notice is that in the KLC context, unlike the routine decision making context, there are multiple DECs (or simple OODA loops), and hence multiple decisions. The second thing to notice is that the distinguishing mark of RDM is its focus on multiple decision alternatives, and then its evaluation of these and selection of one of them as the preferred alternative for action. This implies that RDM, unlike RPD, is a multiple decision loop process, like the KLC. I’m not sure that this point comes through very clearly in the literature when RPD is compared with RDM. There it’s made clear that RDM is far more complex than RPD and that it requires more time and resources. However, the focus is on the operational decision coming out of RDM, and on whether getting to the decision involved posing and evaluating alternatives and selecting among them; the idea that RDM is a process involving a pattern of decision loops, while RPD involves only one loop, seems to be overlooked.
Once we see that RDM is a complex pattern of decision loops, it becomes relevant to ask whether these loops themselves use RPD. And since RDM is a multiple decision loop process like the KLC, it also suggests that RDM may be a particular kind of KLC. Considering the second question first, RDM clearly does seem to be a KLC, since it involves formulating alternative knowledge claims in the form of decision alternatives, and then evaluating and selecting among them. But then we arrive at yet another question, closely related to the first: whether every KLC must use RDM, or whether we can have multiple decision loop KLCs that use RPD in every one of their decisions.
The answer is that KLCs need not be instances of RDM, but can use RPD to arrive quickly at a single knowledge claim about one’s decision, which is then evaluated quickly by mental simulation. In brief, KLCs can use RPD or RDM. In fact, things are more complex than that, since, in the multiple decision loops of any KLC, the RPD or RDM option always exists, so that various combinations of RPD and RDM are possible in any KLC.
What’s Rational in Decision Making?
Usually, research studies in NDM and/or RPD make much of the contrast with the RDM approach and in doing so manage sooner or later to imply that RDM is idealistic, excessively normative, and unrealistic to apply in most human decision making situations. They do this often with the clear implication that man is just not rational as the enlightenment and classical economics assumed, and that one of the great gifts of modern social science in general and decision making research over the past 25 years, in particular, is to unmask the fantasy of rationality that we have all been laboring under.
Now, I’m as happy as the next person to call attention to the simplicity of enlightenment assumptions and the conception of rationality found in classical economics, and also in classical democratic theory. However, I also think it’s a bit unfair to just assume that a particular decision making model is “the” rational model of decision making, while all others are somehow non-rational. From my point of view, the RDM model of decision making is not characterized by rationality. Nor is the RPD model non-rational or irrational. These labels are not descriptive of the central features of these models, and I also don’t believe that either model gets at the central features of rationality in a modern context.
We can begin to see this more clearly if we consider the distinguishing features of RPD and RDM and the contexts in which both types of models are used. The first context we discussed above for RPD is that of routine action and learning, where the right thing to do is already known. In that context, provided that previous behavior and its results have been assessed with an open and critical mind, and that the first pattern is consistent with these results, it is always rational to apply RPD, simply because we have no reason to believe that our previous knowledge is mistaken. On the other hand, if we apply RPD when we have failed to see problems, because we have allowed what we expect to see to color our experience, or because we haven’t been diligent enough to look for problems indicated by weak signals, then we certainly have departed from “rationality” in a very meaningful sense of the term. That is, we have applied RPD in a situation where it doesn’t apply because we have refused to open our minds to reality, and that is one of the plainest indicators of irrationality there is. So, in the context of routine action, learning, and decision making, RPD may be either rational or irrational to apply, depending on the context. And the most important point here is that RPD can embody rational decision making in this context, so insofar as anyone claims a monopoly on rationality for the old RDM model, I think they are simply mischaracterizing rationality.
Moving to the KLC context, here too there are contexts where RPD is either rational or irrational, as the case may be, and where RDM, also, is either rational or irrational. For both RPD and RDM, there are external and internal aspects of rationality. The external aspects relate to whether RPD or RDM should be used in the context of a particular KLC. This question highlights a meta-decision about which type of KLC to use in a given context. This decision may be a routine one that we don’t need a KLC to figure out. That is, it can be obvious that there’s no point in using RDM, because an operational decision needs to be made before a KLC using RDM, that is, one posing and evaluating alternatives, can be completed. Or, alternatively, it can be equally obvious that plenty of time and resources are available, and that the operational decision giving rise to the KLC is important enough to warrant application of RDM. In either case, it’s perfectly rational to select one or the other approach to the KLC, and equally, it would be irrational to deny our previous experience and decide in favor of RDM when there is no time to complete the process, or to decide in favor of RPD when it’s really important to avoid error and we have the time and resources to go through an RDM process. In short, when the decision about whether to use RPD or RDM is routine, the rational choice is clear, and that meta-decision would not require application of RDM.
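When the meta-decision is routine, the rule just described reduces to a few lines. This is only a sketch: the three inputs are hypothetical simplifications of the “time, resources, and importance” considerations discussed above.

```python
# Hypothetical sketch of the routine meta-decision between RPD and RDM.

def choose_decision_model(time_available, resources_available, high_stakes):
    """Routine case: RDM only when there is time, there are resources,
    and the operational decision is important enough to warrant it."""
    if not time_available:
        return "RPD"   # no time to complete an RDM process
    if high_stakes and resources_available:
        return "RDM"   # error avoidance matters and RDM is affordable
    return "RPD"       # default to the cheaper first-pattern approach
```

The point of the sketch is only that, in the routine case, the choice is determined by the context; when none of these conditions is clear, the meta-decision itself becomes non-routine, as discussed next.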
Of course, the meta-decision about whether to apply RPD or RDM may not be a routine one. It may not be clear which of these should be applied. If that’s the case, one would have to go through another meta-KLC to decide whether RPD or RDM was appropriate for this new KLC, and then the pattern of choice would repeat. Of course, my intention is not so much to point out that the choice between RPD and RDM can be complex, but rather to point out that it’s sometimes not clear or obvious whether RPD or RDM is the appropriate choice for working through a KLC in any context. RDM is not necessarily the rational choice, nor is it necessarily the irrational choice. From an external viewpoint, the assignment of “rationality” to applying one model, and of non-rationality or irrationality to another, is not always very clear.
Next, once a choice is made about whether a particular KLC will use only RPD, or at least some RDM, then even if the initial choice was rational, that won’t guarantee that the application of either RPD or RDM will be rational in the specific context. Can an application of RPD in the KLC context be irrational? I think the answer is yes, since once a new decision pattern is arrived at by an individual, their mental simulation and evaluation of the likely consequences of the new decision may be incoherent or inconsistent. Also, in cases where the contemplated decision is very risky, there may be no attempt at safe-fail experiments, even when there is time for them. Can it be rational? Again, I think the answer is yes, provided that the mental simulation is coherent and consistent, and that safe-fail experiments are used where there are both time and resources.
The possibility of irrationality is just as great in applying RDM. That is, even assuming that an application of RDM develops alternative decision possibilities, that’s no indication that the RDM will be executed rationally. In particular, the evaluation perspectives used in the RDM process can be quite irrational. They may not require consistency, or coherence, or fair comparison of alternatives. They may rely on authority as the dominant evaluation criterion. They may not employ safe-fail experiments in risky situations to pilot test decisions.
There is yet another difficulty in associating the RDM model with rationality, and that is that there is no longer agreement on the very foundation of the idea of rationality. Modern philosophy has shown that the classical conception of rationality requiring justification of one’s view in terms of some foundation that itself cannot be questioned, is no longer viable. Since there are no certain foundations for our knowledge, this concept of rationality turns out to be limited in the sense that it is always relative to foundations that themselves can’t be justified. So, according to this view, RDM models that implement a process of attempting to justify one alternative decision relative to others have only limited rationality. There’s an alternative concept of rationality available that says that rationality in the RDM requires fair critical comparison of competing decision alternatives and acceptance of the decision that best survives that fair critical comparison. That’s the view I favor, but I think I can safely say that current applications of the RDM don’t embody it. So, in my view they are not rational.
Finally, in addition to showing how RPD and RDM decision making patterns relate to OODA, the DEC, and the KLC, I’ve also tried to show that the association of RPD with non-rational, or even irrational, decision making, and the contrasting association of RDM with rationality, are both invalid. Intuition is not the same as irrationality, and it may frequently be rational to rely on it in decision making. Also, RDM is not the same as rationality, even when ample time and resources are available to apply it, since RDM can be, and currently is, used in many irrational ways.
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making
June 18th, 2008 · Comments Off on OODA, the DEC, and the KLC

Introduction
In my last post, I examined John Boyd’s OODA Loop framework and discussed its relationship to double-loop learning. I mentioned there that OODA was one of a number of similar Decision Learning Cycle (DLC) frameworks developed by various writers over the years, including my own Decision Execution Cycle (DEC) framework. In this post, I’ll compare the OODA and DEC Cycles, and then, because the DEC is coupled in my own work with the Knowledge Life Cycle (KLC), I’ll also write about the relationship of both to it.
The DEC and the DOKB
Let’s begin with routine decision making and learning. When we see a gap between the way the world is and the way we want it to be, we typically decide what we have to do to close the gap between the two. Once we Decide, we Act. After acting, we Monitor the results of our actions. Finally, we Evaluate the results we’ve monitored, and then if we haven’t reached our goal, and sometimes even if we have, we begin the cycle of decision making again. Let’s call this pattern, illustrated just below, the Decision Execution Cycle (DEC). I’ve written about the DEC on many occasions previously, but this treatment is a revision of previous accounts in that it shifts the locus of planning outside the DEC and into the broader area of knowledge making to be discussed briefly later.

The Decision Execution Cycle
The generic task patterns or phases of any DEC are: Deciding, Acting, Monitoring, and Evaluating. Deciding means forming an intention to do something involving an action or sequence of actions. This may mean selecting among alternative decision options in a specific situation, or it may mean deciding on the basis of the first decision-consequence pattern we can call up from our previous knowledge. Sometimes we form such intentions in the context of broader plans of actions. Planning, however, is not part of a routine DEC. It is, in some part, a knowledge making activity.
Acting means implementing a decision by performing the specific activity or activities decided upon. Acting involves using the results of planning and deciding along with other knowledge to implement decisions, but acting does not, by itself, produce new knowledge, except knowledge that an act was performed.
Monitoring means retrospectively tracking and describing activities and their outcomes. Monitoring involves gathering data and information, and using previous knowledge routinely to produce new descriptive, impact-related, and predictive knowledge about the results of acting.
Evaluating means retrospectively assessing the previously monitored activities and outcomes as a value network. Evaluating means using the results of monitoring, along with previous knowledge to assess the results of acting and to produce knowledge about the descriptive gaps between business outcomes and tactical objectives and about the normative (benefits and costs) impact of these gaps between outcomes and objectives.
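The four phases just defined can be sketched as a loop. This is a minimal illustration of the DEC’s control flow, not an implementation of the framework; all of the function parameters are hypothetical stand-ins.

```python
# Minimal sketch of the Decision Execution Cycle's control flow.

def decision_execution_cycle(goal, state, decide, act, monitor, evaluate,
                             max_cycles=100):
    """Repeat Decide -> Act -> Monitor -> Evaluate until the evaluated
    gap between outcomes and objectives closes (or we give up)."""
    for _ in range(max_cycles):
        intention = decide(goal, state)   # form an intention to act
        state = act(intention, state)     # implement the decision
        observed = monitor(state)         # track and describe outcomes
        gap = evaluate(goal, observed)    # assess the outcome/objective gap
        if gap == 0:
            break                         # goal reached; cycle ends
    return state
```

Note that planning appears nowhere in the loop: consistent with the revision described above, it sits outside the routine cycle, in knowledge making.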
The DEC applies to any business process (in a manner to be discussed shortly), and monitoring, evaluating, deciding, and acting all use previous knowledge. Where does the previous knowledge come from? It comes most immediately from what we will call the Distributed Organizational Knowledge Base (DOKB). The DOKB is the combination of biological knowledge found in genes and synaptic structures; previous belief knowledge and belief predispositions of enterprise agents; and artifact-based explicit knowledge claims and meta-information (or meta-claims) stored in both electronic and non-electronic enterprise repositories. The figure below illustrates the DOKB.

Previous Knowledge: The Distributed Organizational Knowledge Base
Routine DECs use previous knowledge in the DOKB. In fact, the DOKB is a very important source of patterns for pattern-based or Recognition-Primed Decision Making. However, the DEC also adds new knowledge to the DOKB. It does this in two ways. First, DECs that produce mismatches between expectations and perceived reality in monitoring and evaluation call into question or refute previous knowledge. Second, DECs also provide new knowledge about specific situations, conditions, circumstances, and events.
The DEC can be easily related to a Complex Adaptive Systems framework and also to a business process perspective. From the viewpoint of a CAS framework, the DEC becomes the basic unit that generates transactional activity. And from a DEC point of view, processes are inter-related sequences of goal-directed DECs.

DECs and Business Processes
The connection between the DEC and the Knowledge Life Cycle and also the connection between routine and creative learning arises out of mismatches between expectations and perceived reality. These mismatches tell us that previous general knowledge is wrong or unreliable, and lead us to initiate creative learning in Knowledge Life Cycles. I’ll discuss this in more detail after we compare the DEC and OODA.
OODA and the DEC
In my last post, I discussed Boyd’s OODA loop at length. Here’s a quick review. Observation refers to the task of sensing the world both external and internal to oneself and of feeding the results of sensing on to the task of Orientation. Orientation refers to the task of fitting the observations to our predispositions and expectations about the world in order to arrive at an interpretation of the situation one is facing. It involves various kinds of filtering and processing about which more will be said in a moment, and also formulating decision alternatives. Deciding is the process of reviewing alternative actions and selecting an alternative. Boyd views the decision as a hypothesis. And Acting is the process of implementing one’s alternative. Boyd views implementing as testing a hypothesis. The results of Acting are available for Observation, and the loop starts again.
Comparing the DEC and the OODA loop, it appears that Deciding and Acting match up one-to-one in the two DLC frameworks. The question is how Observation matches up with Monitoring, and Orientation with Evaluation.
First, I don’t think the Monitoring phase in the DEC is an exact match for Observation in the OODA loop, mainly because it goes further into impact analysis and prediction than Observation does in OODA. That is, part of the OODA Orientation phase is placed in Monitoring by the DEC. However, this difference seems more a question of where the cut is made between the two phases, since the impact analysis and prediction activities are present in both frameworks. Boyd may have made the cut between Observation and Orientation where he did because he was thinking in terms of isolating experience from interpretation of the situation as much as he could. My own reason for distinguishing monitoring and evaluation in the way I did was to clearly separate a descriptive from an assessment phase, with the latter being the one where mismatches and future predictions would be assessed for significance. Thus, in the DEC the combination of Monitoring and Evaluation is needed to get to the final assessment of a mismatch. In the OODA loop the mismatch, if one exists, is determined in the Orientation phase. In the end, then, there doesn’t seem to be a big difference between the two frameworks on where mismatches are finally determined.
Second, however, even noting that the DEC phases of monitoring and evaluation are probably encompassed by Boyd’s Observation and Orientation Phases, I don’t think the reverse is true. It’s not that the OODA Loop is different from the DEC in its inclusion of such factors as new information, genetic heritage, cultural traditions, and previous experience, because the DEC picks up these either through monitoring or from the DOKB (which corresponds to Boyd’s results of Orientation and previous OODA Loops). Rather, the OODA loop includes more than the DEC in the degree of its incorporation of analysis and synthesis to destroy old knowledge and create new knowledge.
In the DEC, analysis is used in monitoring and evaluation to determine the agreement of the consequences of action with expectations and also to assess costs and benefits. If a mismatch is found, this may result in our deciding that some aspect of our previous knowledge is false. But Orientation in the OODA loop uses analysis both to find mismatches and also to evaluate syntheses once they are arrived at, so, in this respect, it seems at first blush that Boyd’s final OODA framework is more comprehensive than the DEC.
However, I think that this greater comprehensiveness of OODA is actually an error by Boyd. Specifically, as I argued in my previous post, Boyd presents the OODA loop as if its phases delineate one DLC, so that Orientation is presented as just a phase of a single DLC, as is Evaluation in the DEC. However, when one looks at what is involved in analysis and synthesis in the process of making new knowledge, it’s easy to see that multiple OODA loops, not a single OODA loop, are involved in knowledge processing. So, in extending the OODA loop to multiple OODA loop activities, Boyd actually gets into a logical inconsistency, since OODA loops that contain double-loop learning and the making of new knowledge are not single OODA loops. Apart from this problem, however, how adequate is Boyd’s characterization of double-loop learning in terms of the analytical/synthetic loop within his Orientation phase? In the next section, I’ll discuss this in more detail when I get into the KLC.
The DEC, the KLC, and OODA
In limiting the DEC to routine single-loop learning, rather than creative double-loop learning, I’ve always been very conscious of my own personal experience in both spheres. Take a routine activity such as driving an automobile: much of the time the mechanics of driving are on automatic pilot and I’m unaware of conscious decision making, but when conditions on the road make me conscious of what I’m doing, I’m very aware of the DLC of Deciding, Acting, Monitoring, and Evaluating, and of the continuous nature of DEC loop processing until I get to where I’m going. I’m sure that if I thought in terms of Boyd’s early and relatively simple formulations of the OODA loop, that framework could be applied equally well to such an automobile trip. With few exceptions, such processing is routine from a learning point of view, even if it involves adjusting to sudden and unexpected occurrences, since I am always using some aspect of previous knowledge to make adjustments and am even using first pattern matches in most instances.
On the other hand, when routine decision making produces a mismatch, and I cannot retrieve from memory, or from sources near to hand, a decision that gives promise of working, then I have one of Popper’s, and also Boyd’s, problems. I cannot solve it within the confines of a single DEC or OODA loop, because I must formulate the problem, think up or otherwise arrive at new tentative solutions, and perform error elimination before I can decide on a likely solution and return to the routine decision making I temporarily left to solve the problem. Each of these tasks alone requires at least a single DEC or OODA loop, and perhaps more than one. Collectively, these tasks define what we might call a Problem Life Cycle (PLC), in our view another name for a double-loop learning cycle. Here’s a graphic of the origin of the PLC and double-loop learning in routine DECs motivated by ordinary instrumental behavior gaps, followed by another of the relationship of DECs motivated by the learning or problem-solving incentive to the PLC.

A Routine DEC and the Origin of the PLC/Double-loop Learning

The Problem Life Cycle and DECs
This analysis of how double-loop learning and the problem life cycle emerge out of routine DECs applies not only to DECs, but also to a version of the OODA loop that is not quite so expansive as Boyd’s final version. That is, if Orientation in the OODA loop is limited to “analysis” and doesn’t include “synthesis,” and if further analysis and all synthesis are left for the PLC, then the OODA loop, just as readily as the DEC, can be viewed as capable of giving rise to PLCs. Put another way, as things stand the OODA framework apparently assumes that a single OODA loop can encompass either single-loop or double-loop learning, as the case may be. I’ve argued previously, however, that this isn’t possible, because multiple OODA loops are necessary to successfully perform orientations that produce new knowledge. If this argument is valid, the current formulation of the OODA loop involves a contradiction: a claim that a single loop describes a situation where multiple OODA loops actually apply. To resolve this contradiction one needs to reformulate OODA along the lines I’ve used for the DEC. That is, one needs to view OODA not as containing double-loop learning, but as a single-loop learning process giving rise to double-loop learning in the face of mismatches.
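The reformulation argued here, a single-loop process that escalates to double-loop learning only when no known pattern promises to work, can be illustrated with a minimal sketch. All function names, dictionary keys, and the stubbed-out error elimination step are my own illustrative shorthand, not part of the DEC or OODA formulations themselves:

```python
# Toy sketch: a routine single-loop DEC falls through to a Problem Life Cycle
# (double-loop learning) only when no pattern in the existing repertoire
# promises to close the gap. All names here are illustrative assumptions.

def routine_dec(gap, repertoire):
    """Single-loop learning: retrieve a known pattern addressing the gap, if any."""
    for pattern in repertoire:
        if pattern["addresses"](gap):
            return pattern["action"]
    return None  # mismatch: no known pattern applies

def problem_life_cycle(gap, repertoire):
    """Double-loop learning: formulate the problem, generate tentative
    solutions, eliminate errors, and integrate the survivor as new knowledge."""
    tentative = [{"addresses": lambda g: True, "action": "novel solution"}]
    surviving = tentative[0]          # error elimination (stubbed out here)
    repertoire.append(surviving)      # knowledge integration: repertoire grows
    return surviving["action"]

def handle_gap(gap, repertoire):
    action = routine_dec(gap, repertoire)
    if action is not None:
        return action                              # routine DEC suffices
    return problem_life_cycle(gap, repertoire)     # escalate to a PLC
```

The design point is that the PLC lives outside the routine loop: after a PLC runs, the repertoire is larger, so the gap that triggered it is handled routinely the next time it appears.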
Such a reformulation is actually in the spirit of Boyd’s own thought, since he believed that mismatches (problems) drive creativity and the growth of new knowledge. Specifically, he believed that analysis of our current knowledge, no matter how good it was, would eventually give rise to mismatches and to “destruction” of our model or models, at which point we would have to proceed by breaking these models down into their elementary patterns and then arrive at new and better knowledge claim networks that matched our experience by reassembling the patterns in a new synthesis. This process of analysis and synthesis seems very reminiscent of the PLC. So there seems no reason why Boyd’s OODA framework couldn’t be reformulated as an OODA/PLC framework, paralleling the DEC/PLC framework sketched out here.
This brings us to the issue raised above of whether Boyd’s account of new knowledge creation in “Destruction and Creation” and “The Conceptual Spiral” is an adequate account of that process. In connection with this issue, I think that Boyd’s notion that we first break down conceptual wholes into elemental patterns and then re-synthesize those patterns is not a wholly adequate theory of how we make new knowledge. Basically, Boyd is saying that we go through conceptual breakdown and then recombine our conceptual patterns in novel ways to create something new, following which we test our results against the world. This is good as far as it goes, and probably covers the processes of conceptual combination, and perhaps conceptual blending, that have been the subjects of recent research. However, apart from Boyd’s questionable uses of the terms “deduction” and “induction” to describe the breakdown and recombination processes, there is the problem of accounting for the creation of novel patterns through creative processes that go beyond the aggregation of previous patterns. Not everything we create existed previously in another form. Emergence of new forms at higher levels of process is a fact of the universe. There are new things and new ideas under the sun.
The problem solving pattern of clearly formulating problems, arriving at new tentative solutions, and then eliminating errors through criticisms, tests, and evaluations, encompasses Boyd’s notion of the analytical/synthetic loop. Since it allows for radical creativity of new patterns, I think it should be used instead.
We now come to the Knowledge Life Cycle (KLC), a framework I’ve developed in previous work along with Mark McElroy. KLCs arise as human reactions to mismatches that occur in routine business process DECs. In brief, the KLC includes problem formulation, making or discovering new knowledge, and knowledge integration. The first two of these are essentially the PLC projected to the group, organizational, or supra-organizational levels of analysis. Knowledge Integration is the process of communicating new knowledge to the remainder of an organization or system. Like the PLC, the KLC is composed of DECs or, if one prefers, OODA loops, and like the PLC, these are unified by a learning incentive rather than a motivation to close an operational instrumental behavior gap.
In the KLC, Problem Claim Formulation corresponds to the problem phase of Popper’s tetradic schema; Information Acquisition, Individual and Group Learning, and Knowledge Claim Formulation correspond to the process of developing tentative solutions; and Knowledge Claim Evaluation corresponds to Error Elimination. The Knowledge Integration process is broken into four parallel sub-processes: Knowledge and Information Broadcasting, Searching and Retrieving, Teaching, and Knowledge Sharing. The results of Knowledge Integration, and of KLCs at levels of analysis below the organizational level, exist in an organization’s DOKB (Distributed Organizational Knowledge Base) and are used later by routine business processing, which is composed of routine DECs or, if you prefer, OODAs. In organizations, KLCs at many different levels are being generated constantly by DECs and OODAs. While routine DECs and OODAs generate knowledge about specific conditions and patterns, novel and/or general knowledge is generated by double-loop learning in KLCs. You can find a more detailed account of the KLC here.
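The correspondence just described between KLC sub-processes and Popper’s tetradic schema can be summarized in a small lookup. This is only a restatement of the text above in data-structure form; the dictionary keys and shorthand labels are mine:

```python
# Mapping of KLC sub-processes onto Popper's tetradic schema
# P(1) -> TS -> EE -> P(2), as described in the text above.
klc_to_popper = {
    "Problem Claim Formulation":    "P(1): problem",
    "Information Acquisition":      "TS: tentative solutions",
    "Individual and Group Learning": "TS: tentative solutions",
    "Knowledge Claim Formulation":  "TS: tentative solutions",
    "Knowledge Claim Evaluation":   "EE: error elimination",
}

# Knowledge Integration then distributes surviving claims through
# four parallel sub-processes into the organization's DOKB.
knowledge_integration = [
    "Knowledge and Information Broadcasting",
    "Searching and Retrieving",
    "Teaching",
    "Knowledge Sharing",
]
```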
In sum, at the organizational level, routine DECs or OODA loops create activities and are organized into goal-directed processes organized around the need to close instrumental behavior gaps. Mismatches between expectations and our experience show the existence of knowledge gaps and trigger KLCs, whose purpose is to make and integrate new knowledge. Such KLCs are composed of multiple DECs or OODAs, but these differ from routine DECs or OODAs in that they are motivated by the incentive to learn and to solve a specific problem. Once problems are solved by new knowledge and the knowledge is integrated into an organization’s DOKB, it is available for routine business processing.
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making

Decision and Learning Cycles
There are a number of examples in the organizational learning field of frameworks that conjecture a cyclic agent behavioral process of decision, action, experiential feedback, and then adjustment followed by new action. Such frameworks are not new. Russell Ackoff, and Kolb and Fry, in the 1970s, Kolb in the 1980s, and Haeckel in the 1990s all offer similar four-phase frameworks, which I will call Decision and Learning Cycles (DLCs). Another, slightly different, three-phase formulation of the DLC idea is Ralph Stacey’s (1996): “Choose, Act, Discover.” My own four-phase version of the DLC is called the Decision Execution Cycle (DEC). I’ll describe its phases in a later post.
It is motivated by a perceived gap between an agent’s goal state and the actual state of the world the agent is trying to manage. Since such gaps exist almost all the time that humans are awake, DLCs are going on all the time, and some who have written about them view life itself as reducible to successive DLCs. This last is probably a bit overdrawn, since we are asleep part of the time, may be unconscious at other times, and, at still other times, may be engaged in activity for its own sake rather than to close some gap between what we want and the way the world is. However, it’s certainly true that very much of life is about instrumental behavior whose purpose is to close such gaps.
John Boyd’s OODA Loop
One of the most influential versions of the DLC was developed by a near-legendary fighter pilot, instructor, and military strategist, Col. John Boyd, who died in 1997 after contributing mightily to military thinking in the areas of strategy and tactics, and also to the design of fighter aircraft. Boyd’s version of the DLC has four phases: Observe, Orient, Decide, and Act, aligned in a loop most frequently referred to as the OODA loop. Most of Boyd’s work on strategy, tactics, and the OODA loop appears in briefings that Boyd gave and gradually refined over nearly 20 years.
The OODA loop begins to appear in his briefings in the late 1970s, at first in the form of seemingly casual mentions emphasizing the importance of performing one’s own OODA loops more and more rapidly, and in any case more rapidly than one’s enemies perform theirs. In addition, the importance of “getting inside” the OODA loops of one’s enemies, and particularly of distorting the “observe” and “orient” phases of their OODA loops in order to influence their decisions and actions, is heavily emphasized. This notion is then tied to doctrines about “maneuver warfare,” rapidity of movement, and intelligence that became very influential in the design of the F-16 fighter, and in the strategies and tactics used by the United States in Desert Storm and in the opening, very successful, activities of the Iraq War. In addition, the OODA loop became identified by Boyd as the C2 or “Command and Control” loop, and attempts to undermine the OODA loops of enemies are seen by him as attempts to win battles and wars by undermining the opponent’s C2.
The conception of OODA that most people came away with from Boyd’s briefings during the late 1970s and the 1980s was of a fairly simple DLC involving successive phases. The meaning of Observe, Decide, and Act seemed pretty plain, and if the Orient phase was a little more complex than the others, its function of relating observation to thinking and cognitive processing was not understood as impacting heavily on the simple pattern of Observe-Orient-Decide-Act with a feedback loop connecting Act and Observe. However, Boyd’s views on OODA were never that simple, as is hinted at in his 1976 paper “Destruction and Creation,” an epistemological statement reflecting the depth of his thinking that remained important to the development of his views until he died in 1997. And as Boyd introduced more and more of the presentation components of his “Discourse on Winning and Losing,” now often known as “the Green Book,” it became clear that Boyd’s OODA construct incorporates many rich perspectives from epistemology, physics, general systems theory, cybernetics, information theory, Darwinian theory, complex adaptive systems theory, and cognitive science.
The definitive work developing this richer perspective on Boyd and the OODA loop is Frans Osinga’s Ph.D. thesis (2005) and subsequent book, both entitled Science, Strategy, and War (2006). Osinga dissects Boyd’s briefings and his “Destruction and Creation” paper and meticulously relates Boyd’s statements in the briefings and paper to the books that Boyd’s archives indicate he read. He lets Boyd’s words speak for themselves, but nevertheless shows that perspective on what Boyd meant is gained by relating his words to the voluminous literature Boyd read but did not explicitly cite.
In his last briefing, completed in June of 1995 and called “The Essence of Winning and Losing,” Boyd summarizes his views in five slides. The briefing contains the only graphic of the OODA loop ever presented by Boyd. It begins with several “key statements” (p. 2):
1. “Without our genetic heritage, cultural traditions, and previous experiences, we do not possess an implicit repertoire of psychological skills shaped by environments and changes that have been previously experienced.”
2. “Without analysis and synthesis, across a variety of domains or across a variety of competing/independent channels of information, we cannot evolve new repertoires to deal with unfamiliar phenomena or unforeseen change.”
3. “Without a many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection (across these many different domains or channels of information), we cannot even do analysis and synthesis.”
4. “Without OODA loops we can neither sense, hence observe, thereby collect a variety of information for the above processes, nor decide as well as implement actions in accord with those processes.” And, “put another way”:
5. “Without OODA Loops embracing all the above and without the ability to get inside other OODA loops (or other environments), we will find it impossible to comprehend, shape, adapt to, and in turn be shaped by an unfolding, evolving reality that is uncertain, ever-changing, unpredictable.”

BOYD’s OODA Loop
(Adapted from John Boyd, “The Essence of Winning and Losing,” Rev. 1996)
The next slide is Boyd’s OODA graphic, given just above, and then Boyd ends his presentation with the statement (p. 5):
6. “The key statements of this presentation, the OODA Loop Sketch and related insights represent an evolving, open-ended, far from equilibrium process of self-organization, emergence and natural selection.”
The graphic and the key statements indicate that there is much more to the OODA loop than the simple Observe-Orient-Decide-Act with a feedback loop connecting Act and Observe. In the simple version, Observation refers to the task of sensing the world, both external and internal to oneself, and of feeding the results of sensing on to the task of Orientation. Orientation refers to the task of fitting the observations to our predispositions and expectations about the world in order to arrive at an interpretation of the situation one is facing. It involves various kinds of filtering and processing, about which more will be said in a moment, and also formulating decision alternatives. Deciding is the process of reviewing alternative actions and selecting one. Boyd views the decision as a hypothesis. And Acting is the process of implementing one’s alternative. Boyd views implementing as testing a hypothesis. The results of Acting are available for Observation, and the loop starts again.
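The simple version of the loop just described can be sketched as a minimal state cycle. This is a toy illustration of the four phases and the Act-to-Observe feedback, not Boyd’s own formulation; every name and data shape here is an assumption of the sketch:

```python
# A toy sketch of the simple OODA cycle: Observe -> Orient -> Decide -> Act,
# with the result of Act fed back into the next Observation.
# All function and variable names are illustrative, not Boyd's.

def observe(world_state):
    """Sense the world, external and internal, and pass results to Orientation."""
    return {"sensed": world_state}

def orient(observation, predispositions):
    """Fit observations to predispositions; formulate decision alternatives."""
    return [alt for alt in predispositions if alt["matches"](observation)]

def decide(alternatives):
    """Select an alternative -- for Boyd, the decision is a hypothesis."""
    return alternatives[0] if alternatives else None

def act(decision, world_state):
    """Implement the alternative -- for Boyd, acting tests the hypothesis."""
    return decision["apply"](world_state) if decision else world_state

def ooda_cycle(world_state, predispositions):
    obs = observe(world_state)
    alternatives = orient(obs, predispositions)
    choice = decide(alternatives)
    # The result of Acting is available for the next Observation.
    return act(choice, world_state)

# Example: a predisposition that increments a counter while it stays low.
repertoire = [{"matches": lambda o: o["sensed"] < 10,
               "apply": lambda s: s + 1}]
state = 0
for _ in range(3):
    state = ooda_cycle(state, repertoire)
# state is now 3
```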
In Boyd’s later specification of the loop in the above figure, he presents a much richer notion of the OODA process, and one that draws much more on his work in his “Destruction and Creation” paper and his later briefing on “The Conceptual Spiral.” First, in this specification, the heart of OODA is Orientation. It provides guidance and implicit control to the rest of OODA, including Observing, Deciding, and Acting. Thus, Orientation is the focus or harmonizing agent in the OODA loop. It:
“shapes the way we interact with the environment-hence orientation shapes the way we observe, the way we decide, the way we act. . . Orientation shapes the character of present observation-orientation-decision-action loops-while these present loops shape the character of future orientation.” (From Boyd, 1987, “Organic Design and Control,” p. 16)
Second, also:
“. . . orientation is an interactive process of many-sided implicit cross-referencing projections, empathies, correlations, and rejections that is shaped by and shapes the interplay of genetic heritage, cultural tradition, previous experiences, and unfolding circumstances.” (p. 12)
Third:
“Orientation, seen as a result, represents images, views, or impressions of the world shaped by genetic heritage, cultural tradition, previous experiences, and unfolding circumstances.” (p. 13)
Fourth, what are the “images, views, or impressions of the world” resulting from orientation? That’s not made entirely clear by Boyd, but Osinga (2005, p. 236) thinks that these words are “synonyms for mental modules, schemata, memes, and tacit knowledge.” And, in my view, they are also suggestive of other closely related ideas such as paradigms, conceptual frameworks, conscious knowledge, knowledge predispositions, beliefs, values, etc. So orientations, and changes in orientation over time, are the chief source of our patterns of mental knowledge and of the changes in those patterns.
Fifth, in “The Conceptual Spiral” and in “Destruction and Creation,” Boyd also gives considerable attention to the idea of mismatches between our expectations, theories, and beliefs, on the one hand, and our experiences of the world, on the other, and to the processes we use to get rid of such mismatches and to adapt our understanding of the world. His notion of “orientation” can’t be fully understood without reviewing his basic notions from these papers and relating them to orientation.
Beginning with how orientation changes: Boyd thought of the world as exhibiting both uncertainty and novelty. And he thought that humans both encounter novelty and create it to overcome uncertainty. To understand novel patterns, we have to break them down into the features that make up such patterns. Different people can break things down in different ways, so different features and parts can always be found. Whatever the differences, such breakdowns are processes of reduction, and Boyd called them analyses.
Analyses themselves are patterns, and can be further broken down into more elemental parts. These parts can then be used in different ways to create new patterns. New patterns are created by finding common features among the parts that connect them conceptually. This process Boyd called synthesis. If we test the results of this process we get “an analytical/synthetic feedback loop for comprehending, shaping and adapting” to the world.
When we arrive at a novel synthesis that reduces uncertainty, we proceed to apply it in our OODA loops and refine the knowledge and mental models (the patterns) we have created. However, Boyd had drawn from his study of Heisenberg, Gödel, and the Second Law of Thermodynamics the idea that the more we refine our mental models, the more likely we are to find new uncertainty. That is, the very success of a new synthesis eventually leads to the discovery of new mismatches, new gaps in our framework, and new uncertainties that, once again, we resolve by employing the analytical/synthetic feedback loop. Thus, the driving force behind our generating new ideas, systems, and processes, i.e. new knowledge, mental models, and patterns, is the appearance of mismatches, and these, in turn, inevitably arise from the new ideas themselves as, over time, we refine them and attempt to increase the accuracy and precision of new analyses based on them.
In the OODA loop, Observation is guided by the results of previous Orientations, that is, by already existing knowledge, so what we observe is influenced by that knowledge. But in addition, once the results of Observation are passed on to Orientation, its various components interact to guide us in recognizing mismatches and to influence the analyses and syntheses we will perform in changing patterns of Orientation. Boyd’s ideas about analysis and synthesis explain how it is possible for Orientation to develop decision alternatives that can be provided to the Decision phase for selection. They also indicate that Orientation produces new ideas, beliefs, mental models, etc. that did not exist before.
The OODA Loop and Double-Loop Learning
Osinga (2005, p. 271) views the OODA loop, and specifically Orientation, as a double-loop learning process, and Hall, in his article “Biological Nature of Knowledge in the Learning Organization” (Vol. 12, April 2005, p. 182), views it as consistent with Popper’s tetradic evolutionary theory of knowledge and associates Boyd’s “destruction” and “analysis” with Popper’s “criticism,” and Boyd’s “creation” and “synthesis” with Popper’s “tentative theories or solutions.” Hall might have gone even further and associated Boyd’s “mismatches” with Popper’s “problems,” which, in the mental or psychological sphere, according to Popper, are experiences contrary to our expectations, a view very similar to Boyd’s.
However, I don’t think a simple association of double-loop (or creative) learning with OODA and Orientation works. The reason is that there are different types of OODA loops, and the distinction between routine and creative OODA loops is one that Boyd might have drawn, but never really arrived at explicitly. Routine OODA loops are those triggered by Observations and Orientations that show (a) a gap between the state of the world we want to see and the state of the world that exists, (b) the presence of a pattern in our Orientation repertoire that we believe will be effective in helping us to close the gap, and (c) no mismatch between the state of the world and the expectations we have had. When we decide to act on that pattern and then implement it, we have completed a routine OODA loop and are in position to enter the next one. In executing the loop we have learned about the specific conditions surrounding our decision and have experienced, and so learned about, our decisions and actions; but we have not made any new general knowledge or changed any rules guiding our behavior. Routine OODA loops of this type are therefore instances of single-loop learning, not double-loop learning.
Now, sometimes, when we decide to act in the way indicated by the pattern given by our Orientation, and we then act in accordance with our decision and observe the result, we find (1) a mismatch between our expectations and reality, and (2) an absence of any pattern in our current knowledge that promises to close the gap between the state of the world we’d like to see and the way things actually are. It is in this sort of situation that we will seek to go through Boyd’s analytical/synthetic loop and to engage in double-loop learning to create one or more novel patterns that can remove the mismatch between expectations and perceived reality. Even then, however, we can’t engage in a single creative OODA loop constituting double-loop learning. For double-loop learning requires multiple OODA loops, motivated primarily by the need to solve a problem (remove a mismatch), and only secondarily by the instrumental behavior gap that motivated its discovery.
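The conditions (a) through (c) for a routine loop, and the conditions (1) and (2) that trigger double-loop learning, can be restated as two predicates. This is just a paraphrase of the text in Boolean form; the parameter names are mine:

```python
def is_routine_loop(gap_exists, pattern_available, mismatch):
    """Conditions (a)-(c): a gap between desired and actual states, a
    promising pattern in the Orientation repertoire, and no mismatch
    between the state of the world and prior expectations."""
    return gap_exists and pattern_available and not mismatch

def triggers_double_loop(mismatch, pattern_available):
    """Conditions (1)-(2): a mismatch between expectations and reality,
    and no pattern in current knowledge that promises to close the gap."""
    return mismatch and not pattern_available
```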
On the basis of his study of Boyd’s notes and bibliographies, papers and briefings, Osinga (2005) indicates that Boyd read and was influenced by Popper, among many others, including Kuhn and Polanyi. In particular, Boyd seems to have incorporated Popper’s ideas about the importance of conjectures and refutations in science, and also the general outlook Popper expressed on evolutionary epistemology in “Evolution and the Tree of Knowledge,” a paper written in 1961 and included in Popper’s Objective Knowledge (1972). However, there’s no indication in Boyd’s work that he knew of Popper’s tetradic schema, or of its relevance to his own formulations about how new knowledge is made. By incorporating the analytical/synthetic loop in Orientation, and by equating Decision with hypothesis and Act with test, Boyd is saying that problem solving is performed within a single OODA loop. But from the viewpoint of Popper’s tetradic schema it is not.
In the tetradic schema the following sequence applies: P(1) -> TS -> EE -> P(2), where the Ps are problems, TS is a tentative solution, and EE is error elimination. The schema can easily be generalized so that multiple TSs are formulated following the problem phase. But the important thing about the tetradic schema is that it is not one OODA loop.
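Popper’s schema, generalized to multiple tentative solutions as just noted, can be sketched as a small pipeline. This is an illustrative paraphrase only; the function names and data shapes are assumptions of the sketch, not Popper’s notation:

```python
def tetradic_schema(problem, generate_solutions, eliminate_errors):
    """P(1) -> TS -> EE -> P(2): generate tentative solutions for a problem,
    eliminate errors among them, and return the surviving solution together
    with the new problem situation it creates."""
    tentative_solutions = generate_solutions(problem)   # TS (possibly many)
    surviving = eliminate_errors(tentative_solutions)   # EE: criticism and test
    new_problem = {"arises_from": surviving}            # P(2)
    return surviving, new_problem
```

A usage example under the same assumptions: `tetradic_schema({"mismatch": "orbit anomaly"}, lambda p: ["theory-A", "theory-B"], lambda ts: ts[0])` formulates two tentative solutions, eliminates one, and returns the survivor along with the new problem it generates. The point the sketch preserves is that EE selects among alternatives, so it requires decisions of its own; it is not a single pass through one loop.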
Problems are mismatches, all right, but once you’ve recognized a problem, there’s more work to be done. Specifically, the nature of the problem may not be clear from the mismatch, and one may have to formulate the problem clearly in order to solve it. Clear problem formulations, in turn, may involve alternative problem formulations and selections among them, which, of course, require decisions, i.e. OODA loops.
Further, when tentative solutions are formulated, this may very well involve alternative formulations, and again decisions and OODA loops, since, as a practical matter, not all alternatives can be carried forward into EE, and therefore there is a need to create a fair comparison set of tentative solutions. Finally, the EE phase also involves selection among alternatives, and therefore decisions. But does it involve OODA loops?
The answer is that it can. While the EE phase may end with the selection of a tentative solution to a problem, the problem solver may not be the decision maker whose routine decision making produced the unexpected results and who needed the problem solved. Thus, an EE-phase OODA loop may well end with a decision to accept a hypothesis and an act of communicating the solution to the decision maker who originally recognized the problem. After that communication, the newly developed knowledge is returned to the OODA loop of the first decision maker, who then has to decide to accept and act upon it. So in this process there are two rounds of hypothesis and test: the first round is part of the TS and EE phases of Popper’s problem-solving process; the second round is part of the operational OODA loop of the decision maker who generated the original problem.
This picture raises an important distinction that is missing in Boyd’s account. While it is true that every decision can be considered a hypothesis, and every act or sequence of acts implementing the decision a test, these “hypotheses” and “tests” arise out of practical activities and learning loops of all kinds, including routine activity and learning, and are found in all living systems. They are as characteristic of single-loop learning and routine activity as they are of double-loop learning and creative work. All living systems learn using this sort of “hypothesis and test,” including humans.
However, the hypotheses and tests that are more characteristic of creative work in humans and that are most common in science and other very focused double-loop learning activities, are not ones that arise out of routine action. Instead, they are part of a learning process that requires multiple OODA loops and that generates hypotheses and tests prior to trying out surviving hypotheses in the practical, instrumental situations in which the problems being addressed arose in the first place. Let’s call this last double-loop learning process the Problem Life Cycle (PLC) because it is focused on the birth and death of problems followed by the birth of new problems. The PLC allows us to compare alternatives and to kill our worst ideas before they kill us. That is, it allows us to proceed not just by trial and error as do other living creatures, but by trial and elimination of error before we have to pay the price for our errors.
So, in the end, I don’t think the OODA loop in itself provides a model for double-loop learning, or a clarification of this idea. Rather, it provides an account of routine activity, or at most of PLCs that employ a first pattern match both to recognize a problem and to develop a solution that is accepted as one’s decision and then tested directly in action (instrumental behavior). In other terms, it provides an account of naturalistic decision making, but not of rational decision making, which embodies double-loop learning. Boyd himself had more than a glimmer of this, and that is why he developed the Orientation phase of OODA so that it included the analytical/synthetic loop. However, I think that once one moves to the analytical/synthetic loop, in a very real sense one has moved out of the original simple OODA loop and into a PLC that has an autonomous dynamic involving multiple OODAs, as I described earlier. I don’t think John Boyd saw this point himself; instead, he thought he could incorporate more complex forms of learning into OODA by adding conceptual richness to the Orientation phase.
In future posts, I’ll continue my analysis of OODA by relating it to my own DEC and KLC frameworks and I’ll also spend some time talking about the relationship of OODA, the DEC, and the KLC to Recognition Primed Decision Making (RPD).
Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making