My last post commented on Dave Snowden’s primary argument against a National KM Center, discussed in “Emperor’s Chess Board: Pt. 1” and “The Empire Repeats.” In addition to this argument, however, in “The Empire Repeats,” he wrote of two themes that emerged in the actkm discussion on National KM Centers and “connecting the dots.” The first theme, that this time we will get KM right, he discusses in the blog. But of the second theme he says:
“The scary idea that any approach needs clear definitions of knowledge management, and criteria by which various claims can be validated. This of course means adopting a particular philosophical approach (in this case a variation of Popper) and requires (i) a degree of intellectual vigor in government that is not likely any time soon and (ii) fails to appreciate the messy bottom up approach implied by taking a complexity science approach to the problem.
Now I don’t intend to devote anytime to the second theme, I think it defeats itself by its nature.”
I hardly know where to begin in discussing this paragraph, since so much of it is over-stated and misleading, so I'll just begin at the beginning and work my way through. First, I don't know of any approach advocated in the actkm discussion that states flatly that any kind of approach requires a careful definition of KM, or that an individual project can't be good KM even though it doesn't use a clear definition of the subject. I have, of course, contended very often that clear definitions of KM (not necessarily all the same) are needed for cumulative progress in KM. I've argued that point in two articles, here and here, and thus far, at least, I don't think that Dave has been successful in countering them.
Second, I don't know of anyone in the recent actkm exchange who has said that specific criteria are necessary for validation. Don't get me wrong: I personally and frequently advocate knowledge claim evaluation, the process of eliminating errors through criticisms, tests, and evaluations, as a necessary step in arriving at knowledge. In some of my writings, I've even specified a framework for evaluation that could be interpreted as involving criteria. However, I certainly have never claimed that either criteria or my particular fair critical comparison framework is necessary for selecting among competing views or solutions to problems, i.e., for knowledge claim evaluation. Lastly, for some years now I haven't written about "validating" knowledge claims. I used to talk that way during the 1990s, but close to six years ago I realized that the idea that generalizations can never be verified by a finite number of instances implied that general knowledge could not be "validated" by its performance, but could only survive our experience without being refuted.
Third, it is not true that accepting that clear definitions of KM are important, and that knowledge claim evaluation is a necessary stage in making new knowledge, commits one to adopting a particular philosophical approach such as a variation of Popper's. Of course, as Dave well knows, I do use a variant of Popper's approach, but others might, and in fact do, use a variant of Peirce's approach, or of Mill's, or of any one of many other approaches to knowledge claim evaluation. Indeed, one might even invent one's own approach. The point is that practicing knowledge claim evaluation in an explicit way does not require commitment to a particular person's philosophy. One is quite free to draw from any number of sources and to synthesize one's own theory of knowledge claim evaluation in order to practice it.
Fourth, in asserting that the second theme requires "a degree of intellectual vigor . . ." (should that have been "rigor"?) in government that is not likely any time soon, I think Dave is assuming that knowledge claim evaluation always requires formal comparison of alternatives. But this is true only in unusual instances where time is available for a fair critical comparison, and where the risk of error in decision making is great enough to warrant the time and expense it takes to be rigorous.
Fifth, there is nothing about the second theme that implies opposition to "complexity science," or to "messy bottom up" approaches to problems. Its only opposition is to those "complexity science" studies that gloss over knowledge claim evaluation and don't make explicit how it is actually practiced in organizations, since these studies are certainly not instances of "science," complex or otherwise.
Finally, one can easily say that the "second theme" is self-defeating by its very nature and leave it at that. But without a demonstration of why it is self-defeating, there is very little one can say about such a judgment except to register disagreement.