September 9th, 2008 · 1 Comment

The year 2007 ended with a very interesting take on KM 2.0 by one of my former correspondents, Dave Pollard (see, for example, 1, 2, 3). Dave, who is given to emphasizing the social networking aspects of things, provides this definition of KM:
“In a recent post where I waxed rhapsodic about how the best approach to everything could be reduced to three magic words (love, conversation, community), I presented this one-sentence summary of how this might apply to knowledge management (KM):
“KM is simply the art of enabling trusted, context-rich conversations among the appropriate members of communities about things these communities are passionate about.
“In another recent post I laid out how the work of information professionals is now being done in (what I consider) leading organizations, around five key types of deliverables: awareness products, research products, guidance products, self-assessment and connectivity tools, and facilitated events.
“At the request of several readers, I’ve pulled this all together in the table above into a framework for what some have called KM 2.0, but which I prefer to call KM 0.0, because it’s getting back to the roots of why and how people share what they know. It could also be called PKM — Personal Knowledge Management — because it’s about self-managed content and peer-to-peer connectivity.”
So, in the above, Dave equates KM 2.0 with communities and enabling trusted, context-rich conversations within them, and also equates it with Personal KM, self-managed content, and peer-to-peer connectivity. Nowhere in this description do we see the words knowledge processing. Nowhere are knowledge outcomes talked about. And even though KM is mentioned, we see that it is about enabling context-rich conversations in trusted but otherwise unstated ways. Frankly, this is the first time I’ve heard that KM is about making context-rich conversations possible. And while this is an interesting idea, I don’t think it’s very tightly coupled to such notions as enabling better problem formulation, better knowledge claim formulation, better individual and group learning, or better knowledge claim evaluation. It may have more to do with enhanced knowledge integration, but the trouble is that we have no way of knowing whether the “context-rich, trusted conversation” is exchanging knowledge or just information.
In any event, on the basis of considerations like these, Dave characterizes two types of KM: the old kind, KM 1.0, described as being all about content and collection, and KM 0.0 (what others call KM 2.0), described as being all about context and connection. Content and collection is associated with:
— large centralized just-in-case content repositories of ‘submitted’ ‘reusable’ documents with standardized taxonomy and search tools;
— large complicated centrally-managed intranets for ‘publishing’ and ‘browsing’ content;
— main information flows that are top-down instruction (policies, directories) and bottom-up submission;
— communities of practice, centrally established and managed, content-focused;
— ‘best practices’ (stripped down);
— public websites (boundaries established by firewall);
— licensed databases purchased from outside info-professionals (disintermediation);
— e-mail; and
— what the company wants you to know: press releases, sales material.
Context and connection is associated with:
— personal content management tools: everyone manages their own content, just-in-time, harvestable;
— RSS-publishable and subscribable personal web pages, blogs, and small-group-created wikis;
— main information flows that are what matters to each person, peer-to-peer;
— communities of passion, self-managed and ad hoc, conversation-focused;
— stories (detailed, context-rich);
— visualizations;
— everything inside open and shared outside unless it’s illegal to do so (community of the whole world);
— high-value, high-meaning RSS-subscribable content produced by internal info-professionals (reintermediation): awareness alerts (what’s new?), research (what does it mean?), guidance (what should we do about it?);
— Instant Messaging and virtual meeting tools (desktop video, other simple ubiquitous real-time tools);
— organization and facilitation of real and virtual community-self-initiated, self-managed events, including Open Space hosting and facilitation;
— people-finding and community-creating tools; and
— what the customer wants to know: multimedia interactive self-assessment tools.
So, again, Dave has given us two states of organizational systems, which he calls KM 1.0 and KM 0.0 (or 2.0), and which he specifies in detail in terms of the above properties. While the differences between these two states and the richness of his characterizations of both are certainly of great interest, I’m afraid that the connection of these two types to Knowledge Management, as an idea, is less than crystal clear from my point of view. This is not to say that KM efforts in the past have not used many of the elements identified by Dave in their interventions, such as, for example, communities of practice, best practices databases, content repositories, or even, occasionally, large complicated centrally-managed intranets for ‘publishing’ and ‘browsing’ content. But those were not necessary characteristics of KM in past years, and such tools as e-mail, licensed databases purchased from outside, public web sites with firewalls, and press releases and sales materials have hardly ever been associated with KM as distinguishing characteristics of this form of management or its effects. Furthermore, what is absent in Dave’s account of KM 1.0 is any characterization of its unifying ideas other than “content and collection,” and this neat phrase is hardly descriptive of the character of even first-generation KM as an activity. In fact, it looks much more like a broad characterization of content processing than of KM, and it fails to establish a coupling between a general notion of first-generation KM, the “content and collection” meme, and the specific attributes that Dave couples to the label KM 1.0.
This problem of specificity in the tie between a general characterization of KM, the general idea of the KM type being described, and the characteristics specified to describe the type carries over to Dave’s treatment of KM 0.0. Its central idea is described as context and connection, and the specific characteristics given by Dave match that idea very well, but the task of tying “context and connection” to the idea of KM itself is left undone. In other words, we are being asked to believe that KM 0.0 (or 2.0) is described in general terms by “context and connection” without any comparative analysis of this account of KM 2.0 against alternatives. But what is there about “context and connection” that implies the kind of activity we can call KM 2.0, and what is this kind of activity like? And, alternatively, if context and connection is meant to characterize the facilities, structures, social networks, type of content, and tools implemented in or resulting from KM 2.0 interventions, then what is the relation of these characteristics to KM and to enhancing the knowledge processing targets of KM?
Questions like these are neither posed nor answered in Dave’s blog post. And therefore we are left with assertions that KM 1.0 and KM 2.0 are as described, with KM and knowledge processing being only vaguely recognizable in the profiles developed. Thus, KM 1.0 looks like content processing to me, and KM 0.0 (or KM 2.0) looks like social network enablement and intensification. But neither one looks like a KM state to me, and neither one connects the state it describes to characteristic patterns of knowledge processing, so the connection of these two patterns to Knowledge Management, and to KM 2.0 in particular, is tenuous at best.
To Be Continued
Tags: KM 2.0 · KM Software Tools · KM Techniques · Knowledge Integration · Knowledge Making · Knowledge Management · Personal KM
September 7th, 2008 · Comments Off on What Is A Knowledge Management Software Application?

In my last post, I introduced the problem of evaluating whether something is a KM software application by distinguishing KM and its outcomes, knowledge processing and its outcomes, and business processing and its outcomes. I also pointed out that software applications could contribute to any of the tiers of the Three-tier Model, and that software vendors don’t distinguish among the levels of the three-tier model when they label their software. Then I proceeded to point out that a software application ought to be called a knowledge processing application to the extent that its use cases enhance some aspect of knowledge processing. I also said that if its use cases directly enable enhanced Knowledge Management activity, then, of course, it is directly supportive of KM processing. This blog will discuss how we can decide whether a vendor’s application is properly labeled a Knowledge Management software application.
Knowledge Management
As with knowledge processing, to decide whether an application supports or enables aspects of KM, we need to have a conceptual framework that provides a map of KM. Knowledge Management is the set of processes that seeks to change the organization’s present pattern of knowledge processing to enhance both it and its knowledge outcomes. This implies that KM doesn’t directly manage knowledge outcomes, but only impacts processes, which in turn impact outcomes. For example, if one changes the rules affecting knowledge production, the quality of knowledge claims may improve, or if a KM intervention supplies a new search technology based on semantic analysis of knowledge bases, that may result in improvement in the quality of models.
There are at least nine types of knowledge management activities:
— Symbolic Representation,
— Building External Relationships with Others Practicing KM,
— Leadership,
— KM-level Knowledge Production,
— KM-level Knowledge Integration,
— Crisis Handling,
— Changing Knowledge Processing Rules,
— Negotiating for Resources with Representatives of Other Organizational Processes, and
— Resource Allocation for knowledge processes and for other KM processes.
KM-level Knowledge Production and Integration reflect the idea that KM may also be about responding to epistemic gaps arising from Knowledge Management operational processes themselves. The Changing Knowledge Processing Rules process, for example, may develop epistemic problems. In that case, Knowledge Life Cycles (KLCs) at the level of KM processing will be initiated and will produce and integrate new knowledge about how to change knowledge processing rules to enhance information acquisition, knowledge claim evaluation, or one of the other sub-processes of the KLC.
Evaluating Knowledge Management Software Applications
Whether a software application is a KM software application may not, in general, be an either/or matter. We must, then, evaluate applications against each of the types of KM activity and decide to what degree the use cases of the application enable each of the nine types. Some may think that an IT application supports KM if it performs content management, or if it supports collaboration, or if it performs data mining. But, though there is some truth to this claim, I think the connection between these and other types of applications and knowledge processing and KM is at best indirect, and at worst very tenuous, because each such application may or may not provide support for knowledge or KM processes beyond the general support it provides for information processing and information management.
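The evaluation just described, scoring an application's use cases against each of the nine KM activity types and judging degree of support rather than rendering a yes/no verdict, can be sketched in code. This is a minimal illustrative sketch, not a standard instrument: the 0-3 ordinal scale, the summary measures, and the sample scores are my own assumptions.

```python
# Sketch: score an application's use cases against the nine KM activity
# types on a 0-3 ordinal scale (0 = no support, 3 = strong, directly
# targeted support). Scale and sample scores are illustrative assumptions.

from typing import Dict

KM_ACTIVITIES = [
    "Symbolic Representation",
    "Building External Relationships",
    "Leadership",
    "KM-level Knowledge Production",
    "KM-level Knowledge Integration",
    "Crisis Handling",
    "Changing Knowledge Processing Rules",
    "Negotiating for Resources",
    "Resource Allocation",
]

def evaluate(scores: Dict[str, int]) -> Dict[str, float]:
    """Summarize per-activity scores into a degree-of-support profile."""
    missing = [a for a in KM_ACTIVITIES if a not in scores]
    if missing:
        raise ValueError(f"unscored activities: {missing}")
    return {
        # fraction of the nine activities with any support at all
        "coverage": sum(1 for a in KM_ACTIVITIES if scores[a] > 0) / len(KM_ACTIVITIES),
        # overall degree of support, normalized to 0.0-1.0
        "support": sum(scores[a] for a in KM_ACTIVITIES) / (3 * len(KM_ACTIVITIES)),
    }

# Illustrative scores only; a real evaluation derives them from use cases.
profile = evaluate(dict(zip(KM_ACTIVITIES, [2, 2, 1, 1, 1, 2, 1, 0, 0])))
```

The point of the summary measures is exactly the either/or caveat above: an application ends up with a degree of KM support, not a binary "is/is not KM" label.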
In each case of an IT application, therefore, the connection from the application in question to knowledge processing and KM use cases that are distinct from information processing and information management use cases must be demonstrated. The connection is simply not self-evident because the application in question is a content management or a collaborative application.
I’ll use the example of Hyperwave, once again, to illustrate an evaluation of a software application, this time as a KM application. Hyperwave’s capability for supporting knowledge processing at the level of knowledge management is the same as its capability for supporting it among knowledge workers. To the extent a Hyperwave implementation is successful at the knowledge worker level, it can be equally successful in supporting knowledge management. Let’s look now at the support provided by Hyperwave for each of the other major categories of KM activity.
Symbolic Representation
This activity is about employing symbols to reinforce the organizational authority of knowledge management. It’s about ceremonials and ceremonial communication. Hyperwave provides plenty of opportunity for such communication in the form of broadcast e-mails, published messages, ability to create and publish memoranda of various kinds, and the ability to address others in web conferencing formats.
Building External Relationships
Building External Relationships is about collaborating with Knowledge Managers in other organizations. Hyperwave’s Team Workspace, e-mail, and eConferencing modules are equally applicable to inter-organizational collaboration and provide the same features for that as for intra-organizational collaboration. External relationship building therefore can be supported by a Hyperwave Portal simply by incorporating the appropriate roles and providing appropriate access to external parties.
Leading
Leading is a category encompassing many activities. Here’s a brief list: consensus building, project management, persuading, compelling, incenting, informing, obligating, hiring, evaluating, delegating, meeting, and writing memoranda. Of course, this only illustrates the variety of activities involved in leading. Hyperwave’s content processing and management and collaborative processing capabilities can certainly enable most of these activities, but specifically targeted support for many of them is not available. There’s some support for project management in Hyperwave Team Workspace and Workflow, but the sophistication of project management functionality present in an application such as Primavera or R-Plan is not provided.
Crisis Handling
Crisis Handling involves such things as meeting CEO requests for new competitive intelligence in an area of high strategic interest for an enterprise, and directing rapid development of a KM support infrastructure in response to requests from high level executives. There are no special predictable requirements for crisis handling; only the ability to do the same things more rapidly. Insofar as a Hyperwave portal facilitates efficient collaboration and content processing and management, it supports crisis handling because its effect will be to reduce the cycle time of adaptive initiatives.
Changing Knowledge Processing Rules
The knowledge process sub-processes of information acquisition, individual and group learning, knowledge claim formulation, knowledge claim evaluation, broadcasting, searching/retrieving, teaching, and sharing are all composed of tasks, some of which are rule-governed. Knowledge workers execute these tasks and knowledge managers produce the process rules. Knowledge managers also change the rules once they produce new knowledge about them.
Hyperwave provides substantial support, through its content processing and management and its collaboration capability, for changing rules, publishing new ones, and training people in the organization to apply them. However, it cannot provide support for modeling rule changes and their likely impact on knowledge processing, because it cannot access analytical modeling and simulation applications to forecast impact, nor statistical analysis applications to measure impact after the rule-changing interventions are accomplished. Moreover, Hyperwave doesn’t support applications that allow value interpretations of descriptive impact measurements, so that non-monetary costs and benefits can be assessed from within portal-based applications.
Negotiating for Resources with Representatives of Other Organizational Processes
Negotiating agreements with representatives of business processes over levels of effort for KM, the shape of KM programs, the ROI expected of KM activities, etc., is an essential knowledge management function. Hyperwave’s content processing and management and collaborative capabilities provide all the support needed for the communication aspects of portal-based negotiations. But successful negotiation for resources requires clear ideas about the resources one is negotiating for, and, as we shall see just below, Hyperwave doesn’t provide effective support for that.
Resource Allocation for Knowledge Processes and for other KM Processes
Allocating resources includes allocations for KM support infrastructures, training, professional conferences, salaries for KM staff, funds for new KM programs; in short, all KM interventions for enhancing knowledge processing.
Hyperwave provides little support for planning resource allocations, except for resource allocation planning using Excel. This, again, is due to Hyperwave’s lack of connectivity to the structured-data applications that support resource allocation planning. In particular, Knowledge Managers need Portfolio Management applications to plan programs and interventions. They also need support for measuring the likely impact of planned resource allocations and their likely non-monetary benefits and costs. Hyperwave provides no connectivity to these types of applications, so it can provide little capability for supporting resource allocations for KM programs.
An evaluation of this type can be performed with any software application. It can be made more detailed by breaking down the categories of KM activity further, if it turns out to be too difficult to match the use cases of the software application to the nine types of KM activity. I’ve developed an Expert Choice software template called the Open Enterprise Template that has a very in-depth breakdown of the KM activities and that can be used for work like this.
As with knowledge processing software, social computing, Web 2.0, and even “Web 3.0” applications also need to be evaluated in the way I’ve just illustrated. That is, claims that such applications are “KM 2.0” applications need to be evaluated against a framework such as the one I’ve outlined above in order to settle the question of the degree to which a particular application is a KM application.
Tags: KM 2.0 · KM Software Tools · Knowledge Management
September 6th, 2008 · Comments Off on What Is A Knowledge Processing Software Application?

Software vendors feel very free to claim that theirs is a KM software application. At the height of KM’s popularity, in 1998, I even had the experience of running across a document-copying vendor who claimed that theirs was a KM software application. Well, it’s a software vendor’s job to make claims that may sell their software. But it’s our job to evaluate such claims and decide which software applications really do facilitate KM and which don’t. The first step in doing that is to be clear about what we can possibly mean when we place the label KM on a software application.
The first thing to recognize about this is that software vendors don’t distinguish among the levels of the three-tier model: business processing and its outcomes; knowledge processing and its outcomes; and KM processing and its outcomes. Thus, they will refer to an application as a KM application even though it’s primarily for supporting knowledge processing, or even for using existing knowledge in business processing. In cases where a software package supports knowledge use, I think we can quickly conclude that the software vendor involved is just stretching their marketing beyond reasonable bounds. But, in other instances their software is relevant to KM. That is, if its use cases enable enhanced knowledge processing then it may be useful as part of a KM intervention designed to enhance knowledge processing. And if its use cases directly enable enhanced Knowledge Management activity, then, of course, it is directly supportive of KM processing. This blog will discuss how we can decide whether a vendor’s application is properly labeled a knowledge processing software application while the next will deal with how we can decide whether something is a KM software application.
Knowledge Processing
To decide whether an application supports or enables aspects of knowledge processing, we need to have a conceptual framework that provides a map of knowledge processing. Of course, we have one. It’s called the Knowledge Life Cycle (KLC), and we’ve discussed it in previous blogs and in many other publications. The KLC identifies knowledge production and knowledge integration as the two primary processes in knowledge processing.
Knowledge production is a process made up of four task clusters (or sub-processes):
— information acquisition,
— individual and group learning,
— knowledge claim formulation, and
— knowledge claim evaluation.
Knowledge integration is made up of four more task clusters, all of which may use interpersonal, electronic, or both types of methods in execution:
— Knowledge and Information Broadcasting (KIB),
— Searching/Retrieving,
— Knowledge Sharing (peer-to-peer presentation of previously produced knowledge), and
— Teaching (hierarchical presentation of previously produced knowledge).
Among the above eight sub-processes, it is important to remember that individual and group learning is itself knowledge processing. Individual and group learning produces knowledge claims for consideration at higher levels of analysis of knowledge processing, but at the individual and group levels themselves, learning is knowledge production, and depending on the group level, all four task clusters are involved at that level too. Let’s call this the “nesting” of knowledge processing in the enterprise.
Evaluating Knowledge Processing Software Applications
Whether a software application is a knowledge processing software application may not, in general, be an either/or matter. Of course, there are some “KM” applications that support only business processing, and some that support only KM activities, but those that do support knowledge processing will support various aspects of it in varying degrees. We must, then, evaluate applications against each of the categories of knowledge processing and decide to what degree the use cases of the application enable each of the eight sub-processes of knowledge processing.

Here’s an example of a “quick-and-dirty” evaluation of the Hyperwave Portal application.

An evaluation of this type can be performed with any software application. It can be made more detailed by breaking down the sub-processes of the knowledge processes further, if it turns out to be too difficult to match the use cases of the software application to the eight knowledge sub-processes. I’ve developed an Expert Choice software template called the Open Enterprise Template that has a very in-depth breakdown of the knowledge sub-processes and that can be used for work like this.
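The finer-grained breakdown just mentioned can be sketched as a weighted rollup: each knowledge sub-process is split into tasks, each task gets a weight and a 0-3 score, and the weighted scores roll up into a sub-process score, in the spirit of the hierarchical weighting that tools like Expert Choice support. The sub-processes, tasks, weights, and scores below are illustrative assumptions of mine, not the actual Open Enterprise Template.

```python
# Sketch of a hierarchical evaluation: weighted tasks per sub-process,
# rolled up into sub-process scores. All breakdowns, weights, and scores
# here are illustrative assumptions.

from typing import Dict, List, Tuple

# sub-process -> (task, weight) pairs; weights within a sub-process sum to 1.0
BREAKDOWN: Dict[str, List[Tuple[str, float]]] = {
    "knowledge claim evaluation": [
        ("record competing claims", 0.4),
        ("record test and criticism results", 0.6),
    ],
    "searching/retrieving": [
        ("full-text search", 0.5),
        ("taxonomy browsing", 0.5),
    ],
}

def rollup(task_scores: Dict[str, int]) -> Dict[str, float]:
    """Weighted average of task scores (0-3) for each sub-process."""
    return {
        sub: sum(weight * task_scores[task] for task, weight in tasks)
        for sub, tasks in BREAKDOWN.items()
    }

scores = rollup({
    "record competing claims": 1,
    "record test and criticism results": 0,
    "full-text search": 3,
    "taxonomy browsing": 2,
})
```

The design choice here is the same one the paragraph above makes: when matching use cases to the eight sub-processes directly is too difficult, push the judgment down to tasks, where a 0-3 score is easier to defend, and let the arithmetic produce the sub-process profile.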
In other blog posts, I’ve been doing a series on KM 2.0 and its development. Needless to say, social computing, Web 2.0, and even “Web 3.0” applications also need to be evaluated in the way I’ve just illustrated. That is, claims that such applications are “KM 2.0” applications need to be evaluated against a framework such as the one I’ve outlined in order to settle the question of the degree to which a particular application is a business processing, knowledge processing, or KM application.
Tags: KM 2.0 · KM Methodology · KM Software Tools · Knowledge Integration · Knowledge Making · Knowledge Management
September 2nd, 2008 · 1 Comment

Over the past years, I’ve spent many enjoyable Saturday afternoons participating in Washington, DC’s Cafe Philo group (a face-to-face public philosophy group), and have occasionally participated in its listserv. Over the past couple of days a friend responded to my support of the statement “No statement can be justified” by asking whether I was: “saying that we cannot evaluate the reasons given for claims about where babies come from, why it rains or whether there were WMD in Iraq or involvement by Iraq in 9/11 without resorting to intuition? That science cannot evaluate claims without resorting to intuition? That critical distinctions that are made all the time (especially by language) cannot be evaluated and challenged based on such an evaluation? That it is okay that people just accept claims of all sorts without questioning them, particularly on important matters? I can think of nothing more problematic in human history and experience than the acceptance of all sorts of beliefs and claims without question; for invalid reasons. Can you?”
I answered my friend initially by distinguishing beliefs, claims, and reasoning in the following way. Statements are not beliefs. They are linguistic formulations. Entailment is a logical relationship between statements. Thus statements entail other statements.
Beliefs, however, are mental orientations or predispositions arising out of mental processes. They are not linguistic in character, and one belief, therefore, doesn’t entail another. It may precede another. It may be associated with another. It may correlate with another. It may influence another. But it can never entail another. Thus, since “justification,” in the context of “Justified True Belief,” requires that one thing entail another, it follows that one belief can never justify another, simply because it can never entail another.
I then went on to amplify by saying that statements express linguistic “content,” including both factual descriptions and evaluations, rather than “beliefs.” Colloquially, we may say they “express beliefs.” But whether or not they have a connection to beliefs that allows us to say that they “express beliefs,” our expressions are not our “beliefs,” but just our expressions. Our beliefs, our feelings, our attitudes, our values, all the stuff of our mental processes, are non-linguistic in character. Thus, nothing follows from belief in the logical sense of the term “follow.” Beliefs entail nothing, even if the statement “Harry is an innocent civilian” has logical implications.
My friend then asked me what becomes of “the burden of proof” under such a doctrine. I replied by saying there is no burden of proof for claimants because no statement can be proved in the sense of Justified True Belief (JTB). All that can happen is that the past record can show that some claims perform better than others when faced with criticisms, tests, and evaluations.
Now, this means that when we are confronted with a statement of someone else’s or a statement of our own, we should critically evaluate that statement if we care about whether or not it is true (in the case of descriptive statements) or legitimate (in the case of statements about intrinsic value). For even though we cannot prove truth or legitimacy, we can develop criticisms of competing statements and can then evaluate and decide which ones are false, undecided, illegitimate, or non-legitimate. The remaining statements at any point in time survive and constitute our body of objective knowledge at that time, provided none among them are insulated from criticisms, tests, and evaluations by authorities.
Going back to the idea of “burden of proof,” that burden is not one of proof in the sense of JTB. Instead, in criminal cases, it is proof beyond a reasonable doubt which is sought and the way that “proof” is secured is by the prosecution getting the jury or judge to falsify the defendant’s story about what happened, while preventing its own alternative account of what happened from being falsified by the defense. That is, rather than there being a “burden of proof” for the prosecution there is a burden of falsification and avoidance of falsification that it assumes. If you want to call this “burden of proof,” that’s OK, because we’d now be into arguing over labels and such arguments don’t matter. What does matter is that there is no burden of proof in the sense of absolute justification of one’s beliefs or statements in the law.
Furthermore, when we get to scientific activity, here too there is no burden of absolute proof. In fact, Popper’s Critical Rationalist (CR) views are much more popular among scientists than they are among philosophers, just because many scientists do think that science works through testing and falsification and not through proof or overcoming burdens of proof. Now, science doesn’t always work this way, of course, and there are many cases where people seem to be viewing things in terms of “burden of proof.” Yet nearly all scientists agree that all scientific knowledge is contingent and that any of it may prove invalid tomorrow. Thus, scientists know that absolute JTB-type proof of their statements is impossible, and many of them believe that our most respected theories (Quantum Theory, General Relativity, the neo-Darwinian synthesis, and others) are all subject to reasonable doubt, so that our best scientific theories don’t even meet “the burden of proof” imposed on our legal theories of a case.
But here we come to the normative question: when science works in the “burden of proof” mode, should it do so, or is it really unscientific to work in this way? Critical Rationalists believe that when science works in this mode, and places a burden of proof on new theories that explain the world just as well as old ones, it is, in truth, being biased and therefore unscientific in character. And so they deplore the tendency of Quantum Theorists to adhere to the Copenhagen Interpretation and to prefer it to the Many-Worlds Interpretation (MWI), Bohm’s model, and even Popper’s, when all of these theories account equally well for the experimental evidence. The result of such biases is the slower progression of science, because of the lack of support within it for devising crucial experiments that would distinguish between the competing versions of Quantum Theory.
All of this ties into the distinctions between CR and Thomas Kuhn’s institutional view of science. In looking at these competing views, we need to clearly distinguish the descriptive from the normative issues involved. In reading both CR and Kuhn, it is sometimes hard to distinguish the boundary between the descriptive and the normative. But for me, it’s always been the case that Popper’s account should be viewed as mainly normative and Kuhn’s as mainly descriptive. However, there’s no doubt that followers of Kuhn have taken his theory as normative and have viewed paradigms, incommensurability of competing theories, scientific revolutions, scientific institutions, conservatism in theory evaluation, and insular scientific cultures as the way things ought to be. They have used Kuhn’s work to shore up post-modernism, constructivism, and other forms of relativism in our societies. This attempt by Kuhn’s followers to move from the facts as Kuhn saw them to the way that science ought to be is an illegitimate inference from the descriptive to the normative.
I am not a believer in the fact-value dichotomy (See my “Against the Fact-Value Dichotomy” in the Cafe Philo Dialogue Yahoo Group files), and my views on this matter are not classical, but I do agree with the Humean criticism that one cannot move from the flat assertion that something is a particular way to the view that it ought to be that way. Contrary to Pangloss’s fellow-travellers, this is not the best of all scientific worlds, and even if all scientists acted according to the Kuhnian model, it would still be the case that that model would prescribe unscientific behavior, when it suggests conservatism in theory evaluation and protective belts around our favorite theories. This point is fundamental to any scientific philosophy, because it is the distinction between science and religion. Science should not favor its pet theories. Religion is about doing just that.
Moreover, I cannot end this without pointing out that Kuhn’s views about the facts of scientific behavior also do not hold up under close examination, and never have. Kuhn’s views are based on the idea of “paradigms,” but his use of that term in “Structure . . . ” was so ambiguous that his views on what has occurred in science relating to paradigm change are hardly testable. There are arguments on both sides. But after 46 years there’s general agreement that Kuhn’s model of revolutionary scientific change rarely applies to scientific events as a matter of fact, and that his account is poor history.
Popper’s normative notions about how science should work, on the other hand, have fared better. Many prominent scientists have adopted his views in whole or in part, and still do today. Others, such as Richard Feynman, developed similar views without, evidently, being influenced directly by Popper, thereby indicating that Popper’s insights into the heart of the scientific endeavor (an activity, neo-Darwinian in character, based on critical, negative evaluation of competing views) are essentially correct. For these people, as well as explicit adherents of CR, science is not about justification and proof, but about testing, criticizing, and evaluating competing theories to find those linguistic constructs that prove strongest in the face of our best efforts to refute them.
Tags: Epistemology/Ontology/Value Theory · Knowledge Making
September 1st, 2008 · Comments Off on Why Don’t We Write More About How We Ought to Evaluate Knowledge Claims?

There’s remarkably little attention given to the question of how we ought to evaluate knowledge claims, in spite of the fact that this issue is central to both knowledge processing and KM. I’ve argued for the importance of Knowledge Claim Evaluation (KCE) in the past. Here I want to illustrate its importance with a critical take on an aspect of some widely known work of Nonaka and Takeuchi. Here’s what their The Knowledge-Creating Company has to say about “justifying concepts” (pp. 86-87):
“In our theory of organizational knowledge creation, knowledge is defined as justified true belief. Therefore, new concepts created by the individual or the team need to be justified at some point in the procedure. Justification involves the process of determining if the newly created concepts are truly worthwhile for the organization and society. It is similar to a screening process. Individuals seem to be justifying or screening information, concepts, or knowledge continuously and unconsciously throughout the entire process. The organization, however, must conduct the justification in a more explicit way to check if the organizational intention is still intact and to ascertain if the concepts being generated meet the needs of society at large. The most appropriate time for the organization to conduct this screening process is right after the concepts have been created.
“For business organizations, the normal justification criteria include cost, profit margin, and the degree to which a product can contribute to the firm’s growth. But justification criteria can be both quantitative and qualitative … More abstract criteria may include value premises such as adventure, romanticism, and aesthetics. Thus, justification criteria need not be strictly objective and factual; they can also be judgmental and value-laden.”
It is striking that “justifying concepts” as a basis for knowledge is about evaluating or screening knowledge claims (statements) in the process of converting them into “tacit” knowledge (beliefs) for the purpose of psychologically justifying them; that, at least, is how I interpret the passage above. Though knowledge is characterized as “justified true belief,” the passage makes very plain that the emphasis is on belief and psychological justification, and not on truth at all. Where in “justifying concepts” are the epistemic evaluation criteria for selecting among contending knowledge claim networks? Where is the concern for seeking and finding true knowledge claim networks rather than false ones? Where is the concern with finding solutions to problems that reflect reality?
Upon closer inspection, we find that Nonaka and Takeuchi’s theory of truth is more concerned with the proximity of beliefs and claims to positions held by managers, than with closeness to reality. Consider the following statement of theirs (p. 87):
“In a knowledge-creating company, it is primarily the role of top management to formulate the justification criteria in the form of organizational intention, which is expressed in terms of strategy or vision.”
Earlier (p. 86), they contend that justification of “true beliefs” is measured “against the vision established by top management.” It should be clear, then, that according to Nonaka and Takeuchi, truth has little to do with reality, and instead is a function of how close beliefs or claims happen to come to the beliefs or claims held or expressed by managers – who of course could all be wrong.
The important point is that in treating “justifying concepts” as an “Internalization” process, Nonaka and Takeuchi (and Nonaka and his collaborators in other works as well) have bypassed the process of ‘open’ Knowledge Claim Evaluation, a process that selects among knowledge claims on the basis of their defensible correspondence with reality, and that never refers to, or relies upon, the authority or rank of a claim’s proponents. Instead, Nonaka and Takeuchi seem to prefer a position which holds that (a) the beliefs or claims being transferred or “converted” in their SECI model are always true, (b) a political/psychological process seeking certainty in beliefs or claims is valid, and (c) commitment to those beliefs or claims on the basis of the rank of their originators is a preferred and sufficient basis for the justification they seek.
Such a process may build consensus and commitment; it may produce justification of one’s beliefs. But, it does not produce severe tests and evaluations for alternative knowledge claims. It does not produce the strongest solutions to our problems. It does not produce the growth of knowledge. And finally, it does not eliminate our bad ideas before they eliminate us. In short, it is not a recipe for creating knowledge that will more closely approach the truth. Instead, it is a recipe for creating comfortable knowledge claim networks that we can all agree upon, whether or not these are the best networks for helping us to adapt to the challenges we will surely face.
In future posts, I’ll return to the topic of Knowledge Claim Evaluation and will raise the issue of whether we can or need to justify our knowledge claims or statements at all.
Tags: Epistemology/Ontology/Value Theory · KM Methodology · Knowledge Making · Knowledge Management

From the beginning of KM there’s been remarkably little focus on metrics and measurement, and in particular on metrics of KM impact. This lack of focus is in line with a certain anti-scientific orientation that has appeared in KM, associated with the philosophies of post-modernism and social constructivism. It is also in line with the field’s rejection of the idea that KM projects need to be justified by pointing to concrete results, and its adoption of a position which seems almost to say that KM is like the furniture in an organization: its impact is hard to measure, but without it an organization is hard-pressed to survive. Further, the neglect of metrics and measurement is also in line with the difficulty and unpleasantness of developing frameworks and architectures for measurement in an applied social systems field like Knowledge Management. The key terms of KM are abstractions. Change in them is not directly or easily observable. To relate changes in our experience to changes in these abstractions, we often need complex measurement models, and there aren’t many KM practitioners with the background and training to develop such models.
There are still other difficulties to note. Dave Snowden talks about the extreme reactivity of many indicators and the ease with which they can be “gamed.” He is right. Simple indicators can be gamed, and this too argues for more complex measurement models that are non-reactive and hard to game, because gaming them would exact too high a price from the “gamers” in their everyday organizational interactions. In addition, the most important thing to measure in KM is the impact of KM interventions. This, however, introduces another difficulty, because measuring impact requires measuring change over time and modeling influence relations. Nor is this all. To measure impact we also need to project a counterfactual (the expected result of a scenario in which we don’t intervene) and compare it with the measured state of the target we’ve been trying to influence after we intervene. All this requires a methodological and technical sophistication that we have rarely seen employed in KM to date. Nevertheless, it is all necessary to measure impact.
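As an illustration, the counterfactual comparison just described can be sketched in a few lines. Everything here is hypothetical: the `linear_trend` and `estimated_impact` helpers and the toy metric values are mine, not from any KM case, and a real measurement model would be far richer than a projected linear trend.

```python
# Hypothetical sketch: estimate the impact of a KM intervention by
# comparing measured post-intervention values of a target metric against
# a counterfactual projection (here, a linear trend fitted by least
# squares to the pre-intervention observations).

def linear_trend(series):
    """Fit y = a + b*t by least squares over t = 0..n-1; return (a, b)."""
    n = len(series)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_y = sum(series) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series)) / \
        sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    return a, b

def estimated_impact(pre, post):
    """Measured post-intervention values minus the counterfactual
    (the pre-intervention trend projected forward over the post period)."""
    a, b = linear_trend(pre)
    n_pre = len(pre)
    counterfactual = [a + b * (n_pre + i) for i in range(len(post))]
    return [m - c for m, c in zip(post, counterfactual)]

# Toy data: a problem-resolution metric drifting up +1 per period before
# the intervention, then jumping after it.
pre = [10.0, 11.0, 12.0, 13.0]
post = [18.0, 19.5]
print(estimated_impact(pre, post))  # -> [4.0, 4.5]; counterfactual was [14, 15]
```

The point of the sketch is only the shape of the argument: without the counterfactual line, the raw post-intervention values of 18.0 and 19.5 would overstate the intervention's contribution, since some of the change was trend.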
Finally, yet another difficulty in measurement is caused by the persistent tendency in KM to confound KM activities and outcomes with knowledge processing and its outcomes. It’s easy to understand this problem by looking at the three-tier model below.

When we do see metrics in KM projects, they often relate KM activities and direct outcomes to effects on business processes and their outcomes. That is, KM activities are related to business metrics, but such studies don’t develop any measurement models or metrics relating KM to the middle tier: knowledge processing and its outcomes. The problem is that failing to trace impact through the middle tier makes it harder to show that any post-intervention outcomes are actually due to KM. Now sometimes this omission is not important in justifying one’s project. For example, in the Partners HealthCare case it’s very hard to deny that reductions in the negative impact of errors in ordering prescriptions were due to the KM intervention, which re-structured the ordering process and elicited a growth in the problem recognition and problem solving surrounding it. In other cases, however, particularly those involving the much more common ecological approach to KM, the relationships among KM, knowledge processing and outcomes, and business processing and outcomes are much more complex and are not disentangled in the cases. So it is much harder for them to establish KM impact, either positive or negative in character.
In spite of all these difficulties besetting the task of developing measurement models and metrics in KM, I don’t think the field will progress very much unless this development takes place. If we can’t measure impact, we can’t show it; and if we can’t show impact, no one will ever take KM seriously. So I think we had better begin to do, and to write about, a lot more measurement, and we ought to start immediately, so that when the time comes to evaluate the newly minted Web 2.0-based interventions, we can say just how successful they are without yet another generation of arm-waving.
Tags: KM 2.0 · KM Methodology · KM Techniques · Knowledge Making · Knowledge Management
August 30th, 2008 · Comments Off on Doing KM and Calling It Something Else

In a recent article in Knowledge Management Research and Practice, I suggested that the lack of agreement on what KM is leaves four possibilities:
1. People can be doing KM and calling it KM;
2. People can be doing KM and calling it something else;
3. People can be doing non-KM and calling it KM; or
4. People can be doing non-KM and calling it non-KM.
And I also pointed out that if the ratio of what’s in the first category to the sum of the second and third is less than 1, then we may have a serious distortion of the track record of KM on our hands.
Now, I’m pretty sure that category 3 is awfully large, at least from the perspective of my own definition of KM as management activity intended to enhance knowledge processing, since I see all sorts of “KM projects” that are no more than attempts to enhance collaboration, content management, customer relationship management, or information sharing. However, what about category 2? Are many companies doing KM and calling it something else? I’m not in a position to answer this question definitively, and perhaps no one in KM currently is. However, perhaps we should start identifying those organizations with KM programs that have labeled them something else. If we begin to do this whenever we see such cases, eventually we’ll shed light on category 2, and begin to get a better picture of the real state of KM.
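To make the arithmetic of the distortion test concrete, here is a minimal sketch. The `km_label_ratio` helper and the category counts are invented for illustration; no survey with these numbers exists.

```python
# Hypothetical sketch: classify observed projects into the four categories
# above and compute the ratio of correctly labeled KM (category 1) to
# mislabeled work (categories 2 and 3). A ratio below 1 suggests the
# visible track record of "KM" is seriously distorted.

def km_label_ratio(counts):
    """counts maps category number (1-4) to a project count.
    Returns count(1) / (count(2) + count(3))."""
    mislabeled = counts[2] + counts[3]
    if mislabeled == 0:
        return float("inf")  # no mislabeling observed at all
    return counts[1] / mislabeled

# Invented counts: 20 projects doing KM and calling it KM, 15 doing KM
# under other names, 35 doing non-KM but calling it KM.
counts = {1: 20, 2: 15, 3: 35, 4: 100}
ratio = km_label_ratio(counts)
print(ratio)       # -> 0.4
print(ratio < 1)   # -> True: the track record would be distorted
```

Category 4 (non-KM called non-KM) never enters the ratio; it is bookkeeping only, which matches the argument above that the distortion comes from the two mislabeled categories.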
Tags: Knowledge Management
August 29th, 2008 · Comments Off on Collaboration, KM 2.0, and Knowledge Processing

According to many, KM 2.0 means introducing social media tools to improve connectivity, which builds relationships and trust, which in turn yields better communication and knowledge transfer. This is a simple theory. But it is at the heart of the claim that social computing tools will deliver more success in knowledge sharing than previous KM efforts that did not use them. There is a missing link in this theory, however: the further assumption that increased connectivity will lead to increased collaboration, which in turn will lead to increased knowledge sharing. This sounds neat enough, but I think it has a big hole running through it.
Collaboration refers to people working together to reach a common objective or goal. Enhancing collaborative processes focuses on locating and putting people together who may have a predisposition to collaborate. It also focuses on providing “spaces” enabling collaboration, or on providing means for creating collaborative processes, or on tracking collaborative processes and the results of collaboration, all of which are part of what KM 2.0 is about.
But enhancing collaborative processes is not itself about the purposes of collaboration or increased connectivity. It is neutral about those purposes. Collaboration enhancement prioritizes maintaining collaboration, not necessarily accomplishing the goals of collaboration. This can bring collaboration enhancement into conflict with knowledge processing, since the primary purpose of knowledge processing is to solve problems and integrate the solutions, not to maintain collaboration. In knowledge processing, collaboration is the means to individual, group, and organizational learning. It is an important means, and when combined with content processing and management, it provides powerful background conditions for successful knowledge processing. But in the end it is only a means to create knowledge, and it is a means that must be harnessed to the various primary foci of knowledge processing, which I have listed on numerous occasions in previous writings.
In addition, Collaboration Management and KM are closely related in certain respects. Certainly, the activity categories of KM all involve collaboration and its management to one extent or another. But KM is much more than collaboration management, and its primary purpose is not to enhance collaboration but to enhance knowledge processing. So the implementation of Web 2.0 in the enterprise, while it may indeed increase connectivity and collaboration, is not directly about knowledge processing and KM, and those who think it is are making the same kind of error as previous writers in KM who mistook collaboration portals for knowledge portals and Collaboration Management for KM. It is another instance of loose thinking in KM, an attempt to “fuzz up” the distinction between different categories in hopes that people will believe one has found a new solution to a long-standing problem: the problem of finding an IT tool or class of tools that defines the next generation of KM. Unfortunately, this problem is insoluble, simply because KM generations are not determined by changes in IT tools, but by changes in KM conceptual paradigms.
Tags: KM 2.0 · KM Software Tools · Knowledge Management
August 26th, 2008 · Comments Off on Why Don’t We Write Much About KM Approaches?

Here’s another post on “why don’t we write much about ______?” This one deals with approaches to KM interventions. In my “On Doing Knowledge Management”, I distinguished two basic approaches that may encompass all KM interventions. First, there are interventions introducing strategies, policies, programs, techniques, and tools that enhance knowledge processing by enhancing the background conditions affecting both operational and knowledge processing Decision Execution Cycles (DECs), in such a way as to enable more effective knowledge processing by people. And second, there are interventions introducing strategies, policies, programs, techniques, and tools that enhance knowledge processing by interrupting the DECs of people. My memory may be failing me, but I don’t remember any other basic types of approaches to KM interventions. Can anyone add to this classification? I’d like to know that, but even more, I’d like to know why people don’t write much about basic types of KM interventions. Does anyone know the answer to that question?
Tags: KM Techniques · Knowledge Management
August 25th, 2008 · Comments Off on Why Don’t We Write Much About KM Policies?

In my last post, I asked why we in KM don’t write much about KM strategies. Here I ask the same question with respect to policies. Policies and strategies are not the same. A high-level plan for achieving strategic goals and objectives, and ultimately a strategic vision, may include a number of policies. On the other hand, Knowledge Managers may have policies they implement without explicit strategies. So, again, why don’t we see much discussion of KM policies? And why don’t we think much about them? Here are some examples of the variety of KM policies:
— Enhance Knowledge Claim Evaluation by supporting Fair Critical Comparison
— Make KM the handmaiden of enterprise strategy
— Reinforce self-organizing tendencies in all areas of knowledge processing
— Enhance Knowledge Sharing activity in the Enterprise
— Create an Enterprise supporting sustainable innovation
— Support informal and open knowledge processing in CoPs, Blogs, Social Networks, teams, and informal groups
— Create and support ethodiversity among Enterprise Staff
— Disrupt established patterns of knowledge processing
Tags: KM Techniques · Knowledge Integration · Knowledge Making · Knowledge Management