From the beginning of KM there has been remarkably little focus on metrics and measurement, and in particular on metrics of KM impact. This neglect is in line with a certain anti-scientific orientation that has appeared in KM, associated with the philosophies of post-modernism and social constructivism. It is also in line with the field's rejection of the idea that KM projects need to be justified by pointing to concrete results, and with the adoption of a position that seems almost to say that KM is like the furniture in an organization: its impact is hard to measure, but without it an organization is hard-pressed to survive. Further, the neglect of metrics and measurement is also in line with the difficulty and unpleasantness of developing frameworks and architectures for measurement in an applied social systems field like Knowledge Management. The key terms of KM are abstractions; change in them is not directly or easily observable. To relate changes in our experience to changes in these abstractions, we often need complex measurement models, and few KM practitioners have the background and training to develop such models.
There are still other difficulties to note. Dave Snowden talks about the extreme reactivity of many indicators and the ease with which they can be "gamed." He is right. Simple indicators can be gamed, and this too argues for more complex measurement models that are non-reactive and resistant to gaming, because gaming them would exact too high a price from the "gamers" in their everyday organizational interaction. In addition, the most important thing to measure in KM is the impact of KM interventions. This introduces another difficulty, because measuring impact requires measuring change over time and modeling influence relations. Nor is this all. To measure impact we also need to project a counterfactual, the expected result of a scenario in which we don't intervene, and compare it with the measured state of the target we have been trying to influence after we intervene. All this requires a methodological and technical sophistication that we have rarely seen employed in KM to date. Nevertheless, it is all necessary to measure impact.
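The logic of the counterfactual comparison can be sketched in a few lines. The following is a minimal illustration, not a recommended measurement model: the linear-trend projection, the metric (an error rate), and all the numbers are assumptions invented for the sketch.

```python
# Hypothetical illustration: estimate KM-intervention impact by comparing an
# observed post-intervention metric with a projected counterfactual (the
# expected trajectory had we not intervened). Numbers and the linear-trend
# baseline are assumptions for the sketch, not real data or a real model.

def linear_trend(series):
    """Least-squares slope and intercept for evenly spaced observations."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def projected_counterfactual(pre_series, periods_ahead):
    """Project the pre-intervention trend forward: the no-intervention scenario."""
    slope, intercept = linear_trend(pre_series)
    start = len(pre_series)
    return [intercept + slope * (start + t) for t in range(periods_ahead)]

# Quarterly error rate per 1,000 orders before the intervention (made up):
pre = [12.0, 11.5, 11.2, 10.9]
# Observed values for the two quarters after the intervention (made up):
observed = [8.1, 7.6]

expected = projected_counterfactual(pre, periods_ahead=2)
impact = [obs - exp for obs, exp in zip(observed, expected)]
# Negative values mean errors fell below the no-intervention projection.
print([round(v, 2) for v in impact])
```

The point of the sketch is only the structure of the claim: impact is the gap between what we measured and what the counterfactual projects, so without a defensible projection there is no impact estimate at all.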
Finally, yet another difficulty in measurement is caused by the persistent tendency in KM to confound KM activities and outcomes with knowledge processing and its outcomes. It’s easy to understand this problem by looking at the three-tier model below.
When we do see metrics in KM projects, they often relate KM activities and direct outcomes to effects on business processes and their outcomes. That is, KM activities are related to business metrics, but such studies don't develop measurement models or metrics relating KM to the middle tier, knowledge processing and its outcomes. The problem is that impact is never traced through the middle tier, making it harder to show that any post-KM-intervention outcomes are actually due to KM. Sometimes this omission is unimportant in justifying one's project. For example, in the Partners HealthCare case it is very hard to deny that the reduction in harm due to prescription-ordering errors was caused by the KM intervention, which restructured the ordering process and elicited growth in the problem recognition and problem solving surrounding it. In other cases, however, particularly those involving the much more common ecological approach to KM, the relationships among KM, knowledge processing and its outcomes, and business processing and its outcomes are much more complex and are not disentangled in the cases. So it is much harder for them to establish KM impact, whether positive or negative in character.
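The tracing requirement can be made concrete with a toy check over the three tiers. The tier names follow the text; the example metrics are invented, and a real measurement plan would of course need far more than non-empty metric lists at each tier.

```python
# Hypothetical sketch of the three-tier model: a KM measurement plan can only
# support an impact claim if it has metrics at every tier, rather than jumping
# from KM activities (tier 1) straight to business outcomes (tier 3).
# Tier names follow the text; the example metrics are invented.

TIERS = ("km_activities", "knowledge_processing", "business_outcomes")

def traceable(measurement_plan):
    """True only if the plan measures something at every tier."""
    return all(measurement_plan.get(tier) for tier in TIERS)

# A typical study: KM activities related directly to business metrics,
# with the middle tier left unmeasured.
typical_study = {
    "km_activities": ["communities of practice launched"],
    "knowledge_processing": [],          # the omitted middle tier
    "business_outcomes": ["cycle time", "error rate"],
}

full_model = {
    "km_activities": ["communities of practice launched"],
    "knowledge_processing": ["problems recognized", "solutions validated"],
    "business_outcomes": ["cycle time", "error rate"],
}

print(traceable(typical_study))  # False: impact can't be traced through the middle tier
print(traceable(full_model))     # True
```

The check is deliberately trivial; its only purpose is to show where the typical study fails: the middle tier is empty, so any observed business-outcome change cannot be attributed to the KM activities through a measured knowledge-processing path.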
In spite of all these difficulties besetting the development of measurement models and metrics in KM, I don't think the field will progress very much unless that development takes place. If we can't show impact, we won't be able to claim impact. And if we can't claim impact, no one will ever take KM seriously. So I think we had better begin spending a lot more time doing, and writing about, measurement and metrics, and we ought to do that immediately, so that when the time comes to evaluate the newly minted Web 2.0-based interventions, we can say just how successful they are without yet another generation of arm-waving.