
Having discussed both the difficulties in evaluating KM activities and the different approaches to KM in my last two blogs in this series, I’ll now consider the implications of those approaches, combined with those difficulties, for the proper organization of the KAO’s evaluation function. The Decision Interruption Approach greatly alleviates three of the four difficulties and makes evaluation more straightforward. In that approach, the problems motivating interventions are tied to a history of specific operational errors or deviations from expectations. In the Partners HealthCare case, serious medical errors, including fatalities and serious side effects, resulted from errors in prescriptions. The impact of the KM effort in this case can be described by noting that: (1) serious medical errors in order entry were reduced by 55 percent; (2) use of beneficial drugs increased 7-fold; and (3) successful dosages increased 11-fold.
Because of the sharp focus of the intervention on a specific decision type, and because the interventions are undertaken in the expectation that enhancements in knowledge processing will reduce medical errors, it is very implausible to attribute a result like the one at Partners HealthCare to anything other than the system introduced by the KM intervention and its impact on knowledge processing. But if further evidence were needed, it would be easy in a case of this kind to measure growth over time in problem recognition; critical evaluations of existing knowledge; new knowledge; interactions between different levels of the organization; and levels of confidence in the knowledge base, as well as the impact of changes of this sort on indicators like the frequency and severity of medical errors. In short, this approach makes it easier to trace the influence from the intervention through knowledge processing to the decisions themselves and their outcomes, and it simplifies the relationship between ecological changes and knowledge enhancements. It also makes it easier to develop middle-tier knowledge processing and knowledge outcome metrics. Even though the fourth problem of evaluation is not solved by this approach, evaluation techniques that translate impacts into non-monetary benefits and costs are much more easily applied when impacts are clearly traceable.
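To make the middle-tier measurement idea concrete, here is a minimal sketch, with entirely invented numbers rather than the actual Partners HealthCare data, of tracking one such indicator, monthly order-entry errors, before and after an intervention:

```python
# Hypothetical illustration: comparing a middle-tier indicator
# (monthly order-entry error counts) before and after a
# Decision Interruption intervention. All numbers are invented.

before = [52, 48, 55, 50, 47, 53]   # errors per month, pre-intervention
after  = [30, 26, 24, 22, 21, 19]   # errors per month, post-intervention

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
reduction = (mean_before - mean_after) / mean_before

print(f"Mean monthly errors before: {mean_before:.1f}")
print(f"Mean monthly errors after:  {mean_after:.1f}")
print(f"Relative reduction: {reduction:.0%}")
```

The same pattern extends to the other indicators listed above; the point is simply that a sharply focused intervention gives each indicator a clear before-and-after baseline.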
Since evaluation in the context of the Decision Interruption Approach is relatively straightforward, the methods and perspectives used can be relatively informal. The need for complex statistical analysis or simulation in the process of evaluation is reduced, as is the need for formal measurement models and multiple indicator designs as a foundation for impact analysis, compared with the other approaches. On the other hand, capabilities for evaluating team and committee design, and for evaluating the IT aspects of the decision interruption process and the design and maintenance of knowledge bases, are necessary. Finally, since the effect of repeated and widespread use of this approach would be to create an organization steeped in the habits and norms of distributed problem solving and knowledge production, there would also need to be a capability for measuring the spread of such habits and norms over time using narrative techniques and the analysis of narrative databases. This capability would include statistical analysis and modeling.
Like the Decision Interruption Approach, and unlike the Ecological Approach, the Expectations Gap Approach (EGA) alleviates the most serious difficulties of impact measurement. One reason for this is that the EGA is always an effort designed to upgrade the problem solving and knowledge sharing skills of people relative to particular problems or problem areas. These areas become the target of KM interventions because there are continuing operational failures of expectations that have resisted problem solving efforts, and those operational failures are usually already measured, whether by accident rates, declining sales, or lost workdays. So, if a KM effort moves the performance measures substantially, it’s hard to deny that one is looking at a probable KM impact, especially when the improvement continues over time.
The classic cases of this occur in the quality field. Once a company implements training and mentoring in quality programs, and then emphasizes and enforces commitments to quality problem solving disciplines, the results of its efforts can be seen in declining defect rates in its products. We’ve seen this happen in automobile and electronics companies, and we’ve also seen that the companies most diligent in developing “deep knowledge” about quality are the companies where defect rates continue to fall.
Another example is provided by Alcoa (see Spear’s detailed account). Under the leadership of Paul O’Neill, Alcoa decided to implement a program in the Environmental Health and Safety (EHS) area and set as its goal reducing the rate of workplace injuries to zero. At the heart of the program was training and mentoring managers and workers to see problems, immediately report them, swarm and solve them, immediately apply the solutions, reflect on the results, and share the knowledge gained from the problem solving process. The training and mentoring in initiating and performing Knowledge Life Cycles around EHS problems was supplemented with reporting and documentation requirements, and also with rather strict enforcement of the policy of looking for, seeing, and solving problems. The outcome measure, the percentage of workers suffering on-the-job injuries each year, declined from 2 percent to 0.07 percent over the 20-year period ending in 2007, and the chance of serious injury over a career fell from 40 percent to 2 percent. By comparison, in manufacturing companies other than Alcoa, the career risk of losing a workday fell only from 68 percent to 40 percent.
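As a quick back-of-the-envelope check on what a decline of this size implies (this calculation is mine, not Spear’s), the drop from 2 percent to 0.07 percent over 20 years amounts to roughly a 15 percent compound annual reduction in the injury rate:

```python
# Back-of-the-envelope: implied compound annual decline in the
# injury rate, from 2% to 0.07% per year over 20 years.
start_rate, end_rate, years = 0.02, 0.0007, 20

annual_decline = 1 - (end_rate / start_rate) ** (1 / years)
print(f"Implied compound annual decline: {annual_decline:.1%}")  # about 15.4%
```

Sustaining that rate of improvement for two decades is exactly the kind of continuing improvement over time that makes the attribution to the KM program hard to deny.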
Apart from operational measures like injury rates, the EGA also allows for developing middle-tier measures of changes in knowledge processing. Such measures can be generated from the reports and documents required by the EGA, and can include frequency and severity measures related to the problems; measures relating to the ideas generated and their quality; the reasons given for adopting certain “root causes” and solutions rather than others; and measures of the quality of the idea evaluation and knowledge sharing processes.
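As a minimal sketch of how such measures might be derived, assuming a hypothetical report format (the fields month, severity, and ideas_generated are invented for illustration, not a standard EGA schema), one could aggregate problem reports into monthly frequency, severity, and idea-generation measures:

```python
# Hypothetical sketch: deriving middle-tier frequency and severity
# measures from EGA problem reports. All report fields and values
# are illustrative.

from collections import defaultdict

reports = [
    {"month": "2009-01", "severity": 3, "ideas_generated": 4},
    {"month": "2009-01", "severity": 1, "ideas_generated": 2},
    {"month": "2009-02", "severity": 2, "ideas_generated": 5},
    {"month": "2009-02", "severity": 4, "ideas_generated": 3},
]

by_month = defaultdict(list)
for r in reports:
    by_month[r["month"]].append(r)

for month in sorted(by_month):
    rs = by_month[month]
    frequency = len(rs)                                # problems reported
    mean_severity = sum(r["severity"] for r in rs) / frequency
    ideas = sum(r["ideas_generated"] for r in rs)      # idea generation
    print(f"{month}: frequency={frequency}, "
          f"mean severity={mean_severity:.1f}, ideas={ideas}")
```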
Like the Decision Interruption Approach, the EGA does not directly treat the fourth difficulty, that of arriving at evaluations rather than just descriptions of impact. However, the ability to see impacts clearly, because of the connections among problems, solutions, and outcomes, makes it easier to arrive at evaluations too.
The Ecological Approach is the most popular of the approaches to which the term KM is applied. The EGA may be just as popular in the larger business context, but it is generally buried within the Quality Management approach and isn’t widely recognized as a conceptually distinct approach to KM. In any event, KM projects that have focused on Best Practices systems, Enterprise Portals, Communities of Practice, and most recently Enterprise 2.0 and social media interventions have relied on the notion that if we make a key change in the ecology of individual decision making, then knowledge processing, and especially knowledge sharing, will improve, and this, in turn, will lead to improved decision making and better organizational performance.
The four difficulties of evaluation I discussed earlier are most serious in the context of this Ecological Approach. First, the indirectness of the relationship between KM activity and business outcomes is particularly troubling, because the chain of influence from KM activity to downstream effects breaks off at the ecological change introduced by KM. For example, an intervention creates a CoP, but from there, whether it is knowledge or just information that is shared cannot be clearly shown. Furthermore, after whatever is shared gets shared, the downstream use of the “knowledge” in a decision or a business process cannot be clearly and easily traced. The same is true of Portal solutions, Best Practices systems, and other modifications of ecology. Once the change occurs, its connection to knowledge sharing, as opposed to information sharing, is vague, and the further connection between any information or knowledge emerging from the new ecological structure and operational decisions and business processes is hard to trace.
Second, the Ecological Approach is no better when it comes to providing middle-tier knowledge processing metrics. It has little to say about measuring whether or not we see problems, gives us little purchase for measuring whether we get more new ideas than before a KM intervention, and provides no help in measuring whether we’ve improved knowledge claim evaluation. Third, since it doesn’t clearly distinguish knowledge from information, it’s hard to disentangle the relationship between ecology and knowledge processing, and specifically, to measure whether “knowledge sharing” is actually enhanced by the ecological intervention, or whether any such “sharing” has an effect on business processes or outcomes. Fourth, since for the above reasons it’s very hard to describe and measure KM impact, it’s also very difficult to evaluate that impact in terms of non-monetary benefits and costs.
There are ways of getting past these difficulties of evaluating KM interventions using the Ecological Approach, but they require tempering the approach by channeling it toward specific problems or classes of problems, in the context of which knowledge processing and knowledge outcomes, and downstream business processes and outcomes, can all be connected. This wasn’t done at the World Bank, where the outcomes of its CoP program could not be related to the Bank’s operational function. It was, however, done at Halliburton, where every combined CoP/Portal intervention was tasked with contributing to the solution of problems in a specific business problem area. The Halliburton project was unorthodox in the way it compromised self-organization in communities by determining the overall problem solving purpose of a CoP before it was established. On the other hand, all the Halliburton interventions could be evaluated as successful, because this orientation toward specific problems allowed the KM interventions to be designed with multiple specific, explicitly stated, balanced scorecard-like indicators of both knowledge processing and operational success. It was then relatively easy to tell a persuasive story about impact, tying together the intervention, problem solutions, and implementation results, even if impacts could not be strictly measured in accordance with a rigorous statistical design and method.
In the absence of an ecological intervention tied specifically to operational problems, assessing the impact of KM interventions is still possible, but it is much harder work. It requires complex performance scorecards; elicitation (through Anecdote Circles or Most Significant Change techniques) and use of narrative databases and content analysis (in contrast to surveys, which I believe are ineffective for evaluation); and also computer simulation and multivariate statistical analysis methods. We need all of these because KM interventions in organizations often occur in a complex context where both environmental changes and other major initiatives, such as Data Warehousing, Data Mining, Business Intelligence (BI) and On-line Analytical Processing (OLAP), Business Performance Measurement (BPM), CRM, Enterprise Resource Planning (ERP), Collaboration Management, Content Management (CM), Document Management, e-Conferencing applications, e-Learning applications, and Enterprise Information Portals (EIPs), may be under way at the same time. Attributing either positive or negative changes to a KM intervention, rather than to one or more of these other factors, is very difficult without recourse to techniques like computer simulation and multivariate statistical analysis.
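To illustrate the multivariate side of this, here is a minimal sketch, with simulated data and invented effect sizes, of estimating a KM intervention’s effect on a monthly outcome while controlling for one concurrent initiative (a CRM rollout); a real analysis would need far more data, diagnostics, and care:

```python
import numpy as np

# Hypothetical sketch: attributing outcome changes to a KM intervention
# while controlling for a concurrent CRM rollout. All data are simulated.

rng = np.random.default_rng(0)
n_months = 36
km_live  = (np.arange(n_months) >= 18).astype(float)  # KM live from month 18
crm_live = (np.arange(n_months) >= 12).astype(float)  # CRM live from month 12

# Simulated monthly outcome: baseline + CRM effect + KM effect + noise
outcome = 100 + 3.0 * crm_live + 5.0 * km_live + rng.normal(0, 2, n_months)

# Ordinary least squares: outcome ~ intercept + crm_live + km_live
X = np.column_stack([np.ones(n_months), crm_live, km_live])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Estimated KM effect, net of CRM: {coef[2]:.2f}")
```

The point of the regression is simply that the KM coefficient is estimated net of the concurrent initiative, which is what a bare before-and-after comparison cannot do.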
The differences in the nature, methods, and difficulty of impact analysis and evaluation across the three primary KM approaches suggest that the Knowledge Accountability Office (KAO) should organize its evaluation function to take account of these differences. The work of evaluation and the skills needed will be very different for projects and programs using the Decision Interruption, Expectations Gap, and Ecological Approaches. Apart from KM policy analysts and subject matter experts, evaluation methodologists, IT-competent staff, and some statistical help will be needed for evaluating projects and programs using the Decision Interruption Approach. For the EGA, KM policy analysts and subject matter experts will need to be supplemented by anthropologists and content analysts. For the Ecological Approach, I think that KM policy analysts and subject matter experts will need assistance from community and social networking practitioners, supplemented by Enterprise 2.0 IT-competent staff, along with staff competent in computer simulation and statistical modeling and analysis.
This ends my development of the idea of the National Governmental KM Center, outlined in Part Two of this series, which I’ve also labeled the Knowledge Accountability Office (KAO). Next week, I’ll summarize this series in a final post.
To Be Continued