All Life Is Problem Solving

Joe Firestone’s Blog on Knowledge and Knowledge Management


A Brief Note on Fallibilism, and Popperian Falsificationism

February 24th, 2009 · 2 Comments


Since Karl Popper’s views on objective knowledge and scientific “logic” seem to be gaining a little traction in KM these days, I think it may be a good idea to offer a clear statement about his views on fallibilism and falsificationism, especially since I agree with them. Fallibilism is the idea that no knowledge claim, even one that is true, can be proved beyond doubt. And, by the way, that includes meta-knowledge claims asserting that a particular knowledge claim is false. As Popper put it: “By ‘fallibilism’ I mean here the view, or the acceptance of the fact, that we may err, and that the quest for certainty (or even the quest for high probability) is a mistaken quest.” As a fallibilist, Popper claimed that statements about the world could neither be proved nor disproved, if by “proof” one means “justification” of a knowledge claim as certainly true. [Read more →]

Tags: Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

The Glass is Half Empty

February 23rd, 2009 · Comments Off


Today, I have a quickie comment on US politics. Ryan Lizza had a striking profile on Rahm Emanuel in the New Yorker, which among other things recorded Rahm’s reactions to some critics of the Administration’s efforts on the stimulus package. Lizza puts things this way:

“They have never worked the legislative process,” Emanuel said of critics like the Times columnist Paul Krugman, who argued that Obama’s concessions to Senate Republicans—in particular, the tax cuts, which will do little to stimulate the economy—produced a package that wasn’t large enough to respond to the magnitude of the recession. “How many bills has he passed?” [Read more →]

Tags: Politics

The Problem Solving Pattern Matters: Part Ten, More On Enhancing Developing Solutions: Evaluating and Selecting Among New Ideas

February 22nd, 2009 · 2 Comments


(Co-Authored with Steven A. Cavaleri)

Here are some examples of criteria that may be used for comparing alternative solutions (i.e. decision models) in a Comparative Decision Making (CDM) context.

— Logical consistency (inconsistent decision models are invalid and must be reformulated)

— Empirical fit (competing models fit current and past data to varying degrees)

— Projectibility (models vary in their plausibility; models also vary in their after-the-fact success in prediction)

— Systematic fruitfulness (extent to which a decision model facilitates novel deductions)

— Heuristic quality (extent to which a decision model facilitates new conjectures)

— Systematic coherence and testability (coherence of statements relating abstractions in decision models, coherence of statements relating abstractions and concrete terms in decision models, and testability of expectations resulting from such coherence)

— Simplicity (economy in number of variables in a decision model, simplicity of mathematical form)

— Estimated risk of error in accepting a model rather than its alternatives. [Read more →]
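The criteria above can be read as a multi-criteria scoring scheme. As a minimal sketch (all names, weights, and scores here are illustrative assumptions, not part of the authors' framework), a CDM comparison might look like this, with logical consistency acting as a hard filter rather than just another score:

```python
# Hypothetical CDM sketch: score alternative decision models against the
# criteria listed above. Weights and scores are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class DecisionModel:
    name: str
    scores: dict = field(default_factory=dict)  # criterion -> score in [0, 1]

# Equal weights as a placeholder; a real CDM exercise would debate these.
CRITERIA = {
    "logical_consistency": 1.0,
    "empirical_fit": 1.0,
    "projectibility": 1.0,
    "systematic_fruitfulness": 1.0,
    "heuristic_quality": 1.0,
    "coherence_and_testability": 1.0,
    "simplicity": 1.0,
    "risk_of_error": 1.0,  # higher score = lower estimated risk
}

def compare(models):
    """Rank models, discarding any that fail logical consistency outright."""
    valid = [m for m in models if m.scores.get("logical_consistency", 0) > 0]
    def total(m):
        return sum(w * m.scores.get(c, 0.0) for c, w in CRITERIA.items())
    return sorted(valid, key=total, reverse=True)

a = DecisionModel("A", {c: 0.8 for c in CRITERIA})
b = DecisionModel("B", {c: 0.6 for c in CRITERIA})
c = DecisionModel("C", {"logical_consistency": 0.0})  # invalid: reformulate
ranking = compare([a, b, c])
print([m.name for m in ranking])  # → ['A', 'B']
```

Note that treating inconsistency as an outright filter, rather than a weighted penalty, mirrors the first criterion's wording: inconsistent models "are invalid and must be reformulated," not merely down-ranked.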

Tags: Epistemology/Ontology/Value Theory · Knowledge Making

National Governmental Knowledge Management: KM, Adaptation, and Complexity: Part Seven, Comments on A “Simple” Definition

February 21st, 2009 · 1 Comment


Before moving on to discussing in more detail how a National KM Center would coordinate information availability about KM and knowledge processing, I’d like to take a little time to write about a long-standing issue in KM: the issue of definition. In Part One of this series, I defined KM as activity intended to enhance knowledge processing, and then specified knowledge processing as problem seeking, recognition, and formulation; knowledge production; and knowledge integration. But in some circles one of the early definitions of Knowledge Management, “Getting the Right Information to the Right People at the Right Time,” is still current. Why not use this definition of KM for the Center? Of course, I’ll tell you why in this blog. [Read more →]

Tags: Epistemology/Ontology/Value Theory · Knowledge Integration · Knowledge Making · Knowledge Management · Politics

The Problem Solving Pattern Matters: Part Nine, Enhancing Developing Solutions: Evaluating and Selecting Among New Ideas

February 20th, 2009 · 2 Comments


(Co-Authored with Steven A. Cavaleri)

Alternative solutions, as we create them, are, in the end, alternative beliefs. The process of belief selection is ultimately Darwinian in character, and the final context of that selection is performing a solution and experiencing post-action outcomes. That is, when we act on the basis of our ideas or psychological predispositions, then reality will influence, and sometimes even determine, whether the expectations resulting from these ideas or predispositions fit our experience. If not, these will, sooner or later, change, until our reality-influenced (subjective) experience selects those of our ideas and predispositions whose associated expectations “match” our interpretations of the consequences of our actions.

This ultimate, post-action context of selection is not part of the PSP, however. Instead, it is part of the Operational Pattern (OP), because it actually follows, rather than precedes, a decision to accept a belief as a solution and a basis for action. So the questions arise: how do pre-action evaluations and selections of belief, which are part of the PSP, occur? What are they based on? And how may pre-action processes of evaluation and selection be enhanced?

We think there are three primary types of processes that people frequently follow in evaluating and selecting beliefs in the pre-action context. The first type is selecting a solution based on the authority of its source. The second is selecting a belief or a solution based on intuition, including pattern recognition. And the third is selecting one with the aid of a comparative analysis of alternative solutions relative to a set of perspectives or criteria the decision maker thinks can discriminate among alternative solutions according to the likelihood of their validity or invalidity. The first of these types we’ll call Authoritarian Decision Making (ADM), the second, Recognition-Primed Decision Making (RPD), and the third, Comparative Decision Making (CDM).

In ADM, we evaluate competing alternative solutions based on the authority of the source. Then we select the alternative solution suggested by the source with the greatest authority. Variations of ADM occur depending on the basis of authority involved. Selections can be based on political authority, intellectual authority, religious authority, or charismatic authority, as the case may be.

The basic notion of RPD is that humans prefer to “first-pattern-match” in decision making, and then proceed by what is, essentially, sequential trial and error — if the first pattern doesn’t match either their mental simulation of the likely consequences of their decision, or the actual consequences perceived in their post-decision experience. This is a bit different from animal decision making, since humans can mentally simulate the results of their contemplated decisions in much more complex and detailed ways than animals, which appear to be limited to relatively simple expectations about consequences. At bottom, RPD is intuitive decision making, though RPDs based on careful mental simulation are certainly different from RPDs based on an inchoate “gut feel.” This is so because an evaluation based on a mental simulation can provide a basis for deciding that there is likely to be a mismatch between the contemplated solution and reality, and therefore that it should not be implemented and a new one should be sought instead. This means that mental simulations accompanying RPD can lead to the refutation of solutions in a pre-action context, perhaps saving the costs of undesirable consequences that may result from implementing an invalid solution to a problem.
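The RPD loop just described — match the situation to a known pattern, mentally simulate the candidate solution, and fall back to the next pattern on a predicted mismatch — can be sketched in code. This is our illustrative stand-in, not Klein's actual model; every function and pattern here is an invented example:

```python
# Hedged sketch of the RPD loop: first-pattern-match, then sequential
# trial and error driven by mental simulation. All names are illustrative.

def recognition_primed_decision(situation, known_patterns, simulate):
    """Return the first candidate whose mental simulation predicts success.

    `known_patterns` is an ordered list of (matches, candidate_solution)
    pairs; `simulate(situation, solution)` returns True when no mismatch
    with reality is anticipated. Returns None if every candidate is
    refuted pre-action, signalling that a new solution must be sought.
    """
    for matches, solution in known_patterns:
        if not matches(situation):
            continue
        if simulate(situation, solution):
            return solution  # accepted as the basis for action
        # Simulation refuted the solution pre-action: try the next pattern.
    return None

# Toy usage: a fire where the first matching pattern fails its mental
# simulation (water on an electrical fire) and the second succeeds.
patterns = [
    (lambda s: "fire" in s, "use water"),
    (lambda s: "fire" in s, "use CO2 extinguisher"),
]
simulate = lambda s, sol: not ("electrical" in s and sol == "use water")
print(recognition_primed_decision({"fire", "electrical"}, patterns, simulate))
# → use CO2 extinguisher
```

The point of the sketch is the ordering: unlike CDM, no set of alternatives is ever compared side by side; candidates are considered one at a time, and simulation serves only to refute or accept the current one.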

In CDM, humans create a number of decision alternatives, and then, in the same pre-action context, comparatively evaluate them and select the best option — or, according to some notions, “the optimal decision.” In the past 30 years, much research has shown that decision makers rarely use CDM (most often, and we think erroneously, referred to as “Rational Decision Making” (RDM)), but prefer RPD, and sometimes other forms of “Naturalistic Decision Making” (NDM). The most well-known research of this kind has been performed by Gary Klein and his collaborators at Gary Klein Associates. This research has shown that RPD is functional in situations where CDM is either impossible or impractical to carry out, and it also raises the possibility that RPD is the kind of decision making we ought to employ in most situations, restricting CDM to relatively rare cases where the time, resources, and possible high benefit/cost ratio of a CDM procedure outweigh its far greater costs to implement.

Leaving aside the question of whether one ought to employ RPD rather than CDM in most situations, we’ll summarize by saying that ADM selection is based on authority, RPD is based on intuition and mental simulation, and CDM uses whatever perspectives and criteria a decision maker or group of decision makers develop to perform their comparative evaluations. In addition, both RPD and CDM forms of selection can benefit from “safe-fail experiments” that test solutions developed using either approach. The two central characteristics of safe-fail experiments are (a) that their failure risks little, and (b) that their failure illuminates what is wrong with solutions, enabling “learning from error.” However, safe-fail experiments take time to perform and complete. Decision situations in which only an RPD approach is practical may preclude both safe-fail experiments and CDM approaches, leaving only RPD supported by mental simulation as the basis for accepting a solution.

Saying, as we just did, that CDM selection employs various perspectives and criteria, as well as multiple alternative solutions, barely touches the issue of variation in the ways CDM selection may be done. There is great variation in the perspectives and criteria people use to compare alternative solutions, and even in the guiding regulative ideal and necessary conditions underlying CDM comparisons. In our next blog in this series we’ll discuss some of these variations.

To Be Continued

Tags: Epistemology/Ontology/Value Theory · Knowledge Making

National Governmental Knowledge Management: KM, Adaptation, and Complexity: Part Six, A National KM Research Center

February 19th, 2009 · Comments Off


Last July I wrote two posts on National Governmental Knowledge Management. In the first, I made the case that there was a need to organize and implement formal KM in National Governments to see whether it can produce an ecology of rationality that will work to enhance knowledge processing, knowledge and adaptation throughout such Governments. In the second, I considered the alternative of formal, decentralized, “local KM,” and its considerable advantages while also pointing to its three crippling disadvantages: the strategy exception error, the absence of any way to prevent stove-piping, and the failure to provide a structure for regulating KM performance in the Government that connects KM to a recognized fiduciary authority.

I then proposed a possible organization for National KM in which one adds to decentralized KM, a National Government KM Center for

— 1) Performing KM Research and Development;

— 2) Coordinating information availability about KM and knowledge processing, including information about KM R&D performed elsewhere, both in and outside the Government;

— 3) Funding KM programs and projects across the National Government; and

— 4) Evaluating the impact of KM and knowledge processing activity across the decentralized, partially self-organizing clusters of KM activity.

In other words, this Center would be a combined “clearinghouse,” KM scientific research center, funding source for programs and projects, and evaluation agency.

In the next series of posts, I’ll expand my thinking about the National Government KM Center (perhaps it might be called the Knowledge Accountability Office, or KAO) by visualizing in a bit greater detail the activities of the proposed Center in each of the above four areas. I’ll begin in this post with its activities performing KM research and development. Why should the KAO have a research component? The answer is that KM, as a formal discipline, is only about 20 years old and doesn’t yet have a settled body of knowledge. A National KM Research Program located in the Center would provide a focal point for self-organization in KM research nationally. Research at the Center would be independent of research priorities set in the Executive Branch, because the KAO would be independent of the Executive. Instead, it would focus on research problems that are critical for enhancing adaptive capability, whether or not such research contributed to the problems of the moment.

A National KM Research Center would increase the status and popularity of KM research and serve as a stimulus for research programs in KM at Universities across the United States. Over a period of years it would help us to grow our knowledge about KM and the impact KM activities can have in enhancing performance of the various aspects of knowledge processing – problem seeking, recognition, and formulation, problem solving (knowledge production), and knowledge integration in organizations and in the Government. In short, it would grow our knowledge about how to enhance the Government’s adaptive capacity so that it can cope with the myriad neglected challenges facing the United States.

In its research function, the National KM Research Center would operate like a National Laboratory, but with a specialization in creating knowledge about how to enhance Knowledge Management and Knowledge Processing. The agenda of the Center will depend on the conceptual framework it will use to map out the scope of the disciplinary concerns of KM, as well as on its evaluation of the most urgent needs of the Federal Government in enhancing its adaptive capability.

Given the wide-ranging disagreements in KM over the scope of the field, the Center will need to begin with an accelerated project to create a KM/knowledge processing conceptual framework and to prioritize research needs within the context of that framework. Since the very basis of the National KM Center is the idea that KM is activity intended to enhance knowledge processing, including problem seeking, recognition, and formulation; problem solving (knowledge production); and knowledge integration, the framework will need to begin with these conceptual commitments – commitments to a variant of Second Generation KM concerned both with making and integrating knowledge, rather than to a variant of a First Generation, relatively narrow, knowledge sharing orientation. Beyond these conceptual commitments, the rest of the framework ought to be formulated by the National KM Research Center in its initial accelerated research program. It ought to involve the broadest possible participation in this program and to consider all of the major Second Generation KM conceptual frameworks, as well as any elements of First Generation conceptual frameworks that are not acknowledged in the broader Second Generation frameworks. I hesitate to suggest specific methods for eliciting broad participation, but certainly Web 2.0 tools will be useful in surveying frameworks, and group facilitation methods will be useful in synthesizing and prioritizing them.

Finally, I also suggest that the following are key questions to be considered in this foundational framing effort:

— What is knowledge and how do you distinguish it from information?

— How can we tell when information becomes knowledge?

— How ought we to select among competing knowledge claims to create knowledge?

— Why is it that knowledge can’t be commanded into existence?

— What is the Complex Adaptive Systems backdrop of the social processes of KM, problem formulation, knowledge production, integration, and use?

— Where does knowledge fit into this context?

— Can the growth of knowledge be predicted?

— How do intelligent agents solve problems, learn, and produce knowledge they can use?

— What is the character of mental knowledge? Is it tacit, implicit, explicit? Situationally tied? Predispositional?

— Can we provide a foundation for analyzing the impact of KM by making clear what KM is and what it is not?

— How can we usefully specify the targets of KM in knowledge processing and its ecology to support auditing and benchmarking prior to KM interventions?

— How should we segment KM activities in a way that will be useful for impact analysis?

— How should we specify the KM and knowledge processing conceptual framework to support metrics development for KM impact analysis?

— How can we specify a conceptual framework so that it will support comprehensive evaluations of KM Impact in terms of both economic and non-economic benefits and comparisons of the two on a common scale of measurement?

— What is sustainable innovation and how can we conceptualize it in terms of our Second Generation conceptual framework?

— What should be the comprehensive goal or normative vision of National KM policies and programs seeking to maximize transparency and continuous and effective problem solving?

— How can we change organizations so that all participants may contribute to (distributed) problem solving and adaptation, while still maintaining the authority and integrity of management?

Future blogs in this series will fill in more detail on the other three aspects of the National KM Center.

To Be Continued

Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Integration · Knowledge Making · Knowledge Management · Politics

KMCI On-Line Press Publishes Ground-breaking Paper on the Foundations of Organizational Knowledge

February 18th, 2009 · Comments Off


February 17, 2009. Alexandria, VA — EIS and KMCI are proud to announce the release of a new White Paper co-authored by Richard A. Vines of Project Lessons – Strategic Solutions of Melbourne, Australia, William P. Hall of the Australian Centre for Science, Innovation, and Society, Melbourne, Australia, and Luke Naismith of Knowledge Futures, Dubai, entitled:

“Exploring the Foundations of Organizational Knowledge: An Emergent Synthesis Grounded in Thinking Related to Evolutionary Biology”

Vines, Hall, and Naismith offer the following Abstract introducing their White Paper.

Prevailing views about what constitutes organisational knowledge need to be systematically evaluated at deep epistemological levels. We argue there is a need to establish a new paradigm comprising both a theoretical and an ontological foundation for thinking about knowledge epistemologies. We think, along with Bill McKelvey (1997, 2002), that the “science of management” as it relates to organisations seems to be greatly wanting.

Our approach is based on an evolutionary theory of knowledge contained within Karl Popper’s later epistemological works beginning with his 1972 “Objective Knowledge – an evolutionary approach” and a framework of organisational theory based on Maturana and Varela’s concept of self-producing complex systems (“autopoiesis”). We have drawn upon this combined approach in order to understand how best to integrate understandings of personal and objective knowledge and the notion of “living organisations” into a new paradigm of organisational knowledge.

A model that is congruent with this new paradigmatic approach is detailed and discussed. This model is designed to provide a general overview of the different types of knowledge that give rise to organisational knowledge.

Importantly, we highlight that all explicit knowledge held in organisations encoded in analogue or digital form (content) is in fact inert. Equally, we claim that calling such content knowledge objects is dependent upon the type and role of the social systems within which such content is created, reviewed and evaluated. In general terms, knowledge objects cannot be regarded as “living knowledge” unless the filter of human interpretative intelligence is applied to generate meaning from these objects or, increasingly, unless such intelligence is built into dynamic processes and systems within the organisation. Therefore, we claim that the human aspects of managing knowledge are of significant importance. We suggest that the metaphor of “organisational boundary as membrane” is an important element of organisational knowledge. This is because different types of flows and exchanges that cross the boundaries of organisations over periods of time are fundamental to how an organisation sustains its ability for self production and self-control. We claim, in conclusion, that these features of organisational knowledge have crucial implications for how different types of knowledge are best managed.

This paper relates to a PowerPoint presentation made at the actKM National Conference in September 2007. The ideas presented have been developed within a group of collaborators interested in developing a synthesis of approaches that embraces knowledge management and its links with organization theory, autopoiesis, and Karl Popper’s evolutionary epistemology.

Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

National Governmental Knowledge Management: Part Five, More on First and Second Generation KM

February 17th, 2009 · Comments Off


This post replies to a comment by Professor Steve Cavaleri, my co-author in the PSP blog series, on Part Three of this NGKM series.

Steve:“. . . many people believe that First Generation KM is “KM” — ‘in its entirety’ and Second Generation approaches are not KM.”

Joe: That’s true, but the issue is not what people believe, but how we should explicate the term KM in order to develop the KM field in the most useful way. Limiting KM to its First Generation meaning is not a viable way to explicate “KM.” I’ve provided some of the reasons why it’s not viable in another place.

Steve: “Second, it is plausible that others do not see any reason or value for distinguishing between first and second generations of KM.”

Joe: Again, I agree that many think there’s no value in making that distinction, but some reasons why it’s valuable to make the distinction are also given here.

Steve: “I believe a central issue is that First Generation KM is perceived as being simpler to manage than Second Generation KM and is viewed as having a clearer, more definitive Return on Investment.”

Joe: Again, that perception is out there, but it is incorrect. The history of KM shows that a high percentage of First Generation KM projects and programs have failed. Since First Generation KM often comes down to an approach devoted to enhancing knowledge sharing, I think the criticisms of a knowledge sharing approach to KM I’ve given here show that First Generation KM doesn’t offer a more definitive return on investment at all, but only confusion of information sharing with knowledge sharing.

Steve: “As Steven Spear notes in his articles and books, ultimately, systems such as the Toyota Production System (TPS) become more efficient than others because they are continually improved, redesigned, and debugged by workers who focus on problem solving, and because they foster innovation. Second Generation KM is very compatible with such an approach because there are cultural supports, job designs, and knowledge processing already built into the system.”

Joe: I agree. But, even more, I think that Second Generation KM is built into the TPS, and is the foundation of its Quality Management processes. See an earlier blog in the Problem Solving Pattern series we are writing together reviewing Steven Spear’s Chasing the Rabbit.

Steve: “My guess is that when you overlay a Second Generation KM system over a highly mechanical organization with a culture oriented toward supporting a Tayloristic style of doing things — you get chaos. Despite Tom Peters’s ideas of twenty years ago that they need to learn to thrive on chaos, in the name of competitiveness, many companies have dis-invested in adaptive capabilities and left themselves unable to handle richer approaches, such as Second Generation KM. So to wrap up, I believe that these companies are not drawn to Second Generation KM approaches, and they may be ill-equipped in terms of culture and leadership to effectively implement them. However, I believe that if organizations commit to such strategies, they can implement them incrementally and transform themselves over time. In the current economic climate, it seems that will be less likely, but as competitors, such as Toyota, perfect their use of high impact KM approaches, competitors will have few options other than to change or fail.”

Joe: Steve, I’m not sure what you mean by “overlay,” but I agree that the introduction of Second Generation KM into a Tayloristic organization is difficult, and requires an incremental approach, evolving the organization from a closed to an open Problem Solving Pattern. In my award-winning article with Mark McElroy in The Learning Organization Journal (which you edited during your time as editor of TLO), we write about the Decision Interruption Approach to KM and point to an incremental strategy using it. I also wrote more about this approach in my KMRP article available here.

Finally, most of your comment seems to be flowing from your observations about “Tayloristic” organizations in the private sector. However, your comment is directed at one of my posts on National Governmental Knowledge Management. That and other posts in the series are about the US Federal Government, and the problem of establishing a National KM Center headed by a National CKO. If we do formulate such an organization, it won’t be Tayloristic; nor is there anything in the Federal environment, with its current emphasis on problem solving across a broad front, suggesting that the Obama Administration would be more favorable to the narrow and, I think, vague and ambiguous First Generation KM orientation than to the much broader and intellectually coherent Second Generation KM orientation I’ve proposed.

Tags: Knowledge Integration · Knowledge Making · Knowledge Management

The Problem Solving Pattern Matters: Part Eight, Still More On Enhancing Developing Solutions: Coming Up With New Ideas

February 16th, 2009 · 1 Comment


(Co-Authored with Steven A. Cavaleri)

This post is the third discussing the question of enhancing processes and activities we use to develop new ideas.

Fourth, current organizational knowledge bases don’t distinguish knowledge from information, and given the importance of previous cultural knowledge for creating new ideas, this is a very big problem for present technology. Most current organizational knowledge bases don’t record the track record of past performance of knowledge claims used by the enterprise. So, from the point of view of people using them, everything in the knowledge base is just information. It would be a big help to people thinking up new solutions to confront their thinking with past organizational knowledge, as well as their own personal knowledge. But it is hard to do that in the absence of real knowledge bases that distinguish between cultural knowledge, just information, and error.

Turning again to Toyota for an example, it gets a good deal closer to the requirement for a knowledge base track record I’ve mentioned. It uses a discipline of detailed documentation of lessons learned from team problem solving efforts, and adopts an experimental and organizational learning point of view toward work. As new knowledge is created in a problem context, it is documented in “A3 reports,” which capture on an A3-size sheet of paper, partly in a visual format, the story of a team problem solving effort, including a summary of the process of team problem solving in addition to the solution arrived at. The process includes documenting alternative solutions and a team’s reasons for selecting a preferred solution. A3 reports are used to produce “lessons learned” books in each area of process design. The “lessons learned” books frequently change to incorporate experiences (using audit sheets) of new process deviations and refinement of solutions (knowledge) for meeting them. Toyota’s A3 reports and “lessons learned” books incorporate the structured and unstructured information about problem and process context necessary for creating what we mean by a “real” knowledge base, but their design doesn’t make fully explicit the idea of a track record of alternative solutions, and the integration of the A3 reports and “lessons learned” documents is accomplished primarily through human knowledge and know-how about them, with little Information Technology support.

It should be a priority of organizations to construct real knowledge bases, including:

— “Practices and Solutions systems” that actually track the performance of practices and solutions with collaborative tags and annotations, rather than just identifying practices that are claimed to be “best.” (Such systems would include both “lessons learned” and “best practices,” but both would be placed in a much richer problem and process context of meaning than in the past and, therefore, would be much more likely to be used.)

— Collaboratively tagged and annotated narrative databases that express the perspectives and evaluations of people in concrete, understandable detail. In the Toyota case, this would mean collaboratively tagging and annotating A3 reports and “lessons learned” e-books,

— Collaboratively tagged and annotated blog posts, and wiki contributions,

— Collaboratively tagged and annotated podcasts and youtube-like videos, and

— Other collaboratively tagged and annotated structured and unstructured content.

For flexibility and variety, the real knowledge bases we have in mind ought to be distributed rather than centralized, and Enterprise 2.0 and 3.0 technology — including tagging, annotating, mashups, and new semantic web applications — should be applied to create a new and richer layer of meaning and integration across stove pipes. To be effective in creating high quality knowledge bases that will be most useful in enhancing thinking up new ideas, social computing technology must be applied both collaboratively and in a way that includes all ideas, no matter how new and untested they are. The rule should be to let the knowledge base reflect the track record of performance of ideas comprising solutions, or the absence of such a track record, and leave it up to people to factor that into their own creative thinking.

Distributed Organizational Knowledge Bases (DOKBs) should be “objective” in the sense that they incorporate a track record of performance reflecting fair critical comparisons among alternative solutions. We’ll discuss fair critical comparison a bit in a later post in this series. But here the important point is that creating such knowledge bases involves more than just using Enterprise 2.0 software. It also involves knowing how to use it and other software applications to create a new layer of cultural meaning that reflects performance tracking and fair critical comparison of ideas.
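To make the idea of a track-record-bearing knowledge base entry concrete, here is a minimal sketch in Python. All names here (SolutionRecord, record_outcome, and so on) are our hypothetical illustrations, not any vendor’s schema; the point is only that each solution carries its alternatives, collaborative tags and annotations, and its performance record, and that an empty track record is reported as such rather than hidden.

```python
from dataclasses import dataclass, field

@dataclass
class SolutionRecord:
    """One entry in a hypothetical 'real' knowledge base."""
    problem: str
    solution: str
    alternatives: list = field(default_factory=list)   # rejected options and why
    tags: set = field(default_factory=set)             # collaborative tags
    annotations: list = field(default_factory=list)    # free-form comments
    outcomes: list = field(default_factory=list)       # performance track record

    def record_outcome(self, result: str, success: bool):
        """Append one observed performance outcome for this solution."""
        self.outcomes.append({"result": result, "success": success})

    def success_rate(self):
        """Summarize the track record; None signals 'no track record yet'."""
        if not self.outcomes:
            return None  # absence of a track record is itself information
        return sum(o["success"] for o in self.outcomes) / len(self.outcomes)
```

Fair critical comparison then becomes, in part, a matter of querying the track records of alternative SolutionRecords addressing the same problem, rather than trusting whichever one was labeled “best.”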

Fifth, implementing information technology initiatives that support individuals in generating new ideas is very important. We’ve already indicated that Web 2.0 cluster technologies can be a big help in generating new ideas, both because they enable the mechanics of idea creation and expression in a social context, and because they increase transparency and, sometimes, inclusiveness. In addition, there are many applications that are helpful as aids to individual-level creative thinking, such as visual mind mapping software, text and data mining software, decision support software, and simulation modeling software.

To Be Continued

→ 1 CommentTags: Epistemology/Ontology/Value Theory · KM Software Tools · Knowledge Integration · Knowledge Making

The Problem Solving Pattern Matters: Part Seven, More On Enhancing Developing Solutions: Coming Up With New Ideas

February 15th, 2009 · 1 Comment


(Co-Authored with Steven A. Cavaleri)

This post is the second discussing the question of enhancing processes and activities we use to develop new ideas.

Second, introduce openness to new ideas as a policy and get the organization to commit to it. This is easy to say, but because “openness” is not always easy to see and enforce, it requires unremitting effort to communicate its importance and legitimacy, and also to model it. One way to do this is to emphasize that “the job is to be creative,” as they do, for example, at Toyota. Another is to provide everyone with the communication channels they need to participate in the process of creating new ideas at the level of the collective.

Communication channels can be social groupings such as teams, friendship groups, and communities of practice, which provide alternative pathways to formal authority structures. In addition, alternative communication channels can be supported by new technologies. The Web 2.0, Enterprise 2.0, social software, social computing, and social media technology cluster can be really important here. Implementing an Enterprise 2.0 ecology “inside the firewall” can’t guarantee creativity, the quality of new ideas, greater inclusiveness in PSP processes, or greater internal transparency, but there’s no question that implementing this technology cluster both signals and partly realizes a policy of openness to new ideas, and also enables democratizing their creation, thereby making variety in new ideas more likely.

Most organizations are not open to new thinking. They restrict freedom to formulate alternative solutions to problems to a relatively small organizational elite composed of either managers themselves or research specialists in various disciplines. This practice restricts the problem-solving capacity of enterprises, and also provides an excuse for restricting access to information, and some aspects of previous knowledge, to the few who most obviously need it for their problem-solving activities.

Openness to new ideas provides, in contrast, for distributed knowledge creation and discovery, and for using the inventiveness and talents of everyone in the enterprise to close knowledge gaps. This kind of openness is much more adaptive for an organization than restricting participation in problem solving, because it creates the variety of new ideas necessary to counter the variety of environmental challenges an organization is likely to face.

Toyota is the poster child when it comes to openness to new ideas. In The Elegant Solution, Matt May claims that Toyota (pp. xi-xii):

“. . . implements a million ideas a year . . . It’s the reason why they’re one of the planet’s ten most profitable companies. It’s why they make well over twice as much money as any other carmaker, and with under 15% of the market. It’s why their systems, processes, and products are the envy of the world. It’s the greatest source of their competitive advantage and staying power. It’s their engine of innovation.

Those ideas are coming from every level in the organization. Because innovation isn’t about technology. And it’s certainly not about manufacturing. It’s about value, and opportunity, and impact. At Toyota, every idea counts. It’s an environment of everyday innovation, the direct result of a fanatical focus on getting a little better daily.”

As Toyota illustrates, openness to new ideas is not about the intrinsic value of democracy in organizations. Instead, it is about Ashby’s Law, which requires that, if a system is to survive, the variety in the system must be equal to, or larger than, the variety introduced by environmental perturbations.
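Ashby’s Law of Requisite Variety is often summarized in the following simple form (the notation here is ours, chosen for illustration, not Ashby’s original formulation):

```latex
% Law of Requisite Variety, simplest form:
% the variety of responses available to the system must be at least
% the variety of disturbances arriving from its environment.
V_{\text{system}} \;\geq\; V_{\text{environment}}
```

Read this way, restricting idea generation to a small elite caps the left-hand side, while the environment’s variety on the right keeps growing on its own.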

Third, organizations should introduce training for knowledge workers in the use of social technologies for generating new ideas at both the individual and collective levels, and then should use these technologies. Communities of Inquiry and Knowledge Cafés are two of these. But there are much older, better-tested social technologies for group decision making that are very effective in providing an environment where new ideas can be stimulated by social exchanges, and where transparency, inclusiveness, and trust can be increased. They include Problem Solving Teams, the Delphi Technique, the Nominal Group Technique (NGT), the Group Value Measurement Technique (GVMT), the Team Analytic Hierarchy Process (TAHP), and a variety of group facilitation and focus group processes. The older techniques have frequently incorporated psychometric procedures producing ratio scales developed from judgmental data gathered during the group decision process. Such scales can be very useful in developing new models, including causal, forecasting, measurement, and value assessment models.
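As an illustration of how such ratio scales can be derived, here is a minimal sketch of the eigenvector calculation at the heart of the Analytic Hierarchy Process. The pairwise judgment numbers below are hypothetical, and plain power iteration stands in for more careful eigenvalue methods; real AHP practice also checks the consistency of the judgments, which this sketch omits.

```python
def ahp_priorities(matrix, iterations=100):
    """Approximate the principal eigenvector of a reciprocal pairwise
    comparison matrix by power iteration, normalized so the resulting
    priority weights sum to 1 (a ratio scale)."""
    n = len(matrix)
    w = [1.0 / n] * n  # start from a uniform weight vector
    for _ in range(iterations):
        # multiply the matrix by the current weight vector
        new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(new)
        w = [x / total for x in new]  # renormalize at each step
    return w

# Hypothetical group judgments comparing three alternative solutions:
# entry [i][j] says how strongly alternative i is preferred over j,
# with reciprocals below the diagonal.
pairwise = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

weights = ahp_priorities(pairwise)
```

The resulting weights form a ratio scale over the alternatives, which is exactly the kind of judgmental data these older techniques feed into causal, forecasting, measurement, and value assessment models.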

To Be Continued

→ 1 CommentTags: Complexity · KM Software Tools · Knowledge Integration · Knowledge Making