All Life Is Problem Solving

Joe Firestone’s Blog on Knowledge and Knowledge Management


Knols Aren’t Units of Knowledge and What Google Can Do About It

August 5th, 2008 · 1 Comment


Google has introduced a new service to facilitate knowledge sharing. Google describes it this way:

"The Knol project is a site that hosts many knols — units of knowledge — written about various subjects. The authors of the knols can take credit for their writing, provide credentials, and elicit peer reviews and comments. Users can provide feedback, comments, and related information. So the Knol project is a platform for sharing information, with multiple cues that help you evaluate the quality and veracity of information.

Knols are indexed by the big search engines, of course. And well-written knols become popular the same as regular web pages. The Knol site allows anyone to write and manage knols through a browser on any computer.”

It seems strange to claim that a blog post or an article is “a unit of knowledge.” This is so, first, because an article is a container composed of many statements, assertions, or knowledge claims. But second, even if we interpret an article as referring to its content, the set of abstract objects (propositions, arguments, problems, theories, etc., but never “concepts”) asserted or expressed by the article, a few things seem clear.

— A given article may assert no knowledge at all, but only known falsehoods;

— A given article may contain any number of assertions that, as far as we know, may be true and that therefore are knowledge; and

— Assertions that are knowledge will differ in both their logical and empirical content. Some assertions will be relatively vacuous, nearly empty of content, and therefore very difficult to refute. Other assertions will be riskier and prima facie much easier to test and, in principle, to refute. Both sorts of assertions may be knowledge at any particular point in time. Which has more “units of knowledge”?

Third, the above comments and Google’s idea, as well, assume that expressed linguistic content can be knowledge, i.e. that though knowledge can only be created by humans, once created, it can exist as “knowledge without a knower.” However, for many people, there is only one sort of knowledge, and that is “knowledge in the mind.” If one believes that, then Google’s idea is way off the mark, because “knols” would have to be viewed as “units of mental belief,” and perhaps even as “units of mental justified true belief.” Of course, such units are not directly accessible through articles, and therefore such articles can’t be units of knowledge.

Fourth, even though the idea of a concrete unit of knowledge, in the form of an article, doesn’t hold up, a more measurable idea is the amount of knowledge expressed in an article. So Google could set up a system to compare articles in terms of the “knol score” of what they express, where the knol score is defined as the value of an article on the knol ratio scale. Measuring the knol score involves an easy application of Thomas L. Saaty’s Analytic Hierarchy Process (AHP), and the theory and methodology behind such an application have been tested and applied for roughly 35 years now in thousands of studies.

To create ratio-scaled knol scores, all Google would have to do would be to use the following procedure.

Step 1: When people register for the knol site, use a graphical comparison method (splitting a pie) to get their pairwise comparison ratings of six categories with respect to the amount of knowledge each connotes relative to the others: Greatest Amount of Knowledge possible in a single article; Great Amount of Knowledge; Good Amount of Knowledge; Fair Amount of Knowledge; Some Knowledge; Least Amount of Knowledge. With n = 6 categories, this step requires [(n)(n-1)]/2 = 15 judgments from each person. These graphical comparisons are directly translatable to ratios. Software to easily perform such ratings over the web is called Comparion and is available from Expert Choice.
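As a concrete sketch of Step 1's bookkeeping (Python with NumPy; the judgment ordering is my own assumption, not something the AHP prescribes), the 15 ratio judgments can be assembled into the reciprocal comparison matrix that the rest of the procedure works on:

```python
import numpy as np

def reciprocal_matrix(judgments, n=6):
    """Build an n x n reciprocal pairwise-comparison matrix from the
    n(n-1)/2 ratio judgments, taken in row order: (0,1), (0,2), ..., (n-2,n-1).
    judgments[k] > 1 means the first category of the pair connotes more
    knowledge than the second."""
    a = np.ones((n, n))
    it = iter(judgments)
    for i in range(n):
        for j in range(i + 1, n):
            r = float(next(it))
            a[i, j] = r          # category i compared to category j
            a[j, i] = 1.0 / r    # reciprocal entry
    return a
```

For the six knowledge categories, a rater supplies 15 ratios and gets back a 6 x 6 matrix with ones on the diagonal.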

Step 2: Use Saaty’s Eigenvalue method to compute the relative priority ratio scale scores of the six categories. The scores emerging from the method are meaningful ratio scale numerical scores implicitly compared to an absolute zero knowledge value which no article will ever actually reach. Moreover, the logical consistency of the judgments underlying the ratio-scaled scores is tested using the method, and the test results are derived along with the ratio scale scores.
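A minimal sketch of Step 2 (Python with NumPy; a standalone illustration, not Expert Choice's implementation): the priority scores are the normalized principal eigenvector of the comparison matrix, and Saaty's consistency ratio falls out of the principal eigenvalue:

```python
import numpy as np

# Saaty's random-index values for matrices of order 1..10.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_priorities(pairwise):
    """Return (priority vector, consistency ratio) for an n x n
    reciprocal pairwise-comparison matrix, via Saaty's eigenvalue method."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = int(np.argmax(eigvals.real))    # index of the principal eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                     # priorities normalized to sum to 1
    ci = (lam_max - n) / (n - 1)        # consistency index
    ri = RANDOM_INDEX[n - 1]
    cr = ci / ri if ri > 0 else 0.0     # consistency ratio; < 0.10 is Saaty's usual bar
    return w, cr
```

A perfectly consistent matrix, one whose every entry is the exact ratio of two underlying weights, recovers those weights with a consistency ratio of zero; inconsistent judgments drive the ratio up.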

Step 3: As individuals add ratings, multiply each individual’s vector of ratio scale scores by the ratio of 100 to the value of “Greatest Amount of Knowledge possible in a single article,” in each individual’s vector. This will have the effect of mathematically stretching the length of all individual vectors so that all scores are in the interval of 0 to 100.

Step 4: Average the ratio scale values for all categories across all individual ratings. It’s easy to weight the individual ratio scale scores for inconsistency levels, so that the scores reflecting greater inconsistency have less weight in the overall average.
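Steps 3 and 4 might be sketched like this (Python with NumPy; the 1/(1+CR) down-weighting is one plausible scheme of my own, since the post doesn't prescribe a formula, and the position of the "Greatest Amount" category is assumed):

```python
import numpy as np

def rescale_to_100(priorities):
    """Step 3: stretch one rater's priority vector so that the
    'Greatest Amount of Knowledge' category (assumed at index 0)
    scores exactly 100."""
    v = np.asarray(priorities, dtype=float)
    return v * (100.0 / v[0])

def pooled_category_scores(priority_vectors, consistency_ratios):
    """Step 4: average the rescaled vectors across raters, giving
    less weight to raters whose judgments were less consistent."""
    vs = np.array([rescale_to_100(v) for v in priority_vectors])
    wts = 1.0 / (1.0 + np.asarray(consistency_ratios, dtype=float))
    return (vs * wts[:, None]).sum(axis=0) / wts.sum()
```

With perfectly consistent raters the weights are all equal and this reduces to a plain mean of the rescaled vectors.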

Step 5: Ask individuals who want to rate articles for amount of knowledge to categorize the articles according to the six categories. Transform each categorization into a ratio scale score by mapping the chosen category to the average category value for the whole population.

Step 6: Average the numerical ratings across all individuals to derive a numerical rating for an article at any point in time.

Step 7: Keep updating the average category ratings and the average article ratings as new data becomes available.
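Steps 5 through 7 then amount to bookkeeping: map each rater's category choice to the pooled population value for that category and keep a running average per article. A minimal sketch (the class and its names are mine, purely illustrative):

```python
class ArticleKnolScore:
    """Running knol score for a single article (Steps 5-7).

    category_scores maps each of the six category labels to its pooled,
    population-level ratio scale value (the output of Step 4)."""

    def __init__(self, category_scores):
        self.category_scores = category_scores
        self.total = 0.0
        self.count = 0

    def rate(self, category):
        # Step 5: translate a categorical judgment into a ratio scale score.
        self.total += self.category_scores[category]
        self.count += 1

    def score(self):
        # Steps 6-7: current average over all ratings received so far.
        return self.total / self.count if self.count else None
```

Updating the pooled category values themselves (Step 7) would simply mean recomputing the Step 4 averages as new registrants add their pairwise judgments.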

This method will produce a ratio scale score of the amount of knowledge in each article posted on the knol site. The ratio scale units created as part of the process can also be called “knols.” However, the choice of unit size is arbitrary, since it results from the adjustment of each vector so that its high score is 100, even though the absolute zero and the original “Greatest Amount of Knowledge possible in a single article” ratings are not arbitrary.

Well that’s it. Knols are not yet units of knowledge, but Google can measure the units of knowledge expressed in them with a methodology like the foregoing.

→ 1 Comment · Tags: Complexity · KM 2.0 · KM Software Tools · Knowledge Management · Personal KM

KM 2.0 and Knowledge Management: Part Two, “Buzz” and Some Skepticism

August 3rd, 2008 · Comments Off


During 2007, the KM 2.0 meme began to spread more rapidly, but as it spread, some observers began to express skepticism about the identification of Web 2.0 tools with a fundamentally new sort of KM. One sort of skepticism was expressed by David Weinberger in an article posted at KM World on February 1, in which he argues that Web 2.0 is not a discontinuous stage in web development, but rather a continuation of previous trends. Thus:

“The Web’s been participatory from its inception. Yet, it is certainly true that blogging software and wikis have dramatically lowered the barrier to participating. Likewise, applications were integrated with other apps before Web 2.0, although the growth of APIs and standards has made it much easier than before. So, while Web 2.0 correctly draws our attention to real changes, it would be a mistake to think that the phrase implies that those changes were radical innovations, and not better-faster-easier versions of what we already were doing.”

And then he goes on to draw what he sees as an important difference between Web 2.0 and KM 2.0:

“That’s why, in my opinion, KM 2.0 is both a useful phrase and fundamentally different from Web 2.0. KM 2.0 points to Web 2.0-ish phenomena gaining prominence in the KM space: bottom-up, participatory, rapid innovation, more mixing up and mashing up of information. These are all good things, or at least good things to try. But they are truly discontinuous from the paradigmatic versions of KM 1.0, which were all about managing and controlling information environments.

So, I think it makes sense to talk about Web 2.0 and about KM 2.0. Both point to real changes. But it’s simultaneously important to recognize the real difference between the two 2.0s. Web 2.0 gives a label to a set of phenomena continuous with what came before; KM 2.0 announces a significant change in KM. And not a moment too soon.”

So, David Weinberger thinks that Web 2.0 is different from KM 2.0 because Web 2.0 doesn’t really amount to a discontinuity in the evolution of the Web, while KM 2.0 does represent something revolutionary in KM because there existed a KM 1.0 which was all about managing and controlling information environments.

A contrasting view of the difference between Web 2.0 and KM 2.0 comes from Matthew Hodgson, who posted a blog entry on April 23 that also states the issue well:

“The fundamental principles of knowledge management . . . are actually about supporting social environments that stimulate informal sharing of knowledge through developing processes that encourage more formal knowledge creation and exchange. Now that we’re seeing the social web evolve and we’re moving onto Web 2.0 I think it’s off the mark to suggest that these sorts of tools equate to KM 2.0. This sort of thing is the systems view of knowledge management that failed years ago. KM is not about the systems, but about the people and processes.

So has KM evolved to KM 2.0? No, not at all. KM is still about people and sharing knowledge. It’s always been about ensuring a supporting environment in which this can be best achieved. It’s never been about the technology because good KM can exist without it! It can even be about drinks with your IA colleagues once a month.

Yes, we’re currently seeing, through blogs and wikis, an environment in which knowledge management can be supported through technology. My message is, just don’t get confused between the two of them.”

I think this is exactly right: of the two views just compared, Hodgson’s is correct. The reason is that by 2005, when KM 2.0 originated, most of KM had long since recognized that KM was not primarily about IT, that it was not about control, and that control paradigms were inimical to KM. There’s plenty of evidence from the literature that this is true. I’ve reviewed some of it here, in the course of a critique of some of Dave Snowden’s early work, written in 2002. But also, anyone who knows the degree to which KM work has been dominated by CoP interventions over the past 10 years, and who also knows about the popularity of CAS theory in KM, will find the claim that KM has been trying to implement a control paradigm very strange indeed.

During the portal phase of KM it may have seemed to some that a control paradigm was involved, because of the scale of many portal installations. But even there, the purpose was to facilitate individual problem solving and decision support as well as collaboration. Insofar as portals involved any aspect of the control paradigm, this was due far more to the limitations of the technology Knowledge Managers had to work with than to a KM paradigm emphasizing information control. For more on what portals tried to do and on their unfulfilled ideals and promise, see my own book on portals and KM.

A slightly different take on the issue of the relationship of Web 2.0 and KM 2.0 comes from James Dellow. James said:

“Its great to see people talking about the links between experiences with KM adoption with E2.0. In fact, the speed at which we are traveling along the hype curve to get to this point is quite amazing. But, before we get too carried away and find ourselves in the same position of confusing knowledge management and information management I suggest we all read Wilson’s case against KM again, and also my own take on this issue.

Just as social software does not equal wisdom, E2.0 does not equal knowledge management. E2.0 simply provides KM with some new tools that can help with the KM problem of participation, including but not limited to social media (and that’s great!). In fact, while James Robertson suggests we abandon the term “Knowledge Management System” from a planning perspective, I actually like the fact that the term is vague – a KMS should be any kind of information system you use to achieve a KM objective (I should add that I agree with James’ central theme about buying branded KM systems). On the other hand, E2.0 represents a cross over between a KMS and just a part of a wider Web 2.0 trend that is also moving inside the firewall at the same time. Inside the firewall, Web 2.0 will provide:

  • New ways to support collaboration both inside and between organisations (also a benefit for KM, but not limited to KM);

  • A new approach for developing and deploying enterprise applications, and access to enterprise data (hmm, starting to get away from KM here) – for example, see Why “Super Users” are the new programmers; and

  • Better techniques for providing rich user environments that make software easier to use.

  • …and there are probably more.

More importantly, KM is alive and well and for next generation KM the addiction to information technology is under control:

  • Concepts like Communities of Practice (CoPs) and Storytelling can all work without information technology; and

  • Social Network Analysis (SNA) uses computing power as a means to an end.

E2.0 (“enterprise social software“) is different from KM because:

  • It is all about information technology – it does not and can not exist without it; and

  • It appears to have the power to change the shape of organisations, while KM typically tried to improve what was there or provide a way to tap into the back channel.”

I think this analysis of James Dellow’s is very well-taken. Neither E 2.0 nor Web 2.0 equals KM, and to confuse either of them with KM is to fall into old conceptual errors that afflicted earlier KM activities. IT supports KM; it is not KM itself, because management activity, whether directed at knowledge processing or any other enterprise phenomenon, can’t be entirely automated. In this area we should always be talking about IT-assists for KM or knowledge processing, and never about KM as IT.

Having said the above, I don’t entirely agree with James in his implication that while E 2.0 can be revolutionary, KM does not have the power to be. Clearly, if KM interventions can use E 2.0 and the latter can be revolutionary, it follows that KM can be equally, if not more, revolutionary. James is right that KM has not been marked by revolutionary change in the past; but his implication, that it cannot be, seems overdrawn given his views on the potential impact of E 2.0, and given the possibility of even more radical IT innovations that KM may be able to make use of. Whether KM can implement revolutionary change depends in part on whether one’s vision of KM is revolutionary. If such a vision is formulated, and if it meets technology that can support it, then perhaps revolutionary KM-induced change will be both possible and forthcoming.

During the remainder of 2007 the “buzz” surrounding “KM 2.0” seemed to accelerate. In Part 3 I’ll review some of the discussion during the remainder of 2007.

Comments Off · Tags: Complexity · KM 2.0 · KM Software Tools · Knowledge Integration · Knowledge Management · Personal KM

Interpreting Popper’s Three Worlds Ontology for Knowledge Management: A Guest Reply by Richard Vines

August 2nd, 2008 · Comments Off


I think, Joe, you have raised some very interesting and reflective comments in your two blogs on “Popper’s three worlds ontology.”


Firstly, let me state that I think it is inevitable that some reformulation of the three worlds ontology needs to be explored, and will be explored, by those that see the merit in starting from this ontological framework in the first place. The reason? My pragmatic sense is that the contributions of emergent understandings of the nature of life itself, living systems, and complexity theory are forcing the need for such a re-evaluation.


So, I agree with Bill, that the combined role of “knowledge” and “cognition” as it is defined by Maturana and Varela is a good starting point from which to engage in a contemporary critique of the three worlds ontology. I also think it opens up interesting possibilities with flow on (and possible) implications in relation to higher order organisational dynamics. For example, I think the idea of multiple levels of cognition occurring across multiple levels of hierarchy offers a very interesting way of accessing the complexity of the possible relationship between autopoiesis and dynamics such as those that exist in large complex enterprises.


And so, what was said is that:


“we argue that Karl Popper’s evolutionary epistemology and his “three worlds” ontology, presented in its most general form in his (1972) book, Objective Knowledge: An Evolutionary Approach, (a) provides an alternative account for the internal construction of knowledge that dispenses with the need for a convoluted observer centric language in formulating a concept of autopoiesis and (b) highlights the fundamental and integral roles of “knowledge” and “cognition” in the formation of autopoietic systems. Thus, Popper’s philosophy is as central to our discussion as is autopoiesis itself and it should also provide a linguistic “bridge” helping to bridge the gap between constructivism and realism.”


It remains to be seen if such a linguistic bridge can withstand follow-up criticisms. However, in attempting to build such a bridge, I think it is inevitable, as you say, that there is a need to expand the three worlds ontology to include a distinction between living and non-living. This, as others might say, is not a further regression into ontological categorization. The Popperian perspective has always been that the interaction of the different ontological domains is as important as the naming of the distinctions in the first place. In other words, emergence is the foundation of all. I had originally (and incorrectly) thought that the distinction between W1 and W2 was a distinction of non-living and living.


So … personally, I am attracted to your suggested reformulation of the four ontological domains and the interaction of these domains. And beyond this, I do agree with you that there is a problem in suggesting that “the logical content of our genetic codes, as opposed to the logical content of our theories about our genetic code, is in World 3.”


The final point I wish to respond to is the one about the body-mind problem. There is something in this question which I still have not got to the bottom of. You state:


“…. the interaction of W2 and W3 is no longer about the impact of the conscious mind and our beliefs on W3, nor is it about whether the conscious mind grasps and understands W3 content; but is now about the impact of cognition, more generally on W3, and about whether cognition, conscious or not, “grasps” W3.”


I think this is a misunderstanding of what Bill is saying. If I am right (and I am not yet sure that I agree), I think he is saying that it is not that cognition grasps W3, but that W3 is an expression of cognition. This is, I think, fundamental to the idea that artifacts, such as documents, play a crucial role in moderating the permeability of boundaries internally within organisations and at the edges of organizations. This for me extends to the possibility of distributed cognition. And, for me, this is a fundamental consideration of how a theory of autopoiesis really has something interesting to say about organization dynamics and the boundary conditions of inclusion and exclusion.


But, I still have some reservation about this idea. And, I think this reservation goes back to the point you also made, namely that: “… yet, the findings of neural science have not yet come close to dissolving ‘the hard problem of consciousness.’”

Comments Off · Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

KM 2.0 and Knowledge Management: Part One, Early KM 2.0

August 1st, 2008 · 2 Comments


For about three years now, the meme of “KM 2.0” has been circulating. It began with the introduction of the label “Web 2.0” to describe a collection of IT applications called social software. These applications first included blogs, wikis, social network analysis, social networking applications, collaborative content tagging, folksonomies, community support/collaboration software, and project collaboration software. But as time passed, the category came to include many web services applications, “mashups,” digital videos, podcasts, social bookmarking, news aggregation, and virtual environments.

Pretty soon after the introduction of Web 2.0, some enterprising observers (beginning with Andrew McAfee) thought they could get a pretty good meme going by the simple expedient of talking about “Enterprise 2.0” as a type of enterprise that has implemented “social software platforms” and, more generally, Web 2.0 techniques for purposes of increasing social connectivity, collaboration, and decision support. Once “Enterprise 2.0” began to gather a buzz, it was not long before people began asking whether there wasn’t a KM 2.0 in that mess somewhere. The KM 2.0 meme surfaced in 2005 and has been gradually spreading ever since. IBM Global Business Services picked up the meme fairly early on and lent considerable weight to its circulation, and, by last fall, Information Today, the company that holds the KM World and Intranets Conferences and also publishes KM World, had made “KM 2.0” the theme of its 2007 conference. But what, exactly, is “KM 2.0”?

Well, in the beginning, in a blog post entitled “KM 2.0,” dated October 10, 2005, Euan Semple (who may have originated the term) seemed to equate it with:

“. . . people, connected people, empowered people, people who don’t always do what you expect or what you tell them but invariably end up taking you to exciting places that you would never have expected to get to.

And not only that but you can do this using tools that cost peanuts!”

So, Euan thinks it’s about people and using inexpensive Web 2.0 tools. Specifying a bit, we might infer that this refers to a type of KM which acts to enable self-organization by people by introducing appropriate Web 2.0 tools. These words are a bit different from Euan’s and I hope they don’t distort his intent.

Jack Vinson quickly picked up on Euan’s blog and in a blog of his own on October 8th, added this thought:

“Anything that gets you to look at something in a new light is good. Euan is suggesting that maybe a parallel change in the view of KM from command-and-control, “I know what’s right for you,” to more distributed and a sense that “we know what is right for each other.””

Jack thus, explicitly introduced the view (which Euan had merely implied) that KM had previously been focused on command-and-control approaches, while KM 2.0 was focused on self-organization and community.

Anyone who knows the KM literature before 2005 knows that this distinction between an older, command-and-control-oriented KM and a KM 2.0 oriented towards self-organization and community doesn’t accord with the facts of KM. Complexity theory had appeared in KM at least 10 years earlier than 2005, and most of the major approaches being advocated in KM after the year 2000 were based on complexity, community, and the idea of ecological approaches to KM, rather than on command-and-control notions. Sometimes the IT tools supporting KM interventions, such as portal installations and Content Management Systems, were expensive interventions requiring heavy organizational financial investments and high-level executive support, but even these were seen as providing ecological support for individual connectivity and voluntary collaboration with others. If they were KM interventions, “command-and-control” wasn’t involved.

Having said the above, I hasten to add that Web 2.0 tools have provided much more lightweight and inexpensive possibilities for self-organization and collaboration than had existed previously, and that I am not claiming that such tools don’t bring new support capabilities for KM interventions, and a much increased capability for self-organized collaboration. They most certainly do. However, the implication that KM used to be about “command-and-control,” while KM 2.0 is about self-organization and collaboration is a myth.

Luis Suarez, social computing evangelist at IBM, has been focusing on Web 2.0 applications since the beginning of 2006. Along the way he has written a good bit about blogs, wikis, and other Web 2.0 tools bringing us closer to “KM 2.0.” In one post, Luis discussed a post of Dan Kirsch’s and offered this comment:

“. . . When people are “best” connected, relationships build and trust improves — all of which tend to improve both communications and transfer of knowledge” Amen to that! I doubt I would have been able to put it in a much clearer way than that! This is actually one of the key fundamental factors for which I have always believed that social software is truly going to revolutionalise the way we conduct business. It is happening now already but we are just seeing the tip of the iceberg. As more and more businesses get to adopt all those social media tools we would be entering the next generation of KM: i.e. KM 2.0.”

I think this quote embodies an important aspect of the reasoning behind the spread of the KM 2.0 meme. It says that KM 2.0 is introducing social media tools to improve connectivity, resulting in building relationships and trust, resulting in better communications and knowledge transfer. This, of course, is a simple theory. But it is at the heart of the claim that social computing tools will provide more success in knowledge sharing than previous KM efforts that did not use social computing have delivered.

The idea of KM 2.0 gathered momentum in 2006, and as people blogged about it and began to implement it, it became clear that some identified the term “KM 2.0” with KM interventions that used an organic/ecological approach to KM to encourage self-organization and employed Web 2.0 tools in the service of that kind of approach, while others identified “KM 2.0” with the tools themselves. There was a bit of a replay here of the early days of KM, when some entering the field viewed KM as a field within IT, or as a different sort of IT called Knowledge Technology, whereas others always recognized KM as a management activity with certain knowledge-related purposes, one that focused on software tools only because they had certain advantages in achieving those purposes. For the second group, the idea of creating an ecology that would facilitate connectivity, self-organization, social relationships, trust, better communication, and knowledge sharing, using software tools, was primary. And it is such an approach that also defines KM 2.0 in the current Web 2.0 context for those who see beyond the social computing software to the deeper meaning of KM.

In Part Two, I’ll continue the story of the development of KM 2.0 in 2007. But before I end, I want to emphasize, once again, the key theory underlying the broader view of KM 2.0: that social media tools are used in the service of creating an ecology that improves connectivity, resulting in building relationships and trust, and in turn in better communications and knowledge sharing. This theory is, after all, a conjecture. It may be true, but, to use the old cliché, the devil is in the details, and the details may not be wholly in the social software. As a general matter of social theory, increased connectivity doesn’t always serve to build relationships and trust, and even if that should be the result of interventions employing Web 2.0, and even if higher quality collaborations and communications follow from such a result, it may or may not be the case that more knowledge is shared in the end. The reason is that knowledge workers may end up with improved information sharing rather than knowledge sharing, unless, of course, either the new tools, or the combination of those tools with the way people use them, allow a distinction to be made, and measured, between these two outcomes.

→ 2 Comments · Tags: Complexity · KM 2.0 · KM Software Tools · Knowledge Integration · Knowledge Management · Personal KM

Interpreting Popper’s Three Worlds Ontology for Knowledge Management: Part Two

July 29th, 2008 · 1 Comment

caco

Comparative Evaluation of the Two Theories

Let’s compare the two theories of the three worlds, world by world, as it were. First, Popper’s W1 has the disadvantage that it blurs the distinction between the living and the non-living, since both are included in W1. This also has the effect of including knowledge in W1 without specifying a category distinction between those parts of the material world where knowledge occurs and those parts where it does not. I think it is useful to have a “world” category which distinguishes the absence of life and the absence of knowledge from other categories in which both are present. And Bill’s theory of the three worlds does have that advantage over Popper’s.

Second, on the other hand, Bill’s view of W2 is too broad in that it either inadvertently includes both “mental” cognition and what we might call lower-level biological cognition in the same “world” category, or alternatively is meant to deny the existence of the mental by denying the validity of the body-mind distinction. If the first is the case, then this argues for another category, corresponding to Popper’s W2, in which mental phenomena are isolated, since mental phenomena are at another and higher level of emergence than lower-level biological cognition, and we will still need to investigate the relationships between this level and Popper and Bill’s W3. Put another way, with the movement of the boundary of W2 to life, the critical question of the very emergence of mind, consciousness, and self, and the ability to address it as a consequence of the continuing interaction of body/brain and culture, is gone, and with it the ability to address some very promising theories about how mind, consciousness, and self emerge.

On the other hand, if the second is the case, and the intent is to assert by ontological assumption that mental phenomena don’t form a significant ontological category, then I think this formulation is mistaken. Now, admittedly, the body-mind distinction is a very vexed and still vexing philosophical problem. The development of neural science is associated with a point of view that there is no such distinction, and cognitive science reflects the view that “the mind is as the mind does,” a vague but not unreasonable view. Yet, the findings of neural science have not yet come close to dissolving “the hard problem of consciousness.” What’s needed to reduce the mind to non-conscious brain processes is a showing that human qualia and subjective mental experiences can be mathematically reduced to such processes. If that can’t be done, and as yet we’re not even close to being there, there is no reduction, and the belief that “mind” is reducible is just an ontological assumption. But what reason do we have to believe in it, if we cannot actually carry out a reduction? How does it help us to believe in it? I think the answer is that it doesn’t, but instead hurts us by removing the possibility of investigating the relationships between a hypothetical level of emergent mental processes and W3.

Moving to W3, Bill’s notion of it has the disadvantage that it places material knowledge, the “logic” of the genetic code, in W3, and also assumes the existence of such “logic.” Bill has claimed (though I don’t know if he still holds this view) that Popper, too, thought that the logic of the genetic code was in W3. In support of this interpretation he pointed to:

“. . . knowledge in the objective sense, which consists of the logical content of our theories, conjectures, guesses (and, if we like, the logical content of our genetic code)” (p. 73 of Objective Knowledge),

But this is ambiguous, because a) Popper might have meant the logical content of our theories about the genetic code, and b) this is the only evidence in all of Popper’s books and papers after 1972 that may be taken as supporting the idea that either objective knowledge or the logical content of our genetic codes, as opposed to the logical content of our theories about our genetic code, is in World 3. The interpretation of Popper’s statement as referring to our theories about the genetic code, rather than the logic of the code itself, is supported by his views on logic. He did not consider logic to be physical, organic, or psychological in nature. That is, for Popper, there is no logic in the physical world or in the mental world. Rather, for him, logic, and objective knowledge for that matter, is a linguistic creation of the human mind, and logical content is something we find in linguistic expressions and not in the physical or mental domains.

There is no place in his writing where Popper takes any other position about the nature of logic, or about objective knowledge, and since in every other place in which he discusses W3, he also refers to it as the domain of the products of the human mind, I think the only way to interpret that quote from p. 73 is that it does refer to something linguistic and not to the structure of the genetic code itself.

Of course, regardless of which interpretation of Popper’s meaning is correct, there still may be good reasons for Bill’s including the logic of the physical genetic code in W3. However, if this is done, then we will have W3 objects that are created by human minds (or, alternatively, cognitive processes) alongside W3 objects produced by natural selection in evolutionary processes. In addition, we will have objects in W3 that cannot be directly grasped by human minds (or, alternatively, by cognitive processes), which violates another central rationale of Popper’s idea of W3. In short, it is hard to see what is gained by saying that the logic of the genetic code is in W3, rather than saying that the structure of the genetic code is in W1, except to remove any residual knowledge left in W1 after one moves the boundary of W2 so that it begins with life rather than with mind.

In short, I think that Bill’s reformulation of Popper’s ontology is helpful in highlighting the lack of distinction between the non-living physical world and the living biological world, but that in other areas, it introduces confusion and conceptual problems, where, previously, there was clarity.

A New Reformulation

There is an easy way to resolve the conflict between Bill’s formulation and Popper’s, while serving an interest in emphasizing the boundary between the living and non-living. That is, we can distinguish four “worlds.” W0 would be the domain of the non-living; W1 the domain of the living, cognitive, and autopoietic; W2 the domain of mental processes, events, and predispositions; and W3 the cultural world of the products of the human mind. Worlds 1, 2, and 3 are domains in which knowledge is produced. W1 is the world of biological knowledge; W2 is the world of mental processes and of mental knowledge held in predispositions, beliefs, and expectations; and W3 is the world of cultural knowledge, including scientific knowledge.

To effect this change all we have to do is to epistemically cut Popper’s W1 into W0 and W1. We then highlight the interactions of the living and non-living, of biological cognition and mind, and of mind and culture. There is no knowledge in W0. Genetic codes and synaptic structures are in W1. Tacit, implicit, and explicit beliefs, along with mental processes and value, attitudinal, and belief predispositions (subjective knowledge), are in W2, and abstract knowledge objects such as problems, theories, arguments, and systems of logic, aesthetics, and value theory (but not concepts) are in W3.
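Purely as an illustration (the four worlds are ontological categories, not software types, and all of the names and example placements below are my own invention), the classification just described can be sketched as a small taxonomy:

```python
from enum import Enum

class World(Enum):
    """Hypothetical labels for the four-world reformulation."""
    W0 = "non-living physical"
    W1 = "living, cognitive, autopoietic"
    W2 = "mental processes and predispositions"
    W3 = "cultural products of the human mind"

# Illustrative placements of knowledge-bearing structures, per the text.
EXAMPLES = {
    "rock stratum": World.W0,                 # no knowledge in W0
    "genetic code": World.W1,                 # biological knowledge
    "synaptic structure": World.W1,
    "tacit belief": World.W2,                 # subjective, mental knowledge
    "attitudinal predisposition": World.W2,
    "scientific theory": World.W3,            # sharable cultural knowledge
    "system of logic": World.W3,
}

for item, world in EXAMPLES.items():
    print(f"{item}: {world.name}")
```

The point the sketch makes is structural: knowledge appears in three of the four categories, and only W0 is knowledge-free.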

Significance for Knowledge Management

Many in KM prefer not to spend too much time talking about how to characterize knowledge. They believe that we can improve the quality of knowledge without saying much about what it is we’re improving, just by showing that KM activity has a positive impact on collaboration, or community, or perhaps certain bottom-line factors. I think people who hold this view don’t understand what it will take to show that KM has a positive impact on such factors. A theory, or set of assumptions, underlies KM. It is a simple theory and can be stated as:

— higher quality KM activity leads to higher quality knowledge making and/or higher quality knowledge sharing;

— higher quality knowledge making and knowledge sharing results in higher quality knowledge available to support individual decisions, and, of course,

— higher quality individual decisions lead to higher quality outcomes.

Now, if KM is going to be successful, we need to show that the general outlines of this theory are correct, and I think that, in order to do that, we need to measure the various links in this chain, including the knowledge link. In order to measure the knowledge link, in turn, we need a theory about the nature of knowledge. If we think that knowledge means only biological cognition and W3 abstract objects, then we will include only measures of changes of state in those things. If we also think that knowledge includes mental beliefs and predispositions, however, then we will also try to measure the change in quality of that mental knowledge, with all the difficulty that entails.
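To make the measurement point concrete, here is a minimal sketch of checking each link in the chain separately, rather than only the end-to-end input/outcome relationship. All of the scores are hypothetical; in practice each would be a constructed measure, not a single number:

```python
# Hypothetical per-intervention scores for each link in the KM theory chain:
# (KM activity quality, knowledge quality, decision quality, outcome quality)
interventions = [
    (0.9, 0.8, 0.7, 0.6),
    (0.4, 0.5, 0.5, 0.7),  # good outcome despite weak KM: a confound, not KM success
    (0.8, 0.7, 0.8, 0.8),
    (0.2, 0.3, 0.4, 0.3),
]

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

cols = list(zip(*interventions))
links = [("KM -> knowledge", 0, 1),
         ("knowledge -> decisions", 1, 2),
         ("decisions -> outcomes", 2, 3)]
for name, a, b in links:
    print(f"{name}: r = {pearson(cols[a], cols[b]):.2f}")
```

Only if each link shows a relationship can the whole theory be credited; a single input/outcome correlation leaves every intermediate link unexamined.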

Having said the above, I have to say that I often have the feeling that many in KM think that, to get people to accept this theory, they don’t have to measure the intermediate links, but can just look at the KM input and the bottom-line result of any intervention, and claim success or run for the hills depending on what that result is. However, I think that such a view is naive in the extreme. In the case of a good bottom-line result, opponents of KM will attribute that result to everything under the sun except KM, and for the friends of KM every bad result will be rationalized by pointing to various factors that have undermined the impact of KM. To undercut such self-serving reactions, on both sides, we need to measure the links in that chain. To the extent that we do, we can make KM accountable. To the extent that we don’t, we will just continue to wave our hands in favor of whatever conclusions about impact we favor.

→ 1 Comment · Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

Interpreting Popper’s Three Worlds Ontology for Knowledge Management: Part One

July 28th, 2008 · 1 Comment


Popper’s Three Worlds Ontology

In his Objective Knowledge (1972), Karl Popper introduced the idea of three ontological worlds or domains. The first world is the world of material objects, events, and processes, including the domain of biology. The second world is the world of mental events, processes, and predispositions– the world of beliefs and other psychological phenomena. The third world is the world of the products of the human mind. In later work, Popper took his friend John Eccles’ suggestion and referred to the first world as world 1 (W1), the second world as world 2 (W2), and the third world as world 3 (W3).

Why three worlds rather than one? First, because Popper was seeking a formulation that would help him solve the body-mind problem, and he didn’t think that either materialist monism, or any variation of classical Cartesian dualism worked. Against monism, he reasoned that it had no way of accounting for either the emergence and facts of mind and consciousness, or the emergence of culture. And against dualism, he reasoned that it had no way of accounting for its claim that mind is some special non-material substance, and also, therefore no way of accounting for its emergence in terms of natural selection, since non-material substances, as opposed to abstract objects like logical and artistic content, cannot emerge out of material objects and processes through natural selection.

Second, Popper also wanted to be able to analyze the interaction among the material world (W1), mental processes (W2), and the cultural objects produced by human minds (W3). He believed the effects of autonomous linguistic, artistic, dramatic, and musical content, various aspects of W3, on W1, working through W2 were very real and very transformative. But, one can’t analyze such interactions without recognizing these categories in the first place.

Third, Popper was also very concerned about the subjectivity of belief knowledge and the rise of relativism in the 20th century. The failure of foundationalist Kantianism and logical positivism, coupled with the successes of both American Pragmatism and his own Critical Rationalism, had discredited the Platonic idea of knowledge as Justified True Belief (JTB). Popper’s distinction between W2 and W3 allowed him to reconstruct the notion of objective knowledge, not as JTB, but as fallible and sharable assertions in W3, that are defeasible by criticisms and empirical tests, but that have survived such criticisms and tests at the time we are considering them. In turn, our beliefs, even the ones we are psychologically certain about, in his hands, become subjective knowledge in the precise sense that they are unsharable and uncriticizable.

And that brings us to knowledge more generally. Popper again explicitly distinguished between subjective and objective knowledge using the three worlds ontology, and in many publications he also states or clearly implies that knowledge is made by living things that do not have mind and consciousness, and that therefore do not make either W2 beliefs or W3 products. Such knowledge includes genetic content, but also learned predispositions of organisms lacking minds, all in W1.

The Unified Theory of Knowledge

In my work in Knowledge Management, I’ve relied heavily on Popper’s three worlds ontology and also on his ideas about knowledge. The “unified theory of knowledge” developed by Mark McElroy and me, and named by Art Murray, closely follows Popper’s work, but makes explicit that:

Knowledge is a tested, evaluated and surviving structure of information (e.g., DNA instructions, synaptic structures, beliefs, or claims) that is developed by a living system to help itself solve problems and which may help it to adapt.


One can distinguish W1, W2, and W3 types of knowledge that fit this definition:

— tested, evaluated, and surviving structures of information in physical systems that may allow them to adapt to their environment (e.g., genetic and synaptic knowledge composed of biological structures used in developmental and learning processes);

— tested, evaluated, and surviving beliefs and belief predispositions (in minds) about the world (subjective, or non-sharable, mental knowledge composed of mental structures used in learning, thinking, and acting); and

— tested, evaluated, and surviving, sharable (objective), linguistic formulations about the world (i.e., claims and meta-claims that are speech- or artifact-based or cultural knowledge used in learning, thinking, and acting).
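The common thread in all three types, knowledge as information that has been tested, evaluated, and has survived, can be sketched as a simple filter. This is a toy model of my own; the claims and the single “test” are invented for illustration:

```python
def surviving_knowledge(claims, tests):
    """Return the claims that survive every test applied to them.

    A 'test' here is any predicate that can refute a claim; claims that
    fail a test are eliminated, and the survivors count as knowledge so far.
    """
    return [c for c in claims if all(t(c) for t in tests)]

# Toy claims, paired with a flag standing in for 'survived observation so far'.
claims = [
    ("every even number > 2 is a sum of two primes", True),  # unrefuted so far
    ("all swans are white", False),                          # refuted
    ("water boils at 100 C at 1 atm", True),
]

# A single toy test: did the claim survive observation?
tests = [lambda claim: claim[1]]

for claim, _ in surviving_knowledge(claims, tests):
    print("knowledge (so far):", claim)
```

The key feature the sketch preserves is fallibilism: survival is always relative to the tests applied so far, so membership in the surviving set can change as new tests arrive.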

I’ve used these ideas in my work for a number of years and have suggested in many publications that this view of knowledge is a very good one for Knowledge Managers to adopt, because it recognizes three types of knowledge and therefore transcends conflicts among those who think that knowledge is exclusively biological (very few), or exclusively psychological (many), or exclusively cultural (also many). In other words, this view is highly synthetic and provides a very broad perspective on knowledge. It is also consistent with evolutionary theory and the emergence of knowledge (though this part of the story is beyond the scope of this blog), first in the biological world, and later in the form of co-evolving knowledge in the mental and cultural worlds. Others in KM have also begun to look into Popper’s work and, specifically, into the three worlds ontology. This has begun to give rise to alternative interpretations of the three worlds notion. The first of these alternatives is from William P. (Bill) Hall, a friend with whom I’ve corresponded extensively for some years now, and whose views on many things I broadly appreciate and agree with.

Bill Hall’s Reformulation of the Three Worlds Ontology

In a recent, soon-to-be-published, and, I think, very ambitious and important paper co-authored with Richard Vines and Susu Nousala, and in various e-mails posted to the autopoiesis-dialognet yahoo group, Bill follows Maturana and Varela (as do I) in thinking that life is characterized by autopoiesis and cognition, and that these are two sides of the same coin. They write further, however, and here I think they part company with me and Karl Popper, that one ought to consider W2 as beginning where cognition begins, with life. Bill and his collaborators also state that W3 includes objective knowledge held in the logical content of the genetic code. And in other communications, Bill has indicated that he thinks Popper asserted this, but that, in any case, he himself thinks that W3 includes such knowledge.

I think these ideas about where W2 begins, and about what is included in W3, amount to a new construction of the three worlds ontology that departs considerably both from Popper’s views and from my use of them in the unified theory. That doesn’t make these claims wrong. But it does make them different, and I want to make those differences quite clear before I evaluate them. First, to begin W2 at life, rather than at mind, removes both cognition and knowledge from W1 and places them in W2. Does it remove all knowledge from W1 and place it in W2? Not necessarily, because genetic knowledge, represented by the structure of the genetic code, is produced not through cognition but through natural selection.

Second, shifting the boundary of World 1 relative to Popper’s view also removes his focus on the body-mind problem. With the shift in boundary, the interaction of W2 and W1 is no longer about the impact of the conscious mind on the material world, but about the impact of cognition of all kinds on the material world and vice versa. This is an interesting question, but it is very different from the one Popper was asking. Further, the interaction of W2 and W3 is no longer about the impact of the conscious mind and our beliefs on W3, nor is it about whether the conscious mind grasps and understands W3 content; it is now about the impact of cognition more generally on W3, and about whether cognition, conscious or not, “grasps” W3. Still further, we are no longer asking about the impact of W3 on the conscious mind or the self, but rather about the impact of W3 content on cognitive processing. Again, a very different, though also a very interesting, question.

Moving to the construal of W3 as including the logical content of the genetic code itself, rather than just the logical content of our theories about the genetic code: this revision of the three worlds ontology has the effect of moving the remainder of W1 knowledge out of W1 and into W3, leaving W1 empty of knowledge, and also of including in W3 knowledge that is not: a) produced by the human mind (since genetic knowledge is produced by physical natural selection), b) mediated by W2, in either Popper’s or Bill’s sense of this term, or c) objective in the sense that it is sharable and criticizable by others.

→ 1 Comment · Tags: Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

National Governmental Knowledge Management: A Guest Reply By Richard Vines

July 26th, 2008 · 1 Comment


I think your twin posts on knowledge management and its possible relevance to national governments raise some very interesting and creative ideas that warrant a serious pause for thought.

I have just been in the United States and revisited the Washington Mall, and the axes of the Mall including the White House, the Congress and the Jefferson and Lincoln memorials. And, then from a local rise, I was able to look out over these pillars of American democracy – where I also learned for the first time the remarkable story of the stitching together of the families of General Washington and General Lee through marriage down the generations. The vantage point was of course the Arlington Cemetery. I had no idea that this piece of land actually once belonged to General Lee’s family, and that he spent the latter part of his life playing such an active role in reconciling the country after the shocking civil war. What a strange mixing of narratives there has been over the years.

So, I have returned to Australia with grand memories of these iconic places that are testament to so much of the fabric of the modern world, and to the potency of the principles of the “separation of powers” and the rigorous critique of, and integration of, policies that ensues from this separation.

In your post, you have been both provocative and brave to have posed a very fundamental question for KM. Yours is an intimidating question for a domain of practice which still has not developed any real sense of coherence, or identity. Here is your question repeated.

With the development of formal KM in the late 1980s, however, the question arises as to how to organize KM activities in National Governments, and also how to decide which agencies, and inter-agency projects and programs can benefit from a formal KM structure and which can continue to be handled informally, through individual efforts and self-organizing group structures?

And, of course, first off, you discuss whether the question should be asked in the first place. We are, of course, free to do nothing about what KM may or may not have to contribute to our respective national governments. But, I agree with you – why should we not advance a cause that is grounded deeply in our emergent understandings of what might become a “knowledge domain of practice”. Why should we not encompass notions of emergence and at the same time declare what might be possible through KM?

It is true, perhaps, that your preferred views about the knowledge domain are grounded deeply in the traditions of your own national culture. For example, it is fascinating that your projected vision for KM within national governments (the third option) is premised on a foundation of the separation of powers: a national KM strategy that is both centralized and decentralized, and one which reports to the legislative and not to the Executive arm of government.

But beyond this supra level of governance, implementing a national KM intervention by embedding the separation of powers could do much to strengthen the wisdom arising from the edges of complex adaptive systems. For example, what would it be like if we slowly developed a whole new cadre of bureaucrats who were well versed in what it takes to generate and implement public policy with a real sense of complexity and network theory and practice?

Quite frankly, Joe, I think the ideas you outline are worthy of the highest scrutiny. Surely the skeptics would have to agree that such an approach could do much to strengthen both the coherence and the adaptive capacity of individuals, agencies, and national Governments as a whole? Of course, such an approach would not be without its risks and its skeptics. I can hear the financial fiduciarists groaning already. But I can also see the potential for reform and innovation. I wonder how long it might be before we have “knowledge fiduciarists,” and what their work might be?

Whilst reading your blog, I found myself thinking: how could such a model be adopted within a Westminster context? I wonder what might be the equivalent of the creation of a KM network (centralized and decentralized) that would report to American legislators. And how would the mechanics of such an approach be implemented in Australia across seven states, where our population is so much smaller and where such an approach, because of our smallness, might have the unintended consequence of excessive bureaucratic intervention?

Thanks for the blogs, Joe. They show that when we write something down, what we write down can generate potentialities way beyond what we know. This is the mystery of cultural artifacts. I am sure President Washington, his wife and step-son would get a surprise if they were to look out over the city of Washington from the vantage point of what now is the Arlington Cemetery and see the pillars of US democracy reflected in the axes of the Mall and the cultural artifacts/monuments – including his own! But equally, I suspect he might also reflect on the importance of the continued commitment to the principles of openness required to achieve the necessary critique of policies that the separation of powers is designed to provide.

It is perhaps a lofty aspiration for KM to hitch a ride on these principles of governance, openness and critiquing across multiple levels of hierarchy in all open Governments. But, why not, given how fundamental we think the nature of knowledge is in relation to the expression of life diversity itself?

However, for these lofty aspirations to be deeply sustainable, they need to be grounded in a deep rooted theory and practice of knowledge itself. We have to keep reminding ourselves that KM is about context, networks, scrutiny and adaptability and not about political hierarchies, including political nationalism. This is a tall order and a worthy aspiration to keep committing to. I hope the potential for an emergent coalescence of ideas, including your own ideas, Joe, takes root.

→ 1 Comment · Tags: Complexity · Epistemology/Ontology/Value Theory · KM Techniques · Knowledge Integration · Knowledge Making · Knowledge Management

National Governmental Knowledge Management: KM, Adaptation, and Complexity: Part Two

July 24th, 2008 · 8 Comments


The Organization of Knowledge Management in National Governments (continued)

A second possible answer to the question of how to organize KM in National Governments is to organize it in a decentralized way across national governmental agencies and inter-agency teams. Each Governmental unit, or inter-agency group, would have some KM personnel and would be responsible for doing what it could to enhance knowledge processing and Knowledge Management within their local agency or inter-agency group. KM personnel in the agencies and groups would be responsible to the heads or chiefs of the agencies, groups, and problem-solving teams they are a part of. KM personnel then would be charged with implementing the will of these heads and chiefs. Formal organization transcending agencies and inter-agency teams would be avoided, and the National Government implementing such a possibility would leave it to the KM officers and personnel in each locale to self-organize external relations with other locales, including establishing friendship groups, communities of practice, conferences, committees, and using whatever other communication venues and means they needed to employ to create an informal, emergent, National KM system.

This second possibility, which I’ll characterize as formal, decentralized “local KM,” has the advantages of decentralization, some measure of distributed problem solving, and also the potential for self-organization and emergent National KM patterns across locales and inter-agency teams, including the centralized executive staffs and structures of the Government. But it also has a number of crippling disadvantages.

A) The first of these is the “strategy exception error.” That is, by subordinating KM personnel to local heads and chiefs, this possibility subordinates the function of enhancing knowledge processing to the requirements of existing strategy. The error here is that existing strategy can be inconsistent with the adaptive function of KM, in that it may make no provision for enhancing the knowledge processing on which the content of strategy itself depends. To avoid this strategy exception error, KM personnel need a measure of autonomy from line authority and the ability to define for themselves where knowledge processing in the locales and other groups needs to be enhanced.

B) Another major disadvantage of this decentralized local KM pattern is that it contains nothing formal to prevent “stove-piping” and constant re-invention of the wheel in locale after locale, and across inter-agency groups and teams as well. Stove-piping can be reduced by encouraging and enabling self-organization across locales, groups, and teams. But unless there’s a central authority to do this and to provide some enabling resources it won’t happen.

C) A third major disadvantage of the decentralized local pattern is the failure to recognize the fiduciary character of KM for National Governments, and to provide a structure for regulating KM performance in the Government that connects KM to its fiduciaries. Since adaptation is an essential function for Governments, on which all else depends, it can be argued that the innovation performance of the Government is a matter that must receive continual oversight from the Government’s trustees.

In National Governments, those trustees are the legislators. Theirs is the fiduciary responsibility to ensure that the adaptive capacity of the Government and its innovation performance are at a sufficiently high level to meet the Government’s severest challenges. Under the decentralized local pattern, again, there is no direct connection of KM personnel to the fiduciaries, and therefore line executives are free to interpret the function of KM in terms of their everyday, routine needs, or their need to solve specific problems rather than in terms of the function of KM to enhance knowledge processing.

A third possible organization for National KM is one in which we add to decentralized KM a National Government KM Center for 1) performing KM research and development, 2) coordinating the availability of information about KM and knowledge processing, including information about KM R & D performed elsewhere, 3) funding KM programs and projects across the National Government, and 4) evaluating the impact of KM and knowledge processing activity across the decentralized, partially self-organizing clusters of KM activity. In other words, this Center would be a combined “clearinghouse,” KM scientific research center, funding source for programs and projects, and evaluation agency.

This National Government KM Center would not have direct line authority over the KM staffs and activities in the various locales, even though it would fund their programs and projects. Nor would it be housed with, or subject to, the central executive authority in the National Government. That authority would have its own KM activities and staffs, which, in both the second organizational possibility and in this one, are viewed as just one of the “locales” in which KM activity would be formally constituted.

The National Government KM Center would function autonomously relative to the Executive, and would be directly responsible to the Legislative Authority which would directly fund it and evaluate its performance. In its evaluation function, the KM Center would function like the Government Accountability Office (GAO) in the United States (perhaps it should be called the Knowledge Accountability Office, the KAO), and would report to the Legislative Authority and also to the Executive, on the state of KM and knowledge processing in the National Government. In its R & D function, the National Center would operate like a National Laboratory, but with a specialization in creating knowledge about how to enhance Knowledge Management and Knowledge Processing. In its Information Clearinghouse Function, the National Center would serve as a coordinator of information and knowledge generated in locales related to KM and knowledge processing. In its funding source function, the National Center would provide the support for agencies, groups, and inter-agency teams, to actually perform KM programs and projects to enhance knowledge processing in the various locales including inter-agency problem solving teams. And in its evaluation function it would serve as the mechanism of National Government-wide KM performance accountability to the legislative fiduciary.

Among the above possibilities, the third is the one I like best. First, it does nothing to undermine self-organization in knowledge processing and KM, so the first possibility can be implemented within the framework of the National Governmental KM Center alternative. Second, the second possibility, implementing decentralized, distributed KM in locales while preserving self-organization in knowledge processing, can also be implemented within that framework. Third, the National KM Center would, in addition, address the three problems with the second possibility that I pointed to earlier.

Thus, the “strategy exception error” would be ameliorated by making sure that local KM chiefs and staffs can get funding for projects that will strengthen both the strategy-making process (an activity relying on knowledge making) and the full range of knowledge processing in various domains, regardless of whether local agency chiefs have formulated strategies that emphasize the importance of adaptation and problem solving. Also, the National Governmental KM Center would ameliorate the inevitable tendency toward locale-based stove-piping by enabling knowledge and information sharing, and collaboration, across locales and agencies through its knowledge and information sharing programs and facilities, its cross-locale funding of projects and programs, and, finally, by making available its evaluation reports, including information about KM impact evaluations.

Finally, the third problem, that of implementing KM as a fiduciary function of the National Government, is solved by the third possibility, since the National Governmental KM Center itself would be directly accountable to the legislative authority, while all of the local KM functions, though continuing to be formally accountable to executive authority, would also be accountable to the National KM Center, and ultimately to the legislative authority, because they would depend on both for much of the information and knowledge, research findings, funding for programs and projects, and impact and performance evaluations they need to function.

Are there disadvantages to this third possibility for National Governmental KM? I’m quite sure there are. But this is, after all, just a blog post, and it must stop somewhere. At this point, I’d like to open up the subject for discussion, in full confidence that these disadvantages will very quickly be revealed.

→ 8 Comments · Tags: Complexity · Knowledge Integration · Knowledge Making · Knowledge Management

National Governmental Knowledge Management: KM, Adaptation, and Complexity: Part One

July 23rd, 2008 · 6 Comments


National Governmental Knowledge Management

The primary focus of Knowledge Management, thus far, has been on organizations, communities, and teams, with some emphasis on Personal Knowledge Management (PKM) and “Knowledge Cities.” Knowledge Management in Government has primarily continued the organizational focus of most work in the field. It is agency-based and project-focused, and has had little to say about Knowledge Management at the level of a National Government taken as a whole. This post and the one following are about a vision for National Governmental Knowledge Management. The vision will be brief and will gloss over all sorts of details and problems, as is perhaps appropriate for an initial presentation in a blog format. It has two objectives: first, to present ideas for decentralized Knowledge Management that allow for distributed problem solving; and second, to present ideas for a central KM agency that will (a) enable decentralized KM in various Governmental locales by providing new knowledge and information about KM and knowledge processing, and (b) provide for KM accountability to National fiduciaries representing the citizenry.

Adaptation, Knowledge Management, and Complexity

Any human organization, including a Government, must cope with the twin problems of integration and adaptation. Integration involves organizing a Government’s activities to maintain its identity and its unity in pursuing its primary goals and objectives. Adaptation involves organizing its activities to cope with changes in its environment that affect it. Adaptation presupposes the existence of knowledge about how to cope with and survive the challenges resulting from these changes, whether they’re anticipated or not.

Or, at the very least, it requires producing such knowledge in the process of solving problems, as well as integrating new knowledge so that it’s available when needed. It also presupposes the existence of knowledge about how to solve problems, and how to learn, when the need to do so presents itself. Thus, high-quality learning, problem solving, and the production and integration of relevant new knowledge (that is, innovation) in response to Governmental problems are essential requirements for successful adaptation in Governments and Nations.

In short, learning, problem solving, and knowledge processing are essential functions of Governments, and of other large-scale systems too, because knowledge is the resource that such systems use to adapt. This raises the question of how Governments can see to it that these functions are performed well. And this brings us to Knowledge Management. Knowledge Management is an activity intended to enhance knowledge processing. Extending this simple statement a bit, it is an activity intended to enhance the following behavioral processes:

— problem seeking, recognition, and formulation,

— knowledge production (creating or discovering new knowledge), and

— knowledge integration (making the new knowledge available to others in the system through broadcasting, searching/retrieving, teaching, and sharing).

Now that I’ve outlined a very abstract notion of Knowledge Management, we are almost ready to address the question of how it should be organized in National Governments. But first, I need to say a few words about complexity. National Governments are Complex Adaptive Systems (CASs). That means they have coherence in the face of change, or “identity.” Coherence refers to the maintenance of the characteristic pattern of organization of a CAS through time. Coherence in a National Government’s overall pattern of organization persists in spite of the continuous change occurring in its agents, the materials it may use, the challenges it is called upon to meet, and the specific responses it produces. The process of maintaining coherence or identity in the face of environmental changes, also referred to as “self-making,” is called “autopoiesis” by Maturana and Varela.

Further, agents in a CAS self-organize to produce emergent global behavior. This is one of the most important features of National Governments and CASs.

The key idea is that the agents comprising a CAS act in accordance with their own purposes and motives, in pursuit of their own goals, and that their actions produce self-organization, without any centralized planning or control, and in a way that we cannot model successfully, resulting in the recognizable pattern of global organization that identifies the CAS. Of course, in National Governments there is centralized planning and control as well as distributed planning and control. Nevertheless, there is still emergence of unplanned global behavior, along with other features that give each National Government its distinctive character.

In such systems, distributed problem-solving and knowledge processing are important. Individuals in National Governments solve their own problems. In doing so, they contribute to solving the problems of National Governments in a distributed, but, nonetheless, organized fashion. In addition, the ability of National Governments to learn and develop new and effective knowledge is greater to the extent that their constituent agents are operating in problem-solving and distributed knowledge processing environments marked by relative “openness.” The more “openness” in the distributed knowledge processing environment, the greater the adaptive capability of the National Government, provided that the ability to learn of its agents remains constant. The reasons for this are that openness allows the National Governments to take advantage of the variety of skills and capabilities of individuals and collectives in the CAS, and also that it supports self-organization of informal structures in the system, that, in turn, support its distributed problem solving capability.

The Organization of Knowledge Management in National Governments

Knowledge Management is as old as self-conscious human beings. Since the first time humans became aware of learning and its knowledge outcomes, the question of how they could improve their learning sub-processes must have occurred to them, and measures that they thought would be effective in enhancing their capacity to learn must have been of interest to them. The first time a human used one of these measures, she or he was doing Knowledge Management, even though there was no word to label activities intended to enhance knowledge processing.

So, it’s no exaggeration to say that every agency in National Government, every inter-agency project or program, and every individual as well, performs some KM activity, and always has. With the development of formal KM in the late 1980s, however, the questions arise of how to organize KM activities in National Governments, and of how to decide which agencies and inter-agency projects and programs can benefit from a formal KM structure and which can continue to be handled informally, through individual efforts and self-organizing group structures.

The first possible answer to these questions is to have no formal KM at all. One rationale for this answer is that any formal organization of KM is likely to interfere with the self-organization of knowledge processing activities. Since, it may be argued, self-organization is best for enabling creativity, it follows that there should be no formal KM activity that might interfere with this creativity and impose the kind of command-and-control measures that are counter-productive from the standpoint of distributed and creative problem solving.

This “no formal KM” position can be doubted, however. It depends on the idea that there is nothing we can do to encourage and enable creativity, and that we must take a completely passive position and simply let it emerge. Put another way, it says that we can do nothing either to intentionally construct an “ecology of rationality,” or to systematically disrupt other ecologies that undermine rational approaches to problem solving and knowledge making. While this position may be true, it also may be false, and to accept it without testing the theory by attempting to construct such ecologies is to enable a self-fulfilling prophecy. In a word, to know whether this position is correct, we must try to refute it by attempting to organize and implement KM in National Governments, to see whether it can produce an ecology of rationality that enhances knowledge processing throughout such Governments.

To Be Continued

→ 6 Comments · Tags: Complexity · KM Techniques · Knowledge Integration · Knowledge Making · Knowledge Management

A Correct Interpretation of a Musical Composition?

July 22nd, 2008 · Comments Off


I think that a musical composition is different from a text asserting logical and semantic content. There still might be a “correct interpretation” of musical compositions, but I don’t think the issue here is one of a true theory about the semantic and logical content of a text, but of the aesthetic value of different musical compositions, or of different performances of the same musical score. I think we can ask whether one or another performance of a musical score has more or less aesthetic value, and that people can answer this question by rating musical performances. In fact, it is possible for each person rating such performances to develop a measurement model producing a ratio-scaled set of numerical ratings of the relative aesthetic value of any comparison set of such performances. Of course, each of these models will be the conjecture of a single individual, so you may ask: what do all of these conjectures have to do with knowledge? Well, first of all, each of these measurement models can be tested for logical inconsistency. And second, we can use neural networking or other models to derive the patterns of coherence among the alternative models generated by different people, and we can decide whether to accept a measurement model as the “best one” based on such an analysis of the pattern of coherence.
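The two tests mentioned above can be sketched concretely. Below is a minimal Python illustration, under assumptions of my own: raters are represented by sets of hypothetical pairwise judgments (the performance and rater names are invented for illustration), logical inconsistency is detected as a preference cycle, and the “pattern of coherence” between two raters is crudely approximated as the fraction of jointly judged pairs on which they agree. A real measurement model would be far richer; this only shows that both tests are mechanically checkable.

```python
from itertools import permutations

# Hypothetical data: each rater's pairwise judgments among performances,
# where (a, b) means "performance a has more aesthetic value than b".
ratings = {
    "rater1": {("Karajan", "Solti"), ("Solti", "Bernstein"), ("Karajan", "Bernstein")},
    "rater2": {("Solti", "Karajan"), ("Solti", "Bernstein"), ("Karajan", "Bernstein")},
}

def is_consistent(prefs):
    """Test a rater's model for logical inconsistency: a preference
    cycle such as a > b, b > c, c > a cannot be mapped onto any
    ratio (or even ordinal) scale of aesthetic value."""
    items = {x for pair in prefs for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs:
            return False
    return True

def coherence(prefs1, prefs2):
    """Crude stand-in for the 'pattern of coherence' between two
    raters' models: fraction of jointly judged pairs they agree on."""
    shared = {frozenset(p) for p in prefs1} & {frozenset(p) for p in prefs2}
    agree = sum(1 for p in prefs1 if p in prefs2 and frozenset(p) in shared)
    return agree / len(shared) if shared else 0.0
```

In this toy example both raters pass the consistency test, and their coherence is 2/3 (they disagree only about Karajan versus Solti). Scaling this to many raters and deriving a single “best” model is where the post’s neural-network or other aggregation machinery would come in.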

The model produced in this way is “objective” in the sense that it is sharable and criticizable. We cannot know for sure whether the scores produced by such a model actually correspond to “aesthetic value.” But then, we can never be sure that any of our models, descriptive or otherwise, correspond to what they’re supposed to correspond to. All we can ever do is compare competing models, evaluate which is best according to our critical frameworks, and get on with growing our knowledge. If our efforts improve our lives over time, then we will eventually know whether our knowledge has helped us adapt to the challenges posed by our environment. If it has, then it truly was knowledge, whether of fact or value. If it hasn’t, then something is wrong with our frameworks for evaluating our knowledge claims, and we had better improve both the originality of our ideas and these evaluative frameworks.

Finally, what we should not do, I think, is what we have done in the field of value theory, including ethics, normative theory, and aesthetics. That is, we should not accept the epistemological perspectives of relativism and subjectivism, which assert that our knowledge is always and everywhere a matter of individual, ethnic, moral, or cultural subjectivity. To make that assumption is to give up the game before we have played it. It is to give up the possible growth of knowledge and the attainment of objective knowledge in return for the comfortable feeling that arises from thinking we are sophisticated and realistic because we have recognized the diversity of human perspectives and have concluded from this diversity that we can, of course, create nothing but an arbitrary unity. This assumption is nothing but a basic axiom of a particular epistemology, which, however sensible it may seem, may be a false theory of knowledge. And if it is false, and if we in our “sophistication” act in accordance with it, we will give away any possibility of the growth of our knowledge, and with that growth, our future world itself.

Comments Off · Tags: Epistemology/Ontology/Value Theory