All Life Is Problem Solving

Joe Firestone’s Blog on Knowledge and Knowledge Management


On Classifying “Systems:” Part One

May 9th, 2008


Introduction


One of the aspects of Dave Snowden’s Cynefin approach is the identification of three physical and five human “domains,” or “systems.” The physical systems are called “order,” “chaos,” and “complexity.” In the area of human systems, Dave breaks “order” down into known (simple) and knowable (complicated) systems, and also adds a fifth “domain” called “disorder.” In the act-km group there has recently been considerable critical discussion of this framework, with Stephen Bounds, Richard Vines, and myself all engaging in exchanges with Dave. I don’t want to discuss these exchanges in this post, though I will bring up a number of other aspects of the Cynefin approach in future blog installments. Here, however, I will focus only on the question of what a system is, and on the question of how we should classify systems.


Before I start this examination, however, I need to emphasize that this post is about “systems.” It is not about “domains,” “coalescences,” “contexts,” or other terms that Dave Snowden has used, in addition to “systems,” to describe the three physical and five human “things” named above. The question I’m addressing here is whether there is an alternative framework that answers the question of “how ought we to classify real world systems, both physical and human, in a way that best reflects the nature of reality?” In other words, I am addressing questions of ontology related to systems classification, not questions of ontology related to “domain,” “context,” or “situational” classification. I’ll take up those other questions in the future.


Systems


A system is a conceptually isolable unit composed of components and their interactions, both having properties. That is, it is a collective of interacting components. Components, in turn, are individual units of which properties may be predicated. Interaction consists of the contact, or exchange, components have with one another. As with components, properties may also be predicated of the interactions. Among the properties of interactions are global properties of the collectives we call systems.


Since complete descriptions of phenomena are logically impossible, when we analyze or describe systems, we never deal with the whole of a system’s reality. We always abstract and select from infinitely rich concrete reality a set of components, properties, and interactions which have significance for us.


To analyze system change we have to make the process/product distinction. Ontologically, only process may exist. But to view change, we have to distinguish time intervals or time slices from changes across them. Within any time interval or time slice, we can describe the state of components, interactions, and properties.


We can also distinguish the properties of components and interactions from the collective properties of a system. There are three types of such collective properties: aggregate properties are mathematical aggregations of the values of properties of individual components; structural properties are relations between or among individual components; and global properties are properties of the system itself that can’t be derived mathematically from either aggregate or structural properties.
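To make the three kinds of collective properties concrete, here is a minimal Python sketch of my own; the component names, the mass property, the links set, and the “maintains itself over time” flag are purely illustrative assumptions, not anything from the systems-theory literature:

```python
# A toy "system": components with a mass property, plus interaction links.
# All names and values here are invented for illustration.

components = {
    "a": {"mass": 2.0},
    "b": {"mass": 3.5},
    "c": {"mass": 1.5},
}

# Structural properties: relations between or among individual components.
links = {("a", "b"), ("b", "c")}

# Aggregate property: a mathematical aggregation of component property values.
total_mass = sum(c["mass"] for c in components.values())

# Global property: predicated of the system as a whole; here it is simply
# recorded as an observation, since by definition it cannot be derived from
# the aggregate or structural properties above.
global_properties = {"maintains_itself_over_time": True}

print(total_mass)            # 7.0  (aggregate)
print(("a", "b") in links)   # True (structural)
print(global_properties)     # global, taken as given
```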


Finally, “system” implies the idea of “boundary”: that there is an inside and an outside of any system, that is, a system and its environment; that there are conceptual criteria that allow us to distinguish the boundary; and that there is the possibility of exchanges coming into and out of the system, namely inputs and outputs.


General Systems Theory from 100,000 Feet


In the early days of General Systems Theory, in the 1940s, ’50s, and ’60s, classification of systems was simple. They were all classified as either mechanical (or closed) systems or teleological (or open) systems. Both types were viewed as deterministic systems subject to natural laws. If a system was generally not subject to causally relevant inputs from its environment, then it was viewed as a closed, mechanical system. Typical examples of such systems are clockworks and the solar system.


In teleological or open systems, system dynamics is subject to continual causally relevant inputs, and the system has to self-regulate its reactions to these (use feedback) in such a way that it maintains its internal conditions within certain limits, and also maintains its goal-directedness over time. It is because of this goal-directedness that these systems are called teleological. That is, through self-regulation and feedback, they operate in such a way that they tend towards particular states over time: the products of system processes. Teleological systems are different from mechanical systems in a very important respect. While mechanical system laws relate one system product to another system product from one time slice to another, teleological system laws relate one system product to a specified class or range of system products from one time slice to another. Again, however, both of these system types are deterministic.
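A rough feel for this self-regulation can be given by a minimal thermostat loop, sketched below in Python; the target, tolerance, and gain values are invented for illustration and do not come from any particular systems model:

```python
import random

# Minimal sketch of a self-regulating (teleological) system: feedback keeps an
# internal condition (temperature) within limits despite continual, causally
# relevant inputs from the environment. All parameters are illustrative.

target, tolerance, gain = 20.0, 0.5, 0.5
temperature = 15.0

for _ in range(50):
    disturbance = random.uniform(-1.0, 1.0)  # environmental input
    temperature += disturbance
    error = target - temperature             # feedback signal
    if abs(error) > tolerance:
        temperature += gain * error          # corrective action

# The system tends toward a class of end states (temperatures near the goal),
# rather than toward a single trajectory fixed by its starting conditions.
print(round(temperature, 2))
```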


As General Systems Theory developed over time, this simple classification of systems was shown to be inadequate. One development occurred in chaos theory. There, people studying the dynamics of certain deterministic systems discovered a class of systems they called “chaotic systems” which, though governed by deterministic laws, were nevertheless unpredictable in principle because (a) the future state of these systems is extremely sensitive to their initial starting conditions; and (b) in both theory and practice, we are never able to measure such starting conditions with perfect accuracy. Even a reasonable degree of accuracy in measuring starting conditions would not be enough to overcome this, because in such systems the divergence between the actual and the projected course of the system grows exponentially over time, so that the initial conditions, even though measured fairly accurately, cannot, together with the system’s laws, guide us to the future, determined, but unpredictable, state of the system.
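The standard textbook illustration of this sensitivity is the logistic map in its chaotic regime. The short Python sketch below is my own illustration, not anything from the original discussion; it follows two trajectories whose starting points differ by one part in a billion and shows the gap between them growing until the trajectories have nothing to do with each other:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 (chaotic regime).
# Two starting points differing by 1e-9 diverge completely within a few
# dozen steps: deterministic dynamics, yet unpredictable in practice.

r = 4.0
x, y = 0.2, 0.2 + 1e-9

for n in range(61):
    if n % 10 == 0:
        print(f"step {n:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
    x = r * x * (1 - x)
    y = r * y * (1 - y)
```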


Systems characterized by such “deterministic chaos” turn out to be much more common than the typical examples normally given of classical mechanical systems. That is, it turns out that, in reality, systems like the solar system and classical clockworks are atypical, their frequency of occurrence dwarfed by that of systems subject to deterministic chaos.


Even at the time, in the 1940s and ’50s when General Systems Theory first became popular, it was widely known that random or chance systems (systems in which elementary and irreducible chance events occur) existed, since Quantum Mechanics was already well established. So, given the progress of work on chaos theory and chaotic dynamics, it was apparent by the 1970s that there were two classes of systems, deterministic and indeterministic, and that the deterministic category included classical mechanical, teleological, and deterministically chaotic systems, while the indeterministic category included random or chance systems.


It was also apparent that these two classes of systems no longer successfully distinguished systems that were alike from others that were different in essential respects. In particular, deterministic systems included both classical clockworks, whose behavior was highly predictable, and chaotic systems, which, though deterministic, exhibited behavior that was unpredictable. This suggested that the early two-category classification of systems might be expanded into a four-category classification based on the deterministic-indeterministic and predictable-unpredictable dichotomies.


During the 1960s, and certainly by the middle ’70s, another development in the evolution of General Systems Theory was clearly visible. Work in biology and General Systems Theory by Ilya Prigogine, Manfred Eigen, Humberto Maturana, and Francisco Varela focused on the idea of “complexity,” and also on the closely associated ideas of “dissipative structures,” “emergence,” “self-organization,” “identity,” “self-making,” “autopoiesis,” and “cognition.” While I don’t have space here to discuss these now well-known ideas, I want to make the point that this first phase of complexity research established another key variant of the notion of system. Specifically, this idea views a complex system as a “pattern” or network of interactions, an “organization” that can be understood and explained in retrospect, but that is both non-random and indeterministic. It is non-random just because it is a pattern that persists through time. And it is indeterministic because (a) the pattern emerges out of interactions among its components in a way that cannot be accounted for by our theories and models, i.e., as a matter of fact we cannot specify laws that govern the detailed behavior of such systems, (b) we cannot know all the initial conditions to which the behavior of the system is sensitive, and (c) the details of the behavior of the system cannot be predicted by our theories and models.


This characterization of complex systems as indeterministic is a point that is not generally agreed upon. What is agreed is that “linear models” cannot account for or predict the details of complex system behavior. Some systems practitioners subscribe to determinism as a metaphysical doctrine, and assert the possibility that non-linear deterministic laws for such systems may always be found and that, in any case, it is good to proceed on such an assumption. I accept that this view may be right in individual cases. But I think it’s also possible that there are systems that are intrinsically complex and for which it may never be possible to develop either linear or non-linear deterministic laws.


In any event, here it is important to distinguish a number of different claims. First, there’s the view that all systems are really deterministic and that indeterminism, both random and complex, is an appearance arising out of our ignorance. This view suggests that neither random nor complex systems really exist and that all systems are deterministic. Second, there’s the claim that some particular system is deterministic or indeterministic, as the case may be. And third, there’s the claim that all systems are indeterministic. This is not the place to take up the first or third claims, and Karl Popper has already provided a wonderful discussion of the issues in The Open Universe (1982). Here, I think we can focus on the second claim and simply point out that at any point in time, and in any problem domain, theories and models that view a system as either indeterministic or deterministic can be compared, and we can choose which of these stands up best to our tests, evaluations, and criticisms. So whatever our views are about the ultimate reality of any system, we may still be able to agree that a particular theory, whether deterministic or indeterministic in character, is closer to the truth than another theory that expresses the opposing persuasion. We may also be able to agree that, as far as we know from the present state of scientific research, we can point to deterministic and indeterministic systems; within the deterministic category, to predictable and chaotic systems; and within the indeterministic category, to random or chance systems and complex systems, both of which are unpredictable in their behavioral details.


During the 1980s and 1990s the study of complex systems continued in biology and spread to economics and the social sciences. The outstanding work in this phase of complexity research is associated with various individuals affiliated with the Santa Fe Institute, including John Holland, Brian Arthur, Stuart Kauffman, Chris Langton, Doyne Farmer, Murray Gell-Mann, Philip Anderson, and George Cowan. Many others outside the institute have drawn from their work, which by now influences many disciplines and research traditions. The emphasis of earlier research on the self-organization of structures and processes in biological systems began to shift to research on the self-organization of agents into higher-level systems. Computer simulation has played a large role in this research, showing, in particular, how interacting agents governed by simple and deterministic rules could self-organize into higher-level emergent systems. This self-organization has been marked by the development of higher-level order arising out of system interactions. This order has been characterized as “order for free,” since its maintenance doesn’t require any central control. Emergent order is also characterized by the “enablers” and “constraints” that the emergent higher-level system imposes on self-organizing agents. That is, once agents do self-organize into a higher-level complex system, that system influences their behavior through the imposition of enablers and constraints: what Donald Campbell, some years earlier, called “downward causation,” and what Popper, in the development of his three worlds ontology, called “plastic controls.”
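To give a feel for the kind of computer simulation mentioned above, here is a toy sketch in Python, purely my own illustration (it is not any specific Santa Fe Institute model): agents following a simple, local, deterministic rule organize into larger-scale order with no central controller. The grid size, step count, and “majority rule” are assumptions made for this sketch.

```python
import random

# One-dimensional "majority rule" automaton: each cell (agent) adopts the
# majority state of its three-cell local neighborhood.

N, STEPS = 60, 30
cells = [random.choice([0, 1]) for _ in range(N)]
print("start:", "".join("#" if c else "." for c in cells))

for _ in range(STEPS):
    cells = [
        1 if cells[(i - 1) % N] + cells[i] + cells[(i + 1) % N] >= 2 else 0
        for i in range(N)
    ]

# Within a few steps the random initial configuration settles into stable
# blocks of 0s and 1s: higher-level order emerging from purely local,
# deterministic interactions, with no central control.
print("end:  ", "".join("#" if c else "." for c in cells))
```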


Another characteristic of this latest phase of systems research is its focus on the dynamics of systems. In particular, complexity research has emphasized transitions from rule-based order to chaos and complexity. It has also viewed complex systems as ones that are “far from equilibrium,” systems that maintain themselves between the highly predictable equilibrium of static order and the unpredictable dynamics of chaos. Finally, Chris Langton’s metaphor of complexity existing “at the edge of chaos” has been influential in spreading the idea that complexity is not rule-based order, but that it is nevertheless a “pattern” of order that must continually strive to prevent itself from decaying into either deterministic predictable order or deterministic unpredictable chaos. That is, “complexity” stands between two forms of determinism, but is itself indeterministic.


The Diffusion of Complexity Theory


The spread of Complexity Theory into the social sciences is now creating another fault line in General Systems Theory. Clearly, there are complex systems that emerge from self-organizing agent interactions and that operate without the aid of explicit central controls. Ant hills are an example of this sort of complexity. On the other hand, there are also complex systems that combine emergent self-organization with the efforts of system agents to direct and control their systems in accordance with their own intentions. In brief, there’s a distinction between Natural Complex Adaptive Systems (NCASs) and organizations composed of self-conscious intelligent agents. Let’s call the second type of complex system Promethean Complex Adaptive Systems (PCASs). Complexity science, at present, is mostly based on research about NCASs, not on research focused on PCASs, and we are now in a phase where we are trying to apply constructs, knowledge, and methods developed for NCASs to PCASs. It is likely that such an effort will succeed only partly, and that we will have to broaden complexity theories and approaches to be successful with PCASs.

Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management