(Co-Authored with Steven A. Cavaleri)
Here are some examples of criteria that may be used for comparing alternative solutions (i.e. decision models) in a Comparative Decision Making (CDM) context.
– Logical consistency (inconsistent decision models are invalid and must be reformulated)
– Empirical fit (competing models fit current and past data to varying degrees)
– Projectibility (models vary in their plausibility; models also vary in their after-the-fact success in prediction)
– Systematic fruitfulness (extent to which a decision model facilitates novel deductions)
– Heuristic quality (extent to which a decision model facilitates new conjectures)
– Systematic coherence and testability (coherence of statements relating abstractions in decision models, coherence of statements relating abstractions and concrete terms in decision models, and testability of expectations resulting from such coherence)
– Simplicity (economy in number of variables in a decision model, simplicity of mathematical form)
– Estimated risk of error in accepting a model rather than its alternatives.
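To make this concrete, here is a minimal sketch, in Python, of one way such criteria might be combined into an overall comparative score. Everything in it is an assumption made for illustration: the criterion keys, the 0-to-1 scales, the weights, and the weighted-sum rule with a consistency "gate" are not prescribed by anything above, and a real combination rule could look very different.

```python
# Illustrative sketch only: scoring alternative decision models on the criteria above.
# The 0-to-1 scales, the weights, and the weighted-sum aggregation are assumptions,
# not a combination rule prescribed by the text.

CRITERIA_WEIGHTS = {
    "logical_consistency": 0.20,       # also treated as a hard gate below
    "empirical_fit": 0.20,
    "projectibility": 0.15,
    "systematic_fruitfulness": 0.10,
    "heuristic_quality": 0.10,
    "coherence_and_testability": 0.15,
    "simplicity": 0.05,
    "low_risk_of_error": 0.05,         # inverse of the estimated risk of error
}

def comparative_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (each in [0, 1]) into one comparative score.

    An inconsistent model is invalid and must be reformulated, so logical
    consistency acts as a hard gate rather than one trade-off among others.
    """
    if scores.get("logical_consistency", 0.0) == 0.0:
        return 0.0
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)

# Example: compare two candidate decision models (the numbers are made up).
model_a = {"logical_consistency": 1.0, "empirical_fit": 0.8, "projectibility": 0.6,
           "systematic_fruitfulness": 0.5, "heuristic_quality": 0.7,
           "coherence_and_testability": 0.6, "simplicity": 0.9, "low_risk_of_error": 0.5}
model_b = {"logical_consistency": 1.0, "empirical_fit": 0.6, "projectibility": 0.7,
           "systematic_fruitfulness": 0.8, "heuristic_quality": 0.6,
           "coherence_and_testability": 0.7, "simplicity": 0.4, "low_risk_of_error": 0.6}

ranked = sorted({"A": model_a, "B": model_b}.items(),
                key=lambda kv: comparative_score(kv[1]), reverse=True)
print([(name, round(comparative_score(s), 3)) for name, s in ranked])
```

The point of the sketch is only that whatever criteria and combination rule one adopts, they can be made explicit and inspected.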
In comparatively evaluating alternative solutions, the criteria we use and the way we combine them constitute a kind of measurement model for a regulative ideal for comparison. Here are some examples of regulative ideals that might guide comparison in CDM.
– Comparison in the service of justifying a favored solution as the most valid one
– Comparison in the service of justifying one of the alternative decision models or solutions as the most valid one
– Comparison in the service of showing that one of the alternative decision models is a consensual model
Each of these regulative ideals for comparison will be accompanied by implicit or explicit requirements that need to be fulfilled to perform valid comparisons relative to the regulative ideal. For example, to perform “fair” critical comparisons one needs to ensure a “level playing field” for the alternative solutions. Here are some requirements for that:
– Equivalent specification (competing solutions must be developed with equal specificity before comparative evaluation);
– Continuity (when preparing competing solutions for comparison, more concrete versions of them must not change the fundamental ideas expressed in their original, less evolved versions);
– Commensurability (when competing solutions are expressed in terminology that’s so different that they defy easy critical comparison, the solutions must be restated using a broader conceptual framework that enables comparing alternatives); and
– Completeness of the comparison set (the competing solutions must be representative of the range of alternatives as far as one knows, since it is easy to bias a comparison by leaving a strong alternative out of it).
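As a purely illustrative sketch, these four requirements can be thought of as a gate that a comparison must pass before it is treated as fair. The Solution fields and the mechanical checks below are hypothetical stand-ins; in practice each requirement calls for judgment rather than a boolean test.

```python
# Illustrative sketch only: a pre-comparison "level playing field" check for the
# requirements above. The fields and their operationalizations are hypothetical.
from dataclasses import dataclass

@dataclass
class Solution:
    name: str
    specificity_level: int           # how fully developed the solution is (e.g., 1-5)
    preserves_original_ideas: bool   # continuity with the original, less evolved version
    shared_framework: bool           # restated in a common conceptual framework

def fair_comparison_issues(candidates: list[Solution],
                           known_alternatives: set[str]) -> list[str]:
    """Return reasons why a comparison of these candidates would not be 'fair'."""
    issues = []
    # Equivalent specification: all candidates developed to the same level of detail.
    if len({s.specificity_level for s in candidates}) > 1:
        issues.append("candidates are not specified to an equivalent level of detail")
    # Continuity: concretizing a candidate must not change its fundamental ideas.
    issues += [f"{s.name}: concretized version departs from its original ideas"
               for s in candidates if not s.preserves_original_ideas]
    # Commensurability: all candidates restated in a shared conceptual framework.
    issues += [f"{s.name}: not restated in the shared conceptual framework"
               for s in candidates if not s.shared_framework]
    # Completeness: no known strong alternative has been left out of the comparison set.
    missing = known_alternatives - {s.name for s in candidates}
    issues += [f"known alternative left out of the comparison set: {m}"
               for m in sorted(missing)]
    return issues
```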
There’s a very important reason why we used the label CDM, rather than Rational Decision Making (RDM), to describe pre-action selection involving alternative models: we believe that the term “rationality” should not be associated solely with the CDM pattern, and that the Authoritarian Decision Making (ADM) and Recognition-Primed Decision Making (RPD) patterns should not be associated solely with non-rationality. The choice of any of these three selection methods may be “rational” depending on the context, and depending on what one means by “rationality.” And once a choice is made about which selection method is appropriate in a particular context, the method selected can be implemented “rationally,” “non-rationally,” or “irrationally.”
Of course, the Open Problem Solving Pattern (PSP) requires distributed, transparent, and “rational” evaluation and selection of solutions to organizational problems. But what do we mean by “rationality” in the method of selecting and evaluating new solutions? Simply that, to the extent possible, the pre-action evaluation and selection process we implement ought to be one that provides the most severe test possible, within the bounds of practicality, of any proposed solution to a problem, so that solutions in error are eliminated before we have to implement them in action.
In other words, enhancing our selection and evaluation processes means setting up as Darwinian an environment as possible for our solutions, on the assumption that the solution or solutions that best survive the pre-action critical process are the ones most likely to be true. Or, if you like, the ones most likely to work when we move from the PSP back to the Operational Pattern (OP).
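A minimal sketch of that selection logic, with everything hypothetical: the candidate labels and stand-in tests below are placeholders for the real critical processes (criticism, empirical testing, fair comparison) an organization would apply before action.

```python
# Illustrative sketch only: a "Darwinian" pre-action selection process in which
# candidate solutions face the most severe tests practical, and only survivors
# move from the Problem Solving Pattern (PSP) back to the Operational Pattern (OP).
from typing import Callable

def select_survivors(candidates: list[str],
                     severe_tests: list[Callable[[str], bool]]) -> list[str]:
    """Eliminate any candidate that fails any test; survivors are carried forward."""
    return [c for c in candidates if all(test(c) for test in severe_tests)]

# Placeholder tests standing in for criticism, empirical checks, and fair comparison.
tests = [
    lambda c: "inconsistent" not in c,   # stand-in for a logical-consistency check
    lambda c: "untestable" not in c,     # stand-in for a testability check
]
print(select_survivors(["solution-1", "solution-2 (inconsistent)", "solution-3"], tests))
# -> ['solution-1', 'solution-3']
```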
Another way of viewing this is to note that even though the process of evaluating beliefs is Darwinian in the OP, it is not Darwinian in the PSP, unless we design it that way. That is, organizations can insulate beliefs from critical evaluation, testing, and fair comparison easily enough. In fact, they do that all the time. So the results of evaluation in the PSP may ill-prepare an individual or an organization for the reality it will face in the OP.
There are countless examples of this, but perhaps the most outstanding contemporary example is the “Bush 43” Administration’s evaluation, preceding the Iraq War, of whether or not to attack. It’s commonplace to point out that the Administration slanted the case for intervention to arrive at the result it wanted. It protected the “Go-to-War” solution from other alternatives that might have been adopted, and arranged the evidence so that it would “justify” that alternative. But clearly, the Administration’s non-Darwinian, justificationist evaluation process ill-prepared it for the Darwinian selection process imposed by the post-action OP of Iraq’s reality. That process has now led most of those viewing the situation to conclude that the “Go-to-War” solution was a disastrous error, resulting in costs much heavier than would have resulted from simply continuing to contain Saddam.
To Be Continued