This post is about “safe-fail experiments.” The essential idea in safe-fail experiments was expressed well by Dave Snowden in this way: “I can afford them to fail and critically, I plan them so that through that failure I learn more about the terrain through which I wish to travel.”
And again, in another place, he adds:
“One of the main (if not the main) strategies for dealing with a complex system is to create a range of safe-fail experiments or probes that will allow the nature of emergent possibilities to become more visible.”
I like this emphasis on safe-fail experiments: first because of their low risk character, and second, because the emphasis on them is about our “learning from failure,” or, put another way, about our “learning from error.” Learning from error is what Critical Rationalism is all about.
In outlining the way one should use safe-fail experiments Dave offers the following:
“– Before opinions harden you create a very simple decision rule. Everyone with an idea that has even the remotest possibility of being true or useful creates a safe fail experiment based on the idea. Critically this does not have to be one that would prove the issue, just consistent with the position adopted.
— Next each proposal is fleshed out, costed and subject to challenge and review, but nothing is ruled out unless rationing of resource is required. This is rarely the case by the way as you keep the experiments small, designed for fast feedback/evolution.
— For each experiment to be valid its outcome must be observable, not to measure necessarily but to allow the simple rule of amplification or dampening of good or bad patterns to be put into operation. There is no point in an experiment where you can not observe what is happening.
— The experiments are then reviewed for common elements and resourced along with set up of monitoring and review processes.”
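The amplify-or-dampen rule in the outline above can be sketched in code. What follows is my own illustrative reading of the rule, not anything Dave has published; the probe structure and the signed-signal convention are assumptions made for the sketch.

```python
# Minimal, illustrative sketch (my reading of the outline above, not
# Snowden's own procedure). Each probe carries an 'observe' callable that
# returns a signed signal: positive for an emerging good pattern, negative
# for a bad one, zero when nothing observable has happened yet.
def review_probes(probes):
    """Apply the simple amplify-or-dampen rule to a portfolio of probes."""
    decisions = {}
    for probe in probes:
        signal = probe["observe"]()
        if signal > 0:
            decisions[probe["name"]] = "amplify"        # good pattern emerging
        elif signal < 0:
            decisions[probe["name"]] = "dampen"         # bad pattern emerging
        else:
            decisions[probe["name"]] = "keep watching"  # no observable outcome yet
    return decisions

# A hypothetical portfolio of three small, fast-feedback probes.
portfolio = [
    {"name": "probe-A", "observe": lambda: 1},
    {"name": "probe-B", "observe": lambda: -1},
    {"name": "probe-C", "observe": lambda: 0},
]
print(review_probes(portfolio))
```

Note how the sketch honors the outline's requirement that outcomes be observable: a probe whose `observe` callable can return nothing meaningful has no place in the portfolio.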
I find some of the wording used in this outline of great interest. First, Dave views ideas as “true” or “useful.” While it’s not clear what “true” or “useful” mean in this context, it is clear that the kind of ideas Dave is talking about can be false. Since the experiments in question don’t have to prove an idea, but do have to be consistent with it, and also have to provide us with something we can learn, doesn’t it follow that what we can learn from the experiments is whether or not the knowledge claims or ideas underlying them are false? And, in turn, doesn’t that mean that safe-fail experiments are about testing the ideas underlying them?
But if this is true then why are these experiments viewed by Dave as mere “probes?” Why shouldn’t they be viewed as tests of the new ideas (conjectures) of their formulators? Moreover, in requiring that experimental outcomes be observable, isn’t Dave completing a pattern of activity that Popper specified for Science? Specifically, isn’t he recommending that participants in Cynefin applications develop conjectures, to be refuted, if possible by safe-fail experiments yielding observational outcomes, as his method for learning what direction to take in acting in complex (and sometimes also in chaotic) domains?
Exchange on Safe-Fail Experiments
Raymond Salzwedel at the narrative lab has also offered some ideas on safe-fail experiments in the form of 9 principles, and Dave Snowden has commented on these principles. Below I’ll add my comments to the views of Raymond and Dave (sourced from Dave’s blog).
Raymond: “Don’t be afraid to experiment – some will fail.”
Dave: “I would go further than this and say that experiments should be designed with failure in mind. We often learn more from failure than success anyway, and this is research as well as intervention. We want to create an environment in which success is not privileged over failure in the early stages of dealing with complex issues.”
Joe: I strongly agree with this comment of Dave’s. This is one of the central emphases of Critical Rationalism. We ought to try our best to design the experiment to test the idea underlying it as severely as possible. The game is about making it fail if we can. That way if it doesn’t fail, we’ll really be able to say that it may be true.
Raymond: “Every experiment will be different – don’t use the cookie-cutter approach when designing interventions.”
Dave: “Yes and no. You might want the same experiment run by different people or in different areas. Cookie-cutter approaches tend to be larger scale than true safe-fail probes so this may or may not be appropriate.”
Joe: There’s value in repeating safe-fail experiments to ensure that the results weren’t a fluke or an accident. However, there’s a problem here. One can’t claim that complex contexts are different from others in the sense that their outcomes are never repeated, and also claim that one can repeat the same experiment at a later time and still get the same result. So, I think that one of these two views will have to give way, and if we think that safe-fail experiments are possible, useful, and repeatable, then we will have to grant that, at least in the respects essential to the safe-fail experiments, the complex context is unchanged between the time we do the first experiment and its replication.
Raymond: “Don’t learn the same lesson twice – or maybe I should say, don’t make the same mistake twice.”
Dave: Disagree, you can never be sure of the influence of context. Often an experiment which failed in the past may now succeed. Your competitor may well learn how to make your failures work. Obviously you don’t want to be stupid here, but many a good initiative has been destroyed by the “We have tried that” argument.
Joe: I couldn’t agree more with Dave’s sentiment here, and with the view that the “We have tried that” argument is often way off-base; I’ll add that it is frequently superficial and motivated by political considerations of one kind or another. However, even though I agree with Dave that one can never be sure of the influence of context, I also agree with Raymond’s admonition about not making the same mistake twice, since I think his admonition assumes that all essential conditions, including context, are the same in both cases. Finally, having said the above, I’m also acutely aware of another possible problem here.
What if the relationships in complex domains between experimental treatments and outcomes are relationships of propensity expressible as probabilities? In that case, we couldn’t conclude that an idea was refuted based on the results of one or a few safe-fail experiments. If so, we’d have to make the same mistake a number of times before we were sure it was a mistake.
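The point can be made concrete with a little arithmetic. In this sketch (my own illustration, with an assumed propensity value, not part of the exchange above), an idea whose true success propensity is 0.7 still fails any single trial 30% of the time, so one failed safe-fail experiment is weak grounds for refutation; only a run of independent failures makes “this idea is a mistake” a well-tested claim.

```python
# Illustrative sketch (my own, with assumed numbers): if outcomes are
# probabilistic, how likely is it that a genuinely good idea fails every
# one of n independent safe-fail experiments?
def prob_all_fail(p_success: float, n_trials: int) -> float:
    """Probability that an idea with true success propensity p_success
    fails in all of n_trials independent trials."""
    return (1.0 - p_success) ** n_trials

# A single failure is common; five consecutive failures are rare, and only
# then does refutation of the underlying idea become probable.
print(prob_all_fail(0.7, 1))  # 0.3
print(prob_all_fail(0.7, 5))  # 0.3**5, roughly 0.0024
```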
Raymond: “Start with a low-risk area when you begin to experiment with a system.”
Dave: Again yes and no. If you are talking about the whole system yes, but normally complex issues are immediate and failure is high risk. The experiment is low risk (the nature of safe-fail is such that you can afford to fail) but the problem area may well be high risk. In my experience complexity based strategies work much better in these situations.
Joe: I agree with Dave on this point.
Raymond: “Design an experiment that can be measured. That is, know what the success and failure indicators of each experiment are.”
Dave: Change measure to monitor and I can agree with it. The second sentence I would delete.
Joe: I agree with Dave about changing “measure” to “monitor.” However, I’m not sure about the second sentence. That is, it seems to me that the experimental design should be quite clear about the observational outcomes that would constitute a refutation of the idea underlying a safe-fail experiment.
Raymond: “Try doing multiple experiments on the same system – even at the same time. Some will work, some will fail – good. Shut down the ones that fail and create variations on the ones that work.”
Raymond: “Introduce dissent. Maximize diversity in the experiment design process by getting as many inputs as possible.”
Dave: In the main agree, but see the above process. I generally don’t like the failure and success words as they seem inappropriate to probes
Joe: I agree with Dave’s remark about the failure and success words, provided these words aren’t tied specifically to the failure or survival of the underlying ideas. But if they are, then I have no problem with the words. Going further, however, I think there is a problem with parallel safe-fail experiments, because it may be difficult to associate the results of a particular experiment with that experiment, rather than with some portion of, or the whole of, the configuration of ongoing experiments. Of course, depending on the details, there may be no possibility of confounding the effects of one experiment with those of another. But if the interaction clusters within the complex context are highly interdependent, then simultaneous experiments on two or more of them will confound one another’s effects, and the results of each individual experiment won’t be attributable to that experiment.
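The confounding worry can be illustrated with a toy model (my own, with assumed numbers, not part of Cynefin): when two simultaneous experiments interact, the single observed combined outcome underdetermines the contribution of each.

```python
# Toy illustration (my own assumed model): when probes A and B run
# simultaneously in an interdependent context, the observed outcome mixes
# their individual effects with an interaction term.
def combined_outcome(effect_a: float, effect_b: float, interaction: float) -> float:
    return effect_a + effect_b + interaction

# Two very different attributions produce the identical observation, so a
# single combined run cannot tell us which experiment "worked".
story_1 = combined_outcome(effect_a=2.0, effect_b=0.0, interaction=1.0)
story_2 = combined_outcome(effect_a=0.0, effect_b=2.0, interaction=1.0)
print(story_1, story_2)  # same observed number from different causal stories
```

Separating the effects would require something like running the probes singly as well as together, which is exactly what highly interdependent contexts may not permit.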
Raymond: Learn from the results of other people’s experiments.
Dave: Yep, but remember your context is different
Raymond: Teach other people the results of your experiments.
Dave: Yep, but remember your context is different
Joe: Of course. This is just knowledge integration.
Safe-Fail Experiments for Unknown Domains
Dave and Raymond emphasize safe-fail experiments as appropriate for use in complex contexts, or in certain chaotic contexts where there is time for safe-fail experiments. However, it’s hard to see why safe-fail experiments can’t be applied to “complicated” contexts, and to others I’ve identified in a previous blog. The general principle here is that safe-fail experiments are a way of testing ideas and also of acquiring new information that can help in developing more new ideas. These things are needed in any context where a knowledge gap exists. Of course, in many complicated contexts, laboratory experiments may be used to test new ideas. However, laboratory experiments are clearly a type of safe-fail experiment, in that we can afford to have them “fail.”
Finally, safe-fail experiments are viewed as “probes” or “acts” in Cynefin. Since everything we do with conscious intention is an “act” of ours, we can certainly characterize them this way. However, safe-fail experiments are not “acts” that we carry out to have a specific operational impact as part of organizational routine. Instead, they are tests of ideas formulated as knowledge claims. So, rather than being part of organizational routine decision making and learning, they are an aspect of the creative learning process, which I have elsewhere called the Knowledge Life Cycle or Double-Loop Learning.