All Life Is Problem Solving

Joe Firestone’s Blog on Knowledge and Knowledge Management

Some Comments on Safe-Fail Experiments

May 30th, 2008 · 9 Comments

This post is about “safe-fail experiments.” The essential idea in safe-fail experiments was expressed well by Dave Snowden in this way: “I can afford them to fail and critically, I plan them so that through that failure I learn more about the terrain through which I wish to travel.”

And again, in another place, he adds:

“One of the main (if not the main) strategies for dealing with a complex system is to create a range of safe-fail experiments or probes that will allow the nature of emergent possibilities to become more visible.”

I like this emphasis on safe-fail experiments: first, because of their low-risk character; and second, because the emphasis is on our “learning from failure,” or, put another way, on our “learning from error.” Learning from error is what Critical Rationalism is all about.

In outlining the way one should use safe-fail experiments, Dave offers the following:

“– Before opinions harden you create a very simple decision rule. Everyone with an idea that has even the remotest possibility of being true or useful creates a safe fail experiment based on the idea. Critically this does not have to be one that would prove the issue, just consistent with the position adopted.

– Next each proposal is fleshed out, costed and subject to challenge and review, but nothing is ruled out unless rationing of resource is required. This is rarely the case by the way as you keep the experiments small, designed for fast feedback/evolution.

– For each experiment to be valid its outcome must be observable, not to measure necessarily but to allow the simple rule of amplification or dampening of good or bad patterns to be put into operation. There is no point in an experiment where you can not observe what is happening.

– The experiments are then reviewed for common elements and resourced along with set up of monitoring and review processes.”
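Read as a procedure, the outline above is a small loop: turn every idea into a probe, ration only when resources force it, require an observable outcome, and then amplify or dampen what is observed. The sketch below is my own illustration of that reading, not Cynefin terminology; the `Probe` structure and its fields are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    """One safe-fail experiment tied to a single idea."""
    idea: str
    observable: str  # what we will watch -- observed, not necessarily measured
    cost: float


def plan_probes(ideas, budget=None):
    """Every idea with any chance of being true or useful gets a probe;
    ration only if a budget forces it (the exception, not the rule)."""
    probes = [Probe(idea=i, observable=f"effect of '{i}'", cost=1.0) for i in ideas]
    if budget is not None:
        probes = probes[: int(budget)]
    return probes


def review(probe, observed_good):
    """The simple rule: amplify patterns that look good, dampen those that don't."""
    return "amplify" if observed_good else "dampen"
```

The point of the sketch is only that the decision rule is mechanical once outcomes are observable; everything substantive lives in choosing the ideas and the observables.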

I find some of the wording used in this outline of great interest. First, Dave views ideas as “true” or “useful.” While it’s not clear what “true” or “useful” mean in this context, it is clear that the kind of ideas Dave is talking about can be false. Since the experiments in question don’t have to prove an idea, but do have to be consistent with it, and also have to provide us with something we can learn, doesn’t it follow that what we can learn from the experiments is whether or not the knowledge claims or ideas underlying them are false? And, in turn, doesn’t that mean that safe-fail experiments are about testing the ideas underlying them?

But if this is true, then why are these experiments viewed by Dave as mere “probes”? Why shouldn’t they be viewed as tests of the new ideas (conjectures) of their formulators? Moreover, in requiring that experimental outcomes be observable, isn’t Dave completing a pattern of activity that Popper specified for science? Specifically, isn’t he recommending that participants in Cynefin applications develop conjectures, to be refuted, if possible, by safe-fail experiments yielding observational outcomes, as his method for learning what direction to take in acting in complex (and sometimes also in chaotic) domains?

Exchange on Safe-Fail Experiments

Raymond Salzwedel at the narrative lab has also offered some ideas on safe-fail experiments in the form of nine principles, and Dave Snowden has commented on these principles. Below I’ll add my comments to the views of Raymond and Dave (sourced from Dave’s blog).

Raymond: “Don’t be afraid to experiment – some will fail.”

Dave: “I would go further than this and say that experiments should be designed with failure in mind. We often learn more from failure than success anyway, and this is research as well as intervention. We want to create an environment in which success is not privileged over failure in the early stages of dealing with complex issues.”

Joe: I strongly agree with this comment of Dave’s. This is one of the central emphases of Critical Rationalism. We ought to try our best to design the experiment to test the idea underlying it as severely as possible. The game is about making it fail if we can. That way if it doesn’t fail, we’ll really be able to say that it may be true.

Raymond: “Every experiment will be different – don’t use the cookie-cutter approach when designing interventions.”

Dave: “Yes and no. You might want the same experiment run by different people or in different areas. Cookie-cutter approaches tend to be larger scale than true safe-fail probes so this may or may not be appropriate.”

Joe: There’s value in repeating safe-fail experiments to ensure that the results weren’t a fluke or due to accident. However, there’s a problem here. One can’t claim that complex contexts are different from others in the sense that their outcomes are never repeated, and also claim that one can repeat the same experiment at a later time and still get the same result. So I think that one of these two views will have to give way, and if we think that safe-fail experiments are possible, useful, and repeatable, then we will have to grant that, at least in the respects essential to the safe-fail experiments, the complex context is unchanged between the time we do the first experiment and its replication.

Raymond: “Don’t learn the same lesson twice – or maybe I should say, don’t make the same mistake twice.”

Dave: “Disagree, you can never be sure of the influence of context. Often an experiment which failed in the past may now succeed. Your competitor may well learn how to make your failures work. Obviously you don’t want to be stupid here, but many a good initiative has been destroyed by the ‘we have tried that’ argument.”

Joe: I couldn’t agree more with Dave’s sentiment here, and with the view that the “we have tried that” argument is often way off-base; I’ll add that it is frequently superficial and motivated by political considerations of one kind or another. However, even though I agree with Dave that one can never be sure of the influence of context, I also agree with Raymond’s admonition about not making the same mistake twice, since I think his admonition assumes that all essential conditions, including context, are the same in both cases. But finally, having said the above, I’m also acutely aware of another possible problem here.

What if the relationships in complex domains between experimental treatments and outcomes are relationships of propensity expressible as probabilities? In that case, we couldn’t conclude that an idea was refuted based on the results of one or a few safe-fail experiments. If this were the case, we’d have to make the same mistake a number of times before we were sure it was a mistake.
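A little arithmetic makes this concrete. Suppose, purely for illustration, that a sound idea produces the desired outcome with some probability p on each independent trial; then the chance of seeing k failures in a row even though the idea is sound is (1 − p)^k, and we can ask how long a run of failures we would need before treating the idea as refuted at some threshold. The numbers below are invented for the example.

```python
def prob_k_failures(p_success, k):
    """Probability of k consecutive failures when each trial
    independently succeeds with probability p_success."""
    return (1.0 - p_success) ** k


def trials_to_refute(p_success, threshold=0.05):
    """Smallest run of consecutive failures needed before the
    'idea is sound' hypothesis drops below the threshold."""
    k = 1
    while prob_k_failures(p_success, k) >= threshold:
        k += 1
    return k
```

For instance, if a sound idea succeeds 70% of the time, two straight failures still happen about 9% of the time by chance alone, so under a 5% threshold we would need three consecutive failures before calling it a mistake.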

Raymond: “Start with a low-risk area when you begin to experiment with a system.”

Dave: “Again yes and no. If you are talking about the whole system, yes; but normally complex issues are immediate and failure is high risk. The experiment is low risk (the nature of safe-fail is such that you can afford to fail), but the problem area may well be high risk. In my experience complexity-based strategies work much better in these situations.”

Joe: I agree with Dave on this point.

Raymond: “Design an experiment that can be measured. That is, know what the success and failure indicators of each experiment are.”

Dave: Change measure to monitor and I can agree with it. The second sentence I would delete

Joe: I agree with Dave about changing “measure” to “monitor.” However, I’m not sure about the second sentence. That is, it seems to me that the experimental design should be quite clear about the observational outcomes that would constitute a refutation of the idea underlying a safe-fail experiment.

Raymond: “Try doing multiple experiments on the same system – even at the same time. Some will work, some will fail – good. Shut down the ones that fail and create variations on the ones that work.”

Raymond: “Introduce dissent. Maximize diversity in the experiment design process by getting as many inputs as possible.”

Dave: “In the main I agree, but see the above process. I generally don’t like the ‘failure’ and ‘success’ words, as they seem inappropriate to probes.”

Joe: I agree with Dave’s remark about the “failure” and “success” words, provided these words aren’t tied specifically to the idea of failure or survival of the underlying ideas. But if they are, then I have no problem with the words. Going further, however, I think there is a problem with parallel safe-fail experiments, because it may be difficult to associate the results of a particular experiment with that experiment, rather than with some portion of, or the entirety of, the configuration of ongoing experiments. Of course, depending on the details, there may be no possibility of confounding the effects of one experiment with those of another. But if the interaction clusters within the complex context are highly interdependent, then simultaneous experiments on two or more of them will confound one another’s effects, and the results of each individual experiment won’t be attributable to that experiment.
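The confounding worry can be made concrete with a deliberately simplified linear picture: when two probes act on interdependent clusters, the observed change mixes both main effects and their interaction, and the joint observation alone cannot separate them. The effect sizes below are invented solely for illustration.

```python
def combined_outcome(effect_a, effect_b, interaction):
    """Observed change when probes A and B act on interdependent
    clusters: two main effects plus an interaction term."""
    return effect_a + effect_b + interaction


# Invented effect sizes, for illustration only.
a_alone = combined_outcome(2.0, 0.0, 0.0)    # probe A run by itself
b_alone = combined_outcome(0.0, 1.0, 0.0)    # probe B run by itself
together = combined_outcome(2.0, 1.0, -1.5)  # run in parallel; interdependence
                                             # contributes an interaction of -1.5
```

Here the parallel run (1.5) is not the sum of the solo runs (3.0), so attributing the joint result to either probe alone misleads; only when the interaction term is negligible can parallel probes be read off independently.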

Raymond: Learn from the results of other people’s experiments.

Dave: “Yep, but remember your context is different.”

Joe: Agree

Raymond: Teach other people the results of your experiments.

Dave: “Yep, but remember your context is different.”

Joe: Of course. This is just knowledge integration.

Safe-Fail Experiments for Unknown Domains

Dave and Raymond emphasize safe-fail experiments as appropriate for use in complex contexts, or in certain chaotic contexts where there is time for them. However, it’s hard to see why safe-fail experiments can’t also be applied to “complicated” contexts, and to others I’ve identified in a previous blog post. The general principle here is that safe-fail experiments are a way of testing ideas and of acquiring new information that can help in developing new ideas, and both are needed in any context where a knowledge gap exists. Of course, in many complicated contexts, laboratory experiments may be used to test new ideas. But laboratory experiments are clearly a type of safe-fail experiment, in that we can afford to have them “fail.”

Conclusion

Finally, safe-fail experiments are viewed as “probes” or “acts” in Cynefin. Since everything we do with conscious intention is an “act” of ours, we can certainly characterize them this way. However, safe-fail experiments are not “acts” carried out to have a specific operational impact as part of organizational routine. Instead, they are tests of ideas formulated as knowledge claims. So, rather than being part of routine organizational decision making and learning, they are an aspect of the creative learning process, which I have elsewhere called the Knowledge Life Cycle or Double-Loop Learning.

Tags: Complexity · Epistemology/Ontology/Value Theory · Knowledge Making · Knowledge Management

9 responses so far

  • 1 snowded // May 30, 2008 at 9:55 pm

    Nice post Joe – not sure if you were testing whether you were really in the RSS feed! Ignoring the links back into the Knowledge Life Cycle, the one question you ask is about the appropriateness of safe-fail approaches to complicated contexts. I wouldn’t disagree with that; yes, they can be, especially if there is not time or resource to deploy expert knowledge or conduct a full explanation. My overall point is that in a complex situation it is about the only method that you can use.

  • 2 Joe // May 31, 2008 at 11:20 am

    Hi Dave, Thanks for your comment. I haven’t thought enough about it to be able to comment on whether safe-fail experiments are all we can use in complex situations. Off the top of my head, I think we might have possibilities of learning things from narrative data gathered at different times combined with text mining and analysis. However, safe-fail experiments are probably better anyway, because of their much tighter focus.

    On complicated contexts you seem to prefer expert knowledge to safe-fail experiments. But I think that in cases where experts need to develop new knowledge, they would be wise to rely on safe-fail experiments to check their gut level intuition, theories, simulations, etc. before deciding on the recommendations to be made to decision makers.

  • 3 kk aw // Jun 7, 2008 at 1:44 am

    We already have “proof of concept,” “pilot projects,” and various other exploratory activities that we can afford to have fail. The objective is to learn.

    My question is: how are safe-fail experiments different?

    Sounds like a fanciful name for something that has been practiced for ages.

  • 4 Joe // Jun 7, 2008 at 9:43 am

    Hi KK,

    Welcome to All Life Is Problem Solving.

    I’m not sure safe-fail experiments are different. Dave might say that they’re different in that their purpose is not to test hypotheses but to see how complex or chaotic systems respond. But it seems to me that such experiments are developed with expectations and that these are hypothetical in character.

  • 5 stephenb // Jun 8, 2008 at 10:06 am

    kk,

    I’d say that the essence of “safe-fail” is (a) trying multiple things in parallel; (b) not picking winners in advance and (c) always proceeding incrementally.

    For example, most pilot projects run in two phases. First, run a proof of concept. If that works, roll out to the whole organisation.

    Safe-fail breaks the process down much further. The idea is to always do things in chunks small enough to allow for roll-back if negative results occur. So an old-style “roll out” might instead happen as a dozen or more “safe fail” experiments, at each point confirming that results are positive before proceeding further down that path.

  • 6 Joe // Jun 8, 2008 at 11:58 am

    Welcome to All Life Is Problem Solving, Stephen.

    I agree with your comments about safe-fail experiments and their differences from pilot projects. Pilot projects do seem to be more about confirming an already developed solution than about really testing it. However, I’m still not sure that the idea of safe-fail experiments is very different from the idea behind scientific experiments in general. There are nuances, of course, such as the idea of parallel safe-fail experiments. However, safe-fail experiments seem to be more of an elaboration on the idea of an experiment than something that is qualitatively different.

  • 7 kk aw // Jun 9, 2008 at 12:25 pm

    Stephen

    So it is very similar to exploratory activities that I for one have been using for ages.

    Like you said, we break the process down into multiple small processes, test them to see what works and what doesn’t, and learn the key factors governing the behaviour of these processes. Finally we synthesize them into a comprehensive process. Then we still have to do a proof of concept and a pilot project before we are comfortable doing a major roll-out.

    Nothing new there!

  • 8 The Problem Solving Pattern Matters: Part Nine, Enhancing Developing Solutions: Evaluating and Selecting Among New Ideas // Feb 20, 2009 at 1:35 am

    [...] comparative evaluations. In addition, both RPD and CDM forms of selection can also benefit from “safe-fail experiments” that test solutions developed using RPD, or CDM. The two central characteristics of safe-fail [...]

  • 9 Systems Thinking and Design: A Case for Egypt? « Censemaking // Jan 30, 2011 at 1:35 pm

    [...] Evaluate the implementation of the prototype and incorporate the findings into successive models and then re-implement them in the form of new prototypes. This rapid-cycle prototyping on small scale experiments enable a safe-fail culture to form rather than aim for the impractical fail-safe models that almost never work in complex systems; [...]