Before reading this post, I encourage you to have a go at this quick puzzle to test your problem solving.
How did you do?
I’ve used this challenge numerous times in talks and workshops, and I find that the majority of people follow the same pattern: they only test their hypotheses by guessing sequences which they think will succeed in following the rule, even when I’ve primed them beforehand by talking about the need for failure. It’s only after repeated guesses that someone eventually proposes a sequence which does fail, and the “oohs” and “aahs” noticeably indicate learning.
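To see why confirmation-only testing generates so little information, here is a minimal sketch in Python. It assumes the classic Wason 2-4-6 version of the puzzle, in which the hidden rule is simply “any increasing sequence”; the rule and the hypothesis below are illustrative assumptions, not necessarily those of the puzzle linked above.

```python
# Illustrative sketch only: assumes the classic Wason 2-4-6 rule
# ("the numbers must be strictly increasing"), which may differ
# from the puzzle linked above.

def hidden_rule(seq):
    """The experimenter's secret rule: each number is larger than the last."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def my_hypothesis(seq):
    """A typical first guess: each number is double the previous one."""
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

# Confirming tests: sequences my hypothesis predicts will pass. They all do,
# so they cannot distinguish my hypothesis from the real rule.
for seq in [(2, 4, 8), (3, 6, 12), (5, 10, 20)]:
    print(seq, hidden_rule(seq))      # True, True, True -> little new information

# A falsifying test: a sequence my hypothesis predicts will fail.
print((1, 2, 3), hidden_rule((1, 2, 3)))  # True -> my hypothesis is refuted
```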
This problem is a nice example to show that we learn more when we fail because we generate new information. Information Theory suggests maximum information is generated when the probability of failure is 50%. If we never fail then we must know everything already, and if we fail all the time then we must be repeating the same mistakes over and over again.
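To make that Information Theory claim concrete, the expected information from an experiment that fails with probability p is the binary entropy, −p·log₂(p) − (1−p)·log₂(1−p), which peaks at exactly one bit when p = 0.5. Here is a minimal sketch:

```python
import math

def binary_entropy(p):
    """Expected information, in bits, from an experiment that fails with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no new information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    print(f"p(failure) = {p:.1f}  ->  {binary_entropy(p):.2f} bits")
# The output peaks at 1.00 bits when p = 0.5 and falls to 0 at p = 0 or p = 1.
```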
Of course, that doesn’t necessarily mean we always want to fail 50% of the time. We wouldn’t want any plane we fly in to have a 50% probability of crashing, but then we’re not really trying to generate new information and learning when we fly. Context is important. However, when we are developing new products, we do want to learn, so a 50% failure rate is more appropriate. Failure is not an option, it’s a necessity!
That’s easier said than done though, so here are some things I have learnt which may help.
Run Experiments
Firstly, to be open to failure we need to consider the assumptions we have made in deciding what to do. We should treat those assumptions as hypotheses and come up with ways to test them. A simple template which can be useful as training wheels is this one:
- We believe that <solution>
- Will result in <outcome>
- We will know we have succeeded when <evidence>
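To make the template concrete, here is a small, purely hypothetical sketch of a hypothesis captured this way; the example product change and success threshold are invented for illustration, not drawn from a real experiment.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    solution: str   # what we believe will work
    outcome: str    # the result we expect it to produce
    evidence: str   # how we will know whether we have succeeded

# A hypothetical example, filled in using the template above.
experiment = Hypothesis(
    solution="simplifying the checkout flow to a single page",
    outcome="fewer abandoned purchases",
    evidence="cart abandonment drops by at least 10% within four weeks",
)

print(f"We believe that {experiment.solution}")
print(f"will result in {experiment.outcome}.")
print(f"We will know we have succeeded when {experiment.evidence}.")
```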
Having said that, the puzzle this post opened with showed how we need our experiments to fail as well as succeed, so we also need to intentionally falsify our hypotheses. As Karl Popper wrote in The Poverty of Historicism:
The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations. For if we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain . . . overwhelming evidence in favour of a theory which, if approached critically, would have been refuted.
Thus the evidence we look for should show not only that we have proved our hypotheses, but also that we have failed to disprove them. This means moving from a fail-safe approach, where we assume a low probability of failure, to a safe-to-fail approach, where we can quickly and cheaply recover from failure without putting lives, careers or reputations at risk.
Illuminate Feedback
We need to run experiments because we are generally dealing with complex problems, where cause and effect are not repeatable and we can only explain results with retrospective coherence. We cannot rely on experts to prescribe good practice to follow and need to rely on emergent practice as a result of experimentation.
The consequence is that experts disagree, a tell-tale sign of a complex problem. Recognising a problem as complex, with no knowable solution, is the first step in breaking away from arguing over who is right or wrong. Instead we can use Chris Argyris’ Ladder of Inference to explore why we have differing opinions and how we might reconcile views and achieve mutual learning to allow solutions to emerge.
The ladder has the following rungs:
- Take Actions
- Adopt Beliefs
- Draw Conclusions
- Make Assumptions
- Interpret Meaning
- Select Reality
- Observe Reality
We naturally tend to climb to the top of the ladder, quickly taking action without considering the beliefs, conclusions, assumptions, meaning and reality that have led to the action. By climbing down the ladder we can understand both why we take certain actions and why others take different actions: the differences are probably due to different beliefs, conclusions, assumptions, interpretations or selections of data.
This is known as Assertive Inquiry: advocating for our own view while at the same time seeking to understand an alternative view. Discovering in this way why we might recommend different actions generates understanding, which can lead to potential experiments to test the various views.
We can think of this as shining a light on a problem. If we try to solve problems in the dark, we don’t see the information and feedback from which we can learn. I like the analogy of practising a golf swing in the dark: we can’t see where the golf ball ends up, so we can’t adjust accordingly.
This is similar to the Streetlight Effect; searching for something by looking only where it is easiest to look. It’s like the “joke” about the drunk who has lost his car keys and is looking for them under the lamppost. When asked why he is looking there, his response is that he won’t be able to find his keys where there is no light.
Expect the Unexpected
Running experiments and searching for feedback in this way means intentionally widening the scanning range so that we are more likely to pick up on information that we might naturally ignore due to our inherent biases.
First we need to overcome the God Complex; an unshakable belief characterised by consistently inflated feelings of personal ability, privilege, or infallibility. In my last post, In the Lap of the Gods, I talked about how this can lead us to not even acknowledge the need to run experiments or search for feedback in the first place. It’s why disagreement can be healthy and why we need to create safe environments where people can openly challenge our thinking.
Then there is Cognitive Dissonance; the inner tension we feel when our beliefs are challenged by evidence. Whenever I walk down a stationary escalator, something seems wrong. My brain is expecting movement but my body doesn’t experience any. It really should be just like walking down stairs, but it doesn’t feel that way. My belief is that the escalator is moving, but the evidence is that it is not, and our natural inclination is to trust our beliefs and ignore the evidence. Thus we dismiss any contrary feedback we receive as being wrong.
Related to this is Confirmation Bias; the tendency to favour information that confirms one’s pre-existing beliefs or hypotheses. This is what generally leads us to try only to prove our hypotheses, and it also means that we only notice the feedback which does prove them. Again, any contrary feedback we receive is ignored.
The situation is further complicated by Survivorship Bias; concentrating on the people or things that made it past some selection process and overlooking those that did not. A great example of this is the story of Abraham Wald, a statistician during World War 2. He was tasked with prioritising where to reinforce planes with better armour to increase survival rates, given that the planes’ weight limited the amount of armour possible. Available data from surviving planes showed consistent patterns of damage concentrated in particular areas, and the common theory was to reinforce those most damaged areas.
However, Wald’s insight was that this damage was from planes which had returned, and therefore could survive damage in those areas. As a result, it was likely that planes which did not survive would probably have been hit in the undamaged areas, and this is where any reinforcement should go. Thus we need to pay attention not just to the information that we can see from our experiments, but also consider any information that we don’t see from failed experiments.
And then there is Availability Bias; relying on immediate examples that come to mind when evaluating a specific topic, concept, method or decision. An example is when someone mentions a new model of car to us and suddenly we see that model everywhere. It’s not that it is suddenly appearing more often; it’s just that our brain notices it more often because it is more recently available for recall. Our preferred hypotheses will be more available, so we are more likely to notice feedback which relates to them.
So when considering a hypothesis, it’s easy to notice only the information which is most immediately available, which survives our experiments, and which confirms the opinions formed from our belief that we are experts. And these are just a handful of the huge list of cognitive biases on Wikipedia!
Summary
There’s a nice acronym which suggests that a FAIL is a First Attempt In Learning. I use that to highlight that failure is not something that we should shy away from and treat as an enemy, but something we should embrace and befriend. That doesn’t mean encouraging and celebrating failure though. Too much failure is as bad as not enough.
What’s needed is Critical Thinking; the intellectually disciplined process of actively and skilfully conceptualising, applying, analysing, synthesising, and evaluating information to reach an answer or conclusion. The Backbriefing & Experiment A3s I use are intended to encourage this by helping focus on the three areas in this post – running experiments, illuminating feedback, and expecting the unexpected.
To close, I would recommend Black Box Thinking by Matthew Syed to explore these ideas in more depth. The metaphor comes from the aviation industry, which Syed contrasts in particular with the healthcare industry. Syed cites a 2013 study published in the Journal of Patient Safety which put the number of premature deaths associated with preventable harm at more than 400,000 per year. That is the equivalent of two 747 airliners crashing every day. His compelling argument is that the aviation industry, where you really don’t want failures, is actually extremely safe because of the way it uses Black Boxes to conscientiously learn from any failures when they do occur. The healthcare industry, on the other hand, has a history of brushing failures under the carpet as inevitable and just the nature of the job.
We would do well to take note, and be more like the aviation industry, applying the same attitude and discipline so that we can befriend and learn from failure.