Failure Is Not An Option

Before reading this post, I encourage you to have a go at this quick puzzle to test your problem solving.

Go on, you won’t regret it!

How did you do?

I’ve used this challenge numerous times in various talks and workshops, and I find that the majority of people follow the same pattern: they only test their hypotheses by guessing sequences which they think will succeed in following the rule, even when I’ve primed them beforehand by talking about the need for failure. It’s only after repeated guesses that someone eventually proposes a sequence which does fail, and the “oohs” and “aahs” noticeably indicate learning.

This puzzle is a nice example of how we learn more when we fail, because failure generates new information. Information Theory suggests that maximum information is generated when the probability of failure is 50%. If we never fail then we must already know everything, and if we fail all the time then we must be repeating the same mistakes over and over again.
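As a rough sketch of why 50% is the sweet spot (this is just the standard Shannon entropy of a binary outcome, not anything specific to this post): an experiment which fails with probability p yields on average

H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)

bits of information, which peaks at 1 bit when p = 0.5 and falls to zero as p approaches 0 or 1, where the outcome is already certain and nothing new is learnt.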

Of course, that doesn’t necessarily mean we always want to fail 50% of the time. We wouldn’t want any plane we fly in to have a 50% probability of crashing, but then we’re not flying in order to generate new information and learn. Context is important. However, when we are developing new products we do want to learn, so a 50% failure rate is more appropriate. Failure is not an option, it’s a necessity!

That’s easier said than done though, so here are some things I have learnt which may help.

Run Experiments

Firstly, to be open to failure we need to consider the assumptions we have made in deciding what to do. We should treat those assumptions as hypotheses and come up with ways to test them. A simple template which can be useful as training wheels is this one:

  • We believe that <solution>
  • Will result in <outcome>
  • We will know we have succeeded when <evidence>
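For example, a purely hypothetical hypothesis (mine, not one from the original talks or workshops) might read:

  • We believe that adding a one-click checkout
  • Will result in fewer abandoned baskets
  • We will know we have succeeded when basket abandonment drops by at least 10% within a month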

Having said that, the puzzle this post opened with showed how we need our experiments to fail as well as succeed, so we also need to intentionally try to falsify our hypotheses. As Karl Popper wrote in The Poverty of Historicism:

The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations. For if we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain . . . overwhelming evidence in favour of a theory which, if approached critically, would have been refuted.

Thus the evidence we look for should show not only that we have proved our hypotheses, but also that we have failed to disprove them. This means moving from a fail-safe approach, where we assume a low probability of failure, to a safe-to-fail approach, where we can quickly and cheaply recover from failure without putting lives, careers or reputations at risk.

Illuminate Feedback

We need to run experiments because we are generally dealing with complex problems, where cause and effect are not repeatable and we can only explain results with retrospective coherence. We cannot rely on experts to prescribe good practice to follow; instead we need practice to emerge from experimentation.

One consequence is that experts disagree, a tell-tale sign of a complex problem. Recognising a problem as complex, with no knowable solution, is the first step in breaking away from arguing over who is right or wrong. Instead we can use Chris Argyris’ Ladder of Inference to explore why we have differing opinions, and how we might reconcile views and achieve mutual learning to allow solutions to emerge.

The ladder has the following rungs, from top to bottom:

  • Take Actions
  • Adopt Beliefs
  • Draw Conclusions
  • Make Assumptions
  • Interpret Meaning
  • Select Reality
  • Observe Reality

We naturally tend to climb to the top of the ladder, quickly taking action without considering the beliefs, conclusions, assumptions, meaning and reality that have led to that action. By climbing down the ladder we can understand both why we take certain actions and why others take different ones: the difference is probably due to different beliefs, conclusions, assumptions, interpretation or selection of data.

This is known as Assertive Inquiry, where we advocate for our view while at the same time seeking to understand an alternative view. Discovering in this way why we might recommend different actions generates understanding, which can lead to potential experiments to test the various views.

We can think of this as shining a light on a problem. If we try to solve problems in the dark, we don’t see the information and feedback from which we can learn. I like the analogy of practising a golf swing in the dark: we won’t see where the golf ball ends up, so we can’t adjust accordingly.

This is similar to the Streetlight Effect: searching for something only where it is easiest to look. It’s like the “joke” about the drunk who has lost his car keys and is looking for them under the lamppost. When asked why he is looking there rather than where he dropped them, his response is that he won’t find his keys where there is no light.

Expect the Unexpected

Running experiments and searching for feedback in this way means intentionally widening the scanning range so that we are more likely to pick up on information that we might naturally ignore due to our inherent biases.

First we need to overcome the God Complex: an unshakable belief characterised by consistently inflated feelings of personal ability, privilege, or infallibility. In my last post, In the Lap of the Agile Gods, I talked about how this can lead us to not even acknowledge the need to run experiments or search for feedback in the first place. It’s why disagreement can be healthy, and why we need to create safe environments where people can openly challenge our thinking.

Then there is Cognitive Dissonance: the inner tension we feel when our beliefs are challenged by evidence. Whenever I walk down a stationary escalator, something seems wrong. My brain is expecting movement but my body doesn’t experience any. It really should be just like walking down stairs, but it doesn’t feel that way. My belief is that the escalator is moving, but the evidence is that it is not, and the natural inclination is to trust the belief and ignore the evidence. Thus we dismiss any contrary feedback we receive as being wrong.

Related to this is Confirmation Bias: the tendency to favour information that confirms one’s pre-existing beliefs or hypotheses. This is what generally leads us to only try to prove our hypotheses, but it also means that we only notice feedback which does prove them. Again, any contrary feedback we receive is ignored.

The situation is further complicated by Survivorship Bias: concentrating on the people or things that made it past some selection process and overlooking those that did not. A great example of this is the story of Abraham Wald, a statistician during World War 2. He was tasked with prioritising where to reinforce planes with better armour to increase survival rates, given that the planes’ weight limited the amount of armour possible. Available data from surviving planes showed consistent patterns of damage, and the common theory was to reinforce those most damaged areas.

However, Wald’s insight was that this damage was from planes which had returned, and which therefore could survive damage in those areas. It was likely that the planes which did not return had been hit in the undamaged areas, and this is where any reinforcement should go. Thus we need to pay attention not just to the information we can see from our experiments, but also to the information we don’t see from failed experiments.

And then there is Availability Bias: relying on immediate examples that come to mind when evaluating a specific topic, concept, method or decision. An example is whenever someone mentions a new model of car to us, and suddenly we see that model everywhere. It’s not that it’s suddenly appearing more often; it’s just that our brain notices it more often because it’s more recently available for recall. Our preferred hypotheses will be more available, so we are more likely to notice feedback which relates to them.

So when considering a hypothesis, it’s easy to notice information which is more immediately available, which survives our experiments, and which confirms the opinions formed from our belief that we are experts. And this is just a handful of the huge list of cognitive biases on Wikipedia!

Summary

There’s a nice acronym which suggests that a FAIL is a First Attempt In Learning. I use that to highlight that failure is not something that we should shy away from and treat as an enemy, but something we should embrace and befriend. That doesn’t mean encouraging and celebrating failure though. Too much failure is as bad as not enough.

What’s needed is Critical Thinking: the intellectually disciplined process of actively and skilfully conceptualising, applying, analysing, synthesising, and evaluating information to reach an answer or conclusion. The Backbriefing & Experiment A3s I use are intended to encourage this by helping focus on the three areas in this post: running experiments, illuminating feedback, and expecting the unexpected.

To close, I would recommend Black Box Thinking by Matthew Syed to explore these ideas in more depth. The metaphor comes from the aviation industry, which Syed contrasts in particular with the healthcare industry. He cites a 2013 study published in the Journal of Patient Safety which put the number of premature deaths associated with preventable harm at more than 400,000 per year. That is the equivalent of two 747 airliners crashing every day. His compelling argument is that the aviation industry, where you really don’t want failures, is actually extremely safe because of the way it uses Black Boxes to conscientiously learn from any failures when they do occur. The healthcare industry, on the other hand, has a history of brushing failures under the carpet as inevitable and just the nature of the job.

We would do well to take note, and be more like the aviation industry, applying the same attitude and discipline so that we can befriend and learn from failure.

In the Lap of the Agile Gods


I’ve noticed a lot of conversation recently (mostly on Twitter) debating how prescriptive, or not, we should be when helping teams through an Agile Transformation (i.e. helping them use Agile approaches to become more Agile). Do we tell teams exactly what to do initially, or allow them complete freedom to figure it out for themselves? Or something else?

Worshipping False Gods

During World War 2, the inhabitants of Melanesian islands in the Southwestern Pacific Ocean witnessed the Japanese and Allied forces using their homelands as military bases for troops, equipment and supplies. The troops would often share supplies with the native islanders in return for support or assistance. After the war ended and the troops left, the islanders were observed making copies of some items and mimicking certain behaviours they had witnessed, for example recreating mock airfields and airplanes. Not understanding the technologies which had brought the cargo, these rituals were an attempt to attract more of it from the spiritual gods they believed had originally granted it.

This practice became known as a Cargo Cult, and the term has taken on a metaphorical use to describe the copying of conditions which are irrelevant or insufficient in order to reproduce results, without understanding the actual context. Thus we get Cargo Cult Agile, where the methods, practices and tools of successful teams and organisations are copied regardless of the original context. What has become known as the Spotify Method is a prime example, as are other attempts to reproduce the approaches used by successful organisations such as Amazon, Apple or Netflix.

Trying to be Gods

In the 11th Century, legend has it that the then King of England, Canute the Great, went to the seashore, sat on his throne, and commanded the tide not to come in. Unsurprisingly his orders had no effect and the tide continued to advance. His intent with this piece of theatre was to show that even Kings do not have the power of Gods. He was not, as is often thought, delusional enough to think that he had such power.

Believing that we do have such power is known as the God Complex: people are convinced that they have the knowledge, skills, experience and power to design and predict solutions to the challenges they face, and they refuse to accept that they might be wrong. Thus we get the Agile God Complex, where Agile is imposed on teams and organisations by managers and consultants in the belief that it will solve all problems, if only people would do it right!

Daniel Mezick has also been referring to something similar as the Agile Industrial Complex. And for the Cynefin crowd, this is of course treating a complex problem as if it were complicated. And while the use of expertise is valid for complicated problems, it still shouldn’t be confused with having god-like power!

Leaving it in the Lap of the Gods

This leaves a potential quandary. If we shouldn’t worship false gods, and we shouldn’t try to be gods, what should we do? Leave it in the lap of the gods? In other words, should we just leave it to chance and hope people will figure things out for themselves?

This is where Strategy Deployment comes into play, allowing solutions to emerge from the people closest to the problem. We don’t need to leave it in the lap of the gods, because we can provide clarity of intent and strategic guidance which informs the co-creation of experiments. Thus we can deliberately discover what action we can take to TASTE success.

Put another way, we enable Outcome-Oriented Change. Mike Burrows has recently used this term to describe his Agendashift approach, and the following definition nicely sums up for me how we should help teams through an Agile Transformation:

By building agreement on outcomes we facilitate rapid, experiment-based evolution of process, practice, and organisation. Agendashift avoids the self-defeating prescription of Lean and Agile techniques in isolation; instead we help you keep your vision and strategy aligned with a culture of co-creation and continuous transformation.