Good Agile/Bad Agile: The Difference and Why It Matters

This post is an unapologetic riff on Richard Rumelt’s book Good Strategy/Bad Strategy: The Difference and Why It Matters. The book is a wonderful analysis of what makes a good strategy and how successful organisations use strategy effectively. I found that it reinforced my notion that Agility is a Strategy, so this post is also a way of organising my thoughts on that from the book.

Good and Bad Agile

Rumelt describes Bad Strategy as having four major hallmarks:

  • Fluff – meaningless words or platitudes.
  • Failure to face the challenge – not knowing what the problem or opportunity being faced is.
  • Mistaking goals for strategy – simply stating ambitions or wishful thinking.
  • Bad strategy objectives – big buckets which provide no focus and can be used to justify anything (otherwise known as “strategic horoscopes”).

These hallmarks can also describe Bad Agile. For example, when Agile is just used for the sake of it (Agile is the fluff). Or when Agile is just used to do “the wrong thing righter” (failing to face the challenge). Or when Agile is just used to “improve performance” (mistaking goals for strategy). Or when Agile is just part of a variety of initiatives (bad strategy objectives).

Rumelt goes on to describe a Good Strategy as having a kernel with three elements:

  • Diagnosis – understanding the critical challenge or opportunity being faced.
  • Guiding policy – the approach to addressing the challenge or opportunity.
  • Coherent actions – the work to implement the guiding policy.

Again, I believe this kernel can help identify Good Agile. When Agile works well, it should be easy to answer the following questions:

  • What diagnosis is Agile addressing for you? What is the critical challenge or opportunity you are facing?
  • What guiding policy does Agile relate to? How does it help you decide what you should or shouldn’t do?
  • What coherent actions are you taking that are Agile? How are they coordinated to support the strategy?

Sources of Power

Rumelt suggests that

“a good strategy works by harnessing power and applying it where it will have the greatest effect”.

He goes on to describe nine such sources of power (while noting the list is not limited to these nine), and it’s worth considering how Agile can enable each of them.

  • Leverage – anticipating what is most pivotal and concentrating effort there. Good Agile will focus on identifying and implementing the smallest changes (e.g. MVPs) which will result in the largest gains.
  • Proximate objectives – something close enough to be achievable. Good Agile will help identify clear, small, incremental and iterative releases which can be easily delivered by the organisation.
  • Chain-link systems – systems where performance is limited by the weakest link. Good Agile will address the constraint in the organisation. Understanding chain-link systems is effectively the same as applying Theory of Constraints.
  • Design – how all the elements of an organisation and its strategy fit together and are co-ordinated to support each other. Good Agile will be part of a larger design, or value stream, and not simply a local team optimisation. Using design is effectively the same as applying Systems Thinking.
  • Focus – concentrating effort on achieving a breakthrough for a single goal. Good Agile limits work in process in order to help concentrate effort on that single goal to create the breakthrough.
  • Growth – the outcome of growing demand for special capabilities, superior products and skills. Good Agile helps build both the people and products which will result in growth.
  • Advantage – the unique differences and asymmetries which can be exploited to increase value. Good Agile helps exploit, protect or increase demand to gain a competitive advantage. In fact Good Agile can itself be an advantage.
  • Dynamics – anticipating and riding a wave of change. Good Agile helps explore new and different changes and opportunities, and then exploits them.
  • Inertia and Entropy – the resistance to change, and decline into disorder. Good Agile helps organisations overcome their own inertia and entropy, and take advantage of competitors’ inertia and entropy. In effect, having less inertia and entropy than your competition means having a tighter OODA loop.

In general, we can say that Good Agile “works by harnessing power and applying it where it will have the greatest effect”, and it should be possible to answer the following question:

  • What sources of power is your strategy harnessing, and how does Agile help apply them?

Thinking like an Agilist

Rumelt concludes with some thoughts on creating strategy, and what he suggests is

“the most useful shift in viewpoint: thinking about your own thinking”.

He describes this shift from the following perspectives:

  • The Science of Strategy – strategy as a scientific hypothesis rather than a predictable plan.
  • Using Your Head – expanding the scanning horizon for ideas rather than settling on the first idea.
  • Keeping Your Head – using independent judgement to decide the best approach rather than following the crowd.

This is where I see a connection between Good Strategy and Strategy Deployment, which is an approach to testing hypotheses (the science of strategy), deliberately exploring multiple options (using your head), and discovering an appropriate, contextual solution (keeping your head).

In summary, Good Agile is deployed strategically by being part of a kernel, with a diagnosis of the critical problem or opportunity being faced, guiding policy which harnesses a source of power, and coherent actions that are evolved through experimenting as opposed to being instantiated by copying.

A3 Templates for Backbriefing and Experimenting

I’ve been meaning to share a couple of A3 templates that I’ve developed over the last year or so while I’ve been using Strategy Deployment. To paraphrase what I said when I described my thoughts on Kanban Thinking, we need to create more templates, rather than reduce everything down to “common sense” or “good practice”. In other words, the more A3s and Canvases there are, the more variety there is for people to choose from, and hopefully, the more people will think about why they choose one over another. Further, if people can’t find one that’s quite right, I encourage them to develop their own, and then share it so there is even more variety and choice!

Having said that, the value of A3s is always in the conversations and collaborations that take place while populating them. They should be co-created as part of a Catchball process, and not filled in and handed down as instructions.

Here are the two I am making available. Both are used in the context of the X-Matrix Deployment Model. Click on the images to download the pdfs.

Backbriefing A3


This one is heavily inspired by Stephen Bungay’s Art of Action. I use it to charter a team working on a tactical improvement initiative. The sections are:

  • Context – why the team has been brought together
  • Intent – what the team hopes to achieve
  • Higher Intent – how the team’s work helps the business achieve its goals
  • Team – who is, or needs to be, on the team
  • Boundaries – what the team are or are not allowed to do in their work
  • Plan – what the team are going to do to meet their intent, and the higher intent

The idea here is to ensure a tactical team has understood their mission and mission parameters before they move into action. The A3 helps ensure that the team remain aligned to the original strategy that has been deployed to them.

The Plan section naturally leads into the Experiment A3.

Experiment A3


This is a more typical A3, but with a bias towards testing the hypotheses that are part of Strategy Deployment. I use this to help tactical teams define the experiments for their improvement initiative. The sections are:

  • Context – the problem the experiment is trying to solve
  • Hypothesis – the premise behind the experiment
  • Rationale – the reasons why the experiment is coherent
  • Actions – the steps required to run the experiment
  • Results – the indicators of whether the experiment has worked or not
  • Follow-up – the next steps based on what was learned from the experiment

Note that experiments can (and should) attempt to both prove and disprove a hypothesis, to minimise the risk of confirmation bias, and they should be “safe to fail”.

Agendashift, Cynefin and the Butterfly Stamped


I’ve recently become an Agendashift partner and have enjoyed exploring how this inclusive, contextual, fulfilling, open approach fits with how I use Strategy Deployment.

Specifically, I find that the Agendashift values-based assessment can be a form of diagnosis of a team or organisation’s critical challenges, in order to agree guiding policy for change and focus coherent action. I use those italicised terms deliberately, as they come from Richard Rumelt’s book Good Strategy/Bad Strategy, in which he defines a good strategy kernel as containing those key elements. I love this definition as it maps beautifully onto how I understand Strategy Deployment, and I intend to blog more about this soon.

In an early conversation with Mike when I was first experimenting with the assessment, we were exploring how Cynefin relates to the approach, and in particular the fact that not everything needs to be an experiment. This led to the idea of using the Agendashift assessment prompts as part of a Cynefin contextualisation exercise, which in turn led to the session we ran together at Lean Agile Scotland this year (also including elements of Clean Language).

My original thought had been to try something even more basic though, using the assessment prompts directly in a method that Dave Snowden calls “and the butterfly stamped”, and I got the chance to give that a go last week at Agile Northants.

The exercise – sometimes called simply Butterfly Stamping – is essentially a Four Points Contextualisation in which the items being contextualised are provided by the facilitator rather than generated by the participants. In this case those items were the prompts used in the Agendashift mini assessment, which you can see by completing the 2016 Agendashift global survey.

This meant that as well as learning about Cynefin and Sensemaking, participants were able to have rich conversations about their contexts and how well they were working, without getting stuck on what they were doing and what tools, techniques and practices they were using. Feedback was very positive, and you can see some of the output in this tweet.

I hope we can turn this into something that can be easily shared and reused. Let me know if you’re interested in running it at your event. And watch this space!