Strategy Deployment and OKRs

This is another post in my series comparing Strategy Deployment with other approaches, with the intent of showing that there are many ways of approaching it and highlighting the common ground.

In May this year Dan North published a great post about applying OKRs – Objectives and Key Results. I’ve been aware of OKRs for some time, and we experimented with them a bit when I was at Rally, and Dan’s post prompted me to re-visit them from the perspective of Strategy Deployment.

To begin with…

Let's look at the two key elements of OKRs:

  • Objectives – these are what you would like to achieve. They are where you want to go. In Strategy Deployment terms, I would say they are the Tactics you will focus on.
  • Key Results – these are the numbers that will change when you have achieved an Objective. They are how you will know you are getting there. In Strategy Deployment terms, I would say they are the Evidence you will look for.

OKRs are generally defined quarterly at different levels, from the highest organisational OKRs, through department and team OKRs down to individual OKRs. Each level’s OKRs should reflect how they will achieve the next level up’s OKRs. They are not handed down by managers, however, but they are created collaboratively by challenging and negotiating, and they are transparent and visible to everyone. At the end of the quarter they are then scored at every level as a way of assessing, learning and steering.
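As a rough sketch (my own illustration, not from any OKR reference), this quarterly cascade could be modelled as a small linked structure, where each OKR points to the next level up's OKR it supports and is scored at the end of the quarter. All the names and numbers here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OKR:
    """One Objective with its measurable Key Results for a quarter."""
    objective: str
    key_results: dict  # key result description -> end-of-quarter score in [0.0, 1.0]
    parent: Optional["OKR"] = None      # the next level up's OKR this supports
    children: list = field(default_factory=list)

    def link(self, child: "OKR") -> None:
        """Cascade: a lower-level OKR reflects how it achieves this one."""
        child.parent = self
        self.children.append(child)

    def score(self) -> float:
        """End-of-quarter score: the mean of the key result scores."""
        return sum(self.key_results.values()) / len(self.key_results)

# A two-level cascade: a team OKR supporting an organisational OKR.
org = OKR("Grow the business", {"Revenue +10%": 0.7, "Churn -2%": 0.5})
team = OKR("Ship self-service onboarding",
           {"Signups without a sales call: 50%": 0.6})
org.link(team)
print(round(org.score(), 2))  # 0.6
```

The point of the sketch is the shape, not the arithmetic: the parent/child links carry alignment downwards, while each level owns and scores its own OKRs.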

As such, OKRs provide a mechanism for both alignment & autonomy, and I would say that the quarterly cascading of Objectives, measured by Key Results, could be considered to be the simplest form of Strategy Deployment and a very good way of boot-strapping a Strategy Deployment approach.

Having said that…

There are a few things about OKRs that I’m unsure about, and that I miss from the X-Matrix model.

It seems to me that while OKRs focus on a quarterly, shorter-term time horizon, the longer-term aspirations and strategies are only implied in the creation of top-level OKRs and the subsequent cascading process. If those aspirations and strategies are not explicit, is there a risk that the detailed, individual OKRs end up drifting away from the original intent?

This is amplified by the fact that OKRs naturally form a one-to-many hierarchical structure through decomposition, as opposed to the messy coherence of the X-Matrix. As the organisational OKRs cascade their way down to individual OKRs, there is also a chance that as they potentially drift away from the original intent, they also begin to conflict with each other. What is to stop one person’s key results being diametrically opposed to someone else’s?

Admittedly, the open and collaborative nature of the process may guard against this, and the cascading doesn’t have to be quite so linear, but it still feels like an exercise in local optimisation. If each individual meets their objectives, then each department and team will meet their objectives, and thus the organisation will meet its objectives. Systems Thinking suggests that rather than optimising the parts like this, we should look to optimise the whole.

In summary…

OKRs do seem like a simple form of organisational improvement in which solutions emerge from the people closest to the problem. I'm interested in learning more about how the risks I have highlighted might be mitigated. I can imagine how OKRs could be blended with an X-Matrix as a way of doing this, where Objectives map to shared Tactics and Key Results map to shared Evidence.

If you have any experience of OKRs, let me know your feedback in the comments.

What’s the Difference Between a Scrum Board and a Kanban Board?

During a recent kanban training course, this question came up and my simple answer seemed to be a surprise and an "aha" moment. I tweeted an abbreviated version of the question and answer, and got lots of interesting and valid responses. Few of the responses really addressed the essence of my point, however, so this post gives a more in-depth answer.

When the question came up, I began by drawing out a very generic “Scrum Board” as below:

Everyone agreed that this visualises the basic workflow of a Product Backlog Item, from being an option on the Product Backlog, to being planned into a Sprint and on the Sprint Backlog, to being worked on and In Progress, to being ready for Accepting by the Product Owner, and finally to being Done.

To convert this into a Kanban Board I simply added a number (OK, two numbers), representing a WIP limit!

And that’s it. While there are many other things that might be different on a Kanban Board, to do with the flow of the work or the nature of the work, the most basic difference is the addition of a WIP limit. Or, to be more pedantic, you might say an explicit WIP limit, and as was also pointed out (although I can’t find the tweet now), it’s actually creating a pull signal.

Thus, the simple answer to the question about the difference between a Scrum board and a Kanban board is that a Kanban Board has WIP Limits, and the more sophisticated answer is that a Kanban Board signals which work can and should be pulled to maintain flow.
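The difference can be reduced to a few lines of illustrative (hypothetical) code: a column with no explicit limit always accepts work, while a limited column signals when there is capacity to pull the next item.

```python
from typing import Optional

def can_pull(column_wip: int, wip_limit: Optional[int]) -> bool:
    """A pull signal: True when the column has capacity for another item.

    A column with no explicit limit (None) never blocks, as on a generic
    Scrum board; a Kanban-style column signals pull only while its current
    work in progress is under the WIP limit.
    """
    if wip_limit is None:  # no explicit WIP limit
        return True
    return column_wip < wip_limit

# "In Progress" limited to 3 items:
print(can_pull(2, 3))     # True  (capacity: pull the next item)
print(can_pull(3, 3))     # False (at the limit: finish something first)
print(can_pull(7, None))  # True  (no explicit limit)
```

Everything else on the board can stay the same; the single `wip_limit` value is what turns a status visualisation into a pull system.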

Of course, this isn’t to say that Scrum teams can’t have WIP limits, or pull work, or visualise their work in more creative ways. They can, and probably should! In fact the simplicity of just adding a WIP limit goes to show how compatible Scrum and Kanban can be, and how they can be stronger together.

Strategy Deployment and Playing to Win

“Playing to Win” is a book by A.G. Lafley and Roger L. Martin about “How Strategy Really Works”, and it describes a model, the Strategic Choice Cascade, developed by the authors at P&G.

This model leads to the following “five strategic questions that create and sustain lasting competitive advantage”:

  1. Have you defined winning, and are you crystal clear about your winning aspiration?
  2. Have you decided where you can play to win (and just as decisively where you will not play)?
  3. Have you determined how, specifically, you will win where you choose to play?
  4. Have you pinpointed and built your core capabilities in such a way that they enable your where-to-play and how-to-win choices?
  5. Do your management systems and key measures support your other four strategic choices?

While there isn’t a direct fit between these questions and my X-Matrix TASTE model, I believe there is enough of an overlap for the model and the questions to be useful.

Let’s look at them one by one:

Have you defined winning, and are you crystal clear about your winning aspiration?

Defining winning, and in particular winning aspirations, is the most obvious fit with the X-Matrix Aspirations. In fact it’s possible that my choice of the word aspiration was influenced by Playing to Win. I confess I started reading the book some time ago, but I can’t remember exactly how long!

Have you decided where you can play to win (and just as decisively where you will not play)?

Deciding where to play to win links primarily to the X-Matrix Strategies, especially with Strategy being about decisions and choices, and hence also deciding where not to play.

Have you determined how, specifically, you will win where you choose to play?

The specificity of determining how to win feels like a link to the X-Matrix Tactics, although I think there is still something strategic about “how” as opposed to “what”.

Have you pinpointed and built your core capabilities in such a way that they enable your where-to-play and how-to-win choices?

Building core capabilities can be considered to be X-Matrix Tactics, especially if we consider determining how to win to be more strategic. On the other hand, I also often describe the development of capabilities as providing X-Matrix Evidence of progress towards Aspirations.

Do your management systems and key measures support your other four strategic choices?

The key measures to support the other choices will also support the X-Matrix elements and correlations, and thus provide Evidence of progress towards Aspirations. Additionally, the management systems the book describes, emphasising assertive enquiry, closely resemble Catchball and the sort of collaboration I would expect Strategy Deployment to demonstrate.

My conclusion, therefore, is that the approach described in Playing to Win, with its Strategic Choice Cascade (and associated Strategy Logic Flow), can be considered another form of Strategy Deployment – a form of organisational improvement in which solutions emerge from the people closest to the problem. The early questions in the cascade focus on the problem, and the later questions focus on the emergence of the solutions.

As a result, when considering Agility as a Strategy, reflect on the above five strategic questions for your Agile Transformation to create alignment around how Agile helps you Play to Win.

A Strategy Deployment Diagram

I came up with an initial diagram to visually summarise Strategy Deployment when I wrote about the dynamics. However, while it showed some of the collaborative elements, I never felt it was sufficient, and still had a hint of hierarchy about it that I didn’t like.

More recently, while reading “Understanding Hoshin Kanri: An Introduction” by Greg Watson, I saw a diagram I liked much more, and was inspired to tweak it to fit my understanding and experience of the approach.

Working from the outside of the picture…

The outer loop shows the people involved and their interactions as a collaboration rather than as a hierarchy. Whilst levels of seniority are represented, these levels describe the nature of decisions being made. Thus the three primary groups are the Leadership Team, Operational Teams and Implementation Teams which reflect the three primary roles in Catchball, and which are responsible for co-creating and owning the various A3 Templates. The bi-directional arrows between these teams show how they collaborate to discover, negotiate and agree on the Outcomes, Plans and Actions. (Note: the original diagram had the arrows only going clockwise, which suggested a directing and reporting dynamic to me).

The three inner circles show the main focuses of each team. The Strategy Team set direction through the True North and Aspirations. The Operational Teams maintain alignment to the intent of the Strategies by making Investments in improvement opportunities. The Implementation Teams have autonomy to realise the strategies by determining and carrying out Tactics and generating Evidence of progress.

The intersections of these circles map onto the three elements of Stephen Bungay’s Directed Opportunism (from his book “The Art of Action”), and they describe the essence of the collaboration between the different groups. The Strategy Team and Operational Teams work together to establish positive Outcomes. The Operational Teams and Implementation Teams work together to define plausible Plans. The Implementation Teams and Strategy Team work together to review the results of the Actions.

Finally, the central intersection of all the circles, and the combination of all of these elements, is a continuous Transformation –  the result of everyone working together with both alignment and autonomy.

Given this visualisation, we can also overlay the three A3 Templates on top, showing which teams have primary responsibility for each.

Like all diagrams, this is a simplification. It’s the map, and not the territory. The collaborations are not as separate and clear-cut as the diagram might imply. Rather, much of this work is emerging and evolving, collectively and simultaneously. I still believe, however, that it is a useful picture of Strategy Deployment as “any form of organisational improvement in which solutions emerge from the people closest to the problem.”

Announcing the X-Matrix Jigsaw Puzzle

fitting the pieces together

The X-Matrix Jigsaw Puzzle is what I call the exercise I use in Strategy Deployment workshops to help people experience creating an X-Matrix in a short space of time. It consists of a pre-defined and generic set of “pieces” with which to populate the various sections, deciding which pieces should go where, and how they fit together.

I’ve just created a page to make this available under Creative Commons Attribution-ShareAlike 4.0 International.  If you try it out, please let me know how you get on!

Strategy Deployment and Impact Mapping

I’ve had a couple of conversations in recent weeks in which Impact Mapping came up in relation to Strategy Deployment so here’s a post on my thoughts about how the two fit together.

An Impact Map is a form of mind-map developed by Gojko Adzic, visualising the why, who, how and what of an initiative. More specifically, it shows the goals, actors involved in meeting the goals, desired impact on the actors (in order to meet the goals), and deliverables to make the impacts. The example below is from Gojko’s website.

As you can see, an Impact Map is a very simple, reductionist visualisation, from Goals down to Deliverables, and while the mind-map format doesn’t entirely constrain this, it tends to be what most examples I have seen look like. It does, however, work in such a way as to start with the core problem (meeting the goal) and allow people to explore and experiment with how to solve that problem via deliverables. This is very much in line with how I define Strategy Deployment.

Let’s see how that Impact Map might translate onto an X-Matrix.

The Goal is clearly an Aspiration, so any relevant measures would neatly fit into the X-Matrix’s bottom section. At the other end, the Deliverables are also clearly Tactics, and would neatly fit in the X-Matrix’s top section. I would also argue that the Impacts provide Evidence that we are meeting the Aspirations, and could fit into the X-Matrix’s right-hand section. What is not so clear is Strategy. I think the Actors could provide a hint, however, and I would suggest that an Impact Map is actually a good diagnosis instrument (as per Rumelt) with which to identify Strategy.

Taking the 4 levels on an Impact Map, and transposing them onto an X-Matrix, creates a view which can be slightly less reductionist (although not as simple), and opens up the possibility of seeing how all the different elements might be related to each other collectively. In the X-Matrix below I have added the nodes from the Impact Map above into the respective places, with direct correlations for the Impact Map relationships. This can be seen in the very ordered pattern of dots. New Tactics (Deliverables) and Evidence (Impacts), and possibly more Aspirations (Goals), would of course also need to be added for the other Strategies (Actors).
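That transposition can be sketched mechanically. This is a minimal illustration assuming a simple dict-based Impact Map; the node names are hypothetical, not taken from Gojko's example.

```python
# One branch of a hypothetical Impact Map: why -> who -> how -> what.
impact_map = {
    "goal": "1 million players",
    "actors": ["Players", "Advertisers"],
    "impacts": {"Players": ["Invite friends"], "Advertisers": ["Buy placements"]},
    "deliverables": {"Invite friends": ["Referral rewards"]},
}

# Transpose onto X-Matrix sections, per the mapping described above:
# Goals -> Aspirations, Impacts -> Evidence, Deliverables -> Tactics,
# and Actors as hints for diagnosing the Strategies.
x_matrix = {
    "Aspirations": [impact_map["goal"]],
    "Evidence": [i for imps in impact_map["impacts"].values() for i in imps],
    "Tactics": [d for ds in impact_map["deliverables"].values() for d in ds],
    "Strategies": [f"(diagnose from actor: {a})" for a in impact_map["actors"]],
}

for section, items in x_matrix.items():
    print(section, "->", items)
```

The Strategies entries are deliberately placeholders: as argued above, the Actors only hint at the underlying Strategy, which still needs to be diagnosed.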

Even though this is a very basic mapping, I hope it’s not too difficult to see the potential to start exploring what other correlations might exist for the identified Tactics, and what the underlying Strategies really are. I leave that as an exercise for you to try – please leave a comment with what ideas you have!

This post is one of a series comparing Strategy Deployment and other approaches.

The Messy Coherence of X-Matrix Correlations

I promised to say more about correlations in my last post on how to TASTE Success with the X-Matrix.

One of the things I like about the X-Matrix is that it allows clarity of alignment, without relying on an overly analytical structure. Rather than consisting of simple hierarchical parent-child relationships, it allows more elaborate many-to-many relationships of varying types. This creates a messy coherence – everything fits together, but without too much neatness or precision.

This works through the shaded matrices in the corners of the X-Matrix – the ones that together form an X and give this A3 its name! Each cell in the matrices represents a correlation between two of the numbered elements. It’s important to emphasise that we are representing correlation, and not causation. There may be a contribution of one to the other, but it is unlikely to be exclusive or immediate. Thus implementing Tactics contributes collectively towards applying Strategies and exhibiting Evidence. Similarly, applying Strategies and exhibiting Evidence both collectively contribute towards meeting Aspirations. What we are looking for is a messy coherence across all the pieces.

There are a few approaches I have used to describe different types of correlation.

  • Directness – Can a direct correlation be explained, or is the correlation indirect via another factor (i.e. it is oblique)? This tends to be easier to be objective about.
  • Strength – Is there a strong correlation between the elements, or is the correlation weak? This tends to be harder to describe because strong and weak are more subjective.
  • Likelihood – Is the correlation probable, possible or plausible? This adds a third option, and therefore another level of complexity, but the language can be useful.

Whatever the language, there is always the option of none. An X-Matrix where everything correlates with everything is usually too convenient and can be a sign of post-hoc justification.

Having decided on an approach, a symbol is used in each cell to visualise the nature of each correlation. I have tried letters and colours, and have recently settled on filled and empty circles, as in the example below. Filled circles represent direct or strong correlations, while empty circles represent indirect or weak correlations. (If using likelihood, a third variant would be needed, such as a circle with a dot in the middle).

Here we can see that there is a direct or strong correlation between “Increase Revenue +10%” (Aspiration 1) and “Global Domination” (Strategy 1). In other words this suggests that Strategy 1 contributes directly or strongly to Aspiration 1. As do all the Strategies, which indicates high coherence. Similarly, Strategy 1 has a direct/strong correlation with Aspiration 2, but Strategy 2 has no correlation, and Strategy 3 only has indirect/weak correlation.

Remember, this is just a hypothesis, and by looking at the patterns of correlations around the X-Matrix we can see and discuss the overall coherence. For example we might question why Strategy 3 only has Tactic 2 with an indirect/weak correlation. Or whether Tactic 2 is the best investment given its relatively poor correlations with both Strategies and Evidence. Or whether Evidence 4 is relevant given its relatively poor correlations with both Tactics and Aspirations.
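The kind of scanning described above can be sketched in code. This is my own illustration, not part of the original post: the correlation cells are held in a small matrix of symbols, and rows with relatively poor correlations are flagged as prompts for discussion. All element names and weights are hypothetical.

```python
# Correlation symbols: "●" direct/strong, "○" indirect/weak, "" none.
WEIGHT = {"●": 2, "○": 1, "": 0}

# One corner matrix: rows are Tactics, columns are Strategies (made-up data).
tactic_strategy = {
    "Tactic 1": {"Strategy 1": "●", "Strategy 2": "○", "Strategy 3": ""},
    "Tactic 2": {"Strategy 1": "",  "Strategy 2": "○", "Strategy 3": "○"},
    "Tactic 3": {"Strategy 1": "●", "Strategy 2": "●", "Strategy 3": ""},
}

def coherence(row: dict) -> int:
    """A crude coherence score: the summed weight of a row's correlations."""
    return sum(WEIGHT[symbol] for symbol in row.values())

# Flag elements with relatively poor correlations, as discussion prompts.
for tactic, row in tactic_strategy.items():
    if coherence(row) <= 2:
        print(f"{tactic}: weak overall correlation - worth questioning")
```

The numeric weights are deliberately crude; the value is not in the score itself but in surfacing which elements the group should talk about.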

It’s visualising and discussing these correlations that is often where the magic happens, as it exposes differences in understanding and perspective on what all the pieces mean and how they relate to each other. This leads to refinement of the X-Matrix, more coherence and stronger alignment.

TASTE Success with an X-Matrix Template

I’ve put together a new X-Matrix A3 template to go with the Backbriefing and Experiment A3s I published last month. These 3 templates work well together as part of a Strategy Deployment process, although I should reiterate that the templates alone are not sufficient. A culture of collaboration and learning is also necessary as part of Catchball.

While creating the template I decided to change some of the language on it – mainly because I think it better reflects the intent of each section. However a side-benefit is that it nicely creates a new acronym, TASTE, as follows:

  • True North – the orientation which informs what should be done. This is more of a direction and vision than a destination or future state. Decisions should take you towards rather than away from your True North.
  • Aspirations – the results we hope to achieve. These are not targets, but should reflect the size of the ambition and the challenge ahead.
  • Strategies – the guiding policies that enable us. This is the approach to meeting the aspirations by creating enabling constraints.
  • Tactics – the coherent actions we will take. These represent the hypotheses to be tested and the work to be done to implement the strategies in the form of experiments.
  • Evidence – the outcomes that indicate progress. These are the leading indicators which provide quick and frequent feedback on whether the tactics are having an impact on meeting the aspirations.

Hence working through these sections collaboratively can lead to being able to TASTE success 🙂

One of the challenges with an X-Matrix template is that there is no right number of items which should populate each section. With that in mind I have gone for what I think is a reasonable upper limit, and I would generally prefer to have fewer items than the template allows.

This version also provides no guidance on how to complete the correlations on the 4 matrices in the corners which create the X (e.g. Strong/Weak, Direct/Indirect, Probable/Possible/Plausible). I will probably come back to that in a future version and/or post.

Good Agile/Bad Agile: The Difference and Why It Matters

This post is an unapologetic riff on Richard Rumelt’s book Good Strategy/Bad Strategy: The Difference and Why It Matters. The book is a wonderful analysis of what makes a good strategy and how successful organisations use strategy effectively. I found that it reinforced my notion that Agility is a Strategy, and so this is also a way to help me organise my thoughts about that from the book.

Good and Bad Agile

Rumelt describes Bad Strategy as having four major hallmarks:

  • Fluff – meaningless words or platitudes.
  • Failure to face the challenge – not knowing what the problem or opportunity being faced is.
  • Mistaking goals for strategy – simply stating ambitions or wishful thinking.
  • Bad strategy objectives – big buckets which provide no focus and can be used to justify anything (otherwise known as “strategic horoscopes”).

These hallmarks can also describe Bad Agile. For example, when Agile is just used for the sake of it (Agile is the fluff). Or when Agile is just used to do “the wrong thing righter” (failing to face the challenge). Or when Agile is just used to “improve performance” (mistaking goals for strategy). Or when Agile is just part of a variety of initiatives (bad strategy objectives).

Rumelt goes on to describe a Good Strategy as having a kernel with three elements:

  • Diagnosis – understanding the critical challenge or opportunity being faced.
  • Guiding policy – the approach to addressing the challenge or opportunity.
  • Coherent actions – the work to implement the guiding policy.

Again, I believe this kernel can help identify Good Agile. When Agile works well, it should be easy to answer the following questions:

  • What diagnosis is Agile addressing for you? What is the critical challenge or opportunity you are facing?
  • What guiding policy does Agile relate to? How does it help you decide what you should or shouldn’t do?
  • What coherent actions are you taking that are Agile? How are they coordinated to support the strategy?

Sources of Power

Rumelt suggests that

“a good strategy works by harnessing power and applying it where it will have the greatest effect”.

He goes on to describe nine of these powers (although they are not limited to these nine) and it’s worth considering how Agile can enable them.

  • Leverage – the anticipation of what is most pivotal and concentrating effort. Good Agile will focus on identifying and implementing the smallest change (e.g. MVPs) which will result in largest gains.
  • Proximate objectives – something close enough to be achievable. Good Agile will help identify clear, small, incremental and iterative releases which can be easily delivered by the organisation.
  • Chain-link systems – systems where performance is limited by the weakest link.  Good Agile will address the constraint in the organisation. Understanding chain-link systems is effectively the same as applying Theory of Constraints. 
  • Design – how all the elements of an organisation and its strategy fit together and are co-ordinated to support each other. Good Agile will be part of a larger design, or value stream, and not simply a local team optimisation. Using design is effectively the same as applying Systems Thinking. 
  • Focus – concentrating effort on achieving a breakthrough for a single goal. Good Agile limits work in process in order to help concentrate effort on that single goal to create the breakthrough.
  • Growth – the outcome of growing demand for special capabilities, superior products and skills. Good Agile helps build both the people and products which will result in growth.
  • Advantage – the unique differences and asymmetries which can be exploited to increase value. Good Agile helps exploit, protect or increase demand to gain a competitive advantage. In fact Good Agile can itself be an advantage.
  • Dynamics – anticipating and riding a wave of change. Good Agile helps explore new and different changes and opportunities, and then exploits them.
  • Inertia and Entropy – the resistance to change, and decline into disorder. Good Agile helps organisations overcome their own inertia and entropy, and take advantage of competitors’ inertia and entropy. In effect, having less inertia and entropy than your competition means having a tighter OODA loop.

In general, we can say that Good Agile “works by harnessing power and applying it where it will have the greatest effect”, and it should be possible to answer the following question:

  • What sources of power is your strategy harnessing, and how does Agile help apply it?

Thinking like an Agilist

Rumelt concludes with some thoughts on creating strategy, and what he suggests is

“the most useful shift in viewpoint: thinking about your own thinking”.

He describes this shift from the following perspectives:

  • The Science of Strategy – strategy as a scientific hypothesis rather than a predictable plan.
  • Using Your Head – expanding the scanning horizon for ideas rather than settling on the first idea.
  • Keeping Your Head – using independent judgement to decide the best approach rather than following the crowd.

This is where I see a connection between Good Strategy and Strategy Deployment, which is an approach to testing hypotheses (the science of strategy), deliberately exploring multiple options (using your head), and discovering an appropriate, contextual solution (keeping your head).

In summary, Good Agile is deployed strategically by being part of a kernel, with a diagnosis of the critical problem or opportunity being faced, guiding policy which harnesses a source of power, and coherent actions that are evolved through experimenting as opposed to being instantiated by copying.

A3 Templates for Backbriefing and Experimenting

I’ve been meaning to share a couple of A3 templates that I’ve developed over the last year or so while I’ve been using Strategy Deployment. To paraphrase what I said when I described my thoughts on Kanban Thinking, we need to create more templates, rather than reduce everything down to “common sense” or “good practice”. In other words, the more A3s and Canvases there are, the more variety there is for people to choose from, and hopefully, the more people will think about why they choose one over another. Further, if people can’t find one that’s quite right, I encourage them to develop their own, and then share it so there is even more variety and choice!

Having said that, the value of A3s is always in the conversations and collaborations that take part while populating them. They should be co-created as part of a Catchball process, and not filled in and handed down as instructions.

Here are the two I am making available. Both are used in the context of the X-Matrix Deployment Model. Click on the images to download the pdfs.

Backbriefing A3

Backbriefing A3

This one is heavily inspired by Stephen Bungay’s Art of Action. I use it to charter a team working on a tactical improvement initiative. The sections are:

  • Context – why the team has been brought together
  • Intent – what the team hopes to achieve
  • Higher Intent – how the team’s work helps the business achieve its goals
  • Team – who is, or needs to be, on the team
  • Boundaries – what the team are or are not allowed to do in their work
  • Plan – what the team are going to do to meet their intent, and the higher intent

The idea here is to ensure a tactical team has understood their mission and mission parameters before they move into action. The A3 helps ensure that the team remain aligned to the original strategy that has been deployed to them.

The Plan section naturally leads into the Experiment A3.

Experiment A3

Experiment A3

This is a more typical A3, but with a bias towards testing the hypotheses that are part of Strategy Deployment. I use this to help tactical teams in defining the experiments for their improvement initiative. The sections are:

  • Context – the problem the experiment is trying to solve
  • Hypothesis – the premise behind the experiment
  • Rationale – the reasons why the experiment is coherent
  • Actions – the steps required to run the experiment
  • Results – the indicators of whether the experiment has worked or not
  • Follow-up – the next steps based on what was learned from the experiment

Note that experiments can (and should) attempt to both prove and disprove a hypothesis to minimise the risk of confirmation bias. And the learning involved should be “safe to fail”.