More on Leader-Leader and Autonomy-Alignment

I had some great feedback about my last post on Strategy Deployment and Leader-Leader from Mike Cottmeyer via Facebook, where he pointed out that my graph overlaying the Leader-Leader and Autonomy-Alignment models could be misinterpreted. It could suggest that simply increasing alignment/clarity and autonomy/control would result in increased capability. That’s not the case, so I have revised the image to this one.

What Marquet is describing with the Leader-Leader model is that in order to give people more control, you need to support them by both increasing the clarity of their work and developing their competence to do it. Thus, as you increase clarity and competence, you can give more control. Competence, therefore, is a third dimension to the graph, which we can also call Ability to maintain the alliteration.

  • Increasing Ability leads to more Competence
  • Increasing Alignment leads to more Clarity
  • Increasing Autonomy leads to more Control being given.

The dashed arc is intended to show that increasing ability/competence and alignment/clarity is required before autonomy/control can be increased.


Strategy Deployment and Leader-Leader

This is a continuation of my series comparing Strategy Deployment and other approaches, with the intent of showing that there are many ways of approaching it and highlighting the common ground.

Leader-Leader is the name David Marquet gives to a model of leadership in his book Turn the Ship Around, which tells the story of his experiences and learnings when he commanded the USS Santa Fe nuclear submarine. Like Stephen Bungay’s Art of Action with its Directed Opportunism, the book is not directly about Strategy Deployment, but it is still highly relevant.

From Inno-Versity Presents: “Greatness” by David Marquet

The Leader-Leader model consists of a bridge (give control) which is supported by two pillars (competence and clarity).

This has a lot of synergy with the alignment and autonomy model, also described by Stephen Bungay, and the two could be overlaid as follows:

In other words:

  • Giving control is about increasing autonomy.
  • Creating clarity is about increasing alignment.
  • Growing competence is about increasing the ability to benefit from clarity and control.

Update: The above graph has been revised in a new post on Leader-Leader and Autonomy-Alignment.

The overall theme, which is what really struck a chord with me, is moving from a “doing” organisation where people just do what they are told and focus on avoiding errors, to a “thinking” organisation where people think for themselves and focus on achieving success. In doing so, organisations can…

achieve top performance and enduring excellence and development of additional leaders.

That sounds like what I would call a learning organisation!

Marquet achieves this with a number of “mechanisms”, which he emphasises are…

[not] prescriptions that, if followed, will result in the same long-term systemic improvements.

Instead they are examples of actions intended to not simply empower people, or even remove things that disempower people (which both still imply a Leader-Follower structure where the Leader has power over the Follower), but to emancipate them.

With emancipation we are recognising the inherent genius, energy, and creativity in all people, and allowing those talents to emerge. We realise that we don’t have the power to give these talents to others, or “empower” them to use them, only the power to prevent them from coming out. Emancipation results when teams have been given decision-making control and have the additional characteristics of competence and clarity. You know you have an emancipated team when you no longer need to empower them. Indeed, you no longer have the ability to empower them because they are not relying on you as their source of power.

That’s what Strategy Deployment should strive for as organisational improvement that allows solutions to emerge from those closest to the problem. Or to paraphrase that definition using David Marquet’s words, achieving organisational excellence by moving decision authority to the information.


A Strategy Deployment Cadence

A Strategy Deployment cadence is the rhythm with which you plan, review and steer your organisational change work. When I blogged about cadence eight years ago I said that…

“cadence is what gives a team a feeling of demarcation, progression, resolution or flow. A pattern which allows the team to know what they are doing and when it will be done.”

For Strategy Deployment, the work tends to be more about organisational improvement than product delivery, although the two are rarely mutually exclusive.

The following diagram is an attempt to visualise the general form of cadence I recommend to begin with. While it implies a 12-month cycle (and it usually is one to begin with), I am reminded of the exchange between the Fisherman and the Accountant that Bjarte Bogsnes describes in his book “Implementing Beyond Budgeting”. It goes something like this.

  • Accountant: What do you do?
  • Fisherman: I’m at sea for 5 months, and then home for 5 months
  • Accountant: Hmm. What do you do the other 2 months?

The intent of this Strategy Deployment cadence is not to create another 12-month planning (or budgeting) cycle, but to have a cycle which enables a regular setting of direction, with adjustment based on feedback. How often that happens should be contextual.

Given that caveat, let’s look at the diagram in more detail.

Planning & Steering

The primary high-level cadences are planning and steering, which typically happen quarterly. This allows sufficient time for progress to be made without any knee-jerk reactions, but is frequent enough to make any necessary changes before it is too late. Planning and Steering are workshops where a significant group of people, representing both the breadth and depth of the organisation, bring in as much diversity of experience and opinion as possible. This means that all departments are involved, and all levels of employee, from senior management to front-line workers. I have previously described how we did annual and quarterly planning at Rally.

Planning is an annual 2-day event which answers the following questions:

  • What is our current situation?
  • What aspirations define our ambition?
  • What are the critical challenges or opportunities to address?
  • What strategies should we use to guide decisions?
  • What evidence will indicate our strategies are having an impact?
  • What tactics should we invest in first?

Steering is then a quarterly 1-day event which answers the following questions:

  • What progress have we made towards our aspirations?
  • How effective have our strategies been?
  • What evidence have we seen of improvement?
  • What obstacles are getting in our way?
  • What tactics should we invest in next?

Thus annual planning is a deep diagnosis and situational assessment of the current state to set direction, aspirations and strategy, while quarterly steering is more of a realignment and adjustment to keep on track.
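As a rough illustration, the nested cadences described above could be sketched as a simple calendar lookup. This is a minimal sketch assuming a January-start cycle; the month numbers are my assumption for illustration, since, as noted, the exact timings should be contextual.

```python
# Sketch of the nested Strategy Deployment cadences described above.
# Assumes a cycle starting in January; tune the timings to your own context.

def events_in_month(month: int) -> list[str]:
    """Return the cadence events that fall in a given month (1-12)."""
    events = []
    if month == 1:
        events.append("Planning")   # annual, 2-day deep diagnosis
    elif month in (4, 7, 10):
        events.append("Steering")   # quarterly, 1-day realignment
    events.append("Review")         # monthly tactic-level feedback
    events.append("Refresh")        # near-continuous evidence updates
    return events
```

So, for example, July combines a quarterly Steering event with the ongoing monthly Review and constant Refresh cadences.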


The high-level tactics chosen to invest in during planning and steering form the basis of continuous learning in Strategy Deployment. It is these tactics for which Backbriefing A3s can be created, and it is they that generate the information and feedback which ultimately informs the steering events. They also form the premise for the more detailed experiments which are the input into the Review events. In the diagram, the large Learn spirals represent these high-level tactics, indicating that they are both iterative and parallel.


Reviewing

Reviewing is a more frequent feedback cadence that typically happens monthly. This allows each tactic enough time to take action and make progress, while still being frequent enough to maintain visibility and momentum. Reviewing is usually a shorter meeting, attended by the tactics’ owners (and other key members of the tactical teams) to reflect on their work so far (e.g. their Backbriefing A3s and Experiment A3s). In doing so, the Review answers the following questions:

  • What progress have we made?
  • Which experiments have we completed? (and what have we learned?)
  • Which experiments require most attention now?
  • Which experiments should we begin next?


Refreshing

Refreshing is an almost constant cadence of updating and sharing the evidence of progress. This provides quick and early feedback on the results and impact of the actions defined in Experiment A3s. Refreshing is not limited to a small group, but should be available to everyone. As such, it can be implemented through the continuous update of a shared dashboard, or the use of a Strategy Deployment “war room” (sometimes called an Obeya). The Refresh can be built into existing events such as daily stand-up meetings, or can be a separate weekly event, to answer the following questions:

  • What new evidence do we have?
  • What evidence suggests healthy progress towards our aspirations?
  • What evidence suggests intervention is needed to meet our aspirations?
  • Where do we need to focus and learn more?

In the diagram, the smaller Refresh spirals represent the more detailed experiments, again indicating that they are both iterative and parallel.

The X-Matrix

At the heart of the diagram is an X representing the X-Matrix, which provides the common context for all the cadences. As I have already said, the exact timings of the various events should be tuned to whatever makes most sense. The goal is to provide a framework within which feedback can easily pass up, down and around the organisation, from the quarterly planning and steering to the daily improvement work and back again. This builds on the ideas I first illustrated in the post on the dynamics of strategy deployment, and the framework becomes one within which to play Catchball.

What I have tried to show here is that Strategy Deployment is not a simple, single PDSA cycle, but rather multiple parallel and nested PDSA cycles, with a variety of both synchronous and asynchronous cadences. Each event in the cadence is an opportunity to collaborate, learn, and co-create solutions which emerge from the people closest to the problem.

If you are interested in learning more about how I can help your organisation introduce a Strategy Deployment cadence and facilitate the key events, please contact me and I’d be happy to talk.

What is an X-Matrix?

An X-Matrix is a template used in organisational improvement that concisely visualises the alignment of an organisation’s True North, Aspirations, Strategies, Tactics and Evidence on a single piece of paper, usually A3 size (29.7 x 42.0cm, 11.69 x 16.53 inches).

The main elements can be described as follows:

  • True North – the orientation which informs what should be done. This is more of a direction and vision than a destination or future state. Decisions should take you towards rather than away from your True North.
  • Aspirations – the results we hope to achieve. These are not targets, but should reflect the size of the ambition and the challenge ahead.
  • Strategies – the guiding policies that enable us. This is the approach to meeting the aspirations by creating constraints which will enable decisions on what to do.
  • Tactics – the coherent actions we will take. These represent the hypotheses to be tested and the work to be done to implement the strategies in the form of experiments.
  • Evidence – the outcomes that indicate progress. These are the leading indicators which provide quick and frequent feedback on whether the tactics are having an impact on meeting the aspirations.

The alignment of all these elements is shown through the 4 matrices in the corners of the template, which form an X and thus give the format its name. Each of the cells in the matrices indicate the strength of correlation (e.g. strong, weak, none) between the various pairs of elements, forming a messy coherence of the whole approach.
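To make the structure concrete, here is a minimal sketch of an X-Matrix represented as data, with the four corner matrices recording correlation strengths between pairs of elements. The element names and correlations are hypothetical placeholders, not a real example from the template.

```python
# Minimal sketch of an X-Matrix as data. Each corner matrix maps a
# (row element, column element) pair to a correlation strength:
# "strong", "weak", or absent for none. All names are illustrative.

x_matrix = {
    "aspirations": ["Grow revenue", "Delight customers"],
    "strategies": ["Focus on retention"],
    "tactics": ["Improve onboarding"],
    "evidence": ["Churn rate trend"],
    # The four corner matrices that form the X:
    "strategy_vs_aspiration": {
        ("Focus on retention", "Grow revenue"): "strong",
        ("Focus on retention", "Delight customers"): "weak",
    },
    "tactic_vs_strategy": {
        ("Improve onboarding", "Focus on retention"): "strong",
    },
    "evidence_vs_tactic": {
        ("Churn rate trend", "Improve onboarding"): "strong",
    },
    "evidence_vs_aspiration": {
        ("Churn rate trend", "Grow revenue"): "weak",
    },
}
```

The value of the format is less in the data itself than in the conversation needed to agree each correlation, which is where the “messy coherence” emerges.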

Completing an X-Matrix is a collaborative process of co-creation and clarification to get everyone literally on the same page about the work that needs to be done to succeed. Used to its full potential, the X-Matrix can become a central piece in Strategy Deployment, helping to guide discussions and decisions about what changes to make, why to make them, and how to assess them.

You can download a copy of my X-Matrix template, along with some related ones. Like most A3 formats, there are many variations available, and you will find that a lot of other X-Matrix versions have an additional Teams section on the right-hand side. My experience so far has been that this adds only marginal value, and I have therefore chosen not to include it.

If you would like help in using the X-Matrix as part of a Strategy Deployment improvement approach, please contact me to talk.

Strategy Deployment and OKRs

This is another post in my series comparing Strategy Deployment and other approaches, with the intent of showing that there are many ways of approaching it and highlighting the common ground.

In May this year Dan North published a great post about applying OKRs – Objectives and Key Results. I’ve been aware of OKRs for some time, and we experimented with them a bit when I was at Rally, and Dan’s post prompted me to re-visit them from the perspective of Strategy Deployment.

To begin with…

Let’s look at the two key elements of OKRs:

  • Objectives – these are what you would like to achieve. They are where you want to go. In Strategy Deployment terms, I would say they are the Tactics you will focus on.
  • Key Results – these are the numbers that will change when you have achieved an Objective. They are how you will know you are getting there. In Strategy Deployment terms, I would say they are the Evidence you will look for.

OKRs are generally defined quarterly at different levels, from the highest organisational OKRs, through department and team OKRs, down to individual OKRs. Each level’s OKRs should reflect how they will achieve the next level up’s OKRs. They are not handed down by managers, however, but created collaboratively through challenging and negotiating, and they are transparent and visible to everyone. At the end of the quarter they are then scored at every level as a way of assessing, learning and steering.
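A hypothetical sketch of this cascading structure with end-of-quarter scoring might look like the following. The data model and the 0.0–1.0 scoring scale are my assumptions for illustration; the sources discussed here do not prescribe one.

```python
# Illustrative sketch of a cascading OKR structure. Each Objective holds
# Key Results (scored 0.0-1.0 at quarter end) and the next level down's
# child Objectives. All names and the scoring scale are assumptions.

from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    score: float = 0.0  # graded at the end of the quarter

@dataclass
class Objective:
    description: str
    key_results: list = field(default_factory=list)
    children: list = field(default_factory=list)  # next level's OKRs

    def score(self) -> float:
        """An Objective's score as the average of its Key Result scores."""
        if not self.key_results:
            return 0.0
        return sum(kr.score for kr in self.key_results) / len(self.key_results)
```

The `children` list is what makes the one-to-many hierarchy explicit, which is exactly the decomposition questioned later in this post.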

As such, OKRs provide a mechanism for both alignment and autonomy, and I would say that the quarterly cascading of Objectives, measured by Key Results, could be considered the simplest form of Strategy Deployment and a very good way of bootstrapping a Strategy Deployment approach.

Having said that…

There are a few things about OKRs that I’m unsure about, and that I miss from the X-Matrix model.

It seems to me that while OKRs focus on a quarterly, shorter-term time horizon, the longer-term aspirations and strategies are only implied in the creation of top-level OKRs and the subsequent cascading process. If those aspirations and strategies are not explicit, is there a risk that the detailed, individual OKRs end up drifting away from the original intent?

This is amplified by the fact that OKRs naturally form a one-to-many hierarchical structure through decomposition, as opposed to the messy coherence of the X-Matrix. As the organisational OKRs cascade their way down to individual OKRs, there is also a chance that as they potentially drift away from the original intent, they also begin to conflict with each other. What is to stop one person’s key results being diametrically opposed to someone else’s?

Admittedly, the open and collaborative nature of the process may guard against this, and the cascading doesn’t have to be quite so linear, but it still feels like an exercise in local optimisation. If each individual meets their objectives, then each department and team will meet their objectives, and thus the organisation will meet its objectives. Systems Thinking suggests that rather than optimising the parts like this, we should look to optimise the whole.

In summary…

OKRs do seem like a simple form of organisational improvement in which solutions emerge from the people closest to the problem. I’m interested in learning more about how the risks I have highlighted might be mitigated. I can imagine how OKRs could be blended with an X-Matrix as a way of doing this, with Objectives mapping to shared Tactics and Key Results mapping to shared Evidence.

If you have any experience of OKRs, let me know your feedback in the comments.

What’s the Difference Between a Scrum Board and a Kanban Board?

During a recent kanban training course, this question came up, and my simple answer seemed to be a surprise and an “aha” moment. I tweeted an abbreviated version of the question and answer, and got lots of interesting and valid responses. Few of the responses, however, really addressed the essence of my point, so this post is a more in-depth answer.

When the question came up, I began by drawing out a very generic “Scrum Board” as below:

Everyone agreed that this visualises the basic workflow of a Product Backlog Item, from being an option on the Product Backlog, to being planned into a Sprint and on the Sprint Backlog, to being worked on and In Progress, to being ready for Accepting by the Product Owner, and finally to being Done.

To convert this into a Kanban Board I simply added a number (OK, two numbers), representing a WIP limit!

And that’s it. While there are many other things that might be different on a Kanban Board, to do with the flow of the work or the nature of the work, the most basic difference is the addition of a WIP limit. Or, to be more pedantic, you might say an explicit WIP limit, and as was also pointed out (although I can’t find the tweet now), it’s actually creating a pull signal.

Thus, the simple answer to the question about the difference between a Scrum board and a Kanban board is that a Kanban Board has WIP Limits, and the more sophisticated answer is that a Kanban Board signals which work can and should be pulled to maintain flow.
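The pull-signal idea can be sketched in a few lines of code: a column with an explicit WIP limit signals that work can be pulled only when it has free capacity. This is a minimal sketch; the column names and method names are illustrative, not a prescribed board design.

```python
# Minimal sketch of a WIP-limited board column. A free slot under the
# WIP limit is the pull signal; pulling into a full column is refused.

class Column:
    def __init__(self, name, wip_limit=None):
        self.name = name
        self.wip_limit = wip_limit  # None means no explicit limit
        self.items = []

    def can_pull(self) -> bool:
        """True when this column signals that work can be pulled in."""
        return self.wip_limit is None or len(self.items) < self.wip_limit

    def pull(self, item, source):
        """Pull an item from the source column, respecting the WIP limit."""
        if not self.can_pull():
            raise RuntimeError(f"WIP limit reached in '{self.name}'")
        source.items.remove(item)
        self.items.append(item)
```

Note that nothing here is Kanban-specific: the same few lines could sit behind a Scrum board, which is the point of the next paragraph.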

Of course, this isn’t to say that Scrum teams can’t have WIP limits, or pull work, or visualise their work in more creative ways. They can, and probably should! In fact the simplicity of just adding a WIP limit goes to show how compatible Scrum and Kanban can be, and how they can be stronger together.

Strategy Deployment and Playing to Win

“Playing to Win” is a book by A.G. Lafley and Roger L. Martin about “How Strategy Really Works”, and it describes a model, the Strategic Choice Cascade, developed by the authors at P&G.

This model leads to the following “five strategic questions that create and sustain lasting competitive advantage”:

  1. Have you defined winning, and are you crystal clear about your winning aspiration?
  2. Have you decided where you can play to win (and just as decisively where you will not play)?
  3. Have you determined how, specifically, you will win where you choose to play?
  4. Have you pinpointed and built your core capabilities in such a way that they enable your where-to-play and how-to-win choices?
  5. Do your management systems and key measures support your other four strategic choices?

While there isn’t a direct fit between these questions and my X-Matrix TASTE model, I believe there is enough of an overlap for the model and the questions to be useful.

Let’s look at them one by one:

Have you defined winning, and are you crystal clear about your winning aspiration?

Defining winning, and in particular winning aspirations, is the most obvious fit with the X-Matrix Aspirations. In fact, it’s possible that my choice of the word aspiration was influenced by Playing to Win. I confess I started reading the book some time ago, but I can’t remember exactly how long!

Have you decided where you can play to win (and just as decisively where you will not play)?

Deciding where to play to win links primarily to the X-Matrix Strategies, especially with Strategy being about decisions and choices, and hence also deciding where not to play.

Have you determined how, specifically, you will win where you choose to play?

The specificity of determining how to win feels like a link to the X-Matrix Tactics, although I think there is still something strategic about “how” as opposed to “what”.

Have you pinpointed and built your core capabilities in such a way that they enable your where-to-play and how-to-win choices?

Building core capabilities can be considered to be X-Matrix Tactics, especially if we consider determining how to win to be more strategic. On the other hand, I also often describe the development of capabilities as providing X-Matrix Evidence of progress towards Aspirations.

Do your management systems and key measures support your other four strategic choices?

The key measures to support the other choices will also support the X-Matrix elements and correlations, and thus provide Evidence of progress towards Aspirations. Additionally, the management systems the book describes, emphasising assertive enquiry, closely resemble Catchball and the sort of collaboration I would expect Strategy Deployment to demonstrate.

My conclusion, therefore, is that the approach described in Playing to Win, with its Strategic Choice Cascade (and associated Strategy Logic Flow) can be considered to be another form of Strategy Deployment – a form of organisational improvement in which solutions emerge from the people closest to the problem. The early questions in the cascade focus on the problem, and the later questions focus on the emergence of the solutions.

As a result, when considering Agility as a Strategy, reflect on the above five strategic questions for your Agile Transformation to create alignment around how Agile helps you Play to Win.

A Strategy Deployment Diagram

I came up with an initial diagram to visually summarise Strategy Deployment when I wrote about the dynamics. However, while it showed some of the collaborative elements, I never felt it was sufficient, and it still had a hint of hierarchy about it that I didn’t like.

More recently, while reading “Understanding Hoshin Kanri: An Introduction” by Greg Watson, I saw a diagram I liked much more, and was inspired to tweak it to fit my understanding and experience of the approach.

Working from the outside of the picture…

The outer loop shows the people involved and their interactions as a collaboration rather than as a hierarchy. Whilst levels of seniority are represented, these levels describe the nature of decisions being made. Thus the three primary groups are the Leadership Team, Operational Teams and Implementation Teams which reflect the three primary roles in Catchball, and which are responsible for co-creating and owning the various A3 Templates. The bi-directional arrows between these teams show how they collaborate to discover, negotiate and agree on the Outcomes, Plans and Actions. (Note: the original diagram had the arrows only going clockwise, which suggested a directing and reporting dynamic to me).

The three inner circles show the main focuses of each team. The Strategy Team sets direction through the True North and Aspirations. The Operational Teams maintain alignment to the intent of the Strategies by making Investments in improvement opportunities. The Implementation Teams have autonomy to realise the strategies by determining and carrying out Tactics and generating Evidence of progress.

The intersections of these circles map onto the three elements of Stephen Bungay’s Directed Opportunism (from his book “The Art of Action”), and they describe the essence of the collaboration between the different groups. The Strategy Team and Operational Teams work together to establish positive Outcomes. The Operational Teams and Implementation Teams work together to define plausible Plans. The Implementation Teams and Strategy Team work together to review the results of the Actions.

Finally, the central intersection of all the circles, and the combination of all of these elements, is a continuous Transformation – the result of everyone working together with both alignment and autonomy.

Given this visualisation, we can also overlay the three A3 Templates on top, showing which teams have primary responsibility for each.

Like all diagrams, this is a simplification. It’s the map, and not the territory. The collaborations are not as separate and clear-cut as it might imply. Rather, much of this work is emerging and evolving, collectively and simultaneously. I still believe, however, that it is a useful picture of Strategy Deployment as “any form of organisational improvement in which solutions emerge from the people closest to the problem.”

Announcing the X-Matrix Jigsaw Puzzle

fitting the pieces together

The X-Matrix Jigsaw Puzzle is what I call the exercise I use in Strategy Deployment workshops to help people experience creating an X-Matrix in a short space of time. It consists of a pre-defined and generic set of “pieces” with which to populate the various sections, deciding which pieces should go where, and how they fit together.

I’ve just created a page to make this available under Creative Commons Attribution-ShareAlike 4.0 International. If you try it out, please let me know how you get on!

Strategy Deployment and Impact Mapping

I’ve had a couple of conversations in recent weeks in which Impact Mapping came up in relation to Strategy Deployment, so here’s a post on my thoughts about how the two fit together.

An Impact Map is a form of mind-map developed by Gojko Adzic, visualising the why, who, how and what of an initiative. More specifically, it shows the goals, actors involved in meeting the goals, desired impact on the actors (in order to meet the goals), and deliverables to make the impacts. The example below is from Gojko’s website.

As you can see, an Impact Map is a very simple, reductionist visualisation, from Goals down to Deliverables, and while the mind-map format doesn’t entirely constrain this, it tends to be what most examples I have seen look like. It does, however, work in such a way as to start with the core problem (meeting the goal) and allow people to explore and experiment with how to solve that problem via deliverables. This is very much in line with how I define Strategy Deployment.

Let’s see how that Impact Map might translate onto an X-Matrix.

The Goal is clearly an Aspiration, so any relevant measures would neatly fit into the X-Matrix’s bottom section. At the other end, the Deliverables are also clearly Tactics, and would neatly fit into the X-Matrix’s top section. I would also argue that the Impacts provide Evidence that we are meeting the Aspirations, and could fit into the X-Matrix’s right-hand section. What is not so clear is Strategy. I think the Actors could provide a hint, however, and I would suggest that an Impact Map is actually a good diagnosis instrument (as per Rumelt) with which to identify Strategy.

Taking the four levels of an Impact Map, and transposing them onto an X-Matrix, creates a view which can be slightly less reductionist (although not as simple), and opens up the possibility of seeing how all the different elements might be related to each other collectively. In the X-Matrix below I have added the nodes from the Impact Map above into their respective places, with direct correlations for the Impact Map relationships. This can be seen in the very ordered pattern of dots. New Tactics (Deliverables) and Evidence (Impacts), and possibly more Aspirations (Goals), would of course also need to be added for the other Strategies (Actors).
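The transposition itself can be sketched as a simple mapping of levels to sections. The mapping follows the argument above (Goals as Aspirations, Actors hinting at Strategies, Impacts as Evidence, Deliverables as Tactics); the node names in the usage are hypothetical placeholders rather than Gojko’s actual example.

```python
# Sketch of transposing the four Impact Map levels onto X-Matrix
# sections, as argued above. Node names used with it are illustrative.

IMPACT_MAP_TO_X_MATRIX = {
    "Goal": "Aspirations",      # bottom section
    "Actor": "Strategies",      # a hint towards strategy, via diagnosis
    "Impact": "Evidence",       # right-hand section
    "Deliverable": "Tactics",   # top section
}

def transpose(impact_map_nodes):
    """Group (level, name) Impact Map nodes into their X-Matrix sections."""
    sections = {section: [] for section in IMPACT_MAP_TO_X_MATRIX.values()}
    for level, name in impact_map_nodes:
        sections[IMPACT_MAP_TO_X_MATRIX[level]].append(name)
    return sections
```

For example, `transpose([("Goal", "Grow mobile users"), ("Deliverable", "Referral feature")])` would place the goal under Aspirations and the deliverable under Tactics, leaving the correlations between sections as the collaborative work still to do.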

Even though this is a very basic mapping, I hope it’s not too difficult to see the potential to start exploring what other correlations might exist for the identified Tactics, and what the underlying Strategies really are. I leave that as an exercise for you to try – please leave a comment with what ideas you have!

This post is one of a series comparing Strategy Deployment and other approaches.