Measuring the X-Matrix

"Measure a thousand times, cut once"

Dave Snowden recently posted a series of blog posts on A Sense of Direction, about the use of goals and targets with Cynefin. As the X-Matrix uses measures in two of its sections (Aspirations and Evidence) I found that useful in clarifying my thinking on how I generally approach those areas.

Let’s start by addressing Dave’s two primary concerns: the tyranny of the explicit and a cliché of platitudes.

To avoid the tyranny of the explicit, I’ve been very careful to avoid the use of the word target. Evidence was a carefully chosen word (after trying multiple alternatives) to describe leading indicators of positive outcomes. The outcomes themselves are not specific goals, and can be either objective or subjective. They are things we want to see more of (or less of) and should be trends, suggesting an increased likelihood of meeting Aspirations. Aspirations again was chosen to suggest hope and ambition rather than prediction and expectation. While they define desired results, those should be considered to be challenges and not targets.

To avoid a cliché of platitudes we need to focus on Good Strategy, beginning with clear, challenging and ambitious Aspirations. I find it interesting that Dave cites Kennedy’s “man on the moon” challenge as a liminal dip into chaos, while Rumelt uses the same example as a Proximate Objective source of power for Good Strategy. An ambitious yet achievable Aspiration helps focus on the challenges and opportunities for change. With proximate Aspirations, and a clear Diagnosis of the current situation, we can set some Guiding Policies as enabling constraints with which to decide what Coherent Action to take. Thus we can avoid fluffy, wishful, aimless or horoscopic Bad Strategy.

Put together, we have a set of hypotheses which are specific enough to avoid a cliché of platitudes, yet are speculative enough to avoid the tyranny of the explicit. We believe the Aspirations are achievable. We believe our Strategies will help us be successful. We believe our Tactics will move us forward on a good bearing. We believe the Evidence will indicate favourable progress. The X-Matrix helps visualise the messy coherence of all these hypotheses with each other and Strategy Deployment is the means of continuously testing, adjusting and refining them as we navigate our way in the desired direction.

Failure Is Not An Option

Before reading this post, I encourage you to have a go at this quick puzzle to test your problem solving.

Go on, you won’t regret it!

How did you do?

I’ve used this challenge numerous times in various talks and workshops and I find that the majority of people follow the same pattern: they only test their hypotheses by guessing sequences which they think will succeed in following the rule, even when I’ve occasionally primed them by talking about the need for failure. It’s only after repeated guesses that someone will eventually propose a sequence which does fail, and the “oohs” and “aahs” noticeably indicate learning.

This problem is a nice example of how we learn more when we fail, because failure generates new information. Information Theory suggests that maximum information is generated when the probability of failure is 50%. If we never fail then we must know everything already, and if we fail all the time then we must be repeating the same mistakes over and over again.
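That information-theory claim is easy to sketch. For a single experiment with failure probability p, Shannon’s binary entropy gives the information generated, and it peaks at p = 0.5. This is the standard formula, nothing specific to the X-Matrix:

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy (in bits) of a yes/no outcome with failure probability p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty, so no new information
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Entropy peaks at p = 0.5 (a full bit) and falls to zero at the extremes
for p in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f} -> {binary_entropy(p):.3f} bits")
```

At p = 0.5 each experiment yields a full bit of information; at p = 0 or p = 1 the result was already known, so we learn nothing.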

Of course, that doesn’t necessarily mean we always want to fail 50% of the time. We wouldn’t want any plane we fly in to have a 50% probability of crashing, but then we’re not trying to generate new information and learning when we fly. Context is important. However, when we are developing new products, we do want to learn, so a 50% failure rate is more appropriate. Failure is not an option, it’s a necessity!

That’s easier said than done though, so here are some things I have learnt which may help.

Run Experiments

Firstly, to be open to failure we need to consider the assumptions we have made when deciding what to do. We should treat those assumptions as hypotheses, and come up with ways to test them. A simple template which can be useful as training wheels is this one:

  • We believe that <solution>
  • Will result in <outcome>
  • We will know we have succeeded when <evidence>
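Teams tracking several experiments at once sometimes find it useful to capture the template as structured data. Here is a hypothetical sketch; the class and field names are my own invention, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    solution: str  # what we believe will work
    outcome: str   # the result we expect it to produce
    evidence: str  # the leading indicator that tells us we have succeeded

    def statement(self) -> str:
        """Render the hypothesis using the training-wheels template."""
        return (f"We believe that {self.solution} "
                f"will result in {self.outcome}. "
                f"We will know we have succeeded when {self.evidence}.")

# Example usage with made-up content
h = Hypothesis(
    solution="limiting work in progress",
    outcome="shorter lead times",
    evidence="median lead time trends downwards over the next quarter",
)
print(h.statement())
```

Keeping the three parts as separate fields makes it harder to skip the evidence, which is the part most often left vague.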

Having said that, the puzzle this post opened with showed how we need our experiments to fail as well as succeed, so we also need to intentionally try to falsify our hypotheses. As Karl Popper wrote in The Poverty of Historicism:

The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations. For if we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain . . . overwhelming evidence in favour of a theory which, if approached critically, would have been refuted.

Thus the evidence we look for should show not only that we have proved our hypotheses, but also that we have failed to disprove them. This means moving from a fail-safe approach, where we assume a low probability of failure, to a safe-to-fail approach, where we can quickly and cheaply recover from failure without putting lives, careers or reputations at risk.

Illuminate Feedback

We need to run experiments because we are generally dealing with complex problems, where cause and effect are not repeatable and we can only explain results with retrospective coherence. We cannot rely on experts to prescribe good practice to follow, and instead need to rely on emergent practice as a result of experimentation.

The consequence is that experts disagree, a tell-tale sign of a complex problem. Recognising a problem as complex, with no knowable solution, is the first step in breaking away from arguing over who is right or wrong. Instead we can use Chris Argyris’s Ladder of Inference to explore why we have differing opinions and how we might reconcile views and achieve mutual learning to allow solutions to emerge.

The ladder has the following rungs:

  • Take Actions
  • Adopt Beliefs
  • Draw Conclusions
  • Make Assumptions
  • Interpret Meaning
  • Select Reality
  • Observe Reality

We naturally tend to climb to the top of the ladder, quickly taking action without considering the beliefs, conclusions, assumptions, meaning and reality that have led to the action. By climbing down the ladder we can understand both why we take certain actions, and why others take different ones, which is probably due to different beliefs, conclusions, assumptions, interpretations or selections of data.

This is known as Assertive Inquiry, where we advocate for our view while at the same time seeking to understand an alternate view. Discovering in this way why we might recommend different actions generates understanding which can lead to potential experiments to test the various views.

We can think of this as shining a light on a problem. If we try to solve problems in the dark, we don’t see the information and feedback from which we can learn. I like the analogy of practising a golf swing in the dark: we won’t see where the golf ball ends up, so we can’t adjust accordingly.

This is similar to the Streetlight Effect: searching for something by looking only where it is easiest. It’s like the “joke” about the drunk who has lost his car keys and is looking for them under the lamppost. When asked why he is looking there, his response is that he won’t find his keys where there is no light.

Expect the Unexpected

Running experiments and searching for feedback in this way means intentionally widening the scanning range so that we are more likely to pick up on information that we might naturally ignore due to our inherent biases.

First we need to overcome the God Complex; an unshakable belief characterised by consistently inflated feelings of personal ability, privilege, or infallibility. In my last post, In the Lap of the Gods, I talked about how this can lead us to not even acknowledge the need to run experiments or search for feedback in the first place. It’s why disagreement can be healthy and why we need to create safe environments where people can openly challenge our thinking.

Then there is Cognitive Dissonance; the inner tension we feel when our beliefs are challenged by evidence. Whenever I walk down a stationary escalator, something seems wrong. My brain is expecting movement but my body doesn’t experience any. It really should be just like walking down stairs, but it doesn’t feel that way. My belief is that the escalator is moving, but the evidence is that it is not, and our natural inclination is to believe our beliefs and ignore the evidence. Thus we dismiss any contrary feedback we receive as being wrong.

Related to this is Confirmation Bias; the tendency to favour information that confirms one’s pre-existing beliefs or hypotheses. This is what generally leads us to only try to prove our hypotheses, but it also means that we only notice feedback which does prove them. Again, any contrary feedback we receive is ignored.

The situation is further complicated by Survivorship Bias; concentrating on the people or things that made it past some selection process and overlooking those that did not. A great example of this is the story of Abraham Wald, a statistician during World War 2. He was tasked with prioritising where to reinforce planes with better armour to increase survival rates, given that the planes’ weight limited the amount of armour possible. Available data from surviving planes showed patterns of damage, and the common theory was to reinforce those most damaged areas.

However, Wald’s insight was that this damage was from planes which had returned, and therefore could survive damage in those areas. As a result, it was likely that planes which did not survive would probably have been hit in the undamaged areas, and this is where any reinforcement should go. Thus we need to pay attention not just to the information that we can see from our experiments, but also consider any information that we don’t see from failed experiments.

And then there is Availability Bias; relying on immediate examples that come to mind when evaluating a specific topic, concept, method or decision. An example is whenever someone mentions a new model of car to us, and suddenly we see that model everywhere. It’s not that it’s suddenly appearing more often; it’s just that our brain notices it more often because it’s more recently available for recall. Our preferred hypotheses will be more available, so we are more likely to notice feedback which relates to them.

So when considering a hypothesis, it’s easy to notice information which is more immediately available, which survives experiments, and which confirms our opinions, formed from the belief that we are experts. And this is just a handful of the huge list of cognitive biases on Wikipedia!


There’s a nice acronym which suggests that a FAIL is a First Attempt In Learning. I use that to highlight that failure is not something that we should shy away from and treat as an enemy, but something we should embrace and befriend. That doesn’t mean encouraging and celebrating failure though. Too much failure is as bad as not enough.

What’s needed is Critical Thinking; the intellectually disciplined process of actively and skilfully conceptualising, applying, analysing, synthesising, and evaluating information to reach an answer or conclusion. The Backbriefing & Experiment A3s I use are intended to encourage this by helping focus on the three areas in this post – running experiments, illuminating feedback, and expecting the unexpected.

To close, I would recommend Black Box Thinking by Matthew Syed to explore these ideas in more depth. The metaphor comes from the aviation industry, and in particular comparing it to the healthcare industry. Syed cites a 2013 study published in the Journal of Patient Safety which put the number of premature deaths associated with preventable harm in healthcare at more than 400,000 per year. That is the equivalent of two 747 airliners crashing every day. His compelling argument is that the aviation industry, where you really don’t want failures, is actually extremely safe because of the way it uses Black Boxes to conscientiously learn from any failures when they do occur. The healthcare industry, on the other hand, has a history of brushing failures under the carpet as inevitable and just the nature of the job.
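That equivalence is easy to sanity-check. Assuming a fully loaded 747 carries roughly 500 passengers (my round figure, not one from the book):

```python
deaths_per_year = 400_000   # the study's lower-bound estimate
passengers_per_747 = 500    # assumed full load, roughly

deaths_per_day = deaths_per_year / 365
crashes_per_day = deaths_per_day / passengers_per_747

print(f"{deaths_per_day:.0f} deaths per day = {crashes_per_day:.1f} full 747s")
# prints "1096 deaths per day = 2.2 full 747s"
```

A little over two full 747s per day, which matches Syed’s comparison.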

We would do well to take note, and be more like the aviation industry, applying the same attitude and discipline so that we can befriend and learn from failure.

In the Lap of the Agile Gods


I’ve noticed a lot of conversation recently (mostly on Twitter) debating how prescriptive, or not, we should be when helping teams through an Agile Transformation (i.e. helping them use Agile approaches to be more Agile). Do we tell teams exactly what to do initially, or allow them complete freedom to figure it out for themselves? Or something else?

Worshipping False Gods

During World War 2, inhabitants of Melanesian islands in the Southwestern Pacific Ocean witnessed the Japanese and Allied forces using their homelands as military bases for troops, equipment and supplies. The troops would often share supplies with the native islanders in return for support or assistance. After the war ended and the troops left, it was observed that the islanders would make copies of some items and mimic certain behaviours they had witnessed, for example recreating mock airfields and airplanes. Not understanding the technologies which had brought the cargo, these rituals were an attempt to attract more from the spiritual gods they thought had originally granted them.

This practice became known as a Cargo Cult, and has taken on a metaphorical use to describe the copying of conditions which are irrelevant or insufficient, in order to reproduce results without understanding the actual context. Thus we get Cargo Cult Agile, where the methods, practices and tools of successful teams and organisations are copied, regardless of the original context. What has become known as the Spotify Method is a prime example, as are other attempts to reproduce the approaches used by successful organisations such as Amazon, Apple or Netflix.

Trying to be Gods

In the 11th century, legend has it that the then King of England, Canute the Great, went to the sea shoreline to sit on his throne and command the tide not to come in. Unsurprisingly his orders had no effect and the tide continued to advance. His intent with this piece of theatre was to show that even Kings do not have the power of Gods. He was not, as is often thought, delusional enough to think that he did have such power.

This delusion is known as the God Complex, where people believe that they have the knowledge, skills, experience and power to design and predict solutions to the challenges we face. Further, they refuse to accept the fact that they might be wrong. Thus we get the Agile God Complex, where Agile is imposed on teams and organisations by managers and consultants in the belief that it will solve all problems. If people would just do it right!

Daniel Mezick has also been referring to something similar as the Agile Industrial Complex. And for the Cynefin crowd, this is of course treating a complex problem as if it were complicated. And while the use of expertise is valid for complicated problems, it still shouldn’t be confused with having god-like power!

Leaving it in the Lap of the Gods

This leaves a potential quandary. If we shouldn’t worship false gods, and we shouldn’t try to be gods, what should we do? Leave it in the lap of the gods? In other words, should we just leave it to chance and hope people will figure things out for themselves?

This is where Strategy Deployment comes into play; allowing solutions to emerge from the people closest to the problem. We don’t need to leave it in the lap of the gods because we can provide clarity of intent and strategic guidance which informs the co-creation of experiments. Thus we can deliberately discover what action we can take to TASTE success.

Put another way, we enable Outcome-Oriented Change. Mike Burrows has recently used this term to describe his Agendashift approach, and the following definition nicely sums up for me how we should help teams through an Agile Transformation.

By building agreement on outcomes we facilitate rapid, experiment-based evolution of process, practice, and organisation. Agendashift avoids the self-defeating prescription of Lean and Agile techniques in isolation; instead we help you keep your vision and strategy aligned with a culture of co-creation and continuous transformation.

Facilitating an X-Matrix Workshop

I was asked recently for guidance on facilitating a workshop to populate an X-Matrix. My initial response was that I don’t have a fixed approach because it’s so contextual. It depends on who is in the room, what is already known or decided, what the existing approaches are, how much time we have, etc. On reflection though, I realised that I do tend to follow some general patterns.


The overall flow is one of divergent and convergent thinking, as described by Tim Brown in Change by Design, and as shown in the following diagram which turns the TASTE model on its side. We begin with an exploration of options, framed by Strategy and Evidence and guided by the True North, before moving to decision making on what actions to take.


I always like to start any training or workshop with a clear purpose. This is the generic starting point I usually begin with. It’s a bit wordy but mentions most of the key points.

To create organisational alignment by collaboratively exploring and co-creating a strategy, and to enable autonomy by communicating and deploying the strategy, such that everyone is involved in discovering how to achieve success.


I referred to the following agenda in my post on a Strategy Deployment Cadence. (Eagle-eyed readers will spot that I have added a seventh item which I originally omitted.)

  1. What is our current situation?
  2. What aspirations define our ambition?
  3. What are the critical challenges or opportunities to address?
  4. What strategies should we use to guide decisions?
  5. What evidence will indicate our strategies are having an impact?
  6. What tactics should we invest in first?
  7. How coherent is the plan?

What is our current situation?

For a group that has already worked this way before, this might be as simple as a brief presentation on the current key business metrics or results so far. Effectively it is reviewing the current likelihood of meeting the aspirations.

For a group that is completely new to this way of working, I have had good experiences of using Future Backwards from Cognitive Edge to create some situational awareness and start drawing out conversations about different perspectives on the way things are.

What aspirations define our ambition?

Again this will probably be some form of quick presentation for established groups – probably combined with the prior review of the current situation.

Alternatively, it can simply be a quick discussion or check-in where the key economic drivers are reasonably well understood.

I can also imagine using a technique such as 1-2-4-all here to draw out some different ideas and come to some agreement.

What are the critical challenges or opportunities to address?

This is where the divergent thinking really starts and I try to encourage as much variety and include as wide a group of roles and experience as possible.

Open Space has worked really well for this, allowing people to bring in and discuss any topics that they feel are relevant. Having said that, it’s important to emphasise that discussions should be around learning more about the current situation, and to hold back on jumping to discussing solutions for now.

The downside has been that Open Space makes it possible to not discuss topics which might be important. I have seen relevant sessions abandoned because the necessary people are not there to share useful information. It might be worth considering some additional constraints, such as specifying that a certain mix of roles or departments must be represented in each session.

What strategies should we use to guide decisions?

This could be as simple as a review or reminder of existing strategies, or it could be a deeper dive into deciding new strategies.

Where there aren’t existing strategies in place, a diagnosis will be needed, which could be an output from a previous activity (e.g. Open Space, or Future Backwards) or a new activity. For example, creating and exploring a Wardley Map might be a useful approach. (I’m looking for an opportunity to try this!)

What evidence will indicate our strategies are having an impact?

As with strategies, this could be as simple as a review or reminder of existing artefacts (of evidence), or it could be a deeper dive into deciding new forms of evidence.

For the latter I have used a variation on the 15-minute FOTO exercise from Mike Burrows and Agendashift which uses Clean Language questions to transform obstacles to outcomes. (FOTO is an acronym for From Obstacles To Outcomes). In this case the “Obstacles” are derived from the challenges and opportunities explored earlier in the agenda, and the “Outcomes” are such that we can look for evidence that they are being achieved.

Multiple resulting Outcomes can then be combined with 1-2-4-all again to come to agreement.

What tactics should we invest in first?

Up until this point, the work has been very exploratory, and participants are usually itching to come to some concrete decisions. This is where the convergence of all the ideas begins to happen.

Open Space has again worked really well for this, allowing people to bring in and discuss the solutions that they feel will have most impact. To help focus, and introduce some enabling constraints, it’s useful to use A3s as an output of each session, either the Backbriefing or Experiment A3 depending on the scale of the workshop.

How coherent is the plan?

The last section depends on whether we are explicitly populating an X-Matrix. Sometimes it is just a model on which the workshop is based. If there is an X-Matrix, it’s usually a large shared one, formed of multiple sheets of flipchart paper, and we use it to look for the messy coherence by filling in the correlations on the various matrices.

Depending on group size I’ll do some variation of a small to large group exercise to get everyone’s input, discover differences and explore the rationale. For very small groups, this can be as simple as 1-2-4-all (again!). With more people we have split into four sub-groups, each working on a separate matrix, and then presenting back for discussion. For a huge group we had sub-groups work on the whole X-Matrix and then use coloured sticky dots to mass-populate a huge shared X-Matrix to see the patterns of agreement and disagreement.


I hope this gives a flavour of how I approach facilitating Strategy Deployment and X-Matrix workshops, without it appearing to be prescriptive. If I have more time with the group then I’ll probably spend it on tactics, forming teams and backbriefing. This could include a Cynefin Four Points Contextualisation, using the Outcomes generated while deciding what Evidence to look for. Another possible agenda item is to decide how to track progress, i.e. what the feedback mechanism is for how and when the evidence will be shared and discussed.

Of course if you would like me to help you – and it is always valuable to have an external facilitator – please contact me. I’d love to talk about how this could work in your context!

More on Leader-Leader and Autonomy-Alignment

I had some great feedback about my last post on Strategy Deployment and Leader-Leader from Mike Cottmeyer via Facebook, where he pointed out that my graph overlaying the Leader-Leader and Autonomy-Alignment models could be mis-interpreted. It could suggest that simply increasing alignment/clarity and autonomy/control would result in increased capability. That’s not the case, so I have revised the image to this one.

What Marquet is describing with the Leader-Leader model is that in order to be able to give people more control, you need to support them by both increasing the clarity of their work and developing their competence to do the work. Thus, as you increase clarity and competence you can give more control. Competence, therefore, is a third dimension to the graph, which we can also call Ability to maintain the alliteration.

  • Increasing Ability leads to more Competence
  • Increasing Alignment leads to more Clarity
  • Increasing Autonomy leads to more Control being given

The dashed arc is intended to show that increasing ability/competence and alignment/clarity is required before autonomy/control can be increased.


Strategy Deployment and Leader-Leader

This is a continuation of my series comparing Strategy Deployment with other approaches, with the intent of showing that there are many ways of approaching it and highlighting the common ground.

Leader-Leader is the name David Marquet gives to a model of leadership in his book Turn the Ship Around, which tells the story of his experiences and learnings when he commanded the USS Santa Fe nuclear submarine. Like Stephen Bungay’s Art of Action with its Directed Opportunism, the book is not directly about Strategy Deployment, but it is still highly relevant.

From Inno-Versity Presents: “Greatness” by David Marquet

The Leader-Leader model consists of a bridge (give control) which is supported by two pillars (competence and clarity).

This has a lot of synergy with the alignment and autonomy model, also described by Stephen Bungay, and the two could be overlaid as follows:

In other words:

  • Giving control is about increasing autonomy.
  • Creating clarity is about increasing alignment.
  • Growing competence is about increasing the ability to benefit from clarity and control.

Update: The above graph has been revised in a new post on Leader-Leader and Autonomy-Alignment.

The overall theme, which is what really struck a chord with me, is moving from a “doing” organisation where people just do what they are told and focus on avoiding errors, to a “thinking” organisation where people think for themselves and focus on achieving success. In doing so, organisations can…

achieve top performance and enduring excellence and development of additional leaders.

That sounds like what I would call a learning organisation!

Marquet achieves this with a number of “mechanisms”, which he emphasises are…

[not] prescriptions that, if followed, will result in the same long-term systemic improvements.

Instead they are examples of actions intended to not simply empower people, or even remove things that disempower people (which both still imply a Leader-Follower structure where the Leader has power over the Follower), but to emancipate them.

With emancipation we are recognising the inherent genius, energy, and creativity in all people, and allowing those talents to emerge. We realise that we don’t have the power to give these talents to others, or “empower” them to use them, only the power to prevent them from coming out. Emancipation results when teams have been given decision-making control and have the additional characteristics of competence and clarity. You know you have an emancipated team when you no longer need to empower them. Indeed, you no longer have the ability to empower them because they are not relying on you as their source of power.

That’s what Strategy Deployment should strive for as organisational improvement that allows solutions to emerge from those closest to the problem. Or to paraphrase that definition using David Marquet’s words, achieving organisational excellence by moving decision authority to the information.


A Strategy Deployment Cadence

A Strategy Deployment cadence is the rhythm with which you plan, review and steer your organisational change work. When I blogged about cadence eight years ago I said that…

“cadence is what gives a team a feeling of demarcation, progression, resolution or flow. A pattern which allows the team to know what they are doing and when it will be done.”

For Strategy Deployment, the work tends to be more about organisational improvement than product delivery, although the two are rarely mutually exclusive.

The following diagram is an attempt to visualise the general form of cadence I recommend to begin with. While it implies a 12 month cycle (and it usually is one to begin with), I am reminded of the exchange between the Fisherman and the Accountant that Bjarte Bogsnes describes in his book “Implementing Beyond Budgeting”. It goes something like this.

  • Accountant: What do you do?
  • Fisherman: I’m at sea for 5 months, and then home for 5 months
  • Accountant: Hmm. What do you do the other 2 months?

The intent of this Strategy Deployment cadence is not to create another 12 month planning (or budgeting) cycle, but to have a cycle which enables a regular setting of direction and adjustment based on feedback. How often that happens should be contextual.

Given that caveat, let’s look at the diagram in more detail.

Planning & Steering

The primary high level cadences are planning and steering, which typically happen quarterly. This allows sufficient time for progress to be made without any knee-jerk reactions, but is often enough to make any necessary changes before it is too late. Planning and Steering are workshops where a significant group of people, representing both the breadth and depth of the organisation, come together to bring in as much diversity of experience and opinion as possible. This means that all departments are involved, and all levels of employee, from senior management to front line workers. I have previously described how we did annual and quarterly planning at Rally.

Planning is an annual 2 day event which answers the following questions:

  • What is our current situation?
  • What aspirations define our ambition?
  • What are the critical challenges or opportunities to address?
  • What strategies should we use to guide decisions?
  • What evidence will indicate our strategies are having an impact?
  • What tactics should we invest in first?

Steering is then a 1 day quarterly event which answers the following questions:

  • What progress have we made towards our aspirations?
  • How effective have our strategies been?
  • What evidence have we seen of improvement?
  • What obstacles are getting in our way?
  • What tactics should we invest in next?

Thus annual planning is a deep diagnosis and situational assessment of the current state to set direction, aspirations and strategy, while quarterly steering is more of a realignment and adjustment to keep on track.


The high level tactics chosen to invest in during planning and steering form the basis of continuous learning in Strategy Deployment. It is these tactics for which Backbriefing A3s can be created, and which generate the information and feedback which ultimately informs the steering events. They also form the premise for the more detailed experiments which are the input into the Review events. In the diagram, the large Learn spirals represent these high level tactics, indicating that they are both iterative and parallel.


Reviewing

Reviewing is a more frequent feedback cadence that typically happens on a monthly basis. This allows each tactic enough time to take action and make progress, while still being often enough to maintain visibility and momentum. Reviewing is usually a shorter meeting, attended by the tactics’ owners (and other key members of the tactical teams) to reflect on their work so far (e.g. their Backbriefing A3s and Experiment A3s). In doing so, the Review answers the following questions:

  • What progress have we made?
  • Which experiments have we completed? (and what have we learned?)
  • Which experiments require most attention now?
  • Which experiments should we begin next?


Refreshing is an almost constant cadence of updating and sharing the evidence of progress. This provides quick and early feedback on the results and impact of the actions defined in Experiment A3s. Refreshing is not limited to a small group, but should be available to everyone. As such, it can be implemented through the continuous update of a shared dashboard, or the use of a Strategy Deployment “war room” (sometimes called an Obeya). Thus the Refresh can be built into existing events such as daily stand-up meetings or can be a separate weekly event to answer the following questions:

  • What new evidence do we have?
  • What evidence suggests healthy progress towards our aspirations?
  • What evidence suggests intervention is needed to meet our aspirations?
  • Where do we need to focus and learn more?

In the diagram, the smaller Refresh spirals represent the more detailed experiments, again indicating that they are both iterative and parallel.

The X-Matrix

At the heart of the diagram is an X representing the X-Matrix, which provides the common context for all the cadences. As I have already said, the exact timings of the various events should be tuned to whatever makes most sense. The goal is to provide a framework within which feedback can easily pass up, down and around the organisation, from the quarterly planning and steering to the daily improvement work and back again. This builds on the ideas I first illustrated in the post on the dynamics of strategy deployment and the framework becomes one within which to play Catchball.

What I have tried to show here is that Strategy Deployment is not a simple, single PDSA cycle, but rather multiple parallel and nested PDSA cycles, with a variety of both synchronous and asynchronous cadences. Each event in the cadence is an opportunity to collaborate, learn, and co-create solutions which emerge from the people closest to the problem.

If you are interested in learning more about how I can help your organisation introduce a Strategy Deployment cadence and facilitate the key events, please contact me and I’d be happy to talk.

What is an X-Matrix?

An X-Matrix is a template used in organisational improvement that concisely visualises the alignment of an organisation’s True North, Aspirations, Strategies, Tactics and Evidence on a single piece of paper, usually A3 size (29.7 x 42.0cm, 11.69 x 16.53 inches).

The main elements can be described as follows:

  • True North – the orientation which informs what should be done. This is more of a direction and vision than a destination or future state. Decisions should take you towards rather than away from your True North.
  • Aspirations – the results we hope to achieve. These are not targets, but should reflect the size of the ambition and the challenge ahead.
  • Strategies – the guiding policies that enable us. This is the approach to meeting the aspirations by creating constraints which will enable decisions on what to do.
  • Tactics – the coherent actions we will take. These represent the hypotheses to be tested and the work to be done to implement the strategies in the form of experiments.
  • Evidence – the outcomes that indicate progress. These are the leading indicators which provide quick and frequent feedback on whether the tactics are having an impact on meeting the aspirations.

The alignment of all these elements is shown through the 4 matrices in the corners of the template, which form an X and thus give the format its name. Each of the cells in the matrices indicates the strength of correlation (e.g. strong, weak, none) between the various pairs of elements, forming a messy coherence of the whole approach.
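To make the corner matrices concrete, here is a minimal sketch of an X-Matrix as a data structure. This is purely illustrative (the real artefact is a collaboratively completed A3 sheet, not code), and the element names and correlations are hypothetical examples, not taken from any real matrix.

```python
# Illustrative sketch: an X-Matrix's elements plus the corner-matrix
# correlations between pairs of items. Names here are hypothetical.

class XMatrix:
    def __init__(self):
        # The four kinds of elements arranged around the X.
        self.elements = {"aspirations": [], "strategies": [], "tactics": [], "evidence": []}
        # Corner-matrix cells: unordered pair of items -> "strong" | "weak".
        # Absence of an entry means no correlation.
        self.correlations = {}

    def add(self, kind, name):
        self.elements[kind].append(name)

    def correlate(self, a, b, strength):
        assert strength in ("strong", "weak")
        self.correlations[frozenset((a, b))] = strength

    def strength(self, a, b):
        # Returns "strong", "weak", or None (no correlation marked).
        return self.correlations.get(frozenset((a, b)))

x = XMatrix()
x.add("strategies", "Reduce batch size")
x.add("tactics", "Introduce WIP limits")
x.correlate("Reduce batch size", "Introduce WIP limits", "strong")
print(x.strength("Reduce batch size", "Introduce WIP limits"))  # → strong
```

The point of the structure is that correlations are pairwise and many-to-many, which is what gives the X-Matrix its messy coherence rather than a strict hierarchy.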

Completing an X-Matrix is a collaborative process of co-creation and clarification to get everyone literally on the same page about the work that needs to be done to succeed. Used to its full potential, the X-Matrix can become a central piece in Strategy Deployment, helping to guide discussions and decisions about what changes to make, why to make them, and how to assess them.

You can download a copy of my X-Matrix template, along with some related ones. Like most A3 formats, there are many variations available, and you will find that a lot of other X-Matrix versions have an additional Teams section on the right hand side. My experience so far has been that this adds only marginal value, so I have chosen not to include it.

If you would like help in using the X-Matrix as part of a Strategy Deployment improvement approach, please contact me to talk.

Strategy Deployment and OKRs

This is another post in my series comparing Strategy Deployment with other approaches, with the intent to show that there are many ways of approaching it and to highlight the common ground.

In May this year Dan North published a great post about applying OKRs – Objectives and Key Results. I’ve been aware of OKRs for some time, and we experimented with them a bit when I was at Rally, and Dan’s post prompted me to re-visit them from the perspective of Strategy Deployment.

To begin with…

Lets look at the two key elements of OKRs:

  • Objectives – these are what you would like to achieve. They are where you want to go. In Strategy Deployment terms, I would say they are the Tactics you will focus on.
  • Key Results – these are the numbers that will change when you have achieved an Objective. They are how you will know you are getting there. In Strategy Deployment terms I would say they are the Evidence you will look for.

OKRs are generally defined quarterly at different levels, from the highest organisational OKRs, through department and team OKRs, down to individual OKRs. Each level’s OKRs should reflect how they will achieve the next level up’s OKRs. They are not handed down by managers, however, but are created collaboratively through challenging and negotiating, and they are transparent and visible to everyone. At the end of the quarter they are then scored at every level as a way of assessing, learning and steering.
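The end-of-quarter scoring can be sketched in a few lines. This assumes the common convention of grading each key result as a fraction achieved (0.0 to 1.0) and averaging to score the objective; the objective, key results and numbers below are entirely hypothetical.

```python
# Illustrative sketch of quarterly OKR scoring, assuming the common
# grade-each-key-result-then-average convention. Example data is made up.

def score_objective(key_results):
    """key_results: dict of key-result name -> fraction achieved (0.0-1.0)."""
    if not key_results:
        return 0.0
    return sum(key_results.values()) / len(key_results)

okr = {
    "objective": "Improve release cadence",
    "key_results": {
        "Cut lead time from 6 weeks to 2": 0.7,
        "Ship a releasable increment every sprint": 0.5,
    },
}
print(round(score_objective(okr["key_results"]), 2))  # → 0.6
```

The score itself matters less than the conversation it prompts at the next planning cycle, which is what makes the cadence a steering mechanism rather than a performance review.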

As such, OKRs provide a mechanism for both alignment & autonomy, and I would say that the quarterly cascading of Objectives, measured by Key Results, could be considered to be the simplest form of Strategy Deployment and a very good way of bootstrapping a Strategy Deployment approach.

Having said that…

There are a few things about OKRs that I’m unsure about, and that I miss from the X-Matrix model.

It seems to me that while OKRs focus on a quarterly, shorter term time horizon, the longer term aspirations and strategies are only implied in the creation of top level OKRs and the subsequent cascading process. If those aspirations and strategies are not explicit, is there a risk that the detailed, individual OKRs end up drifting away from the original intent?

This is amplified by the fact that OKRs naturally form a one-to-many hierarchical structure through decomposition, as opposed to the messy coherence of the X-Matrix. As the organisational OKRs cascade their way down to individual OKRs, there is also a chance that as they potentially drift away from the original intent, they also begin to conflict with each other. What is to stop one person’s key results being diametrically opposed to someone else’s?

Admittedly, the open and collaborative nature of the process may guard against this, and the cascading doesn’t have to be quite so linear, but it still feels like an exercise in local optimisation. If each individual meets their objectives, then each department and team will meet their objectives, and thus the organisation will meet its objectives. Systems Thinking suggests that rather than optimising the parts like this, we should look to optimise the whole.

In summary…

OKRs do seem like a simple form of organisational improvement in which solutions emerge from the people closest to the problem. I’m interested in learning more about how the risks I have highlighted might be mitigated. I can imagine how OKRs could be blended with an X-Matrix as a way of doing this, where Objectives map to shared Tactics and Key Results map to shared Evidence.

If you have any experience of OKRs, let me know your feedback in the comments.

What’s the Difference Between a Scrum Board and a Kanban Board?

During a recent kanban training course, this question came up and my simple answer seemed to be a surprise and an “aha” moment. I tweeted an abbreviated version of the question and answer, and got lots of interesting and valid responses. Few of the responses really addressed the essence of my point, however, so this post is a more in-depth answer.

When the question came up, I began by drawing out a very generic “Scrum Board” as below:

Everyone agreed that this visualises the basic workflow of a Product Backlog Item, from being an option on the Product Backlog, to being planned into a Sprint and on the Sprint Backlog, to being worked on and In Progress, to being ready for Accepting by the Product Owner, and finally to being Done.

To convert this into a Kanban Board I simply added a number (OK, two numbers), representing a WIP limit!

And that’s it. While there are many other things that might be different on a Kanban Board, to do with the flow of the work or the nature of the work, the most basic difference is the addition of a WIP limit. Or to be more pedantic, an explicit WIP limit, and as was also pointed out (although I can’t find the tweet now), it’s actually creating a pull signal.

Thus, the simple answer to the question about the difference between a Scrum board and a Kanban board is that a Kanban Board has WIP Limits, and the more sophisticated answer is that a Kanban Board signals which work can and should be pulled to maintain flow.
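The pull signal can be sketched in code: a column with spare capacity under its WIP limit is exactly what signals that work can be pulled. This is a minimal illustration, not a board implementation; the column names, limit and items are made up.

```python
# Illustrative sketch: a WIP limit turns a board column into a pull signal.
# Work may only move into a column while it is below its limit.

class Column:
    def __init__(self, name, wip_limit=None):
        self.name = name
        self.wip_limit = wip_limit  # None means unlimited (e.g. a backlog)
        self.items = []

    def can_pull(self):
        # Spare capacity under the WIP limit is the pull signal.
        return self.wip_limit is None or len(self.items) < self.wip_limit

def pull(from_col, to_col):
    """Move the next item downstream only when the pull signal is present."""
    if to_col.can_pull() and from_col.items:
        to_col.items.append(from_col.items.pop(0))
        return True
    return False  # no pull signal: the WIP limit has been reached

backlog = Column("Sprint Backlog")
in_progress = Column("In Progress", wip_limit=2)
backlog.items = ["A", "B", "C"]

print(pull(backlog, in_progress))  # → True
print(pull(backlog, in_progress))  # → True
print(pull(backlog, in_progress))  # → False (WIP limit of 2 reached)
```

Nothing here is Kanban-specific machinery; which is the point, since a Scrum team could add exactly this constraint to its existing board.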

Of course, this isn’t to say that Scrum teams can’t have WIP limits, or pull work, or visualise their work in more creative ways. They can, and probably should! In fact the simplicity of just adding a WIP limit goes to show how compatible Scrum and Kanban can be, and how they can be stronger together.