Agendashift, Cynefin and the Butterfly Stamped

[Image: The butterfly who stamped]

I’ve recently become an Agendashift partner and have enjoyed exploring how this inclusive, contextual, fulfilling, open approach fits with how I use Strategy Deployment.

Specifically, I find that the Agendashift values-based assessment can be a form of diagnosis of a team or organisation’s critical challenges, in order to agree guiding policy for change and focus coherent action. I use those italicised terms deliberately as they come from Richard Rumelt’s book Good Strategy/Bad Strategy, in which he defines a good strategy kernel as containing those key elements. I love this definition as it maps beautifully onto how I understand Strategy Deployment, and I intend to blog more about this soon.

In an early conversation with Mike when I was first experimenting with the assessment, we were exploring how Cynefin relates to the approach, and in particular the fact that not everything needs to be an experiment. This led to the idea of using the Agendashift assessment prompts as part of a Cynefin contextualisation exercise, which in turn led to the session we ran together at Lean Agile Scotland this year (also including elements of Clean Language).

My original thought had been to try something even more basic though, using the assessment prompts directly in a method that Dave Snowden calls “and the butterfly stamped”, and I got the chance to give that a go last week at Agile Northants.

The exercise – sometimes called simply Butterfly Stamping – is essentially a Four Points Contextualisation in which the items being contextualised are provided by the facilitator rather than generated by the participants. In this case those items were the prompts used in the Agendashift mini assessment, which you can see by completing the 2016 Agendashift global survey.

This meant that as well as learning about Cynefin and Sensemaking, participants were able to have rich conversations about their contexts and how well they were working, without getting stuck on what they were doing and what tools, techniques and practices they were using. Feedback was very positive, and you can see some of the output in this tweet:

I hope we can turn this into something that can be easily shared and reused. Let me know if you’re interested in running it at your event. And watch this space!

How Rally Does… Strategy Deployment

This is another post originally published on the Rally Blog which I am reposting here to keep an archived copy. It was part of the same series as the one on annual and quarterly planning, in which we described various aspects of the way the business was run. Again, apart from minor edits to help it make sense as a stand-alone piece, I have left the content as it was.


Strategy Deployment is sometimes known as Hoshin Kanri, and like many Lean concepts, it originated from Toyota. Hoshin Kanri is a Japanese term whose literal translation can be paraphrased as “compass control.” A more metaphorical interpretation, provided by Pascal Dennis in Getting the Right Things Done, is that of a “ship in a storm going in the right direction.”

[Image: Compass]

Strategy Deployment is about getting everyone involved in the focus, communication, and execution of a shared goal. I described in previous posts how we collaboratively came up with strategies and an initial plan in the form of an X-matrix. The tool that we use for the deployment is the Strategic A3.

Strategic A3s

A3 refers to the size of the paper (approximately 11 x 17 inches) used by a number of different formats to articulate and communicate something in a simple, readable way on a single sheet of paper. Each rock or departmental team uses a Strategic A3 to describe its plan. This forms the basis for their problem-solving approach by capturing all the key hypotheses and results, which helps identify the opportunities for improvement.

The different sections of the A3 tell a story about the different stages of the PDSA cycle (Plan, Do, Study, Adjust). I prefer this latter formulation from Dr. W. Edwards Deming to the original PDCA (Plan, Do, Check, Act) of Walter A. Shewhart, because “Study” places more emphasis on learning and gaining knowledge. Similarly, “Adjust” implies feedback and iteration more strongly than does “Act.”

This annual Strategic A3 goes hand-in-hand with a macro, longer-term (three- to five-year) planning A3, and numerous micro, problem-solving A3s.

Anatomy of a Strategic A3

This is what the default template that we use looks like. While it is often good to work on A3s using pencil and paper, for wider sharing across the organisation we’ve found that using a Google document works well too.

[Image: Strategic A3 template]

Each A3 has a clear topic, and is read in a specific order: down the left-hand side, and then down the right-hand side. This flow aligns with the ORID approach (Objective, Reflective, Interpretive, Decisional), which helps avoid jumping to conclusions too early.

The first section looks at prior performance, gaps, and targets, which give objective data on the current state. Targets are a hypothesis about what we would like to achieve, and performance shows the actual results. Over time, the gap between the two gives an indication of what areas need investigation and problem-solving. The next section gives the reactions to, and reflections on, the objective data. This is where emotions and gut feelings are captured. Then comes interpretation of the data and feelings to give some rationale with which to make a plan.

The three left-hand sections help us look back into the past, before we make any decisions about what we should do in the future. Having completed that, we have much better information with which to complete the action plan, adding high-level focus and outcomes for each quarter. The immediate quarter will generally have a higher level of detail and confidence, with each subsequent quarter becoming less granular. Finally, the immediate next steps are captured and any risks and dependencies are noted so that they can be shared and managed.

Co-creating a Strategic A3

As you can probably imagine from reading the previous posts, completing a Strategic A3 can be a highly collaborative, structured, and facilitated process. One team with which I work closely had recently grown to a point where we would benefit from our own Strategic A3, rather than being part of a larger, international one. We felt that this would allow us to align more strongly with the corporate strategy and communicate more clearly what we were doing, and where we needed help. To create it, we all got together for a day in our Amsterdam office.

We began by breaking into small groups of three to four people, mostly aligned around a regional territory. These groups spent some time filling in their own copy of the A3 template. We then reconvened, and each group gave a readout of its discussions, presenting the top three items from each section, which we captured with post-it notes on flip charts. Having gone around each group, I then asked everyone to silently theme the post-its in each section until everyone seemed happy with the results. This led to a discussion about each theme and identifying titles for them. We still had quite a few themes, so we finished off by ranking them with dot-voting so that we could be clear on which items were most important.

Our last step was to identify the top three items on the A3 that we wanted to highlight to the wider business. This turned out to be a relatively simple conversation. The collaborative nature of the process meant that everyone had a clear and shared understanding of what was important and where we needed focus.


Corporate Steering

Strategy deployment is not a one-off, top-down exercise. Instead, the Strategic A3 is used as a simple tool that involves everyone in the process. Teams prepare and plan their work, in line with the corporate goals, and each quarter they revisit and revise their A3s as a means of communicating status and progress. As performance numbers become available, an A3 will be updated with any changes highlighted, and the updated A3 then becomes a key input into Quarterly Steering.

Strategy Deployment and Directed Opportunism

A fourth post exploring the relationship between Strategy Deployment and other approaches (see Strategy Deployment and Fitness for Purpose, Strategy Deployment and Agendashift, and Strategy Deployment and Spotify Rhythm).

Directed Opportunism is the approach described by Stephen Bungay in his book The Art of Action, in which he builds on the ideas of Field Marshal Helmuth von Moltke, Chief of Staff of the Prussian Army for 30 years from 1857, and applies them to leading businesses. This also follows on from the earlier post on alignment and autonomy in Strategy Deployment.

Bungay starts by describing three gaps between desired Outcomes, the Plans made to achieve them, and the Actions taken, which create actual Outcomes. These gaps (the Knowledge Gap, Alignment Gap and Effects Gap) are shown in the diagram below, and together cause organisational friction – resistance of the organisation to meeting its goals.

[Image: Gaps]

Given this model, Bungay explains how the usual approach to reducing this friction, and closing the gaps, is to attempt to reduce uncertainty by pursuing more detail and control, as shown below.

[Image: More Detail]

This generally makes the situation worse, however, because the problem is not linear, reductionistic or deterministic. In Cynefin terms, this is a Complicated approach in a Complex domain. Instead, Bungay recommends reducing detail and control and allowing freedom to evolve with feedback. This is what he calls Directed Opportunism.

[Image: Less Detail]

This definition of Directed Opportunism seems to me to meet my definition of Strategy Deployment as a form of organisational improvement in which solutions emerge from the people closest to the problem. There is clear communication of intent (the problem), with each level (the people closest) defining how they will achieve the intent (the solution) and having freedom to adjust in line with the intent (the emergence).

From an X-Matrix perspective, being clear on results, strategies and outcomes limits direction to defining and communicating intent, and leaving tactics to emerge (through Catchball) allows different levels to define how they will achieve the intent and gives them freedom to adjust actions in line with the intent.

Strategy Deployment and Spotify Rhythm

This is the third in what has turned into a mini-series exploring the relationship between Strategy Deployment and other approaches (see Strategy Deployment and Fitness for Purpose and Strategy Deployment and Agendashift).

Last month, Henrik Kniberg posted slides from a talk he gave at Agile Sverige on something called Spotify Rhythm, which he describes as “Spotify’s current approach to getting aligned as a company”. While looking through the material, it struck me that what he was describing was a form of Strategy Deployment. This interpretation is based purely on those slides – I haven’t had a chance yet to explore this more deeply with Henrik or anyone else from Spotify. I hope I will some day, but given that caveat, here’s how I currently understand the approach in terms of the X-Matrix Model.

Spotify Rhythm: Taxonomy

The slides present the following “taxonomy” used in “strategic planning”:

Company Beliefs – While this isn’t something I talk about specifically, the concept of beliefs (as opposed to values) does tie in nicely with the idea that Strategy Deployment involves many levels of nested hypotheses and experimentation (as I described in Dynamics of Strategy Deployment). Company Beliefs could be considered to be the highest-level, and therefore probably strongest, hypotheses.

North Star & 2-Year Goals – A North Star (sometimes called True North) is a common Lean concept (and one I probably don’t talk about enough with regard to Strategy Deployment). It is an overarching statement about a vision of the future, used to set direction. Decisions can be made based on whether they will move the organisation towards (or away from) the North Star. Strategy Deployment is ultimately all in pursuit of enabling the organisational alignment and autonomy which will move it towards the North Star. Given that, the 2-Year Goals can be considered as the Results that moving towards the North Star should deliver.

Company Bets – The Company Bets are the “Big Bets” – “large projects” and “cross-organisation initiatives”. While these sound like high-level tactics, I wonder whether they can also be considered to be the Strategies. As mentioned already, Strategy Deployment involves many levels of nested hypotheses and experimentation, and therefore Strategy is a Bet in itself (as are Results, and also Beliefs).

Functional & Market Bets – If the Company Bets are about Strategy, then the Functional and Market Bets are the Tactics implemented by functional or market related teams.

DIBB – DIBB is a framework Spotify use to define bets and “make the chain of reasoning explicit” by showing the relationships between Data, Insights, Beliefs and Bets. Part of that chain of reasoning involves identifying success metrics for the Bets, or in other words, the Outcomes which will indicate if the Bet is returning a positive payoff.
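As an aside, and purely as an illustrative sketch rather than anything Spotify publish, the DIBB chain can be written down as a simple structure. The plain Python below uses field names taken from the acronym and example content of my own:

from dataclasses import dataclass
from typing import List

@dataclass
class Bet:
    name: str
    data: List[str]             # observations that ground the reasoning
    insights: List[str]         # what we think the data means
    beliefs: List[str]          # the convictions the insights support
    success_metrics: List[str]  # the Outcomes that show a positive payoff

example = Bet(
    name="An example bet",
    data=["usage is growing fastest in one market segment"],
    insights=["that segment is underserved by the current offering"],
    beliefs=["serving it better will drive long-term growth"],
    success_metrics=["segment retention", "weekly active usage"],
)

# Walk the chain of reasoning in DIBB order.
for step in ("data", "insights", "beliefs", "success_metrics"):
    print(f"{step}: {'; '.join(getattr(example, step))}")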

While this isn’t an exact and direct mapping, it feels close enough to me. One way of checking alignment would be the ability for anyone to answer some simple questions about the organisation’s journey. I can imagine how Spotify Rhythm provides clarity on how to answer these questions.

  • Do you know where you are heading? North Star
  • Do you know what the destination looks like? 2 Year Goals (Results)
  • Do you know how you will get there? Company Bets (Strategies)
  • Do you know how you will track progress? DIBBs (Outcomes)
  • Do you know how you will make progress? Functional & Market Bets (Tactics)

One final element of Spotify Rhythm which relates to Strategy Deployment is implied in its name – the cadence with which the process runs. Company Bets are reviewed every quarter by the Strategy Team (another reason why they could be considered to be Strategies) and the Functional and Market Bets – also called TPD (Tech-Product-Design) Bets – are reviewed every 6 weeks.

I’d be interested in feedback on alternative interpretations of Spotify Rhythm. Or if you know more about it than I do, please correct anything I’ve got wrong!

Strategy Deployment and Agendashift

Agendashift is the approach used by Mike Burrows, based on his book Kanban from the Inside, in which he describes the values behind the Kanban Method. You can learn more by reading Mike’s post Agendashift in a nutshell. As part of his development of Agendashift, Mike has put together a values-based delivery assessment, which he uses when working with teams. Again, I recommend reading Mike’s posts on using Agendashift as a coaching tool and debriefing an Agendashift survey if you are not familiar with Agendashift.

After listening to Mike talk about Agendashift at this year’s London Lean Kanban Day, I began wondering how his approach could be used as part of a Strategy Deployment workshop. I was curious what would happen if I used the Agendashift assessment to trigger the conversations about the elements of the X-Matrix model. Specifically, how could it be used to identify change strategies, and the associated desired outcomes, in order to frame tactics as hypotheses and experiments? Mike and I had a few conversations, and it wasn’t long before I had the opportunity to give it a go. This is a description of how I went about it.

Assessment & Analysis

The initial assessment followed Mike’s post, with participants working through individual surveys before spending time analysing the aggregated results and discussing strengths, weaknesses, convergence, divergence and importance.

Strategies

Having spent some time having rich conversations about current processes and practices, triggered by exploring various perspectives suggested by the survey prompts and scores, the teams had some good insights about what they considered to be their biggest problems worth solving and which required most focus. Agreeing on which key problems need solving can be thought of as agreeing the key strategies for change.

This is where I broke away from Mike’s outline, in order to first consider strategies. I asked the participants to silently and individually come up with 2 to 3 change strategies each, resulting in around 20-30 items, which we then collectively grouped into similar themes to end up with 5-10 potential strategies. Dot voting (with further discussion) then reduced this down to the 3 key change strategies which everyone agreed on.

To give some examples (which I have simplified and generalised), we had strategies around focussing on collaboration, communication, quality, product and value.

Outcomes

Having identified these key strategies, the teams could then consider what desired outcomes they hoped would be achieved by implementing them. By answering the questions “what would we like to see or hear?” and “what would we measure?”, the teams came up with possible indicators, both qualitative and quantitative, which might show whether the strategies, and ultimately the tactics, were working.

Taking the 3 key strategies, I asked small groups of 3-5 people to consider the outcomes they hoped to achieve with those strategies, and then consolidated the output. One reassuring observation from this part of the workshop was that some common outcomes emerged across all the strategies. This meant that there were many-to-many correlations between them, suggesting a messy coherence, rather than a simplistic and reductionist relationship.

Some examples of outcomes (again simplified and generalised) were related to culture, responsiveness, quality, understanding and feedback.

Hypotheses

Once we have strategies and outcomes, the next step is to create some hypotheses for what tactics might implement the strategies to achieve the outcomes. To do this I tweaked Mike’s hypothesis template, and used this one:

We believe that <change>

implements <strategies>

and will result in <outcomes>

With this template, the hypotheses are naturally correlated with both strategies and outcomes (where the outcomes already consist of both subjective observations and objective measures).

I asked each participant to come up with a single hypothesis, creating a range of options from which to begin defining experiments.

For example (vastly simplified and generalised!):

We believe that a technical practice

implements a quality related strategy

and will result in fewer defects
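For anyone who would rather capture these in a tool than on sticky notes, the template is simple enough to script. Here is a minimal sketch (plain Python, with function and parameter names of my own invention) that fills in the same template and reproduces the example above:

def hypothesis(change, strategies, outcomes):
    # Fill in the tweaked hypothesis template with the three elements.
    return (f"We believe that {change}\n"
            f"implements {strategies}\n"
            f"and will result in {outcomes}")

print(hypothesis("a technical practice",
                 "a quality related strategy",
                 "fewer defects"))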

Actions

This was as far as we got in the time available, but I hope it’s clear that once we have hypotheses like this we can start creating specific experiments with which to move into action, with the possibility that each hypothesis could be tested with multiple experiments.

Results

While we didn’t formally go on to populate an X-Matrix, we did have most of the main elements in place – strategies, outcomes and tactics (if we consider tactics to be the actions required to test hypotheses) – along with the correlations between them. Although we didn’t discuss end results in this instance, I don’t believe it would take much to make those explicit, and come up with the correlations to the strategies and outcomes.

On a recent call with Mike he described Agendashift in terms of making the agenda for change explicit. I think that also nicely describes Strategy Deployment, and why I think there is a lot of overlap. Strategy Deployment makes the results, strategies, outcomes and tactics explicit, along with the correlations and coherence between them, and it seems that Agendashift is one way of going about this.

Strategy Deployment and Fitness for Purpose

David Anderson defines fitness for purpose in terms of the “criteria under which our customers select our service”. Through this lens we can explore how Strategy Deployment can be used to improve fitness for purpose by having alignment and autonomy around what the criteria are and how to improve the service.

In the following presentation from 2014, David describes Neeta, a project manager and mother who represents two market segments for a pizza delivery organisation.

As a project manager, Neeta wants to feed her team. She isn’t fussy about the toppings as long as the pizza is high quality, tasty and edible. Urgency and predictability are less important. As a mother, Neeta wants to feed her children. She is fussy about the toppings (or her children are), but quality is less important (because the children are less fussy about that). Urgency and predictability are more important. Thus fitness for purpose means different things to Neeta, depending on the market segment she is representing and the jobs to be done.

We can use this pizza delivery scenario to describe the X-Matrix model and show how the ideas behind fitness for purpose can be used with it.

Results

Results describe what we want to achieve by having fitness for purpose, or alternatively, they are the reasons we want (and need) to improve fitness for purpose.

Given that this is a pizza delivery business, it’s probably reasonable to assume that the number of pizzas sold would be the simplest business result to describe. We could possibly refine that to the number of orders, or the number of customers. We might even want a particular number of return customers or repeat business to be successful. At the same time, operational costs would probably be important.

Strategies

Strategies describe the areas we want to focus on in order to improve fitness for purpose. They are the problems we need to solve which are stopping us from having fitness for purpose.

To identify strategies we might choose to target one of the market segments that Neeta represents, such as family or business. This could lead to strategies to focus on things like delivery capability, or menu range, or kitchen proficiency.

Outcomes

Outcomes describe what we would like to happen when we have achieved fitness for purpose. They are things that we want to see or hear, or which we can measure, which indicate that the strategies are working and provide evidence that we are likely to deliver the results.

If our primary outcome is fitness for purpose, then we can use fitness for purpose scores, along with other related leading indicators such as delivery time, reliability, complaints and recommendations.

Tactics

Tactics describe the actions we take in order to improve fitness for purpose. They are the experiments we run in order to evolve towards successfully implementing the strategies, achieving the outcomes and ultimately delivering the results. Alternatively they may help us learn that our strategies need adjusting.

Given strategies for improving fitness for purpose based around market segments, we might try new forms of delivery, different menus or ingredient suppliers, or alternative cooking techniques.

Correlations

I hope this shows, using David’s pizza delivery example, how fitness for purpose provides a frame through which to view Strategy Deployment. The X-Matrix model can be used to tell a coherent story about how all these elements – results, strategies, outcomes and tactics – correlate with each other. Clarity of purpose, and of what it means to be fit for purpose, enables alignment around the chosen strategies and desired outcomes, such that autonomy can be used to experiment with tactics.
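Purely as an illustration of how these four element types hang together, here is the pizza example captured as a simple structure. This is a sketch in plain Python using the examples above, not a prescribed format:

pizza_x_matrix = {
    "results": ["pizzas sold", "repeat business", "operational costs"],
    "strategies": ["delivery capability", "menu range", "kitchen proficiency"],
    "outcomes": ["fitness for purpose scores", "delivery time", "reliability",
                 "complaints", "recommendations"],
    "tactics": ["new forms of delivery", "different menus",
                "new ingredient suppliers", "alternative cooking techniques"],
}

# Read the story from results down to tactics.
for element in ("results", "strategies", "outcomes", "tactics"):
    print(f"{element}: {', '.join(pizza_x_matrix[element])}")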

The X-Matrix Strategy Deployment Model

Behind the X-Matrix sits a model for Strategy Deployment that is worth explaining in more detail, as a way of understanding why the X-Matrix is designed the way it is, and how to use it. It is built around describing four types of elements – which I call results, strategies, outcomes and tactics – and how they fit together.

Before we start, let’s get the George Box aphorism out of the way:

All models are wrong; some models are useful

Results
Results represent the organisational impact you want to have. They are lagging indicators, success or failure only being declared at the end of the journey. They usually reflect the nature of the business and its economics.

The Results are implemented by Strategies

Strategies
Strategies are constraints which guide how you achieve the results. They are enabling, allowing a range of possible solutions (as opposed to governing, limiting to a specific solution). Thus they guide decisions on where to focus attention (and hence also where not to focus attention).

The Strategies lead to Outcomes

Outcomes
Outcomes provide evidence that the strategies are working. They are leading indicators of whether the results can be achieved ahead of time. They describe the capabilities that the organisation requires in order to be successful.

The right Outcomes will generate the successful Results.


Of course the Strategies don’t directly lead to Outcomes. Some form of action has to take place. Thus the Strategies are actually implemented by Tactics.

Tactics
Tactics are the activities that take place to implement change. They are experiments which test hypotheses on how to achieve the outcomes. They represent the investments in the improvement work that is being done.

Therefore, it is the Tactics that generate the Outcomes and ultimately lead to the Results.


For change to be successful, there should be a correlation between the various elements in this model (and it should be remembered that correlation is not causation). Each element will have some level of contribution to another. This will range from strong or direct, to weak or indirect, and there may sometimes be none. You could also say that the correlations are Probable, Possible, or Plausible. Altogether, there should be coherence (albeit messy) to the way all the elements fit together.

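To make the shape of the model concrete, here is a minimal sketch (plain Python, with made-up element names) of the four element types plus correlations of varying strength between them, together with a rough check that every element is connected to something:

results = ["Result A"]
strategies = ["Strategy 1", "Strategy 2"]
outcomes = ["Outcome X", "Outcome Y"]
tactics = ["Tactic i", "Tactic ii"]

# Correlations range from strong (direct) to weak (indirect); absent pairs have none.
correlations = {
    ("Strategy 1", "Result A"): "strong",
    ("Strategy 2", "Result A"): "weak",
    ("Tactic i", "Strategy 1"): "strong",
    ("Tactic ii", "Strategy 2"): "strong",
    ("Tactic i", "Outcome X"): "weak",
    ("Tactic ii", "Outcome Y"): "strong",
    ("Outcome X", "Result A"): "strong",
    ("Outcome Y", "Result A"): "weak",
}

def coherent(pairs):
    # A rough check: every element should correlate with at least one other.
    linked = {name for pair in pairs for name in pair}
    return all(e in linked for e in results + strategies + outcomes + tactics)

print("Messy but coherent?", coherent(correlations))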

By starting with Results, moving on to Strategies and Outcomes and leaving Tactics until last, there is a greater chance that the Tactics chosen are ones which do implement the Strategies and generate the Outcomes. The intent is to avoid premature convergence or retrospective coherence when identifying the Tactics. It is very easy to hastily jump to the wrong conclusions about what the Tactics should be, and then justify them based on the Strategies.

Even if you don’t use the X-Matrix explicitly, understanding this model can be useful for asking questions about change and improvement.

  • What end results are you hoping to achieve?
  • What are your strategies to deliver them?
  • What intermediate outcomes will show you are on the right path?
  • What tactics are you using to move forward?
  • How do all these pieces fit together?

If you can answer these questions, then you should be able to populate an X-Matrix. I will work through an example in a future post.

[Image: X-Matrix Simple]

Alignment and Autonomy in Strategy Deployment

Following on from my previous What is Strategy Deployment and Dynamics of Strategy Deployment posts, there is a model I like which I think helps to show how the mechanics and the dynamics work together.

In The Art of Action, Stephen Bungay describes how Field Marshal Helmuth von Moltke, Chief of Staff of the Prussian Army for 30 years from 1857, had an important insight regarding Alignment and Autonomy. Previously these two had been viewed as extremes at either end of a single scale. Having high alignment meant having no autonomy, because alignment could only be achieved through defining detailed plans which everyone should follow. Equally, high autonomy meant having no alignment, because autonomy would result in everyone doing their own thing with no regard for each other’s actions.

Von Moltke’s insight was that alignment and autonomy are not a single scale requiring a trade-off between the two ends, but two different axes which can actually reinforce each other. Thus not only is it possible to have both high alignment and high autonomy, but high alignment can enable high autonomy.

[Image: Alignment and Autonomy]

The key to making this possible is differentiating between intent and action. Alignment is achieved by clearly stating intent centrally, such that autonomy can be achieved by allowing action to be decentralised in support of the intent. This requires mechanisms to both clarify and amplify intent, and enable and encourage local action. Thus, using the definition of Strategy Deployment as “any form of organisational improvement in which solutions emerge from the people closest to the problem”, solving the problem is the intent, and the emerged solution is the action.

Using this model we can now describe two mechanisms necessary to make this happen. Alignment can be achieved with the X-Matrix, which enables the conversations about intent and summarises and visualises the results of those conversations. In other words, the X-Matrix shows how results, strategy, outcomes and tactics align and reinforce each other. Autonomy can be achieved through Catchball (Bungay describes the equivalent as back-briefing), which enables the X-Matrix to be passed around the organisation such that everyone can reflect, give feedback, and improve it, helping focus action on meeting the intent.

[Image: X-Matrix and Catchball]

Viewing Strategy Deployment in this light also highlights a symmetry with the Autonomy, Mastery and Purpose model of intrinsic motivation described by Dan Pink in his popular book Drive. Autonomy is a direct match in both models and purpose is equivalent to intent. Mastery is then the result of improving capability autonomously with strong alignment to intent.

[Image: Drive]

What this way of looking at Strategy Deployment shows is that both the X-Matrix and Catchball are necessary components. Just using the X-Matrix without Catchball will probably result in it being used as just another top-down document to command and control employees. Similarly, just using Catchball without an X-Matrix will probably result in collaboration around local improvements with no overall organisational improvement.

Dynamics of Strategy Deployment

Following on from my last post, and based on the feedback in the comments, I want to say more about the dynamics of Strategy Deployment.

The first point is to do with directionality. Strategy isn’t deployed by being pushed down from the top of the organisation, with the expectation that the right tactics simply need to be discovered. Rather, the strategy is proposed by a central group, so that decentralised groups can explore it and give feedback on the proposal. Thus information flows out from, and back to, the central group. Further, the decentralised groups are not formed down organisational structures, but are cross-organisational, so that information also flows between groups and divisions. The following picture tries to visualise this, where colours represent organisational divisions. Note also that some individuals are members of both a “deployed from” group and a “deployed to” group – the deployment isn’t a hand-off either.

[Image: Strategy Deployment Directionality]

This means that a Strategy Deployment can begin anywhere in the organisation, in any one of those groups, and by widening the deployment to more and more groups, greater alignment is achieved around common results, strategies, indicators and tactics.

That leads to the second point about emergence. In the same way that the tactical initiatives are hypotheses with experiments on how to implement the strategies, so the strategies themselves are also hypotheses. Tactics can also be viewed as experiments to learn whether the strategies are the best ones. In fact Strategy Deployment can be thought of as nested experimentation, where every PDSA “Do” has its own PDSA cycle.

[Image: Nested PDSA Cycles]
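As a purely illustrative sketch (plain Python, hypothetical content), the nesting can be pictured as a cycle whose “Do” is itself another cycle, so strategy-level experiments contain tactic-level experiments:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PDSA:
    plan: str
    do: Optional["PDSA"] = None   # the "Do" of one cycle can be a nested cycle
    study: str = ""
    adjust: str = ""

strategy_cycle = PDSA(
    plan="Strategy: a hypothesis about where to focus",
    do=PDSA(
        plan="Tactic: an experiment that implements the strategy",
        study="Did the outcomes move?",
        adjust="Keep, tweak or drop the tactic",
    ),
    study="Is the strategy still the right bet?",
    adjust="Amplify opportunities or dampen drawbacks",
)

# Walk down the nesting, one level per line.
level, cycle = 0, strategy_cycle
while cycle is not None:
    print("  " * level + cycle.plan)
    level, cycle = level + 1, cycle.do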

With regular and frequent feedback cycles from the experiments, looking at the current indicators and results, strategy can emerge as opportunities are identified and amplified, or drawbacks are discovered and dampened. In this way Strategy Deployment explores the evolutionary potential of the present rather than trying to close the gap towards a forecasted future.

These dynamics are often referred to as Catchball in the lean community, as ideas and learnings are tossed around the organisation between groups, with the cycle “Catch, Reflect, Improve, Pass”.

[Image: Catch, Reflect, Improve, Pass cycle]

I also like the LAGER mnemonic I mentioned in Strategy Deployment as Organisational Improv, which is another way of thinking about these dynamics.

What is Strategy Deployment?

[Image: Japanese ship in a storm]

I’ve been writing about Strategy Deployment a lot recently but realised that I haven’t properly defined what I mean by the term. Actually, I sort of did in my last post, so I’m going to repeat, expand and build on that here.

In a nutshell, Strategy Deployment is any form of organisational improvement in which solutions emerge from the people closest to the problem.

The name comes from the Japanese term, Hoshin Kanri, the literal translation of which is “Direction Management”, which suggests both setting direction and steering towards it. A more metaphorical translation is “Ship in a storm going in the right direction”. This brings to my mind the image of everyone using all their skills and experience to pull together, with a common goal of escaping turbulence and reaching safety.

Let’s look at the two elements, strategy and deployment, separately.

Strategy

Wikipedia defines strategy as

“a high level plan to achieve one or more goals under conditions of uncertainty”.

The emphasis is mine as these are the two key elements which indicate that a strategy is not a detailed plan with a known and predictable outcome.

Strategy to me is about improving and making significant breakthroughs in certain key competitive capabilities. I like Geoffrey Moore’s Hierarchy of Powers from Escape Velocity as a guide for exploring what those capabilities might be. This hierarchy is nicely summarised as

“Category Power (managing the portfolio of market categories in which a company is involved), Company Power (your status relative to competitors), Market Power (market share in your target segments), Offer Power (differentiation of your offering), and Execution Power (your ability to drive strategic transformation within your enterprise).”

As an aside, in this context, Agility as a Strategy can be thought of as primarily (although not exclusively) focussed on improving Execution and Offer Powers.

Determining Strategy as “a high level plan to achieve one or more goals under conditions of uncertainty”, therefore, involves setting enabling rather than governing constraints. Strategy should guide the creation of new solutions, and not control the implementation of existing solutions. It defines the how and not the what, the approach and not the tools.

Deployment

Merriam-Webster defines deploy as

“to spread out, utilize, or arrange for a deliberate purpose”.

In this context it is the strategy that is being utilised for the deliberate purpose of organisational improvement. Given that the strategy is “a high level plan to achieve one or more goals under conditions of uncertainty”, this means that the deployment is not the orchestration and implementation of a detailed plan.

Instead it requires a shift in the way organisations operate, from a mindset where management knows best and tells employees what to do, expecting them to act without thinking or asking questions, to one where management proposes direction and asks for feedback and enquiry. Instead of assuming that managers know the right answers as facts, the deployment of strategy assumes that any suggestions are simply opinions to be explored and challenged. Employees are allowed, and encouraged, to think for themselves, allowing for the possibility that they may turn out to be wrong, and making it acceptable for people to change their mind.

As another aside, this brings to mind a great Doctor Who quote from the latest season:

Strategy Deployment

Back to my original definition of “any form of organisational improvement in which solutions emerge from the people closest to the problem.”

Strategy Deployment is the creation of a high level plan for organisational improvement under conditions of uncertainty (the strategy), and the utilisation of that strategy by employees for a deliberate purpose (to achieve one or more goals). Clear communication of both the goals and the strategy, and constant collaboration across the whole organisation to use all the skills, knowledge and experience available, allows the appropriate tactics to emerge. In this way Strategy Deployment enables autonomy of teams and departments while maintaining alignment to the overall strategy and goals.

Note that I say any form. I don’t see Strategy Deployment as a specific method or framework, but more as a general approach or style. My preferred approach at the moment uses the X-Matrix, but I would also describe David Snowden’s Cynefin, David Anderson’s Kanban Method, Mike Burrows’ Agendashift and Jeff Anderson’s Lean Change Method as forms of Strategy Deployment. I’m hoping to explore the synergies more at Lean Kanban North America and the Kanban Leadership Retreat.