Strategy Deployment and Impact Mapping

I’ve had a couple of conversations in recent weeks in which Impact Mapping came up in relation to Strategy Deployment so here’s a post on my thoughts about how the two fit together.

An Impact Map is a form of mind map developed by Gojko Adzic, visualising the why, who, how and what of an initiative. More specifically, it shows the goals, the actors involved in meeting those goals, the desired impacts on the actors (in order to meet the goals), and the deliverables intended to create those impacts. The example below is from Gojko’s website.

As you can see, an Impact Map is a very simple, reductionist visualisation, from Goals down to Deliverables. The mind map format doesn’t entirely constrain this, but it tends to be what most examples I have seen look like. It does, however, work in such a way as to start with the core problem (meeting the goal) and allow people to explore and experiment with how to solve that problem via deliverables. This is very much in line with how I define Strategy Deployment.

Let’s see how that Impact Map might translate onto an X-Matrix.

The Goal is clearly an Aspiration, so any relevant measures would neatly fit into the X-Matrix’s bottom section. At the other end, the Deliverables are also clearly Tactics, and would neatly fit into the X-Matrix’s top section. I would also argue that the Impacts provide Evidence that we are meeting the Aspirations, and could fit into the X-Matrix’s right-hand section. What is not so clear is Strategy. I think the Actors could provide a hint, however, and I would suggest that an Impact Map is actually a good diagnosis instrument (as per Rumelt) with which to identify Strategy.

Taking the four levels of an Impact Map and transposing them onto an X-Matrix creates a view which is slightly less reductionist (although also not as simple), and opens up the possibility of seeing how all the different elements might relate to each other collectively. In the X-Matrix below I have added the nodes from the Impact Map above into their respective places, with direct correlations for the Impact Map relationships. This can be seen in the very ordered pattern of dots. New Tactics (Deliverables) and Evidence (Impacts), and possibly more Aspirations (Goals), would of course also need to be added for the other Strategies (Actors).
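To make that transposition concrete, here is a minimal sketch in Python, assuming only the mapping just described (Goals to Aspirations, Actors as hints for Strategies, Impacts to Evidence, Deliverables to Tactics). The example content and function names are invented for illustration – this is not Gojko’s example, nor any published tool.

```python
# Illustrative only: a made-up Impact Map and a helper that transposes its
# four levels onto X-Matrix sections.
IMPACT_MAP = {
    "goal": "Grow active users to 1 million",                     # why
    "actors": {                                                   # who
        "Super-fans": {                                           # how -> what
            "Invite their friends": ["Referral programme", "Easy sharing"],
        },
        "Casual players": {
            "Play one extra session per week": ["Push notifications"],
        },
    },
}

def impact_map_to_x_matrix(impact_map):
    """Transpose the four Impact Map levels onto X-Matrix sections."""
    x_matrix = {
        "aspirations": [impact_map["goal"]],  # Goal -> Aspiration
        "strategies": [],                     # Actors hint at Strategies (diagnosis)
        "evidence": [],                       # Impacts -> Evidence
        "tactics": [],                        # Deliverables -> Tactics
    }
    for actor, impacts in impact_map["actors"].items():
        x_matrix["strategies"].append(f"Strategy still to be diagnosed around: {actor}")
        for impact, deliverables in impacts.items():
            x_matrix["evidence"].append(impact)
            x_matrix["tactics"].extend(deliverables)
    return x_matrix

for section, items in impact_map_to_x_matrix(IMPACT_MAP).items():
    print(f"{section}: {items}")
```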

Even though this is a very basic mapping, I hope it’s not too difficult to see the potential to start exploring what other correlations might exist for the identified Tactics, and what the underlying Strategies really are. I leave that as an exercise for you to try – please leave a comment with what ideas you have!

This post is one of a series comparing Strategy Deployment and other approaches.


The Messy Coherence of X-Matrix Correlations

I promised to say more about correlations in my last post on how to TASTE Success with the X-Matrix.

One of the things I like about the X-Matrix is that it allows clarity of alignment, without relying on an overly analytical structure. Rather than consisting of simple hierarchical parent-child relationships, it allows more elaborate many-to-many relationships of varying types. This creates a messy coherence – everything fits together, but without too much neatness or precision.

This works through the shaded matrices in the corners of the X-Matrix – the ones that together form an X and give this A3 its name! Each cell in the matrices represents a correlation between two of the numbered elements. It’s important to emphasise that we are representing correlation, and not causation. There may be a contribution of one to the other, but it is unlikely to be exclusive or immediate. Thus, implementing Tactics collectively contributes towards applying Strategies and exhibiting Evidence. Similarly, applying Strategies and exhibiting Evidence both collectively contribute towards meeting Aspirations. What we are looking for is a messy coherence across all the pieces.

There are a few approaches I have used to describe different types of correlation.

  • Directness – Can a direct correlation be explained, or is the correlation indirect, via another factor (i.e. oblique)? This tends to be the easier one to be objective about.
  • Strength – Is there a strong correlation between the elements, or is the correlation weak? This tends to be harder to describe, because strong and weak are more subjective.
  • Likelihood – Is the correlation probable, possible or plausible? This adds a third option, and therefore another level of complexity, but the language can be useful.

Whatever the language, there is always the option of none. An X-Matrix where everything correlates with everything is usually too convenient and can be a sign of post-hoc justification.

Having decided on an approach, a symbol is used in each cell to visualise the nature of each correlation. I have tried letters and colours, and have recently settled on filled and empty circles, as in the example below. Filled circles represent direct or strong correlations, while empty circles represent indirect or weak correlations. (If using likelihood, a third variant would be needed, such as a circle with a dot in the middle).

Here we can see that there is a direct or strong correlation between “Increase Revenue +10%” (Aspiration 1) and “Global Domination” (Strategy 1). In other words, this suggests that Strategy 1 contributes directly or strongly to Aspiration 1, as do all the Strategies, which indicates high coherence. Similarly, Strategy 1 has a direct/strong correlation with Aspiration 2, but Strategy 2 has no correlation, and Strategy 3 only has an indirect/weak correlation.
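As a rough sketch of how that corner matrix might be captured, here is some illustrative Python. Only “Global Domination” and “Increase Revenue +10%” come from the example above; the other element names, and the symbol scheme itself, are placeholders I have made up.

```python
from enum import Enum

class Correlation(Enum):
    """One possible symbol scheme for an X-Matrix cell (illustrative)."""
    NONE = " "      # no correlation claimed
    WEAK = "○"      # indirect / weak (empty circle)
    STRONG = "●"    # direct / strong (filled circle)

# Placeholder names, apart from Strategy 1 and Aspiration 1 from the example.
strategies = ["Global Domination", "Strategy 2", "Strategy 3"]
aspirations = ["Increase Revenue +10%", "Aspiration 2"]

# Cells not listed default to no correlation (e.g. Strategy 2 vs Aspiration 2).
cells = {
    ("Global Domination", "Increase Revenue +10%"): Correlation.STRONG,
    ("Strategy 2", "Increase Revenue +10%"): Correlation.STRONG,
    ("Strategy 3", "Increase Revenue +10%"): Correlation.STRONG,
    ("Global Domination", "Aspiration 2"): Correlation.STRONG,
    ("Strategy 3", "Aspiration 2"): Correlation.WEAK,
}

def render(rows, cols, cells):
    """Print one corner matrix, one symbol per cell."""
    print(" " * 20 + " ".join(f"{col:<22}" for col in cols))
    for row in rows:
        symbols = [cells.get((row, col), Correlation.NONE).value for col in cols]
        print(f"{row:<20}" + " ".join(f"{s:<22}" for s in symbols))

render(strategies, aspirations, cells)
```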

Remember, this is just a hypothesis, and by looking at the patterns of correlations around the X-Matrix we can see and discuss the overall coherence. For example, we might question why Strategy 3 only has Tactic 2 with an indirect/weak correlation. Or whether Tactic 2 is the best investment, given its relatively poor correlations with both Strategies and Evidence. Or whether Evidence 4 is relevant, given its relatively poor correlations with both Tactics and Aspirations.
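In practice that questioning is a conversation rather than a calculation, but a toy scan can show the kind of pattern being looked for: flag any element whose correlations are mostly weak or missing. The weights, threshold and data below are entirely made up for the sketch and are not part of any X-Matrix method.

```python
WEIGHTS = {"strong": 2, "weak": 1, "none": 0}

# Each element's correlations with its neighbouring sections (made-up data).
element_correlations = {
    "Tactic 1":   ["strong", "strong", "weak", "strong"],
    "Tactic 2":   ["weak", "none", "weak", "none"],
    "Evidence 4": ["none", "weak", "none", "none"],
}

def worth_questioning(correlations, threshold=3):
    """Return elements whose total correlation weight falls below the threshold."""
    return [name for name, cell_values in correlations.items()
            if sum(WEIGHTS[value] for value in cell_values) < threshold]

print(worth_questioning(element_correlations))  # -> ['Tactic 2', 'Evidence 4']
```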

It’s visualising and discussing these correlations that is often where the magic happens, as it exposes differences in understanding and perspective on what all the pieces mean and how they relate to each other. This leads to refinement of the X-Matrix, more coherence and stronger alignment.


TASTE Success with an X-Matrix Template

I’ve put together a new X-Matrix A3 template to go with the Backbriefing and Experiment A3s I published last month. These three templates work well together as part of a Strategy Deployment process, although I should reiterate that the templates alone are not sufficient. A culture of collaboration and learning is also necessary as part of Catchball.

 

While creating the template I decided to change some of the language on it – mainly because I think it better reflects the intent of each section. However, a side-benefit is that it nicely creates a new acronym, TASTE, as follows:

  • True North – the orientation which informs what should be done. This is more of a direction and vision than a destination or future state. Decisions should take you towards rather than away from your True North.
  • Aspirations – the results we hope to achieve. These are not targets, but should reflect the size of the ambition and the challenge ahead.
  • Strategies – the guiding policies that enable us. This is the approach to meeting the aspirations by creating enabling constraints.
  • Tactics – the coherent actions we will take. These represent the hypotheses to be tested and the work to be done to implement the strategies in the form of experiments.
  • Evidence – the outcomes that indicate progress. These are the leading indicators which provide quick and frequent feedback on whether the tactics are having an impact on meeting the aspirations.

Hence working through these sections collaboratively can lead to being able to TASTE success 🙂

One of the challenges with an X-Matrix template is that there is no right number of items which should populate each section. With that in mind I have gone for what I think is a reasonable upper limit, and I would generally prefer to have fewer items than the template allows.
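For illustration, here is one way the TASTE sections could be held as a simple data structure, with upper limits of the kind just mentioned. The limit numbers are my own guesses for the sketch, not the numbers on the published template.

```python
from dataclasses import dataclass, field

@dataclass
class TasteXMatrix:
    """Sketch of the TASTE sections; not the published A3 template."""
    true_north: str = ""
    aspirations: list = field(default_factory=list)
    strategies: list = field(default_factory=list)
    tactics: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

    # Illustrative upper limits only; fewer items is generally preferable.
    LIMITS = {"aspirations": 3, "strategies": 5, "tactics": 8, "evidence": 8}

    def sections_over_limit(self):
        """Return the sections that exceed the illustrative upper limits."""
        return [name for name, limit in self.LIMITS.items()
                if len(getattr(self, name)) > limit]

x = TasteXMatrix(true_north="A direction, not a destination",
                 aspirations=["Increase Revenue +10%"],
                 strategies=["Global Domination"])
print(x.sections_over_limit())  # -> [] while everything stays within the limits
```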

This version also provides no guidance on how to complete the correlations on the 4 matrices in the corners which create the X (e.g. Strong/Weak, Direct/Indirect, Probable/Possible/Plausible). I will probably come back to that with a future version and/or post.


Good Agile/Bad Agile: The Difference and Why It Matters

This post is an unapologetic riff on Richard Rumelt’s book Good Strategy/Bad Strategy: The Difference and Why It Matters. The book is a wonderful analysis of what makes a good strategy and how successful organisations use strategy effectively. I found that it reinforced my notion that Agility is a Strategy, and so this is also a way to help me organise my thoughts about that from the book.

Good and Bad Agile

Rumelt describes Bad Strategy as having four major hallmarks:

  • Fluff – meaningless words or platitudes.
  • Failure to face the challenge – not knowing what the problem or opportunity being faced is.
  • Mistaking goals for strategy – simply stating ambitions or wishful thinking.
  • Bad strategy objectives – big buckets which provide no focus and can be used to justify anything (otherwise known as “strategic horoscopes”).

These hallmarks can also describe Bad Agile. For example, when Agile is just used for the sake of it (Agile is the fluff). Or when Agile is just used to do “the wrong thing righter” (failing to face the challenge). Or when Agile is just used to “improve performance” (mistaking goals for strategy). Or when Agile is just part of a variety of initiatives (bad strategy objectives).

Rumelt goes on to describe a Good Strategy as having a kernel with three elements:

  • Diagnosis – understanding the critical challenge or opportunity being faced.
  • Guiding policy – the approach to addressing the challenge or opportunity.
  • Coherent actions – the work to implement the guiding policy.

Again, I believe this kernel can help identify Good Agile. When Agile works well, it should be easy to answer the following questions:

  • What diagnosis is Agile addressing for you? What is the critical challenge or opportunity you are facing?
  • What guiding policy does Agile relate to? How does it help you decide what you should or shouldn’t do?
  • What coherent actions are you taking that are Agile? How are they coordinated to support the strategy?

Sources of Power

Rumelt suggests that

“a good strategy works by harnessing power and applying it where it will have the greatest effect”.

He goes on to describe nine of these powers (although they are not limited to these nine) and it’s worth considering how Agile can enable them.

  • Leverage – the anticipation of what is most pivotal and the concentration of effort on it. Good Agile will focus on identifying and implementing the smallest change (e.g. MVPs) which will result in the largest gains.
  • Proximate objectives – something close enough to be achievable. Good Agile will help identify clear, small, incremental and iterative releases which can be easily delivered by the organisation.
  • Chain-link systems – systems where performance is limited by the weakest link.  Good Agile will address the constraint in the organisation. Understanding chain-link systems is effectively the same as applying Theory of Constraints. 
  • Design – how all the elements of an organisation and its strategy fit together and are co-ordinated to support each other. Good Agile will be part of a larger design, or value stream, and not simply a local team optimisation. Using design is effectively the same as applying Systems Thinking. 
  • Focus – concentrating effort on achieving a breakthrough for a single goal. Good Agile limits work in process in order to help concentrate effort on that single goal to create the breakthrough.
  • Growth – the outcome of growing demand for special capabilities, superior products and skills. Good Agile helps build both the people and products which will result in growth.
  • Advantage – the unique differences and asymmetries which can be exploited to increase value. Good Agile helps exploit, protect or increase demand to gain a competitive advantage. In fact Good Agile can itself be an advantage.
  • Dynamics – anticipating and riding a wave of change. Good Agile helps explore new and different changes and opportunities, and then exploits them.
  • Inertia and Entropy – the resistance to change, and decline into disorder. Good Agile helps organisations overcome their own inertia and entropy, and take advantage of competitors’ inertia and entropy. In effect, having less inertia and entropy than your competition means having a tighter OODA loop.

In general, we can say that Good Agile “works by harnessing power and applying it where it will have the greatest effect”, and it should be possible to answer the following question:

  • What sources of power is your strategy harnessing, and how does Agile help apply it?

Thinking like an Agilist

Rumelt concludes with some thoughts on creating strategy, and what he suggests is

“the most useful shift in viewpoint: thinking about your own thinking”.

He describes this shift from the following perspectives:

  • The Science of Strategy – strategy as a scientific hypothesis rather than a predictable plan.
  • Using Your Head – expanding the scanning horizon for ideas rather than settling on the first idea.
  • Keeping Your Head – using independent judgement to decide the best approach rather than following the crowd.

This is where I see a connection between Good Strategy and Strategy Deployment, which is an approach to testing hypotheses (the science of strategy), deliberately exploring multiple options (using your head), and discovering an appropriate, contextual solution (keeping your head).

In summary, Good Agile is deployed strategically by being part of a kernel, with a diagnosis of the critical problem or opportunity being faced, guiding policy which harnesses a source of power, and coherent actions that are evolved through experimenting as opposed to being instantiated by copying.


A3 Templates for Backbriefing and Experimenting

I’ve been meaning to share a couple of A3 templates that I’ve developed over the last year or so while I’ve been using Strategy Deployment. To paraphrase what I said when I described my thoughts on Kanban Thinking: we need to create more templates, rather than reduce everything down to “common sense” or “good practice”. In other words, the more A3s and Canvases there are, the more variety there is for people to choose from, and hopefully, the more people will think about why they choose one over another. Further, if people can’t find one that’s quite right, I encourage them to develop their own, and then share it so there is even more variety and choice!

Having said that, the value of A3s is always in the conversations and collaborations that take place while populating them. They should be co-created as part of a Catchball process, and not filled in and handed down as instructions.

Here are the two I am making available. Both are used in the context of the X-Matrix Deployment Model. Click on the images to download the pdfs.

Backbriefing A3


This one is heavily inspired by Stephen Bungay’s Art of Action. I use it to charter a team working on a tactical improvement initiative. The sections are:

  • Context – why the team has been brought together
  • Intent – what the team hopes to achieve
  • Higher Intent – how the team’s work helps the business achieve its goals
  • Team – who is, or needs to be, on the team
  • Boundaries – what the team are or are not allowed to do in their work
  • Plan – what the team are going to do to meet their intent, and the higher intent

The idea here is to ensure a tactical team has understood their mission and mission parameters before they move into action. The A3 helps ensure that the team remain aligned to the original strategy that has been deployed to them.
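To make the shape of the Backbriefing A3 concrete, here is a minimal sketch of its sections as a data structure. The field names follow the list above; the class itself is illustrative, not the published template.

```python
from dataclasses import dataclass

@dataclass
class BackbriefingA3:
    """Sketch of the Backbriefing A3 sections (illustrative only)."""
    context: str        # why the team has been brought together
    intent: str         # what the team hopes to achieve
    higher_intent: str  # how the team's work helps the business achieve its goals
    team: list          # who is, or needs to be, on the team
    boundaries: list    # what the team are or are not allowed to do
    plan: list          # what the team are going to do to meet their intent
```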

The Plan section naturally leads into the Experiment A3.

Experiment A3


This is a more typical A3, but with a bias towards testing the hypotheses that are part of Strategy Deployment. I use it to help tactical teams define the experiments for their improvement initiative. The sections are:

  • Context – the problem the experiment is trying to solve
  • Hypothesis – the premise behind the experiment
  • Rationale – the reasons why the experiment is coherent
  • Actions – the steps required to run the experiment
  • Results – the indicators of whether the experiment has worked or not
  • Follow-up – the next steps based on what was learned from the experiment

Note that experiments can (and should) attempt to both prove and disprove a hypothesis to minimise the risk of confirmation bias. And the learning involved should be “safe to fail”.
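A matching sketch for the Experiment A3, again purely to show the shape of the sections rather than the published template itself:

```python
from dataclasses import dataclass

@dataclass
class ExperimentA3:
    """Sketch of the Experiment A3 sections (illustrative only)."""
    context: str      # the problem the experiment is trying to solve
    hypothesis: str   # the premise behind the experiment
    rationale: str    # the reasons why the experiment is coherent
    actions: list     # the steps required to run the experiment
    results: list     # indicators of whether the experiment has worked or not
    follow_up: str    # the next steps based on what was learned
```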


Agendashift, Cynefin and the Butterfly Stamped

The butterfly who stamped

I’ve recently become an Agendashift partner and have enjoyed exploring how this inclusive, contextual, fulfilling, open approach fits with how I use Strategy Deployment.

Specifically, I find that the Agendashift values-based assessment can be a form of diagnosis of a team or organisation’s critical challenges, in order to agree guiding policy for change and focus coherent action. I use those italicised terms deliberately as they come from Richard Rumelt’s book Good Strategy/Bad Strategy, in which he defines a good strategy kernel as containing those key elements. I love this definition as it maps beautifully onto how I understand Strategy Deployment, and I intend to blog more about this soon.

In an early conversation with Mike when I was first experimenting with the assessment, we were exploring how Cynefin relates to the approach, and in particular the fact that not everything needs to be an experiment. This led to the idea of using the Agendashift assessment prompts as part of a Cynefin contextualisation exercise, which in turn led to the session we ran together at Lean Agile Scotland this year (also including elements of Clean Language).

My original thought had been to try something even more basic though, using the assessment prompts directly in a method that Dave Snowden calls “and the butterfly stamped“, and I got the chance to give that a go last week at Agile Northants.

The exercise – sometimes called simply Butterfly Stamping – is essentially a Four Points Contextualisation in which the items being contextualised are provided by the facilitator rather than generated by the participants. In this case those items were the prompts used in the Agendashift mini assessment, which you can see by completing the 2016 Agendashift global survey.

This meant that as well as learning about Cynefin and Sensemaking, participants were able to have rich conversations about their contexts and how well they were working, without getting stuck on what they were doing and what tools, techniques and practices they were using. Feedback was very positive, and you can see some of the output in this tweet:

I hope we can turn this into something that can be easily shared and reused. Let me know if you’re interested in running it at your event. And watch this space!


How Rally Does… Strategy Deployment

This is another post originally published on the Rally Blog which I am reposting here to keep an archived copy. It was part of the same series as the one on annual and quarterly planning, in which we described various aspects of the way the business was run. Again, apart from minor edits to help it make sense as a standalone piece, I have left the content as it was.


Strategy Deployment is sometimes known as Hoshin Kanri, and like many Lean concepts, it originated from Toyota. Hoshin Kanri is a Japanese term whose literal translation can be paraphrased as “compass control.” A more metaphorical interpretation, provided by Pascal Dennis in Getting the Right Things Done, is that of a “ship in a storm going in the right direction.”

Compass

Strategy Deployment is about getting everyone involved in the focus, communication, and execution of a shared goal. I described in previous posts how we collaboratively came up with strategies and an initial plan in the form of an X-matrix. The tool that we use for the deployment is the Strategic A3.

Strategic A3s

A3 refers to the size of the paper (approximately 11 x 17 inches) used by a number of different formats to articulate and communicate something in a simple, readable way on a single sheet of paper. Each rock or departmental team uses a Strategic A3 to describe its plan. This forms the basis for their problem-solving approach by capturing all the key hypotheses and results, which helps identify the opportunities for improvement.

The different sections of the A3 tell a story about the different stages of the PDSA cycle (Plan, Do, Study, Adjust). I prefer this latter formulation from Dr. W. Edwards Deming to the original PDCA (Plan, Do, Check, Act) of Walter A. Shewhart, because “Study” places more emphasis on learning and gaining knowledge. Similarly, “Adjust” implies feedback and iteration more strongly than does “Act.”

This annual Strategic A3 goes hand-in-hand with a macro, longer-term (three- to five-year) planning A3, and numerous micro, problem-solving A3s.

Anatomy of a Strategic A3

This is what the default template that we use looks like. While it is often good to work on A3s using pencil and paper, for wider sharing across the organisation we’ve found that using a Google document works well too.

Strategic A3 template

Each A3 has a clear topic, and is read in a specific order: down the left-hand side, and then down the right-hand side. This flow aligns with the ORID approach (Objective, Reflective, Interpretive, Decisional), which helps avoid jumping to conclusions too early.

The first section looks at prior performance, gaps, and targets, which give objective data on the current state. Targets are a hypothesis about what we would like to achieve, and performance shows the actual results. Over time, the gap between the two gives an indication of what areas need investigation and problem-solving. The next section gives the reactions to, and reflections on, the objective data. This is where emotions and gut feelings are captured. Then comes interpretation of the data and feelings to give some rationale with which to make a plan.

The three left-hand sections help us look back into the past before we make any decisions about what we should do in the future. Having completed that, we have much better information with which to complete the action plan, adding high-level focus and outcomes for each quarter. The immediate quarter will generally have a higher level of detail and confidence, with each subsequent quarter becoming less granular. Finally, the immediate next steps are captured, and any risks and dependencies are noted so that they can be shared and managed.
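As a rough summary of that reading order, here is an illustrative sketch pairing each part of the A3 with the ORID stage it roughly supports. The section titles are paraphrased from the description above rather than copied from the template.

```python
STRATEGIC_A3_FLOW = [
    # left-hand side: looking back
    ("Prior performance, gaps and targets", "Objective"),
    ("Reactions and reflections",           "Reflective"),
    ("Interpretation and rationale",        "Interpretive"),
    # right-hand side: looking forward
    ("Quarterly action plan",               "Decisional"),
    ("Next steps, risks and dependencies",  "Decisional"),
]

for section, orid_stage in STRATEGIC_A3_FLOW:
    print(f"{orid_stage:<13} {section}")
```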

Co-creating a Strategic A3

As you can probably imagine from reading the previous posts, completing a Strategic A3 can be a highly collaborative, structured, and facilitated process. One team I work closely with had recently grown to a point where it would benefit from its own Strategic A3, rather than being part of a larger, international one. To create it, we all got together for a day in our Amsterdam office. We felt that this would allow us to align more strongly with the corporate strategy and communicate more clearly what we were doing and where we needed help.

We began by breaking into small groups of three to four people, mostly aligned around a regional territory. These groups spent some time filling in their own copy of the A3 template. We then reconvened, and each group gave a readout of its discussions, presenting the top three items from each section, which we captured with post-it notes on flip charts. Having gone around each group, I then asked everyone to silently theme the post-its in each section until everyone seemed happy with the results. This led to a discussion about each theme and identifying titles for them. We still had quite a few themes, so we finished off by ranking them with dot-voting so that we could be clear on which items were most important.

Our last step was to identify the top three items on the A3 that we wanted to highlight to the wider business. This turned out to be a relatively simple conversation. The collaborative nature of the process meant that everyone had a clear and shared understanding of what was important and where we needed focus.


Corporate Steering

Strategy deployment is not a one-off, top-down exercise. Instead, the Strategic A3 is used as a simple tool that involves everyone in the process. Teams prepare and plan their work, in line with the corporate goals, and each quarter they revisit and revise their A3s as a means of communicating status and progress. As performance numbers become available an A3 will be updated with any changes highlighted, and the updated A3 then becomes a key input into Quarterly Steering.


Strategy Deployment and Directed Opportunism

A fourth post exploring the relationship between Strategy Deployment and other approaches (see Strategy Deployment and Fitness for Purpose, Strategy Deployment and AgendaShift and Strategy Deployment and Spotify Rhythm).

Directed Opportunism is the approach described by Stephen Bungay in his book The Art of Action, in which he builds on the ideas of Field Marshal Helmuth von Moltke, Chief of Staff of the Prussian Army for 30 years from 1857, and applies them to leading businesses. This also follows on from the earlier post on alignment and autonomy in Strategy Deployment.

Bungay starts by describing three gaps between desired Outcomes, the Plans made to achieve them, and the Actions taken which create actual Outcomes.  These gaps (the Knowledge Gap, Alignment Gap and Effects Gap) are shown in the diagram below, and together cause organisational friction – resistance of the organisation to meeting its goals.

Gaps

Given this model, Bungay explains how the usual approach to reducing this friction, and closing the gaps, is to attempt to reduce uncertainty by pursuing more detail and control, as shown below.

More Detail

This generally makes the situation worse, however, because the problem is not linear, reductionistic or deterministic. In Cynefin terms, this is a Complicated approach in a Complex domain. Instead, Bungay recommends reducing detail and control and allowing freedom to evolve with feedback. This is what he calls Directed Opportunism.

Less Detail

This definition of Directed Opportunism seems to me to meet my definition of Strategy Deployment as a form of organisational improvement in which solutions emerge from the people closest to the problem. There is clear communication of intent (the problem), with each level (the people closest) defining how they will achieve the intent (the solution) and having freedom to adjust in line with the intent (the emergence).

From an X-Matrix perspective, being clear on results, strategies and outcomes limits direction to defining and communicating intent, and leaving tactics to emerge (through Catchball) allows different levels to define how they will achieve the intent and gives them freedom to adjust actions in line with the intent.


Strategy Deployment and Spotify Rhythm

This is the third in what has turned into a mini series exploring the relationship between Strategy Deployment and other approaches (see Strategy Deployment and Fitness for Purpose and  Strategy Deployment and AgendaShift).

Last month, Henrik Kniberg posted slides from a talk he gave at Agile Sverige on something called Spotify Rhythm, which he describes as “Spotify’s current approach to getting aligned as a company”. While looking through the material, it struck me that what he was describing was a form of Strategy Deployment. This interpretation is based purely on those slides – I haven’t had a chance yet to explore this more deeply with Henrik or anyone else from Spotify. I hope I will some day, but given that caveat, here’s how I currently understand the approach in terms of the X-Matrix Model.

Spotify Rhythm: Taxonomy

The presentation lays out the following “taxonomy” used in “strategic planning”:

Company Beliefs – While beliefs aren’t something I talk about specifically, the concept of beliefs (as opposed to values) does tie in nicely with the idea that Strategy Deployment involves many levels of nested hypotheses and experimentation (as I described in Dynamics of Strategy Deployment). Company Beliefs could be considered the highest-level, and therefore probably the strongest, hypotheses.

North Star & 2-Year Goals – A North Star (sometimes called True North) is a common Lean concept (and one I probably don’t talk about enough with regard to Strategy Deployment). It is an overarching statement about a vision of the future, used to set direction. Decisions can be made based on whether they will move the organisation towards (or away from) the North Star. Strategy Deployment is ultimately all in pursuit of enabling the organisational alignment and autonomy which will move it towards the North Star. Given that, the 2-Year Goals can be considered as the Results that moving towards the North Star should deliver.

Company Bets – The Company Bets are the “Big Bets” – “large projects” and “cross-organisation initiatives”. While these sound like high-level tactics, I wonder whether they can also be considered to be the Strategies. As mentioned already, Strategy Deployment involves many levels of nested hypotheses and experimentation, and therefore a Strategy is a Bet in itself (as are Results, and also Beliefs).

Functional & Market Bets – If the Company Bets are about Strategy, then the Functional and Market Bets are the Tactics implemented by functional or market related teams.

DIBB – DIBB is a framework Spotify use to define bets and “make the chain of reasoning explicit” by showing the relationships between Data, Insights, Beliefs and Bets. Part of that chain of reasoning involves identifying success metrics for the Bets, or in other words, the Outcomes which will indicate if the Bet is returning a positive payoff.

While this isn’t an exact and direct mapping, it feels close enough to me. One way of checking alignment would be the ability for anyone to answer some simple questions about the organisation’s journey. I can imagine how Spotify Rhythm provides clarity on how to answer these questions.

  • Do you know where you are heading? North Star
  • Do you know what the destination looks like? 2 Year Goals (Results)
  • Do you know how you will get there? Company Bets (Strategies)
  • Do you know how you will track progress? DIBBs (Outcomes)
  • Do you know how you will make progress? Functional & Market Bets (Tactics)
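Written out as a simple lookup, the interpretation looks like this. As noted above, it is based only on the slides, so the right-hand side is my hypothesis rather than Spotify’s own language.

```python
SPOTIFY_RHYTHM_TO_X_MATRIX = {
    "North Star":               "True North / direction",
    "2-Year Goals":             "Results",
    "Company Bets":             "Strategies",
    "Functional & Market Bets": "Tactics",
    "DIBB success metrics":     "Outcomes",
    "Company Beliefs":          "highest-level hypotheses (no single X-Matrix section)",
}

for rhythm_term, x_matrix_element in SPOTIFY_RHYTHM_TO_X_MATRIX.items():
    print(f"{rhythm_term:<26} -> {x_matrix_element}")
```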

One final element of Spotify Rhythm which relates to Strategy Deployment is implied in its name – the cadence with which the process runs. Company Bets are reviewed every quarter by the Strategy Team (another reason why they could be considered to be Strategies) and the Functional and Market Bets – also called TPD (Tech-Product-Design) Bets – are reviewed every 6 weeks.

I’d be interested in feedback on alternative interpretations of Spotify Rhythm. Or if you know more about it than I do, please correct anything I’ve got wrong!


Strategy Deployment and Agendashift

Agendashift is the approach used by Mike Burrows, based on his book Kanban from the Inside, in which he describes the values behind the Kanban Method. You can learn more by reading Mike’s post Agendashift in a nutshell. As part of his development of Agendashift, Mike has put together a values-based delivery assessment, which he uses when working with teams. Again, I recommend reading Mike’s posts on using Agendashift as a coaching tool and debriefing an Agendashift survey if you are not familiar with Agendashift.

After listening to Mike talk about Agendashift at this year’s London Lean Kanban Day, I began wondering how his approach could be used as part of a Strategy Deployment workshop. I was curious what would happen if I used the Agendashift assessment to trigger the conversations about the elements of the X-Matrix model. Specifically, how could it be used to identify change strategies, and the associated desired outcomes, in order to frame tactics as hypotheses and experiments? Mike and I had a few conversations, and it wasn’t long before I had the opportunity to give it a go. This is a description of how I went about it.

Assessment & Analysis

The initial assessment followed Mike’s post, with participants working through individual surveys before spending time analysing the aggregated results and discussing strengths, weaknesses, convergence, divergence and importance.

Strategies

Having spent some time in rich conversations about current processes and practices, triggered by exploring the various perspectives suggested by the survey prompts and scores, the teams had some good insights into what they considered their biggest problems worth solving and where they required most focus. Agreeing on the key problems that need solving can be thought of as agreeing the key strategies for change.

This is where I broke away from Mike’s outline, in order to consider strategies first. I asked the participants to silently and individually come up with 2 to 3 change strategies each, resulting in around 20-30 items, which we then collectively grouped into similar themes to end up with 5-10 potential strategies. Dot voting (with further discussion) then reduced this down to the 3 key change strategies which everyone agreed with.

To give some examples (which I have simplified and generalised), we had strategies focused around collaboration, communication, quality, product and value.

Outcomes

Having identified these key strategies, the teams could then consider what desired outcomes they hoped would be achieved by implementing them. By answering the questions “what would we like to see or hear?” and “what would we measure?”, the teams came up with possible ways, both qualitative and quantitative, which might give an indication of whether the strategies, and ultimately the tactics, were working.

Taking the 3 key strategies, I asked small groups of 3-5 people to consider the outcomes they hoped to achieve with those strategies, and then consolidated the output. One reassuring observation from this part of the workshop was that some common outcomes emerged across all the strategies. This meant that there were many-to-many correlations between them, suggesting a messy coherence rather than a simplistic and reductionist relationship.

Some examples of outcomes (again simplified and generalised) were related to culture, responsiveness, quality, understanding and feedback.

Hypotheses

Once we have strategies and outcomes, the next step is to create some hypotheses for what tactics might implement the strategies to achieve the outcomes. To do this I tweaked Mike’s hypothesis template, and used this one:

We believe that <change>

implements <strategies>

and will result in <outcomes>

With this template, the hypotheses are naturally correlated with both strategies and outcomes (where the outcomes already consist of both subjective observations and objective measures).

I asked each participant to come up with a single hypothesis, creating a range of options from which to begin defining experiments.

For example (vastly simplified and generalised!):

We believe that a technical practice

implements a quality related strategy

and will result in fewer defects
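To show how mechanically the template can be filled in, here is a trivial sketch that renders a hypothesis from a chosen change, strategy and outcome. The content strings are the simplified example above; the function itself is just illustrative plumbing.

```python
HYPOTHESIS_TEMPLATE = (
    "We believe that {change}\n"
    "implements {strategy}\n"
    "and will result in {outcome}"
)

def hypothesis(change: str, strategy: str, outcome: str) -> str:
    """Fill in the hypothesis template for one candidate tactic."""
    return HYPOTHESIS_TEMPLATE.format(change=change, strategy=strategy,
                                      outcome=outcome)

print(hypothesis(change="a technical practice",
                 strategy="a quality related strategy",
                 outcome="fewer defects"))
```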

Actions

This is as far as we got in the time available, but I hope it’s clear that once we have hypotheses like this we can start creating specific experiments with which to move into action, with the possibility that each hypothesis could be tested with multiple experiments.

Results

While we didn’t formally go on to populate an X-Matrix, we did have most of the main elements in place – strategies, outcomes and tactics (if we consider tactics to be the actions required to test hypotheses) – along with the correlations between them. Although we didn’t discuss end results in this instance, I don’t believe it would take much to make those explicit, and come up with the correlations to the strategies and outcomes.

On a recent call with Mike he described Agendashift in terms of making the agenda for change explicit. I think that also nicely describes Strategy Deployment, and why I think there is a lot of overlap. Strategy Deployment makes the results, strategies, outcomes and tactics explicit, along with the correlations and coherence between them, and it seems that Agendashift is one way of going about this.
