Measuring the X-Matrix

"Measure a thousand times, cut once"

Dave Snowden recently posted a series of blog posts on A Sense of Direction, about the use of goals and targets with Cynefin. As the X-Matrix uses measures in two of its sections (Aspirations and Evidence) I found that useful in clarifying my thinking on how I generally approach those areas.

Let's start by addressing Dave's two primary concerns: the tyranny of the explicit and a cliché of platitudes.

To avoid the tyranny of the explicit, I’ve been very careful to avoid the use of the word target. Evidence was a carefully chosen word (after trying multiple alternatives) to describe leading indicators of positive outcomes. The outcomes themselves are not specific goals, and can be either objective or subjective. They are things we want to see more of (or less of) and should be trends, suggesting an increased likelihood of meeting Aspirations. Aspirations again was chosen to suggest hope and ambition rather than prediction and expectation. While they define desired results, those should be considered to be challenges and not targets.

To avoid a cliché of platitudes we need to focus on Good Strategy, beginning with clear, challenging and ambitious Aspirations. I find it interesting that Dave cites Kennedy's "man on the moon" challenge as a liminal dip into chaos, while Rumelt uses the same example as a Proximate Objective source of power for Good Strategy. An ambitious yet achievable Aspiration helps focus on the challenges and opportunities for change. With proximate Aspirations, and a clear Diagnosis of the current situation, we can set some Guiding Policies as enabling constraints with which to decide what Coherent Action to take. Thus we can avoid fluffy, wishful, aimless or horoscopic Bad Strategy.

Put together, we have a set of hypotheses which are specific enough to avoid a cliché of platitudes, yet are speculative enough to avoid the tyranny of the explicit. We believe the Aspirations are achievable. We believe our Strategies will help us be successful. We believe our Tactics will move us forward on a good bearing. We believe the Evidence will indicate favourable progress. The X-Matrix helps visualise the messy coherence of all these hypotheses with each other and Strategy Deployment is the means of continuously testing, adjusting and refining them as we navigate our way in the desired direction.

Agendashift, Cynefin and the Butterfly Stamped


I’ve recently become an Agendashift partner and have enjoyed exploring how this inclusive, contextual, fulfilling, open approach fits with how I use Strategy Deployment.

Specifically, I find that the Agendashift values-based assessment can be a form of diagnosis of a team or organisation's critical challenges, in order to agree guiding policy for change and focus coherent action. I use those italicised terms deliberately as they come from Richard Rumelt's book Good Strategy/Bad Strategy, in which he defines a good strategy kernel as containing those key elements. I love this definition as it maps beautifully onto how I understand Strategy Deployment, and I intend to blog more about this soon.

In an early conversation with Mike when I was first experimenting with the assessment, we were exploring how Cynefin relates to the approach, and in particular the fact that not everything needs to be an experiment. This led to the idea of using the Agendashift assessment prompts as part of a Cynefin contextualisation exercise, which in turn led to the session we ran together at Lean Agile Scotland this year (also including elements of Clean Language).

My original thought had been to try something even more basic though, using the assessment prompts directly in a method that Dave Snowden calls “and the butterfly stamped”, and I got the chance to give that a go last week at Agile Northants.

The exercise – sometimes called simply Butterfly Stamping – is essentially a Four Points Contextualisation in which the items being contextualised are provided by the facilitator rather than generated by the participants. In this case those items were the prompts used in the Agendashift mini assessment, which you can see by completing the 2016 Agendashift global survey.

This meant that as well as learning about Cynefin and Sensemaking, participants were able to have rich conversations about their contexts and how well they were working, without getting stuck on what they were doing and what tools, techniques and practices they were using. Feedback was very positive, and some of the output was shared in a tweet.

I hope we can turn this into something that can be easily shared and reused. Let me know if you're interested in running it at your event. And watch this space!

Understanding SAFe PI Planning with Cynefin


Within SAFe, PI Planning (or Release Planning) is when all the people on an Agile Release Train (ART), including all team members and anyone else involved in delivering the Program Increment (PI), get together to collaborate on co-creating a plan. The goal is to create confidence around the ability to deliver business benefits and meet business objectives over the coming weeks and months – typically around 3 months or a quarter.

To many people this feels un-agile. Isn't it creating a big plan up-front and defining work far ahead? However, a recent experience led me to realise why it's not about the plan, but the planning and the dynamics of the event itself. In Cynefin terms, PI Planning is a catalyst for transition and movement between domains.

Let me explain.

Before PI Planning, and a move into an ART cadence, many organisations are in Disorder, relying on order and expertise when they should be using experiments and emergence. The scheduling of a PI Planning event triggers a realisation that there is a lack of alignment, focus or vision. In order to prepare for the event people have to agree on what is important and what the immediate objectives, intentions and needs are. In short, what does the Program Backlog look like, and what Features should be at the top? The conversations and work required to figure this out are the beginning of a shallow (or sometimes not so shallow!) dive into Chaos.

During PI Planning is when the Chaos reaches a peak, typically at the end of Day One, as it becomes clearly apparent that the nice ordered approach that was anticipated isn't achievable. More conversations happen, decisions are made about the minimum plausible solution and hypotheses are formulated about what might be possible. This is when action happens as everyone starts to pull together to figure out how they might collectively meet the business objectives and move the situation out of Chaos into Complexity.

After PI Planning there is still uncertainty, and the iteration cadences and synchronisation points guide the movement through that uncertainty. Feedback on the system development, transparency of program status and evolution of the solution are all necessary to understand progress, identify learning and inform ongoing changes. This may require subsequent dips into Chaos again at future PI Planning events, or over time the ART may become more stable as understanding grows, and PI Planning in the initial form may eventually become unnecessary.

It is this triggering of a journey that makes me believe that PI Planning, or equivalent "mid-range" and "big-room" planning events, is a keystone to achieving agility at scale for many organisations. I wonder how this matches others' experience of successful PI Planning meetings? Let me know with a comment.

Understanding Change with Cynefin

I have written a number of posts on systemic thinking, which I believe is at the heart of Kanban Thinking. I'm currently referring to systemic thinking, rather than Systems Thinking, in order to avoid confusion with any particular school of thought, of which Systems Thinking could be one. I have also written a number of posts on Cynefin. This post brings the two together.

Understanding Systems

Being able to start thinking systemically is not easy, because there are different types of system, and each requires a different approach to working within them. Dave Snowden has developed a framework known as Cynefin, which is a useful way of understanding some of the differences, and the implications of those differences.



Cynefin is a Welsh word, which translates as "place of our multiple affiliations". Snowden describes it as a multi-ontological sense-making framework. The multi-ontological aspect refers to the fact that there are different types of system, all of which may be valid for different contexts. The sense-making aspect refers to the exercise of gathering data before creating the framework, as opposed to categorisation, where a framework is created first and data is subsequently placed onto it. In practice, this means that examples are gathered first and placed relative to each other based on some criteria, and boundaries are then drawn in order to understand the relationships. This allows the emergence of a framework that might not otherwise have been identified.

System Domains

Cynefin has five domains; two ordered, two unordered, and one of disorder.

On the right hand side are the ordered domains, simple and complicated. These are ordered because there is a direct relationship between cause and effect. Causality is known and clearly perceivable, predictable and repeatable for simple problems, or it is at least knowable, although less obvious due to separation in time and space, for complicated problems. As a result we can get a sense of the situation first, before categorising or analysing, in order to decide how to respond with best or good practice.

On the left hand side are the unordered domains complex and chaotic. These are unordered because of the ambiguous relationship between cause and effect. Causality may be retrospectively coherent, but not repeatable for complex problems, or simply incoherent and not perceivable at all for chaotic problems. As a result we need to probe with experiments or act quickly and assertively before we can get a sense of the situation in order to know how to respond.

In the centre is the domain of disorder. This is where we are unsure what type of problem we are dealing with and as a result are likely to respond instinctively with our preferred approach rather than the one appropriate for the situation.

The Cynefin framework, and in particular its distinction between ordered and unordered systems, is useful in understanding two common ways of dealing with change.

Managed Change

On the ordered side, in the simple and complicated domains, we can take advantage of expert and common knowledge, using best and good practice, to define a desirable future state, and then work to close the gap from the current state.

This is the approach generally taken by managed change programs, which put together teams, define roadmaps and create backlogs of work to implement in order to complete the change. This is a perfectly valid approach when the system really is ordered, and not incompatible with Kanban Thinking. A future state Kanban System can be designed, with pre-determined work types and workflow, and an appropriate visualisation, WIP limits and other policies. Metrics can be gathered, but learning may be minimal because of the assumption that the correct solution can be known in advance. However, while short term results may be achieved, there is also the risk that the chosen practices being implemented are not appropriate, or get followed blindly and dogmatically, with the long term result being a fall over the cliff into chaos, as described below.

Evolutionary Change

On the unordered side, in the complex and chaotic domains, we need to be more experimental, using emergent and novel practice, to understand the current state and discover its evolutionary potential. The author John Kay describes this approach as being one of Obliquity in his book of the same name. Similarly, Tim Harford discusses the need for a variety of experiments, resilience to the possibility of failure of those experiments, and clear selection of which experiments to kill or continue in his book Adapt.

This is the approach that really harnesses the power of Kanban Thinking. A Kanban System can be designed to represent the current state with existing work types and work flow, an appropriate visualisation, conservative WIP limits and other policies. Metrics can be used to further understand the current state, and adjustments can be made to the work, its flow, visualisation and policies in the form of intentional experiments, run to learn how to evolve the system for greater impact. A more suitable system design is likely to emerge this way than could be envisaged up front, although there is the risk that the improvement could be achieved more quickly using expert guidance.
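The evolutionary design loop described above can be sketched in a few lines of Python. This is a minimal illustration only; the column names, WIP limits and item names are hypothetical, chosen to show how WIP limits act as explicit policies that can be adjusted as experiments:

```python
# Minimal model of a kanban board whose columns carry WIP limits.
# All column names and limits here are illustrative, not prescriptive.

class KanbanBoard:
    def __init__(self, columns):
        # columns: list of (name, wip_limit) pairs; None means unlimited
        self.limits = dict(columns)
        self.items = {name: [] for name, _ in columns}

    def add(self, column, item):
        """Pull an item into a column, respecting its WIP limit."""
        limit = self.limits[column]
        if limit is not None and len(self.items[column]) >= limit:
            raise ValueError(f"WIP limit reached in '{column}'")
        self.items[column].append(item)

    def move(self, item, src, dst):
        """Move an item downstream; the limit acts as a pull signal."""
        self.items[src].remove(item)
        self.add(dst, item)

board = KanbanBoard([("Backlog", None), ("Doing", 2), ("Done", None)])
board.add("Backlog", "feature-A")
board.add("Backlog", "feature-B")
board.move("feature-A", "Backlog", "Doing")
board.move("feature-B", "Backlog", "Doing")
# A third move into "Doing" would now raise, signalling the team
# to finish existing work before starting more.
```

Tightening or loosening a limit and observing the effect on flow is exactly the kind of intentional, safe-to-fail experiment the post describes.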

Domain Dynamics

What becomes interesting with Cynefin is not the classification of whether something is simple, complicated, complex or chaotic, but how situations transition between the domains and across the boundaries. No domain is considered to be better than the others because each is contextual. Moving from the complex to the complicated domain may be appropriate when optimising or exploiting a solution. Conversely, moving from the complicated to the complex domain may be appropriate when wanting to innovate or explore an idea. Occasionally a short and shallow dive through chaos into complexity might be appropriate for more radical changes. A key transition to be aware of is the one from the simple to the chaotic domain. This results from complacency and over-reliance on best practice, which pushes problems over the edge into chaos. This transition is known as the cliff, and is represented differently by the squiggle, because recovery is a non-trivial and costly process.

Use of the domains can be further complicated by the discovery that a scenario may be in multiple domains at the same time, because different elements of it live in different domains. Narrative fragments, in the form of short anecdotes, are a common and useful form for capturing examples and identifying and understanding the differences. Taking coding as an example, someone could tell a story about learning a new language, which might be a simple problem with best practice about syntax and coding styles. At the same time, someone else could tell a story about using Test Driven Development with that language as a good practice for a more complicated challenge of detailed design and development. Yet another person could tell a story about using spikes to experiment with a complex feature which requires the emergence of an innovative new design and architecture. Finally, a further person could tell a story about dealing with the chaos resulting from having to react to defects found in some legacy spaghetti code.

So What?

I have found an understanding of these different system domains, and their dynamics, to be useful in guiding my approach to taking action when working to help organisations change and improve their system. While my experience may often lead me to believe I know the answer, there are times when there is no right answer, and sometimes the answer is not even knowable. By helping others discover the answer for themselves, they are able to both get to a better place now, and be better positioned to discover new answers for themselves in the future.


Three Cynefin Ahas

Over the last year I’ve been increasingly influenced by ideas from Cynefin, created by Dave Snowden of Cognitive Edge. If you want a good introduction, Liz Keogh recently blogged a good explanation. I’ve realised that there are 3 key changes in my thinking, some completely new, and some reinforced by a better understanding of cognitive complexity. None of these are unique to Cynefin, and Cynefin contains much more. This list is my take, rather than any official list, although if you know Dave’s work I’m sure you’ll recognise a lot of the language!

1. Evolutionary Potential. Even though I'm a fan of Systems Thinking, I've realised that in complex situations, defining a future state and closing the gap isn't the right approach. I still find system archetypes such as Tragedy of the Commons useful, but more in understanding the current situation than defining a future one. Instead I prefer to explore the evolutionary potential. There may be many different answers, some of which are not yet known, so experimenting, in a safe to fail way, helps evolve towards the potential. An interesting case of this is exaptation, where a function is used for a purpose it was not originally adapted or selected for. My most recent aha related to evolutionary potential was that even though complex systems aren't controllable, they are dispositional. In other words, while we still might not be able to know what the outcome of a change will be (let alone the output or activity to get there), we can determine whether a change has a positive or negative impact on the overall system.

2. Sense-making. Cynefin is primarily a sense-making framework. This means that the data precedes the framework, as opposed to a categorisation framework where the framework precedes the data. Thus, rather than trying to figure out where an example should go in a matrix, examples are positioned relative to each other based on some criteria, and boundaries are drawn subsequently. This makes sense-making much more dynamic, and what becomes interesting is not the classification of whether something is complex or complicated, but how things transition across the boundaries. No domain is better than any other as each is contextual. Moving from complex to complicated may be appropriate when optimising or exploiting. Equally, moving from complicated to complex (via a shallow dive into chaos) may be appropriate when wanting to innovate or explore. Further, any scenario is often in multiple places at the same time (after all, Cynefin translates from Welsh into "place of our multiple affiliations"). Elements may be simple, complicated and complex, and narrative becomes a useful tool for understanding the differences.

3. Narrative. One of the main benefits of Kanban Systems that attracted me was the power of the contextual approach. A Kanban System is something that is overlaid on top of an existing approach to better understand and improve it and narratives are a great way of discovering, exploring and understanding aspects of a context. Collecting a set of anecdotes about best and worst experiences in a context creates a form of knowledge against which to pattern match for similarity of new situations, leading to better insights and decisions as to how to manage those situations.

Putting those three ahas together, I can imagine applying them through working with organisations to collect a range of narratives, help make sense of them by contextualising them with Cynefin, and then facilitate the creation of appropriate actions to make an impact on the business. Those actions might be safe to fail experiments, based on lean and agile principles, to explore the evolutionary potential for complex problems, or a more direct application of lean and agile practices for complicated problems. Or more likely a hybrid of both!


The Science of Kanban – Introduction

This is a write-up of a talk I gave at a number of conferences last year. I have split it into 5 parts.


Science is the building and organising of knowledge into testable explanations and predictions about the world. Kanban is an approach which leverages many scientific discoveries to enable improved flow, value and capability. This article will explore some of the science behind kanban, focussing on mathematics and brain science in particular, in order to explain the benefits of studying a system, sharing and limiting it, sensing its performance and learning in order to improve it. Readers will gain a deeper understanding of why and how kanban systems work so that they can apply the theory to their own team and organisation's practices.
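As a small taste of the mathematics behind kanban systems, consider Little's Law, which relates average work in progress, throughput and lead time for a stable system. The figures below are made up purely for illustration (this example is mine, not spelled out in the talk):

```python
# Little's Law for a stable system:
#   average WIP = throughput x average lead time
# Rearranged, lead time = WIP / throughput.
# All numbers below are illustrative only.

def average_lead_time(avg_wip, throughput_per_day):
    """Expected lead time in days, given average WIP and daily throughput."""
    return avg_wip / throughput_per_day

# A team holding 12 items in progress and finishing 3 per day can
# expect each item to take about 4 days end to end.
print(average_lead_time(12, 3))  # → 4.0
```

This is one reason limiting WIP matters: for the same throughput, halving the work in progress halves the expected lead time.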



When I first started talking and writing about Kanban I was trying to articulate that Kanban is more than just using a card-wall. The kanban board is the visible mechanics of the system, but the goal is to achieve a flow of value, and while time-boxes become optional, a cadence is required to understand capability. I referred to this triad of Kanban, Flow and Cadence as KFC (the irony being that fried chicken is not at all lean!) and that blog post from October 2008 remains the most popular I have written. While my language and thinking has evolved since then, I have realised that as I learn more about the science behind Kanban, much of it still maps back to those three core elements.

Kanban Thinking

This article will not describe how to design a Kanban system, but explores some of the science behind Kanban Thinking, an approach to creating a contextually appropriate solution.

Kanban Thinking is a systemic approach which places an overall emphasis on achieving flow, delivering value and building capability. The primary activities are studying, sharing, limiting, sensing and learning, and thus Kanban Thinking is itself a scientific approach.

Scientific Management

Frederick Winslow Taylor is generally credited with the development of Scientific Management in the late 19th Century by applying a scientific approach to improving manufacturing processes and publishing “The Principles of Scientific Management” in 1911. Given that we are now in the 21st Century, how relevant is scientific management to us today for software and systems development? Scientific management is considered to have become obsolete in the 1930s, yet I believe we can still apply science to understanding why and how differences in productivity exist. Scientific theory can be used to inform the practices we use, while our experiential practice can also inform and evolve the scientific theory.


Dave Snowden's Cynefin model is a good example of balanced theory and practice. Cynefin suggests that there are different domains, and that we should act appropriately for each one. Thus depending on our understanding of the current context, we should apply scientific theory differently, and implement alternative practices appropriately. Scientific management can therefore still be relevant for software and systems development if we apply a scientific approach contextually.


The manufacturing context was probably complicated at worst, with elements possibly being simple. Thus Taylor's approach to scientific management, with best and good practice, was appropriate. However, software development and knowledge work is often complex, so the appropriate approach is to allow emergent practice, using what Snowden calls probe-sense-respond.

Making an Impact

In a complex domain, not being able to predict or repeat cause and effect does not mean that a situation cannot be improved. It is still possible to understand the current state, and current performance, and know whether things are improving. Rather than simply reacting to the current state or attempting to predict or plan for a future state, having anticipatory awareness of the current state, with a view to exploring its evolutionary potential, allows the application of continuous experimentation to sense whether we are making an impact by improving outcomes for both the business and for the people.

I'll cover some of the sciences that can be used to make an impact in the following posts in this series.

Cynefin, Agile & Lean Mashups

2012 certainly started with a bang for me (let's hope it doesn't end with a bang!). After a relaxing Christmas and New Year, I was up at 6am on January 2nd to head for Almens in the Swiss Alps, and an intense few days with Simon Bennett, Steve Freeman, Joseph Pelrine and Dave Snowden. We gathered at Joseph's house to discuss a common interest, namely how do we apply complexity science, and in particular the Cynefin framework, to Agile and Lean development.

Early on, Simon suggested the name CALM, and it stuck almost immediately. I like it for a couple of reasons. "Mashup" invokes the idea of "a creative combination or mixing of content from different sources". That's exactly what we want to do, and it's the creative aspect that particularly appeals to me. A Cynefin, Agile and Lean Mashup will inevitably be created contextually. CALM also subtly counterbalances the XP extreme notion. While that's not an intentional focus, I find it a mildly amusing reference.

My interest in Cynefin began back in around 2004 when Dave first spoke at XPDays London, and while back then I wasn't smart enough to realise the full implications, fortunately others like Steve and Joseph were. I met Dave again at ScanAgile in 2009 and last year at the LeanSSC and ALE conferences. Simon also gave a great talk linking Scrum and Complexity more concretely at Agile2011, and that's when I finally figured out how Cynefin could match my interest in exploring the underlying theories behind Agile and Lean, and more specifically Kanban Thinking.

My personal goal for being involved in the meeting was to move Cynefin, and complexity science, from being something which is used as a justification, to something which provides meaningful explanation, and ultimately to new application. To keep the industry advancing, and to be able to apply Agile and Lean principles in increasingly challenging organisations, we need theory informed practices, as well as learning from our current success by evolving practice informed theory. In other words we need to take a scientific approach, which ties in nicely with my recent presentations on the Science of Kanban, which should make it to a blog post soon.

The primary outcome of the CALM meeting was the creation of CALMalpha. This is a two day residential conference to be held on the 16th and 17th of February 2012 at Wokefield Park in the United Kingdom. The alpha represents the notion that this is an initial safe-to-fail experiment where we hope to explore the subject in more detail, as we seek to find coherence, coalescence and convergence around what we do in the future.

More detail, including prices and booking information, can be found on the eventbrite page. I hope to see you there!