The Best Agile Accounting Approach To CapEx and OpEx

Agile Accounting

What is the best way of accounting for costs as different types of expenditure with agile development teams? This is a question that comes up every now and again, particularly during more significant transformations. I usually respond by explaining that I’m not an accountant. It’s also not a topic I particularly care about.

However, I heard Johanna Rothman make a comment which struck me during the recent Drunk Agile Holiday Special. At least I’m pretty sure it was Johanna! I now understand why I don’t care and can give a much better answer.

Value and Cost

Before I get to that though, it’s worth reiterating one of the underlying principles behind agile software development. That is a focus on value.

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Agile Manifesto Principle #1

I often say at some point towards the beginning of a new transformation that increasing value is infinite while reducing cost is zero-bounded. This comes up when I get a sense that an organisation views Agile as a means to reduce costs. After all, it’s associated with being more Lean, isn’t it?

I explain that an agile approach might actually cost a little bit more. However, that should be a relatively small increase in cost compared to the potentially significant increase in value. In other words, it’s better to focus on being more effective than more efficient. Or to paraphrase Russell Ackoff, it’s better to focus on doing the right thing, rather than doing the thing right.

Revenue and Tax

Let’s go back to how to allocate costs to different types of expenditure for agile development teams. The usual reason for this is to minimise tax liabilities. After all, minimising tax will increase profit. As a result, organisations try to design their processes and practices to make it simpler to minimise tax. That’s fairly easy in a traditional, large batch, stage-gate process. Different stages can count as different expenditures. The challenge is that it’s not so easy with agile teams who focus more on collaboration and flow.

What Johanna said is that it’s better to focus on increasing revenue than on reducing tax. Just like value and cost, increasing revenue is infinite while reducing tax is zero-bounded. Thus increasing revenue, and not minimising tax, is what should drive decisions on ways of working.

In other words, don’t worry about it. That’s the best agile accounting approach to CapEx and OpEx. Instead, what will have the maximum impact is your real strategy and how you are going to deploy it.

How to use Hexis for Powerful Strategy Deployment

Hexi Base Kit

This post is a follow-up to the list of 50 Quick Ideas To Improve Your Agile Transformation, but from the perspective of The Cynefin Company’s Hexis. I will describe what Hexis are, along with how I think they could be used as part of Strategy Deployment.

What are Hexis?

Hexi Method Kits are a product from Dave Snowden and The Cynefin Company. They are designed to enable the (re)combination of different methods, practices, tools, and techniques into new assemblies. This allows unique solutions to be created for different contexts. A Hexi Kit primarily contains a set of hexagonal cards (the Hexis), each of which describes a single, granular unit within a body of knowledge. Further, there are pieces which allow an additional layer of information to be overlaid.

Currently, there are two kits available:

  • a base pack with core Cynefin methods and tools, and
  • an EU Field Guide kit based on the publication of the same name.

However, there are plans to create additional kits, including an Agile one based on Ivar Jacobson’s Essence. Dave and Ivar explain more in this meetup video on “Seeking vendor independent approaches to Agile and agility”. Moreover, there may be third parties who create their own kits. Together, this sets up the potential to create a rich ecosystem of ideas with which people can explore and discover new solutions to their challenges.

What has this got to do with Strategy Deployment?

My definition of Strategy Deployment is “any form of organisational improvement in which solutions emerge from the people closest to the problem”. Given that, I’m excited about the prospect of using Hexis as a way of helping those solutions emerge. The Hexis provide a way of starting with a set of known and granular options. However, a diverse group of people can combine them, and add to them, in their own way. Thus they provide a mechanism for that emergence.

In particular, when the Agile Hexis are available, they can help organisations discuss and design their own approaches to an agile transformation, rather than taking a pre-designed off-the-shelf one.

Given all that, I’m looking forward to finding opportunities to use Hexis and to learning more about how to realise their potential.

What would Strategy Deployment Hexis be?

The possibility of creating third-party kits makes me wonder about a set of Hexis based on the X-Matrix TASTE model. This would help organisations to design their strategy deployment approach, as well as their agile transformation.

Thus, as well as a Hexi for the X-Matrix itself, there could be a set of “organising” Hexis for each element of TASTE. These would be similar to the EU Field Guide Stages of Assess, Adapt, [<>] (aporetic turn), Exapt, and Transcend for those familiar with that. That gives us:

  • X-Matrix
  • True North
  • Aspirations
  • Strategies
  • Tactics
  • Evidence

In addition, for each of those elements, there could be a set of related Hexis for various techniques or exercises. Consequently, these could be used to begin working out how to go about discovering what strategy deployment might look like. Given that, an initial brainstorm came up with this.

  • True North
  • Aspirations
  • Strategies
  • Tactics
  • Evidence
      • Leading / Input metrics
      • Flow Metrics
      • Catchball

There are plenty more ideas from my list of 50 Quick Ideas To Improve Your Agile Transformation that could be included. Furthermore, some of those don’t necessarily fit neatly into the above categorisation, and they don’t have to. However, this should give a high-level idea of how a Hexis Kit might be created which could include Strategy Deployment related concepts. Moreover, it should give a glimpse of how those Hexis could be used, in combination with others, to invite people to design their own agile transformation.

Nine Surprising Strategy Deployment Books with Powerful Lessons

Strategy Deployment Bookshelf

In a recent post, I said I would put together a list of recommended books that have influenced my thinking on Strategy Deployment over the years. This post includes that list! Most of them I have previously blogged about in my series on Strategy Deployment and other approaches, and I have included links to those posts here as well.

Getting the Right Things Done: A Leader’s Guide to Planning and Execution by Pascal Dennis
  • A very accessible book from the LEI which introduces the basic concepts of Strategy Deployment. It includes sample A3 templates, although not the X-Matrix.
Hoshin Kanri for the Lean Enterprise: Developing Competitive Capabilities and Managing Profit by Thomas L. Jackson
  • A less accessible book, but one which introduces and describes the X-Matrix in great detail. While not an easy read, and very manufacturing-focussed, it contains a number of nuggets of gold.
The 4 Disciplines of Execution: Achieving Your Wildly Important Goals by Sean Covey, Chris McChesney and Jim Huling
Idealized Design: How to Dissolve Tomorrow’s Crisis…Today by Russell L. Ackoff
  • Another more general business book that puts the emphasis on setting a trajectory of change (the idealised design) rather than defining a target solution.
  • I referenced this book in Strategy Deployment and Idealised Design
Good Strategy Bad Strategy: The Difference and Why it Matters by Richard Rumelt
  • This is the best book on strategy I have read so far. Rumelt has great insights and practical recommendations on how to create a good strategy and how to avoid a bad strategy.
  • I referenced this book in Good Agile Bad Agile
Playing to Win: How Strategy Really Works by A. G. Lafley and Roger L. Martin
  • Another great book on how to define a good strategy which describes the Strategic Choice Cascade and a set of powerful questions to ask about a strategy.
  • I referenced this book in Strategy Deployment and Playing to Win
The Art of Action: How Leaders Close the Gaps between Plans, Actions and Results by Stephen Bungay
  • This book describes and introduces the concept of auftragstaktik from General Helmuth von Moltke, the Chief of the Prussian General Staff during the Franco-Prussian War. Bungay applies that to businesses with the Directed Opportunism model.
  • I referenced this book in Strategy Deployment and Directed Opportunism
Turn the Ship Around!: A True Story of Turning Followers into Leaders by L. David Marquet
Team of Teams: New Rules of Engagement for a Complex World by General Stanley McChrystal with Tantum Collins, David Silverman and Chris Fussell

That’s nine so far. I’m sure the list will grow and I’ll try and keep this page updated when it does.

Strategy Deployment and Team of Teams

This post is about Strategy Deployment and the book Team of Teams: New Rules of Engagement for a Complex World by General Stanley McChrystal, with David Silverman, Tantum Collins, and Chris Fussell. It’s another great book which has nothing to do with Lean or Agile explicitly. However, it contains some great, relevant stories and lessons from the post-9/11 war in Afghanistan and the Middle East.

It is another post comparing my views on Strategy Deployment and other approaches. This is because there are a couple of key points that jumped out at me in relation to Strategy Deployment. First, the description of what a Team of Teams actually is. Second, the description of the overall approach as an example of Strategy Deployment.

What is a Team of Teams?

The following diagram sums up the way the book defines a Team of Teams, compared to how I suspect many people think of it. This includes me before I read the book!

Team of Teams Structure

In other words:

on a team of teams, every individual does not have to have a relationship with every other individual; instead the relationships between the constituent teams need to resemble those between individuals on a given team.

Team of Teams p128

The important element for me is the differentiation between a Team of Teams and a Command of Teams. I often see scaled agile structures (such as the SAFe Agile Release Train) referred to as a Team of Teams. In most cases, I would say they are actually a Command of Teams. Similarly, a Strategy Deployment structure can often be more of a Command of Teams. In hindsight, the visualisation I used in a previous post on the Dynamics of Strategy Deployment is guilty of this.

I recently came across a post on Creating Learning Organisations which included a picture of what Russell Ackoff called a Circular Organisation. While not exactly the same as what is described in Team of Teams, I think it is a closer representation of a possible way of creating that dynamic.

McChrystal describes that dynamic by saying “we didn’t need every member of the Task Force to know everyone else; we just needed everyone to know someone on every team.”

The Circular Organisation

How does this relate to Strategy Deployment?

Team of Teams introduces the concepts of empowered execution and shared consciousness. Shared consciousness is defined as a “state of emergent, adaptive organisational intelligence”. Empowered execution is described as when “officers, instead of handing the decision to me and providing guidance, were now entrusted with the responsibility of a decision”. The two go hand in hand as shown in the following diagram.

Team of Teams Dynamics

In other words:

The speed and interdependence of the modern environment create complexity. Coupling shared consciousness and empowered execution creates an adaptable organisation able to react to complex problems.

Team of Teams p245

This strikes me as very similar to the Alignment and Autonomy model described by Stephen Bungay. Autonomy is equivalent to Empowered Execution, and Alignment is equivalent to Shared Consciousness.

From this, we can use my definition of Strategy Deployment as “any form of organisational improvement in which solutions emerge from the people closest to the problem”. Empowered execution is what enables the solutions to emerge from the people closest to the problem. Shared consciousness is what creates a common understanding of what those problems are and their organisational context.

Team of Teams is another book which I’ll be adding to my list of top recommendations for people wanting to learn more about Strategy Deployment. I really need to publish that list somewhere!

50 Quick Ideas To Improve Your Agile Transformation

50 by Jonathan Khoo

I saw a tweet recently from Ben Mosior (@HiredThought) about creating a “(card) deck of tactics to level up how you and your teams do strategy”. Unsurprisingly, this piqued my interest, although it turned out that the focus was strategy itself, as opposed to strategy deployment. However, that then raised the question of what a strategy deployment card deck, which could be used in an agile transformation, might look like.

While starting to think about what the content might be, it also reminded me of Gojko Adzic and David Evans’ book “50 Quick Ideas to Improve Your User Stories” and its sister books in the series. The obvious first step would be to see if I could come up with a list of 50 quick ideas for strategy deployment, and this is that initial list.

Some caveats first. It’s a very rough first version which I’m sure will change over time. In fact, it has already changed since I started to write this post. I expect some ideas to merge, and some to split out into multiple ideas. While some are quite specific, others may prove too abstract and not turn out to be useful or relevant. Some others are almost certainly missing, or I just don’t know enough about them yet (e.g. Substrate-Independence Theory!)

I’ve tried to organise the list of ideas into similar themes and added some additional links or clarifying words where I can. I’ve also tried to give credit or provide links or references where there has been a clear influence (or appropriation!).

So without further ado, here is the list. I’m curious what feedback people have, and whether this is an idea that is worth pursuing in itself…

Invitation

  1. Engagement Models
  2. Clean Language (by David J. Grove)
  3. Curiosity
  4. Ladder of Inference (by Chris Argyris)
  5. Improvisation
  6. Agendashift (by Mike Burrows)
  7. Liberating Structures (by Henri Lipmanowicz and Keith McCandless)
  8. Vector Theory of Change (via Dave Snowden)
  9. Four Disciplines of Execution (by Chris McChesney, Jim Huling, Sean Covey)

TASTE

  10. True North
  11. Aspirations (as a vector)
  12. Strategy
  13. Playing to Win (by Roger L. Martin)
  14. Even Over Statements
  15. Tactics
  16. Evidence (as a vector)
  17. Correlations
  18. X-Matrix

Situational Awareness

  19. Cynefin (by Dave Snowden)
  20. Coherence
  21. Sensemaking (via Dave Snowden)
  22. Evolutionary Potential (sidecasting)
  23. Emergent Strategy (by Henry Mintzberg)
  24. Wardley Mapping (by Simon Wardley)
  25. Impact Mapping (by Gojko Adzic)
  26. Team Topologies (by Matthew Skelton and Manuel Pais)
  27. Strategy Kernel
  28. Idealised Design (by Russell L. Ackoff)
  29. Present Thinking (via Jabe Bloom and Ben Mosior)

Strategy Deployment

  30. Backbriefing
  31. Intent
  32. Constraints
  33. Directed Opportunism (by Stephen Bungay)
  34. Mission Command (alignment, autonomy, agency)
  35. Leader-Leader (by L. David Marquet)
  36. Catchball
  37. Toyota Kata (by Mike Rother)
  38. Real Options (by Chris Matts, Olav Maassen and Chris Geary)
  39. OKRs

Deliberate Discovery

  40. Experiments
  41. Failure
  42. Thinking in Bets (by Annie Duke)
  43. Ritual Dissent (via Dave Snowden)
  44. PDSA
  45. Predictability (as an outcome)
  46. Responsiveness (as an outcome)
  47. Productivity (as an outcome)
  48. Quality (as an outcome)
  49. Sustainability (as an outcome)
  50. Value (as an outcome)

How to Measure the Predictability of Agile

This post follows up a Twitter thread I posted in November exploring ways of measuring the predictability of teams. I also discussed some of these ideas in a Drunk Agile episode.

Fortuneteller by Eric Minbiole

When I begin working with an organisation on their agile transformation, an early conversation is around successful outcomes. My work on Strategy Deployment is all about answering the question “how do I know if agile is working?”.

Sometimes the discussion is around delivering more quickly (i.e. being more responsive to business and customer needs). Other times it is about delivering more often (i.e. being more productive and delivering more functionality). Both of these are relatively easy to measure. Responsiveness can be tracked in terms of Lead Time, and productivity can be measured in terms of throughput.

However, the area that regularly gets mentioned that I’ve not found a good measure for yet is predictability. In other words, delivering work when it is expected to be delivered.

Say/Do

Before I get into a few ideas, let’s mention one option that I’m not a big fan of – the “say/do” metric. This is a measure of the ratio of planned work to delivered work in a time-box.

Firstly, this relies on time-boxed planning. This means it doesn’t work if you’re using more of a flow, or pull-based process.

Secondly, the ratio is usually one of made-up numbers – either story point estimates for Scrum or business value scores for SAFe’s Flow Predictability. This makes it far too easy to game, by either adjusting the numbers used or adjusting the commitment made. All it takes is to over-estimate and under-commit to make it look like you’re more predictable without actually making any tangible improvement.
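To make that gaming concrete, here is a minimal arithmetic sketch. The point values are entirely made up for illustration; the only thing it shows is that the ratio improves when you commit to less, even though nothing more gets delivered.

```python
# Say/do ratio: delivered work / planned work in a time-box (made-up numbers).
planned_points = 30
delivered_points = 24
print(delivered_points / planned_points)  # 0.8 -> "80% predictable"

# Game it by under-committing (or over-estimating): the ratio looks better
# even though exactly the same amount of work was delivered.
planned_points = 24
print(delivered_points / planned_points)  # 1.0 -> "100% predictable"
```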

System Stability

Another approach to predictability is to say that a system is either predictable, or it is not. With this frame, the concept of improving predictability is not valid. The idea builds on the work of Donald J. Wheeler, Walter A. Shewhart and W. Edwards Deming. These three statisticians would treat a system – in this case, a product delivery system – as either stable or unstable. An unstable system, with special cause variation, is unpredictable. By understanding and removing the special-cause variation, we are left with common-cause variation, and the system is now predictable.

For example, we can look at Lead Time data over time. For a stable system, WIP will be managed and Little’s Law will be adhered to. In this scenario, we can predict how long individual items will take by looking at percentiles. Different percentiles give us different degrees of confidence. For the 90th percentile (P90), we can say that 90% of the time we will complete work within a known number of days.
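As a minimal sketch of what that looks like in practice (the Lead Time data and the use of numpy are my own assumptions, not from the post), the percentiles can be read straight off the historical Lead Times of completed items:

```python
import numpy as np

# Hypothetical Lead Times, in days, for recently completed work items.
lead_times = [3, 5, 7, 8, 9, 11, 12, 14, 15, 16, 18, 21, 25, 30, 42]

# Each percentile gives a degree of confidence for a single item's Lead Time.
p50, p90 = np.percentile(lead_times, [50, 90])
print(f"50% of items complete within {p50:.0f} days")
print(f"90% of items complete within {p90:.0f} days")
```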

With this approach, the tangible improvement work of making a system predictable is that of making the system stable. By removing the noise of special cause variation, we are able to make predictions on how long work will take and when it will be done.

Meeting Expectations

An alternative approach is to think of predictability as meeting expectations. Let’s assume I know my P90 Lead Time as described above. I know that there is only a 10% chance of delivering later than that time. However, there is still a 90% chance of delivering earlier than that time. Similarly, I might know my 10th percentile Lead Time (P10). This tells me that there is a 90% chance of delivering later, but only a 10% chance of delivering sooner.

If the distribution of the data is very wide, then there is a wide range of possibilities, and it is difficult to predict a date between the “unlikely before” P10 date and the “unlikely after” P90 date. That makes it difficult to set realistic expectations. Saying you might deliver anytime between 1 and 100 days is not being predictable. Julia Wester describes this well in her blog post on the topic, using the diagram below.

Seeing predictability at a glance on a Cycle Time Scatterplot from ActionableAgile

With this approach, the tangible improvement work is reducing the distribution of the data to remove the outliers.

Inequality

One way of measuring this variation in the distribution is simply to look at the ratio of the P90 Lead Time to the P10 Lead Time. (Hat-tip to Todd Little for this suggestion). This is similar to how Income Inequality is measured. Thus if our P90 Lead Time is 100 days, and our P10 Lead Time is 5 days, we can say that our Lead Time Inequality is 20. However, if our P90 Lead Time is 50 and our P10 Lead Time is 25, our Lead Time Inequality is 2. We can say that the lower the Lead Time Inequality, the more predictable the system is.
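As a sketch, the calculation is just the ratio of two percentiles. The function name and sample data below are mine, not an established implementation, and the two hard-coded cases simply replay the arithmetic from the paragraph above.

```python
import numpy as np

def lead_time_inequality(lead_times):
    """Ratio of the 90th to the 10th percentile Lead Time.
    Lower values indicate a narrower, more predictable distribution."""
    p10, p90 = np.percentile(lead_times, [10, 90])
    return p90 / p10

# The two illustrative cases from the text: P90/P10 of 100/5 and 50/25.
print(100 / 5)   # 20 -> wide distribution, hard to set expectations
print(50 / 25)   # 2  -> narrow distribution, more predictable

# ~4.8 for the hypothetical sample data used earlier.
print(lead_time_inequality([3, 5, 7, 8, 9, 11, 12, 14, 15, 16, 18, 21, 25, 30, 42]))
```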

Coefficient of Variation

Another way is to measure the coefficient of variation (CV), which gives a dimensionless measure of how close a distribution is to its central tendency (Hat-tip to Don Reinertsen for this suggestion). The coefficient of variation is the ratio of the standard deviation to the mean. A dataset with a wide variation would have a larger CV. A dataset of all equal values would have a CV of 0. Therefore, we can also say that the lower the Lead Time Coefficient of Variation, the more predictable the system is.
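Again as a rough sketch (the helper function and sample data are assumptions of mine), the CV is the standard deviation divided by the mean:

```python
import numpy as np

def lead_time_cv(lead_times):
    """Coefficient of Variation: standard deviation divided by the mean.
    Dimensionless; 0 means every Lead Time is identical."""
    data = np.asarray(lead_times, dtype=float)
    return data.std() / data.mean()

print(lead_time_cv([3, 5, 7, 8, 9, 11, 12, 14, 15, 16, 18, 21, 25, 30, 42]))  # wider spread, higher CV
print(lead_time_cv([10, 10, 10, 10]))  # 0.0 -> perfectly consistent
```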

Consistency

There are probably other statistical ways of measuring the distribution, which cleverer people than me will hopefully suggest. What I think they have in common is that they are actually measuring consistency (Hat-tip to Troy Magennis for this suggestion). Lead Times in a wide distribution might be mathematically predictable, but they are not consistent with each other. Lead Times in a narrow distribution are more consistent with each other and thus allow for more reliable predictions.

Aging WIP

One risk with these measures of Lead Time consistency is that there are essentially two ways of narrowing the distribution. One is to look at lowering the upper bound and work to have fewer work items take a long time. This is almost certainly a good thing to do! The other is to look at increasing the lower bound and work to have more items take a short time. This is not necessarily a good thing to do! That raises a further question. How do we encourage more focus on decreasing long Lead Times and less focus on increasing short Lead Times?

The answer is to focus on Work in Process and the age of that WIP. We can measure how long work has been in the process (started but not yet finished). This allows us to identify which work is blocked or stalled and get it moving again. Thus we can get it finished before it becomes too old. Measuring Aging WIP encourages tangible improvements by actively dealing with the causes of aged work. This might be addressing dependencies instead of just accepting them. Or it could be encouraging breaking down large work items into smaller deliverables (right-sizing).
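A minimal sketch of tracking Aging WIP might look like the following. The item names, dates, and the P90 threshold are all invented for illustration; the idea is simply to surface items whose age already exceeds the historical P90 Lead Time.

```python
from datetime import date

# Hypothetical in-progress items and the date each was started.
wip = {
    "Checkout redesign": date(2021, 12, 1),
    "Payments API": date(2021, 12, 20),
    "Search bug fix": date(2022, 1, 4),
}

today = date(2022, 1, 10)
p90_lead_time = 18  # days, taken from historical completed items

# Flag anything older than the historical P90 Lead Time, so it can be
# unblocked, split, or right-sized before it gets any older.
for name, started in wip.items():
    age = (today - started).days
    flag = "  <-- older than P90, investigate" if age > p90_lead_time else ""
    print(f"{name}: {age} days in process{flag}")
```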

In summary, I believe that measuring Aging WIP and Blocked Time will lead to greater consistency of Lead Times with reduced Lead Time Inequality and Coefficient of Variation, which will, in turn, lead to better predictions of when work will be done.

Caveat

A couple of final warnings to wrap up. The first is that these are just ideas at this stage. I’m putting them out here for feedback and in the hope that others will try them as well as me. Secondly, I am not promoting removing all variation completely. The following quote and meme seem appropriate given the seasonal timing of this post!

Deviation from the norm will be punished unless it is exploitable.

Strategy Deployment and Idealised Design

RTF W38, 1952, East Germany

This post introduces Idealised Design, as described in the book Idealized Design: How to Dissolve Tomorrow’s Crisis…Today by Russell L. Ackoff, Jason Magidson and Herbert J. Addison, and explores how it relates to Strategy Deployment.

The post is a continuation of the series on Strategy Deployment And other approaches.

What is Idealised Design?

The basic premise of Idealised Design is to imagine that the current system has been destroyed and to redesign the system for the present time. There are two key implications with this. Firstly, you are starting with a blank sheet of paper unconstrained by the past. Secondly, you are designing a system that will work today, not in a perfect future.

Alongside this premise, there are two constraints:

  1. The solution should be technologically feasible. That means it shouldn’t rely on science fiction fantasies or conceptual inventions which do not exist yet, such as vapourware.
  2. The solution should be operationally feasible. That means that it can survive in the current environment, which may include any existing culture, structures, processes or policies.

Finally, there is one requirement:

  • The solution is capable of being improved. That means that it is not final or perfect, but the first step in an evolution.

In other words:

“The product of an idealised design is neither perfect, ideal, nor utopian, precisely because it can be improved. However, it is the best ideal-seeking system its designers can imagine now.”

How does Idealised Design relate to Strategy Deployment?

As a reminder, my definition of Strategy Deployment is:

“any form of organisational improvement in which solutions emerge from the people closest to the problem.”

Idealised Design supports organisational improvement. By imagining the current system has been destroyed, the approach is not to tweak and locally optimise the current solution, but to consider the system as a whole.

Idealised Design is inclusive of the people closest to the problem. By imagining the current system has been destroyed, any expertise in that current system becomes irrelevant. This creates an opportunity to include all perspectives and invites a wider group to come up with new solutions.

Idealised Design is emergent. By creating solutions for the immediate present the focus is on doing “the next right thing” (to quote Anna in Frozen II), or as Dave Snowden puts it, “moving to the adjacent possible and then looking again”. Thus Idealised Design is based on setting a direction and gradually moving towards a True North.

How can Idealised Design implement Strategy Deployment?

Obviously, there is a huge amount more detail about implementing Idealised Design in the book. However, one way of using the approach that I like is articulated in the Ideal Present Canvas (below) from Jabe Bloom and Ben Mosior. Working through these four sections is a great way to frame conversations about the elements of TASTE that can be used to populate an X-Matrix, e.g.

  • True North – what is the orientation of the Ideal Future?
  • Aspirations – what do we want to achieve in the Ideal Future?
  • Strategies – what choices will guide us in how to solve the Present Mess, enable the Ideal Future, and avoid the Future Mess?
  • Tactics – what should we do today in the Ideal Present?
  • Evidence – what would indicate that the Ideal Present is (or isn’t) enabling the Ideal Future?
Ideal Present Canvas for Idealised Design

The telephone image refers to the story told by Ackoff about the CEO of Bell Laboratories announcing, “Gentlemen, the telephone system of the United States was destroyed last night”.

Continuous Strategy is the new Strategy Deployment

Cadiz Wilderness and Valley

I’ve been trying to come up with a better name for Strategy Deployment for a long time. One that has stuck with me recently is Continuous Strategy.

What’s wrong with Strategy Deployment?

I’ve never loved Strategy Deployment as a name, although I much prefer it to the Japanese term Hoshin Kanri. People misinterpret “deployment” as pushing or forcing the strategy down on people. It is more about moving strategy to where people can use it. I like Directed Opportunism as an interesting alternative from Stephen Bungay. However, people still misunderstand “directed” as people being given instructions rather than shown direction. And “opportunism” can be misunderstood as people being taken advantage of rather than taking advantage of their own situation.

Why might Continuous Strategy be better?

Naming things is hard. Let’s look at why Continuous Strategy might be a good name.

  • It’s positive. It describes what it is for, rather than what it is against. That’s one of the reasons I’m not a big fan of things like #NoEstimates and #NoProjects as names. I am a fan of the ideas behind them.
  • It aligns with the general pattern of names such as Continuous Testing, Continuous Deployment, Continuous Delivery etc. If things are hard (and strategy is hard), do them more often and more continuously. This creates faster feedback and enables greater learning and improvement.
  • It hints at strategy as something that is emergent over time. This differentiates it from the common view that it is something which can be planned once a year.

What’s wrong with Continuous Strategy?

It’s not a perfect name though.

  • It misses the aspect of solutions emerging from “people closest to the problem” that is meant by deployment.
  • It misses the aspect of strategy as guiding policies that enable people to make their own decisions about what to do.
  • It misses the aspect of idealised design where those decisions are about what to do for the present time, as opposed to an imagined future state.

Having said that, it’s probably difficult for strategy to be continuous without it being inclusive, emergent, and in the present. I’m sure I’ll continue to change my mind and come up with alternative ideas. For now, I’m continuing to experiment with the term Continuous Strategy.

Postscript: The image of the desert was chosen to depict both a continuous landscape and one which is continuously shifting and changing.

Time Capsules and Transformations

Millennium Dome Time Capsule
Blue Peter time capsule dug up 33 years early

Time capsules can be a metaphor for transformation; a prediction of what we think people should know in the future, based on what we know today.

Why time capsules?

The picture to the right is one UK readers might recognise. It shows Richard Bacon and Katy Hill, the then presenters of the popular BBC children’s TV show Blue Peter, burying a time capsule under the Millennium Dome in 1998.

The capsule was supposed to be unearthed again in 2050. Unfortunately, it was accidentally dug up by construction workers in 2017, 33 years too soon. Its location had been lost and forgotten, and the contents ended up going back to Blue Peter, never to be reburied.

The story of the Millennium Dome time capsule provides an interesting metaphor for organisational transformations. Jason Feifer describes in a podcast episode how time capsules are an attempt to communicate with the future. They are a prediction of what we think people will want to know at some point in the future. Further, that prediction is based on what we know about the way we live our lives today.

However, what generally happens is that time capsules are lost or forgotten about, and as a result never opened. Others get accidentally discovered and damaged early, like the Millennium Dome capsule. Moreover, even when they are opened as intended, they are disappointing, with the contents being far less exciting or interesting than originally anticipated. Trying to predict and communicate what people will want to know in the future is usually a fruitless exercise.

What’s that got to do with transformations?

That description of the fate of time capsules seems similar to many organisational transformations. They start out as a prediction for what we think people should be doing in the future. Subsequently, they end up forgotten about, abandoned early as failures, or just not delivering the benefits that were anticipated.

What does this mean for transformations? Rather than looking to the future, we should focus on the present. We should stop trying to design an end state and then close the gap from where we are now. Instead, we should consider the current state and try to make that more ideal. By focussing on where we are now, along with the next adjacent possible ideal states, we can start to make small immediate improvements. These small improvements build up to move us in the right direction.

In other words, we move from our current state to the next ideal current state, and to the next ideal current state after that, and so on. That allows us to learn on the way, navigate around unexpected obstacles, and take advantage of unanticipated benefits.

Backbriefing and the Curse of Knowledge

The Charge of the Light Brigade as an example of the Curse of Knowledge
Charge of the Light Brigade by Richard Caton Woodville Jr.

In a previous post on backbriefing, I described it as “a process with which people can check their understanding of the intent of their work and whether their plans will meet that intent”. On reflection, I realised I missed an important element. It is also leadership checking whether they have described their intent with enough clarity. Put another way, backbriefing is leadership checking they have not been struck by the Curse of Knowledge.

What is the Curse of Knowledge?

Wikipedia describes the Curse of Knowledge as “a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand”. This is often the cause of chaos and confusion. The missing information leads to misunderstanding, which leads to people making poor decisions or taking incorrect action.

There is a really simple experiment you can run to demonstrate the Curse of Knowledge. Ask someone to tap out the rhythm of a well-known tune to an audience. Then ask them how many people in the audience will be able to guess the tune. Invariably they will estimate quite a high number. In reality, the number of people who do guess the tune is very low. The individual tapping out the tune has the Curse of Knowledge because they can “hear” the tune in their heads, which the audience cannot.

This experiment was run by Elizabeth Newton at Stanford University in 1990. She found that people estimated that 50% of the audience would guess the tune. In fact, less than 2% actually did.

Another well-known example is the Charge of the Light Brigade. This is described by Tim Harford in his podcast episode The Charge of the Light Brigade and the Valley of Death. Lord Raglan’s high position gave him a view of the whole battlefield which the troops on the ground did not have. That missing information meant that his orders were misinterpreted and incorrectly carried out. In particular, his instructions to the light cavalry were misunderstood, resulting in a failed assault and very high casualties. Lord Raglan had the Curse of Knowledge.

How does that relate to backbriefing?

Many leaders in organisations make plans and give out instructions like Lord Raglan. They assume that everyone has the same information that they do. While the results may not be quite as dramatic and fatal as the Charge of the Light Brigade, they can be just as disastrous. Plans get incorrectly implemented because the people doing the work can’t see the full picture.

Thus backbriefing is a technique to discover the Curse of Knowledge early. The curse can then be lifted before too much damage is done. By asking people to replay their understanding of the brief, along with what their intentions and plans are for action, any missing or confusing information can be identified and cleared up early, before it causes any problems.

In other words, backbriefing is as much about improving leadership’s communication, as it is about improving the team’s understanding. As such, I am updating my definition of backbriefing to the following.

“a process by which people can check whether the intent of their work has been clearly described and understood and whether their plans to carry out that work will meet that intent”.