Are we tabby cats trying to emulate cheetahs?

Credit: Dennis Church

Credit for the title of this post goes to Sam Murphy, Section Editor at Runner's World UK. Those of you who have seen me recently will probably know that as well as being an advocate of Lean and Agile, I also have a passion for running, and I subscribe to Runner's World. Sam used this title for an article of hers in the September issue, which struck me as having lots of overlaps with how I go about coaching and consulting in businesses. The gist of it was that when training, rather than trying to copy what elite athletes do, we should find out what works for ourselves. Sound familiar?

Here are some quotes:

Dr Andy Franklyn Miller … concluded that ‘a very unique and customised strategy is used by each swimmer to excel’. And if that’s the case, is looking at what the elites are doing and aiming to replicate it the best way to maximise our own sporting success? Or are we tabby cats trying to emulate cheetahs?

Is a very unique and customised strategy used by each successful organisation to excel? Is looking at what these organisations are doing and aiming to replicate it the best way to achieve success? Or are we tabby cats trying to emulate cheetahs?

Dr George Sheehan, runner and philosopher, said “We are all an experiment of one.”

and

Ultra runner Dean Karnazes … writes “I always encourage people to try new things and experiment to find what works best for them.”

It seems athletic training is not so dissimilar to building a successful organisation! Rather than just copying what we may have seen or read about working elsewhere, we should encourage organisations to try new things and be experiments of one. That’s what Strategy Deployment is all about!

Or (to close with the same quote Sam closed her article with) as Karnazes also said:

‘Listen to everyone, follow no-one.’


Kanban Deployment with the X-Matrix

This is a continuation of my musings on Strategy Deployment, the X-Matrix and Kanban Thinking (including Strategy Deployment as Organisational Improv and How Do I Know If Agile Is Working). I’ve been thinking more about the overlap between Strategy Deployment and Kanban and come to the conclusion that the intersection of the two is what could be called “Kanban Deployment” [1].

Let me explain…

To begin with, the name Strategy Deployment describes how a centralised decision is made about strategy, which is deployed such that decentralised decisions can be made on defining and executing plans. The people who are engaged at the coal face are the people who are most likely to know what might (or might not) work. In other words, it's the strategy that is deployed, not a plan.

Similarly, Kanban Deployment can be used to describe how a centralised decision is made about kanban as an approach to change, which is deployed such that decentralised decisions can be made on defining and executing processes. Again, the people who are engaged at the coal face are the people who are most likely to know what might (or might not) work. It's kanban that is deployed, not a process.

With this perspective, we can look at how the X-Matrix could be used to describe a Kanban Deployment in terms of Kanban Thinking. (For a brief explanation of the X-Matrix see a post on how we used the approach at Rally).

The Results describe the impact we want the kanban system to have, and the positive outcomes we are looking to achieve with regard to Flow, Value and Potential. Just like with ‘regular’ Strategy Deployment, an economic model as recommended by Don Reinertsen is likely to provide clues as to what good results would be, as will a good understanding of fitness for purpose.

For Strategies we can look to the Kanban Thinking interventions of Study, Share, Stabilise. Studying the system is a strategy for learning more about the current context. Sharing knowledge is a strategy for creating a common understanding of the work and the way the work is done. Stabilising the work is a strategy for introducing policies which will enable and catalyse evolutionary change.

The Indicators are equivalent to the Kanban Thinking intervention Sense. These measures of improvement, while proxies, should give quick and regular feedback about whether the kanban system is likely to lead to the results.

Lastly the Tactics are equivalent to the Kanban Thinking intervention Search. These are the specific practices, techniques and policies used as part of the experiments that are run. The Kanban Method core practices can also provide guidance as to what tactics can be used to design the kanban system.

While I'm not sure I would want to be overly rigid about defining the strategies, I find the X-Matrix a useful model for exploring, visualising and communicating the elements of a kanban system and how they correlate to each other. As with all tools like this (e.g. A3s), it's not the template or the document that is important, it's the conversations and thinking that happen that have the value.

[1] I did consider the name “Kanban Kanri” for the alliteration, but apart from preferring to minimise Japanese terminology, it’s probably meaningless nonsense in Japanese!


Strategy Deployment as Organisational Improv

At Agile Cymru this week Neil Mullarkey gave a superb keynote, introducing his rules of improv (left). He suggested that businesses can apply these rules to be more creative and collaborative, and that there is a lot of synergy with Agile. Like all the best keynotes, it got me thinking and making connections, in particular about how Strategy Deployment could be thought of as a form of Organisational Improv.

I've blogged about Strategy Deployment a couple of times, in relation to the X-Matrix and Kanban Thinking, and Is Agile Working. Essentially it is a way for leaders to communicate their intent, so that employees are able to decide how to execute. This seems just like an improv scene having a title (the intent), allowing the performers to decide how to play out the scene (the execution).

The title, and rules of the improv game, provide enabling constraints (as opposed to governing constraints) that allow many different possible outcomes to emerge. For example, we tried a game where in small groups of 4-5 people, we told a story, each adding one word at a time. The title was “The Day We Went To The Airport”. That gave us a “True North”, and the rules allowed a very creative story to emerge. Certainly something that no one person could have come up with individually!

However, given our inexperience with improv, the story was extremely incoherent. I'm not sure we actually made it to the airport by the time we had been sidetracked by the stewardesses, penguins and surfing giraffes (don't ask). It was definitely skimming the edge of chaos, and I can't help thinking some slightly tighter constraints could have helped. As an aside, I saw these Coyote/Roadrunner Rules recently (right). Adam Yuret pointed out that they were enabling constraints and I wonder if something like this would have helped with coherence?

What’s this got to do with Strategy Deployment? It occurred to me that good strategies provide the enabling constraints with which organisations improvise in collaborating and co-creating tactics to meet their True North. Clarity of strategy leads to improvisation of tactics, and if we take Neil’s Rules of Improv we can tweak them such that an offer is an idea for a tactic, giving:

  • Listen actively for ideas for tactics
  • Accept ideas for tactics
  • Give ideas for tactics in return
  • Explore assumptions (your own and others’)
  • Re-incorporate previous ideas for tactics

How Do I Know If Agile Is Working?

Moving the queen, Gabriel Saldana, CC BY-SA

“How do I know if Agile is working?” This is a question I've been asked a lot recently in one form or another. If not Agile, it's Scrum, or Kanban or SAFe or something similar. My usual response is something along the lines of “How do you know if anything is working?” And there generally isn't a quick and easy answer to that!

I’ve come to the view that Lean and Agile practices and techniques are simply tactics. They are tactics chosen as part of a strategy to be more Lean and Agile. And becoming more Lean and Agile are seen as important to make necessary breakthroughs in performance in order to deliver desired results.

With that perspective, the answer to “How do I know if Agile is working?” is that you achieve the desired results. That's probably a long time to wait to find out, however, as it is a trailing measure. It is necessary, therefore, to identify some intermediate improvements which might indicate the results are achievable, and leading measures can be captured to give that earlier feedback.

The lack of a quick and easy answer to “How do you know if anything is working?” is often because Lean and Agile have been introduced as a purely tactical initiative, without any thought to how they relate to strategy, what measurable improvements they might bring, and how any strategy and improvements will lead to desirable results. In fact very few people (if any) know exactly what those desirable results are!

I’m increasingly trying to work the other way – what the Lean community call Strategy Deployment. For any transformation to work, everybody in the organisation needs to know what results are being strived for, what the strategic goals are that will enable the necessary changes to get there, and what measurable improvements will indicate progress. Then the whole organisation can be engaged in designing and implementing tactical changes which might lead to improvement. Everything becomes a hypothesis and an experiment, which can be tested, the results shared and adjustments made.

In other words, Strategy Deployment leads to organisations becoming laboratories, where Lean and Agile can inform hypotheses on strategies, improvements and tactics. I think it's the secret sauce to any transformation, which is why I'll be talking about it more at various conferences over the rest of the year.

The first one is Agile Cymru in a couple of weeks. There are a few tickets left, and considering the line-up of speakers, and the ticket cost, it's incredible value. I highly recommend going, and I hope to see you there!


Understanding SAFe PI Planning with Cynefin


Within SAFe, PI Planning (or Release Planning) is when all the people on an Agile Release Train (ART), including all team members and anyone else involved in delivering the Program Increment (PI), get together to collaborate on co-creating a plan. The goal is to create confidence around the ability to deliver business benefits and meet business objectives over the coming weeks and months – typically around 3 months or a quarter.

To many people this feels un-agile. Isn't it creating a big plan up-front and defining work far ahead? However, a recent experience led me to realise why it's not about the plan, but the planning and the dynamics of the event itself. In Cynefin terms, PI Planning is a catalyst for transition and movement between domains.

Let me explain.

Before PI Planning, and a move into an ART cadence, many organisations are in Disorder, relying on order and expertise when they should be using experiments and emergence. The scheduling of a PI Planning event triggers a realisation that there is a lack of alignment, focus or vision. In order to prepare for the event people have to agree on what is important and what the immediate objectives, intentions and needs are. In short: what does the Program Backlog look like, and what Features should be at the top? The conversations and work required to figure this out are the beginning of a shallow (or sometimes not so shallow!) dive into Chaos.

During PI Planning is when the Chaos reaches a peak, typically at the end of Day One, as it becomes clearly apparent that the nice ordered approach that was anticipated isn't achievable. More conversations happen, decisions are made about the minimum plausible solution and hypotheses are formulated about what might be possible. This is when action happens as everyone starts to pull together to figure out how they might collectively meet the business objectives and move the situation out of Chaos into Complexity.

After PI Planning there is still uncertainty, and the iteration cadences and synchronisation points guide the movement through that uncertainty. Feedback on the system development, transparency of program status and evolution of the solution are all necessary to understand progress, identify learning and inform ongoing changes. This may require subsequent dips into Chaos again at future PI Planning events, or over time the ART may become more stable as understanding grows, and PI Planning in the initial form may eventually become unnecessary.

It is this triggering of a journey that makes me believe that PI Planning, or equivalent “mid-range” and “big-room” planning events, are a keystone to achieving agility at scale for many organisations. I wonder how this matches others' experience of successful PI Planning meetings? Let me know with a comment.


A Kanban Thinking Pixar Pitch

I’ve been using the Pixar Pitch as a fun way for teams to discuss and explore the problem they are trying to solve by telling a story about what has been happening that led to the need for a kanban system. I thought it might be useful to use the formula to tell a dramatised story of what led me to first try using a kanban approach, and why that led to Kanban Thinking.

The Pixar Pitch

Once upon a time there was a team who were using Scrum to manage their work. They were cross-functional, with a ScrumMaster and Product Owner, developing and supporting a suite of entertainment websites for a global new-media organisation.

Every Sprint they prioritised and planned the next couple of weeks, developed and tested functionality, fixed live issues, released updates when ready, and reviewed and retrospected their work.

And every Sprint the team was unhappy with the way they were working. The Product Owner felt that the developers could be more productive, and the developers felt that the Product Owner could be giving more guidance on what to build. Stories were completed, but features took a long time to release as the team thrashed to get the functionality right, meet commitments and increase velocity.

So every Sprint they inspected and adapted. The Sprint length was shortened to get quicker feedback. The style of User Stories was adjusted to try and focus more on value. And yet things didn't improve significantly.

One day they decided to stop focussing on doing Scrum, and start using ideas from Kanban, experimenting with some different approaches that they hadn’t previously tried.

Because of that they stopped using Sprints and de-coupled their cadences, reprioritising their ready queue every week, planning only when they had capacity to start new work, releasing and reviewing every fortnight, and retrospecting every month.

And instead of breaking work down into small User Stories that would take less than 2 weeks, they focussed on finishing larger MMFs, which might take months in some cases.

And they paid more attention to their Work in Process, in particular making sure they only worked on one large new feature at a time.

And instead of the Sprint Burndown they tracked their work using a Cumulative Flow Diagram, Parking Lot and Lead Time report.

Because of that the whole team became more focussed on delivering valuable features, were more collaborative in how they figured out what the details of those feature were, and were more reliable in when they delivered them.

Until finally everyone was working well together as a much happier group, delivering much more value, and in a good place to continue to improve.

Kanban Thinking

The point of this story is not to try and make one approach sound better than another, or to suggest that any approach is a bad choice in some way. Rather it is that the team learned about what would and wouldn't work by asking questions about their process and experimenting with all aspects of it. The end result might have ended up being exactly what an expert or consultant might have recommended or coached. That's OK, because what is important is the understanding that was created about why things are as they are. This capability to solve problems, as opposed to implement solutions, sets a team up to continually evolve and improve as their context changes.

This principle – helping teams to solve problems rather than implement solutions – is what fascinated me about kanban when I first came across it and started exploring how it could be used. It's what ultimately led to the emergence of Kanban Thinking as a model for helping achieve this, and the Kanban Canvas as a practical tool.


Insights from a Kanban Board

Donkey Derby

I was working with a team this week, part of which involved reviewing their kanban board and discussing how it was evolving and what they were learning. There was one aspect in particular which generated a number of insights, and is a good example of how visualising work can help make discoveries that you might not otherwise unearth.

The board had an “On Hold” section, where post-its were collected for work on which some time had been spent, but which was not actively being progressed, was not expected to be progressed, and was not blocked in any way. Quite a lot of work had built up in “On Hold”, which made it an obvious talking point, so we explored what that work was, and why it was there. These are the things we learnt:

  1. Some of the work wasn't really “On Hold”, but consisted of requests which had been “Assessed” (the first stage in the workflow) and deemed valid and important, but not urgent enough to schedule imminently. This led to a discussion about commitment points, and a better understanding of what the backlog and scheduling policies were. In this case, items were not really “On Hold”, but had been put back on the backlog. In addition, a cadence of cleaning the backlog was created to remove requests that were no longer valid or important.
  2. Some of the work was “On Hold” because while it was valid and important, the urgency was unknown. It was an “Intangible” class of service. As a result it was always de-prioritised in favour of work where the urgency was clearer. For example work to mitigate a security risk wasn't going to be urgent enough until the risk turned into a genuine issue which needed resolving. To avoid these items building up, and generating even greater risk, a “Donkey Derby” lane was created as a converse of their “Fast Track” lane. Work in this lane would progress more slowly, but at least there would always be at least one “Intangible” item in process.
  3. A very few items were genuinely “On Hold”, but they were lost amidst the noise of all the other tokens. Thus any discussion about what to do, or what could be learned, was being lost.

In summary, by visualising the “On Hold” work (however imperfect it was), and creating a shared understanding, the team were able to improve their knowledge creation workflow, better manage their risk and increase their ability to learn from their work.


A Kanban Canvas Example

I've been doing the rounds of conferences and meet-ups recently, introducing the Kanban Canvas as a way of becoming a learning organisation. One of the more common requests and pieces of feedback is that an example would be useful. I have mixed feelings about this. On the one hand, giving an example feels like a slippery slope to giving a solution, and brings with it the risk of unconsciously narrowing the field of vision when looking for new answers. On the other hand, it's come up often enough not to ignore!

I've decided to put together a very simple example. Hopefully too simple, and simplistic, to be copyable, but concrete enough to help show how the canvas could be used. It's based on the experience from when I introduced a kanban system at Yahoo! I have taken some liberties in putting it together to keep it understandable – we didn't formally use this approach, and it was a long time ago!

Click on the image to see a larger version. I have added some commentary about the different elements below.

en_Kanban Canvas v1-0 example

System

I have described the systemic challenge using the Pixar Pitch format (deferring the ‘Until finally’ to be covered by Impacts). It could be summarised by the headline “Busy, Not Productive”.

Once upon a time a team was developing websites using Scrum

Every Sprint the team would struggle to agree with the PO on what they could deliver, and as a result they delivered very little.

One Sprint the team started trying to define smaller stories in more detail up front.

Because of that planning meetings became painfully long, and even less was delivered.

Because of that we tried a kanban system.

Impacts

There is a single utopian and dystopian future for each Impact, using colour to differentiate. Utopian positive (+ve) impacts are green. Dystopian negative (-ve) impacts are red.

Flow

These are related to being able to release functionality early and often

  • +ve: Continuous deployments of new functionality
  • -ve: Websites broken, out of date and shut down

Potential

These are related to being a stable and high performing team

  • +ve: People clamouring to be on the team
  • -ve: Massive team turnover. No-one able to make changes.

Value

These are related to being able to build the best product

  • +ve: Award winning websites, putting competition out of business.
  • -ve: No page views or visitors. Ridiculed on social media.

Interventions

Study

We identified 3 types of work: defects in production, enhancements to released functionality, and new features. These generally equated to small, medium and large pieces of work.

We were following a basic Scrum workflow of Backlog, In Process, and Done.

Share

To share the work type, we used token size. Defects were on small index cards, enhancements on medium, and features on large. Additionally, token colour differentiated failure and value demand. Defects were on red cards, whilst the others were on green cards.

To share the status of work in process, we used token placement in a matrix to represent progress and quality. The horizontal placement showed progress, while the vertical placement showed quality. Thus a token in the lower right corner was nearly finished, but with unsatisfactory quality.

Completed work was kept visible by keeping its token placement in a Done column until the end of a quarter.

Stabilise

We replaced the backlog with a limited “Ready Queue” to help clarify what the next priorities were. (This queue would be restocked every week).

Work in Process was limited by work type, focussing on only 1 feature and 2 enhancements at a time. Defects were not limited, but the goal was always to eliminate defects rather than optimise them. Additionally, there was a “Top 5” work items to focus on, which could be of any work type. (We had 5 “star” adornments as token inscriptions to share what the top 5 were).

The goal was to get work into production, so our Definition of Done policy was simply that work must be approved by the UX, QA and Ops teams and actually deployed and available.

Sense

The primary desired impact was Flow, with an outcome of improved productivity being an indicator. This was quantified by measuring throughput of items per week, with an increase being the goal.

A risk, however, was that cutting quality, or optimising for defects, could be a negative consequence of focussing on throughput. This would badly impact both Value and Potential. Thus a count of live defects was also measured, with a decrease being the goal.
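Neither measure needs heavy tooling. As an illustration (the completion dates below are invented for the example, not real data from the team), weekly throughput is just a count of items finished per week:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates for finished work items
completed = [
    date(2009, 3, 2),   # item A
    date(2009, 3, 4),   # item B
    date(2009, 3, 12),  # item C
    date(2009, 3, 13),  # item D
]

def weekly_throughput(dates):
    """Count completed items per ISO week; an upward trend is the goal."""
    return Counter(d.isocalendar()[1] for d in dates)

throughput = weekly_throughput(completed)
```

A count of live defects could be tracked in exactly the same way, with a downward trend as the goal.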

The traditional Scrum cadences were de-coupled, with a new set of cadences put in place:

  • Weekly scheduling meetings to restock the ready queue
  • Planning meetings on-demand when work actually starts
  • Fortnightly review meetings to check progress
  • Monthly retrospective meetings to inspect and adapt the process

Search

While experiments were not formally run, there were essentially two hypotheses being tested. The first was related to the board design, visualising and focussing on the work types. The second was related to the de-coupled cadences, with work flowing across them.

Wrapping Up

I hope this example shows how the canvas can be used to tell a simple story about what the elements of a process are (the interventions), their context (the system), and their goal (the impacts).

If you’d like to use the canvas, please download it. And if you’d like me to facilitate a workshop which uses the canvas to design your kanban system, then please let me know.


Strategy Deployment, the X-Matrix and Kanban Thinking

Strategy Deployment

Kanban Systems are an enabler of evolutionary change. And so is Strategy Deployment. Strategy can be defined as how you will make a positive impact, and this implies change. As the saying goes, “hope is not a strategy”, and neither is doing nothing. Deploying that strategy, as opposed to defining and imposing a tactical plan, enables the evolution of the tactics by the people implementing them.

I have found that putting Strategy Deployment in place is my preferred approach to starting off any change initiative, and that as suggested above, there is a strong synergy with a kanban-based approach. (This is not surprising, given the roots of both in Toyota and Lean). In particular, I have been using a format known as the X-Matrix to setup Strategy Deployment.

The X-Matrix

The X-Matrix is a cornerstone of Strategy Deployment. It is an A3 format that provides a concise and portable shared understanding of how strategy is aligned to the deployed tactical initiatives, alongside leading indicators of progress and anticipated end results. I learned about the X-Matrix from the book “Hoshin Kanri for the Lean Enterprise” by Thomas L. Jackson, and I have previously blogged about how we used the approach at Rally.

CD Form 1-2_A3-X X-matrix

The X-Matrix has 5 primary sections, all of which are connected. At the bottom, below the X, are the results that we hope to achieve. To the left of the X are the key strategies that will get us to those results, and to the immediate right of the X are the measures of improvement that indicate how well we are doing. (Note that this section is labelled “process” as it refers to process improvements.) At the top, above the X, are the tactics that we will use to implement the strategy. Finally, on the far right are the teams that will be involved in the work. To link these together, the corners of the X-Matrix are used to show the strength of correlation or contribution between the different elements.

Thus it becomes easy to visualise and explore how a strategy, or a measure, correlates to achieving results. Similarly, it is easy to see and discuss how a tactic will correlate to achieving a strategy, or how it will contribute to moving the needle on a measure. And it is clear who is accountable for or involved in each tactic. Having all this on a single page helps create clarity and alignment on the why, how, what and who of the work.
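As a rough illustration of this structure, the five sections and their corner correlations could be captured as data. This is purely a sketch to show the shape of the model – the strategies, measures and tactics named below are invented, not taken from any real X-Matrix:

```python
# A minimal sketch of an X-Matrix as data: the five connected sections,
# plus the correlation strengths recorded in the corners. Every entry
# below is hypothetical, purely to show the shape of the model.
x_matrix = {
    "results":    ["Increase revenue 10%"],
    "strategies": ["Improve flow of delivery"],
    "measures":   ["Reduce lead time"],
    "tactics":    ["Introduce WIP limits"],
    "teams":      ["Platform team"],
}

# Correlations keyed by a pair of items, with a strength of
# "strong" or "weak"; absent pairs have no correlation.
correlations = {
    ("Improve flow of delivery", "Increase revenue 10%"): "strong",
    ("Reduce lead time", "Increase revenue 10%"): "weak",
    ("Introduce WIP limits", "Improve flow of delivery"): "strong",
    ("Introduce WIP limits", "Reduce lead time"): "strong",
    ("Introduce WIP limits", "Platform team"): "strong",
}

def correlation(a, b):
    """Look up the correlation strength between two X-Matrix items."""
    return correlations.get((a, b)) or correlations.get((b, a))
```

Of course, the value is in the conversations that populate such a matrix, not in the data structure itself.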

Kanban Thinking

This works well because it wraps all the elements of Kanban Thinking nicely. The results are equivalent to Impacts, the process improvement measures are ways to Sense capability, the strategies can be derived from exploring various Interventions and the tactics are the experiments created to Search the landscape. (Note: while all the examples I have seen have financial results, focussing on value-based impacts, there is no reason why flow- and potential-based impacts could not be forecast with alternative results.)

What I really like about Strategy Deployment, and the way Jackson describes it in his book, is that it is a form of nested experimentation. From an organisation's long-term vision, through its strategy, to tactics and day-to-day operations and action, each level is a hypothesis of increasing granularity. As each experiment is run, the feedback and learning is used by the outer levels to steer and adjust direction. Thus a learning organisation is created as the learning is passed around the organisation in a process known as ‘catchball’, and within this context, Kanban Thinking (and the Kanban Method) provides a synergistic means of running experiments and creating learning.

Do you know what results your organisation is aiming for? Do you know the strategies being used and how they should lead to the results? Do you know what improvement measures should indicate progress towards the results? Do you know how your tactical work implements the strategy and which indicators it should improve? Are all these elements treated as hypotheses and experiments to create feedback and learning with which to steer and adjust?

If you’d like help answering these questions, using the X-Matrix, let me know!


Making System Interventions with Kanban Thinking

This post pulls together a number of ideas on interventions into a single place, and will become the content for a page on Interventions on the Kanban Thinking site.

What is an intervention?

An intervention is an act of becoming involved in something in order to have an influence on what happens. It is an active and participatory action, as opposed to a passive and observational instruction. Thus in Kanban Thinking, interventions are ways of interacting with a kanban system with the intent of improving it and having a positive impact.

There are 5 types of Kanban Thinking intervention, which can be considered as heuristics to help the discovery of appropriate practices and techniques for a system’s context. These heuristics may point to the purpose behind known and proven practices, or they may lead to the identification of new and alternative practices. They are:

  • Study the Context
  • Share the Understanding
  • Stabilise the System
  • Sense the Capability
  • Search the Landscape

Study the Context

What could be done to learn more about customer and stakeholder needs, the resultant demand and how that demand is processed?

Kanban Thinking is based on a philosophy of “start where you are now” and the foundation for this evolutionary approach is Studying the Context. In his book “The New Economics”, W. Edwards Deming famously said, “there is no substitute for knowledge”, and it is the acquisition of this knowledge about the current situation that leads to an appreciation of how to improve it.

Various practices are widely used to study different aspects of the system. Empathy Mapping is one way of learning more about customer and stakeholder needs. The work that they request as a result of those needs can be studied with demand analysis and profiling to determine classes of service and response. The way that work is then progressed, from concept to consumption, can be examined using forms of value stream mapping.

Share the Understanding

What information is important to share, and how can tokens, the inscriptions on them, and their placements, provide a single visual model?

Sharing the Understanding of the current situation, typically in the form of a kanban board, creates an environment where people are more intrinsically motivated. The ability to see what needs work provides autonomy, the ability to see where improvements can be made provides mastery, and the ability to see where to focus provides purpose.

Potentially, a lot will have been learned from studying the context, so it is important to decide which information is most relevant to share, to avoid being overloaded with too much noise. That information can then be visualised using various patterns based on the acronym TIP: Token, Inscription, Placement. Tokens are the cards, stickies or other tangible representations of the work, where aspects such as shape, size, colour or material can represent different information. Inscriptions are items of information added onto the tokens, where words, dates, pictures or symbols can all be used with suitable formatting. Placements are the ways tokens are positioned on the board to convey information, where horizontal, vertical and relative positioning can all be meaningful.
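The TIP pattern can be sketched as a simple data structure. This is only an illustration (the field names and the yellow-card example are my own assumptions): the token carries intrinsic attributes like colour, inscriptions hold the information written on it, and placement is its position on the board.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Token:
    """A card on the board (Token), with information written on it
    (Inscriptions) and a position that conveys meaning (Placement)."""
    title: str
    colour: str                  # e.g. colour could encode class of service
    inscriptions: dict = field(default_factory=dict)  # e.g. dates, owner
    column: str = "Backlog"      # horizontal placement: stage of work
    swimlane: Optional[str] = None  # vertical placement: e.g. a team or type

def move(token: Token, column: str) -> None:
    """Moving a token changes its placement, and so its meaning."""
    token.column = column

card = Token("Checkout flow", colour="yellow",
             inscriptions={"due": "2015-10-01", "owner": "AB"})
move(card, "In Progress")
```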

Stabilise the Work

What policies could help limit work in process, and remove unnecessary or unexpected delays or rework?

A kanban system which is stable is more likely to be able to evolve and have resilience in the face of change. Systems can be considered to have boundaries or constraints: those whose constraints are too loose will devolve into chaos, whereas those whose constraints are too tight will become brittle with bureaucracy. Stabilising the Work is achieved with enabling constraints, open enough to avoid restriction and prescription, yet limiting enough to stimulate expansion and new possibilities.

Defining work in process limits is a primary means of stabilising the work. One approach is to start with the current work in process (WIP), and look to gradually reduce it. Alternatively, the WIP could be drastically reduced to stabilise quickly, and then gradually increased to a more optimal level. A middle ground could be to base an initial WIP limit on the size of the team, such as one work item per person. How WIP is spread across the system is also a factor, from a single limit covering everything (CONWIP) to a different limit per stage or column.
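A pull policy combining both approaches might look like the following sketch (the board contents and limit values are invented for illustration): a new item may only be pulled into a column if neither the per-column limit nor the overall CONWIP limit would be broken.

```python
def can_pull(board, column, column_limits, conwip_limit):
    """Check whether pulling a new item into `column` would break
    either the per-column WIP limit or the overall CONWIP limit."""
    in_column = len(board[column])
    total_wip = sum(len(items) for items in board.values())
    return in_column < column_limits[column] and total_wip < conwip_limit

board = {"Analysis": ["A"], "Build": ["B", "C"], "Test": []}
limits = {"Analysis": 2, "Build": 2, "Test": 2}

print(can_pull(board, "Build", limits, conwip_limit=5))  # False: Build is full
print(can_pull(board, "Test", limits, conwip_limit=5))   # True
```

Tightening either kind of limit in such a policy is itself a small, reversible intervention.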

WIP is a form of explicit policy, and other common forms are Definitions of Ready and Done and similar, simple quality criteria. Additionally, Test Driven Development can be considered an explicit policy on how software is written to create a stable code-base.

Sense the Capability

What measures and meetings might create insights and guide decisions on potential interventions?

As well as acquiring knowledge on what the work is, and how it gets done, it is also important to know how well it gets done. Sensing the Capability of a kanban system involves getting a feel for its performance by both measuring and interacting with it. Metrics provide an objective, quantitative sense of capability. Meetings, and other feedback cadences, provide a subjective and qualitative sense of capability.

Measures should be derived from the desired impact, and from the outcomes that would create that impact. If Flow is the desired impact, being more responsive might be a good outcome, and thus Lead Time might be a good measure. Additionally, the subsequent behaviours, both good and bad, which a measure might drive should be considered, and trade-offs can be made by having a set of balancing metrics. A focus on Lead Time might result in a reduction in Quality by cutting corners, so measuring Released Defects could help balance that trade-off.
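As a minimal sketch of measuring Lead Time (the dates below are invented sample data), each finished item's lead time is simply the elapsed days from commitment to delivery; a percentile summary then gives a more honest sense of responsiveness than an average.

```python
from datetime import date
from statistics import quantiles

def lead_times(items):
    """Lead time in days per finished item (committed -> delivered)."""
    return [(done - start).days for start, done in items]

finished = [
    (date(2015, 9, 1), date(2015, 9, 8)),
    (date(2015, 9, 2), date(2015, 9, 5)),
    (date(2015, 9, 3), date(2015, 9, 17)),
    (date(2015, 9, 4), date(2015, 9, 10)),
]
times = lead_times(finished)
print(sorted(times))  # [3, 6, 7, 14]

# An 85th-percentile figure is a common way to express "most items
# are done within N days" without being skewed by a few outliers.
print(quantiles(times, n=100)[84])
```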

Meetings provide a cadence of regular interactions that can generate feedback. A simple metronomic cadence can tie various events such as planning, reviewing and retrospection to a single rhythm, or a polyrhythmic cadence can decouple these events into multiple rhythms. Another option is for some events to be asynchronous and triggered by the work rather than time-driven.

Search the Landscape

What small experiments could be run to safely learn the impact of different interventions?

Searching the Landscape of a kanban system involves looking at a range of possible interventions and experimenting to discover what impact they have. Those that have a positive impact should be pursued and amplified. Those that turn out to have a negative impact should be reversed and dampened. This searching can be thought of as exploring the evolutionary potential of the present state, as opposed to defining a future state and trying to move towards it.

Defining an experiment involves proposing a hypothesis that has a rationale behind it, can be measured in order to validate or falsify it, and can be easily recovered from in case of unanticipated results. The intentional act of documenting the experiment, using formats such as A3s, encourages disciplined thinking and avoids falling into the trap of retrospective coherence to explain results.
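The structure of such an experiment record can be sketched as a small data type (an illustrative A3-inspired shape of my own invention, not a prescribed format): capturing the hypothesis, the validating measure and the recovery plan up front makes the result harder to rationalise after the fact.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """A minimal A3-style record of a safe-to-fail experiment."""
    hypothesis: str   # what we believe will happen, and why (the rationale)
    measure: str      # how we will validate or falsify the hypothesis
    recovery: str     # how we undo the change if results are unwanted
    result: str = ""  # filled in afterwards, against the stated measure

trial = Experiment(
    hypothesis="Lowering the Build WIP limit from 4 to 2 will reduce lead time",
    measure="Median lead time over the next four weeks vs the previous four",
    recovery="Restore the Build WIP limit to 4",
)
```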

Searching the Landscape is an exercise in continual curiosity about how to evolve and improve the kanban system, increasing impact through further insights, interactions and interventions.
