The Science of Kanban – Conclusions

This is the final part of a write-up of a talk I gave at a number of conferences last year. The previous post was about the science of economics.

Scientific Management Revisited

Is scientific management still relevant for product development then? As I have already said, I believe it is, with the following clarifications. I am making a distinction between scientific management and Taylorism. Whereas scientific management is the general application of a scientific approach to improving processes, Taylorism was Frederick Taylor's specific application of it to the manufacturing domain. Further, in more complex domains such as software and systems development, a key difference in application is that the workers, rather than the managers, should be the scientists, being closer to the details of the work.

Run Experiments

The use of a scientific approach in a complex domain requires running lots of experiments. The most well-known version is PDCA (“Plan, Do, Check, Act”) popularised by Deming and originally described by Shewhart. Another variation is “Check, Plan, Do”, promoted by John Seddon as more applicable to knowledge work, because an understanding of the current situation is a better starting point, and Act is redundant because experiments are not run in isolation. John Boyd’s OODA loop takes the idea further by focussing even more on the present, and less on the past. Finally, Dave Snowden suggests “Safe To Fail” experiments as ways of probing a complex situation to understand how to evolve.

Whichever form of experiment is run, it is important to be able to measure the results, or impact, in order to know whether to continue and amplify the changes, or cease and dampen them. The key to a successful experiment is whether it completes and provides learning, not whether the results are the ones that were anticipated.

Start with Why

Knowing whether the results of an experiment are desirable means knowing what the desired impact, or outcome, might be. One model to understand this is the Golden Circle, by Simon Sinek. The Golden Circle suggests starting with WHY you want to do something, then understanding HOW to go about achieving it, and then deciding WHAT to do.

Axes of Improvement

One set of generalisations about WHY to implement Kanban, which can inform experiments and provide a basis for scientific management, is the following:

  • Productivity – how much value for money is being generated
  • Predictability – how reliable are forecasts
  • Responsiveness – how quickly can requests be delivered
  • Quality – how good is the work
  • Customer Satisfaction – how happy are customers
  • Employee Satisfaction – how happy are employees

The common theme across these measures is that they relate to outcome or impact, rather than output or activity. Science helps inform how we might influence these measures, and what levers we might adjust in order to do so.

Lean

In these posts I have described Kanban in terms of the sciences of people, process and economics. However, this can actually be generalised to describe Lean as applied to knowledge work, as opposed to the traditional definition based on Toyota’s manufacturing principles. The distinction also maps closely back to my original Kanban, Flow and Cadence triad.

  • Kanban maps to process, with the emphasis on eliminating delays and creating flow rather than eliminating waste.
  • Flow maps to economics, with the emphasis on maximising customer value rather than reducing cost.
  • Cadence loosely maps to people and their capability, with the emphasis on investing in those who use the tools rather than the tools themselves.

The Flow Experiment

I put together a small simulation for the SPA Conference this year, which seemed to go well, and which I re-ran at the London Limited WIP Society and hope to run again. You can download the materials, and this is a short write-up of how it works so that people can run it and experiment with it themselves.

Overview

The basic aim of the simulation is to solve maths problems. The idea was inspired by Simon Bennett and Mark Summers’ session The Incentive Trap, which also uses maths as the problem domain. Solving equations introduces variability into the exercise through simple knowledge work, which is hopefully more interesting and engaging than rolling dice.

The maths problems flow through the following value stream:

  • Options
  • Analysis
  • Solve
  • Check
  • Accept
  • Done

The following roles are involved in the value stream:

  • Analyst
  • Solver
  • Checker
  • Accepter
  • Manager

The following scenarios are used to experiment with the flow:

  • Phase Driven
  • Time Boxed
  • Flow Based

Stages

Options

Each scenario starts with a portfolio of possible problems to solve, in the following format:

ID  Operands  Solution
1   3         25

In this example we have an option to create an equation with 3 operands and a solution of 25.

Analysis

When an option is selected, it is transformed into an equation during analysis. Rather than expecting participants to come up with their own equations, which could result in trivial ones, a lookup is provided. The equations in the lookup are in a different order to those in the portfolio, so some effort is required!

Operands  Solution  Equation
3         25        3 * 7 + 4

Solve

The equations are then solved independently, i.e. the solution is not available to the solver.

Check

In order to check that the Solve stage produces a correct result, the equation is solved independently again.

Accept

Finally, the two independent solutions are compared, along with the actual equation, to ensure it has been solved correctly.

ID  Operands  Solution  Equation
1   3         25        3 * 7 + 4

Done

When the equation has been independently solved correctly twice, the problem can be considered Done.
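
To make the stages concrete, here is a minimal Python sketch of the same pipeline. The lookup data, function names and card structure are illustrative assumptions of mine, not anything from the downloadable pack.

```python
# Illustrative sketch of the card pipeline: Analysis looks up the equation,
# Solve and Check evaluate it independently, and Accept compares the results.
lookup = {
    (3, 25): "3 * 7 + 4",  # keyed by (operands, solution), as in the write-up
}

def analyse(option_id, operands, solution):
    """Turn a selected option into a card carrying its equation."""
    return {"id": option_id, "operands": operands, "solution": solution,
            "equation": lookup[(operands, solution)]}

def solve(card):
    """Evaluate the equation (participants do this by hand, unaided)."""
    return eval(card["equation"])  # acceptable here: equations are trusted data

def accept(card, solver_answer, checker_answer):
    """Done only if both independent answers agree and match the portfolio."""
    return solver_answer == checker_answer == card["solution"]

card = analyse(1, 3, 25)
print(accept(card, solve(card), solve(card)))  # True
```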

Roles

Analyst

The analyst selects the options from the portfolio, matches them against the available equations, and writes them onto index cards. Each index card should contain the option ID and the equation as follows:

[Image: an example index card with the option ID and equation]

Solver

The solver takes each index card with an equation on it, and solves it. Any intermediate calculations should be written on a separate sheet, and calculators should not be used (although someone who did use a calculator at SPA didn’t seem to gain any advantage!). The answer should be written on the back of the index card, on the left side, and covered with a small post-it so that it is hidden and can’t be copied.

[Image: the back of an index card with the solver’s answer hidden under a post-it]

Checker

The checker also takes each index card with an equation on it, and solves it. Again, any intermediate calculations should be written on a separate sheet, and calculators should not be used. This time, the answer should be written on the back of the index card, on the right side, and again covered with a small post-it so that it is hidden.

[Image: the back of an index card with the checker’s answer hidden under a post-it]

Accepter

The accepter takes the index card and confirms that the ID and equation match, and that the two answers are the same and correct. If they are, the problem is Done; otherwise it is rejected. Each scenario handles rejection differently.

Manager

The manager’s job is to keep time, ensure the process is being followed, and capture metrics. Every 30 seconds they should count how many of the maths problems are in each stage of the value stream and record it on a worksheet. It is these numbers which can be fed into a spreadsheet to generate a Cumulative Flow Diagram to visualise the flow.

[Image: the manager’s metrics worksheet]
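
To show how those 30-second counts turn into a CFD, here is a minimal Python sketch using matplotlib; the worksheet numbers below are invented for illustration, not real workshop data.

```python
# Minimal sketch: plotting a Cumulative Flow Diagram from the manager's
# 30-second stage counts. All numbers here are invented sample data.
import matplotlib.pyplot as plt

# One value per 30-second sample: how many problems were in each stage.
counts = {
    "Options":  [8, 6, 5, 3, 2, 1],
    "Analysis": [2, 2, 1, 1, 1, 0],
    "Solve":    [0, 1, 2, 2, 1, 1],
    "Check":    [0, 1, 1, 2, 2, 1],
    "Accept":   [0, 0, 1, 1, 2, 2],
    "Done":     [0, 0, 0, 1, 2, 5],
}
times = [30 * i for i in range(6)]  # sample times in seconds

# Stacking the in-stage counts with Done at the bottom and Options on top
# produces the classic cumulative flow bands.
order = ["Done", "Accept", "Check", "Solve", "Analysis", "Options"]
plt.stackplot(times, [counts[s] for s in order], labels=order)
plt.xlabel("time (seconds)")
plt.ylabel("problems")
plt.legend(loc="upper left")
plt.show()
```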

Scenarios

Each scenario lasts 5 minutes.

Phase Driven

For a phase driven approach, the team should initially plan how many of the options they think they can complete in the 5 minutes available. All the selected options are then worked on phase by phase: they are all analysed, then all handed over to be solved, then all handed over to be checked, and finally all handed over to be accepted. Any rejected work can only be moved back to the beginning once everything else has been accepted as Done.

Time Boxed

For the time boxed approach, the team should plan how many of the options they think they can complete in the first of the 5 minutes. Those options are then worked on by the team individually. Specialism still applies, but once a problem has been analysed, it can move on to be solved, checked and accepted without waiting for the whole batch. At the end of the 1-minute time-box, the team should stop, review and re-plan the next minute, deciding how many problems to work on next. This is repeated until the 5 minutes are up, i.e. there are five 1-minute time-boxes. Any rejected work can be passed back immediately.

Flow Based

For the flow based approach, the team should pick one problem at a time to solve. As with the time boxed scenario, specialism still applies, so once a problem has been analysed, it can move on to be solved, checked and accepted. However, there should only be one problem in each stage of the value stream at a time, thus creating a pull system. Any rejected work can be passed back immediately (which may result in the WIP limits being broken), or the accepter can pull in the appropriate role to resolve the issue.
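
To see why flow tends to finish sooner than a phase-driven batch, here is a toy back-of-the-envelope model; the four stages, eight problems and uniform one-unit processing time are simplifying assumptions of mine, not measurements from the exercise.

```python
# Toy comparison of phase-driven batching vs single-piece flow through a
# 4-stage pipeline (Analysis, Solve, Check, Accept). Assumptions: 8 problems,
# each stage takes 1 time unit per problem, no rework.
STAGES, ITEMS, UNIT = 4, 8, 1

# Phase driven: the whole batch clears a stage before the next stage starts,
# so the stages run back to back.
phase_driven = STAGES * ITEMS * UNIT

# Flow based (one problem per stage): the pipeline fills once, then one
# problem completes every time unit.
flow_based = (STAGES - 1 + ITEMS) * UNIT

print(phase_driven, flow_based)  # 32 vs 11 time units
```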

Results

The metrics from the managers’ worksheets can be fed into an Excel spreadsheet (included in the download package) to generate CFDs. Here are three from one of the teams at SPA.

Phase Driven

[Image: Phase Driven CFD]

Time Boxed

[Image: Time Boxed CFD]

Flow Based

[Image: Flow Based CFD]

Variations

There are a number of variations I’d like to try.

  • One of the things I’ve noticed is that the maths problems may be just a little bit too difficult for some teams, and they sometimes take too long to get any really useful results. One option would be to extend each scenario to 10 minutes to allow more time, although I wonder whether this would make it less snappy.
  • The time-boxed scenario never really plays out how I envisaged it. This is partly down to the short time frames. Stopping, reviewing and replanning every minute doesn’t seem right – especially when you can only manage one problem in a minute! What I was trying to show was the small-batch nature a time-box can have. One way round this would be to explicitly create the batches, in a similar way to the Penny Game.
  • Some people don’t like the mental exercise involved in the maths! Katherine Kirk described a variation to me where the teams used a “Pictionary” workflow instead: Options -> Describing -> Drawing -> Guessing -> Checking -> Done.
  • It’s quite likely that the Flow scenario comes out “best” because it’s the last one. It would be interesting to run the scenarios in different orders to see what impact that had, especially if there are 3 or more teams so that each team can start with a different scenario. This would possibly be more complicated to run, but with enough facilitation it could be done.

Feel free to download the pack, which contains:

  • Handouts – PDFs of the options, analysis and accepter worksheets for each scenario
  • Spreadsheets – one with all the details used to create the worksheets, and one to be used to create the CFDs
  • Powerpoint – slides with simple instructions for running the experiment

All I ask is that you let me know how you got on, and what variations you come up with. Here are the SPA results and LWS results.