The Science of Kanban – Conclusions

This is the final part of a write-up of a talk I gave at a number of conferences last year. The previous post was about the science of economics.

Scientific Management Revisited

Is scientific management still relevant for product development then? As I have already said, I believe it is, with the following clarifications. I am making a distinction between scientific management and Taylorism. Whereas scientific management is the general application of a scientific approach to improving processes, Taylorism was Taylor's specific application of it to the manufacturing domain. Further, in more complex domains such as software and systems development, a key difference in application is that the workers, rather than the managers, should be the scientists, being closer to the details of the work.

Run Experiments

The use of a scientific approach in a complex domain requires running lots of experiments. The most well-known version is PDCA (“Plan, Do, Check, Act”) popularised by Deming and originally described by Shewhart. Another variation is “Check, Plan, Do”, promoted by John Seddon as more applicable to knowledge work because an understanding of the current situation is a better starting point, and Act is redundant because experiments are not run in isolation. John Boyd’s OODA loop takes the idea further by focussing even more on the present, and less on the past. Finally, Dave Snowden suggests “Safe To Fail” experiments as ways of probing a complex situation to understand how to evolve.

Whichever form of experiment is run, it is important to be able to measure the results, or impact, in order to know whether to continue and amplify the changes, or cease and dampen them. The key to a successful experiment is whether it completes and provides learning, not whether the results are the ones that were anticipated.

Start with Why

Knowing whether the results of an experiment are desirable means knowing what the desired impact, or outcome, might be. One model to understand this is the Golden Circle, by Simon Sinek. The Golden Circle suggests starting with WHY you want to do something, then understanding HOW to go about achieving it, and then deciding WHAT to do.

[Figure: scotland_karl_18]

Axes of Improvement

One set of generalisations about WHY to implement Kanban, which can inform experiments and provide a basis for scientific management, is the following:

  • Productivity – how much value for money is being generated
  • Predictability – how reliable are forecasts
  • Responsiveness – how quickly can requests be delivered
  • Quality – how good is the work
  • Customer Satisfaction – how happy are customers
  • Employee Satisfaction – how happy are employees

The common theme across these measures is that they relate to outcome or impact, rather than output or activity. Science helps inform how we might influence these measures, and what levers we might adjust in order to do so.

Lean

In these posts I have described Kanban in terms of the sciences of people, process and economics. However, this can actually be generalised to describe Lean as applied to knowledge work, as opposed to the traditional definition of Toyota’s manufacturing principles. This differentiation also maps closely back to my original Kanban, Flow and Cadence triad.

  • Kanban maps to process, with the emphasis on eliminating delays and creating flow rather than eliminating waste.
  • Flow maps to economics, with the emphasis on maximising customer value rather than reducing cost.
  • Cadence loosely maps to people and their capability, with the emphasis on investing in those who use the tools rather than the tools themselves.

References

The ideas in this article have been inspired by the following references:

The Science of Kanban – Economics

This is the fourth part of a write-up of a talk I gave at a number of conferences last year. The previous post was about the science of process.

Having a good understanding of how creative people can have an efficient process still isn’t enough however. As Russell Ackoff is often quoted as saying, “It’s better to do the right thing wrong, than the wrong thing right”. An understanding of economics is needed to avoid “doing the wrong thing right”, by focussing on the “right thing”, whether that involves financial return, risk management or information discovery, all of which are of value. One financial model that I picked up from Chris Matts is that features should increase future revenue, protect existing revenue, reduce existing costs, or avoid future costs.

Life Cycle Profits

A basic understanding of investment over time helps explain why smaller batches and smaller increments are preferable from an economic perspective. In Software by Numbers, Denne and Cleland-Huang show the investment, payback and profit periods. A smaller cash injection, over a shorter investment period, can enable a product to become self-funding and break-even sooner, such that the profit can be invested back into the product for continued development.

[Figure: scotland_karl_10]
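To make the effect concrete, here is a minimal sketch with invented monthly cash flows (not figures from Denne and Cleland-Huang) comparing one big-bang release with an incremental approach: the increment starts earning sooner, so less funding is needed and break-even arrives earlier.

```python
# Illustrative, invented monthly cash flows in arbitrary money units.
# Big bang: build everything for 6 months, then the full product earns 15/month.
big_bang = [-10] * 6 + [15] * 6

# Incremental: release a first increment after 3 months; it earns 8/month
# while the remaining work is finished, then the full product earns 15/month.
incremental = [-10] * 3 + [-10 + 8] * 3 + [15] * 6

def cumulative(cash_flows):
    total, out = 0, []
    for cash in cash_flows:
        total += cash
        out.append(total)
    return out

def summarise(name, cash_flows):
    cum = cumulative(cash_flows)
    peak_funding = -min(cum)  # the largest cash injection needed
    break_even = next((month for month, c in enumerate(cum, 1) if c >= 0), None)
    print(f"{name}: peak funding {peak_funding}, break-even in month {break_even}")

summarise("big bang", big_bang)        # peak funding 60, break-even in month 10
summarise("incremental", incremental)  # peak funding 36, break-even in month 9
```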

Cost of Delay

The Cost of Delay concept, as popularised by David Anderson, further informs scheduling decisions based on cost over time. The four most commonly used archetypes (though they are not the only ones) are:

  • Expedite – the cost of delay is high and immediate. These items are genuinely urgent and should be prioritised above everything else.
  • Standard – the cost of delay rises linearly. Examples are items with an opportunity cost, where the later the delivery, the more opportunity for gain is lost.
  • Fixed Date – the cost of delay rises sharply at a specific date. Examples are regulatory dates at which fines may be imposed, or seasonal dates such as Christmas or trade-shows.
  • Intangible – the cost of delay is likely to happen in future, but the exact nature is unpredictable. Examples are technical debt or infrastructure work.

[Figure: scotland_karl_11]
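A rough sketch of the four archetypes as cost-over-time functions may help. The shapes follow the list above; the numbers are purely illustrative, not data from any real backlog.

```python
# Purely illustrative cost-of-delay profiles, in notional cost units per week of delay.

def expedite(week):
    return 50 * week                 # high and immediate: steep from day one

def standard(week):
    return 10 * week                 # rises linearly with the delay

def fixed_date(week, deadline=8):
    # cheap until the deadline, then a large penalty (e.g. a regulatory fine) kicks in
    return 5 * week if week < deadline else 5 * week + 500

def intangible(week):
    # barely noticeable now, but expensive once the latent problem surfaces
    return week if week < 20 else 20 + 40 * (week - 20)

for week in (1, 4, 8, 12, 24):
    print(week, expedite(week), standard(week), fixed_date(week), intangible(week))
```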

Information Theory

Value does not have to be purely financial. In particular there is often value in information generated, as suggested by information theory.

[Figure: scotland_karl_12]

Information Theory says that for experiments with pass/fail results, a 50% failure rate is optimal. Always failing suggests that nothing is known, and consequently nothing is being learned. Always succeeding suggests that everything is already known, and thus nothing is being learned.
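The 50% figure falls out of Shannon's binary entropy formula: the expected information from a pass/fail experiment peaks at one bit when the failure probability is 0.5, and drops to zero when the outcome is certain either way. A quick check, assuming nothing beyond the standard formula:

```python
from math import log2

def binary_entropy(p_fail):
    """Expected information (in bits) from one pass/fail experiment
    whose probability of failure is p_fail."""
    if p_fail in (0.0, 1.0):
        return 0.0  # outcome already certain: no information gained
    return -(p_fail * log2(p_fail) + (1 - p_fail) * log2(1 - p_fail))

for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    print(f"failure rate {p:.1f} -> {binary_entropy(p):.2f} bits per experiment")
# Peaks at 1.00 bit when the failure rate is 0.5; zero at 0.0 and 1.0.
```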

The Lean Startup approach is essentially based on information theory, with the goal being to loop through the Build, Measure and Learn cycle as quickly as possible.

[Figure: scotland_karl_13]

This can be thought of as buying information, and asymmetric payoff curves help explain the benefits of this approach. Given some notional performance target, an asymmetric payoff curve is one where being below target results in a loss, being above target results in a gain, and hitting the target results in breaking even. Buying information enables the shape of the curve to be changed such that losses below target are minimised, and gains above target are maximised.

[Figure: scotland_karl_14]

Scheduling

Having a good understanding of the economics enables better decision making when designing and scheduling the work. Usually, selection of which work to pull next relates to cost and value. Higher value for lower cost generally trumps lower value for higher cost.

[Figure: scotland_karl_15]

An example is risk reduction. There is value in risk reduction, where the higher the risk, the greater the value there is in reducing it, and there is also a cost associated with reducing the risk. Having an understanding of the relative values and costs of risk reduction activities informs the sequencing of high value, low cost items earlier and low value, high cost items later.

[Figure: scotland_karl_16]
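One simple way to apply this (my illustration, not a prescription from the talk) is to sequence items by their value-to-cost ratio, so that high-value, low-cost risk reduction happens first. The items and numbers below are invented:

```python
# Invented risk-reduction items: (name, value of the risk reduced, cost to reduce it).
items = [
    ("prototype the unproven integration", 80, 10),
    ("load-test the reporting module",     40, 20),
    ("rewrite the build scripts",          15, 30),
    ("evaluate a second vendor",           60, 15),
]

# Sequence by value per unit of cost, highest first.
schedule = sorted(items, key=lambda item: item[1] / item[2], reverse=True)

for name, value, cost in schedule:
    print(f"{value / cost:4.1f}  {name}")
# The unproven integration (ratio 8.0) comes first; the build scripts (0.5) come last.
```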

Similarly, Set-Based Concurrent Engineering (SBCE) can be informed by economics. SBCE involves working on multiple parallel initiatives in order to reduce risk. Multiple initiatives should only be run while the total cost of the initiatives is less than the value of the risk reduction. Each additional initiative adds exponentially less value, while the total cost rises linearly. Multiple experiments are like buying insurance; when the cost of the insurance is greater than the economic benefit, it’s not worth paying for.

[Figure: scotland_karl_17]
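As a rough model of that insurance argument (the numbers are invented), suppose each extra parallel initiative halves the remaining risk exposure while adding a fixed cost: it is only worth adding initiatives while the marginal risk reduction exceeds the marginal cost.

```python
# Invented numbers: risk exposure of 100 units, each parallel initiative
# halves whatever exposure remains, and each initiative costs 10 units.
risk_exposure = 100.0
cost_per_initiative = 10.0

def value_of_risk_reduction(n):
    """Value of running n parallel initiatives: the exposure removed."""
    return risk_exposure * (1 - 0.5 ** n)

n = 0
while True:
    marginal_value = value_of_risk_reduction(n + 1) - value_of_risk_reduction(n)
    if marginal_value <= cost_per_initiative:
        break  # the next initiative costs more than the risk it would remove
    n += 1

print("worth running", n, "parallel initiatives")  # 3 with these numbers
```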

In the final part, I’ll draw together some conclusions.

The Science of Kanban – Process

This is the third part of a write-up of a talk I gave at a number of conferences last year. The previous post was about the science of people.

Even though a kanban system describes knowledge work, we can still apply formal sciences such as mathematics. Rather than applying them at a detailed, micro level, we can take a systems approach and apply them at the broader, macro level. As an example, if we try to predict the result of a single coin toss, we are only able to do so with 50% accuracy due to its unpredictable, random nature. However, if we try to predict the result of 1000 coin tosses, we can be more accurate, albeit with less precision and some degree of variance.
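A quick simulation illustrates the point: any single toss is a coin flip, but the total number of heads in 1000 tosses clusters tightly around 500, so a forecast of "roughly half" is reliable even though each toss is unpredictable.

```python
import random

def heads_in(n_tosses):
    return sum(random.random() < 0.5 for _ in range(n_tosses))

# Repeat the 1000-toss experiment many times and see how much the total varies.
totals = [heads_in(1000) for _ in range(200)]
print("min", min(totals), "max", max(totals))
# Typically all totals fall within roughly 450-550: predictable at the macro
# level, with some variance, even though each individual toss is 50/50.
```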

Queuing Capacity Utilisation

The most common mathematics used with kanban systems is associated with queues and utilisation. Queuing theory suggests that as capacity utilisation increases, queues grow non-linearly, exploding as utilisation approaches 100%. Thus extremely high utilisation will result in large queues.

[Figure: scotland_karl_06]
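For the simplest textbook case, a single server with random arrivals and service times (the M/M/1 model, used here as an assumption to show the shape of the curve, not necessarily the model behind the figure), the average number of items in the system is utilisation divided by one minus utilisation:

```python
# Average number of items queued or in progress for an M/M/1 queue,
# as a function of capacity utilisation (rho).
def average_items_in_system(utilisation):
    return utilisation / (1 - utilisation)

for utilisation in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{utilisation:.0%} utilised -> {average_items_in_system(utilisation):5.1f} items in the system")
# 50% -> 1, 80% -> 4, 90% -> 9, 95% -> 19, 99% -> 99:
# the queue explodes as utilisation approaches 100%.
```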

Further, Little’s Law tells us that the time something is in a queue (lead time) is equal to the size of the queue divided by the processing rate.

Lead Time = WIP / Processing Rate

It follows, then, that if we want to complete work quickly (i.e. with a short lead time), we should reduce queues, which requires reducing utilisation. Reducing utilisation too much, however, will begin to have diminishing returns.
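As a worked example of Little's Law, with invented numbers: a team with 20 items in process completing 5 items per week has an average lead time of 4 weeks; halving the WIP at the same completion rate halves the lead time.

```python
def lead_time(wip, throughput_per_week):
    """Little's Law: average lead time = work in process / throughput."""
    return wip / throughput_per_week

print(lead_time(20, 5))  # 4.0 weeks
print(lead_time(10, 5))  # 2.0 weeks: halving WIP halves the lead time
```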

Traffic Flow Theory

Traffic Flow Theory gives an explanation as to why a work-in-process limit of one may be too low.

[Figure: scotland_karl_07]

Originally observed by Bruce Greenshields in 1934, Traffic Flow Theory describes how, as speed (in miles per hour) goes up, the density of traffic (in vehicles per mile) goes down, and vice versa. This is observable on roads with a low density of cars (i.e. empty) which are able to travel at high speeds, and roads with a high density of cars (i.e. gridlocked and bumper to bumper) which are stationary. Flow is defined as the product of speed and density, and thus there is an inverted U-curve, with high flow in the middle and low flow at either end, due to either low speed or low density.

For product development, density can be viewed as work in process, speed corresponds to lead time (higher speed means shorter lead time), and flow corresponds to throughput. If work in process is too low for the available capacity, then lead time will be short, but throughput will be low. Similarly, if work in process is too high for the available capacity, then lead time will be long, and throughput will be low.

An interesting effect happens on either side of the flow curve.

On the left side, if density is increased (increasing work in process), speed decreases (increasing lead time), and flow decreases (decreasing throughput). The decreased flow causes a further increase in density, resulting in a reinforcing loop — a vicious circle of growing congestion.

However, on the right side, if density is increased (increasing work in process), speed decreases (increasing lead time), and flow now increases (increasing throughput). The increased flow causes a decrease in density resulting in a balancing loop.

Thus, while the optimum density is unlikely to be achieved precisely, it is better to be on the right hand side, with slightly less work in process, where a slight increase will balance itself out.
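Greenshields' original model makes the inverted U explicit: if speed falls linearly from free-flow speed on an empty road to zero at jam density, then flow (speed times density) is a parabola that peaks at half the jam density. A minimal sketch of that model, with illustrative parameters rather than the exact curve in the figure above:

```python
# Greenshields' linear speed-density model, with illustrative parameters.
FREE_FLOW_SPEED = 60.0   # mph on an empty road
JAM_DENSITY = 200.0      # vehicles per mile when traffic is bumper to bumper

def speed(density):
    return FREE_FLOW_SPEED * (1 - density / JAM_DENSITY)

def flow(density):
    return speed(density) * density   # vehicles per hour passing a point

for density in (0, 50, 100, 150, 200):
    print(f"density {density:3} -> speed {speed(density):4.0f} mph, flow {flow(density):6.0f} veh/hour")
# Flow is zero at both extremes (empty road, gridlock) and peaks at half the
# jam density: 100 vehicles/mile at 30 mph gives 3000 vehicles/hour.
```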

Feedback

It’s generally recognised that software development is a non-linear activity, due to the many feedback loops, so it’s useful to understand the effect that the feedback loops have on a process.

One of the impacts a feedback loop has is related to its delay, where a long delay can lead to instability and oscillation. Taking a shower as an example: when trying to turn the heat up, if there is a long delay between turning the tap and the water temperature increasing, the natural reaction is to turn the tap further. Eventually, when the change in temperature comes through, the water will be too hot. The same can happen again when trying to turn the heat back down.

[Figure: scotland_karl_08]
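The shower effect can be sketched as a simple simulation (all parameters invented): the person keeps nudging the tap towards the target temperature, but only feels the water from a few steps ago. With no delay the temperature settles; with a long delay it overshoots and oscillates.

```python
def shower(delay, gain=0.5, target=38.0, start=20.0, steps=20):
    """Adjust the tap towards the target each step, but react only to the
    temperature felt `delay` steps ago (illustrative numbers throughout)."""
    temps = [start]
    for t in range(steps):
        felt = temps[max(0, t - delay)]       # feedback arrives late
        adjustment = gain * (target - felt)   # turn the tap proportionally
        temps.append(temps[-1] + adjustment)
    return temps

print([round(t) for t in shower(delay=0)])  # settles smoothly on 38
print([round(t) for t in shower(delay=3)])  # overshoots, then oscillates
```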

Thus it is a good idea to minimise delays in feedback, or put another way, have fast feedback. Fast feedback also results in smaller queues, because less work builds up waiting for feedback. Smaller queues result in less work in process, and we have seen that lower work in process leads to shorter lead times. Shorter lead times further generate faster feedback, resulting in a positive reinforcing loop.

[Figure: scotland_karl_09]

In the next part we’ll cover the science of economics.

The Science of Kanban – People

This is the second part of a write-up of a talk I gave at a number of conferences last year. The previous post was the Introduction.

Software and systems development is acknowledged to be knowledge work, performed by people with skills and expertise. This is the basis for the Software Craftsmanship movement, which focusses on improving competence, “raising the bar of software development by practicing it and helping others learn the craft.” A kanban system should, therefore, accept the human condition by recognising sciences such as neuroscience and psychology.

Visualisation

Neuroscience helps us understand the need for strong visualisation in a kanban system. Creating visualisations takes advantage of the way we see with our brains, creating common, shared mental models and increasing the likelihood that the work and its status will be remembered and acted on effectively.

Vision trumps all our other senses because our brains spend 50% of the time processing visual input. Evidence of this can be found in an experiment run on wine-tasting experts in Bordeaux, France. Experienced wine-tasters use a specific vocabulary to describe white wine, and another to describe red wine. A group in Bordeaux were given a selection of wines to taste, where some of the white wines had an odourless and tasteless red dye added. The experts described the tainted white wine using red wine vocabulary, because seeing the wine as red significantly influenced their judgement. The same experiment has apparently also been run on novice wine-tasters, who were less fooled, showing the danger of experts becoming too entrained in their thinking.

Visual processing further dominates our perception of the world because of the way our brain takes the different inputs from each eye. For each index card on a board, instead of seeing two, one from each eye, the brain deconstructs and reconstructs the two inputs, synthesising them into a single image by filling in the blanks based on our assumptions and experiences. Thus what we end up with is a mental model created by the brain, and the kanban board helps that mental model be one that is shared across the team.

The more visual input there is, the more likely it is to be remembered, an effect known as the pictorial superiority effect (PSE). Tests have shown that about 65% of pictorial information can be remembered after 72 hours, compared to only 10% of oral information. Visualising work status and other dimensions on a kanban board can, therefore, increase the chances of that information having a positive influence on outcomes.

Multitasking

Neuroscience also explains one of the benefits of limiting WIP; that of reducing multitasking.

Multitasking, when it comes to paying attention, is a myth. The important detail here is that it is when it comes to paying attention. Clearly we can walk and talk at the same time, but if we have to concentrate on climbing over an obstacle we will invariably stop talking. The brain can only actively focus on one thing at a time, and studies have shown that being interrupted by multitasking results in work taking 50% longer with 50% more errors. For example, drivers using a mobile phone have a higher accident rate than anyone except very drunk drivers. In other words, multitasking in the office can be as bad as being drunk at work!

Other studies have shown that effectiveness drops off with more tasks. In “Managing New Product and Process Development: Text and Cases”, Clark and Wheelwright show that when someone is given a second task, their percentage of time on valuable work rises slightly, because they are able to keep themselves busy when they are blocked. However, beyond that, with each additional task the time spent adding value reduces.

[Figure: scotland_karl_02]

Gerald Weinberg suggests similar results in “Quality Software Management: Systems Thinking” when he reports that for each additional project undertaken, 20% of time is lost to context switching.

[Figure: scotland_karl_03]
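A back-of-the-envelope version of Weinberg's figures (the 20% rule of thumb is his; the arithmetic sketch is mine): with each additional simultaneous project losing roughly 20% of total time to context switching, the value-adding time left per project shrinks rapidly.

```python
# Weinberg's rule of thumb: each additional simultaneous project loses
# roughly 20% of total time to context switching.
SWITCHING_LOSS_PER_EXTRA_PROJECT = 0.20

def time_per_project(projects):
    switching_loss = SWITCHING_LOSS_PER_EXTRA_PROJECT * (projects - 1)
    productive = max(0.0, 1.0 - switching_loss)   # fraction of time left for real work
    return productive / projects                   # spread across the projects

for projects in range(1, 6):
    print(f"{projects} project(s): {time_per_project(projects):.0%} of your time on each")
# 1 -> 100%, 2 -> 40%, 3 -> 20%, 4 -> 10%, 5 -> 4%
```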

Learning

Recognising how we deal with challenging situations, and how we can change, helps us handle the visibility and transparency that a kanban system creates, so that we can learn and improve.

As humans, we are natural explorers and learners. We evolved as the dominant species on the planet by constantly questioning and exploring our environment and trying out new ideas. However, when faced with difficult situations, we tend to react badly. Chris Argyris coined the term Action Science to describe how we act in these situations. The natural reaction is single loop learning, where we resort to our current strategies and techniques and try to implement them better. A more effective approach can be double loop learning, where we question our assumptions and mind-set in order to discover new strategies and techniques.

[Figure: scotland_karl_04]

Another relevant model is Virginia Satir’s Change Model, which describes how our performance dips into the valley of the ‘J-Curve’ while we adjust to a new way of being. Being aware of the dip, its depth, and our response to it can inform an appropriate approach to influencing change.

[Figure: scotland_karl_05]

Next we’ll cover the science of process.

The Science of Kanban – Introduction

This is a write-up of a talk I gave at a number of conferences last year. I have split it into 5 parts.

Abstract

Science is the building and organising of knowledge into testable explanations and predictions about the world. Kanban is an approach which leverages many scientific discoveries to enable improved flow, value and capability. This article will explore some of the science behind kanban, focussing on mathematics and brain science in particular, to explain the benefits of studying a system, sharing and limiting it, sensing its performance, and learning in order to improve it. Readers will gain a deeper understanding of why and how kanban systems work so that they can apply the theory to their own team and organisation’s practices.

Introduction

Background

When I first started talking and writing about Kanban I was trying to articulate that Kanban is more than just using a card-wall. The kanban board is the visible mechanics of the system, but the goal is to achieve a flow of value, and while time-boxes become optional, a cadence is required to understand capability. I referred to this triad of Kanban, Flow and Cadence as KFC (the irony being that fried chicken is not at all lean!) and that blog post from October 2008 remains the most popular I have written. While my language and thinking has evolved since then, I have realised that as I learn more about the science behind Kanban, much of it still maps back to those three core elements.

Kanban Thinking

This article will not describe how to design a Kanban system, but explores some of the science behind Kanban Thinking, an approach to creating a contextually appropriate solution.

Kanban Thinking is a systemic approach which places an overall emphasis on achieving flow, delivering value and building capability. The primary activities are studying, sharing, limiting, sensing and learning, and thus Kanban Thinking is itself a scientific approach.

Scientific Management

Frederick Winslow Taylor is generally credited with the development of Scientific Management in the late 19th Century by applying a scientific approach to improving manufacturing processes and publishing “The Principles of Scientific Management” in 1911. Given that we are now in the 21st Century, how relevant is scientific management to us today for software and systems development? Scientific management is considered to have become obsolete in the 1930s, yet I believe we can still apply science to understanding why and how differences in productivity exist. Scientific theory can be used to inform the practices we use, while our experiential practice can also inform and evolve the scientific theory.

Cynefin

Dave Snowden’s Cynefin model is a good example of balanced theory and practice. Cynefin suggests that there are different domains, and that we should act appropriately for each one. Thus, depending on our understanding of the current context, we should apply scientific theory differently, and implement alternative practices appropriately. Scientific management can therefore still be relevant for software and systems development if we apply a scientific approach contextually.

[Figure: scotland_karl_01]

The manufacturing context was probably complicated at worst, with some elements possibly being simple. Thus Taylor’s approach to scientific management, with best and good practice, was appropriate. However, software development and knowledge work is often complex, so the appropriate approach is to allow emergent practice, using what Snowden calls probe-sense-respond.

Making an Impact

In a complex domain, not being able to predict or repeat cause and effect does not mean that a situation cannot be improved. It is still possible to understand the current state and current performance, and to know whether things are improving. Rather than simply reacting to the current state or attempting to predict or plan for a future state, having anticipatory awareness of the current state, with a view to exploring its evolutionary potential, allows the application of continuous experimentation to sense whether we are making an impact by improving outcomes for both the business and the people.

I’ll cover some of the sciences that can be used to make an impact in the following posts: the science of people, the science of process, and the science of economics, followed by some conclusions.