I’ve recently read John Seddon’s “Freedom from Command and Control”, which introduces his approach to Systems Thinking – the Vanguard Method. A few key points really struck home with me and helped clarify my thoughts on some of the challenges I’ve come across recently.
First is the following simple diagram, which shows that Management is responsible for defining the System, which is ultimately what defines Performance. Management’s role should be to analyse Performance, and change the System to improve it.
Agile initiatives are usually begun in order to improve performance. Seddon says that in order to analyse Performance, we first need to understand the Purpose of the System. Then we should create Measures to provide knowledge of how well we are meeting that Purpose, before finally applying a Method which meets that Purpose, using the Measures to help refine the Method.
Finally, Seddon says that understanding Purpose involves understanding Demand, and in particular he introduces the concepts of Value Demand and Failure Demand. Value Demand is what our customers ask us to do because it adds value, and Failure Demand is what our customers ask us to do because we failed to do something, or failed to do something right in the first place.
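To make the distinction concrete, here is a very rough sketch of how incoming demand might be classified and the proportion of Failure Demand tracked. The categories and figures are entirely made up for illustration; they are not from Seddon.

```python
# Minimal sketch: classifying incoming demand as Value or Failure Demand.
# The categories and sample data below are hypothetical, purely for illustration.

# Requests customers make because they want something of value.
VALUE_CATEGORIES = {"new feature", "enhancement", "new integration"}

# Requests customers make because we failed to do something,
# or failed to do it right the first time.
FAILURE_CATEGORIES = {"defect report", "chasing progress", "clarifying unclear docs"}

def classify(request_category):
    if request_category in VALUE_CATEGORIES:
        return "value"
    if request_category in FAILURE_CATEGORIES:
        return "failure"
    return "unclassified"

incoming = ["new feature", "defect report", "chasing progress",
            "enhancement", "defect report", "new integration"]

counts = {"value": 0, "failure": 0, "unclassified": 0}
for category in incoming:
    counts[classify(category)] += 1

classified = counts["value"] + counts["failure"]
print(f"Failure demand: {counts['failure'] / classified:.0%} of classified demand")
```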
I’m increasingly aware of Lean and Agile methods being used for the sake of it. While Scrum, XP and Kanban will generally serve common software delivery Purposes, such as delivering real benefits in short timescales, using them without recognising the Purpose will often result in no real improvement in Performance. Lean and Agile methods are often used just as alternative delivery approaches within an existing System, rather than as means to change the System itself. These organisations can be thought of as “wall-dwellers”, using a method within existing boundaries, rather than “wall-movers”, moving the boundaries to help create a System which helps meet the Purpose. To quote (or paraphrase) Russell Ackoff, this is “doing the wrong thing righter, rather than doing the right thing”.
Do you know what the Purpose of your software development process is? Do you have Measures of capability against that Purpose? Do you know what your Value and Failure Demand is?
Hear hear.
John Seddon’s book Systems Thinking in the Public Sector is also excellent, although it made me angry and depressed in equal measure.
What measures do you recommend for assessing system capability? Are they accurate enough to spot an improvement (or otherwise) within a reasonable timeframe?
Hi Christo!
Measures should be derived from *your* customers’ demand. Having said that, most customers want their software quickly, so lead time is a good measure. Similarly, most customers want software that works well, so some sort of measure of support issues or defects might be good. There will always be variation in the system, however, so you need to measure over time, and look at the average and upper/lower control limits to see whether system interventions have had an effect. I’d recommend reading Seddon’s book for more info on this!
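As a rough illustration of the arithmetic (the lead-time figures are invented, and this isn’t Vanguard’s own tooling), an individuals-style control chart calculation looks something like this:

```python
# Minimal sketch of an XmR (individuals and moving range) calculation over lead times.
# The lead-time figures are invented for illustration; this is just standard SPC arithmetic.

lead_times_days = [5, 8, 6, 12, 7, 9, 4, 11, 6, 8]  # one value per completed item

mean = sum(lead_times_days) / len(lead_times_days)

# Average moving range: mean of absolute differences between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(lead_times_days, lead_times_days[1:])]
avg_moving_range = sum(moving_ranges) / len(moving_ranges)

# Natural process limits for an individuals chart: mean +/- 2.66 * average moving range.
upper_limit = mean + 2.66 * avg_moving_range
lower_limit = max(0, mean - 2.66 * avg_moving_range)  # lead time can't be negative

print(f"mean={mean:.1f} days, UCL={upper_limit:.1f}, LCL={lower_limit:.1f}")

# A point outside the limits signals a special cause worth investigating;
# a genuine system intervention should shift the mean and/or narrow the limits over time.
```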
Karl
I will check out Seddon’s book. Thanks. Nice blog, by the way 🙂
best,
christo
Hi Karl,
This whole idea of standardise, measure and improve, driven top down: I’ve seen it work well in manufacturing and even to an extent in hardware design, but I’ve never seen it work for software.
I’ve seen several attempts at this approach but they have all failed (to a greater or lesser degree). So in my experience, this approach as applied to software development is theoretical and unproven.
One of the issues that came up on the lean development board the other day is how good we are as an industry at applying the scientific method to test our processes and practices. I’d be interested in seeing whether any scientific studies have been performed to see whether the PDCA approach as you describe it here actually works for software and how well it works.
I’m thinking of objective data, such as the perceptions of an organisation that has undergone such a process change, rather than the subjective opinions of a protagonist such as a coach or consultant (or book author).
Given that we are struggling as a “profession” to even come to a consensus on what Agile is, perhaps a more data driven, dare I say it “scientific”, approach to assessing process improvement initiatives (what works, where and when) is the way forward?
I’m interested in your thoughts.
Regards,
Paul.
Hi Paul,
Seddon’s approach, which I am describing, has been applied almost exclusively in service organisations. These are knowledge organisations, just like software development organisations. So Seddon has demonstrated that taking Ohno’s thinking (but not necessarily his tools) and managing the work (and not the people) can work outside manufacturing. It’s not top-down, but outside-in. All measurements are outcome-based, not activity-based. See http://www.systemsthinking.co.uk/2-1.asp
I’d recommend adding Seddon’s book to your queue 🙂 Also check out David Joyce’s work (http://leanandkanban.wordpress.com/2010/02/11/bbc-worldwide-kanban-case-study/)
Karl
Hi Karl,
I have seen this approach work in service organisations too. The classic service organisation that comes to mind is the call centre. I have seen it working with drug safety also, the process by which new drugs become certified, where “knowledge worker” experts were involved.
I am sure of the theory (I’ve studied it). What is less certain is the practice as applied to software. It seems to me that you are advocating that practice should follow theory. It begs the question of whether there is any objective data to show that the theory actually works in practice when applied to software. I haven’t seen any (beyond anecdotes and opinions).
I note that not all professions have gone this route. The theatre, film, fashion etc. are just a few that come to mind.
So the question is whether this approach is universally applicable, or whether there is a bounded context in which it applies. If there is then we need to know whether software falls inside this context, or maybe even more specifically, which types of software project fall inside this context.
Regards,
Paul.
I’m advocating theory and practice. Practice without theory is faith-based, or cargo-cult. It seems you reject any evidence of the theory and practice as either anecdotal or wrong 🙁
Hi Karl,
Just read through David Joyce’s case study, and it makes the same mistake virtually all studies like this make. The focus is on process metrics. So David has decided what things in the process are indicators of success and has measured those. Of course there is an assumption here. Also there is a tendency in practice to “get what you measure”, which means that your process metrics will tend to improve over time. We know nothing about the things that were not measured, and whether they could be adversely affecting the overall result.
What is more important (and also more elusive) to measure is the actual result. So a result metric would be return on investment, say, or customer satisfaction. These are the things that the organisation is looking to get back in return.
So what we are really interested in is the “value” of the process improvement initiative from the perspective of the organisation and its customers. It would have been great if David had performed a survey of the stakeholders and customers involved and got them to assess the “value add” from their perspective. A similar survey amongst the development team would have been useful too.
Whilst this would provide only qualitative data, it at least could be deemed to be results based, rather than process based.
I’m sure Mr Seddon talks about the difference in his book.
Regards,
Paul.
My reading is that the study focusses on outcome-based metrics rather than activity-based metrics, assuming the purpose of the method is to deliver high quality software quickly. So I think it is results based.
The paper doesn’t talk about demand, I agree. I’m pretty sure David would have been looking to increase value demand and reduce failure demand.
It’s valid evidence in my opinion, and given that I believe it’s published by Mr Seddon, I think he’d agree 🙂
Oh, is Mr Seddon one of the authors?
I wasn’t trying to damn David’s work. I’m merely pointing out that his case study doesn’t pass as scientific proof.
I still think a survey of the customers and dev team members to ascertain their impressions of the approach taken would add significant weight.
I guess you disagree.
Paul.
Not one of the authors (although Dr. Peter Middleton is), but I have a feeling it is published as part of a Seddon collection of case studies. David can correct me if I’m wrong! Either way, David is active within the Vanguard Network so I’m still confident it’s in line with Seddon’s thinking.
Customer and team surveys would have been interesting, but I don’t think their omission is a valid reason for rejecting the case study.
Karl
No thoughts on process versus results metrics?
Again, I’m not trying to damn David’s work. From my own experience, I have found it very difficult to correlate cause and effect when using process metrics to measure software. This doesn’t mean that what happened at the BBC wasn’t effective. It just makes it very hard to put it down to any one specific cause. David is clear that it is the result of measuring and lean thinking. This is obviously a subjective opinion.
The social sciences have the same problem. How to assess a system that is so complex that it is hard to quantitatively relate cause and effect. What they do is collate qualitative data from as many people as possible and look for statistical significance.
Such qualitative data would be irrefutable (for example, “20% of the developers strongly agreed that measuring cycle time had a positive effect on performance” is an irrefutable statement).
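As a rough sketch of the kind of check I mean (the figures are invented), assessing how much weight such a survey proportion can bear is straightforward:

```python
# Rough sketch: how much weight can a survey proportion bear?
# The figures are invented; this is just the standard normal-approximation interval.
import math

respondents = 30           # developers surveyed
agreed = 6                 # "strongly agreed that measuring cycle time helped"
p = agreed / respondents   # observed proportion (20%)

z = 1.96                   # 95% confidence
margin = z * math.sqrt(p * (1 - p) / respondents)

print(f"{p:.0%} agreed, 95% CI: {max(0, p - margin):.0%} to {min(1, p + margin):.0%}")
# With only 30 respondents the interval is wide (roughly 6% to 34%),
# so a larger sample is needed before reading much into the result.
```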
Paul.
Paul
We seem to be talking past each other and reaching the point of diminishing returns, so one final attempt to clarify what I’m saying:
1. Process or activity measures are not helpful.
2. Result or outcome measures are helpful.
Lead time or quality are the result or outcome of a process and not the process itself. Therefore, they are useful in helping a team improve its process.
While customer or employee satisfaction data would be interesting, I don’t see how they can help guide a team to look for *where* they can improve their process.
Karl
Hi Karl,
Thanks. I get you. I agree about outcome measures. I think the opposite, though, about “qualitative data” (employee satisfaction), and have found it a very useful tool.
I’ll read the case study in more detail. I don’t see how the quantitative data was used to “guide” improvement. I don’t see evidence that a relationship between cause and effect had been established.
I’m thinking about attending the conference in Atlanta, in which case I’ll get to talk to David in person perhaps.
Thanks for clarifying your thoughts.
Paul.
Hope you can attend Atlanta – would be great to talk about this and other topics over a beer!
Hi Guys,
one major problem: JS and Vanguard is NOT systems thinking. At best it is a hard systems approach derived from Deming/Shewhart. The approach utilises analytical tools within a bounded deterministic system (activities) of work. The approach contains no ST concepts and ideas.