Back in 2002, I wrote an Experience Report for the XP/Agile Universe conference in Chicago – one of the precursors to the current Agile Alliance “Agile 200X” series. The title was “Agile Planning with a Multi-customer, Multi-project, Multi-discipline Team” and the abstract was as follows:
Most XP literature refers to teams which work on a single project, over a number of months, for a single customer using a narrow range of technical disciplines. This paper describes the agile planning techniques used by a team which works on multiple projects, for multiple customers, using a wide range of multiple disciplines. The techniques described were inspired by the agile practices of XP and Scrum. A small case study of a project shows how the team is able to collaborate with their customers to deliver maximum value under tight conditions.
I often reflect back on those early years of my Agile journey with my team at the BBC Interactive TV department. We had some great successes (including winning the BAFTA pictured here for the Red Button Wimbledon Multistream service in 2001), and many of the things we did back then have shaped the way I approach my work as a coach now. In particular, and admittedly only with hindsight, I can see the first seeds of Kanban Thinking even though I didn’t have the knowledge to understand that yet. This post will try to map that out in some more detail – it will make a lot more sense if you have read the original paper!
Studying the Context
We did a very crude form of demand analysis in a couple of ways. Firstly, we realised that we worked on “a large number of very short projects, typically of a couple of weeks in length”. Thus we evolved to what we would now call a Service Oriented Approach to delivery by treating “all projects as a single ongoing project”. Secondly, we recognised that we had a variety of different response profiles for projects – what we would now probably call Classes of Service. In particular, there were those with “immovable delivery dates, such as a major sporting event like Wimbledon” and others with “unknown delivery dates, due to the nature of channel scheduling”. These unknown dates could be sprung on us at very short notice. In addition, something not mentioned in the paper is that roughly every quarter we would identify one new service with which to try something different, as an investment in innovation.
While we didn’t make it as explicit as I now would, we did have a simple value stream, both upstream and downstream of the development team’s work. The upstream work happened as each project approached its planned build time and was broken down into stories and estimated. The downstream work happened during extensive exploratory testing on each set-top box, especially for Satellite services, which required independent testing by the platform operator Sky.
Finally, we understood that the specialist skills and disciplines within the team meant that we had an availability constraint for certain types of work. As well as using various practices to minimise the impact of this, we also ensured that we planned with this in mind, being very cognisant of which projects required which types of work so that we could balance the work accordingly, or allow for the fact that a particular area might be intentionally overloaded.
Sharing the Understanding
That the report gives only a passing mention to story cards being “put up on the wall to highlight the current work, and to provide a focus for regular stand-up meetings” says a lot about how much we have learned about the importance of visual management in the last 10 years. We used a very simple form of visualisation by hanging a sheet of plastic photograph holders on the wall. The sheet had several wallet slots into which photos were designed to slide, and into which, conveniently, index cards also fitted. Thus we displayed the week’s cards, and clearly marked them as they were completed so we could easily see progress during the week. This card wall provided the focus for the week and, as I recall, helped us to talk about the progress of the work more clearly.
The Excel spreadsheet also provided a simple visualisation of the longer-term plan, and we posted it next to the card wall. While we only printed it in A4 size, it would easily have scaled up to A3, providing a clear picture of the current thinking for the next few months on a single sheet of paper. In addition, the ‘provisional’ line gave a very crude indication of the value stream, in that projects ‘above’ the provisional line were ones that we had fleshed out into stories with estimates. I have a vague recollection that we also added some form of visualisation of what work was in testing or ready for broadcast, but unfortunately I don’t recall the details. The use of colours on the spreadsheet was simply to help see the size and ‘shape’ of each project – that is, how many weeks, and how many streams, it was planned on taking. I used to think of this as “Tetris Planning”.
Containing the Work
The Excel spreadsheet streams provided an elementary form of WIP limit in that they ensured that we never worked on more than three project-platforms at the same time. In other words, if we worked on three different projects, each would only be for a single platform (Satellite, Terrestrial or Cable). Generally, however, we would work on a single project across all three platforms, where each platform would be considered a single stream of work. In addition, there was often back-end content generation work, which would take up a stream.
This way of containing the work could also be described as a form of investment allocation, although we didn’t measure the specific allocations precisely. The simple visualisation did give us a means of roughly seeing how much time (in weeks per stream) we were investing in each platform.
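As a rough illustration, the stream limit and allocation view could be modelled like this. This is a minimal sketch in Python; the project names, week labels and function names are all hypothetical, since the original was nothing more than colour-coded cells in Excel:

```python
# A minimal sketch of the spreadsheet's stream model: each week holds at
# most three streams of project-platform work. All names are hypothetical.
from collections import Counter

MAX_STREAMS = 3  # e.g. Satellite, Terrestrial, Cable (or back-end content)

# Each week maps to a list of (project, platform) stream assignments.
plan = {
    "w01": [("Wimbledon", "Satellite"), ("Wimbledon", "Terrestrial"),
            ("Wimbledon", "Cable")],
    "w02": [("News", "Satellite"), ("Quiz", "Terrestrial"),
            ("Quiz", "Content")],
}

def check_wip(plan):
    """Flag any week scheduled beyond the three available streams."""
    for week, streams in plan.items():
        if len(streams) > MAX_STREAMS:
            print(f"{week}: over WIP limit ({len(streams)} streams)")

def allocation(plan):
    """Rough investment allocation: stream-weeks spent per platform."""
    return Counter(platform for streams in plan.values()
                   for _, platform in streams)

check_wip(plan)
print(allocation(plan))  # e.g. Counter({'Satellite': 2, ...})
```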
At the team and story level, the card slots described above also provided a very crude way of limiting WIP. Because the plastic sheet had only a finite number of slots, we soon learned that a full sheet was a strong indication that we had enough work for the week.
Sensing the Capability
The main metric we used was velocity, as that was the most documented Agile metric at the time. However, we realised that by measuring velocity over a weekly cadence there was always going to be unavoidable variability. At the time we used a moving average, although I’d love to be able to go back and apply other techniques to that data, such as statistical process control and percentile coverage. I also wish we had better data on the number of stories completed each week. The estimates were very coarse-grained, and we were pretty close to using #noestimates, although without the more sophisticated analysis and forecasting that is around today. It would also have been incredibly valuable to have known about other metrics, in particular the various qualified lead times of the stories flowing through the process.
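To give a flavour of the kind of analysis I have in mind, here is a minimal sketch of a moving average of weekly velocity plus a percentile view. The numbers are invented for illustration, and the helper function is hypothetical:

```python
# Smoothing and percentile coverage of weekly velocity (invented data).
from statistics import quantiles

weekly_velocity = [21, 18, 25, 19, 30, 17, 22, 24]  # points per week

def moving_average(series, window=4):
    """Smooth week-to-week variability, as we did at the time."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

print(moving_average(weekly_velocity))

# Percentile coverage: "in 85% of weeks we completed at least N points",
# a forecasting style closer to today's #noestimates thinking.
p = quantiles(weekly_velocity, n=20)      # 5th, 10th, ..., 95th percentiles
print(f"15th percentile: {p[2]} points")  # 85% of weeks were at or above this
```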
The two key cadences that we used were weekly and quarterly. The weekly cadence was effectively one of review and queue replenishment, where we looked back on our progress from the previous week and re-planned the stories for the coming week. The quarterly cadence was more of a steering cadence, where we updated the Excel plan with our best guess at which projects would be coming next and what the key dates would be.
Exploring the Potential
This period at the BBC was one of constant exploration, and it was this aspect, of breaking the “rules” and trying things out, that I probably learned the most from. I didn’t have the knowledge of the models and theoretical underpinnings that I have now, so much of the exploration was intuitive rather than scientifically disciplined. Still, it certainly encouraged me to always try things out for myself, and not just stick to what I had read or heard somewhere.