Announcing The Lego Flow Game

I have just published a page for the Lego Flow Game – a new game that I have co-created with Dr. Sallyann Freudenberg. It’s based on an ‘experiment’ I originally ran in 2010 at the SPA conference. Back then I used an exercise of matching, solving and checking equations to explore and experience workflow with different types of processes and policies. While this worked in terms of having a game that consisted of knowledge work, it turned out to be painfully tedious and hard work, which detracted from the main learning objectives. Sal was in the audience that day, and took the concept and evolved it by swapping in Lego building instead of the mathematics. We’ve run the updated version a few times over the last six months or so, and made some tweaks to the rules and timings, and we’re pretty happy with the latest version – hence publishing it.

The most recent outing was at the Kanban Coaching Exchange in London last week. It was pretty chaotic with around 50 people attending in a relatively small space, and only 3 full kits – we need to stock up on staplers! That meant that there were lots of observers – if you were there, we thank you for your patience. It didn’t help that we were also missing the right cable to connect a laptop to the TV in the room, so we had to explain the exercise without any supporting slides, and I had to show the end-result Cumulative Flow Diagrams by holding my laptop over my head. Given all that, there was still high energy and good discussion, and the overall feedback seemed positive. If we can run the exercise successfully in that environment, then it’ll probably work in most situations!

You can find the full instructions on the main page – please let us know if anything isn’t clear. You’ll also find the slides and spreadsheet to go with the exercise there. There are some photographs from the Kanban Coaching Exchange session on the event page and I’ll add any others that I come across here as well.

Here are the final spreadsheets for the 3 teams. I’ll briefly describe what I interpret from one of them (team 1).

Round 1 is a ‘waterfall-like’ process, or batch and stage. In other words, the work flows through the process in a single batch, one stage at a time. You can see the work build up in the Analysis stage until all 5 items are complete, then move together into the Supply stage, and then into Build, where the team ran out of time. Thus no items were completed. The debrief of this round mentioned that the person working felt very pressured, while everyone else was very relaxed. Waterfall is great if you’re not on the critical path – and you don’t care about getting anything done!

Team 1 Batch CFD
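A CFD like this is straightforward to recreate without the spreadsheet: for each stage, plot the cumulative number of items that have entered it over time. Here is a minimal Python sketch using matplotlib – the counts are illustrative numbers matching the batch pattern described above, not Team 1’s actual data.

```python
# Minimal CFD plot with matplotlib. Each list holds the cumulative
# number of items that have entered that stage at each minute.
# Illustrative numbers only, not Team 1's actual data.
import matplotlib.pyplot as plt

minutes = range(7)  # a six-minute round, sampled every minute
stages = {
    "Analysis": [1, 3, 5, 5, 5, 5, 5],
    "Supply":   [0, 0, 0, 5, 5, 5, 5],  # the whole batch moves at once
    "Build":    [0, 0, 0, 0, 0, 5, 5],
    "Done":     [0, 0, 0, 0, 0, 0, 0],  # time ran out before anything finished
}

for stage, counts in stages.items():
    plt.plot(minutes, counts, label=stage)
    plt.fill_between(minutes, counts, alpha=0.3)

plt.xlabel("Minute")
plt.ylabel("Cumulative items")
plt.title("Batch round CFD (illustrative data)")
plt.legend()
plt.show()
```

The vertical gap between two lines at any point in time is the work sitting in that stage, so the widening Analysis band is the batch building up.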

Round 2 is a ‘scrum-like’ process, or time-boxed, with 3 x 2 minute ‘Sprints’. The time-boxes themselves are not obvious from the CFD, but it’s clear that there is more flow, even if it is fairly ragged. One item was completed in the first time-box, although the team estimated and started four items. They must have learned from this, because they didn’t start any more work in the second time-box (between minutes 2:00 and 4:00), and they completed one more item in this time-box. For the final time-box, they decided to start one more item, and completed two more. If they had seen the data and the graph (I didn’t show anything until the end) they might have realised that it probably wasn’t worth starting anything in the final time-box either. Four items were completed in total. You can also start to see some predictability at the end, as the built-in retrospective cadence generated learning and improvement in how much work the team took on. The debrief discussed how this round tends to feel fairly chaotic with strong focus, punctuated by the retrospectives.

Team 1 Timebox CFD

Round 3 is a ‘kanban-like’ process, or purely flow-based with WIP limits. The flow is much smoother this time, and the WIP limit of 1 in each stage is clear. The slope of the graph is also fairly smooth, and it would probably be fairly easy to predict the future performance of the team. Five items were completed, at almost one per minute. The debrief discussed how this round tends to be more relaxed, with a gentler hum of activity.

Team 1 Flow CFD
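To get a feel for why a WIP limit of 1 per stage produces that steady rhythm, here is a toy pull-system simulation in Python. It assumes, purely for illustration, that every stage takes exactly one tick per item – in the real game the build times vary.

```python
# Toy pull system with a WIP limit of 1 per stage. Assumes, for
# illustration only, that every stage takes exactly one tick per item.
from collections import deque

stages = ["Analysis", "Supply", "Build"]
backlog = deque([1, 2, 3, 4, 5])         # five items, as in the game
wip = {stage: None for stage in stages}  # WIP limit of 1: one slot each
done = []

tick = 0
while len(done) < 5:
    tick += 1
    # Move work from right to left, so downstream capacity frees up first.
    if wip["Build"] is not None:
        done.append(wip["Build"])
        wip["Build"] = None
    for upstream, downstream in [("Supply", "Build"), ("Analysis", "Supply")]:
        if wip[upstream] is not None and wip[downstream] is None:
            wip[downstream], wip[upstream] = wip[upstream], None
    if wip["Analysis"] is None and backlog:
        wip["Analysis"] = backlog.popleft()  # only pull when a slot is free
    print(f"t={tick}  wip={wip}  done={done}")
```

Once the pipeline fills, one item completes on every tick – the smooth, steady slope you see in the flow CFD.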

I hope this is useful in getting a feel for how the game works, and making it possible to run the game yourself. Please try it for yourself and let us know how you get on. In particular Sal and I would love to hear about modifications you make, or what other policies you experiment with.

The BBC Seeds Of Kanban Thinking

BAFTA

Back in 2002 I wrote an Experience Report for the XP/Agile Universe conference in Chicago – one of the precursors to the current Agile Alliance “Agile 200X” series. The title was “Agile Planning with a Multi-customer, Multi-project, Multi-discipline Team” and the abstract was as follows:

Most XP literature refers to teams which work on a single project, over a number of months, for a single customer using a narrow range of technical disciplines. This paper describes the agile planning techniques used by a team which works on multiple projects, for multiple customers, using a wide range of multiple disciplines. The techniques described were inspired by the agile practices of XP and Scrum. A small case study of a project shows how the team is able to collaborate with their customers to deliver maximum value under tight conditions.

I often reflect back on those early years of my Agile journey with my team at the BBC Interactive TV department. We had some great successes (including winning the BAFTA pictured here for the Red Button Wimbledon Multistream service in 2001), and many of the things we did back then have shaped the way I approach my work as a coach now. In particular, and admittedly only with hindsight, I can see the first seeds of Kanban Thinking even though I didn’t have the knowledge to understand that yet. This post will try to map that out in some more detail – it will make a lot more sense if you have read the original paper!

Studying the Context

We did a very crude form of demand analysis in a couple of ways. Firstly, we realised that we worked on “a large number of very short projects, typically of a couple of weeks in length”. Thus we evolved to what we would now call a Service Oriented Approach to delivery by treating “all projects as a single ongoing project”. Secondly, we recognised that we had a variety of different response profiles for projects – what we would now probably call Classes of Service. In particular, there were those with “immovable delivery dates, such as a major sporting event like Wimbledon” and others with “unknown delivery dates, due to the nature of channel scheduling”. These unknown dates could be sprung on us at very short notice. In addition, something not mentioned in the paper is that roughly every quarter, one new service would be identified as one where we would try to do something different, using it as an investment in innovation.

While we didn’t make it as explicit as I now would, we did have a simple value stream, both upstream and downstream of the development team’s work. The upstream work happened as each project approached its planned build time and was broken down into stories and estimated. The downstream work happened during extensive exploratory testing on each set-top box, especially for Satellite services, which required independent testing by the platform operator Sky.

Finally, we understood that the specialist skills and disciplines within the team meant that we had an availability constraint for certain types of work. As well as using various practices to minimise the impact of this constraint, we also ensured that we planned with it in mind, being very cognisant of which projects required which types of work, so that we could balance the work accordingly, or allow for the fact that a particular area might be intentionally overloaded.

Sharing the Understanding

The fact that the report only gave a passing mention to the story cards being “put up on the wall to highlight the current work, and to provide a focus for regular stand-up meetings” says a lot about how much we have learned about the importance of visual management in the last 10 years. We used a very simple form of visualisation by hanging a sheet of plastic photograph holders on the wall. The sheet had a number of wallet slots into which photos were designed to slide, and into which, conveniently, index cards also fitted. Thus we displayed the week’s cards, and clearly marked them as they were completed so we could easily see progress during the week. This card-wall provided the focus for the week and, as I recall, it helped us to talk about the progress of work more clearly.

The Excel spreadsheet also provided a simple visualisation of the longer-term plan, and we posted this next to the card-wall. While we only printed it at A4 size, it would easily have scaled up to A3, providing a clear picture of the current thinking for the next few months on a single sheet of paper. In addition, the ‘provisional’ line gave a very crude indication of the value stream, in that projects ‘above’ the provisional line were ones that we had fleshed out into stories with estimates. I have a vague recollection that we also added some form of visualisation of what work was in testing or ready for broadcast, but unfortunately I don’t recall the details. The use of colours on the spreadsheet was simply to help see the size and ‘shape’ of each project – that is, how many weeks, and how many streams, it was planned to take. I used to think of this as “Tetris Planning”.

Containing the Work

The Excel spreadsheet streams provided a very basic form of WIP limit, in that they ensured that we never worked on more than three project-platforms at the same time. In other words, if we worked on 3 different projects, each would only be for a single platform (Satellite, Terrestrial or Cable). Generally, however, we would work on a single project across all three platforms, where each platform would be considered a single stream of work. In addition, there was often back-end content generation work, which would also take up a stream.

This way of containing the work could also be described as a form of investment allocation, although we didn’t measure the specific allocations precisely. The simple visualisation did give us a means of roughly seeing how much time (in weeks per stream) we were investing in each platform.
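Had we wanted to measure the allocations, the spreadsheet would have made it a trivial calculation – something like this sketch, where the weeks-per-platform numbers are entirely hypothetical:

```python
# Hypothetical weeks of stream time per platform over a quarter, as
# might have been read off the coloured spreadsheet cells.
weeks_per_platform = {"Satellite": 14, "Terrestrial": 9, "Cable": 7, "Content": 6}

total = sum(weeks_per_platform.values())
for platform, weeks in weeks_per_platform.items():
    print(f"{platform}: {weeks} weeks ({weeks / total:.0%})")
```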

At the team and story level, the card slots described above also provided a very crude way of limiting WIP. Because the plastic sheet only had a finite number of slots, we soon learned that once the sheet was full, we had a strong indication that we had enough work for the week.

Sensing the Capability

The main metric we used was velocity, as that was the most documented Agile metric at the time. However, we realised that by measuring velocity over a weekly cadence there was always going to be unavoidable variability. At the time we used a moving average, although I’d love to be able to go back and use other techniques, such as statistical process control and percentile coverage, to understand that data. I also wish we had better data on the number of stories completed each week. The estimations were very coarse-grained, and we were pretty close to using #noestimates, although without the more sophisticated analysis and forecasting that is around today. It would also have been incredibly valuable to have known about other metrics, in particular the various qualified lead times of the stories flowing through the process.
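For a flavour of the kind of analysis I wish we had done, here is a short Python sketch computing a trailing moving average of weekly velocity and a crude nearest-rank percentile of story lead times. All the numbers are made up – we never captured data this clean at the time.

```python
# All numbers are hypothetical; we never captured data this clean.
import statistics

weekly_velocity = [21, 34, 18, 29, 25, 31, 22, 27]  # story points per week

def moving_average(values, window=4):
    """Smooth week-to-week variability with a trailing moving average."""
    return [statistics.mean(values[i - window:i])
            for i in range(window, len(values) + 1)]

print(moving_average(weekly_velocity))

# Crude nearest-rank percentile of story lead times, in days.
lead_times = sorted([2, 5, 3, 8, 4, 6, 3, 12, 5, 4])
p85 = lead_times[int(0.85 * (len(lead_times) - 1))]
print(f"85% of stories finished within {p85} days")
```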

The two key cadences that we used were the weekly and quarterly ones. The weekly cadence was effectively a review and queue replenishment one, where we looked back on our progress the previous week, and re-planned the stories for the coming week. The quarterly cadence was more of a steering cadence, where we updated the Excel plan with our best guess at what projects would be coming next and what the key dates would be.

Exploring the Potential

This time period at the BBC was one of constant exploration, and it was this aspect, of breaking the “rules” and trying things out, that I probably learned the most from. I didn’t have all the knowledge of the models and theoretical underpinnings that I have now, so much of the exploration was intuitive rather than scientifically disciplined, but it certainly encouraged me to always try things out for myself, and not just stick to what I had read or heard somewhere.


2013 Year in Blogging

I received my WordPress 2013 Annual Report a few days ago and there are a couple of interesting things I noticed in it.

First, the top 3 most popular posts are all from three years ago, or older. In particular, the most popular post continues to both surprise and please me. It’s the “Kanban, Flow and Cadence” post where I first really started to flesh out my thinking on Kanban, and while many of the ideas have moved on significantly since then, I can still find the roots of the current Kanban Thinking model in there. The other two posts, “What is Cadence” and “Fidelity – The Lost Dimension of the Iron Triangle”, just surprise me. I’d love to discover what it is about those two posts that keeps bringing people back to them.

Second, I only wrote 12 new posts last year. That’s significantly fewer than I would have liked, especially as one of them was a post like this one, talking about my 2012 report. In fact, going back and looking at that report, I see I wrote 30 posts in 2012, so very roughly, my writing output has dropped by half. That’s very disappointing and something that I hope to address in 2014. One way I intend to do that is by starting to use 750words.com to help build the writing habit (or even addiction). In fact this post is taken from my first 750words entry. The other way is to get focussed back on my Kanban Thinking book. I’m confident that I’m in a much better position to be able to do that this year, and it’s more than just another wishful hope. I have a method!

Thanks for reading this blog – especially if you are a regular reader! I hope everyone has a successful 2014. Here’s to more blogging over the next 12 months.