Tuesday, May 22, 2012

The Rhythm of the Board


Look at this short video:




You can see and hear the rhythm of a Kanban board.
Let's see how it was created.

Over the last few months I have been experimenting with ways to visualize how lively a Kanban board is.
I am trying to figure out how to visualize the age of items in each column and how many items enter and leave each column per unit of time, that is, the rhythm of the board.

A typical way to represent what's going on in a board is to use a Cumulative Flow Diagram (CFD).
Unfortunately, CFDs are not very useful for visualizing age and rhythm.

Let's take a simple board like this:



and let's assume that each item spends 3 days in each of the Analysis, Code, and Test columns. Throughput is one task per day (every day a task reaches the Done column) and the cycle time is 9 days.
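To make the mechanics concrete, here is a minimal sketch of how a CFD could be produced from these assumptions. This is my own illustration in Python/matplotlib, not the original spreadsheet; the one-item-per-day arrival and the fixed 3-day stay per column are taken from the example above.

    import matplotlib.pyplot as plt

    # Assumptions from the example above: one new item starts per day and
    # each item spends exactly 3 days in Analysis, Code and Test.
    columns = ["Analysis", "Code", "Test", "Done"]
    n_items, horizon = 15, 20
    entered = {i: {"Analysis": i, "Code": i + 3, "Test": i + 6, "Done": i + 9}
               for i in range(n_items)}

    days = list(range(horizon))
    # Cumulative number of items that have reached each column by day d.
    reached = {c: [sum(1 for i in entered if entered[i][c] <= d) for d in days]
               for c in columns}
    # Items sitting in each column on day d (difference of adjacent cumulatives).
    in_col = {c: [reached[c][d] - reached[nxt][d] for d in days]
              for c, nxt in zip(columns, columns[1:])}
    in_col["Done"] = reached["Done"]

    # Stack Done at the bottom and Analysis at the top, as in a typical CFD.
    order = ["Done", "Test", "Code", "Analysis"]
    plt.stackplot(days, [in_col[c] for c in order], labels=order)
    plt.xlabel("day"); plt.ylabel("items"); plt.legend(loc="upper left")
    plt.show()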

The resulting CFD looks like this:



Now suppose that task E gets stuck. It spends 7 additional days in Code and then it also spends 3 extra days in Test. The cycle time for task E is 19 days. All other tasks run smoothly across the board, but due to WIP limits some take a bit longer to complete.


This is the corresponding CFD:


It doesn't tell us much, does it?
There is no way to infer that a task got stuck and took 19 days to complete.

The control chart does not tell us much either:

We realize that there's a problem only when it's too late. This is no surprise: a control chart is not meant to be used as a leading indicator.

Let's go back to our goals: (1) visualize the age of items on the board and (2) visualize the liveliness of the board, i.e. how many items enter and leave each column per unit of time.

Let's start from the latter.

Actually, a CFD does represent inflow and outflow, but not in a way that is easy to understand.

Let's take our previous CFD. Each colored area in a CFD represents a column of the board. Let's focus on the Test column, the red area in the CFD:


We will now use a bar chart to represent how many items are in the Test column each day.



This is much easier to understand.
However, we still can't see how many items enter and leave the column each day.
To show this, let's superimpose colored bars: red bars for inflowing items, yellow bars for outflowing ones.



This starts to tell us something about the rhythm of the board.
We see that on day 2 the total number of items in the column dropped from 3 to 2 because no items came in and one item left.
On day 9, two items came in and one left, so the total went back to 3.
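For the curious, here is a minimal sketch of how this kind of chart could be built, assuming we know, for each item, the day it entered and the day it left the Test column. The transition data below are invented for illustration; the charts in this post were built in a spreadsheet.

    import matplotlib.pyplot as plt

    # Invented (illustrative) transitions for the Test column:
    # (day the item entered Test, day it left Test).
    transitions = [(0, 2), (0, 4), (1, 5), (3, 7), (5, 9), (6, 9), (9, 12), (9, 13)]
    days = list(range(14))

    wip     = [sum(1 for a, b in transitions if a <= d < b) for d in days]
    inflow  = [sum(1 for a, _ in transitions if a == d) for d in days]
    outflow = [sum(1 for _, b in transitions if b == d) for d in days]

    # Grey bars: items in the column; red/yellow bars: items entering/leaving that day.
    plt.bar(days, wip, color="lightgrey", label="items in Test")
    plt.bar(days, inflow, width=0.4, align="edge", color="red", label="in")
    plt.bar([d + 0.4 for d in days], outflow, width=0.4, align="edge",
            color="gold", label="out")
    plt.xlabel("day"); plt.ylabel("items"); plt.legend()
    plt.show()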

We can also use this chart to compare two boards, like this:



The number of items per day is the same, but the bottom column is obviously less lively than the top one.

This chart helps us understand the liveliness of a board, but it shows no information about the age of items.

To focus on that, we switch to Google Spreadsheet and its Motion Chart widget.
It's a free widget that can create charts showing how our data evolve over time.
[For an explanation of how to use Motion Charts, check this link: Motion Charts in Google Spreadsheet]

Let's use Moving Bars to show (a) the number of items in each column and (b) the maximum age of items in each column.
In the following video the column height represents the number of items, while the column color represents the age of items. Red = old item.
In this video, look at column 3 (Code):



It turns red because our stuck item spends 10 days in the Code column.

Nice, but this chart only addresses our first goal: it shows item aging but tells us nothing about rhythm.

Is there a way to visualize both kinds of information in a single chart?

Sure there is! We can use the Bubble Chart.

Let's focus again on the Test column.
In a Bubble Chart we can show (a) the number of items, (b) the maximum age of items, and (c) the number of inflowing items.
We again use height and color for the first two, while bubble size represents how many items are entering the column. Here is the resulting chart:



Now this is starting to rock :)
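Outside of Google Spreadsheet, a rough approximation of the same bubble view can be sketched with matplotlib: one bubble per day, roughly what the Motion Chart shows with the Trails option enabled. The daily counts, ages, and inflows below are invented for illustration.

    import matplotlib.pyplot as plt

    # Invented daily data for the Test column, one tuple per day:
    # (items in the column, max age of those items in days, items that entered that day).
    daily = [(3, 1, 1), (2, 2, 0), (2, 3, 1), (3, 1, 2), (3, 2, 0),
             (2, 5, 0), (2, 8, 1), (3, 10, 2), (3, 4, 1), (2, 2, 0)]

    days = range(len(daily))
    count = [d[0] for d in daily]
    max_age = [d[1] for d in daily]
    inflow = [d[2] for d in daily]

    # Height = number of items, color = max age (red = old), size = inflow.
    sc = plt.scatter(days, count, s=[100 + 300 * i for i in inflow],
                     c=max_age, cmap="YlOrRd")
    plt.colorbar(sc, label="max age (days)")
    plt.xlabel("day"); plt.ylabel("items in Test")
    plt.show()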
And to make it rock even more, we can add another dimension to it: sound!
By mapping different drum sounds to the number of items entering and leaving the column, we get this result:





We can do the same using multiple columns. In this case we see the evolution of the Code and ToDo columns:



This definitely rocks :)

So that's it. I just wanted to throw around some ideas.
These results are interesting, but further work is needed.


Note: I presented this work in a Lightning Talk at LSSC12 in Boston.



If any reader is interested in further details, please write to me and I will be happy to provide them.

The document containing the sample data can be found here: The Rhythm of the Board doc
Unfortunately you need to configure the Motion Chart settings from scratch:
1. Select the Bubble Chart tab among the tabs in the upper right
2. Select tot for the Y axis
3. Select time for the X axis
4. Select max age for color
5. Select in for size
6. Check the Trails box
You are done!


Here is a short explanation of how sound was added to the charts:
Basically, there is a mapping between drum sounds and the number of items flowing in and out,
e.g. 1 item in = low conga, 2 items in = hi conga, 1 item out = low bongo, etc.
Each sound also has a different position in the measure.
The drum sounds were edited manually using GarageBand.
This could be automated by creating a MIDI file from the board data.
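A minimal sketch of that automation idea, assuming the Python mido library and the General MIDI percussion channel; the note numbers follow the GM percussion map (congas for inflow, bongos for outflow) and the per-day in/out counts are invented. The sound positions within each beat are just one possible layout.

    import mido

    # Invented per-day (items in, items out) counts for one column.
    flows = [(1, 0), (0, 1), (2, 1), (1, 1), (0, 0), (2, 0), (1, 2)]

    # Mapping in the spirit of the post, using General MIDI percussion note numbers:
    # conga sounds for inflow, bongo sounds for outflow.
    IN_NOTES = {1: 64, 2: 63}    # 1 in -> low conga, 2 in -> open hi conga
    OUT_NOTES = {1: 61, 2: 60}   # 1 out -> low bongo, 2 out -> hi bongo

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    ticks = mid.ticks_per_beat   # one board day = one beat
    position = 0                 # absolute time of the last event we emitted

    for day, (items_in, items_out) in enumerate(flows):
        offset = 0               # put the "in" and "out" sounds at different spots in the beat
        for note in (IN_NOTES.get(items_in), OUT_NOTES.get(items_out)):
            if note is not None:
                start = day * ticks + offset
                # Channel 9 is the General MIDI percussion channel.
                track.append(mido.Message("note_on", channel=9, note=note,
                                          velocity=80, time=start - position))
                track.append(mido.Message("note_off", channel=9, note=note,
                                          velocity=0, time=ticks // 8))
                position = start + ticks // 8
            offset += ticks // 2

    mid.save("board_rhythm.mid")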



Tuesday, August 30, 2011

Premortem Retrospectives


Catch Failure Before It Occurs

Retrospectives are key elements of the Agile approach and are described in a classic book by Esther Derby and Diana Larsen: Agile Retrospectives

They are held periodically, typically at the end of an iteration, to inspect and adapt the process and continuously improve it.

In medicine a postmortem examination is conducted to determine the cause of death. Information collected during the procedure can be used to explain what happened. The problem is that the procedure brings no benefit to the deceased person.


Replace 'person' with 'project' and you see that a retrospective at the end of a failed project is similar to a postmortem: helpful for future projects but useless for the failed one.

To avoid a painful postmortem, it can sometimes be helpful to try to prevent project failure upfront.






"You’ve seen your own future, which means you can change it if you want to"
- Chief John Anderton, Minority Report 









A Project Premortem is described in a Harvard Business Review article by Gary Klein:

A Premortem Retrospective is an adaptation of that approach based on the 5 typical phases of Agile Retrospectives.
I have used it a number of times with great success and I am going to describe it in this post.

Saturday, May 21, 2011

Velocity, handle with care


You have to drive down this road; it's full of impediments:


This is your truck. It's loaded with technical debt, and you have no choice but to bring it with you:


Would you swap your truck for this car?

Obviously not. You don't need more horsepower; you need to clean up and "tune" your road (your process) and get rid of technical debt first.

In Agile, velocity is supposed to help teams be more predictable.
Velocity measures how many units of estimation are completed in a given interval of time.
Units of estimation can be ideal time, story points, or whatever else.
Story points are the typical choice, so velocity is commonly expressed as story points per iteration.
The idea is that by looking at the past average velocity it is easy to "predict" the number of story points the team will complete in each iteration.
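In code, that naive forecast amounts to something like this toy sketch (the velocity and backlog numbers are invented, not a recommendation):

    # Toy sketch of the naive forecast (the velocity and backlog numbers are invented).
    past_velocities = [21, 18, 25, 23, 19]        # story points completed per iteration
    remaining_backlog = 160                       # story points still to do

    avg_velocity = sum(past_velocities) / len(past_velocities)   # 21.2
    iterations_left = remaining_backlog / avg_velocity           # ~7.5

    print(f"average velocity: {avg_velocity:.1f} points per iteration")
    print(f"forecast: about {iterations_left:.1f} iterations to finish")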
Unfortunately things are not that simple.
Here are some thoughts and remarks about velocity and why you should not rely too much on it.
First ask yourself these questions:
“Are we going in the right direction? How much value are we really delivering?”
This implies knowing the product vision (we all do, don’t we?) and constantly verifying that our results are aligned with that vision and valued by customers (let’s assume we know how to measure value; unfortunately, this is yet another tricky issue).
Being fast but producing functionality that is not valuable to the user is waste.
Velocity can never be constant due to inevitable fluctuations:
- Incomplete/Rejected stories (a sprint could also fail completely leading to a zero velocity)
- Wrong task/user story estimations. Over- or underestimating means that the sprint velocity can be significantly different from what was expected. (Estimation is yet another tricky issue; more about this in another post.)
- Dependencies (e.g. having to wait for other tasks to complete). Dependencies should in general be avoided, but this is not always feasible.
- Impediments, e.g. server failures, missing or malfunctioning tools, …
- Changes to the process that could cause temporary slowdowns
- Unplanned bug-fixing activities (e.g. to address blocking/urgent defects)
- People added to or removed from the team
- People being sick, on vacation, etc.
Average velocity is supposed to balance out these variations. However, events like a failed sprint can skew it very badly.
Velocity figures can be biased in many ways:
- people working overtime
- taking shortcuts, i.e. adding technical debt
- stories not really “done”
People cannot work overtime forever, technical debt will eventually slow down development, and so on.

A quick note about acceleration: it has been suggested (the "acceleration" metric) that teams should constantly improve/increase their velocity. I don't buy this: if you keep accelerating, the end result is invariably a painful crash. By the way, sustainable pace is one of the 12 original Agile principles. I would forget about acceleration.
However, velocity can increase in discrete steps. This happens as a result of process improvement initiatives and actions usually agreed upon during Retrospectives. The following figure shows what happens in these cases:
 

In general you should look beyond velocity and focus on tuning and optimizing your process (by the way, "see the whole" is one of the key Lean principles).
Velocity should never be a target: people change their behavior when they know they are being measured. That's a recipe for gaming, e.g. you assign more story points to each story and voilà, you are instantaneously faster. It's even worse if you try to measure individual velocity (a definite no-no).
Setting velocity targets can generate additional bad behaviors, like increasing the effort spent on estimating. People want to be extra sure that estimates are correct so that velocity is calculated correctly. This is a stupid waste of time and money.
*There is a simple way to avoid all this trouble with velocity: if you can, just avoid estimating stories ...

Final thoughts
Velocity does not imply effectiveness: being very fast but not doing the right things and/or not adding business value is worthless.
Remove impediments, tune up your process, get rid of technical debt. Focus on these goals and ignore velocity. After you have done that, then you could try to look at velocity figures (but check *).
And remember once again: sustainable pace is one of the key Agile principles.





memorable quotes from xp2011


I was at XP2011 in Madrid and it was a great conference.

Here are some of the great quotes I heard at the sessions I attended (I already tweeted some of them @mgaewsj):
Help people's jobs suck less B Marick about the role of Agile (managers)

In which world would this work? Alas, in which universe would this make sense? E Derby about questioning (stupid) corporate policies, rules, managers' decisions, etc.

Optimize for Business Value is fantasy D J Anderson

Multitasking is a fact of life D J Anderson about context switching, limiting WIP

Just pick a number and see D J Anderson about finding optimal WIP limits

Don’t call them user stories D J Anderson about differentiating requirements/work item types based on their source and destination (Strategic Product Requirement, Sales Requirement, etc.)

Servant Leader = Saint with Budget Authority B Marick about the role of managers in an Agile organization

Let the customer do estimation, he cannot be worse than a manager D J Anderson about avoiding spending (wasting) time estimating

Product Owner is a boundary object B Marick about the PO role in between business and the development team

A PO is better than a requirements document: he can talk B Marick

Legacy is not just code, it’s a mindset GeePaw Hill 

Sometimes the best thing you can do is help people quit JB Rainsberger about coaching in problematic organizations

One day people will laugh about this … Why not now? R Davies about coaching in large (difficult) organizations/difficult transitions

Testers have the most evil minds in the universe L Keogh

When you have specs you stop thinking L Keogh

Resources are Fixed cost items in a high cost country K Vilkki about how corporations “value” people resources

How should managers learn to manage? treat employees like volunteers M Poppendieck

Backlog items == rocks in the asteroids game: break them one at a time L Keogh

Bugs are scenarios we didn’t write down => we didn’t know we didn’t know L Keogh

Pushing = guessing, use pulling to avoid this L Keogh

Metrics should have an expiration date A Dhondt

Enterprise Kanban is just Kanban in SOA D J Anderson about scaling kanban

BBC Worldwide got the Kanban “flu” M Senapathi about the successful transition to Kanban at BBC Worldwide

Let’s keep agile weird B Marick about avoiding Agile being swallowed by mainstream corporate culture

Deep Legacy = Permanent Emergency GeePaw Hill

Switch from "I typed more code" to "I helped the team most" GeePaw Hill about pairing sessions

Detail is the opposite of Value J Brodwall about writing user stories and scenarios

In BDD and ATDD we test our understanding, not the code L Keogh

Conversations are the most important thing in BDD, tools are killing this  L Keogh

Testers are problem finders, not problem solvers L Keogh

Requirements are product design decisions that the software team doesn't participate in M Poppendieck

A good stage-gate process allows feedback loops M Poppendieck

Stage-gate process should not keep you from going everywhere, just keep learning M Poppendieck

Busy does not imply getting things done M Poppendieck about having people work at full capacity (no slack time)

Being two months late on a big project is much more costly than letting people have slack time (and be available if needed) M Poppendieck

Assume you got it wrong L Keogh about looking for feedback (not validation) about your BDD scenarios

User stories focus just on users; scenarios include many different views for each different stakeholder L Keogh about BDD scenarios vs user stories

Real Options are the heart of BDD L Keogh

Focus on similarities J Eckstein about dealing with cultural issues when managing large distributed teams

Monday, March 7, 2011

in defense of kanban [ita]

I'll start with a pro-kanban post, or rather a defense of kanban (even though I know there are people who can do this much better than I can …)

Let me say up front that I have used Scrum successfully and will continue to do so when appropriate. However, in some situations I have started applying kanban, and the first results are very encouraging. The ability to make the state of the project and of the process transparent is already enormously valuable in itself...


A certain aversion to kanban in part of the Agile community is nothing new:

Ken Schwaber: Telling It Like It Is
Tobias Mayer: Scrum and Kanban - different animals (but see the comments by Liz Keogh and Ron Jeffries)
Both pieces were discussed by David Anderson: Reflections on Scrum compared to Kanban
The topic is always hot; see this very recent post: Mike Cohn about making hard changes

The "Italian" cue comes from these two posts by Gabriele Lana, which I noticed only recently:

Let me start with the somewhat improper use that is sometimes made of the word "phase", which has a certain serial/waterfall connotation, in principle about as far from kanban as you can get.

In reality, more than phases, the columns of a kanban board represent the states that individual tasks can be in. The difference may seem subtle, but it is not: "phase" is easily associated with a group/block of tasks that move together from one step to the next, as in waterfall, for example; "state" is not, and indeed kanban does not prescribe "phases" or iterations.

Quoting from the posts:

"don't the people who develop deploy what they have developed? Evidently [not], otherwise there is no explaining why the development phase is sized at 5 user stories and deployment at 1"

Rather than a development phase sized at 5, that limit means that at most 5 tasks can exist in the "in development" state. Each task lives its own life and moves, or rather is pulled, independently towards the next states. In my view this is a substantial difference (and, by the way, the limit is generally not linearly related to the number of people involved).

Given that there are many organizations where deployment and development are handled by different people, in general WIP (Work In Process) is not limited "at will" but with a precise goal: in the case mentioned, those values are probably the best ones in terms of flow optimization and workload distribution.

Kanban prescribes very little; in particular, it does not prescribe that teams be specialized, generalist, or anything else:
"There are 5 of us on our team, can't all 5 of us be in the test phase?" Of course you can.
You can have people dedicated only to testing, or not.
So it is also fine to have a team in which everybody does analysis, development, and testing (but if they all do analysis together, then all development together, then all testing together, the thing starts to have a rather unpleasant waterfall smell …).

WIP limits can therefore be adapted to the most varied situations: you can even go to the extreme of having a single global WIP limit for the whole board: there are 5 of you, so put a limit of 5, or 10, or X on the total number of tasks in progress, whatever state they are in (leaving the individual columns unlimited, god help us :) ).
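A toy sketch (mine, not from the post) of the difference between the two flavours of limit, per-column versus one global limit for the whole board; the column names and numbers are just illustrative:

    # Toy illustration of the two flavours of WIP limit.
    COLUMN_LIMITS = {"Analysis": 3, "Dev": 5, "Test": 2}   # per-column limits
    GLOBAL_LIMIT = 10                                       # single limit for the whole board

    board = {"Analysis": ["A", "B"], "Dev": ["C", "D", "E"],
             "Test": ["F", "G"], "Done": ["H"]}

    def column_has_room(board, column):
        """Per-column policy: the target column must be under its own limit."""
        return len(board[column]) < COLUMN_LIMITS.get(column, float("inf"))

    def board_has_room(board):
        """Global policy: new work may start only if total WIP is under the limit."""
        wip = sum(len(cards) for col, cards in board.items() if col != "Done")
        return wip < GLOBAL_LIMIT

    print(column_has_room(board, "Test"))   # False: Test already holds 2 of 2
    print(board_has_room(board))            # True: 7 cards in progress, limit is 10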

The important thing is to limit this blessed WIP (Lean, David Anderson, and many others have explained very well why):
"stop starting, start finishing"

Why on earth all of this should not be agile, I do not understand. We want to deliver value quickly -> then let's finish these blessed tasks instead of taking on tons of them without finishing a single one.

Individuals and interactions over processes and tools: indeed, in kanban it is (or should be) the team, potentially even a broader one than in Scrum, that decides the policies: which columns, which limits, whether to have buffers and which ones, the definition of "done" and all the other criteria for moving from one state to the next. And it should again be the team that later changes those limits and, if appropriate, how many and which columns to use -> continuous improvement -> responding to change (at the process level too).
The kanban board makes problems visible, and it is usually easy to reach consensus on the necessary interventions.

Responding to change -> keeping an open mind, ready to seize new ideas and opportunities (this too is Agile).

kanban is not a magic recipe.
kanban helps you change, gradually and probably with broader buy-in.
sometimes it is preferable.


PS: a couple of links for those who want to learn more about kanban:
http://agilemanagement.net/index.php
http://www.limitedwipsociety.org/