The Stuff We Call Requirements


Now, requirements isn’t the best word for it, is it?

By saying we require something, we place a constraint on it; we limit our options; we are saying we really, really want that thing.

And when we say we require the screen to be blue or the button to be big or for a spell checker…are we really stating a requirement or a preference? And what problem or benefit is this lot solving or providing anyway?

And when I say problem or benefit…do I 100% know for sure that the problem being solved is the right problem and will the benefit I think I am going to get be completely realised?

The product shall provide a revenue of £10,000,000…..

Just another requirement…no, preference…no, wishful thinking.

Alistair Cockburn describes the stuff that comes out of a requirements phase as exactly that – stuff! It’s not requirements you have captured…

“…what goes into that initial collection of decisions, the Here’s What We’ll Build list, it is all of those and sometimes not any of those, and other stuff, all mixed in. They are simply what some key group of people say at some point, ‘Here’s what we should build.’”

Let us no longer call them requirements. Let’s call them ideas of what we should build. I know, snappy isn’t it?

So if they are ideas of what we should build, will we build them? A long, long history of projects tells us they won’t all get built. Those that do get built won’t all get used. Those that do get used won’t all have been built as originally intended. Those that don’t change, then, are the original ideas that were worthy of being what we usually call requirements – and that is a small fraction of the overall ideas we had back in the hazy days of the requirements phase.

So the first point to keep in the back of your mind is this…

Requirements are not requirements – they are our initial ideas of what to build.

Moving on let’s consider why we are coming up with these ideas of what to build.

There has to be a reason. There have to be goals we are trying to achieve, problems we are trying to resolve. Without these, our ideas of what to build are more like ideas of something to do to pass the time.

What we really want to come up with are ideas to achieve x or solve the problem of y.

There is a great and worthy distinction here. We are not building something for the sake of it. This is pet project territory and should be avoided (unless you have nothing better to do and you want to learn something new).

We are building things for a specific purpose, and you need to know what that purpose is. What are the reasons behind building this in the first place?

If the ideas are the whats, then the reasons are the whys.

But the whys are just requirements – sorry, I mean ideas – anyway, aren’t they? They are the strategic ideas the organisation has deemed fit to invest its money in.

So the second point is this…

Goals and solving problems are ideas for things that will improve the organisation.

But how do we know that the ideas of what to build will realise the ideas that will improve the organisation? They are after all just ideas…

We need some means of ensuring our ideas of what to build directly contribute to our ideas that will improve the organisation and we need some means of validating this contribution.

This leads us to deduce that we need to identify what this contribution will be in measurable terms and, where we are unsure of it, that we also need some means of rapidly gaining feedback to verify that the assumption is valid.

In Lean Startup[i], Eric Ries describes this as the build, measure, learn loop…

Instead of making complex plans that are based on a lot of assumptions, you can make constant adjustments with a steering wheel called the Build-Measure-Learn feedback loop.

The third and fourth points are therefore…

Ideas of what to build directly contribute to ideas that improve the organisation

We don’t know if they will – we just think they will. Therefore we need to…

Gather early feedback on how our ideas of what to build contribute to ideas that improve the organisation, before we invest wholly in the development of those ideas.

So some bright spark somewhere has had a bright idea.

How does this bright spark’s bright idea see the light of day?

And why is this bright spark’s bright idea brighter than some other bright spark’s bright idea?

And which bright spark gave another bright spark the permission to have a bright idea and start working on it? Who comes up with bright ideas?

The principles of the Agile Manifesto are clear on this aren’t they?

–       The best architectures, requirements, and designs emerge from self-organizing teams.

–       Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

This suggests that our teams are empowered to come up with ideas for the architecture, the product and the design – they are not told what the architecture is, what the product must do and what the design must be. These emerge through the efforts and imagination of our teams. And we trust them to do this.

So I ask again… Who comes up with bright ideas?

The answer? Point number five…

No-one has the monopoly on bright ideas.

We also need to ensure future ideas are not lost. Deeper analysis of ideas may produce unexpected results that spawn other ideas. These cannot be lost. Ideas for new products, and feedback and observations from past ideas, need to be captured. Where? Point number six…

Maintain an idea inventory.

With all these ideas bubbling around and a commitment to start building we progress the endeavour with optimism and excitement. There is, after all, nothing like a team of software developers, analysts, testers all pulling together in the same direction trying to build something that is game changing.

But what if it’s 10 teams? What if it’s 20 teams, or 1,000 developers?

It’s hard enough for a team of 7 to adopt the disciplined approach that successful Agile teams require…but hundreds of them?

You can be optimistic, but scaled Agile projects fail just as badly, and just as often, as traditional ones.

So how do we communicate these ideas to all these teams of people? We can’t expect simply to visit our teams and talk to them about all these ideas. We should form a backlog of these ideas of what to build, and how they contribute to ideas that improve the organisation, so they can act as placeholders for our conversations. But when we scale, it is a good idea to capture a little more.

We need some means of understanding where an idea fits into an overall end-to-end process and we need to know the scope of that idea. That is, we need to know what done looks like for that idea.

In the past we used to write Requirements Specifications. Specifying requirements is worse than capturing requirements when we now know that they aren’t requirements at all. However, I can’t bring myself to call what we should produce an Ideas Specification. It’s more like a map of how ideas are strung together and a set of examples that show what it’s trying to achieve.

The seventh and eighth points are therefore…

Map the ideas of what to build to understand how they relate to one another


Bring your ideas to life with concrete examples

Having produced a map and scaled our project, we now need to use it to take our plethora of ideas and somehow distribute them equally between numerous teams. To stand any chance of success, the ideas need to be independent, yet still contribute to a meaningful, clearly stated and communicated set of ideas to improve the organisation – a set that we think would make a decent release.

Our ninth point to keep in the back of our minds is therefore…

Independent ideas are distributed equally amongst teams.

BUT…(and it is a big but isn’t it?)

Are we ready to distribute this much work amongst our teams? How can things be independent without the infrastructure already in place to allow these teams to surge? But getting this infrastructure is costly and time consuming, and we haven’t yet verified that the ideas of what to build directly contribute to ideas that improve the organisation. We don’t know if they will – we just think they will. Which one is the egg and which one is the chicken?

My advice is that it doesn’t matter how sweet your architecture is if what’s built upon it doesn’t have the business value it deserves. You have to start verifying your assumptions on idea contribution, at the expense of architecture…go on, do it…NOW!

Another rationale for deferring architectural perfection is that there is little point delivering a product to market if someone else gets in there first. A great product is, well, great, but sometimes getting in there first is more important.

Now lots of people will talk about this, about early return on investment, and about getting the low-hanging fruit done quickly. This makes an awful lot of sense. Rather than spending our time perfecting the product, let’s do the easy, high-value stuff first and ship it, so we can get a first foot in the market and start earning…

Hold on and stop and think for a minute.

Is this always the best strategy?

New, innovative products that successfully fill some gap will be copied, enhanced and perhaps eventually overtaken by one of your competitors. New products have a shelf life during which you can maximise being the shiny new thing that everyone wants.

If you released the easy, high-value stuff first, how long will it take your competitors to build a similar product, when all they have to do to catch up is build easy, high-value stuff? Plus your architecture is pants, so it’s likely to be buggy and difficult to build on. If one of your competitors is a big name then chances are they’ll have more resources, existing assets, better marketing and greater connections, and will destroy you in the blink of an eye before saying thank you very much for such great ideas.

So my tenth point is this…

The ideas behind choosing the first thing to build are as important as the thing itself.

Whether this is validating assumptions, time to market, proving architecture, quality to market or preserving the longevity of your market, you have to be sure which you are aiming for, as you can’t achieve them all.

[i] The Lean Startup, Eric Ries, Penguin Books, 2011


The Light Bulb Moment

Our approaches to software development traditionally focus on lengthy phases whereby we understand the requirements, we design the product, we code it, then we go through numerous test phases until we are either done or we give up all hope. This traditional approach has few opportunities for feedback and never touches on whether the vision for the project was right in the first place.

Moving forward a number of years, our more contemporary iterative approaches to software development go somewhat further in that they try to get the product into the hands of the users as soon as possible so that we can validate our assumptions straight away. We label these approaches Agile, in that we can react quickly to the feedback we receive, or Lean, in that we remove waste from our process, including the waste of delivering products that are not required.

A project that is the product of innovation – one that exists to produce a product for a new or emerging market, or a product that causes a shift in thinking within an existing market – has the most significant need for continuous feedback. This is not just feedback on whether the product’s features do what they were intended to do. No, it is much more imperative than that.

Think back to our Saving Lives story. How did this innovative approach become successful? Do you think a group of people got together, had an idea, built it and delivered it? That they then sat back and watched the success?

Of course not. An innovation is a guess. It is an analysis of an opportunity that has high risk that it will not deliver the outcomes expected. It is imagining an uncertain future.

In Gamestorming[i], Dave Gray, Sunni Brown and James Macanufo draw an analogy between the journey towards an uncertain vision and voyages of discovery:

Like Columbus, in order to move toward an uncertain future you need to set a course. But how do you set a course when the destination is unknown? This is where it becomes necessary to imagine a world; a future world that is different from our own. Somehow we need to imagine a world that we can’t really fully conceive yet – a world that we can see only dimly, as if through fog.

When we have visualized our world we take many steps towards it, but the direction is not certain and we need to constantly check where we are and, from these results, determine where we go next.

Those that fail to do this may start off in one direction and overcome many obstacles as they travel to their uncertain destination, only to find nothing there, or to find that they are back where they started.

One of the world’s greatest-ever innovators learnt this lesson early. Thomas Edison invested time, money and energy in an electric vote recorder. Edison found that the legislative bodies at which the product was aimed exploited the current slow, manual process to charm the public for last-minute votes. Edison had assumed that there was a market for his invention, but because the recorder would deprive the legislators of this opportunity, they rejected it.

In Edison on Innovation[ii], Alan Axelrod described what Edison learnt from this endeavor:

From this initial popular failure, Edison drew a valuable lesson. He resolved from then on to determine — firsthand — the existence of a market and a need before embarking on any other invention. For him, that lesson was sufficient to turn momentary failure into lifelong success. Nothing, Edison believed, was truly a failure if you managed to learn something from it.

Travelling the route of innovation is different from travelling the common path of business process. The vision and the strategy are not precise, so the steps to get there must be different too. We need to provide a framework for creativity and experimentation so that both our vision and our strategy can change as we learn many lessons on our journey to building the product.

The basic assumptions of the innovation – the Vision and the Strategy to implement that vision – need to be tested before funds are heavily invested in the product. Edison, and the contingent of innovators who followed him, treated the approach as a series of experiments whose outcome was learning.

In Lean Startup[iii], Eric Ries describes this as the build, measure, learn loop…

Instead of making complex plans that are based on a lot of assumptions, you can make constant adjustments with a steering wheel called the Build-Measure-Learn feedback loop.

The feedback loop is a means to steer the endeavor towards its mission, its vision. The endeavor will commence with a strategy to achieve this vision. The endeavor will build (some of the product), measure the strategy’s effectiveness towards the vision and, from those measurements, learn whether to continue with the strategy or change it to be more effective in achieving the vision.

These experiments shape and fine-tune the product over time and many experiments can happen at once to learn many new things. They are important because they provide the constant feedback needed to verify our assumptions to ensure we build the right thing and not the thing we imagine upfront.

The point we enter the world of developing an innovative product is therefore the creation of the Vision. The identification of the opportunity.

At the heart of innovation is the identification of this opportunity.

This is the generation of the vision and the strategy to achieve that vision. These provide the initial requirements for the product.

This is not the whole equation, however. We may have a great idea, but fulfilling it requires much more. The characteristics of the innovation team are important. This is not your typical stakeholder and requirements manager relationship.

The environment in which innovation thrives is different from our typical office environment. This is where we generate and test the ideas, so it is an important ingredient in deriving the innovative product requirements.

As discussed, the experimentation process is key to innovation. This impacts the requirements approach tremendously. It is the essence of discovering the right requirements and is a far cry from our traditional review and approval process.

We also need to ensure future ideas are not lost. Each experiment may produce unexpected results that spawn other innovations. These cannot be lost. Requirements for new products, and ideas for new experiments, need to be captured on an innovation inventory.

In order to succeed and sustain these ideas we need to keep an eye on the commercial side of the innovation. Without investment and a profitable outcome, the endeavor will fail. It is therefore key that the requirements for the product also cover affordability and ideas for ensuring the longevity of the product in its target market.

What I’m getting at is that innovation isn’t just about having a great idea.

The idea is not enough.

[i] Gamestorming, Dave Gray, Sunni Brown, James Macanufo, O’Reilly Media, Inc., 2011

[ii] Edison on Innovation: 102 Lessons in Creativity for Business and Beyond, Alan Axelrod, Jossey-Bass, 2008

[iii] The Lean Startup, Eric Ries, Penguin Books, 2011

The Offside Rule explained using Specification by Example


People who know me know that I like two things:

I like my football.

I like my Agile.

Why do I like these things?

The beautiful game.  The beauty of a team, each player knowing the role they play but willing to step into other roles as needed for the benefit of the team. Each player excelling within their own role, some dazzlingly so, but the whole that the team produces is significantly more than its individual parts.

They are self-organizing and cross-functional.

A team who have many practices they work on tirelessly in the pursuit of perfection.

They pursue technical excellence.

They focus only on the next game, each game contributing to the overall objective – Premiership Champions……

They iteratively pursue their goals (and measure progress in points).

You may be forgiven for thinking that this article is leading to a comparison of football teams and Agile teams. Sorry – that must surely have been done already – it’s just that I get carried away when I talk about the beautiful game (now am I talking about football or Agile?).

This article aims to bring an Agile technique together with a football rule in an attempt to explain both.

The offside rule explained using Specification by Example.

Should have something for everyone.

So let us start with an overview of Specification by Example.

Simply put, it is a technique from the school of Agile that is used to specify requirements using examples. It also goes by the name of Acceptance Test Driven Development. However, I personally feel that dilutes what it is trying to achieve.

Although it uses tests as its primary angle, it is primarily a technique for elicitation and understanding of requirements.

Does that make sense?

It may not, as it reverses a traditional viewpoint and achieves something that often comes up in a traditional project’s lessons learned.

The traditional viewpoint is that we derive our test specifications from our requirements specifications. In Specification by Example, our tests are our requirements specification.

The lesson learned we do something about is to bring our testers into the project earlier – that is, during requirements elicitation. If our tests are our requirements specification, then it stands to reason we need our testers’ wonderful edge-case-seeking brains in the starting position to start coming up with these tests / requirements / whatever.

Make sense yet?

When we sit with our clients during those early stages to try and understand what they want, our Business Analyst often asks the client for concrete examples or scenarios to explain the requirement and help the analyst understand it.

When the analyst has this understanding, he tends to discard those examples. They are transitional artifacts used only to get it. Once they get it, they write it down in the form of a Software Requirements Specification or a Use Case or a Functional Specification, where they try to describe, in as much detail as they can and as precisely and accurately as the English language will allow, what the system should (or should I say shall) do.

Even worse, the analyst never got examples in the first place, or those examples were basic happy-day scenarios, leaving big gaping holes and questions in the resultant specification.

The developer comes along and codes to the spec, but some of the meaning is lost, so they are forced to make assumptions. They come up with their own examples in isolation to test their work before handing it over to a tester, who has spent the same amount of time coming up with his own assumptions and his own examples in the form of tests.

They come together and the assumptions clash.

They work through the clashes, agree solutions and show it to the customer, who declares “that’s not exactly what I meant, here let me give you an example…”

So what if we get the tester and developer involved with the customer and Business Analyst, and as a team they come up with examples? The tester will ask all sorts of difficult-to-answer questions around business rules that the analyst wouldn’t have thought of, and the developer will be able to contribute technical knowledge and his experience of what normally gets missed, in order to support the development and automation of tests.

Make sense now?


Time for an example.

The offside rule.

To break the offside rule a player needs to be in an offside position, so let’s keep it simple and start there.

To be in an offside position, a player needs to be nearer to his opponents’ goal line than both the ball and the second-last opponent.

| Position of player from opponents’ goal line | Position of ball from goal line | Position of second-last opponent from goal line | In offside position? |
|---|---|---|---|
| 10 yards | 12 yards | 11 yards | Yes |
| 10 yards | 9 yards | 11 yards | No |
| 10 yards | 12 yards | 9 yards | No |

Our tester will surely ask now – what if they are in line?

The ball can be played sideways, so although you may be in front of the second-last opponent, the ball has to be played forwards for you to be offside.

Also, you can be level with the second last opponent when the ball is played forward without being in an offside position.

The table gets updated to now look like this.

| Position of player from opponents’ goal line | Position of ball from goal line | Position of second-last opponent from goal line | In offside position? |
|---|---|---|---|
| 10 yards | 12 yards | 11 yards | Yes |
| 10 yards | 9 yards | 11 yards | No |
| 10 yards | 12 yards | 9 yards | No |
| 10 yards | 10 yards | 11 yards | No |
| 10 yards | 12 yards | 10 yards | No |

No matter where they are on the pitch, if the player is in front of the ball and the second-last opponent, is he offside?

No – they can only be offside if they are in the opponents’ half.

Now let’s imagine our football pitch is 100 yards long, resulting in the halfway line being 50 yards from either goal line.

So if our player is 50 yards or more from the opponents’ goal line they can never be in an offside position – even if standing directly on the halfway line.

| Position of player from opponents’ goal line | Position of ball from goal line | Position of second-last opponent from goal line | In offside position? |
|---|---|---|---|
| 10 yards | 12 yards | 11 yards | Yes |
| 10 yards | 9 yards | 11 yards | No |
| 10 yards | 12 yards | 9 yards | No |
| 10 yards | 10 yards | 11 yards | No |
| 10 yards | 12 yards | 10 yards | No |
| 51 yards | 53 yards | 52 yards | No |
| 50 yards | 53 yards | 52 yards | No |
| 49 yards | 53 yards | 52 yards | Yes |

So that determines if the player is in an offside position.

Yes – but many pundits often make errors with regard to the second-last opponent, so we should clarify that with an example.

The error they make is to miss the fact that the last opponent (not the second-last opponent) is usually the goalkeeper, who is often disregarded from the decision. For the player to be in an offside position this is irrelevant. The player must be closer to the goal line than the ball and at least two opponents (whether one of them is the goalkeeper or not) to be in an offside position.

| Position of player from opponents’ goal line | Position of ball from goal line | Position of second-last opponent from goal line | Position of goalkeeper | In offside position? |
|---|---|---|---|---|
| 10 yards | 12 yards | 11 yards | 1 yard | Yes |
| 10 yards | 9 yards | 11 yards | 1 yard | No |
| 10 yards | 12 yards | 9 yards | 1 yard | No |
| 10 yards | 10 yards | 11 yards | 1 yard | No |
| 10 yards | 12 yards | 10 yards | 1 yard | No |
| 51 yards | 53 yards | 52 yards | 1 yard | No |
| 50 yards | 53 yards | 52 yards | 1 yard | No |
| 49 yards | 53 yards | 52 yards | 1 yard | Yes |
| 10 yards | 12 yards | 11 yards | 80 yards | Yes |

In this last case the goalkeeper is up for a last-minute corner in an attempt to grab a last-ditch equalizer. It happens, but when the goalkeeper is upfield like this, the opposing team would need at least two outfield players nearer their own goal line than the player for the player to be onside when the ball is played forward in their half.

Make sense?
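At this point the position rules are complete, so we can turn the specification table into something executable – which is really the point of Specification by Example. The sketch below is my own illustration, not an official implementation: the function and variable names are invented for this example, and distances are yards from the opponents’ goal line, on the 100-yard pitch assumed above.

```python
# A minimal sketch of the offside-position rules above.
# Distances are measured in yards from the opponents' goal line.
# All names here are my own invention for illustration.

PITCH_LENGTH = 100  # the 100-yard pitch assumed in the text

def in_offside_position(player, ball, second_last_opponent):
    """True if the player is in an offside position."""
    in_opponents_half = player < PITCH_LENGTH / 2         # on the halfway line is not offside
    nearer_than_ball = player < ball                      # level with the ball is not offside
    nearer_than_opponent = player < second_last_opponent  # level with the opponent is not offside
    return in_opponents_half and nearer_than_ball and nearer_than_opponent

# The rows of the table above, checked as executable examples:
# (player, ball, second-last opponent, expected answer)
examples = [
    (10, 12, 11, True),
    (10, 9, 11, False),
    (10, 12, 9, False),
    (10, 10, 11, False),
    (10, 12, 10, False),
    (51, 53, 52, False),
    (50, 53, 52, False),
    (49, 53, 52, True),
]
for player, ball, opponent, expected in examples:
    assert in_offside_position(player, ball, opponent) == expected
```

Every row of the specification table becomes an assertion, so when the examples change, the failing assertions tell you exactly which rule you got wrong.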

We now need the rules to determine whether being in an offside position causes an offside offence and the resultant raised red or yellow flag followed by the blowing of a whistle.

A player in an offside position only commits an offence if the ball is played or touched by one of his team and the referee considers him to be interfering with play, interfering with an opponent, or gaining an advantage.

| Player in offside position? | Ball played or touched by team member? | Interfering with play or opponent, or gaining advantage? | Offside offence? |
|---|---|---|---|
| No | N/A | N/A | No |
| Yes | Yes | Yes | Yes |
| Yes | Yes | No | No |

In reality, when designing your tables, you’d probably need to ask a few questions of your customer (a referee in this case) to get a deeper understanding of the requirement.

For example, “what does touched by another team member mean?”, “what constitutes interfering with play?”, “what gives a player an advantage?” and, most importantly, “can you give me examples of each?”. You can then build those examples into the tables for a clearer picture.

Back to the next part of the rule. The player cannot be penalised if the ball was played from a goal kick, a throw-in or a corner kick. We therefore need to include the type of play leading up to a potential offside call.

| Player in offside position? | Ball played or touched by team member? | Interfering with play or opponent, or gaining advantage? | Play type | Offside offence? |
|---|---|---|---|---|
| No | N/A | N/A | Open | No |
| Yes | Yes | Yes | Open | Yes |
| Yes | Yes | No | Open | No |
| Yes | Yes | Yes | Goal Kick | No |
| Yes | Yes | Yes | Corner Kick | No |
| Yes | Yes | Yes | Throw-in | No |
| Yes | Yes | Yes | Free Kick | Yes |
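To round things off, the offence table can be executed in the same way as the position table. Again this is a sketch with invented names, and it assumes the exempt play types are exactly the three listed above (goal kick, corner kick, throw-in).

```python
# A sketch of the offside-offence rules above; names are my own.

EXEMPT_PLAY_TYPES = {"Goal Kick", "Corner Kick", "Throw-in"}

def offside_offence(in_offside_position, played_by_teammate, involved, play_type):
    """True if an offside offence is committed."""
    if not in_offside_position:
        return False
    if play_type in EXEMPT_PLAY_TYPES:  # the player cannot be penalised from these
        return False
    # Otherwise: a teammate plays the ball AND the player is interfering
    # with play or an opponent, or gaining an advantage.
    return played_by_teammate and involved

# The rows of the final table, as executable examples:
# (position, played by teammate, interfering/advantage, play type, expected)
examples = [
    (False, None, None, "Open", False),
    (True, True, True, "Open", True),
    (True, True, False, "Open", False),
    (True, True, True, "Goal Kick", False),
    (True, True, True, "Corner Kick", False),
    (True, True, True, "Throw-in", False),
    (True, True, True, "Free Kick", True),
]
for position, played, inv, play, expected in examples:
    assert offside_offence(position, played, inv, play) == expected
```

Notice how the N/A cells in the table fall out naturally: when the player is not in an offside position, the other columns are never even looked at.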

There we go.

Specification by Example explained with an example.

The offside rule explained using Specification by Example.

Today sees the start of the Premier League, so go and impress your friends in the pub tonight with your knowledge of the most discussed rule in football. There will always be a bad offside call this afternoon that needs dissecting over a pint.

A final note…..

Come on Villa……!

Don’t Always Believe What’s Been Written Down

At the turn of the millennium I was managing a project for a software house that had been contracted to develop a software system to support the sales process of a large financial institution.

I started off with the expectation that I would work with the client to run a set of workshops that would produce the vision of the product, capture the features of the product and rank those features, so that we could then take them into development over a number of iterations with our customer present – answering questions, further detailing what was needed and verifying the product did what they wanted it to do.

I couldn’t have been more wrong.

The starting point for my team’s engagement in this project was a huge, constantly changing beast of a document that detailed the sales process in minute detail. We were simply asked to go and develop it.

Why did we need to understand the vision for the product? It was all there in the document.

Why did we need to talk to the client? It was all there in the document. If we needed clarity on anything then be sure to ask and we will get back to you.

The problem we had was that we just didn’t believe the document was complete, or that the strategy for the product was correct. Eh – we were just the supplier and we were asked to develop it the way it was, so why not just crack on?

In addition, the beastly document came with a high-level description of how the process worked and hundreds of detailed forms that described the questions asked of the client as part of this Sales Process.

There were sample letters and reports that were to be fully automated, and simple examples of calculations that would work out client premiums and valuations.

The key business outcomes for the project were to reduce the overall time and cost it took to make a sale and to reduce the number of queries and rejections raised by the organisation’s compliance function.

It was absolutely essential therefore that the product automated the generation of the letters and reports that justified the advice and automated large parts of the compliance function to ensure that the advice was right first time.

As the story unfolded, and we were kept in the background, we found that behind each page of this document there were thousands of anomalies, inconsistencies and ambiguities that led to statements such as:

“It’s been agreed in principle” – translated to “We’re on the right lines but not there yet”.


“It’s a placeholder” – translated to “We’ve no &%&*^% idea what’s supposed to go there.”

The detailed forms gave the impression that they were complete. Far from it: some basic ones were fine, as those questions were always asked of clients (e.g. what’s your name?), but the forms that were there to support the advice given as part of the Sales Process fell well short of the mark. These were the critical questions, as they were there for us to determine whether the advice given was sound.

The calculations were so simplistic as to be worthless, and the automation of the reports and letters was proving impossible because of the errors in the forms.

The client was getting more and more frustrated with the problems with the beast of a document, and it soon became clear that what was needed was an acceptance that the document wasn’t an appropriate vehicle for moving the project forwards, and that a new approach was required based on fast feedback cycles.

This led to an intense three-month period whereby my team camped out at the client’s premises and delivered the requirements week by week.

The automation of the reports and letters was seen as a key justification for the project, so the team focused there first as it was also the most risky – we were in danger of not being able to deliver anything. Fully justifying every recommendation was proving to be a nightmare, and most stakeholders agreed it was a near impossibility. We therefore suggested a semi-automated solution which would still justify the business case but would be practical to implement. This included asking key questions up front; then, depending on the answers given, a consistency checker would run that would:

a)    Highlight any further questions needed

b)   Highlight any justifications required

c)    Highlight any risky scenarios in the data captured and recommendations provided

These would all then be presented in a letter whereby the advisor could answer the additional questions, supply the justifications and understand the risky scenarios.

The results of this letter would then be given to the compliance function, which would highlight in red / amber / yellow the paragraphs in the letter that they would need to check to ensure the advice was sound.

If there were no paragraphs highlighted, the letter would go through as accepted. Right First Time.

This approach was prototyped and demonstrated to the stakeholders and they loved it.

They endorsed the approach and could see the whole system taking shape week by week over the following months. At its completion, the customer felt huge satisfaction in the part they had played in shaping the product, and as they took it on the road to train the advisors in its use they were proud to demonstrate its functionality and glowed as the tributes came flooding in.

The starting point for my team’s involvement in this project was the big document. It was just that: a big document.

But because it was big, it was deemed complete (in principle).

The key thing here is that just because something is written down, it does not mean it is right. Just because lots of things are written down, it does not mean the result is thorough and complete.

We have a terrible affliction: when we take the time and effort to write things down, we somehow think we have put thought, analysis and diligence into our approach. Far from it. Writing stuff down is a solitary exercise, the power lies in the hands of the guy with the pen and, as we know, language has a terrible habit of being interpreted in many different ways.

As a software development team embarking upon a new project, it is ill advised to accept wholesale a bunch of documents given to you that describe the product. They will describe the writer’s view of the product; it may even have been reviewed by the stakeholders to some degree, but it won’t contain the tacit knowledge garnered from their conversations, and I’ve never come across one that contains the disclaimers it deserves (e.g. this section is incomplete, this requirement is vague, here’s a whole bunch of requirements we haven’t investigated yet but we ran out of time so this lot is a best guess).

Regardless of your starting position, you will need to understand value from the customer’s perspective. You need to understand the business justification for the project and how this value is delivered by the product. You will need to translate whatever beastly documents you are given at the start into something that gets the product (even as a throwaway prototype) into the hands of the stakeholders and users quickly.

You can’t always believe what is written down, but if you can feel it, play with it and test your assumptions and hypotheses against it, then this is the one source of truth.

The product in the hands of those using it.

Saving Lives

In 2010, over 7500 people died of cholera worldwide.

That’s 20 every day.

That’s 140 a week.

That’s 280 in a two-week Sprint.

That’s one hell of a velocity.

According to the World Health Organization[i], this was a 52% increase on 2009, attributed to an outbreak in Haiti following the earthquake in January 2010. The outbreak was so significant that 53% of this global total occurred over a period of just 70 days.

Cholera is caused by ingestion of food or water contaminated with the bacterium Vibrio cholerae, which can rapidly bring on acute diarrhoea and lead to severe dehydration and ultimately death if treatment is not received promptly.

In May 2011, the World Health Assembly recognized the re-emergence of cholera as a significant global threat to countries that cannot guarantee access to safe drinking water and minimum standards for hygiene.

Consequently, almost every developing country faces the potential for cholera outbreaks.

Given the experiences in Haiti, a single catastrophe, such as the outbreak of war, an earthquake or a flood in a developing country can lead to a cholera epidemic of biblical proportions.

One of the main problems in containing cholera is the time it takes to identify a water source as being contaminated and for this source to then be quarantined and the community notified of this.

The sampling process is done monthly, or more frequently if an outbreak has occurred, depending on the local population. It can take a number of days to test the sample in a laboratory, and once the source has been identified as contaminated it takes a significant effort to communicate this and isolate the source from use by the local population.

While this is happening lives are being lost.

Remember we have a velocity of 280 deaths per Sprint.

Now imagine we have a project that aims to provide the locals with a number of inexpensive devices that send an electric pulse into a sample of water to test for the presence of the Vibrio cholerae bacterium. The outcome is reported immediately and, through a wireless connection in the device, the results are transmitted instantaneously to a centralized database that flags the water source as contaminated or not.

This information is then used to communicate sources of contamination, understand where outbreaks or epidemics are likely to occur, which populations are the most at risk, and more importantly, it can help pinpoint the origins of contaminated sources.

The benefits of a successful project are numerous. The front-end devices can immediately warn against or endorse a water supply for consumption, and the back-end analysis of the results across an area, country, continent or planet provides the ability to isolate the origins of an outbreak for rapid treatment. The end result: our velocity reduces from 280 deaths per Sprint to a number approaching something more agreeable / acceptable / manageable… no term seems appropriate where deaths are concerned… it is just less than 280.

Utilising our imaginations a little more, let’s now suppose we adopt a traditional waterfall approach to developing and delivering this project.

It takes 3 months to document the requirements in their entirety.

It takes 4 months to design the complete system, including the device and the back-end functionality.

It takes a further 9 months to develop, then 6 months of testing, before implementing a 6-month roll-out plan across the globe.

That’s 28 months before we start to see the benefits. That’s the equivalent of around 60 Sprints and, with a velocity of 280, that’s nearly 17000 deaths.
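That arithmetic can be checked in a few lines (a back-of-the-envelope sketch only; the phase durations and the velocity of 280 deaths per Sprint are those quoted above):

```python
# Back-of-the-envelope check of the waterfall timeline and its human cost.
months = 3 + 4 + 9 + 6 + 6     # requirements, design, development, testing, roll-out
weeks = months * 52 / 12       # roughly 121 weeks
sprints = weeks / 2            # two-week Sprints: roughly 60
deaths = sprints * 280         # at 280 deaths per Sprint: nearly 17000
```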

And that’s if the project goes to plan. What about all the risks?

  • Will the devices operate appropriately? How are they powered? Will they actually work?
  • Do they have a shelf-life?
  • How are they calibrated?
  • Does the continent have the appropriate infrastructure to support the wireless communications?
  • How are the devices distributed and the population trained in their use?
  • How do we roll out the devices in war-torn territories?

Given our past large-scale project experiences, the schedule is more than likely to overrun; let’s say by a mere 50%. That’s another 8500 lives lost.

So to deliver the potential outcomes of this project using a traditional approach will cost millions of pounds and, more importantly, about 25000 lives.

Each day we delay delivering the benefits, it could cost 20 more lives.

Each week we delay delivering the benefits, it could cost 140 lives.

Each month we delay delivering the benefits, it could cost 560 lives.

Is a traditional waterfall approach appropriate for this project given the dire consequences of delayed Return on Investment?

So let’s assume the business case expects the project not only to reduce the number of deaths to pre-2010 figures, but to cut them further still by 20%. That’s a reduction of 72%, from 7543 to 2112.

That’s 6 deaths a day.

That’s 40 deaths a week.

That’s 80 deaths in a two-week Sprint.

That’s a much worse velocity (but a better one given our unit of measure).

These savings are attributed to:

  1. 45% to the use of the wireless devices that immediately detect contaminated sources
  2. 20% to the communication of contaminated sources in the centralized database
  3. 35% to the isolation of the origins of contaminated sources and treatment to remove contamination

In addition, the business case also reflected which areas of the globe are most vulnerable to an outbreak of cholera. Based on figures collected by the World Health Organization, the countries with the highest number of deaths from cholera are:


[Table: Number of Cases, Number of Deaths and Case-fatality rate by country]
This equates to 84% of the total of cholera related deaths in 2010.

Now there’s a great example of Pareto’s law.

Given this information, the associated risks, and a recognition that a traditional waterfall big-bang approach is going to cost not just a fortune but also lives, what other approach could we possibly adopt?

Now let’s not fall into the trap of labelling the approach. We could call it Agile or we could call it Lean, but for now let’s not; let us define an approach to the project that helps us start to meet our goals the quickest…

In this project, time to market and product utility are paramount.

In this project, our biggest risk is the devices not being fit for purpose.

In this project, if we can get utilisation high in three countries, we are well on our way to achieving our ultimate goals.

If we can therefore implement the feature of the system that gives us the most benefit (i.e. the devices) that also attacks the greatest risk to project success early (i.e. the devices not being fit for purpose) in the countries most impacted (i.e. Haiti, Nigeria and Cameroon) then we start to save lives.

Consequently, our plan shows a number of releases over a two year period showing testing of the devices in Haiti before roll-out to Nigeria and Cameroon as our initial releases. The central database, the communications from the devices to the database, the infrastructure to support wireless communications and the complex analytical features are all added in later releases building the project incrementally.

The project developed a prototype of the device and demonstrated it working on known contaminated sources in a controlled laboratory within four weeks of project startup. There were some minor problems with calibrating the electric pulse, analyzing the results, and presenting the results back in a form that could be understood and acted upon with minimal training, but these were rectified and the design of the device (which resembled a lightweight aluminium pen) was fine-tuned.

After four weeks, the go-ahead was given to manufacture 10000 devices for shipment to Haiti. 1000 could be manufactured each week, and each week 1000 were distributed to Haiti and locals were trained in their use.

The large-scale use of the first 1000 devices detected numerous sources of contamination that would otherwise have led to ingestion of the bacterium, but also marked a small number of sources as contaminated that did not contain the bacterium and would have provided water safe to drink. The boffins back in the project modified the software, calibrating it so that these sources would be marked as safe, and released this software for the second batch of 1000.

The roll-out continued, water sources were sampled, and the number of cases and the case-fatality rate began to drop.

From the initial release of 1000, not only did the team quickly get feedback on how the devices worked in situ, they also spotted the need to be able to do software updates while the devices were in use in the field (the first 1000 had to be recalled and the software patch applied).

This caused the project to adjust their schedule so that the next release would cover setting up the wireless infrastructure so that patches could be applied to the devices using their wireless capabilities. This proved invaluable as numerous changes were made to improve the devices over the initial releases to Haiti, Nigeria and Cameroon.

Six months into the project, the death rate attributed to cholera in Haiti, Nigeria and Cameroon had dropped by 25%.

Six months into the project we were saving 4 lives a day.

Six months into the project we were saving 30 lives a week.

Six months into the project we were saving 61 lives in a two-week Sprint.

If our target was to reduce global death rates by 72%, after six months from project startup we were 75% of the way there.

The approach to project delivery that we are yet to label, with its quick feedback loops, validated our assumptions quickly, contained our risks while they were small enough to deal with, and allowed us to get the key features into the target users’ hands quickly. This white-label approach was the key reason for achieving these benefits so quickly.

Now this is just a fictitious project, where we have used our imaginations to demonstrate why the approach described is so powerful. It is void of technique, but it is common sense; it is logical, is it not, when the unit of measure for success is saving human lives.

The intent of using this fictitious project was to shock; to carve the rationale for why such an approach should be used onto the reader’s memory. Talking about saving lives leaves longer-lasting trails of understanding than talking about efficiencies or investments…

But our projects are different.

Yes they are. They always are.

They always involve different people who all have different personalities, different experiences, different skills and different values about how the project should operate and what it should do.

They always have different customers who have different priorities, different goals, different problems, different infrastructure and different cultures.

In our projects we measure success in market share, in pounds and pence, in return on a monetary investment, in increased profits, in reduced costs. Not in lives…..

Our projects use technologies that are proven.

Our projects are based on automating known business requirements. We know what we want.

Our projects use inexpensive resources in India.

Our projects are based on fixed price contracts.

Our projects are enterprise-wide and involve hundreds of developers.

Our projects are not driven by time to market; they are legislative and all requirements must be there by A-day.

All projects are different, but they all have one commonality: they are all based on a myriad of assumptions that remain until the hypothesis has been proven. The best way to prove a theory or validate an assumption? Do the work and get the feedback. If the feedback proves the supposition, then stay calm and carry on; if the feedback suggests an alternative is true, then adjust, calibrate and try again.

Hopefully, by now you understand why the white-label approach described in Saving Lives is a better approach than a traditional waterfall big-bang approach; that the approach evolved because it was what was needed to start saving lives NOW! Not in two years.

All projects are different, but they all have another commonality: they all have a myriad of statements about what the system should do. We call these requirements. Each of these statements is an assumption about what is needed, what problem is being solved or how the system should work. Each of these statements should provide some value to some entity in the system’s external environment. The totality of these statements covers a complete description of what the system should do and why.

However, they are still assumptions and still need to be validated. They are all statements of intent that need to be communicated. They are expectations. We therefore need feedback loops in place to quickly understand what we are trying to achieve, to ensure that those who work from these statements of expectation get it, and to validate that we are on course to achieve it.

We need to ensure we are doing the right thing and we all agree on what that thing is.

The best way to know this is to just do it.

[i] Weekly epidemiological record, No. 31, 2011, 86, 325–340 (29 July 2011).

Kanban and Retrospectives

Kanban is a tool that helps us achieve continuous flow by setting limits on the amount of Work in Progress that can be in a product’s value stream at any point in time.
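The pull mechanic behind those limits can be sketched in a few lines (class and method names here are illustrative, not from any particular Kanban tool): each activity in the value stream carries a WIP limit, and an item can only be pulled in while the limit allows it.

```python
class Activity:
    """One activity (column) in the value stream, with a WIP limit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def can_pull(self):
        # An item may only be pulled in if we are under the WIP limit.
        return len(self.items) < self.wip_limit

    def pull(self, item):
        if not self.can_pull():
            raise RuntimeError(f"WIP limit of {self.wip_limit} reached in {self.name}")
        self.items.append(item)

develop = Activity("Develop", wip_limit=2)
develop.pull("Story A")
develop.pull("Story B")
# Pulling a third story would raise: the team must finish something first.
```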

Kanban also offers us the concept of heartbeats, or cadence, against which we can perform regularly scheduled activities. However, we should be cognisant of lean’s pursuit of perfection, so the use of cadence-type activities should be treated with care. As our organization and project team matures, we should be aiming to respond to changes in priorities as they happen, and not on scheduled weekly, fortnightly or monthly heartbeats. Failure to react immediately could result in producing less valuable items, which would be plain muda.

While we are still in our infancy and we are trying to resolve some dysfunctions in our current setup, cadence can be a helpful tool to ensure we get the relevant people together to perform vital activities.

For example, we can set a cadence of one week to revisit and reprioritise the items for inclusion on the Kanban board. Similar to Scrum’s Sprint Planning.

We could set a cadence of 4 weeks to review and reprioritise the items in a release plan. Similar to Scrum’s Release Planning.

We could set a cadence of 1 day to review queues / blocks / impediments. Similar to Scrum’s Daily Stand-up.

We can also use a cadence of 2-4 weeks to perform an analysis of our current approach as a means of continuous improvement. Similar to Scrum’s Retrospective.

There are a number of different helpful techniques for running Agile Retrospectives (see Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen) and these can be used as appropriate.

A simple approach is to allow a short amount of time for the team to identify what made them glad, what made them mad and what they would like to change.

The team can then vote on their favourite three change actions and implement those changes.

This works well in Scrum or when using a cadence in Kanban. However, the lean team are actually looking to refine the value stream by removing waste and reducing the lead time (the elapsed time from the customer’s ask to the customer’s get) in an attempt to improve flow and improve quality. This should be the focus of the retrospective, rather than basing it on what went well, what went badly and what we want to change.

The value stream needs constant attention and needs relentless improvement.

Once these improvements are made we can then start to see the impact on our cycle time (since we plot the average cycle time each day we can visualise any improvement very quickly).
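The daily cycle-time plot can be sketched as follows (all figures are invented): record the cycle time, in days, of each completed item and track a rolling average, so that improvements to the value stream show up quickly.

```python
from collections import deque

def rolling_average(values, window):
    """Rolling average over the most recent `window` values."""
    recent = deque(maxlen=window)
    averages = []
    for v in values:
        recent.append(v)
        averages.append(sum(recent) / len(recent))
    return averages

cycle_times = [8, 9, 7, 8, 6, 5, 5, 4]       # days per completed item, oldest first
trend = rolling_average(cycle_times, window=3)
# A falling trend suggests the improvements to the value stream are working.
```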

We can happily do retrospectives this way every four weeks, using cadence to step back and do a drains-up on how we do things as a team, but we should also supplement this with something more specific and more timely.

The following 3 techniques may help with this.

  1. WIP Limit on Issues
  2. Root Cause Analysis on Blockages
  3. Retrospective on completion of each item

WIP Limit on Issues

Just as our Kanban system has WIP limits on each activity in the value stream, we can apply a similar concept to our issues / impediments board and place a WIP limit on the number of impediments that are allowed to be open before the team’s resources swarm on a resolution.

The problem resolution team can look at why the impediment occurred, what can be done to resolve it and how it can be prevented from happening in the future.

Root Cause Analysis on Blockages

Using a cadence of one day to review the Kanban board, we can do something similar to Scrum’s Daily Stand-Up.

The difference here, however, is that in Kanban we do not care about what happened yesterday and what is happening today. We review the Kanban board for evidence of queues building and blockages forming.

When this is spotted, the team can swarm on the problem and resolve it, doing a root cause analysis while the information is still fresh with plans to implement changes so the queue / blockage does not happen again.

Retrospective on completion of each item

Another option is to include a review of the approach to delivery on completion of each and every item (i.e. a retrospective swimlane at the end).

Again we can set a WIP limit on this so we are forced into a retrospective when the limit is reached.

When this happens, a select number of the team congregate and review the value stream for each item and what can be done to improve flow, throughput and quality.

If this is done on each and every item, rather than on a small batch of items (as defined by the WIP limit), then the team members involved in the value stream for that item can quickly meet for 10-15 minutes to discuss and agree improvements based on fresh and relevant information.

In Summary

In Scrum we use retrospectives to view what made us glad, what made us sad and what we want to change. Which is great – just as long as we then go and change things – otherwise it’s just another history lesson.

In lean, however, improvement is relentless. So much so that the objectives of the project focus more on learning and making improvements than on delivery. The goal is learning, not delivery. Completely counter-intuitive to traditional delivery-focused managers and organizations.

In addition, lean has this view of perfection as part of its guiding principles. Scrum doesn’t. Scrum just wants us to improve… Lean wants us to define what perfection means to the organization, then strive to achieve it. For me, the latter summons all sorts of powerful emotions and motivations; the former makes me want to do just about enough.

What can be observed about process problems in software development is that they often show up in the boundaries between one unit’s work and another’s – the handovers.

These typically generate detailed specifications that clearly state, in a contract like manner, the information that crosses the boundaries. One approach to this is to specify the boundary conditions as tests. If the test fails, the problem is identified quickly and can be solved immediately by the team doing the work. Thus feedback and therefore learning is relatively immediate.

Also, in Scrum, boundary-crossing impediments are often handed over to the Scrum Master to resolve. This may be convenient but is far from ideal. Those best placed to solve a problem are those who are suffering because of it. The Scrum Master can help mentor in its removal, but the actual team member who has the angst will also have the drive to see it removed.

In the form of a Retrospective, Scrum teams are encouraged to reflect on how they could improve. However, we often find that these Retrospectives only identify problems but fail to diagnose them and identify, plan, implement and assess the change actions that will eradicate the problem from arising again.

One Lean technique used in problem solving is the five whys.

Taiichi Ohno, father of the Toyota Production System, believed that “by repeating why five times, the nature of the problem as well as its solution becomes clear.”

As an example, we often find that when developing software in iterations against a definition of done, our stories fall short of achieving said done-ness.


Problem: We regularly fail to complete stories in a Sprint

Why 1 – we don’t have enough time to test them all

Why 2 – the stories are too big

Why 3 – because they are initially too vague to break down any further

Why 4 – because our development team don’t understand the detail of the requirement

Why 5 – because we don’t work with the customer and the tester enough to understand the acceptance tests prior to starting development

Action: Customer, tester and developer to define acceptance tests that accurately specify the requirement for each story a Sprint ahead of the Sprint it is to be developed in.

Kanban and Release Tracking

Value is delivered in releases.

In the project world, releases are scheduled and release content is defined upfront. Progress towards that release is monitored and, if behind schedule, trade-offs between release content and release date are made.

In Scrum we define the release content, size the content in story points, measure the number of points we complete each Sprint (our velocity) and work out how many sprints we need to complete the job.
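That calculation is simple enough to sketch (all figures invented): size the release in story points, average recent velocity, and work out how many Sprints are needed to complete the job.

```python
import math

release_points = 200               # total size of the agreed release content
velocity_history = [18, 22, 20]    # points completed in recent Sprints

# Average velocity per Sprint, then round up: a partial Sprint is still a Sprint.
velocity = sum(velocity_history) / len(velocity_history)
sprints_needed = math.ceil(release_points / velocity)
```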

This level of tracking gives us a good view to measure our progress and adjust accordingly.

Kanban does not prescribe an approach to release planning and tracking.

However, this does not mean it shouldn’t be done. You need to do as much release planning as you need. If there are no dependencies on your release – which is rare in the project world – do you need to do any release planning at all? If the need is to fix a critical defect rather than meet some legislative drop-dead date, then the lead time to release is entirely dependent on the time to fix that defect.

When progress towards a release date with agreed release content is an important measurement to you (as it tends to be in the project world), then a mechanism is needed to size the release, track progress of completed items towards the release, and extrapolate the time remaining to complete that release based on current performance.

We have this using a release Backlog of User Stories, Story Points from Planning Poker, Iterations, a Release Burndown and Velocity.

However, it is often argued that the overhead of planning and estimating that surrounds a Sprint can be waste, so how can we achieve measurement towards a release without Iterations?

As a starting point, we need a measure of size. One suggestion would be the number of stories or features, though this is problematic, as stories / features in the immediate vicinity of development tend to be more granular and those in the distant future tend to be larger. If this is the case, then our current delivery rate (e.g. an average of 10 stories per week) cannot be used to determine an end date, as the stories developed will be wildly different in size and complexity from those remaining.

This is why we need a method to size backlog items relative to each other. Planning Poker and story points are excellent candidates for this.

The customer (i.e. Product Owner) will select the stories required in the next release from which we can then determine the size of the release.

We can then start to capture the number of story points completed every single day and from this extrapolate the following measures:

  • Story points completed per day
  • Story points completed in the last 5 days
  • Story points completed in last 4 weeks
  • Story points completed in each 4-week timebox
  • Total planned points for the release
  • Release Burndown
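The arithmetic behind these measures can be sketched as follows (all figures invented; this is not the output of any particular tool): capture the story points completed each day, then derive a rolling velocity and a burndown-style forecast for the release.

```python
points_per_day = [3, 0, 5, 2, 4, 1, 6, 3, 2, 5]   # completed points, oldest first
total_planned = 120                                # total points in the release

completed = sum(points_per_day)
last_5_days = sum(points_per_day[-5:])             # rolling 5-day velocity
remaining = total_planned - completed              # release burndown: points left

# Extrapolate from the recent rolling rate rather than the lifetime average,
# so the forecast reflects current performance.
daily_rate = last_5_days / 5
days_remaining = remaining / daily_rate
```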

This can be plotted as a visible indicator of progress as shown below:

This gives us the ability to measure trends over static periods of time (Velocity) or a rolling velocity, which shows immediate trends over days and weeks. With the latter we can look, on a daily basis, to see whether we have a volatile flow (if the last 5 days goes up and down sporadically) or whether we are suffering from process bottlenecks and cycle-time impediments (if the last 5 days / last 4 weeks starts to trend downwards). What we are also looking for here are trends upwards. If our rolling velocity remains static, then we are not continuously improving.

It also allows us to plot release burndown which we can use to determine if we are on schedule for the agreed release date with the agreed release content.

We do not need iterations to do this. All we need is the release content, for each item in that content to be sized in relation to the other items, and for us to capture how many points go in the “DONE” column each day.