About Javier Artime

Javier grows innovative products as part of a global R&D team. Lean Product Development and Agile Software Development changed the way he works. He has a broad range of interests and an immense child-like curiosity, first for experiences and facts, second for the theories that organize them. He is frequently described as a book-eater, music-addict, espresso-drinker-cook, frequent-walker and dog-friend (the last two go together).

This blog is dead. I have moved to javierart.wordpress.com.

Leannovation is the name of an existing, unrelated company. They haven’t claimed the name or asked me to stop; they probably don’t know this blog exists. Simply put, it doesn’t make sense for me to use the same name.

I will be posting under javierart from now on.


Great post on the relationship between the 80% utilization sweet spot and the 20% free-choice time at companies like Google.

Connected Knowledge

The 20% time has become popular in the software industry in recent years. Even though most programmers don’t work at companies that have 20% time, most have heard of, or know, someone who works at a place like Google, where programmers spend 80% of their hours on what the company requires of them and 20% on their own projects. Or so we have been told.

A shop across town is doing it, and now we want to do it too. Many programmers have tried to introduce 20% time in their workplaces, and that has proved very difficult. So, how can we do it? What are the dos and don’ts? Is there some theory behind the practice? I want to summarize answers to these questions in this post, and I hope programmers find it useful.

[Figure: queue size as a function of utilization, rising rapidly above 0.8]

The main reason for 20% time is to keep capacity utilization at 80% rather than…

View original post 801 more words
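The rapid rise the chart shows can be illustrated with the standard M/M/1 result from queueing theory, where the average number of items in the system is ρ / (1 − ρ) for utilization ρ. This is a textbook formula, not something taken from the post itself; a minimal sketch:

```python
# Average number of items in an M/M/1 queue as a function of
# utilization rho: L = rho / (1 - rho). Used here only to show
# why queues explode as utilization approaches 100%.

def avg_queue_length(rho: float) -> float:
    """Average number of items in the system for an M/M/1 queue."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return rho / (1 - rho)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: average queue length {avg_queue_length(rho):.1f}")
```

At 50% utilization the average queue holds one item; at 80% it holds four; at 99% it holds ninety-nine. The curve bends sharply right around the 0.8 mark, which is exactly the "golden spot" argument.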

Boiling More Than One Ocean (At the Same Time)

Since I joined my first non-corporate software development team a few years back, I have seen plenty of examples of product development organizations trying to do too much in parallel. I’m not talking about the usual peak of activity here or there, but about a constant push toward higher utilization levels or a larger number of projects crawling through the development life-cycle.

Let me bring up one particular example as an illustration. It was perhaps an extreme case which, fortunately enough, I only witnessed and was never a part of. The organization in question was a software development department within the IT division of a mid-sized corporation. At the time they bought most of their software “in-a-box” or through externally contracted development projects, so the department was not very big. Most of their developers were allocated to multiple projects, the most in-demand ones to as many as ten. Teams didn’t last long enough to gel, and individual allocations were “calculated” through some pretty sophisticated resource management techniques, considering several variables such as title, occupation, availability (in chunks of 5%) and even cost.

During an informal conversation, and just out of curiosity, I asked how many projects the department was running at the time.
– I don’t know the exact number. I could find out, but it may take some time.
– How many do you close in any given quarter? – I asked back, too quickly.
– I don’t remember the average figure exactly – the director replied.
– Oh, let’s take the last one as an example then. – Again, too fast to be prudent.
– Well, ahem… none, actually. No project was completed last quarter.

Suddenly I realized how inconvenient those questions were for them. The middle managers were staring at the floor and the programmers looked uneasy. Then a programmer broke the tense silence: “There are probably more ongoing projects than contributors, programmers and testers combined.” I stopped asking. “I see” was my only answer, but they understood the implied meaning: “sorry for having asked”.

A few more conversations with the programmers and their managers made me realize how that single decision, attempting to run at once every piece of work ever requested from the department, was affecting the life of everyone there. High levels of stress, delays everywhere, constant changes in plans, design, architecture, inter-component specifications… It was a real hell to work in.

This kind of issue is not infrequent, and its impact on the lives of product development people is severe. My intent here is to describe some of the effects and possible causes of biting off more than we can chew, in the hope that this can help me avoid repeating the same mistakes.

Why is that a problem?

Trying to do too much in parallel increases lead time, which is, in our case, the time it takes a project to move from a request or idea to a delivered product.

According to Little’s Law (from queueing theory) the long-term average number of items in a stable system L is equal to the long-term average effective arrival rate, a, multiplied by the (Palm-)average time an item spends in the system, W; or expressed algebraically:

L = aW.

In our case, the system is the development organization, and the items in the system are the projects we run. If there are more projects in the system (L increases) while the arrival rate a stays the same, the average time to complete a project (W) increases linearly. This holds for the whole system and for each of its subsystems. Make no mistake: increasing the amount of work you run in parallel always increases lead time. There is no way around this simple law.
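Little’s Law is trivial to sanity-check with numbers. The figures below are made up purely for illustration, not taken from the department in the story:

```python
# Little's Law: L = a * W, so W = L / a.
# With a fixed arrival rate, every extra project kept in flight
# translates directly into longer average lead time.

def average_lead_time(items_in_system: float, arrival_rate: float) -> float:
    """W = L / a, straight from Little's Law."""
    return items_in_system / arrival_rate

# Hypothetical department: 2 new projects arrive per month.
print(average_lead_time(40, 2))  # 40 projects in flight -> 20 months each
print(average_lead_time(80, 2))  # double the WIP -> 40 months each
```

Doubling the work in progress without changing the arrival rate doubles the average wait; that is the linear relationship the law guarantees.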

This is only one of the problems; others are associated with the cost of queues:

The longer a project stays in development,

  • the more risks it accumulates,
  • the more changes will happen in its environment,
  • the higher the Cost of Delay, and
  • the more frequently scheduling adjustments need to happen.

The more projects are run in parallel by an organization,

  • the less focus individuals can give to each project,
  • the higher the waste associated with task-switching,
  • the higher the stress level of team members, and
  • the greater the cost and difficulty of resource management.

What are some possible causes?

If this is a problem, why is it so prevalent? Why do so many organizations show similar patterns? The answer must be rooted in behaviours that are difficult to change and that produce some other benefit to individuals or organizations; otherwise they would not abound.

I have listed some possible causes below, knowing this list cannot be complete and accepting that not all organizations are driven by the same forces and needs.

lack of prioritization

  • frequently caused by not having clear evaluation rules to measure the quality of a project proposal or idea
  • projects compete for resources within the system without the organization holding a clear discussion on their relative merits and priorities
  • the organization is unable to say no to a new request because there is no objective measure of the cost of accepting it into the queue
  • stakeholder expectations are not managed, so every stakeholder wants to set the top priority

projects are difficult to kill

  • also related to not having clear evaluation rules for the value of a project, a proposal or an idea
  • no threshold an idea must exceed in order to become an actionable project
  • death-march or rotten projects clogging the system
  • ideas are pushed down from high-above and killing them would be perceived as a revolt
  • ideas not pursued or discarded are perceived as failures (i.e. fail to deliver)
  • failure is not an option, no learning from it is allowed

the already-in-progress fallacy

  • “It won’t be long; we are already working on it.”
  • “The earlier my favorite project is started, the earlier it will be finished.”

Is there an alternative?

Believe it or not, what is described above is not the only way a product development organization can manage its portfolio. I’ll try to give some hints about what, in my experience, constitutes a good approach to controlling the size of a portfolio in order to maximize throughput while reducing risk. It is not my intention to present these as pieces of the one universal and immutable truth; they are simply approaches, with a sound theoretical basis, that most organizations find useful most of the time.

  • limit work to capacity: do not accept more projects per time period than the organization is able to complete in that period
  • establish some sound quick estimating techniques to evaluate the relative merits of project proposals
  • projects are accepted into the backlog or rejected; no middle ground, no projects saved for later
  • projects in the backlog are estimated and kept in strict priority order, which can change over time
  • teams are stable units of individuals who are experts in working together
  • pull, not push: when a team completes a project it pulls the next one from the top of the backlog
  • teams and the organization as a whole are encouraged to swarm around problems to solve them quickly
  • no project is allowed to clog the pipes: once a project has been in the system for longer than an agreed threshold its priority is increased or it is killed
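The first few rules above, limit work to capacity, keep a strictly ordered backlog, and pull rather than push, can be sketched as a toy model. Everything here (the `Portfolio` class, its method names, the one-team-one-project simplification) is illustrative and hypothetical, not something prescribed by the books this post draws on:

```python
# Toy sketch: a WIP limit equal to the number of stable teams,
# a backlog kept in strict priority order, and pull-based scheduling.
# Assumes, for simplicity, that each team runs exactly one project.

from collections import deque

class Portfolio:
    def __init__(self, teams: int):
        self.wip_limit = teams   # limit work to capacity
        self.backlog = deque()   # (priority, project), strict order
        self.in_progress = []

    def accept(self, project: str, priority: int) -> None:
        """Accept into the backlog; anything else is rejected outright."""
        self.backlog.append((priority, project))
        self.backlog = deque(sorted(self.backlog))  # re-sort: priorities can change

    def pull_next(self):
        """A free team pulls from the top of the backlog; nothing is pushed."""
        if self.backlog and len(self.in_progress) < self.wip_limit:
            _, project = self.backlog.popleft()
            self.in_progress.append(project)
            return project
        return None  # at capacity or backlog empty

    def complete(self, project: str) -> None:
        self.in_progress.remove(project)  # frees a team to pull again

# Two teams, three requests: the third waits until a team is free.
p = Portfolio(teams=2)
p.accept("reporting rewrite", priority=2)
p.accept("billing fix", priority=1)
p.accept("new portal", priority=3)
print(p.pull_next())   # billing fix (top priority)
print(p.pull_next())   # reporting rewrite
print(p.pull_next())   # None: WIP limit reached
```

The point of the sketch is the `None`: when capacity is full, new work waits in the backlog instead of fragmenting the teams, which is precisely what the department in the story was not doing.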


More information on this kind of approach can be found in the following excellent books:

Both changed the way I see and do my work. I haven’t reviewed the books again while writing this post, and I don’t think my take on the matter is exactly what they describe, but I’m sure a lot of what is written above comes from them, except the errors, which are all mine.