Thursday, June 27, 2013

Metrics and Measures: What’s the Difference?


What is the difference between ‘metrics’ and ‘measures’? In answering this question, I am going to stay away from ‘textbook’ definitions and distinctions and make the answer experience-based.

For people on the ground who are typically ‘hands-on’ team members, ‘measures’ and ‘metrics’ are synonyms. Yes, for most of them these are synonyms. Those who write code or test code do not make a big deal of differentiating the two terms. They say, “Let us get the work done first, measure what matters, analyze, improve and move on.” Some of them do understand the differences, but they do not carry the distinction along or dwell on it.

For Software Quality Analysts, Process Specialists, Researchers, Teachers and the like, these are two different terms.

Whether that is an appealing answer to you or not, I think it is worth reading further and appreciating what these two terms mean to us in simple terms.

You measure certain things in software projects. Examples include effort, size (such as Function Points), number of review defects, number of testing defects, and so on. When you combine or consider multiple measures to infer something, it becomes a metric. Examples include productivity (FP per person-day) or cost of quality (COQ). Measures are fundamental (raw data). Metrics are derived and help us in sense-making and in planning the next course of action.

Here is a simple example: weight and height of a person are measures, and Body Mass Index, which is a function of height and weight, is a metric.
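To make the distinction concrete, here is a minimal sketch in Python. The names and numbers are purely illustrative (not taken from any real project); it simply shows raw measures being combined into derived metrics such as BMI and productivity.

    # Illustrative sketch: measures are raw data points; metrics are derived from them.
    # All values below are hypothetical.

    # Measures: collected directly
    weight_kg = 72.0            # measured weight of a person
    height_m = 1.75             # measured height of a person
    function_points = 120       # measured size of delivered work
    effort_person_days = 40     # measured effort spent on that work

    # Metrics: computed from one or more measures to support interpretation
    bmi = weight_kg / (height_m ** 2)                     # Body Mass Index
    productivity = function_points / effort_person_days   # FP per person-day

    print(f"BMI: {bmi:.1f}")                                      # 23.5
    print(f"Productivity: {productivity:.1f} FP per person-day")  # 3.0

The values on the left are recorded as they are; the values on the right are computed and become meaningful only when you interpret them against a target or a trend.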

Metrics help in decision making. Metrics are expected to exhibit certain behavior; for example, the productivity of a team is expected to remain steady or increase over time under normal circumstances. Measures, on the other hand, are collected but not analyzed in isolation under normal circumstances, nor are they expected to exhibit any particular behavior.

It is not that simple, though. Something you perceive as a ‘measure’ can be a valuable ‘metric’ to someone else. For example, for a development manager who is working with a team and moving closer to a production release, every measure matters. “Why have we taken 18 hours to fix this defect?” is not merely a reactive question. And “Let us not react. Let us look at the average hours to fix defects. The numbers are not fluctuating at all. We are within limits” is not going to pacify that manager either. Right?

Obviously, the way different stakeholders see and treat metrics and measures while the battle is going on is different from the way they treat them after the battle is over. After the battle, some of those stakeholders and a new set of experts analyze, infer, interpret, extrapolate, and predict what is going to happen in the next battle. We fight different battles at different times. That is why, in order to win, understanding these differences is the first step and selecting the right set of measures and metrics is the next.

There is more to it. One topic is the classification of measures and metrics. You will find different classifications of measures and metrics at the website of the National Institute of Standards and Technology.

Do you have any other points to add here?

Friday, June 21, 2013

Software Project Metrics and Agility


These days, it is very rare to find status reports that do not have metrics, graphs, trend charts and so on. In some cases this happens because of organizational norms or guidelines from the Project Management Office (PMO). In other cases it is a fad: leads and managers in software project teams find it very trendy to identify and report metrics in status reports. Perhaps your management consultant or subject matter expert suggested this approach to you, or you heard about it at a conference, or it is driven by your senior management.

The idea behind this approach is to measure what matters and report the right stuff in order to communicate project status in terms of numbers. Similar approaches are followed in large programs which involve multiple projects. We call it quantitative management. Quantitative management is often promoted by senior managers with the quote, “In God we trust, all others must bring data.” That makes sense, but whether projects succeed because of quantitative management, or in spite of it, is a big question.

When we work with geographically distributed teams it becomes even more important to ensure transparency among virtual teams as well as stakeholders.

Does this approach provide adequate visibility on project progress and risks?  Does it improve trust? 
Why do project teams find it a big puzzle when it comes to identifying and reporting metrics?
How do surprises stay hidden under the carpet until the last minute?
What are the key aspects to keep in mind when you identify, analyze and report metrics?
Quantitative management is necessary to ensure transparency and build trust. But is it sufficient?
What do project teams need to do to go beyond the necessary?
What else is needed to move further up? 

You will find answers to these questions in my recent article ‘Quantitative Management, Transparency and Trust’ published in SD Times.

"Measures" and "Metrics":  Are they different? How?  That is the topic of discussion in my next post

Tuesday, June 11, 2013

When Tools Destroy Value

A week ago, during my morning walk, I noticed a four-foot-long bamboo stick with a broken edge hanging from one of the mango trees. This tree is one of the four mango trees of a household in a gated community on our walking track. Considering the hundreds of full-size unripe mangoes, I could guess how that stick made it to the top of the tree. Someone living there must have used a long stick to pick mangoes and eventually broke it into two pieces. The top piece got stuck in the tree and the other remained secure in the hands of that person. That is easy, and obvious.

Daily I notice this stick trapped in the tree at a height of about fifteen feet from the ground, surrounded by bunches of unpicked mangoes. Soon those mangoes will ripen; some of them will serve the monkeys and squirrels, and the rest will fall on the muddy soil and decay, unless there is an attempt to pick as many mangoes as possible before they ripen and start hitting the ground one by one.

This is a simple story of an entangled tool that did not live long enough to serve its purpose and four fertile-but-barely-reaped trees. Using a tool like this will result in definite neck, shoulder and back pain when you attempt to pick as many mangoes as you can from all four trees. When you feel the pain, you will decide to pick less than one percent of the mangoes and leave the rest on the trees.

It goes on like this. When a mango-picking stick breaks, as it did in this story, a frugal approach is to find another stick, about five feet long, and tie it with a rope or string to the piece that remained secure in the owner’s hand. As you can guess, it will soon give some happiness with its utility in fetching three or four mangoes and then naturally get entangled in the tree or fall apart. In order to get some more happiness, you will fix it the same way, yielding intermittent neck, shoulder and back pain. With bouts of time wasted in mending the tool and an abysmal number of mangoes picked, the story will continue.

We know there are better tools for this context. There are hi-tech tools too – they are meant for mango orchards with hundreds of trees.


In the IT industry, we come across similar tools and their ill effects.

After years of working you find that one or more of your tools are not serving the purpose and are entangled in the whole mesh of corporate IT systems. Your tool is too slow or does not make you productive. You don’t get to maximize the number of mangoes collected; lots of mangoes go to waste on the tree. In spite of brilliant ideas and better choices, you and your team find it hard to retire the tool or replace it with a better one. Meanwhile, those who recommended or commissioned that tool several years ago live with those victorious memories and continue to recommend it. According to them, a tool like this cannot have its runway cut short – it is their baby! With patch upgrades, maintenance fixes and some facelift – all applied to the tool, of course – they make it live longer. The entanglement continues, and any tool like this destroys value!

When I step back and think, I realize that this issue is not specific to the IT industry. It can happen anywhere and it happens everywhere. In business terms, this is about repeatedly fixing recurrent tool-related problems while leaving a lot of money on the table, losing productivity and increasing stress.

You may ask, “So, what do you suggest?” I anticipated this question. Let me answer.

I don’t mean to suggest that we need to invest in fancy tools. I want to emphasize that tools must serve the purpose of the present and the near future. Tools need to align with strategic goals related to business growth and help us remain competitive. This means that tool selection should not consider margins or cost savings alone as criteria. Tools need to boost productivity. There has to be a strategic investment in procuring, commissioning and maintaining tools. There has to be a periodic review – maybe every six months or a year – to introspect and answer the question, “Do our tools create value?” Finally, decision makers, including senior leaders, must be part of this review to decide what to do when tools destroy value.

What do you think?