As testers and test managers, we are frequently asked to report on the progress or results of testing to our stakeholders.  Questions like “How is testing going?” may seem simple enough at first glance, but there are actually many ways one could respond.  For example:

“We’re on-track.”

“95% of test cases so far have passed.”

“We found 15 new defects yesterday.”

While each of these responses provides factual details about the status of testing, it is highly likely that none of them gives all of the information being sought.

Good metrics are about more than just data.  Used properly, they can be powerful communication tools that draw back the veil on testing and provide transparency to the process.  Used improperly, they have the ability to send the wrong message to stakeholders and trigger false alarms or, even worse, to hide problem areas and give a false sense of confidence when things are not going well.

While many organizations do not have comprehensive metrics programs, all organizations have a need to provide information to their stakeholders.  These stakeholders need information about the progress and status of testing in order to make important decisions, and metrics are a key tool in delivering that information.

For test managers, metrics also play an important role throughout the test process.  Starting in the early stages of a project, metrics give us a basis for providing estimates, as well as a way to define objective suspension and exit criteria.   Once testing has begun, metrics serve as an ongoing risk management tool, allowing us to quickly identify delays or problem areas as we measure progress and evaluate against any pre-defined targets.

As we approach the end of a test cycle, metrics will tell us whether or not we’ve achieved our targets and help us decide whether or not we should continue testing.  Even once the project is complete, metrics continue to provide benefit.  By analyzing what was done throughout the course of the project, we are able to evaluate the process itself and implement improvements for future projects.  This can be as simple as comparing estimates to actuals, or can involve more complex processes, such as root cause analysis for defects.
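The simple estimates-to-actuals comparison mentioned above can be sketched in a few lines of Python.  The phase names and effort figures here are invented purely for illustration:

```python
# Hypothetical end-of-project comparison of estimated vs. actual effort
# in hours (figures are illustrative, not from any real project).
estimates = {"test planning": 40, "test case design": 80, "execution": 120}
actuals = {"test planning": 35, "test case design": 95, "execution": 150}

# Percentage variance per phase: positive means we ran over the estimate.
variances = {
    phase: round((actuals[phase] - estimates[phase]) / estimates[phase] * 100, 1)
    for phase in estimates
}

for phase, pct in variances.items():
    print(f"{phase}: estimated {estimates[phase]}h, actual {actuals[phase]}h ({pct:+}%)")
```

Even a comparison this basic can feed the next round of estimation; phases that consistently run over are candidates for deeper root cause analysis.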

While they have the potential to provide many benefits, metrics are less of a science and more of an art.  With that in mind, here are some key points to consider when incorporating metrics into your testing process.

Metrics, like anything, should be planned in advance.

You can’t report on data that you haven’t captured, and before metrics can be captured, they must be defined.  The first step in this process is to understand your reporting needs.  Doing this analysis up-front will allow you to identify which data elements are needed and how they must be broken down before setting up any tools to capture them.

At the same time, it’s also important to be clear about what each metric represents.  For example, what is an “open” defect?  What is considered “resolved”?  Definitions of these terms need to be applied consistently from one report to the next and from one project to the next.
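One way to keep those definitions consistent from report to report is to pin them down in one place rather than re-deciding them each time.  A minimal sketch, with status names that are assumptions rather than taken from any particular defect tracker:

```python
# Agreed-upon definitions, recorded once and reused by every report.
# The status names below are hypothetical examples.
OPEN_STATUSES = {"new", "assigned", "reopened"}
RESOLVED_STATUSES = {"fixed", "verified", "closed"}

def count_open(defects):
    """Count defects whose status falls under the agreed 'open' definition."""
    return sum(1 for d in defects if d["status"] in OPEN_STATUSES)

defects = [
    {"id": 1, "status": "new"},
    {"id": 2, "status": "fixed"},
    {"id": 3, "status": "reopened"},
]
print(count_open(defects))  # 2
```

With the definition encoded once, an “open defects” figure means the same thing on every project that uses it.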

Metrics need context.

There’s a saying that “There are three kinds of lies: lies, damned lies, and statistics.”  Unfortunately, testing metrics have managed to earn a similar reputation.   This is due in part to their openness to interpretation.  When used in isolation, metrics can easily be manipulated to make just about any situation look either good or bad to suit a person’s needs.

Of course, metrics aren’t always used for the purpose of deceit.  Even when there is no intent to mislead, however, stakeholders can still draw the wrong conclusions if no context is provided and they are left to interpret the data on their own.

Whenever metrics are presented to stakeholders, it’s important to ensure that their significance is easily understood.  Since this significance isn’t always obvious, it can be helpful to provide textual summaries to accompany metrics.  This narrative provides an opportunity to comment on progress, explain anomalies and identify any areas of concern.

A simple chart is a clear chart.

How you present information is sometimes as important as the data that you are presenting.  If the presentation is unclear, any potential meaning or message behind the data can be lost.  As mentioned above, a textual summary can be helpful in providing the necessary context when reporting on metrics, but so too can the right chart or graph.

The best chart or graph is one that immediately draws the target audience’s attention to the important points or trends and is not cluttered with unnecessary data that might distract from them.  Just like Goldilocks with her porridge, the goal is to get the level of detail “just right” – not too much and not too little.

To ensure charts and graphs are as clear as possible, it’s always best to include proper titles and labels, as well as any trend lines or annotations that are needed.  Where applicable, red/yellow/green indicators can also be very useful for helping stakeholders interpret the data.

As an additional step, you may also consider using scorecard or dashboard views to present sets of related data elements, rather than relying solely on individual charts or graphs.
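The red/yellow/green idea reduces to a simple threshold mapping.  In this sketch the thresholds (90% for green, 75% for yellow) are arbitrary examples; each organization would set its own:

```python
# Illustrative red/yellow/green (RAG) indicator for a pass-rate metric.
# The 0.90 and 0.75 thresholds are hypothetical, not a standard.
def rag_status(pass_rate, green=0.90, yellow=0.75):
    """Map a pass rate (0.0-1.0) to a traffic-light status."""
    if pass_rate >= green:
        return "green"
    if pass_rate >= yellow:
        return "yellow"
    return "red"

print(rag_status(0.95))  # green
print(rag_status(0.80))  # yellow
print(rag_status(0.60))  # red
```

Keeping the thresholds as named parameters makes it easy to document them next to the dashboard, so stakeholders know exactly what each colour means.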

Be on the lookout for trends.

Just as it is impossible to measure the speed of a vehicle from a single data point, so too is it impossible to measure the progress of a test cycle.  Only when a series of data points is examined do trends in that data start to emerge.

Trends are helpful because they allow us to differentiate between systemic behaviour and temporary anomalies.  They also allow us to make predictions about the future.  Of course, the validity of these trends and the accuracy of any resulting predictions increase as more historical data is considered.  Too often, organizations consider only a limited set of historical data when looking for trends.  While this may suffice for measuring performance or making predictions within a given project, it does not allow for continuous process improvement at an organizational level.

Metrics influence tester performance, but not always in the way you might think.

How do we assess the abilities of a tester or compare the skills of one tester to another?  Since the most visible tasks a tester typically performs are executing test cases and logging bugs, it’s not hard to see why some people choose to evaluate testers based on the number of test cases they’ve executed or the number of bugs they’ve logged.  This is actually a very narrow view of testing and, while it may be seen as a good way to motivate testers, it can actually have unintended side-effects.

People typically work to optimize what we measure them against, but often this comes at the expense of the things we aren’t measuring.  For example, if we measure testers based on the number of bugs they log, how likely is it that they will spend their time thoroughly documenting test cases and defects or coaching other testers?  On the other hand, how likely is it that they will focus on finding simple, cosmetic defects or logging variants of the same issue to artificially inflate their defect count?

When it comes to metrics, more is better.

Stakeholders don’t all have the same needs, nor do they always know what questions to ask in order to get the information they are looking for.  As a result, many conversations about testing status tend to focus on only a few of the more basic metrics, such as completion percentage, pass rate or the number of open defects.  While there is nothing inherently wrong with any of these metrics, it is important to recognize that there is no single metric that fully represents the status of testing or the quality of the product that is being tested.  Metrics are situation and context-specific.  There is no “right” answer and there is no silver bullet that will solve all your problems.  The key lies in choosing the right set of metrics for each particular situation, then presenting them in a meaningful way.

No matter who the audience is or how they are presented, metrics will only ever tell part of the story.  In reality, metrics are most beneficial when they are used as a starting point for discussion and further investigation.  They give us clues about what’s going well and what isn’t and show us where to focus our attention.

So, while we must take care not to put too much stock in our metrics, we should also be sure not to ignore them entirely.  As with anything, the best approach lies somewhere in the middle.  By finding the proper balance and approach for your organization, you can help ensure you are only using metrics that matter.

Mike Trites is a Senior Test Consultant with PLATO.  He has over eight years of experience in the software testing industry and is certified at the Advanced level by the ISTQB.  In his time as a consultant, Mike has worked with a number of clients in a wide range of environments.  He has tested products that include database management software, financial software, claims adjudication systems, web applications, and VLT gaming software, among others. As a Test Manager, Mike has experience managing all aspects of the test process, from test planning to test case and defect management.  In addition, Mike has worked with clients to help define, implement, and improve their internal testing processes.