We are living in a data-driven world. Almost everything, from our purchases of day-to-day goods to the effectiveness of driverless cars to the quality of our workouts, can be tracked to monitor how things are working and to plan for the future. That process of data collection, measurement, and analysis is also a critical piece of developing good metrics, building a strong testing practice, and ultimately creating better software. A strong testing practice is one that provides the right intel to stakeholders to determine how to continually improve and develop more robust products.


So what makes metrics so important?

Your testing practice is only as good as the outcomes it delivers in improving the final state of a product or platform; metrics are what help you determine whether you’ve hit the mark. They play a critical role in estimation, evaluating entry and exit criteria, status reporting, and process improvement. The best metrics approach covers not only your testing practice and activities but also aligns closely with your business goals, ensuring that everything is working together toward a common goal.

As testers and test managers, we are frequently asked to report on the progress and results of our testing. The question “How is testing going?” may seem simple enough, but our answer is ultimately based on our ability to extract useful metrics from our work and present them in a meaningful way. This is particularly important in agile environments, where clear, concise, and up-to-date metrics are potentially needed multiple times per day. Answering the “how is testing going” question requires both test monitoring and test control.

Test Monitoring: The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. The information may be collected manually or automatically and should be used to assess test progress and to measure whether the test exit criteria, or the testing tasks associated with an Agile project’s definition of done, are satisfied, such as meeting targets for coverage of product risks, requirements, or acceptance criteria.
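
As a rough illustration (not something the post prescribes), a monitoring dashboard might reduce that exit-criteria check to something like the following Python sketch; the metric names and target values are hypothetical assumptions:

    # Hypothetical sketch: checking exit criteria from monitored test data.
    # Metric names and targets are illustrative assumptions, not a standard.
    def exit_criteria_met(metrics: dict, targets: dict) -> bool:
        """Return True when every monitored value meets its target."""
        return all(metrics.get(name, 0.0) >= target for name, target in targets.items())

    monitored = {
        "requirement_coverage_pct": 98.0,   # requirements exercised by at least one test
        "risk_coverage_pct": 100.0,         # product risks with a linked, executed test
        "acceptance_criteria_pct": 95.0,    # acceptance criteria verified so far
    }
    targets = {
        "requirement_coverage_pct": 100.0,
        "risk_coverage_pct": 100.0,
        "acceptance_criteria_pct": 100.0,
    }
    print(exit_criteria_met(monitored, targets))  # False: coverage targets not yet met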

Test Control: Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and (possibly) reported. Actions may cover any test activity and may affect any other software lifecycle activity.

Examples of test control actions include:

  • Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
  • Changing the test schedule due to availability or unavailability of a test environment or other resources
  • Re-evaluating whether a test item meets an entry or exit criterion due to rework

Categorizing your test metrics

We’re big believers in the power of metrics-driven engagement, which prioritizes monitoring, measuring, and analyzing meaningful data from all projects, then deriving Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs) that both project and governance teams can review and act on to improve product quality.

You can categorize your metrics into two main buckets:

  1. Operational at the project level
  2. Strategic at the program/account level

Operational Test Metrics

These are the metrics you would typically think of as indicators of how a project’s testing is going. They are reviewed by the project team to monitor and control day-to-day efforts (a minimal calculation sketch follows the list):

  • Requirement coverage
  • Passed test cases %
  • Failed test cases %
  • Blocked test cases
  • Fixed defects %
  • Accepted defects %
  • Triaged/un-triaged defects %
  • Deferred defects %
  • Critical defects %
  • Average defect fix rate
  • Defect density
  • Defect distribution by status, requirement component, severity, and release
  • Automation coverage
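
To make the arithmetic behind a few of these concrete, here is a minimal Python sketch; the counts are invented, and the defects-per-KLOC convention for defect density is one common choice rather than something the post prescribes:

    # Illustrative calculations for a few operational metrics.
    # Input counts are made up; real numbers come from your test management tool.
    executed, passed, failed, blocked = 200, 170, 22, 8
    total_requirements, covered_requirements = 50, 47
    defects_found, kloc = 35, 12.5  # KLOC = thousands of lines of code

    passed_pct = 100 * passed / executed
    failed_pct = 100 * failed / executed
    requirement_coverage_pct = 100 * covered_requirements / total_requirements
    defect_density = defects_found / kloc  # one common convention: defects per KLOC

    print(f"Passed: {passed_pct:.1f}%  Failed: {failed_pct:.1f}%  Blocked: {blocked}")
    print(f"Requirement coverage: {requirement_coverage_pct:.1f}%")
    print(f"Defect density: {defect_density:.2f} defects/KLOC")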

Strategic Test Metrics

These metrics are reviewed by management and stakeholders to monitor and control program-level goals. They provide insight into where a testing team should consider corrective or preventive measures.

Team Velocity Variance
This metric is valuable to the team for understanding how they’re doing, and to management and stakeholders because it helps them predict what can be delivered and when.
What does it mean: Planned vs. actual effort spent on a sprint
Ideal threshold: < 5%
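
Since the post describes this as planned vs. actual effort but gives no formula, here is one plausible calculation, with invented numbers:

    # Hypothetical velocity-variance calculation: planned vs. actual sprint effort.
    def velocity_variance_pct(planned_points: float, actual_points: float) -> float:
        return abs(planned_points - actual_points) / planned_points * 100

    print(velocity_variance_pct(40, 38))  # 5.0 -> right at the 5% threshold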

Test Delivery Schedule Variance
This metric is similar to the one above but focuses less on overall project goals and sprints, and more on the progress of the testing team itself.
What does it mean: All QA activity milestones are being met as planned
Ideal goal: 100%
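
One simple way to track this (an assumption, since the post only states the goal) is the share of QA milestones hit on schedule:

    # Hypothetical: percentage of QA milestones met as planned.
    milestones = [
        {"name": "test plan signed off", "on_time": True},
        {"name": "regression suite complete", "on_time": True},
        {"name": "UAT support ready", "on_time": False},
    ]
    met_pct = 100 * sum(m["on_time"] for m in milestones) / len(milestones)
    print(f"{met_pct:.0f}% of QA milestones met as planned")  # goal: 100%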

Fixes in Production with QA Testing
This metric highlights the need for all fixes to undergo testing before being released.
What does it mean: All items should be tested by the QA team before a production release
Ideal goal: 100%
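
In practice this reduces to a ratio of QA-tested fixes to shipped fixes; a tiny sketch with invented counts:

    # Hypothetical: share of production fixes that passed through QA first.
    fixes_shipped, fixes_qa_tested = 24, 23
    tested_pct = 100 * fixes_qa_tested / fixes_shipped
    print(f"{tested_pct:.1f}% of fixes tested before release")  # goal: 100%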

Defect Leakage Percentage
This metric provides a good understanding of the overall success of your testing process.
What does it mean: Production defect leakage – defects missed by QA in non-production environments
Ideal goal: 0% for Sev 1, 2% for Sev 2, 3% for Sev 3 and lower severities
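
A common formula for leakage, assumed here since the post gives only the targets, is production defects divided by all defects found for the release, broken out by severity:

    # Hypothetical defect-leakage calculation per severity.
    # Leakage = defects found in production / total defects (pre-prod + prod).
    def leakage_pct(prod_defects: int, preprod_defects: int) -> float:
        total = prod_defects + preprod_defects
        return 100 * prod_defects / total if total else 0.0

    by_severity = {"Sev 1": (0, 12), "Sev 2": (1, 48), "Sev 3": (3, 90)}
    for sev, (prod, preprod) in by_severity.items():
        print(f"{sev}: {leakage_pct(prod, preprod):.1f}% leakage")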

Automation ROI and Coverage
The return on investment for a team’s automation efforts is a good metric to work toward, but it is a difficult one to capture and convey.
What does it mean: Effort saved by automation on the automatable scope, year over year
Ideal goal: 20% year over year
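
One simple effort-based model (one of several possible, with invented numbers) compares the manual execution effort that automation saved against what the suite cost to build and maintain:

    # Hypothetical effort-based automation ROI for one year.
    manual_hours_saved = 600      # manual execution effort avoided by automation
    automation_hours_spent = 480  # effort to build and maintain the suite
    roi_pct = 100 * (manual_hours_saved - automation_hours_spent) / automation_hours_spent
    print(f"Automation ROI: {roi_pct:.0f}%")  # goal per the post: 20% year over year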

Cost of Quality Variance
This metric is important both for monitoring the current project and for planning future testing budgets.
What does it mean: Overall cost-of-quality budget, planned vs. actual
Ideal goal: < 2%
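
Read as planned vs. actual budget, the variance can be computed the same way as the velocity variance above; the figures here are invented:

    # Hypothetical cost-of-quality variance: planned vs. actual budget.
    planned_budget, actual_spend = 250_000, 254_000
    coq_variance_pct = abs(planned_budget - actual_spend) / planned_budget * 100
    print(f"CoQ variance: {coq_variance_pct:.1f}%")  # goal: < 2%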

Interested in reading more about metrics? Check out our post on the dos and don’ts of designing metrics dashboards and take a listen to our latest episode of the PLATO Panel Talks podcast, which dives into the ways we can make metrics that work for everyone, from your testing team to your CEO.

Abhishek is a QA evangelist who is passionate about quality assurance and testing at all levels of the organization. He is currently the Director of Service Delivery, Ontario, and also leads the Web Accessibility TCoE at PLATO. Abhishek is a PMP and has played key roles throughout his career in positions such as Service Center Manager, Delivery Manager, and QA Portfolio Manager, and he has led Managed Services testing teams spread across the globe. Abhishek loves to train and coach teams in software testing and its principles.

https://www.linkedin.com/in/abhishek-gupta-pmp/