We think of spring as a time of renewal and cleaning up our homes after getting through the winter. So, why not take some time to do the same thing for your automation?
In the same vein as development teams setting aside time in the schedule to address technical debt, automation specialists should periodically have a look at their tests with the intention of making them more efficient and effective.
Is this a worthwhile use of time?
If you have ever spent days trying to debug “flaky” tests or wondering why a set of tests is being run at all, you know it is. When this happens, there are several courses of action you can take.
A simple strategy is to remove any tests that aren’t needed.
If an area of the application is stable and unaffected by new development, the automated tests covering that area may not be necessary for now; they can be archived until they are needed again, making the suite faster and more targeted. An alternative is to limit the scope of the tests that run several times a day, and schedule a larger set of tests to run less frequently, during off-hours. This is attractive when working in a continuous integration framework, where automated tests should be running on every code check-in.
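One way to split a fast per-check-in suite from a slower off-hours suite is to tag tests and filter on the tag. The sketch below uses a hypothetical home-grown decorator for illustration; in practice most teams would reach for their framework's built-in mechanism, such as pytest markers or JUnit categories.

```python
def smoke(test_func):
    """Mark a test as part of the fast suite run on every check-in."""
    test_func.is_smoke = True
    return test_func

def authenticate(user, password):
    # Stand-in for a real login call; purely illustrative.
    return user == "demo_user" and password == "demo_pass"

@smoke
def test_login_succeeds():
    assert authenticate("demo_user", "demo_pass")

def test_full_report_export():
    # Slow end-to-end check; only runs in the nightly suite.
    pass

def select_suite(tests, smoke_only):
    """Return only smoke-tagged tests when smoke_only is True."""
    return [t for t in tests if getattr(t, "is_smoke", False) or not smoke_only]
```

With this in place, the CI pipeline runs `select_suite(all_tests, smoke_only=True)` on every check-in, and the full list overnight.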
It is good practice to periodically review the tests with the aim of limiting them to testing one thing at a time. If a test is long and has multiple assertions, it should be broken into smaller tests. This is more maintainable; when the tests are testing one thing at a time, it is much easier to pinpoint where failures are occurring. In addition, a critical eye should be turned on those assertions. If the assertions are merely verifying the existence of something that is not changing, like verifying a user exists, they are likely not necessary. Instead, those assertions are much more valuable if they are testing something that reflects a state change in the system; for example, if that user has permissions added, they now have read-write access.
Automated tests should be refactored for maintainability. Often, the creator of the automation is the only person who touches that code, but if the team grows or if that person moves on, it is important that someone else can work on and understand that code. One thing that can help with this is to ensure the tests follow a naming convention that makes it clear what the test is testing.
Where a name is vague, for example emailErrorTest(), consider renaming it to something clearer, such as registerNewAccount_emailAlreadyExists_shouldReturnErrorMessage().
While that might look like a long method name, there is no ambiguity in what it is testing. If the automation is testing the UI layer of an application, a popular strategy for writing maintainable code is to use the Page Object Model, in which there is a separate class file which finds UI elements, fills them, and verifies them. The tests then look more like readable scripts, and are simpler to understand and maintain.
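A minimal sketch of the Page Object Model follows, assuming a stub driver for illustration; in real UI automation the driver would be Selenium or a similar tool, and the locators here are hypothetical.

```python
class StubDriver:
    """Stand-in for a browser driver; records values typed into fields."""
    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text

    def read(self, locator):
        return self.fields.get(locator, "")

class LoginPage:
    """Page object: element lookup and interaction live here, not in tests."""
    USERNAME = "#username"  # hypothetical CSS locators
    PASSWORD = "#password"

    def __init__(self, driver):
        self.driver = driver

    def enter_credentials(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)

    def entered_username(self):
        return self.driver.read(self.USERNAME)

# The test reads like a script, with no locators cluttering it.
def test_login_form_accepts_credentials():
    page = LoginPage(StubDriver())
    page.enter_credentials("alice", "s3cret")
    assert page.entered_username() == "alice"
```

If a locator changes, only the page object is updated; every test that uses it keeps working unchanged.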
In the interests of speed and maintainability, sometimes it makes sense to move those tests down to a lower level. For example, consider whether UI automation is better moved down to the API level, where we are checking whether the right data is returned, rather than how it is rendered. This is often a good choice if you are finding the UI automation is “flaky”, or if the UI is in an active state of development. API tests are much faster than UI tests, and if a development group is switching to a continuous integration pipeline, this can be an important conversion to make.
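The shift from UI to API checks can be sketched as follows: assert on the data the endpoint returns, not on how it is rendered. The endpoint path and JSON shape here are hypothetical, and the canned response stands in for a real HTTP call made with a library such as requests.

```python
import json

def fake_api_get(path):
    """Stand-in for an HTTP GET, returning a canned JSON body."""
    canned = {
        "/api/users/42": {"id": 42, "name": "alice", "access": "read-write"},
    }
    return json.dumps(canned[path])

def test_user_has_read_write_access():
    body = json.loads(fake_api_get("/api/users/42"))
    # A UI test would locate and read a rendered permissions badge;
    # here we assert directly on the data the UI consumes.
    assert body["access"] == "read-write"
```

Because no browser is launched and no rendering is involved, a check like this runs in milliseconds and is far less prone to flakiness than its UI equivalent.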
Testing tools are evolving at an impressive rate. Development teams should be keeping an eye out for new tools, and evaluating them against their current toolset. If a tool can make the tests more bulletproof, faster, and can expand the repertoire of the tests, it should be given serious consideration. The effort to port perhaps hundreds of tests over to a new tool would be considerable, so it is worth weighing this effort against a more gradual cutover, in which any new tests are written using the new tool.
This article has presented a few thoughts on spring cleaning your automated tests. Hopefully, they can help you step back and think about your automation from time to time. Using these approaches, you can make your tests simpler, more powerful, faster, and easier to maintain.
Want to schedule a meeting or learn more about automation script maintenance? Contact us! At PQA, we pride ourselves on being Canada’s leading experts in software testing.
About Jim Peers
Jim Peers is a QA Practitioner at PQA Testing, with more than sixteen years of experience in software development and testing in multiple industry verticals. After working in the scientific research realm for a number of years, Jim moved into software development, holding roles as tester, test architect, developer, team lead, project manager, and product manager. A trusted technical advisor to clients, Jim creates test strategies, and mentors and assists testers on multiple projects.