A Word From Our Chief Scientist

As 2017 comes to an end, I wanted to reflect on some of the technology and testing trends I have seen over the last year.  It seems to me that technology continues to evolve at an ever-increasing speed, and 2017 was no exception.  Attempting to list all the technology changes would be a Herculean effort, but I think it is possible to identify underlying technology trends and make inferences about their impact on testing.  The technology trends that I think will have the biggest long-term impact on testing relate to big data, artificial intelligence and cybersecurity.

Our use of technology has expanded, and it is now an integral part of everything we do, from managing our finances to exercising.  As technology continues to evolve, the tester role, the skills required to test, and our test approaches will need to evolve and advance with it.

Big data is the trend that I expect to have the most profound impact on testing.  There are more devices than ever recording and transmitting data.  We no longer live in a world where data is only collected on desktop machines and sent to a central database in the same building.  Instead, we now have mobile devices, aerial instruments, wearable technology and nanosensors collecting text-based information, sound, video and biometric data that is transmitted wirelessly to distributed databases.  Testing the recording of that data will require not only a wider understanding of how the different devices function, but also a broader knowledge of the different types of data.  Not only is the testing itself difficult, but setting up appropriate test environments will be arduous, if not impossible.
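
As a small illustration of one way around the environment problem, the sketch below (in Python, pytest style) substitutes synthetic device data for hardware that cannot practically be put in a test lab.  Everything in it, from the simulated wearable reading to the record() function and the in-memory store, is an invented stand-in rather than a real device or database API.

    import random
    import time

    def simulate_wearable_reading():
        # Synthetic data in place of a physical device: one way to approximate
        # a test environment that is hard (or impossible) to reproduce for real.
        return {
            "device_id": "wearable-001",
            "timestamp": time.time(),
            "heart_rate": random.randint(50, 180),   # biometric sample
            "note": "afternoon run",                 # text-based sample
        }

    def record(reading, store):
        # Stand-in for the ingestion path into a distributed database.
        store.append(reading)

    def test_recorded_readings_keep_shape_and_types():
        store = []
        for _ in range(100):
            record(simulate_wearable_reading(), store)
        # Verify that every recorded reading keeps the expected fields and types.
        assert len(store) == 100
        for r in store:
            assert isinstance(r["heart_rate"], int) and 30 <= r["heart_rate"] <= 250
            assert isinstance(r["note"], str)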

The ability to record and analyze larger sets of data has facilitated the introduction of machine learning, which is one of many approaches to artificial intelligence.  I believe testing applications of machine learning, and of other approaches to artificial intelligence, will require a radical change of mindset.  We are used to testing systems that are deterministic and can be modelled as such.  Unless there is an error, the same input should always generate the same output in a predictable manner.  Artificial intelligence is different, and we will have to develop new test approaches.  An interesting question to explore further: how will we integrate artificial intelligence itself into software testing?
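
To make that difference in mindset concrete, here is a minimal sketch in Python (pytest style).  The tax calculation and the noisy classifier are invented stand-ins rather than real systems; the point is only the contrast between asserting an exact, repeatable result and asserting aggregate behaviour within a tolerance.

    import random

    def calculate_tax(income, rate=0.15):
        # Deterministic rule: the same input always yields the same output.
        return round(income * rate, 2)

    def noisy_classifier(text, error_rate=0.03):
        # Stand-in for a learned model: right most of the time, but not always.
        truth = "spam" if "prize" in text else "ham"
        if random.random() < error_rate:
            return "ham" if truth == "spam" else "spam"
        return truth

    def test_deterministic_rule():
        # Classical test: assert one exact, repeatable result.
        assert calculate_tax(50_000) == 7500.00

    def test_statistical_behaviour():
        # Model test: assert on accuracy over many cases, with a tolerance
        # instead of an exact expected value for each individual input.
        random.seed(42)  # pin the randomness so the test itself is repeatable
        emails = ["win a prize now"] * 500 + ["meeting at noon"] * 500
        labels = ["spam"] * 500 + ["ham"] * 500
        correct = sum(noisy_classifier(e) == l for e, l in zip(emails, labels))
        assert correct / len(emails) >= 0.95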

Cybersecurity has gone from being a specialized topic with a limited audience to being a constant subject of articles, blogs and news media, ranging from reports and dissections of large-scale data breaches (Uber is a name that comes to mind) to stories about individuals who have had their online accounts hacked.  The amount of data we are sharing, and how we are sharing it, makes us vulnerable.  Testers, as user and stakeholder advocates, need to design tests that identify issues which could increase that vulnerability.  Traditionally, we have relied on separate teams to handle all internal security concerns, including security testing, but those days are gone.  I think there will still be specialized teams using specific tools and skill sets, but I also think security will need to be an aspect that all testers consider at all times.  Raising awareness of security threats in the applications we test is a responsibility we will not be able to delegate.
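
As one small example of what treating security as an everyday testing concern can look like, the Python sketch below checks that sensitive values never leak into client-facing output.  The serialize_profile() function and the user record are invented for illustration, not taken from any real system.

    import json

    def serialize_profile(user):
        # Stand-in for the code under test: build the public view of a user.
        public_fields = ("username", "email")
        return json.dumps({k: user[k] for k in public_fields})

    def test_no_sensitive_data_leaks():
        # An ordinary functional test can double as a security check: verify
        # that secrets never appear in anything we send back to the client.
        user = {
            "username": "jdoe",
            "email": "jdoe@example.com",
            "password_hash": "$2b$12$abcdefghijklmnopqrstuv",
            "session_token": "9f8e7d6c5b4a",
        }
        payload = serialize_profile(user)
        for secret in (user["password_hash"], user["session_token"]):
            assert secret not in payload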

New technology requires new skills, or new applications of skills we already have.  I am not going to attempt to predict what new technology we might see in 2018, but I am convinced that it will require us to be modern renaissance testers, continuously learning and honing our skills.  The context might change, but our tester skills will remain valuable and essential, and recognizing that will be pivotal for the success of testers and of testing.

Christin Wiedemann
Chief Scientist and Co-CEO of PQA Testing Ltd.