To help PLATO Panel Talks kick off the new year, our host Mike Hrycyk sat down with friends of the pod Dr. Christin Wiedemann, Director of Quality Engineering at Slalom Build, and Nathaniel Couture, QA Manager at Patriot One, to take a look at the future of testing in 2021. Together, our panelists share their thoughts on which testing topics will be on everyone's minds this year, from AI and machine learning to edge computing and automation.

Mike Hrycyk:
Hello, everyone. Welcome to another episode of PQA Panel Talks. I'm your host, Mike Hrycyk, and today we're going to talk about the year ahead in testing: what 2021 is going to look like for testing, what's out there on the edge and interesting, and really what we're all going to be thinking about that we would love to be doing even while we're doing our manual testing. PQA, of course, loves to think of itself as being at the forefront of all these things, so we feel really passionate about having this as one of our discussions. I would like to welcome you to come in and listen, and I'm going to introduce our speakers. So first I'm going to turn to Christin Wiedemann, who used to be with PQA and has graciously come back to talk. Christin, would you like to introduce yourself, please?

Christin Wiedemann:
Well, thank you very much, Mike, for inviting me today. This is going to be a really interesting discussion, I'm sure. I worked for PQA for many years and still have a lot of good friends and colleagues there. And these days I am Director of Quality Engineering at Slalom Build in Vancouver.

Mike Hrycyk:
Thanks, Christin. And Nat, another ex-pat of PQA. How about you introduce yourself?

Nathaniel Couture:
Sure. Thanks for the invite. I'm currently acting as the QA manager at Patriot One and have been around testing and startups for probably 16 years. And yeah, just happy to be here, talking about testing again.

Mike Hrycyk:
Awesome. Thanks. Welcome, Nat. So I figured that one of the ways we could start this would be, instead of looking entirely forward, to maybe do a sort of recap of 2020. Can we talk about the most interesting or exciting trend that you saw in 2020 around testing? And let's start with you, Christin.

Christin Wiedemann:
Yeah, that's an interesting question, but I think it's also a really hard question to answer. In 2020, I've been acutely aware of how much narrower my context has been than usual. Normally I go to conferences, I meet people in person and I get more external influences, but working from home and not being able to attend in-person events, I've found that I've been much more focused on my own day-to-day. So I think I'm probably less aware of trends than I would have been in a normal year. The trends that I have seen, though, which are maybe not specific to testing, are of course that we seem to be more comfortable with things moving fast. That's always been true in some environments, but the pandemic, I think, has definitely created more comfort overall around projects moving quickly, truly iterating and working in a more agile manner. And the other thing is that the importance of data and data engineering just keeps increasing, and I think that's an area where testing still has some catching up to do. How do we test data? How do we test in a cloud environment? How do we test machine learning implementations?

Mike Hrycyk:
Interesting. Cool. Thank you. That twigs something from a prior podcast, maybe a couple back, where we talked about remote working. One of the questions we discussed was, in this huge scramble to get up and operating again as we were all transitioning to home, what got lost in quality, what should we look out for and what were the dangers? We won't discuss that here, but you can go back and listen to what we talked about in that podcast if you'd like. So Nat, what did you see in 2020? What kind of trends did you encounter?

Nathaniel Couture:
Yeah, I think, much like Christin, this year is one where you have a lot less exposure to what's going on around you. The year was spent inwardly focused. I had a bit of a strange year because I transitioned from consulting in the testing world to actually being embedded in a startup-style, very agile, very AI-centric company. We had AI built into our core product, so I was introduced to a world of testing where that was relevant and tried to understand, okay, how do we go about testing a product that relies on AI for a core part of its function? But if I were to rise above the nitty-gritty details, the big trend is accepting the fact that automation isn't going to solve all your problems. Yes, teams have to rely on it because we're moving very quickly and you do have to automate where appropriate, but people have realized that you're not going to solve all your problems with automation. The solution is a mix of automated testing with good tools, plus a fair amount of manual interaction with our software, in order to give you the outcomes that you want.

Mike Hrycyk:
Awesome. Thanks. So, okay, you both sort of talked about it, so maybe we'll spend a second on it. Has going full remote so suddenly, across so much of how we test in organizations, changed our practice? Have you seen big changes in the way we test? Not just the fact that you're sitting at home staring at a screen in the corner of your kitchen, but has it changed the way we do testing, Nat?

Nathaniel Couture:
I think it's been detrimental, from what I'm seeing, and I'm in a bit of a unique context, because here in New Brunswick we've had a mix of yellow state and orange state, which means the amount of time we can actually spend in person in our office, around the other developers and testers on our team, keeps changing. And what I notice is that, whether you like to admit it or not, when you're remote you do come in for the meetings, and the technology is very good for communicating. We've got Slack, we've got all these communication tools. But when you're in earshot of developers talking about something not working, you get insights as a tester that help you do your job better, and losing that is a little detrimental for sure. You can still get your job done, but I think going full remote is definitely not as good for development or testing. From a testing perspective especially, because you're often outside of those conversations between developers, you're missing out.

Mike Hrycyk:
So that ties to something I call the serendipity effect, which means you can still test and do your testing, but finding the things that you only ever found through happenstance is a lot less common. Christin?

Christin Wiedemann:
Yeah, to some extent I agree, and I have seen the same things that Nat was describing. All roles, but testers in particular, I think, rely on those casual conversations to pick up information and to overhear the discussion. And we don't get that these days, because any conversation using our virtual communication tools tends to be one-on-one or a formal meeting. Part of our role as testers, of course, is to advocate for bugs and to be diplomatic about how we do that, and it's much harder, I think, to do that in writing, whether it's in the defect tracking tool or in an email or in a chat message. It's hard to be as delicate as you sometimes need to be when you talk about a defect. So in those situations, it is tougher. I think there are good sides too. Part of that is that, in general, I've also seen that people are much more deliberate and intentional in how they communicate. It does take more effort, but it tends to make the communication that does happen better. And in a lot of situations, I've seen people be more diligent about documenting, writing things down, posting the message on Slack to make sure everyone's on the same page, instead of just taking things for granted or assuming that everyone heard that same conversation. So I think things can move in both directions, where it can be worse for testers and it can actually be better, and so much of that is team dependent and depends on the culture that was set up before this happened. I think it's also important to keep in mind that there were certainly a lot of teams that were already distributed, so they were already used to not working face to face, and the impact has been very different depending on what the team's starting point was.

Mike Hrycyk:
Yeah, for sure. In the discussions I've had across the last year, different teams are finding tricks or ways to insert a little bit of that serendipity back in, whether that's having drop-in calls where once a week you just come in and the chat goes where it's going to go, or making sure the start of meetings has a little bit of small talk just to let it happen. It's not perfect, but it helps. Being more deliberate, that's another one. But for my own company, for PQA, where we're a consultancy, the biggest miss that I have is the between-team catches. When you have person A working on project B and person C working on project D, and they sit near each other, and one is talking through a problem on a call or on the phone or whatever, and they get off that call and this other tester, not even on their project, says, "Have you thought of this?" There's just no avenue to bring that back; there's nothing I've found, no new ideas I've heard, that can really bring it back. So that's a bit of an impact. And if anyone out there has any ideas on that, specifically how disconnected project people can get some of that serendipity, let us know!

Christin Wiedemann:
Maybe I can just add to that, that sometimes I feel a bit like a switchboard operator these days. We use all sorts of communication tools, including chat tools of course, and as a leader I see that part of my role now is to have an overview of all the communication and discussions that are going on and to try to make the connections that people are not naturally making themselves to the same extent. I think that's always been part of a leader's role, but more so now than ever; it really feels like I'm operating a 1920s switchboard.

Mike Hrycyk:
That's a good image. And you're right. I mean, being deliberate and making those connections is something that we've probably always done, but now it means listening even closer and being in more of your internal chat rooms so you can say, "Hey, something's going on in this other room and maybe you should talk to this person." And yeah, you're right. That's a good thought. Alright. So that was the year that was; let's start talking about the year that will be, 2021. What are our predictions here? Where are we going to see significant change or iteration in testing this year? And don't go too deep, we'll dig into the ones that are interesting. But Christin, what are your predictions? What are we going to see?

Christin Wiedemann:
I think one big thing is definitely going to be how we view test automation, and I'm happy to talk more about that, but I'll leave it at that for now. I think AI, ML, all sorts of decision rules and decision-based implementations will be really important as well. And maybe the final thing that I would add is just data. I already mentioned it, but I think data is going to become increasingly important.

Mike Hrycyk:
Awesome. Alright. Nat?

Nathaniel Couture:
Oh, the crystal ball question. I guess I'm generally not someone who has a great crystal ball, and I tend to be a little pessimistic, a slow adopter of things that are changing. You know, I think we're going to see a lot of the same trends. People are going to continue to work remotely, and I think people, testers and developers, are only going to get better at working in this format. We're already seeing evolutions in the way that we use our tools, better documentation, as Christin mentioned. We're just getting a little more diligent in the way that we write things because we know that people are ingesting more content that way. We're getting better at using the communication tools, creating specialized channels or maybe over-including people in meetings, which is a bit of a pain. But what I think we're not going to see is fundamental upheavals in the world of testing due to the introduction of new tools, whether they're AI-based or not. Time and time again, we see all these companies making promises, "Oh, our new AI tool is going to come out." Yeah, there may be some new tooling that comes about, but I think most of it's going to be people-centric, very small, incremental change, mainly adapting around increased remote work.

Mike Hrycyk:
Well, I like that, Nat. You're predicting it's going to be the same as it has been for the last 15 years: incremental and slow adoption of change.

Nathaniel Couture:
Yes. How do you like that?

Christin Wiedemann:
Sounds like it’s a safe bet.

Mike Hrycyk:
As Nat said, he doesn't want to put himself out there. He likes to hedge his bets. Okay. One of the topics that I feel has been part of this "what's coming in the new year" conversation every year for the past five, if not 10 or 20, is AI and machine learning. It definitely has grown each year, but let's address that. And Nat, you've already answered what you're saying on this, but I'm still going to make you talk. Is this the year that it goes big? Is this the year that it takes over the universe? Where are AI and machine learning going in 2021? We'll start with you, Christin.

Christin Wiedemann:
Well, I was thinking a lot about this before this chat, and what I think is really happening, for a lot of different things, AI and machine learning being an example, is just that our language is catching up with what we're already doing. Machine learning is not new. AI is not new. Implementations have been around for decades. It's just that we are becoming more aware of them and we now have the language to talk about them. Twenty years ago, we would have used terms such as support vector machines, et cetera, very specific statistical and mathematical terms, to talk about implementations of artificial intelligence. Whereas today we have a much more common language and a much more well-established understanding of what that is. The tech is changing too, but the big change is that this time we're talking about it in a way that makes it more approachable. So I think the AI and machine learning revolution has kind of already happened, but we're starting to become really aware of its impact. That's an area where I think testing still has a lot of growing up to do, a lot to learn, and where we need to make sure we really build strong collaborations between data engineering or data scientist roles and test and quality roles.

Mike Hrycyk:
Cool. Nat, do you have anything to add or discuss there?

Nathaniel Couture:
Yeah, I know my previous answer would indicate that it's not going to have a big impact, but there are a couple of areas where I think AI is having a tremendous impact on the work that we do as testers. And I'm probably seeing this more because our company actually has a fair amount of AI within its tooling. I think there are two items. Some of this stems back to relationships with tool vendors over the past decade of being in the testing space: tools leveraging AI to get better at recognizing components on the screen and so on, some of the painful things that a test automator deals with on a day-to-day basis. There were great demos put on by companies like Tricentis and Mabl and so on. I think there's still going to be progress on that and I think it's going to be helpful. But more important, I think, is all the software out there that is starting to embed artificial intelligence or deep learning neural networks into the products themselves. It is affecting the way that we do our jobs as testers. I think there's going to be a combination of looking at what data scientists do and, as I believe Christin hinted at earlier, being a little heavier on the data science front, making data a bigger part of your day-to-day work. When we're setting up a bunch of our tests to compare whether one algorithm works better than another, or one machine learning model works better than another, it's a lot of information that we're looking at. We're comparing, and it's all just numbers and results and probabilities: okay, is this one better on this metric but worse on the other, and making guesses and hedging bets. But I think we as testers have to evolve as well and take into consideration the effect that having AI built into a product has on our ability to test it, and what a high-quality machine learning model is doing versus a low-quality one.
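[Editor's note: to make the metric comparison Nat describes concrete, here is a minimal, hypothetical Python sketch, assuming scikit-learn and two illustrative candidate models, of putting two models side by side and seeing how one can win on one metric and lose on another. It is a sketch under those assumptions, not a description of any tool mentioned in the episode.]

```python
# Hypothetical sketch: comparing two candidate models on several metrics,
# the kind of "better on this metric, worse on that one" comparison a tester faces.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_eval)
    print(f"{name}: accuracy={accuracy_score(y_eval, pred):.3f}, "
          f"precision={precision_score(y_eval, pred):.3f}, "
          f"recall={recall_score(y_eval, pred):.3f}")
```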

Christin Wiedemann:
Yeah, maybe I can just add to that. I think when it comes to implementations of artificial intelligence, we're in the same spot we were in when we first started having "real," in quotation marks, testers. Back then, we were moving from a world in which developers wrote and tested their own code. Now I think we're in a realm of software development where, to a large extent, data engineers and data scientists are still testing their own models, and I'm not arguing about which is the right approach. But I think we're now in a transition phase where testers are becoming more and more involved, not just in testing the application and the data themselves, but in actually evaluating the models, a responsibility that up until now has, to a large extent and in a lot of environments, still been on the data team itself.

Mike Hrycyk:
Yeah. I think it's important, as we have these conversations moving into 2021, to realize that there are two facets, or maybe multiple, but two that I'm going to address here, of AI and machine learning in testing. One is that machine learning and AI become components of the tools that we use to do testing. That's what you're talking about with Mabl, and Applitools had something recognizing objects: leveraging the capabilities of machine learning and AI to do your testing. Then there's the other facet, which is how we test the process of implementing machine learning and AI into what you're building. How are we making sure that the data sets we're using to learn from are appropriate data sets? How are we testing the software that's using machine learning so that it has a good output? A lot of that is data, but it's also the algorithms we use with the data. It's not so much that a tester has to understand the depths of a really complicated algorithm, but they do need, again, the integration and testing work of making sure that all of that can work together and still produce quality output.

Nathaniel Couture:
Yep. And I think Christin actually zeroed in on a point that I hadn't really considered, but it is actually happening in our organization, where testers are now working with the data scientists to understand, okay, what data did you use to train this model? Because we want to make sure that when we're evaluating it, we're not using that same data. So there's a tightness there that probably didn't exist before. We knew it was important to talk to the developers building the software, but the data science team was far removed prior to this. Now we're actually drilling into: how did you build this? Why this data, or what data did you omit when you built and trained this model? Because, more often than not, that is much more important when you're evaluating the end system than looking just at the use case it was designed for. And then you're also looking at quality criteria that we often didn't consider as testers, like the ethical nature of the software, or where could this software go wrong? We just saw someone in the US not long ago wrongfully identified by facial recognition software who ended up in jail for a number of days just because of a software malfunction. Anyway, I think 2021 is definitely going to bring about some new challenges for testers. Just more and more, I think.
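[Editor's note: a minimal, hypothetical sketch of the train/evaluation separation Nat describes, where the test team confirms that none of the records the data scientists used for training are reused for evaluation. The file names and the "sample_id" column are assumptions for illustration only.]

```python
# Hypothetical sketch: verifying that evaluation data was not used to train the model.
# Assumes both manifests carry a unique "sample_id" column.
import pandas as pd

train_df = pd.read_csv("training_manifest.csv")    # hypothetical list from the data science team
eval_df = pd.read_csv("evaluation_manifest.csv")   # hypothetical list assembled by the test team

overlap = set(train_df["sample_id"]) & set(eval_df["sample_id"])
assert not overlap, f"{len(overlap)} evaluation samples were also used for training"
print(f"OK: {len(eval_df)} evaluation samples, none seen during training.")
```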

Mike Hrycyk:
This flags something that I want to push out to all the people who are listening and all the testers out there: a really important thing for testers to understand, embrace and really get is the notion of testing for bias, and to learn how to push that notion into data sets. When we think about bias, is a facial recognition data set going to include only white faces and not people of colour? There are so many ways that bias can creep in, and I think we should call on testers to be one of the gatekeepers trying to make sure that bias does not become inherent in our systems. Okay, getting off my soapbox. So maybe there's no easy answer to this question, and I think it is definitely a hard one, but say one of our listeners raises their hand and says, that's awesome, how do I get machine learning to be part of my organization? Any suggestions on how to start a company down that path? Start with you, Nat?

Nathaniel Couture:
Well, I guess again, we've got these two ways AI comes in. If you're just talking about it from a testing and testing tooling perspective, the onus on that front is really about having strong test leadership who sees where the latest tools can be helpful in your organization. The question is maybe a little less applicable to how you get AI into a product company; it's not normally a tester's role to build in AI. But it's important to understand that, as a member of the quality assurance team in an organization that does have machine learning or AI models embedded within the product, your scope needs to adjust accordingly. Your role as a tester needs to encompass things like bias in the machine learning models themselves, the ethical considerations, performance and so on, because a lot of this stuff is more resource-intensive. So there are a number of areas that AI has an impact on. I think you're less likely to bring it into an organization from the product side unless you're part of a product decision-making team, but on the testing side, it's really about choosing products that work for you. And I don't think people are going to buy a testing product because it has AI in it, necessarily; they're going to buy a product that just fundamentally makes their life better.
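[Editor's note: one hedged illustration of what checking for the bias Mike and Nat both raise might look like in practice, starting from a simple breakdown of how training data is distributed across demographic groups. The file name, the demographic column and the 10% threshold are assumptions for illustration, not part of any product discussed here.]

```python
# Hypothetical sketch: a coarse first check for representation bias in a training manifest.
# The "skin_tone" column and the 10% threshold are illustrative assumptions.
import pandas as pd

df = pd.read_csv("face_training_manifest.csv")       # hypothetical dataset listing
proportions = df["skin_tone"].value_counts(normalize=True)

print(proportions)
under_represented = proportions[proportions < 0.10]  # arbitrary illustrative threshold
if not under_represented.empty:
    print("Warning: under-represented groups:", list(under_represented.index))
```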

Mike Hrycyk:
Awesome. Christin, anything to add?

Christin Wiedemann:
Yeah. Like with everything else, I think you start by asking yourself, what is the problem you're trying to solve? Don't create a solution for a problem that you don't have, right? Machine learning doesn't have any value in itself. There has to be something you're trying to achieve that you can't achieve in another way. So you always have to be skeptical and not be too distracted by new trends and buzzwords and all that. Looking at the product you're building, is there an application for machine learning? I think it's easy to get things confused and to mistake super interesting data problems for machine learning problems. A lot of the time what you're doing is trying to make decisions based on data, but that in itself is of course not machine learning; it can be a static rules-based system. So it's about understanding the fundamental concepts, understanding your context and understanding the problem you're trying to address. And from a test tools perspective, I think at this stage most of the test tools that use machine learning are still probably more interesting than useful. A lot of them, I think, come with a higher maintenance effort and can be pretty tricky to set up. But one area that I would be super keen to see people explore more is how we can use machine learning on our test data: not the test data we use to test applications, but the data we generate when we run our tests. Test results, test cases: can we use machine learning to help us make better use of all that information that we generate but don't always go back and look at?
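[Editor's note: to illustrate Christin's idea of applying machine learning to the data our own test runs generate, here is a minimal, hypothetical sketch that clusters historical failure messages so recurring failure modes stand out. The input file and its "failure_message" column are assumptions for illustration.]

```python
# Hypothetical sketch: clustering past test failure messages to surface recurring failure modes.
# Assumes a CSV export with a "failure_message" column.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

results = pd.read_csv("test_run_history.csv")        # hypothetical export from a test management tool
messages = results["failure_message"].dropna().tolist()

vectors = TfidfVectorizer(stop_words="english").fit_transform(messages)
labels = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(vectors)

for cluster in range(5):
    examples = [m for m, label in zip(messages, labels) if label == cluster][:3]
    print(f"Cluster {cluster}: {examples}")
```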

Mike Hrycyk:
Cool. So let's talk about edge computing a little bit. That's a buzzword that I've been hearing recently. I'm going to start with this: can either of you give me a reasonable definition of edge computing for our listeners who don't know what it is?

Christin Wiedemann:
Can you define reasonable?

Mike Hrycyk:
No, no, but I can judge it really well.

Christin Wiedemann:
I think one way to think of edge computing is just in terms of centralization and decentralization. When you're talking about edge computing, one context in which it matters is, of course, IoT devices. With edge computing, you're trying to move some of the processing and the data from the central, let's call it a server for the sake of conversation, out to the devices. Those can be, for example, traffic cameras, passport readers in airports, et cetera. So rather than having a central system that does all the processing, you're trying to push that out to the units where things are actually happening. To summarize that somewhat rambling definition, I would say edge computing can be thought of a little bit as decentralized computing.

Mike Hrycyk:
That matches my thoughts. Anything to add or take away from that, Nat?

Nathaniel Couture:
No, that's a very reasonable definition. I mean, I think people can read into it what they want, but at the end of the day, in my domain at work, that fits our use of it. I can't think of how that wouldn't fit.

Mike Hrycyk:
So, okay. Now that we're on the same reasonable page, is edge computing going to change things this year? Is it a trending thing this year?

Nathaniel Couture:
I'm going to say yes, and maybe that's because, for the organization that I'm in, it's having an impact in a couple of ways, and I think we're not unique as an organization. A couple of examples: we, like many companies, are extending the capability of our software through a mobile arm of our solution, which in essence means we're going to be relying on data captured from an end device. Our software typically processes video feeds from static cameras: video surveillance systems that are part of a network of cameras on a facility, for example, which is one use case for our software. By extending this to agents who are wandering a facility, we can take video from a live camera that's moving through the facility and do some of the processing and analysis on the device before sending it back to a centralized system. That's one example, and the other is the cameras themselves. You can now get cameras with GPUs in them and do some of your object recognition right on the camera. So in the space that I'm in, and I don't think this is unique, the advent of really capable computing devices in small formats, that capability is just increasing every year. Right now we're at a point where you can start to run basic machine learning models on small embedded devices. I think we're just at the start of that trend, we're going to see more and more of it, and it's going to have a big impact.

Mike Hrycyk:
So what does your average tester have to learn for that? Are you emulating most of it, so they're still testing it centrally first, or how's that going to change the life of Joe Tester?

Nathaniel Couture:
It definitely has an impact. The functionality is probably equivalent to doing it centrally, except you're constraining the resources on that device, and for the software that you're testing, you're now relying on that outcome coming from the edge device. So I think it's going to complicate your life a little bit, especially if there's a large variety of devices. In my world, let's say I've got to test against a dozen different camera models. Yes, the software might work on the one in my lab, but is it going to work across the other models? So much like the introduction of software on mobile devices and mobile apps in general increased testing scope, this is just going to increase it that much more.
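[Editor's note: a hedged sketch of one way to keep the widening device matrix Nat mentions manageable: parametrizing the same functional check across camera models with pytest. The model names and the stubbed helpers are hypothetical stand-ins, not Patriot One's actual tooling.]

```python
# Hypothetical sketch: running the same functional check against several camera models.
# The model list and the stubbed helpers are illustrative assumptions.
import pytest

CAMERA_MODELS = ["acme-x100", "acme-x200", "visioncorp-v3"]  # hypothetical device list

def load_test_frame(camera_model: str) -> bytes:
    """Return a recorded sample frame for the given camera model (stubbed here)."""
    return b"\x00" * 1024

def run_detection(frame: bytes) -> list:
    """Stand-in for the real on-device object detection call under test."""
    return []

@pytest.mark.parametrize("camera_model", CAMERA_MODELS)
def test_detection_returns_a_result_for_each_model(camera_model):
    frame = load_test_frame(camera_model)
    detections = run_detection(frame)
    assert isinstance(detections, list)  # minimal per-model sanity check
```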

Mike Hrycyk:
Interesting. At least we're starting to understand distributed systems and interfaces and APIs, so we're not approaching it as a completely new idea, I guess. Christin, anything to add or say?

Christin Wiedemann:
Yeah, maybe to go back to your first question there, whether edge computing is going to be a trend or whether it's going to be relevant: I agree with Nat, but I would maybe add that I think it's important for two reasons. One is that there are going to be more devices that need to be able to work somewhat independently, and that's just because everything is getting, in quotation marks, smarter, right? I mentioned traffic cameras. There are going to be all sorts of devices out there that need to be able to think on their own: smart trashcans, smart cars, et cetera. So there's going to be more of it, absolutely; I completely agree with Nat. But the other thing is, because there is more of it and there's more awareness, we're also becoming more aware of the vulnerability, and that I think is huge. Making physical objects smart, in the sense that they can think on their own, is not really new. We've been doing that for decades too, but it wasn't really something that, again, we talked about, that people were aware of, and those objects didn't tend to do super important or supercritical things. Today they do, and they are very vulnerable to adversarial attacks. That's where I think security testing is going to come in, and how we define that and who does that is something that we are going to be talking about this year, for sure.

Mike Hrycyk:
That was an awesome segue, Christin, because the next item that I wanted to talk about is security. It's a bigger buzzword every day: the potential of attacks, the potential of your data going wild. But maybe let's just focus on how these security conversations are going to impact today's tester. And let's start with you, Christin.

Christin Wiedemann:
Well, I think we're going to have to think both more and less about it. As we move more and more to building things in a cloud environment rather than on-premise, some of those security problems kind of get solved for us by the environment. It's easier to know where our endpoints are and where they're pointing, what's going to be inside our safe cloud environment, what's going outwards, who has access to what. In those contexts, understanding the architecture and understanding the cloud environment is really important, but it might be someone else, the cloud environment provider, actually worrying about the security. At the same time, as I said, there's going to be more edge computing, I think, and there are going to be more devices that are really vulnerable. We're only really becoming aware of how vulnerable they are, and there are gaps and concerns, issues and risks out there that need to be addressed. We can't be oblivious to that. So we have to build better awareness and a better understanding of how to make things safe for everyone, and for the users primarily.

Mike Hrycyk:
Awesome. Nat, anything to add?

Nathaniel Couture:
No, you should always let Christin answer first. Well, let me go back to the way the question's worded: will it impact the average tester this year? For the average tester, the only impact I can see is that they need to be more aware of security-related issues. But I think the bulk of the security testing work and vulnerability assessment work is still going to be in the hands of specialists who do this type of work on a day-to-day basis. I don't think the average tester in an organization is going to be impacted in 2021, anyway. By 2020, anyone out there was aware of the typical standard attacks, and I think the movement towards cloud, and trusting large organizations to manage their infrastructure better than small companies can, is just going to make you, as Christin said, worry about it a little less in those areas. Edge computing might bring about some additional, or increased, landscape for security vulnerabilities to appear, but I don't think it's going to change the life of the average tester all that much.

Mike Hrycyk:
Okay. So we're approaching the end of the time that we have, or maybe even going to run a little long. I'm going to list off a few of the things that I wanted to talk about, just to put them out there so that if our listeners want to continue the conversation around them, they can. Codeless automated testing: is it a real thing? Is it really going to make a mark this year? DevOps: is that just a trend, or is it just the way things are now? Robotic process automation. The move towards private clouds, i.e. having your own AWS instance where you do stuff in your own network. All of those are things that we could talk about, but won't. If there are enough people out there who want it, we can come back and have another chat in another podcast. But the last topic I wanted to hit here, one that I kind of promised earlier, is how automation is going to grow and change this year. It's been part of the trend for the last 25 years, I'm sure. Where's it going? Where's it at? We'll start with Christin, so that Nat may not have to talk.

Christin Wiedemann:
I'm hoping that this is going to be the year when we finally stop having that conversation and we just talk about testing, we talk about development, and it's all just us building the product. I think we all agree that there is a certain level of technical skill that's needed for testers; the difference between testing and developing is more the mindset than the skills you have. That's what I'm hoping will change in conversations this year: that we will finally move away from tools claiming to make it easy to automate tests by being very UI-focused, by providing very appealing user interfaces, and just really focus on building good products together as a team.

Mike Hrycyk:
Awesome. Good. Nat?

Nathaniel Couture:
So I've been exploring a tool that, I guess, you would bucket into codeless automated testing, but it's an AI-centric tool that you just kind of point at your application and let it bump into stuff, and it learns over time and you can kind of guide it. I have hopes for this kind of thing, and not because I think it's going to fundamentally reduce my workload. But any kind of progress towards finding defects in software with minimal supervision and little setup work, I think there's some benefit to that. I don't think it's fundamentally going to revolutionize testing, but if I can install an agent, just point it at my application and get some value out of it over time, I think it's helpful. My hope is that there'll be enough progress that you see little tidbits of things appear in 2021 that make my life a little bit better, and that's one of them.

Mike Hrycyk:
AI-driven, automated monkey testing. I love that, Nat. Alright, so we are at the end of our time. I'm going to give you 20 seconds each: what are you most excited about for testing in 2021? And we'll start with you, Nat.

Nathaniel Couture:
Oh, most exciting. You know, I don't know that it's testing-specific, but I hope that we can return to some normalcy through the rollout of the vaccine. Just get back into a way of working where we're a little more in-person, with a little more interaction with the actual labs and technology that we're working with, and more conferences. Attending conferences, interacting with people outside of our organization. Internal communication is great, of course, but it does bubble you, and I think the lack of mixing at these events is limiting; virtual conferences are just not the same. You learn a lot by talking to other testers in person and sharing your war stories, and I'd like to see a little bit of that come back by the end of the year.

Mike Hrycyk:
Thanks, Nat, that was a terribly long 20 seconds. Christin?

Christin Wiedemann:
Well, I can definitely stick to that 20 seconds by just saying testing always excites me and this year is no exception. I think it’s going to be fantastic! There’s so much to explore still.

Mike Hrycyk:
Awesome. Thank you. Kind of the same answer for me: everything about testing excites me. The way we can interact, the way we can convince people that testing is important, the way that we've got a lot of people convinced that testing is important. Now let's get data scientists convinced that testing what they're doing is important. That's exciting to me, and I like that the conversations are happening. Alright, I would like to thank our panel for joining us for this really great discussion about the trends for 2021 in testing, and thank our listeners for tuning in. If you have anything you'd like to add to our conversation, we'd love to hear your feedback, comments and questions. You can find us at PQA Testing on Twitter, LinkedIn and Facebook, or on our website. You can find links to all of our social media and our website in the episode description. And if anyone out there wants to join in on one of our podcast chats, or has a topic that they'd like us to address, please reach out. If you're enjoying our conversations about everything software testing, we'd love it if you could rate and review PQA Panel Talks on whatever platform you're listening on. Thank you again for listening, and we'll talk to you again next month.

A Ph.D. particle physicist by training, Christin uses her scientific background and analytical skills to dissect complex software solution problems. Her career in the technology sector has primarily been spent in the professional services industry, starting as a quality assurance consultant and progressing through the different manager and director roles. As Director of Quality Engineering at Slalom Build, she is part of an incredible team of quality advocates, working collaboratively with clients to create groundbreaking products, while simultaneously advancing quality engineering practices.

LinkedIn: https://www.linkedin.com/in/christinwiedemann/
Twitter: https://twitter.com/c_wiedemann

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 21 years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the VP of Service Delivery, West for PLATO Testing, but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks and at a railroad. Intermittently, he blogs about quality at http://www.qaisdoes.com.

Twitter: @qaisdoes
LinkedIn: https://www.linkedin.com/in/mikehrycyk/

Nathaniel (Nat) Couture, BSc, MSc, has over 15 years of experience in leadership roles in the technology sector. He possesses a solid foundation of technical skills and capabilities in product development, including embedded systems, software development and quality assurance. With a decade of experience as CTO at two successful small ITC companies in New Brunswick, Canada, Nat has proven himself a solid leader, capable of getting the job done when it counts by focusing on what's important. He's a very effective communicator who has mastered the art of simplicity. Nat has served as an expert witness on matters of software quality. He is always driving to adopt and create technologies that incrementally reduce manual labor while improving the quality of the end product.

LinkedIn: https://www.linkedin.com/in/natcouture/

Twitter: @natcouture