This month we’re talking all things microservices. While they’re certainly not new to the world of testing, microservices have changed the way we ensure quality for applications. Our host Mike Hrycyk sits down with Suhaim Abdussamad, host of “That’s a Bug!”, and PLATO Senior QA Manager Matt Villeneuve to discuss how microservices change the way we test, and how they can even make testing better.

Mike Hrycyk:
Hello, everyone. Welcome to another episode of PQA Panel Talks. I’m your host Mike Hrycyk, and today we’re going to talk about testing and microservices with our panel of testing experts. Today we have Matt and Suhaim. They’re our experts, and I’m going to put that in quotes so that they can define how expert they are in microservices. Microservices have been interesting to me for a while because they really change the plane of testing. They change how you’re testing and how you think about testing, and in certain ways they make a QA’s life easier. As we get into the discussion, maybe I’ll be proven right or wrong about that. So I felt this would be a really good discussion to have, because I think there are a lot of people out there who have not yet encountered microservices, or are just on the cusp of encountering them. But realistically, it’s the direction our architecture has been moving for quite a while now, and it’s the norm now. It’s not leading edge. It’s not new. It’s what a lot of organizations have simply accepted as the architecture they should have. First off, let’s do some introductions. So Suhaim, tell us a bit about yourself.

Suhaim Abdussamad:
Sure. I’m Suhaim, as you said, and I’ve been in the testing field since around 2006. I currently run a podcast called That’s a Bug. It’s a podcast where we go over one software bug every episode, bugs that we find interesting; as testers, we all encounter interesting bugs all the time, and I like looking into why they happen. So that’s what we do on the podcast. Right now I’m quite interested in looking at new ways of testing and new technologies. Actually, the latest episode, the one that’s being published right now, is about AI and some of the issues with that.

Mike Hrycyk:
That’s awesome. Did you happen to catch our podcast last month about AI and machine learning?

Suhaim Abdussamad:
I did actually. Yeah, it was pretty interesting.

Mike Hrycyk:
Good! Glad you enjoyed it. Okay, Matt, over to you. Tell us about yourself.

Matt Villeneuve:
Hi. So as Mike said, I’m Matt. I’ve been in the testing field since 2002. My background is pretty varied. I started in hardware testing, in a new product introduction cell, back in the day when we were making millions of cables for very large telecom companies out here in Ottawa. Since then I’ve moved on to software testing, embedded testing, integration testing, and system-wide testing, in verticals that have ranged from simple web design to EMR and EHR to e-commerce websites. I’ve been using microservices, or at least the last project I was on used microservices, and I tested them for roughly three years. So I’ve got a little bit of experience with them.

Mike Hrycyk:
Oh, Matt, we’re going to have to talk to you and teach you how to extend your credibility a little bit. Yeah, no, don’t believe him. He’s super modest. Matt is super smart and really gets what he’s talking about. Alright, let’s jump in and start our discussion. As our regular listeners may be aware, we really like to start by level setting and making sure that we’re talking from the same space. So let’s just start with a simple one, because some of our listeners are probably thinking: what are you talking about? Microservices? I’m just here because I like Mike. So maybe let’s start with you, Matt. What is a microservice?

Matt Villeneuve:
So a microservice is any part of an application that can be removed from the larger monolithic container that applications have been built in for a long time. Now, I’m not saying that every single application has been like that forever, but you take a small piece of it and make it accessible via a network connection, via an API connection. By removing it from the larger application, you can easily reproduce it, you can have failovers, you can update it quickly. You can change what it does without a lot of impact to the rest of the system. It’s just a better way of making sure that a large application has redundancy and the capability of being dynamic and changing over time, depending on usage loads, updates from development teams, that type of thing.

Mike Hrycyk:
Suhaim, anything to add, change, argue about?

Suhaim Abdussamad:
No, I mean, Matt’s got it pretty good there; he explained it quite well. One thing we can mention is that for things like performance, if you can’t fix a problem right away, one thing you could potentially do, depending on the architecture, is run multiple instances of the same microservice. You can add extra availability pretty quickly and scale out your environment a little more easily with microservices. That’s something that I find interesting.

Mike Hrycyk:
Yeah, me too. I think another one of the things, and we’ll get into more detail later, but one detail that wasn’t absolutely clear when you said that, Matt, is that once you start, it’s like a potato chip: you don’t have just one and then you’re done. When you start breaking a monolith into microservices, the goal is that you’re going to create many, many of them. The idea is to take all of the functions that might be consumed by multiple options, or have these other needs, and build out a fleet of microservices that start replacing a lot of the functions of the monolith. That’s not all of them; there are certainly still things that stay within a larger monolith, but it’s about creating a host of them to fit your needs.

Matt Villeneuve:
Yeah, I agree with what you were saying, Mike; I think you’re right. It was a bit of a miscommunication on my part, but the larger view of it is that you don’t have a monolith anymore, right? You have these smaller containers that are spread out. Whereas in the past you may have had a UI running on a web server on one computer, and it would simply talk to a database stored on the same computer to write and retrieve information, instead of everything being co-located in one place, it’s now spread out. And I guess in theory you could look at the entire system as one giant monolith, but in actuality it’s not: you could very easily take down a service or add a service without impacting how the overall system responds.

Suhaim Abdussamad:
Yeah, so that part is interesting, right? If you take out one of the services, or one of the services goes down, it doesn’t mean the entire system goes down. Take something like Netflix: if one of its services goes down, let’s say the subtitles stop working, not that that’s how the Netflix architecture actually is, but if one piece goes down, it doesn’t take out the most critical thing, which is being able to watch the video. Netflix stays in a degraded state, a portion of it isn’t working, but the rest keeps going, right? That’s just an example, but it’s something nice about the architecture.

Matt Villeneuve:
Yeah. And to add to what you’re saying, part of what the microservice architecture is all about is the idea that there are these failovers and fail-safes. So if you do have a microservice running that fails, you would have an orchestrator or some type of overseer that would say, “Hey, that service just went down. I better spin up a new one really quickly and replace it.” So you might notice a couple of seconds of your video or your movie without subtitles, if we’re still using that example. But then, as soon as the scene changes or something, it would come back and you would have subtitles again. So it should be a negligible loss of functionality when a microservice does fail.

Mike Hrycyk:
And even more powerful: generally, you’re not restricted to one instance of a microservice either. You could have 50 of the same microservice serving the needs, and if one of them goes down for whatever reason, it’s reasonably easy for that call to be understood and redirected to another one. So there are multiple ways that it becomes more stable.
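To make that redirection concrete, here is a minimal sketch of client-side failover across replicas in Python. The service name, URLs, and endpoint are hypothetical, and in practice a load balancer or service mesh usually handles this rather than the caller:

```python
# A sketch of trying replicas in turn until one answers, assuming a
# hypothetical "inventory" service with three known instances.
import requests

REPLICAS = [
    "http://inventory-1.internal:8080",
    "http://inventory-2.internal:8080",
    "http://inventory-3.internal:8080",
]

def get_stock(item_id: str) -> dict:
    """Try each replica in turn; any healthy instance can serve the call."""
    last_error = None
    for base_url in REPLICAS:
        try:
            response = requests.get(f"{base_url}/stock/{item_id}", timeout=2)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc  # this replica is down; fall through to the next
    raise RuntimeError("All replicas failed") from last_error
```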

Matt Villeneuve:
Yeah, exactly, exactly. And to add to what you’re saying, not only have you got 50 running and one goes down, it also works the other way. Say you’ve got 50 running on an event ticketing site, and all of a sudden it’s 6:00 AM and a huge concert goes on sale, and all these people start flooding your website. Well, those 50 microservices might grow to 5,000 microservices, and they’ll stay at 5,000 as long as the load is there. Then, as people finish buying tickets and the traffic stops, you’ll go back down to 50 and your resource pool will shrink, but you were able to handle the traffic while it lasted.

Mike Hrycyk:
So your example is amazingly valid. And yet I have a firm belief that when those tickets go on sale, they shrink to two services and maintain that through the first hour of the sale.

Suhaim Abdussamad:
We do need to be careful when we set something up like that, though, because costs could go up like crazy if you just scale up to infinity, right? So you do have to set those limits.

Mike Hrycyk:
Yeah, I mean, in systems like AWS, they put caps on it, right? You can auto-scale to a certain point, and then you’ll have set it up so it won’t go past that.

Suhaim Abdussamad:
I mean, you have to configure it that way, is what I mean.
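As a toy illustration of the kind of limit Suhaim means, here is a sketch of a scaling rule with a configured floor and ceiling. The numbers and the per-instance capacity are made up for illustration:

```python
MIN_INSTANCES = 50       # floor: never scale below this
MAX_INSTANCES = 5000     # ceiling: the cost cap being discussed
REQUESTS_PER_INSTANCE = 200  # assumed capacity of a single instance

def desired_instances(current_load: int) -> int:
    """Scale with load, but clamp to the configured floor and ceiling."""
    wanted = -(-current_load // REQUESTS_PER_INSTANCE)  # ceiling division
    return max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))

print(desired_instances(2_000_000))  # 5000: capped, even though load wants 10000
print(desired_instances(4_000))      # 50: the floor holds when traffic is quiet
```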

Mike Hrycyk:
But we’ve drifted a little bit, so I’m going to pull us back in. This is the testing podcast. Now that we understand microservices: why are microservices so powerful for testing and quality? Let’s start with you first this time, Suhaim.

Suhaim Abdussamad:
So a microservice is supposed to do a limited number of things very well, right? It does a few things, or one thing, very clearly. So for testing, it becomes a bit easier: as long as you know what that microservice is supposed to do, what its inputs and outputs are, you can test that. But the more interesting part for testing, I find, is that you also need to understand how all those microservices play well together; you need to test that in-concert part too. For testing and quality, it’s the same as anything else, in my opinion: as with any other app, you need to understand the application itself. But a microservice on its own is easier to test, because you know what it’s supposed to do, and it’s clear, or it should be clear. Hopefully there’s some documentation around it, sometimes auto-generated depending on what it is. Then you can use that to create the tests for it and write automated tests that run on every release, or on every merge, or however you want to set it up in your development cycle. And if there are any regressions, you should be able to catch them pretty quickly.
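A minimal sketch of the kind of per-service check Suhaim describes might look like the following, assuming a hypothetical “discounts” microservice with a documented endpoint and a staging instance to run against; the URL, parameters, and response fields are all invented for illustration:

```python
# Run with pytest on every merge or release to catch regressions quickly.
import requests

BASE_URL = "http://discounts.staging.internal:8080"  # assumed test instance

def test_known_input_gives_documented_output():
    resp = requests.get(f"{BASE_URL}/discount", params={"code": "WELCOME10"})
    assert resp.status_code == 200
    body = resp.json()
    # The documented behaviour: a percentage and an expiry field.
    assert body["percent"] == 10
    assert "expires" in body

def test_unknown_code_is_rejected_cleanly():
    resp = requests.get(f"{BASE_URL}/discount", params={"code": "NOPE"})
    assert resp.status_code == 404  # documented error behaviour, not a crash
```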

Mike Hrycyk:
Cool. Matt?

Matt Villeneuve:
Yeah, what you said is completely valid. You’ve got these little microservices, and you should know what’s happening and what’s going on inside them from the good documentation given to you by the team developing the microservice. There are a few things I would add, though. One thing I would really add is the changeability of a microservice and how easy it should be to update it. You’re talking about a small dedicated team of testers, maybe three or four if it’s a large microservice, working on it, and they’re going to be doing updates on a fairly regular basis. Microservices, of course, lend themselves really well to CI/CD, to continuous integration and continuous deployment. And part of continuous deployment is that when the team is ready to push, it goes to a dev branch or a staging branch so that testers can take a look. Having a very easy way of updating these microservices means we now have the ability, as testers, to get changes basically on the fly. No longer do we have to wait for a huge release to happen, waiting maybe two or three sprints until that team is ready to push their microservice. I’ve worked on projects where microservice updates sometimes happened on a daily basis. So it gives us, as testers, very good insight into what’s happening and how the application is changing. Now, mind you, I didn’t work on anything large like a Netflix or an Amazon. The project I worked on was much smaller, with only about 12 microservice containers running, so it was quite easy to see how they all interacted and how the changes affected the functionality of the system, just by getting one quick change. That’s one thing I think is really important about microservices: it helps us catch changes quickly. It’s the fail fast and fail often mentality. There’s a change, and we catch it right away. We don’t have to wait until the information is at the end of the line before we’re able to verify that it’s correct.

Suhaim Abdussamad:
Yeah, I agree with that. It’s kind of nice: let’s say you’re building some type of UI, and on many projects I’ve worked on, the backend comes first. So you get the API calls, the microservice bit, first, and you can write those tests right away. Once the front end is ready, you already know the backend is good to go. And if the opposite happens, as long as you have the documentation, you can build mocks and write your tests, and once the service is ready, you just run those tests and you’re good to go. So the development cycle gets faster that way too.

Mike Hrycyk:
So this was going to be a later question, but just for those who don’t know, can you tell me a little bit more about what a mock is?

Suhaim Abdussamad:
Yes. So let’s take an endpoint: you want to mock out what it’s going to return. You make your call, and you know what you’re supposed to get back. A lot of tools do that; Postman, for example, does. You can create a mock and make your calls against it, because you don’t have an actual service running at the moment. That’s what you do to simulate the service, a REST API in this case.
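Postman can host a mock server for you; the same idea expressed in code might look like this sketch, which uses Python’s standard unittest.mock to fake a service that isn’t running yet. The subtitles service, URL, and response shape are assumptions for the example:

```python
from unittest.mock import Mock, patch

import requests

def subtitle_language(video_id: str) -> str:
    # Code under test: calls a hypothetical subtitles microservice.
    resp = requests.get(f"http://subtitles.internal/videos/{video_id}")
    return resp.json()["language"]

def test_subtitle_language_with_mocked_service():
    fake = Mock()
    fake.json.return_value = {"language": "en"}  # what the docs say comes back
    with patch("requests.get", return_value=fake) as fake_get:
        assert subtitle_language("abc123") == "en"
        fake_get.assert_called_once_with("http://subtitles.internal/videos/abc123")
```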

Mike Hrycyk:
Yeah. We used them a lot for external calls so that we weren’t dependent on them. We made calls out to Facebook and Twitter and so on, and their staging environments are very undependable and can take a lot of time, which can make your test runs unstable. So we would mock those out, because you know what you’re supposed to get back, and that’s not what you’re actually testing. So yeah.

Suhaim Abdussamad:
Yeah, that definitely makes sense. If it’s not what you’re testing, not your product, you don’t particularly care if something fails on the external systems’ end. Yeah, I would use mocks for that too.

Mike Hrycyk:
Or if that microservice hasn’t been developed yet, because it’s actually in phase three, sprint seven, but you want that data to be able to test what you’ve done.

Suhaim Abdussamad:
Exactly. Yeah. That’s the use case that I had to deal with.

Mike Hrycyk:
Well, one of the things that I really like about microservices is how strongly and naturally they lend themselves to automation. And you touched a bit on this, Matt, when you talked about putting it into CI: they have controlled, well-defined interfaces and well-defined functions. So you can pretty much write automation quickly, without a lot of hunting around and without a lot of trouble. They also run really fast, because they’re component-level tests. A GUI test is hard to maintain, brittle when changes happen, and takes 3 to 30 seconds per script to run; these run in the 20 to 100 millisecond timeframe, depending on the test. You can build a lot of them, they run fast, and they’re not difficult to maintain because they’re so simple. So it’s kind of like the testing pyramid: you have a whole bunch of them, and each individual test doesn’t prove a whole lot, but 500 of them prove that your backend is still going to work fine.

Matt Villeneuve:
Yeah, I agree with that. The best way to set things up is to have automation in mind. And I might be jumping a question ahead, because I think one of the next ones is what skills and tools are necessary to test microservices, and…

Mike Hrycyk:
Hey, good segue. That’s exactly the next topic. So go ahead.

Matt Villeneuve:
Sure. So I think you need a certain amount of automation skills, or at least scripting skills, to test microservices. If you’re going to be making mocks or using stubs or something of the sort, you have to be able to write those in a way that they’re useful and that they can be run. Can a manual tester hope to test microservices successfully? I don’t know; that’s a good question. I’ve actually had manual front-end UI testers who had to start scripting once we moved into microservices, so that they were able to keep up and provide good testing. You could connect through Postman, have a bunch of mocks ready to go, hit run, and have it return to you. But the way I’ve had it fleshed out is that every microservice running in a container had a partner quality container that would run with it. That partner would contain the mocks, the stubs, the API calls, and when we did continuous integration, it would be spun up, run next to the container, talk to that container, and then spin down. I would not expect a manual tester to be able to go in there, update those, and use them. And in fact, depending on how technical the manual tester is, a lot of these containers are running a Linux kernel or something of the sort. You would have to understand how to use the Docker command line to bash into the Linux box and connect to the command line in there, so that you can actually see what’s going on. So I’d say yes, I think a manual tester can do it if they have some technical background.
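As a rough sketch of that partner-container pattern, here is what the orchestration could look like using the Docker SDK for Python. The image names are hypothetical, and in Matt’s setup the CI system did the spinning up rather than a script like this:

```python
import docker

client = docker.from_env()

# Spin up the microservice under test (image name is an assumption).
service = client.containers.run(
    "myorg/discounts-service:latest",
    detach=True,
    ports={"8080/tcp": 8080},
)
try:
    # The partner quality container holds the mocks, stubs, and API tests;
    # it talks to the service over the network and exits with pass/fail.
    # Without detach=True, run() blocks and returns the container's logs.
    output = client.containers.run(
        "myorg/discounts-qa:latest",
        network_mode="host",
    )
    print(output.decode())  # test output from the QA container
finally:
    service.stop()
    service.remove()
```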

Suhaim Abdussamad:
Yeah. So, I mean, there may be a different way to approach it too, right? In your case, definitely, when it’s early in the process. But what if the UI is already developed, you’re doing some type of regression, and you run into some type of bug in the UI? There are a couple of things a manual tester could potentially do. In Chrome, if it’s a web UI product, you can go into the network tab, look at the call that’s being made, and then look at the error. That is something I’d expect a manual tester to be able to do. The other option is server-side logging. There are lots of tools that aggregate all those logs now; Splunk is one I have a lot of experience with. I would assume there’s some type of dashboard, or some type of query, already available for your product at that point, so you can go in and look for that error. Now you have two data points for your bug, and hopefully you can diagnose the problem from those error messages. But this all depends on whether you have it set up. So I agree with you for the most part, except that I think manual testers can do a little bit more on the other side of things, even if they’re not overly technical. But at the front end, and when I say front end I mean the beginning of the cycle, they definitely do need to be a bit more technical.

Matt Villeneuve:
Yeah, and that’s an extremely fair point. I should have prefaced it: the project I worked on was in its early stages, and we didn’t have the logging set up. It was not high on the priority list for some reason; they just never got around to actually implementing it. So for us to retrieve logs, we would actually bash into a Linux box, cat the file out, copy-paste it, and send it to a developer. It wasn’t set up in a way that was nice for testers, let’s put it that way. They didn’t build that into the architecture to begin with.

Suhaim Abdussamad:
Yeah, that’s completely fair, and I’ve done that in the past too. Having a tool is quite nice. But on the front end, in the browser, I do think manual testers should be doing that right now. I would consider it a pretty common thing to do. I don’t know if that’s the case everywhere, but the ones I’ve worked with usually do it.

Mike Hrycyk:
Well, I think it’s a minimal expectation we should have of every tester who’s going to call themselves a senior and wants to stay relevant.

Suhaim Abdussamad:
Definitely.

Mike Hrycyk:
I would say that what you were talking about is less about testing the microservices and more about doing an integrated or end-to-end test, however you want to say it, of a system that includes microservices as part of it, right? When there’s a failure, you’re able to dig into the logs, but you’re testing the system at that point, not directly testing the microservices.

Suhaim Abdussamad:
It’s true. But the point I was trying to make is that during that time you can at least pinpoint the microservice by looking through the network tab and understanding where the call is coming from. As long as you have some idea of the architecture, you should be able to give a lot more information. As testers, when we find a bug, the more information we give, the better, right? Hopefully we get to the point where we can almost figure out the exact cause of the bug.

Mike Hrycyk:
Absolutely right. So, I mean, the statement is that if you live in an ecosystem that has microservices, you have to grow your knowledge to the point where you can do the things we’ve been talking about, or you will find yourself losing your relevance.

Matt Villeneuve:
It’s true. Yeah. 100%.

Mike Hrycyk:
One of the things you said about actually testing microservices, Matt, that was interesting to me: yes, you can go into Postman, and if you learn enough, you can manually create the requests that let you manually test the microservices. But my thought, when you were saying that, was: yeah, but now you’ve done three quarters of the job of automating it. As soon as you’ve developed that request you want to make, why don’t you just wrap a little bit of a framework around it and then, boom, you can run it every time?

Matt Villeneuve:
So I guess I should clarify that we used Postman too. The way that I look at automating this, and this is probably more of an automation question than a microservices question, is that you write the manual test case first, and you run the manual test case. If the manual test case passes, then you know you can automate it. And for me, a manual test case needs to be written in a way that an automator can look at it and say, I can do all those steps one at a time. I’m sure there are people out there who will disagree with the way I do this, but because of the architecture I like to use when I spin up a new project, if I’m given free rein over it, every time a test runs, it comes back and passes at the test-step level. So, having said that, Postman is a good way to check that manual test step and make sure that it works. But would I use Postman in a larger automation framework? Probably not. I’m going to use something like Robot Framework in Python, or I’m going to write it in Java, something of that sort. I consider Postman a manual tool. So if it works, yes, you’re right, I get three quarters of the way there, but that last quarter is going into the actual framework and writing it in some coding language to complete the task, so that it runs quickly and is small enough to fit into, like I said, a Docker container that contains the QA framework to run alongside the microservice.

Mike Hrycyk:
Well, Matt, you’ve just signed yourself up for a conceptual automation discussion at a future date, but we’re going to move along now with the microservices discussion. Think about agile, waterfall, doesn’t matter: you’re going to write a test strategy. A test strategy that tells you these are the tests you’re going to write, this is how you’re going to write them, this is where they’re going to go. How is that different when you have microservices as a core part of your architecture? Is it different? Maybe it’s the same. And maybe, in framing your answers, consider this: does it change regression testing much? Let’s start with you, Suhaim.

Suhaim Abdussamad:
I don’t think it changes regression testing greatly. I do think understanding your microservices depends on what the project is, right, and what level of the project you’re on. Understanding all the microservices you’re responsible for, and how they all interact, is important. As a strategy, you need to know which microservice does what, everything along that chain. But other than that, personally, I don’t find the test strategy changing very much. The automation continues at the microservice level, which you presumably already have at that point, and at the UI level the Selenium tests, or anything of that sort, stay the same. It doesn’t matter whether it’s microservices or not, because you don’t really care at that point if you’re testing the UI. On the backend it becomes a bit easier, because you’re usually testing just the microservices individually. And if you’re going to repeat the same flows at the microservice level, then your UI tests there become redundant. At least that’s my opinion. Matt, maybe you’d disagree, but I’m kind of curious what your thoughts are on that.

Matt Villeneuve:
Yeah. So for me, the test strategy takes a bit of a turn. It’s important to understand where the automation is going to be and things like that, but understanding the contracts between the different microservices, and how you’re going to test those, becomes a much bigger part of it. The end-to-end test, where you’re on the UI trying to do something and it returns, becomes, at least in my opinion, almost a second type of test compared to testing the microservices. The setup, I think, has to be that you’re looking at the low-level backend stuff that will bubble up to the UI. And as I said earlier, there’s the whole thing about quick changes: it might take you 30 seconds to go through a UI test to find an issue, whereas if you’re testing the microservice directly with an automation framework, you can find it in milliseconds. So for me the test strategy changes, right? It’s no longer “let’s look at the UI and everything will fall into place.” It’s more “let’s start at the microservices and build up from there.”

Suhaim Abdussamad:
Yeah, okay, that’s fair. I agree with that, because you do test the microservices, but you also test the UI. What I’m saying is that it doesn’t change very much, because by that point those tests should be part of your routine; it could be a nightly run, or part of CI/CD, or whatever. If anything at the microservice API level is failing, it’ll catch that right away. But when it comes to regression, and when I think about regression I’m thinking about the whole picture, you would still test the UI, and that part doesn’t really change. Hopefully, by that point, the number of bugs caused by backend things is negligible, because you’ve already caught those. But there may be cases where you don’t catch them. Let’s say there’s an application with 30 or 40 microservices, or even more, and one of them changes. A microservice that you test individually, or independently, doesn’t know about the other microservices it works with in your UI, and that’s where I have usually found bugs. So those you usually catch in the UI, or through other kinds of testing, integration testing or something of that sort.

Mike Hrycyk:
I’m probably gonna move us along a little bit, sorry; we’re running a little long. That was a good conversation, though, and maybe we can continue it in the comments afterwards if people ask questions. But you did raise a question for me. As you’re looking at your strategy, and you talked about having a mix of microservice automation tests and Selenium tests: should there be an expectation that, because you now have a bunch of focus on your microservice testing, you’re going to decrease the penetration or the width of your Selenium testing? Or are you really just adding a bunch more tests?

Suhaim Abdussamad:
I think you should reduce it, like the testing pyramid thing; I think somebody mentioned that earlier. By that point you should be comfortable reducing the UI tests that you have. But I do find that you still need them, because you do still catch front-end issues sometimes; that’s kind of why you write them. But you don’t need as many of those negative cases and things like that that you probably used to do before. That’s what I think, based on my experience.

Matt Villeneuve:
I don’t know if I agree with that. For me, I think you still need the negative cases, because when you’re developing a UI, even if it talks to the backend, you still have a UI component that could have issues, boundary testing on a field or what have you, right? So there are some areas where I think you do still have to have quite a few negative test cases. And in my opinion, if you’re at the point where you have automation running against microservices, you should have automation running against your UI. And if you have automation running against your UI, regression should be quite simple, because you’ll know ahead of time when your UI is going to change and where test cases need to be updated, which gives your tester more freedom to do some exploratory or negative testing on the UI. So I think it’s kind of the opposite: you have time to do more of the negative, more of the exploratory, if you have a decent UI test automation library to run.

Suhaim Abdussamad:
So when you say exploratory, do you mean writing the automated tests themselves, or just manual exploratory testing? What do you mean by that?

Matt Villeneuve:
Manual exploratory. The way that I like it to run is: if you’re writing manual test cases, you run them once, they pass, you automate them. Then, in theory, your automation takes care of running your manual tests for you, or your checks, let’s call them checks if you want to. That gives you more time as a manual tester to actually poke the bear and see if it reacts at all.

Suhaim Abdussamad:
Yeah, I used to do that too, and I agree with it; in the long term it’s easier, and you do get a lot more confidence if you have all those things automated on the front end. And by the way, on the first point you mentioned, about the negative test cases: you’re right about that. I just didn’t think that point through, so you do want those. But I don’t think I would go to every single field. If there’s a page with, like, 10 fields, and you want to check negative cases for each field, I wouldn’t do it. You check the UI behaviour on one, because presumably they’re all the same, and you’ve already tested the backend part of it. And because of the whole testing pyramid thing, if you subscribe to that idea, I would reduce the number of UI tests, Selenium tests, for that reason, because you’ve already covered some of it. But it depends on the project, right? You can decide how you want to do it and the amount of risk you’re willing to take.

Matt Villeneuve:
Yeah, I totally get it. It’s a risk, not mitigation, but a risk analysis to see what happens. I guess, just to quickly counter what you were saying: if you have 10 fields, and when they were developed somebody had to test them and they passed, then why not automate them? Then you don’t even have to check one of them.

Mike Hrycyk:
So the reason – I’m going to stop you, Matt, because I’m going to tell you the reason and then we’re going to move on, which is just going to be annoying for you. The reason is that a lot of why we’re moving to CI/CD is so we can get to continuous delivery, and continuous delivery doesn’t work when your automation runs take longer and longer. There are ways to optimize, parallelizing and so on, but UI tests are expensive: in time, in maintenance, in everything else. So what you have to do is build a balance, accept certain amounts of risk, and put safeguards around it, so that you can get your build time with tests down to an hour, so that you can push within that hour and push multiple times a day. Us testers love the idea that, “Hey, I can just have more tests and absolute proof of everything.” But we live in a universe where people want to get stuff out fast. So what we have to do is build our unit tests and our microservice tests into our safety net, so that they give us a really high level of confidence, and then have some GUI tests that maybe do an end-to-end run and test critical functions. And you’re right, someone says, “I’ve written all these tests and they do all these wonderful things,” and it’s like, yes, but it takes three hours to run, and that’s too much.

Matt Villeneuve:
Yeah.

Suhaim Abdussamad:
Yeah. And one thing you could potentially do, let’s say you have all those tests, and just as another thought: you could tag the ones you want and run them depending on what’s happening. But that’s a whole other podcast, I guess.
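Sketching Suhaim’s tagging idea with pytest markers, which is one common way to do it; the marker names here are just examples, and custom markers should be registered in pytest.ini to avoid warnings:

```python
import pytest

@pytest.mark.smoke
def test_checkout_happy_path():
    assert True  # placeholder for a critical, always-run check

@pytest.mark.regression
def test_discount_rounding_edge_case():
    assert True  # placeholder for a slower, wider regression check
```

Then `pytest -m smoke` runs only the tagged subset, keeping quick pipeline runs inside the time budget Mike describes, while the full `regression` set can run nightly.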

Matt Villeneuve:
Yeah. Yeah. It is. I get where you’re going, Mike, I get it.

Mike Hrycyk:
And we can have that discussion tomorrow. We’re gonna wrap up pretty soon, but you mentioned one term that’s a very common microservices term that I don’t think people have a great understanding of. So, Matt, define a contract.

Matt Villeneuve:
So a contract is simply the stated output from a microservice, what it will send to another microservice as a provider. And the other side of the contract is what a microservice can consume, as a consumer, and what it will understand. So you would have, say, a file stating all the data that a service can accept and how it can accept it. That is what you would base your tests on when you’re doing mocks and that type of thing. You already know what that microservice is supposed to accept and what it’s supposed to send out, so you can very easily create the environment to send data to it and receive data from it, to verify that it’s behaving correctly. The other nice thing about contracts is that if something has to change, you just update the contract, and it basically gives you a good overview of all the changes that have happened to that microservice.
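One lightweight way to turn such a contract file into a test is sketched below with the jsonschema package. The schema and URL are made-up stand-ins for a real contract, and dedicated tools like Pact exist for exactly this kind of consumer and provider contract testing:

```python
import requests
from jsonschema import validate

# A stand-in for the contract file: what the provider promises to send.
STOCK_CONTRACT = {
    "type": "object",
    "required": ["item_id", "quantity"],
    "properties": {
        "item_id": {"type": "string"},
        "quantity": {"type": "integer", "minimum": 0},
    },
}

def test_provider_honours_stock_contract():
    resp = requests.get("http://inventory.staging.internal:8080/stock/abc123")
    assert resp.status_code == 200
    validate(instance=resp.json(), schema=STOCK_CONTRACT)  # raises on mismatch
```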

Mike Hrycyk:
Good. I think that works for me. Is there anything you’d like to add, Suhaim?

Suhaim Abdussamad:
No, I agree with that.

Mike Hrycyk:
Awesome. So I think that’s reasonably clear; if it’s not, listeners, please reach out through comments, social, and the podcast links, and we can continue that conversation. Okay, we’ve used up our time, so I’m just going to throw out that you’ve got 25 seconds each to wrap it up and tell us: “Hey, microservices, good or bad for testers?” Let’s start with Suhaim.

Suhaim Abdussamad:
I think it’s good. It’s an easy way to understand the system and to be able to test it really quickly. I think it’s a good thing.

Mike Hrycyk:
Awesome. Matt?

Matt Villeneuve:
Yeah, I think microservices are good as well. They give you insight, they let you see where the system is falling down without having to go through a bunch of log files, and they let you give more value to the developers.

Mike Hrycyk:
That’s awesome. Although I don’t always think it’s the developers I want to give more value to. What about more value to the product, more value to the company, or value to the clients, developers?

Matt Villeneuve:
More value to the developers who broke it.

Mike Hrycyk:
Oh yeah. Okay, good, I like that. Alright, well, that was awesome. We actually had a little bit of debate in this one; we’re not always on the same path, so that was good. Thank you so much, guys. I think this was a great discussion, and I think it’s going to help our listeners understand microservices in a lot more depth, and that’s great. So as always, if you’d like to add to the conversation, we’d love to hear your feedback, comments, questions, etc. You can find us at PQA Testing on Twitter, LinkedIn, and Facebook, or on our website. You can find links to all of our social media and our website in the episode description. We’ll also put up a link to “That’s a Bug!”, which is Suhaim’s podcast, so you can go check it out. It’s pretty good; I’ve enjoyed the episodes I’ve listened to. It’s really smart. And if anyone wants to join one of our podcast chats, or has a topic they’d like us to address, please reach out to us. If you’re enjoying our conversations about everything software testing, we’d love it if you could rate and review PQA Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next month.

Matt Villeneuve

Matt has been testing and leading quality teams on software and hardware projects professionally since 2002. His experience includes building quality processes and tools from the ground up; leading local and global teams; working with designers and customers to fine-tune requirements; and testing software, hardware, and embedded systems using both automation and manual methods. He has worked on a wide range of technologies, both hardware and software, including a variety of platforms. He is a Certified ScrumMaster (CSM) and enjoys the human connections that are fostered in high-performing agile teams.

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 21 years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the VP of Service Delivery, West for PLATO Testing, but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks and at a railroad. Intermittently, he blogs about quality at http://www.qaisdoes.com.

Twitter: @qaisdoes
LinkedIn: https://www.linkedin.com/in/mikehrycyk/

Suhaim Abdussamad has been working in the testing field since 2006 in various roles. In 2020, he decided to start a podcast called “That’s a Bug!”, a show where every episode they dive into the ins and outs of one software bug.

Twitter: https://twitter.com/thatsabug1
Facebook: https://www.facebook.com/Thatsabug