In recent years, the process and tools used for performance testing have evolved as organizations moved to the cloud. In this episode of PLATO Panel Talks, our host Mike Hrycyk is joined by PLATO VP of Customer Success Jonathan Duncan and PLATO Senior Performance Tester Lakshmi Narasimhan. Together they discuss how the cloud has changed the way we test for performance, from making it faster to how automation has changed the game.

Mike Hrycyk:
Hello, everyone. Welcome to another episode of PQA Panel Talks. I'm your host, Mike Hrycyk, and today we have a very exciting conversation about performance testing and the cloud: how these things have intersected and how the cloud is changing performance testing. I've brought together a panel of experts to talk about this today. I think that PQA is well-suited to hold a podcast about this, because we've been doing performance testing pretty much as long as we've been doing testing, and we're in our 24th or 25th year now. So we've got a long history of performance testing. Our current performance testing team is in the six to seven people range, and that goes up and down based on the projects that we have. So we're definitely working in this space and we're doing some interesting things, and I thought it would be good to have a conversation. With that said, I'm going to introduce my two speakers today, and we'll start with Jon Duncan. Do you want to tell us who you are, please?

Jonathan Duncan:
Yeah. Thanks, Mike. So I am Jonathan Duncan, and I have spent the last 25 years, I guess now, in the development space and in the testing space. I've written a lot of code, not all of it well-written code, and performance testing is a bit of a passion because I think using the tools and technology that exist today can really help us build better code from a performance perspective.

Mike Hrycyk:
Thanks, Jonathan. And just to add a little bit, Jon is a VP here at PQA and he owns the performance testing team. So he’s definitely got his hand in the pot there. Lakshmi, how about you tell us a little bit about who you are?

Lakshmi Narasimhan:
Definitely, Mike. Hi everyone, this is Lakshmi Narasimhan. I've been at PQA for the last year as a senior performance tester, and I've handled over 10 clients with PQA during my tenure here. I completed my master's in electrical and computer engineering at the University at Buffalo, focusing on memory management, and that is what made me take this career path as a performance tester. I've worked in the US, and I'm currently helping a number of clients in Canada through PQA.

Mike Hrycyk:
Awesome. Thanks, Lakshmi. So, let's just jump right in. Most of our listeners out there are probably familiar with the traditional toolsets and how performance testing has worked, with LoadRunner or JMeter or tools like that. But what are some of the biggest differences that the cloud is making in performance testing? Jonathan, let's start with you.

Jonathan Duncan:
Yeah, so I think the biggest thing, and the most obvious one right off the start, is just the ability to drive more load than we can through a single laptop, desktop, or even a small server that we may have in our facility. The cloud allows us to put load in different places around the world, if that's what's important to our customers, and to drive significant amounts of data through large numbers of users with the horsepower that's available in the cloud.

Mike Hrycyk:
Awesome. Lakshmi?

Lakshmi Narasimhan:
And just to add a few more points there, the cloud enables a number of additional features like auto-scaling, which makes sure that your application can be scaled. By scale, what I mean is that on an unexpected day, if you're getting more hits than usual, you don't really need to go and add more CPU or more memory to your existing physical infrastructure. If the application is on the cloud, it can scale by itself, and that is the most critical thing in performance testing. So your users can continuously access your application without any interruptions or discrepancies in their access.

Mike Hrycyk:
Awesome. Yeah. To highlight one thing that I remember from my own experience: a few years ago, maybe eight years ago now, I was working at a company that did retail photo sites, and they had users across North America. When we did our performance testing, one of the things that was really difficult for us to set up came from something we had noticed, and had a lot of evidence to support: if you had users in Southern Ontario, there was a slowdown that happened in the hub between Chicago and Southern Ontario. There were a lot of uploads, so there were a lot of performance hits. If you weren't aware of that kind of thing and able to test around it, you just wouldn't have a very accurate idea of what might happen when you hit your big performance events, like Black Friday and Christmas, the really heavy periods for that kind of retail market. And you just couldn't plan and test for it because you didn't really understand how to get geographic spread. We found ways around it then, but that was really about engaging with other people who could do the testing from other places, and it was complicated and difficult and annoying. The cloud has really opened that up and made it more possible.

Mike Hrycyk:
Thinking about toolsets, has the cloud changed the tools that we're using day-to-day, Jon?

Jonathan Duncan:
I don't know that it's changed the tools; it has changed how we implement them. Early days, performance testing was, let's manually test it and we'll have that stopwatch beside it. In your scenario, it would be, okay, do we have people across the country who could test it for us? Then it was, let's install something on a local machine and we'll drive the load through that. Now, obviously, it's moved to, okay, let's put the application up in the cloud. There are things out there like Flood.io that let me create my scripts locally, but then I put them up in the cloud and have the cloud be the horsepower behind generating that load. So I think that's the biggest differentiator. With all tools, in my mind, it's really the process of performance testing that is most critical: making sure that you're actually testing with a purpose, and that when the results come out on the other end, they give you valuable information to understand the actual performance of an application.

Mike Hrycyk:
Okay. Lakshmi do you have anything to add?

Lakshmi Narasimhan:
Just to add a few more points: the tools we use, LoadRunner, JMeter, Gatling, or even Selenium for performance testing, really didn't change in moving from physical to cloud infrastructure. The only thing that has actually changed is that now we have the capability to execute these tests from the cloud. So basically you have your load generators, which can be situated in different geographical locations. Like the photo upload application Mike was describing from a few years back: in that case, you can actually simulate the load across different geographical locations, say from the east coast, from central, from the west coast, with load generators available in different places. You can definitely leverage Flood.io or another third-party provider, which can give you the benefit of using different load generators in different locations. This gives you a diverse, more realistic load, closer to what is going to happen in production, because there's no way all your production load is going to come from a single system or a single physical machine located in one specific place.
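To picture what a cloud-distributed load script can look like in practice, here is a minimal sketch using Locust, a Python load tool that isn't mentioned in the episode but illustrates the same idea as JMeter or Flood.io; the host and endpoints are hypothetical placeholders.

# locustfile.py - a minimal sketch of one load script shared by load
# generators running in different cloud regions. Locust is used purely
# as an illustration; the same pattern applies to JMeter or Gatling.
from locust import HttpUser, task, between

class PhotoUploadUser(HttpUser):
    host = "https://example.com"   # hypothetical application under test
    wait_time = between(1, 5)      # think time between actions, in seconds

    @task(3)
    def browse_home(self):
        self.client.get("/")

    @task(1)
    def upload_photo(self):
        # hypothetical endpoint, standing in for the upload-heavy traffic
        # Mike described for the retail photo site
        self.client.post("/upload", files={"photo": ("pic.jpg", b"fake-bytes", "image/jpeg")})

The same file would be started with locust --master on a coordinator and with locust --worker on instances placed in each region, so the load really does originate from the east coast, central, and west coast generators Lakshmi mentions.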

Mike Hrycyk:
Cool, thanks, Lakshmi. So these days, when you hop into a conversation with IT-minded folks and say cloud, that generally auto-translates into talking about AWS, or maybe Azure, or even Google or some other cloud service provider. So when we're talking performance testing, how do the cloud platforms fit in? Because in some ways we could be talking about stuff that is hosted on AWS and how we performance test that, but we might also be talking about stuff that's not hosted with one of those cloud providers, where we're using the cloud systems as the backbone for our testing. Is there a way you two can help shed some light on that discussion or those ideas? Maybe, Lakshmi, we'll start with you?

Lakshmi Narasimhan:
Oh, sure. So the three big cloud platform providers here are going to be Azure, Amazon, and Google Cloud, and all of them have their own versions of pretty much the same features, called by different names but giving pretty much the same service. Even with the relational databases they offer, with Azure SQL, Amazon RDS or Redshift, or Google Cloud SQL, all of them have pretty much the same capabilities and the same scaling options, but each has a few unique features, which clients or customers choose according to their application's needs. From a performance testing perspective, irrespective of what technology or what cloud they are on, we are going to do pretty much the same type of testing. The first criterion is making sure we are able to handle the expected load; the second criterion is handling more than the expected load; and then there's what happens if something fails, the failover testing. These are the key aspects of what we look into, irrespective of the cloud platform.

Mike Hrycyk:
And does that discussion change at all when your application's not hosted on one of the big three?

Lakshmi Narasimhan:
So if it is hosted on, say, a totally different cloud platform, we are still going to test pretty much the same way, with the same types of tests. These types of tests are going to be specific to the type of application, not based on where it is hosted or what type of cloud it is hosted on. And if it is hosted on physical infrastructure, we still test it. One benefit of moving towards cloud is that even if your application is hosted on physical infrastructure, you can use multiple platforms to simulate load from the cloud. Basically, you will have the capability to simulate load across geographical locations, irrespective of whether your application is hosted on the cloud or not.

Mike Hrycyk:
Awesome. Jonathan, anything to add?

Jonathan Duncan:
Yeah, I think, just to add, it's similar to the conversations that we had before the existence of the cloud: understanding where my users are accessing the application from, and then trying to find the right path to best simulate that, right? You may never get it identical. But back when we were doing performance testing on physical infrastructure, it was, okay, we know that our user community looks like this, and we need to throttle back the throughput in order to best simulate that. Right? So in this world, if we've got an application sitting in AWS and we're running the performance testing in AWS, we've got to make sure that the traffic still goes outside of AWS before it comes back in, right? Because obviously cloud-to-cloud internal traffic is not going to be the realistic scenario. So I think it's understanding how an application's been put together, the components that are there, and where the user community is going to be accessing it from, and making sure that you're doing everything you can to best simulate that.

Mike Hrycyk:
Absolutely. And that reminds me of another story from when I was at a different place, a parking meter company where we had tens of thousands of parking meters distributed across North America, and they were calling into a physically hosted server farm that we managed. When we started doing performance testing on some of that stuff, we discovered that we were getting false results, because one of the places where we were falling down was in creating independent sessions: we needed thousands of independent sessions formed by these meters, and when we generated those sessions from inside the server farm, the sessions weren't truly independent and it wasn't giving us the results that we needed. And creating those sessions from outside of that server farm at the time, because this was still seven or eight years ago, was a lot more difficult. So having the capability of a separate location to come from, AWS, Azure, whatever, really proves that your user base, the people that are touching your servers, aren't actually part of that ecosystem itself. That would have been really powerful at the time, and I'm glad that it's grown in that direction. So let's shift a little bit. We've mentioned auto-scaling a couple of times. That's the concept, and I believe all three of the big providers offer it, where when your servers are overloaded, the platform can add another server on the fly, and it has a bunch of management software that helps with that. So a question I have, and a question that might come from cost-conscious managers out there: do we still have to do performance testing when we have this idea of auto-scaling? Isn't that just taking care of everything for me, Jonathan?

Jonathan Duncan:
That's a funny one. In years gone by, oftentimes you'd hear, well, what's the solution to the performance problem? Let's add more hardware. The invention of the cloud, and then auto-scaling after that, made it, okay, well, it'll just add it for me, I don't have to think about it. That's not fixing a performance problem; it masks it, in my mind. And when another instance comes online through auto-scaling, the customer's paying for that new instance based on the amount that they use it. So in my view, it's just masking the performance problems. Here in a second, we'll let Lakshmi speak about how to best uncover those, to try to get the best-performing code. But I think it's a matter of: let's make sure it performs the very best it can, early, on a single instance, to make sure that the code is written with the best performance techniques in mind, so that auto-scaling can work to reduce your risk while still not significantly increasing the cloud bill you get at the end of the month. But maybe, Lakshmi, do you want to talk a bit about how we work through that? Because I know we all believe auto-scaling is a huge advantage, but maybe you can talk about how we make sure that the code is performing well.

Lakshmi Narasimhan:
Sure, Jon. I'd like to give an example here of what happens when your application hasn't really been performance tested, you don't have performance testers on the application, you're just trusting auto-scaling, and you go to production. When you go to production with auto-scaling enabled and there's a memory leak, auto-scaling might be a temporary band-aid: it basically adds an additional instance and your application keeps functioning as it is expected to function. Eventually, though, you're going to run out of memory in spite of adding additional instances, and it's going to happen in a loop. You're going to keep adding additional instances, this is never going to stop, and it's going to affect the cost of the project on a higher scale than hiring a performance tester. So hiring a performance tester would have been the easier option. In this specific case, what could have been done, or what should be done at an earlier stage, is that when you test the code, you need to make sure that you test it without auto-scaling enabled. That is, you need to test on the raw instance to see how much your current instance can handle even before auto-scaling kicks in. Say you're expecting 200 concurrent users accessing your application over a one-hour duration: you need to test that first, then go a little beyond it with 300 users for the same hour and see what happens. Then you can run another round of tests with auto-scaling enabled, to see whether auto-scaling actually helps you when there are 400 users. That's how you need to plan your tests. Auto-scaling should be treated as a luxury, used only when it is needed; it doesn't need to be used all the time. And it's not the case that if you have auto-scaling, you don't need performance testing. That's a huge misconception.
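To make that stepped approach concrete, here is a minimal sketch, again using Locust purely as an illustration, of running the same scenario at the 200, 300, and 400 user levels Lakshmi describes; the host, endpoint, and shortened stage durations are placeholders, and the first run would be done against a single instance with auto-scaling disabled, the second with it enabled.

# stepped_load.py - sketch of Lakshmi's stepped plan: 200, then 300,
# then 400 concurrent users against the same scenario.
from locust import HttpUser, LoadTestShape, task, between

class ApiUser(HttpUser):
    host = "https://example.com"       # hypothetical application under test
    wait_time = between(1, 3)

    @task
    def load_dashboard(self):
        self.client.get("/dashboard")  # hypothetical endpoint

class SteppedLoad(LoadTestShape):
    # (end time in seconds, target users); each stage would be ~3600s
    # in a real one-hour run rather than these shortened placeholders
    stages = [(600, 200), (1200, 300), (1800, 400)]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users in self.stages:
            if run_time < end_time:
                return users, 10       # (user count, spawn rate per second)
        return None                    # stop the test after the last stage

Comparing the single-instance run with the auto-scaled run at the 400-user stage is what shows whether auto-scaling is genuinely helping or just papering over something like the memory leak in the example.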

Mike Hrycyk:
Awesome. Thanks. A couple of things come out of that for me. One, to say it a different way: the performance testing you're doing helps highlight the health of the code that you've written, and scaling can obfuscate that, so in the end, if you don't highlight it and fix it, you'll have a catastrophe that you didn't know was coming. I think that's a really important point. Another thing it helps point out is that when you set yourself up on AWS and you auto-scale, adding another server means adding another server of the same size. But when you architect how you're going to build your infrastructure on the cloud and how you're going to implement there, you pick a server size, and it could be that you're picking the wrong server size for your system to operate optimally, often for budgetary reasons, because bigger servers cost more money. What performance testing can help prove for you is that you've picked the right server size. So I think that's also important. Now, is everything that we've said so far the same story for all the different forms of performance testing? Just for example: stress testing, spike testing, load testing, endurance testing, etc. Is the cloud equally useful for all of them, or better for some and worse for others? Do you have any opinions there, Lakshmi?

Lakshmi Narasimhan:
Irrespective of cloud or physical infrastructure, it's all going to be pretty much the same types of tests: load, scalability, stress, resilience, and endurance. And you can even go to browser-level performance testing to make sure that your end users are seeing the page load within the anticipated time. One major thing is that with the cloud, there are additional types of tests where you can test at the level of each pod. If you have that type of infrastructure enabled, you need to know how well each of your pods can handle the load. Again, this goes back to the budgeting part: you will be able to know exactly how many pods you need within your servers. You don't really need more than what is needed, but at the same time, you shouldn't have too few, assuming that it'll scale on the go.

Mike Hrycyk:
Jonathan, anything to add?

Jonathan Duncan:
I think the biggest thing is that they're all conceptually the same, but the necessity has changed over time. As applications got bigger, more complex, and more widely distributed, using the cloud to perform those tests has become almost a necessity now, as opposed to a nice-to-have. I don't foresee that changing at any point in the near future. The cloud allows for the horsepower capacity that we need to run some of those tests, so that our customers feel confident that when they move to production it is going to work exactly how they intended it to and they won't be stuck with a performance problem.

Mike Hrycyk:
Okay. So shifting again a little bit – monitoring has always been a really big part of performance testing. Without monitoring, you don't understand if there is a problem, or where the problem's coming from, or what bottlenecks need to be addressed. Has the cloud changed the way that we're integrating with those tools, or how you're using them, or what's available – Jonathan?

Jonathan Duncan:
So again, I don't think the needs have changed in terms of what information we need to capture. The biggest thing that's happened is that tools are now more sophisticated, and our ability to capture the information has become easier, if anything, with the integration of the other tools. The other piece is that performance testing is potentially this long cycle that has thousands, even tens of thousands, of records flowing through the application, so being able to sync up the timelines with the tools – integrating the timelines from a monitoring tool with the timelines from your performance testing results – has become easier. That makes it much easier for the performance testing analyst to link events together and provide information back to the teams as to what was happening, at what time, and what the likely cause of the problem was. So again, not that the needs have changed, but as tools continue to adapt, the cloud does make things a little easier to integrate everything. You can imagine, with all that data, if I was to try to run that all through my laptop and get everything synced up, my laptop just wouldn't be able to handle it – the cloud doesn't have that constraint.
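As a concrete illustration of the timeline syncing Jonathan describes, here is a minimal sketch that assumes the load tool and the monitoring tool can both export timestamped CSV files; the file names and column names are hypothetical, since every tool has its own export format.

# correlate_timelines.py - line up load-test results with monitoring
# samples by timestamp, so slow requests can be viewed next to what the
# servers were doing at that moment.
import pandas as pd

# one row per request from the load tool, one row per metric sample
results = pd.read_csv("load_test_results.csv", parse_dates=["timestamp"])
metrics = pd.read_csv("server_metrics.csv", parse_dates=["timestamp"])

results = results.sort_values("timestamp")
metrics = metrics.sort_values("timestamp")

# attach the nearest monitoring sample (within 15 seconds) to each request
merged = pd.merge_asof(
    results, metrics, on="timestamp",
    direction="nearest", tolerance=pd.Timedelta("15s"),
)

# slow requests can now be inspected alongside server-side metrics
slow = merged[merged["response_time_ms"] > 5000]
print(slow[["timestamp", "label", "response_time_ms", "cpu_percent"]].head())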

Mike Hrycyk:
Alright. Lakshmi?

Lakshmi Narasimhan:
Oh, sure. Definitely. Ten years back, if you talked about performance testing, it was about running a test and giving the client, or the end users, the response times they were looking for. Now, performance testing is much more about identifying what is actually happening behind the scenes; you basically need to give them a complete root cause analysis if a page load is taking eight seconds. With the growth of monitoring and tools like AppDynamics and Dynatrace (they've been in the market for quite some time, so I wouldn't call them new tools), and with the growth of the cloud, the two go hand in hand, and you can really dig to a level where you can find complete details of how each call on each page performs. This is going to help the developers a lot. When you are running a performance test, an endurance test, or a load test for a specified duration, you can get into the dashboards of these tools – New Relic, AppDynamics, Dynatrace, or any other tool – and see thread-level details. So you can go and identify the exact point where the performance bottlenecks are happening. This is what developers are looking for; it helps them get to the problem more easily and fix it at a faster rate. Just telling them, okay, this page is taking a long time to load, isn't going to help them much; that's the first part of what you can tell them, but you need to give additional details on where it is happening, and performance engineering is moving towards helping them rectify it, with things like code coverage analysis and whatnot.

Mike Hrycyk:
Awesome. And that's going to segue into the next big question I have, but maybe one little follow-up first, and hopefully we don't get any irritated emails from Dynatrace or New Relic on this question. If I'm hosted on Amazon or Azure, do I need third-party monitoring tools when I'm doing my performance testing, or is the packaged stuff that just comes with the cloud services enough?

Lakshmi Narasimhan:
So the packaged stuff that comes along would be basic, but probably sufficient for you to start your development work on. Say you're migrating from a physical legacy application and moving to a cloud-based application. When you're doing that, you should have done a number of tests on your existing infrastructure, so you already have confidence in how your application behaves. Moving from physical to the cloud, there isn't going to be a huge improvement in performance unless you change your codebase. But eventually, when you're trying to implement new features and new code changes on your cloud infrastructure, that is the time when these third-party tools can add huge value, on top of the prepackaged tools that come along with AWS or Azure.

Mike Hrycyk:
Okay, well, maybe we won't get irritated letters then. Cool. Alright. So this one will be for you, Lakshmi, to start with – in the service that we offer, we're not developers. We're not digging into the code, and we're not trying to fix the problems that we find. But in performance testing, or good performance testing services, we make suggestions on things that development should look at when there are issues, or ways that they can try to mitigate the issues that we found. Has the new cloud-based model changed the way we give advice, or the advice that we give, in your experience?

Lakshmi Narasimhan:
Thanks, Mike. That's a very good question. With the regular use of CI/CD and DevOps pipelines and whatnot, the technology is moving at a very fast pace, people are trying to identify performance defects at a very early stage, and very few people are still following a waterfall model where you come to performance testing one week before the codebase goes to production. Now we are in an agile world. With performance testing moving into that agile world and with applications being on the cloud, performance testers and performance engineers can definitely give additional insights using all these available tools, on top of the testing which we do. When we do the testing, we get our client-side metrics: response times, 90th and 95th percentiles, hits per second, throughput, and all of these values. But when you integrate with the cloud and have additional performance monitoring enabled there, you'll be able to get additional details. If you see a maximum response time of 27 seconds while your 90th percentile is five seconds, which is within your tolerance, you might still need to go and dig deeper to find what caused that 27 seconds. With the tools we have and the applications being on the cloud, it's easier for us to pinpoint those issues, and this is actually helping developers fix them at a much faster pace, rather than just having them sit on the backlog to be fixed eventually.
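To illustrate the client-side numbers Lakshmi mentions, here is a minimal sketch, assuming the load tool can export raw response times to a one-column file of milliseconds (the file name and the four-times-p90 outlier rule are hypothetical choices), that computes the percentiles and flags outliers like that 27-second maximum.

# percentiles.py - compute the client-side metrics discussed: percentile
# response times, throughput, and the outliers worth a deeper root-cause
# look against server-side monitoring.
import numpy as np

response_times_ms = np.loadtxt("response_times_ms.csv")  # hypothetical export
duration_s = 3600                                         # a one-hour test

p90, p95 = np.percentile(response_times_ms, [90, 95])
print(f"requests:   {response_times_ms.size}")
print(f"throughput: {response_times_ms.size / duration_s:.1f} req/s")
print(f"p90: {p90:.0f} ms   p95: {p95:.0f} ms   max: {response_times_ms.max():.0f} ms")

# responses far beyond the p90 are the ones to correlate with APM traces
outliers = response_times_ms[response_times_ms > 4 * p90]
print(f"{outliers.size} requests took more than 4x the p90")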

Mike Hrycyk:
Awesome. Jonathan, anything you’d add or disagree with?

Jonathan Duncan:
Thanks, Mike. I'm definitely not going to disagree with anything that Lakshmi said, but his last point, where he talked about being able to fix things at a faster pace, is a nice little segue into that whole world of CI/CD and DevOps and the integration of some sort of performance test into that pipeline. Even if it's just a very basic baseline of performance metrics for an application, it allows results to be captured quickly, potentially even on a daily basis. Having developed code in the past, I know it was always easier when I got feedback immediately. If I knew the next day that I had broken something, it was easy to remember; whereas with something that was out there for a month, it was hard to get my mindset back into that right spot. So I think everybody should be looking at ways to not leave performance testing as this final activity. Obviously, you want to do that check at the end to make sure everything is still good before you release, but getting information throughout the entire development life cycle, potentially even right from Sprint 1, to understand performance improvements or degradations over the course of the project, I think is instrumental in making for higher quality code and being able to release it faster with confidence.
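A minimal sketch of the kind of basic baseline gate Jonathan describes: assuming the nightly performance run writes a small JSON summary (the file names, field names, and 20% tolerance are hypothetical choices), this script compares it against a stored baseline and fails the pipeline step when the 90th percentile regresses too far.

# perf_gate.py - a simple CI/CD performance gate: exit non-zero, failing
# the pipeline step, if the latest run regresses against the baseline.
import json
import sys

TOLERANCE = 1.20  # allow up to a 20% p90 regression before failing

with open("baseline.json") as f:
    baseline = json.load(f)    # e.g. {"p90_ms": 850, "error_rate": 0.001}
with open("latest_run.json") as f:
    latest = json.load(f)

failures = []
if latest["p90_ms"] > baseline["p90_ms"] * TOLERANCE:
    failures.append(f"p90 regressed: {latest['p90_ms']} ms vs baseline {baseline['p90_ms']} ms")
if latest["error_rate"] > baseline["error_rate"] + 0.01:
    failures.append(f"error rate rose to {latest['error_rate']:.2%}")

if failures:
    print("Performance gate FAILED:")
    for reason in failures:
        print(" -", reason)
    sys.exit(1)

print("Performance gate passed.")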

Mike Hrycyk:
Awesome. You've highlighted that there's an area here that could have its own podcast in the future, around the idea of continuous performance testing, or performance testing and DevOps. So what I'd like to recommend to our listeners is: if you have an interest in this area, use one of our social channels to give us that feedback and indicate your interest. So that's great, thanks for that answer, guys. Now I'm going to attempt to stymie you with a new question, one that we haven't really prepped for, and I'll start with you, Jon. Going into a new contract, a new relationship, starting a new performance testing gig, have you built a preference around recommending whether we build our own self-run set of tests that we host in the cloud and leverage ourselves, or use one of the newer services, like Flood.io, that do the test management and the geographical management and send the packages on their own? Have you built a preference? How would you talk to a client about making that choice?

Jonathan Duncan:
Great question, Mike! One that I definitely didn't prepare for. That said, what would I suggest? Do I have a preference? I think everybody's got biases as to how they want to do things, but for me the biggest thing is understanding what the end goal of the performance test is. The obvious end goal is to provide information about the performance of the application, but thinking broader: what are the long-term goals for our customers? Are they going to maintain and build this on their own? If so, what are the skills of the folks there? Do they have a really technical team that can deal with the monitoring and set up their own infrastructure and run it? If not, potentially, options like Flood.io or BlazeMeter are the best choice. If they're extremely technical, I would probably go down the path of, no, let them stand up their own infrastructure and tear it down on an as-needed basis. So I think 'it depends' is the real answer to that question. It depends on what makes sense for that customer at any given moment in time.

Mike Hrycyk:
You did great for me putting you on the spot. Lakshmi, anything to add? In a lot of cases you're the hands-on performance tester. Have you built a preference?

Lakshmi Narasimhan:
I'd like to go with an example here. Let's consider two clients. For our first client, performance testing is going to be a one-time activity. If it's a one-time activity and they're looking for load across geographic locations, it would be pretty expensive for them to set up multiple load generators on AWS or Azure or Google Cloud by themselves just to run these tests once, for something like a two-month initiative. In that case, it's better to use Flood.io or another third-party provider that already has its infrastructure hosted on the cloud. All you need to do is create your scripts and upload them, you'll be able to see their dashboard to watch how things perform, and, as I said earlier, you'll still have your own client-side metrics as well. Now let's assume client two is going to be running these tests over a two-year engagement and will be using them quite often. In that case, I would generally recommend setting up their own infrastructure. By their own infrastructure, what I mean is they can have their own AWS machines configured as load generators, with suitable memory and sizing options. You might need to do some testing up front to figure out exactly what's needed, but then they can use that existing infrastructure a number of times and don't really need to go to a paid service.

Mike Hrycyk:
Which makes a lot of sense if you’re going to integrate it in your CI pipeline.

Lakshmi Narasimhan:
Yes, exactly.

Mike Hrycyk:
Awesome. Okay. So, we've had a great conversation. I've got one final wrap-up question that will also be our conclusion. We'll start with you, Jonathan. Any last thoughts you have on the future of performance testing in the cloud?

Jonathan Duncan:
So that's a tough one. Having done this for 25 years, what I have figured out is that I am unable to determine what the future is. I do know that it will change. I don't believe the cloud is going to go away by any means; if anything, there will be more adopters, and it will be in everybody's tool bag as they go in to build out new projects. One of the things I think, and maybe it's another talk for another day, is that the addition of edge computing – taking the power of the cloud and moving it a little closer to where end users are – is something that performance testers are going to need to figure out. How do I test that? How do I confirm performance across the cloud, edge computing, and my local environment? That's going to be an interesting one to solve, but I don't think it's unsolvable. I think in order to solve it, we all just need to take a step back, look at what's happened in performance testing over the years, look at how we solved every challenge along the way, and then use some of those tricks and tools we picked up over our careers to figure out how to solve problems like edge computing and its performance. I don't know what the future is, but I know it's going to be exciting, filled with lots of things that will challenge the minds of testers.

Lakshmi Narasimhan:
From my end, working with a number of tools and a number of different clients, what I personally feel could be the next step is something more oriented towards end-user, browser-based performance testing. Right now we have LoadRunner TruClient, which still consumes a lot of resources for you to ramp up to 1,000 users, so you might need a number of load generators to simulate that. I'm hoping that there will be a solution for this that uses a number of headless browsers to simulate the load through the UI. That is what the business has been looking for of late: they are more interested in seeing what the end user experiences, rather than just the server-based response times. We're waiting to get that kind of capability from the load testing tools.
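As a small illustration of the browser-level measurement Lakshmi is describing, here is a minimal sketch using headless Chrome through Selenium (which the episode mentions); the URL is a placeholder, and a real UI-level load test would run many of these browsers in parallel on cloud load generators, which is exactly the resource cost he's pointing at.

# browser_timing.py - measure end-user page load time with a headless
# browser via Selenium, using the browser's Navigation Timing data.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")   # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")    # hypothetical page under test
    # time from navigation start to the load event, as the browser saw it
    load_ms = driver.execute_script(
        "const t = performance.timing;"
        "return t.loadEventEnd - t.navigationStart;"
    )
    print(f"page load took {load_ms} ms as seen by the browser")
finally:
    driver.quit()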

Mike Hrycyk:
Awesome. I think that's a very interesting point that bears its own discussion, around the idea that a lot of performance testing, when we're talking about mobile, has moved server-side to make sure that your bottleneck isn't your API calls and your server calls and things like that. So I think that's interesting and bears watching. Thank you very much for that. Okay, I'd like to thank the panel, Lakshmi and Jonathan, for joining us for a really great discussion about performance testing in the cloud. There's a lot of stuff here, and maybe listening two or three times will help you really get what you want out of the topic. Thank you to our listeners for tuning in. I love our audience, I think there's a lot of room here, and I love that there are one or two future talks that I can see coming out of this. If you, the listener, have anything you would like to add to our conversation, we'd love to hear your feedback, comments, and questions, and we'd like to continue the discussion where we can. You can find us at PQA Testing on Twitter, LinkedIn, Facebook, or on our website, and you can find links to our social media and website in the episode description on the platform where you listen to us. If you are enjoying our conversations about everything software testing, we'd love it if you could rate and review our panel talks on whatever platform you're listening to us on. Thank you again for listening, and we'll talk to you again next month. Thank you, everyone.

PLATO Vice President of Customer Success Jonathan Duncan

Jonathan Duncan is the VP of Customer Success at PLATO Testing based in Fredericton, NB. Jonathan has over two decades of wide-ranging experience across all facets of the software development cycle. He has experience in a variety of industries that stretch from the public sector to start-ups in satellite communications and everything in between. Having worked in organizations from both the development and testing standpoints provides Jonathan with the ability to see problems from all aspects allowing for complete solutions to be delivered.

Lakshmi Narasimhan is a Senior Performance Tester at PLATO Testing with over 6 years of experience in performance testing. He has worked with multiple clients from different domains, including telecom, petroleum, insurance, and education. He completed his Master's degree in Electrical Engineering at the University at Buffalo and has worked for clients across North America. He is skilled in LoadRunner/Performance Center, Apache JMeter, the end-to-end non-functional testing lifecycle, and application performance monitoring. Lakshmi is currently pursuing a part-time MBA at Wilfrid Laurier University. Outside of work, Lakshmi enjoys playing online games and binge-watching TV shows.

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 21 years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the VP of Service Delivery, West for PLATO Testing, but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks and at a railroad. Intermittently, he blogs about quality at http://www.qaisdoes.com.

Twitter: @qaisdoes
LinkedIn: https://www.linkedin.com/in/mikehrycyk/