While we learn a lot from discussions of tools, concepts, and processes at PLATO Panel Talks, we think some of the best lessons come from being out in the wild and sharing the stories of our experiences with one another. So for the summer edition of PLATO Panel Talks, our host Mike Hrycyk is calling a few of our software testing friends and asking them to share their stories and lessons learned while being out in the field of software testing. Hopefully, sharing these fun, surprising, and challenging stories can help us all become even better software testers.

Mike Hrycyk:

Hello, everyone. Welcome to another episode of PQA Panel Talks and as always I'm your host, Mike Hrycyk. For this episode, we thought we'd bring you something a little different. We're going to bring you a summer edition of our podcast. Normally we try to bring you interesting content that you can directly apply to your day-to-day. Knowledge-filled discussions that you can directly relate to. But being a good tester is more than that. Experience builds the capability to recognize bugs just as much as learning about tools and processes does. For the summer edition, we thought going a bit lighter would be a good idea, so you can listen to our podcast by the pool or wherever you are, but we still wanted to provide some testing value. So what we're going to do is add our experiences to your own, so that you can react even better to the next thing. PQA Panel Talks is friends with a lot of testers with lots of stories about being out there in the field of software testing, but they don't always get to share. So for this episode, I'm going to be calling in with some of our friends, some familiar voices you've heard before and some new, to hear some testing stories that challenged them or changed how they think about testing. I might even share a story of my own. For our first guest, we're heading over to New Brunswick to speak with an old friend of the podcast and a veteran of automation at PQA, John McLaughlin. Hi John, what testing story do you have to share with us?

John McLaughlin:

Hi. So the story I have is one that I think of regularly when similar scenarios come up. It was a number of years ago now, maybe in the range of 10 years ago. Testing an application, we were using QuickTest, or, I think HP had bought it by that time, so it would have been rebranded to UFT. The application we were testing did a lot of things, but one of the core capabilities was registering a client, and a client in this context is basically a customer who would come in for a service or to make a change to their profile. So I had been asked to run a script to help with other forms of testing, which required the creation of a large volume of clients or customers. My test would basically just open the application, go to the registration form, create a client, and then finalize the registration process. And then repeat, repeat, repeat, repeat again. The goal was to register a couple of hundred clients into the system. I found that after about 50 or so clients were entered, my system would slow down really noticeably and the browser would eventually just crash and die. So after some troubleshooting and kicking that around for a while, I watched it with Process Explorer and observed that with each iteration that created a client, the memory my browser was consuming was going up and up and up and up. With some extra digging beyond that point, I found that each time a client was created, basically a session was added to the cookies for that specific client. But every time the form was saved and then closed and then reopened, that cookie wasn't cleared and the cache wasn't cleared. So my browser cache was getting filled up really quickly with all of the clients I was creating, and then it would just overload the browser and kill the test, basically. That defect has stood out to me ever since it happened, because it's not something we would have run into in the normal day-to-day testing we were doing, where normally we would have just been creating one client or customer at a time and stepping through a workflow. This one was repeating a process of continual client registrations. After investigating and finding the issue, it turned out that this was a real-world kind of use for the application, something that really would have occurred. I don't think we would have found it testing manually, just because of the sheer volume of clients and actions it had to take on the form. That's probably one of the defects I always remember as an interesting one, found by us using an automation tool to help me get there.
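For readers who want to picture the setup, below is a minimal sketch, in Python with Selenium rather than the original QuickTest/UFT, of the kind of loop John describes: register clients over and over and watch the browser's memory between iterations. The URL, element IDs, client count, and growth threshold are hypothetical placeholders, not the real application.

```python
# A rough sketch of an endurance-style registration loop: create clients repeatedly
# and watch the memory used by the browser between iterations. Selenium + Chrome
# stand in for the original QuickTest/UFT and IE8; the URL, element IDs, and the
# 3x growth threshold are hypothetical.
import psutil
from selenium import webdriver
from selenium.webdriver.common.by import By

REGISTRATION_URL = "https://example.test/clients/register"  # hypothetical

def browser_memory_mb(driver):
    """Approximate memory of the driver process plus its children (the browser)."""
    root = psutil.Process(driver.service.process.pid)
    processes = [root] + root.children(recursive=True)
    return sum(p.memory_info().rss for p in processes) / (1024 * 1024)

driver = webdriver.Chrome()
baseline = None
try:
    for i in range(200):  # goal: a couple of hundred clients
        driver.get(REGISTRATION_URL)
        driver.find_element(By.ID, "firstName").send_keys(f"Load{i}")
        driver.find_element(By.ID, "lastName").send_keys("Client")
        driver.find_element(By.ID, "saveAndClose").click()

        used = browser_memory_mb(driver)
        baseline = baseline if baseline is not None else used
        print(f"iteration {i}: browser using {used:.0f} MB")
        # Fail fast if memory keeps climbing instead of being released -
        # the never-cleared cookie/cache in John's story.
        if used > baseline * 3:
            raise AssertionError(
                f"Possible leak: memory grew from {baseline:.0f} MB to {used:.0f} MB")
finally:
    driver.quit()
```

The memory check is the whole point: without it, the script just dies quietly somewhere past client 50 and the leak looks like flaky automation rather than a product defect.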

Mike Hrycyk:

So would this have been maybe your first foray into performance testing, John?

John McLaughlin:

That's an interesting way to think about it. It wasn't the traditional form of performance testing, though. I wasn't triggering a volume kind of breakage, where 50 or a hundred people are entering a client at the same time; it wasn't that type of issue, which is what traditional performance testing usually relates to. This was more of a session, I would almost say an endurance style of test, where there's only one user acting, but they're doing the same process over and over and over again. Because of a flaw with how the cache was cleared each time that particular form was closed, it loaded up the system. And it wasn't even necessarily loading up the system, it was loading up the browser. This was at the time too – I didn't mention this, but this was at a time when the standard we had to use was Internet Explorer 8, which is not the fanciest thing in the world – but the cache got loaded up in the browser and then killed the browser. So while it kind of had the illusion of a performance effect, I don't think I would put it in the category of, say, traditional performance testing.

Mike Hrycyk:

Hmm. Yeah. Cause you don't normally performance test a single browser. I mean, it was definitely what I would qualify as a memory leak, I think, but that's interesting! Yeah, and I guess you said that this is definitely something that would happen in the real world. I guess in the real world, it would take a lot longer though, cause you're not going to register a whole bunch of times on your own PC, but the more you did within a session, right? So if you kept a session going today and tomorrow, it would build up, right?

John McLaughlin:

That's right. It would take a very specific type of day for them to have encountered this. The reason I thought it could apply to the real world is that, from what I understood, when this application went live there was supposed to be one individual whose task it was to register clients. If it happened to fall on a day when they had a really busy morning and they were just registering client after client after client, that would have occurred. But on most days, in a real-world kind of queued environment, one customer comes up and registers, then you move on down the line, and there's a bit of a lull or a break in between the registrations you have to do. It wouldn't have been caught as easily at that point because the cache would have taken a much longer time to load up and eventually die. But if it was one person's task on a really busy day to register clients, it could've presented itself at that point.

Mike Hrycyk:

And nobody likes their browser just blowing up, especially when you can't really attribute it to anything. And especially in these days with multiple tabs open, it's hard to say, yes, it was tab X that did it. So I think that was a really good find.

John McLaughlin:

It was one I was pretty excited about at the time, cause it was the first time I had found anything like that – a memory leak or anything to do with memory usage. It was exciting at the time, and it's something I've remembered ever since. I've told that story a number of times to the various clients I've gone to since it happened. Basically, over the years of being here and doing this, I've kept that story in mind when considering the consequences of how and when and why we test certain things.

Mike Hrycyk:

Awesome. Thanks, John. That was a great story!

Mike Hrycyk:

Our next guest with a testing story is Christin Wiedemann. Now you’ve all probably listened to Christin in prior podcasts, but this is her chance to give us a little vignette into her history in testing. So welcome Christin and thank you.

Christin Wiedemann:

Thanks Mike. And thanks for inviting me again. I appreciate it. I was thinking about this topic and sort of the interesting bugs I've found or interesting test situations, and it really brought me back to the early stages of my career. This would be over 10 years ago, when I was testing retail software. It was very interesting, but also challenging work, because you're working with peripherals, a lot of different hardware. And anyone who's tested systems with hardware knows that there are additional challenges. But you're sitting there in your lab, you have everything set up, you get pretty confident and learn how things work, and you get a little bit overconfident, thinking that you actually also know how the users are going to use the things you're building. But when you're standing in a warehouse at five in the morning, just before the store opens, it is quite different.

Christin Wiedemann:

So I was part of a lot of store roll-outs when new stores opened, or when stores switched over to this hardware, and being there, working with the staff, was quite an experience. It really opened my eyes to what the users are like and what it's like for them to be exposed to a new system, to watch them use the previous system, which we might have thought was too simplistic. It was old-fashioned, it was inefficient, but they were really good at using it. And we gave them this brand new, amazing system that was really hard for them to use, and the peripherals didn't fit into the small cash register spaces, things like that. And also appreciating their working conditions, how hard it is to stand on that concrete floor for eight hours a day and serve customers while trying to use all these new tools that we're giving them. So I was very humbled by that experience and it definitely changed how I view end-users and end-user advocacy. I would say those early days, testing out in the field like that with the end-users, really changed my outlook on testing.

Mike Hrycyk:

Excellent. Reminds me of my days back with our legacy green-screen applications. We used to have people you'd watch come into a screen and they would press tab, enter, tab, space, space, tab, enter, space, put in an X, and then do that again, right? They would go through the screen and you'd be like, they just have these memorized patterns. And testers today have no ability to put themselves into that type of brain set for what they're testing. They just can't relate. I mean, I guess we don't do that as much, but even in all applications now, people build these habits and these patterns, and it's really hard for our logical brains to put ourselves into a brain set where that becomes normal, because we don't have months and months of practice. So yeah, it's really a good thing for people to think about.

Christin Wiedemann:

Well, and it definitely made me realize there's a difference between what we think is the best solution for people, the solution people want, and the solution people can actually take advantage of. Cause you're absolutely right, there's sometimes just a lot of muscle memory in how to use these old-fashioned systems that are keyboard-based. And back then, 10 years ago, when they got touch screens, that was still a fairly new thing. People didn't quite know how to use them, and they weren't always super accurate. And it's not for us to tell people what's going to make their lives easier.

Mike Hrycyk:

Yeah, that reminds me of back when Windows first came in at the railway where I was working. They had this fight that went all the way up to the VP level about leaving FreeCell and Solitaire on the machines. And the argument was that it would help people understand how to use a mouse. And it was true. Just having a game, gamifying it, was the only thing that helped people understand and relate to the new system. And that's where you start building your confidence.

Christin Wiedemann:

I think that’s super clever. That makes a lot of sense.

Mike Hrycyk:

Well, that’s great. Thank you so much, Christin. We’ll let you get back to your day, but that was a great story.

Christin Wiedemann:

Thanks, Mike.

Mike Hrycyk:

For this next story, we're welcoming a first-time guest to PQA Panel Talks. It's Afshin Shahabi. He's worked with PQA for seven or eight years, he was the first or second employee in our Vancouver office, and he's a great tester with a whole bunch of experience in banking and a number of other places. I'm excited to hear the story that he's brought for us today. So with that, over to you, Afshin.

Afshin Shahabi:

Hello everyone. I found a defect in the iPhone map directions that the client was not able to fix. It went up to Apple, and it turned out that the logic they were using to find the correct direction, compared to Google Maps, was incorrect. This issue, plus other issues, caused Apple to change the dev lead for Apple Maps, and then they rewrote the whole of Apple Maps. That was the case I reported at one of our clients, as I was testing the map for them.

Mike Hrycyk:

That's a pretty big impact, for someone to get reassigned and then to change the entire thing all the way back up at Apple. It almost makes you famous, only you're probably anonymous. But what's not clear to me, Afshin, is what the bug looked like. If I was a user, what would I have seen?

Afshin Shahabi:

On the iPhone, when you actually ran the client's app – the client was a bank – you would search for the nearest ATM or bank branch. The map was supposed to show you the closest possible way to get from your location to that ATM or branch. When it draws the direction, it has a dotted line. By looking at it, and then running the same search on Google Maps on a non-iPhone device, which was a PC, I found that the Google route was actually closer – maybe by 50 meters or so, but it was closer. And when you're doing something like 100 kilometres or 30 kilometres, that extra 300 meters plus 50 meters that the route was longer creates some issues. So for me, initially it looked okay, and then I thought, let's just compare that with Google, and that's how I found that it was wrong. This is a defect.

Mike Hrycyk:

Did you put in your writeup, Google does it better? Cause I think that’d be a really fast way for someone to lose their lead at Apple.

Afshin Shahabi:

I did take a picture of the Google Maps route, and I took a picture of the iPhone with the direction that the app was using. And I just let people compare. Yes.

Mike Hrycyk:

So I think that's the number one fastest way to get any sort of action item out of Apple. So that's cool. That's a really interesting bug. I don't know if it would have saved a life in a banking app, because it's not that urgent where you end up, but if it was a problem all the way back at Apple, it could have ended up with people in weird places or in the woods or in dangerous areas. So that's really cool. Good job, Afshin.

Afshin Shahabi:

Just one thing to add: the app they were running was the number one banking app in North America, and it was number two or three in the whole world. So they're very particular, and they try to catch everything they can so that they can stay number one in North America. Even such a minor thing got the attention of the PM and all of the stakeholders. That was the reason they took it seriously.

Mike Hrycyk:

Oh, that's awesome. Well, I like it when any bug gets taken seriously, but you know, the customer is king. So if the customer is going to be unhappy, I like that people react. So that's awesome. That's a great story. Thank you so much, Afshin.

Afshin Shahabi:

Thank you very much for listening to it.

Mike Hrycyk:

Thanks again Afshin. We're going to let you head off to enjoy your day, and next, we're going to give our friend Shawnee Polchis a call. Shawnee is a tester with our PLATO team, and you may have heard her when she did our 'Women in Testing' episode last year with Christin Wiedemann. She's got a bug story from her early days in testing, which is getting to be longer ago now – you're no longer a spring chicken in the testing world, Shawnee. So that's pretty awesome. Welcome Shawnee, and tell us your story.

Shawnee Polchis:

Thank you. A few years ago, I worked for a company that compiled data from a bunch of search engines and then resold that information to their clients. So I created a user with my name on it so I could use it for testing, and someone else was using it and ended up putting information in it that created this huge defect. The reason it became such a huge thing is that it actually took quite a few months to fix, as well as a couple of different teams digging into it quite a bit. But because my name was used on it, they dubbed it the Shawnee defect. So in the meetings, they would always call it the Shawnee defect. And my team lead printed it out and posted it on the wall because he said that he had never heard of someone having a bug named after them before. He just thought it was funny and wanted me to be reminded of it every day, so he printed it out for me.

Mike Hrycyk:

Oh, well, now I'm curious – are you still connected to anyone? You should ask if they still talk about the Shawnee defect.

Shawnee Polchis:

I don't know if they have it within their company anymore. Unfortunately, I'm not as connected with them anymore, but I do see a few people around the office from the team that worked on it. And sometimes it comes up.

Mike Hrycyk:

You know, in my 20 plus years of testing, I don’t think I’ve ever had a bug named after me. So that’s –

Shawnee Polchis:

That’s what he said as well.

Mike Hrycyk:

So, you’re pretty famous. Well done, Shawnee!

Shawnee Polchis:

Thank you.

Mike Hrycyk:

Alright. Well, thanks. That was cool. Thanks for your story, and that's it – have a great afternoon.

Shawnee Polchis:

You too!

Mike Hrycyk:

Well, folks, it's not every day that you have a bug named after you. I don't think I've ever had one named after me. I'm really curious – have you ever had a bug named after you? If you have, let us know! I wonder if our next guest, an old friend of our podcast, has ever had a bug named after him? Welcome back, Nat Couture. Why don't you tell us what your favourite bug is?

Nathaniel Couture:

Absolutely. So when you sent me the request, I started going back through some of the bugs that have come to mind throughout my career, and one in particular stood out. It was while I was working on a product that did analysis on diagnostic images. As you can imagine, your expectation as a user of such a system is that when you're looking at a diagnostic image – again, this was industrial imaging, not medical – the things that you do with that image are quite important from an engineering perspective. So one of the tasks the software performed: our hardware would capture the image itself, or the data behind it, and then our software would produce the image. And then you, as a user, could use the software we provided with it to perform some calculations. One of those things was we could image a tank or a pipe, and within the image of that pipe, you could measure the wall thickness. There was this little drag tool the user could use to draw a line across the wall in the image, and it would provide kind of an estimate of the thickness of the remaining wall. That's important from an engineering perspective because these pipes or tank walls will corrode and thin over time, and when they reach about half their original thickness, it's time to replace or repair the asset. What we ended up finding was that through the use of the platform, the measurement could be manipulated by the functions you applied to the image. What was neat, or unique, here is that as testers we were kind of playing around with it, and the value would change depending on what filter you applied to the image. Sometimes the various filters made the flaws in the pipes kinda more apparent than others. But what we also found is that the thickness changed depending on what filter you had or what zoom level you had. So the running joke was, how thick do you want it to be? We could take these images of pipes and manipulate the thickness to pretty much anything you wanted with the software tool. So it was definitely one of those interesting bugs where, if left uncorrected out in the field, an engineer might make some pretty serious mistakes just based on how they used the software. So that's my bug contribution to the podcast.
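A measurement like that should depend only on the data, not on how the image is displayed. Below is a hypothetical sketch of the kind of invariance check that would have caught it: take one reference image, run the same thickness measurement under every combination of display filter and zoom level, and assert the values agree within a small tolerance. The measure_wall_thickness function, the filter names, and the tolerance are all stand-ins, not the real product's API.

```python
# Hypothetical sketch: wall-thickness measurements should not change with
# display-only settings such as filters or zoom. Filter names, zoom levels,
# tolerance, and measure_wall_thickness are placeholders for the real tool.
import itertools

FILTERS = ["none", "edge_enhance", "contrast_stretch"]  # hypothetical filter names
ZOOM_LEVELS = [1.0, 2.0, 4.0]
TOLERANCE = 0.05  # allow 5% for the drag-tool error Nat mentions, no more

def measure_wall_thickness(image_path, display_filter, zoom):
    """Placeholder for driving the product's drag-measurement tool."""
    raise NotImplementedError("hook this up to the real measurement tool")

def check_display_invariance(image_path):
    settings = list(itertools.product(FILTERS, ZOOM_LEVELS))
    reference = measure_wall_thickness(image_path, *settings[0])
    for display_filter, zoom in settings[1:]:
        value = measure_wall_thickness(image_path, display_filter, zoom)
        assert abs(value - reference) <= TOLERANCE * reference, (
            f"thickness {value} with filter={display_filter}, zoom={zoom} "
            f"differs from reference {reference}")
```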

Mike Hrycyk:

So did you find it by accident or had you written a test case that made it obvious?

Nathaniel Couture:

It was more experimental. I mean, we had a test case to make sure that each function would more or less do what it said as we were going through them – cause there was a variety of tools there for the users – but yeah, it wasn't a specific test that found this. It was more through exploration and use of the tool. And it was actually a bit coincidental or accidental that we found it, because you had to apply a particular filter or pre-apply some functions to the image for this to happen. So it was a bit accidental that we found it.

Mike Hrycyk:

Yeah. I mean, as I was listening, I was thinking that wouldn't be easy, because you're probably going to write a test case for this filter, and then you're going to write a test case for that filter, and then you're going to run a test case for another filter. But it's not absolutely obvious to say, well, okay, I should write a test case where I apply this filter and then this filter and then this filter, and compare the results. So you have to be clever and awake, right? That's one of the things about being a tester – being awake and noticing things.

Nathaniel Couture:

Absolutely. And, I mean, obviously, as we were releasing the software, we were engaging with potential customers, and they were playing around with it. And there were some other flaws in the design of such a tool. We were kind of mimicking some of the tools available in other, like, medical platforms, stuff like that. And we knew there were some weaknesses – it was hard to draw the line straight across the features – so there was some error just in the way the tool was built into the software as well. But again, this was a specific bug written into the software that basically guaranteed that your values were going to be wrong. There were some inherent design limitations to the way these tools could work too. Yeah. It was kind of neat.

Mike Hrycyk:

Yes, totally awesome bug. Thanks. And there’s a lesson in there somewhere. I’ll let our listeners figure out what it is, but that’s awesome. Thanks, Nat.

Nathaniel Couture:

No problem. Glad to contribute.

Mike Hrycyk:

Alright. So we’ve brought in some great friends of the podcast so far but for the summer edition, we wanted to bring in someone extra special. We have the founder of PQA and PLATO, Keith McIntosh. He is also a tester. He started a testing company and he’s been testing most of his life. So, he’s got tons of stories and he’s come here to tell us one today. So over to you, Keith, what’s your story?

Keith McIntosh:

Well, I was trying to think of a good story to tell about your testing ability Mike, but some things are better left unsaid. So I was thinking about stuff we used to do. When I first started the company, I told a guy I admired what I was going to do, and he said, well, testing is great, but you could really create a great company if you found a way to make requirements better. You know, most of the problems, the bugs, aren't actually bugs in code; they're bugs in missing requirements or misunderstood requirements or misapplied requirements. So my story, I guess, is around that. One of the projects we did was with another gentleman who works with us now, who will remain nameless to protect the sort-of innocent – he was working at a big airline. We wrote a lot of tests for them in WinRunner and Test Director at the time. We'd run a test in WinRunner and put the reports into Test Director, and you'd come in the morning and look at the results, and it worked pretty well.

Keith McIntosh:

We wrote about 1000, maybe 2000 test cases for them, for that project, over the course of a few months and ran them. But the unfortunate part was that the goal – the development team's goal, which our folks who now work for us were working toward at the time – was to get all the test cases to pass. So we had about 2000 test cases, you'd run them at night, and you'd come in in the morning and look at them, and anything that had a green checkmark saying it passed, they just never ran again. The whole goal was to get a thousand checkmarks. I don't think that goal was applied in the proper way. The whole idea of WinRunner being able to set a status of pass or fail was really important for automation back then. There's a whole lot of different ways that would happen now, but back then it was really simple. There was a global variable that the test would set. You could say it was pass or fail, and it would report into Test Director. You'd just look at thousands of results in the morning and go back and look at the ones that were wrong. So we had a project we were working on, and I hired a new person, new to testing, a coder. And I told him, you know, we're going to write these test cases, and make sure that when you start a test case, the global variable is set to blank – set to null, or whatever. And when you run the test case, you set the test case to pass or fail, set the variable to pass or fail. So we did that, and they wrote several hundred tests, and they ran, and it was a pretty good piece of software, so they all passed. And back in those days, as I said, most of the bugs were found in actually writing the tests. The tests typically didn't find many, because you were working them out as you were building them, because you had to go through the interface mostly. Anyway, so the tests were all passing. That was all good. We'd run them again and they'd all pass. And one day we ran a new build, and I knew that things were going to fail, and I came in, and they had all passed. So I went back and looked at a couple of test cases, and the guy had taken all the test cases, written them all nicely, made sure he set the global variable to blank at the start of the test, ran it, and at the end of the test case, he set that global variable to pass. So, clearly, I didn't specify in the requirements for him writing the test cases exactly what I wanted to have happen, and I should have done a better job of being clear on requirements and on expectations. But it's also a sign that you can't just take the status of an automated test as gospel. You need to go back and debug it, because just as programmers make mistakes in code, testers can make mistakes in their test cases and code too, and you need to go back and check.
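The original scripts were written in WinRunner's TSL, but the lesson translates to any framework. Here is a minimal Python-flavoured sketch of the anti-pattern Keith describes next to what was actually wanted: the status has to be derived from a check of the result, not assigned unconditionally at the end of the run. The register_booking function and its expected result are hypothetical stand-ins, not the airline's application.

```python
# A minimal sketch of hard-coded versus verified test status. register_booking and
# the "CONFIRMED" result are hypothetical stand-ins for the application under test.

def register_booking(origin, destination):
    """Stand-in for driving the application under test through its interface."""
    return "CONFIRMED"

# The anti-pattern: the steps run, then the status is unconditionally set to pass,
# so the morning report is always a column of green checkmarks.
def test_booking_hardcoded():
    status = None                        # "start with the global variable blank"
    register_booking("YYZ", "YFC")       # do the steps...
    status = "pass"                      # ...then report pass no matter what happened
    return status

# What was actually wanted: the status comes from comparing the result to an expectation.
def test_booking_verified():
    status = None
    confirmation = register_booking("YYZ", "YFC")
    status = "pass" if confirmation == "CONFIRMED" else "fail"
    return status
```

Both functions return "pass" here because the stand-in always succeeds; the difference only shows up when a build breaks, which is exactly Keith's point about not taking a green checkmark as gospel.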

Mike Hrycyk:

So that's kind of brilliant. I think what you're saying there is that you hadn't indicated that you have to update the global variable based on the results, and he was just updating it to pass no matter what, right?

Keith McIntosh:

Well, I told him at the end of your test to set the global variable to pass or fail. And I didn't, I guess, specify what the criteria should be.

Mike Hrycyk:

So that makes it sound like you were about 10 years ahead of the shift-left movement there, Keith. If you were just louder, we could have been way ahead in testing.

Keith McIntosh:

Yep, Yep. That would have been maybe so, maybe so. Anyway, that’s my story.

Mike Hrycyk:

Awesome. Thanks for the story, Keith. That was great.

Keith McIntosh:

Thanks, Mike.

Mike Hrycyk:

Hey folks. The next story we've got is coming from a friend of the podcast and prior panel guest, Suhaim. As you may already know, Suhaim is an experienced teller of stories about interesting bugs over on his own podcast, called That's a Bug, which I got to guest on last month. If you like this podcast, I definitely recommend you go out and check his. And with that, I'm going to hand it over to Suhaim, so you can tell us your interesting story.

Suhaim Abdussamad:

Sure. I'll talk about something that happened quite a long time ago, actually. I used to work for a company where we built software for BlackBerry devices. That company doesn't exist anymore; they got acquired by BlackBerry years ago. So this was one of my earlier testing jobs, and one of the things that came up was being able to test the software in low coverage areas, or low network areas, or with no coverage at all – like no data and things like that. And I couldn't figure out how to do that. I would try moving the phone around to different places in our office and at my house, and I couldn't figure it out. Then I was talking to somebody at work who mentioned a Faraday cage. I didn't actually know what a Faraday cage was back then, so I started Googling. Someone online suggested putting it inside a fridge or freezer. I tried doing all those things – I would put the phone in there and call it, and it would always ring. I spent probably a whole weekend trying to find different places to put it in, to see what would happen. Then a bit later, at the place I was staying, my landlady had left a tin can, like a cookie can kind of thing. For some reason I put the phone in there, put the cover on, and it almost went dead. That was the one thing that worked. The following Monday I took this can to work and I said, I've found the solution here, put the phone in there. And in the end, that actually worked, and as part of our test cases we would put the phone in the 'Out of Coverage Can' for testing low coverage situations. I was quite proud of this, but it was a complete coincidence that I found a solution for that. It's something that I quite enjoyed.

Mike Hrycyk:

You missed the chance to call it the Faraday can.

Suhaim Abdussamad:

Yeah, I definitely… Somebody else named it, not me. I don't remember who, but somebody else named it the Out of Coverage Can.

Mike Hrycyk:

It is way too late to help you, but I was trying to get hold of one of my employees one day and I couldn't get him. Couldn't get him, couldn't get him. So I called his lead and I said, "Hey, why can't I get a hold of blah, blah?" And he said, "Oh, I think he's testing coverage." And I said, "Oh, so how's he doing this?" And he said, "I haven't the faintest idea." So eventually he called me back, and it turned out that he'd found a super simple solution: just going to the bottom of a parking garage.

Suhaim Abdussamad:

Yeah, no, in Fredericton, unfortunately, where I'm based, there aren't too many of those underground parking garages, but, yeah.

Mike Hrycyk:

I guess that would be true. Maybe there are underground bunkers that you just didn't know about – you could have gone out and done some sleuthing.

Suhaim Abdussamad:

Yeah, that's possible. The office we were in used to be a post office, and one of the rooms in there had a little tiny vault in it. My manager and I did go put the phone in there, but that didn't work – it would still ring, which I was surprised by. Those phones were doing a pretty good job.

Mike Hrycyk:

So, another missed opportunity there would have been to continuously lose your can and then have to expense the purchase of new cookie cans, which you would then have to constructively empty of cookies.

Suhaim Abdussamad:

Yeah. I’d have been really into that.

Mike Hrycyk:

Awesome story. I think you can actually just go on the internet and buy Faraday cages now, but this, as you said, was a long time ago. So that's cool. I think being able to stretch and be creative to get your testing done is a great testing skill. So thanks for the story, Suhaim. Have a great afternoon.

Suhaim Abdussamad:

Yeah, you too.

Mike Hrycyk:

And now we're going to talk to Goldie Zohar. Goldie is one of my senior testers who works at a couple of our different sites, and Goldie always has an interesting thing to tell us. So, Goldie, I'm told that you have a nice and interesting bug that you're going to tell us about.

Goldie Zohar:

Hi!

Mike Hrycyk:

So why don’t you just go ahead and tell us?

Goldie Zohar:

Yeah. Oh, I have an interesting one. So one of the companies I worked for provides an app that gets measurements from meters and displays the information in different types of charts or diagrams, whatever the client chose. Depending on the customer, we usually tested locally, and before we handed a version over to the client, we tested on a virtual server in the cloud. In one of the releases, the day before we sent a version to the client, we noticed a funny slash disturbing issue. The displayed data was inconsistent – every time we clicked on the display button, it displayed different information. And then we found out that every click on the button increased the time zone by one hour, so the displayed data changed with every click. It only happened on the AWS server, not locally. So it's a good thing we found it before we handed the version over to the client, so we could fix it. We basically worked through the weekend to fix it, so we were able to send the fixed version on time.
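A minimal sketch of the kind of repeated-click consistency check that exposed this, assuming a hypothetical REST endpoint behind the display button: request the same data several times in a row and assert that every response, timestamps included, matches the first. The URL, parameters, and click count are placeholders.

```python
# A minimal sketch of a repeated-click consistency check. The endpoint, parameters,
# and response shape are hypothetical placeholders for whatever the real "display"
# button calls behind the scenes.
import requests

DISPLAY_URL = "https://example.test/api/meter-readings"   # hypothetical
PARAMS = {"meterId": "12345", "range": "last-24h"}         # hypothetical

def fetch_display_data():
    response = requests.get(DISPLAY_URL, params=PARAMS, timeout=10)
    response.raise_for_status()
    return response.json()

first = fetch_display_data()
for click in range(2, 11):                   # simulate clicking "display" ten times
    again = fetch_display_data()
    # Same request, same data: if each click shifts the time zone by an hour,
    # the payloads (timestamps included) will stop matching here.
    assert again == first, f"click {click} returned different data than click 1"
```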

Mike Hrycyk:

That is a really unique issue. So you're saying that every time you clicked on the button within the interface, the back end decided you were in a new time zone. So every time you were looking for consistent data, it'd be data that was plus an hour, and then plus another hour, and another hour, like that?

Goldie Zohar:

Exactly. So the data changed every time we clicked on the display button.

Mike Hrycyk:

Wow. That’s subjective enough that it wouldn’t really show up very well in your test script as expected results, right?

Goldie Zohar:

Exactly. Correct. Yeah, because also on localhost it didn't happen. It only happened on the cloud, because there was some disconnect between the time zone and the display button.

Mike Hrycyk:

Wow. That is an interesting one. And if that had hit production, the job of figuring it out – oh, it would take days of trying to recreate it. Well, maybe you'd get lucky, but I can just see that being a rabbit hole that took forever to figure out.

Goldie Zohar:

Yeah. And there's also a lot of money involved, because all the information is gas or electricity consumption. Companies want to save money by monitoring their usage, and if the data is inconsistent, they can't really follow it.

Mike Hrycyk:

Right. And if it's just feeding into reports, oh, it would be so difficult.

Goldie Zohar:

Yeah. Totally.

Mike Hrycyk:

That’s an awesome bug. Well, thank you for that Goldie. That’s pretty awesome.

Goldie Zohar:

You're welcome.

Mike Hrycyk:

So the next guest we have today is Ellery Furlong. Ellery is an automator with PQA. He's been with us for quite a while, and he's got a pretty good story to tell us. So welcome, Ellery!

Ellery Furlong:

Hi. Thank you, Mike.

Mike Hrycyk:

Alright. So why don’t you dive right in and tell us, is it a bug or an issue or tell us your story?

Ellery Furlong:

It's really a whole project. And of course, it has nothing to do with automation, but I think it really helps hit home that testing is more about the process than the software itself. So about two and a half years ago, I was on a project for a fairly large company, and the goal of the project was to test their help desk staff to make sure that they followed the correct process and gave us the steps they were supposed to. We would call in with a fake name, tell them what our problem was, and check to see if they gave us the steps to find the resolution that we expected them to. It might be with an application or something like that, and we didn't even have the actual application in front of us. We just had a script that we followed, and we just agreed with them no matter what they told us. Then we took some notes and passed that along to the client at the end. One interesting story was, well, you know, there's only so many people working the help desk. So you would call in with one name, and then a few minutes later you might call in with another name and get the same person. So obviously they knew what was going on, but we all kind of played the part, even though we knew it was a test. And then another time, the fake name I had, I actually mispronounced it and they couldn't find the user in their system. Then they said, well, do you mean this name? And I had to, you know, kind of sheepishly admit that I had mispronounced my own name. So it was an interesting project.

Mike Hrycyk:

Wait, so the bug here is the caller didn't know their own name?

Ellery Furlong:

Well, that was, yeah, that was a user error, I guess. But it was a name from a different language and I mispronounced it and they had no idea who I was just because they were looking for a name that didn’t exist.

Mike Hrycyk:

Something like that you could have just owned and said, no, no, that’s not how you pronounce my name. It’s like this.

Ellery Furlong:

Yeah, no, yeah, looking back, maybe that's what I should've done. But in the moment, you know, it's a little weird anyway, because as testers we tend to be a bit more introverted, I guess – some might claim I'm not – but, you know, even just talking on the phone can kind of get your nerves up. So I don't think I had the confidence at that moment to pretend that I knew how to pronounce my name better than they did.

Mike Hrycyk:

And that's fair. We've done a number of engagements like that, and they really are interesting. They take you outside of your usual ideas of testing, because what you're doing is testing their capability to use the software through the script. So you still are testing software to an extent, but mostly you're testing that they've built scripts that are meaningful and useful. You still need a human to help them through that, and testers are okay for that, but you're not really there to trap a bug, right? You're there to improve quality, but as opposed to a tester who's looking for a bug, you don't really get to be the person who finds a bug unless they get stalled or stymied or can't get through it. Right?

Ellery Furlong:

Yeah. And I think the idea was just to make sure that, you know, given a problem, they knew to identify that it was the problem and that there was a solution for that problem.

Mike Hrycyk:

So, I mean, it helps us understand that testing wears way more hats than most testers realize.

Ellery Furlong:

Yeah. And I think it just shows that you know, for software testing, the software is just the medium we’re testing, but you know, we look at business processes, I think.

Mike Hrycyk:

Awesome. Thanks, Ellery. That was a great little story.

Ellery Furlong:

Yeah. Thank you.

Mike Hrycyk:

Well, after listening to these stories, and Ellery's especially, I think it's unfair for me to just facilitate all the way through these without giving my own story. So I was thinking about which story I would be willing to tell, or which hadn't been lost in the mists of the past. The one I landed on was less an issue and more a realization that I came to pretty early in my career. I was working at a railroad, which was my first testing gig, and I was their first tester. It had a lot of legacy systems, but it was message-based and it was all about moving rail cars around and stuff. And they had a lot of people who had developed systems and become the sole supporters of those systems. It's really where I started to understand the developer's mindset and the idea of silos.

Mike Hrycyk:

And so we would get a production issue, and the production issue would go to the computer operators, and they would send out a note saying, "Hey, I've got this issue, and it involves these six systems," because the messages were moving around and they would find it in the end system. And what you would get is it would pass to a developer, and the developer would look at it and go, "Hey, this isn't me." They would just throw up their hands and say, "Not me." And the next person would get it – "not me" – and the next person would get it. Eventually, someone would figure out that, oh, look, it was that second developer in that chain of four or five, but you'd have to go back to them with proof and explain to them how this was indeed their fault, and they would go, "Yeah, okay, this is my fault." If they had just spent an extra three minutes looking at it the first time, they would've known it was theirs, but they just had no clue what the systems before them really did or what the systems after them really needed; they just lived in this little bubble of their own. But for myself, as a tester, I had to relate all these things together. I had to track the messages through, I had to create the data, and I had to understand all this stuff. And it became part of my job that the operators would call me and say, "We're having this problem," and I would take a look at the problem and say, "Oh yeah, this is Bob's fault." And I would go and hand it to Bob, and he would say, "Hey, this isn't my fault," and I would say, "Oh wait, Bob, look a little deeper." What it ended up doing – I think I even looked at the stats – is that overall, it was saving us 80% of the reaction time on issues that came in from the field. I mean, I was the first tester they'd had. They loved me because I produced reproduction steps and stuff, but no one had ever really thought that QA had the big picture, that they understand how things fit together and can really provide value at that level. And so for all of you out there who are younger testers, remember that learning how things go together is a core strength of testers. It's a really important thing and a value that you can provide.

Mike Hrycyk:

But enough about me. This is about other people’s testing stories. So, let’s hop on the line with my friend Satya.

Satya Patro:

Hey Mike. For the story you're looking for, I'll tell you about the experience we had with one of our clients, a provincial energy provider. We've been working with this client to find performance bottlenecks. What happens when a power outage is going on and customers keep calling the energy provider to find out what the situation is? In this scenario, all the customers will try to log in to the provincial site and find out what the update is on what's going on. So as a PQA service provider, we tried to design those scenarios in our scripts to handle the mapping scenarios, like knowing where the outage is flagged in a particular area. Handling this with the JMeter tool was quite an interesting thing that I'd like to share. Most of it is a GIS map scenario. The customer didn't want to invest anything to implement this; they wanted to implement it with an open-source tool. So we took the challenge, and on the JMeter side we developed the scripts to handle the scenarios. It's loading those maps bit by bit and investigating where the bottlenecks are, where the payloads are going, those things. And we actually found quite a few bottlenecks on the servers, and the customer was really happy to get them fixed before it went to the production site. Yeah, it was quite a bit of interesting work, and we accomplished it successfully. The customer was very happy with the estimates and the bottlenecks that we found early, before going to the production site.
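The real scripts were built in JMeter, but here is a rough Python sketch of the same idea: a burst of concurrent "customers" all requesting the outage map at once, with the error count and the slowest response times pointing at where the bottlenecks are. The URL, query parameter, and user count are hypothetical placeholders.

```python
# A rough load-test sketch (the real work was done in JMeter): fire many concurrent
# requests at a hypothetical outage-map endpoint and summarize errors and latency.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
import requests

OUTAGE_MAP_URL = "https://example.test/outages/map-tiles"   # hypothetical
CONCURRENT_USERS = 200                                       # hypothetical load level

def one_customer_checks_the_map(user_id):
    """Simulate one customer loading the outage map; returns (status, seconds)."""
    start = time.perf_counter()
    try:
        response = requests.get(OUTAGE_MAP_URL, params={"area": "zone-7"}, timeout=30)
        status = response.status_code
    except requests.RequestException:
        status = None  # treat network failures as errors rather than crashing the run
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_customer_checks_the_map, range(CONCURRENT_USERS)))

timings = sorted(t for _, t in results)
errors = sum(1 for code, _ in results if code != 200)
print(f"errors: {errors}/{len(results)}")
print(f"median: {statistics.median(timings):.2f}s, "
      f"p95: {timings[int(len(timings) * 0.95) - 1]:.2f}s")
```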

Mike Hrycyk:

So what we're talking about here is, I'm a user, I'm sitting in my home, my lights go out, I pull up my phone, I go to the energy provider's site and I try to look on the map to find out if there's an outage notification for my area. The big challenge there is if it's a big outage, right? If it's a whole city that's out, you're going to have 10, 30, a hundred thousand people trying to do this. And so you were making sure that those bottlenecks didn't exist and that you didn't have people who are already afraid, cause they're in the dark, not able to get any information. You were making sure that that panic situation was defused, right?

Satya Patro:

That's correct. We wanted to make the customer experience friendlier, rather than having it break during those outages, so that the customer or user doesn't get frustrated loading the maps on their interfaces, right on their mobiles.

Mike Hrycyk:

Well, that’s cool. That’s like some real-life value. I mean, one of the best feelings for a tester is knowing that the issue that they’ve helped overcome has a real, tangible impact on people in their lives. And in this case, it’s making people less afraid. So I think that’s really cool.

Satya Patro:

Yeah. It was a nice experience working with this energy provider and learning about how the maps and the GIS system work. It's been a good experience and I'll always remember this client. We gained quite a bit of experience on this one.

Mike Hrycyk:

Awesome. Well, that’s a really great story. Thanks, Satya.

Satya Patro:

Thanks, Mike.

Mike Hrycyk:

Awesome. And with this last story, I would like to say a special thank you to Christin, John, Ellery, Keith, Shawnee, Satya, Goldie, Afshin, Suhaim and Nat for calling in with us today and sharing some really great testing stories. I really enjoyed hearing them. It just goes to show that every experience we have can help open up your eyes, or help you remember something, or work on the way that you yourself can be a better tester in the future. Although some of these stories were funnier than others, I think every single one of them had one or possibly multiple nuggets of truth that will help us all be better testers. One of the best things about when testers get together is we tell these stories, our testing stories of what was good and what was bad. And I know that other people standing there might think that we're just super geeks or whatever, but it's important that we tell these stories; it not only connects us as a community, it makes us better. We're going to do more of these in the future.

Mike Hrycyk:

And if you have a great story to tell, please hop over to @PQAtesting on Twitter, LinkedIn, and Facebook, or head to our website and share it with us. Let's see if we can get some dialogue going and become an even better community of testers. If you're enjoying our conversations about everything software testing, we'd love it if you could rate and review PQA Panel Talks on whatever platform you're listening on. Thanks again for listening, and we'll come back to you again in September, I think, with a discussion about automation.