May 10, 2023

Testing Talks with Isabel Evans

Mathilde Lelong

Testing Talks is a series of interviews highlighting well-known personalities from the world of software quality and testing. In this new episode, we had the pleasure of talking with Isabel Evans.

Isabel Evans has more than 30 years' experience in the IT industry, mainly in quality management, testing, training and documentation. Most of her work has been with clients in the financial, communications and software sectors.

Can you introduce yourself and your background?

My name is Isabel Evans. I took a computer science degree in the 1970s, and I've been working since the 1980s in software testing and software quality in a variety of different roles. I've worked as a test analyst, a test manager and finally as a quality manager, which is a slightly different role. It consists of looking at processes across a company, not just at the software level, but across the whole company's activities. I've also done consultancy work in a variety of different domains around testing and quality, and a certain amount of teaching as well in that area. Since 2017, I've been working as a postgraduate researcher at the University of Malta, researching the experiences of testers with the tools that they use.

What do you love the most about your job?

I think it's interesting with software testing in particular, because people sometimes think that it's quite a boring job, and I would say it's the reverse of that. (...) You have to do a lot of thinking, it's a very skillful job: you're very frequently working under pressure, you're constantly problem-solving, and you're constantly looking at risk. (...) But I think that's what I've enjoyed about it, particularly the problem-solving, or when you get conflicts between constraints and risks on a project. For instance, when different stakeholders have conflicting requirements, finding ways that you can manage between those and actually come out with the best outcome for the organization and for its customers.

(...) So, I would say that's what I've enjoyed the most for the last 40 years now. You're always learning something, and that's carried on into my research activities. I'm looking at things now, ways of working and topics, that I've never had to deal with before.

What do you think about test teams that refuse to move to automation? Do you have any advice for them?

I think it's a very interesting question because automation isn't actually possible for all of the testing. I would argue that large chunks of testing, even of test execution, need to be done by a person, with a person's brain driving it, and it needs to be done well, not just at speed. Sometimes going as fast as possible, which is one of the reasons people automate, isn't actually the best way of finding the problems. Sometimes slowing down, thinking very slowly, stepping through things and looking at things in a critical way is a better way of understanding what's going on and finding out what the problems are. If I had a team that was refusing to automate, what I'd want to do is understand what their reasons are. Quite often there will be very good reasons for that. There will be aspects of what they're doing which cannot be automated. And, even if you automate some of the test execution, there are aspects of the whole testing activity which require thinking and analysis. Sometimes a team's managers want to automate activities that actually require cognitive abilities, brain power or an emotional response, such as some kinds of testing. (...) I would say if there was a team that refused to use any tools at all, then I would show them how tools can support them with some of the tasks. But I don't think automation is necessarily a good goal in itself.

ChatGPT is the current go-to topic. What do you think about using it for test creation?

I've had a try with ChatGPT and looked at it to see what other people have done with it. I think it's a bit of a blunt instrument. I would probably not use it for test creation personally, because where I've seen it used, what seems to be happening to me is that you're getting answers that are plausibly okay, because they're very standardized. (...)

I'm not sure that it's something I would trust. The people I've seen using it successfully, and this is not just in software testing, are people who know enough about what they're expecting of the output. And they spend enough time checking the output so that they can differentiate between where it's worked and where it hasn't. I think the problem with it is that you probably need to do quite a lot of work to check it out. 

For example, I wrote a report for somebody recently, which had a detailed analysis. They ran part of my report through ChatGPT to get a summary. But, actually, when I saw the summary and checked it back against the report, there was a mismatch. There were things that I'd got in the report that hadn't got through into the ChatGPT summary, and there were things in the ChatGPT summary which were generically plausible as things that I might have talked about as a result of the analysis but actually weren't in there. (...) The problem is that you get results that look plausible. But, in fact, the amount of work you have to do to check everything makes it pointless.

The mistake I made in that particular context was I should have done a shorter summary, which I could have done myself, and it would have been the right shorter summary – instead of the wrong one. I don't think we're there with it yet, but I'm very concerned that people are starting to trust it and starting to use it in different ways.

What are the key takeaways of your paper ‘Test tools: an illusion of usability’?

That’s a good transition from the previous question. I think there are some illusory things there. We see things on the surface that come out from tools or in the way that tools are behaving, and it doesn't necessarily reflect reality. When I started researching, I was thinking that there was a problem with the usability of testing tools. When I actually did the research in a rigorous and academic way, I started by asking people not whether they had a problem, but actually what their experiences were. I stepped back.

Results showed that some people had good experiences and some people had not such good experiences, and when I analyzed that data, one of the things that came out was that people did have a problem with the usability of tools. For a lot of the testers who were reporting back to me, the tools that they had to use were not sufficiently good from a usability point of view. But when you dug into that a bit more deeply, it was for a number of reasons. Some of those tools actually had very good user interfaces – attractive, clear and apparently easy to use – but they didn't properly support the actual workflow of the tester. The usability had been applied in a very superficial manner. (...) The usability was an illusion.

And then another illusion of usability was that it wasn't enough. Even if you'd got good usability, that's not enough without other quality-in-use attributes. One of the things that people reported over and over was that they had tools to support the testing which then weren't maintainable. The software under test changes, therefore everything in the test tool needs to change. And if the tools didn't support that continuous change, they became unusable. People were finding that they were getting a tool, they would use it for a bit, it would become unusable because of the maintenance problems, they would move on to another tool and have the same problems.

There were also technical attributes, or quality-in-use attributes, which were not well supported. Some testers were reporting that, for example, there were security issues with the tool. Not in terms of it being insecure because it was allowing data out, but insecure in terms of not allowing the testers in: the security access to the tool was set such that they couldn't actually do the work that they wanted to do. (...)

You had people who were not particularly technical and who found the tools too technically difficult to use, so they weren't being supported in initial learning. You also had people who were technically able and who found that they couldn't actually use the tools in a technically proficient way, because the interface of the tool didn't allow them to. It wasn't allowing for different types of people, and it wasn't allowing for growth as people were learning their craft, learning about the software domain and indeed learning about the tool set.

I think it would be really great if test tools could support people through personal growth, and through changes to the software under test. Those were key messages out of that particular paper.

Can we have a snapshot from your next paper?

I started to think that maybe the reason the usability was being treated in a superficial manner was that test tool designers may not have had a clear understanding of who was testing. And it was becoming clear to me from that early data that there was a greater range of people than I, with all my experience in the industry, was expecting. So I did another piece of research looking at who is doing testing: an industry-wide survey with over 70 respondents, asking about themselves, their hobbies, how they worked… It turns out that the population of people doing testing is very heterogeneous.

If you look at what tool sets those people are going to need, there's going to be a diversity of needs. Even if you've got the same underlying functionality, it's supporting different types of people with different needs, different goals at different stages, coming from all sorts of different backgrounds. I think it is an interesting idea for people designing tools. I'm working on that at the moment, with papers in draft and out for review. I'm trying to come up with some heuristics that might help guide people who are designing test tools. (…)

There's a richness and diversity of people, activities, types of testing and goals… which means just using the word automation is too thin. I'm hoping that some of this work will feed into a better tool set for testers potentially, although we're a long way off that of course. It's a very beginning idea in what I'm doing.

Can you tell us an anecdote about your career as a speaker?

I got into speaking by accident! I didn't plan to be a speaker at conferences, or to teach testing. It all just happened almost by accident along the way. I remember I was in a forum meeting and Dot Graham was there. I'd met her on a couple of occasions – it was a sort of special interest group meeting. We'd been talking about different things in a group around quality and testing, and she said to me “You should come and give a talk”. I was really surprised, I said “No, that would be a ridiculous idea”... But she persuaded me.  

I gave a talk and it went quite well. I was just chatting away about the project I was working on at the time. People seemed to appreciate it. She said to me “You should apply to speak at EuroSTAR”. And I just thought that was a laughable idea. But again, she persuaded me. I put in an application and got accepted. 

I remember going out to the conference and giving my talk and just being so excited to be at a big European conference. Again, people really liked the talk, and I was really surprised because I remember I was so nervous beforehand. I am still always really nervous before I do any sort of speaking or before something like this. (...) The end result, at this stage, is that when I hear people younger than me talking about their projects, my response is very often to tell them to go and give a talk about that. I think that's the key about speaking. You need to be talking about something that's your own real story, something you've really done, because many other people at that conference will be experiencing similar challenges, and so they'll appreciate hearing about yours.

I remember at one particular conference I'd done a project which was, for me, the biggest project I'd ever done. I was asked to go and give a talk about it. The topic was “Big projects and how we deal with them”. I did my talk and talked about how immense this task had seemed. It really was huge for me. During the talk, a tester of IBM mainframe systems realized that a project isn't big just because it's objectively the biggest project; it's big because it's the biggest project you've done so far. Whatever challenge you face in this industry, it will feel like the biggest project you've ever done. That's worth remembering when you attend conferences, and worth sharing with others who may need to hear it.

Your final word?

Looking back from where I am now, at 68, I think the key thing is lifelong learning. Absolutely key. Not just because the industry changes all the time, but actually because you change all the time.

I've learned the things that I needed to know. You need to be thinking that in five years' time, in ten years' time, in 20 years' time, in 25 years' time… you might be doing something completely different. And that's okay. It's actually that constant growth and learning, I think, that is so important. Take the opportunities that come up for you. Just keep reaching out, grabbing the opportunities, and essentially do what you want to do, because you've only got one life. So enjoy it.


About the author

Mathilde Lelong

Mathilde is Content & Community Manager at Agilitest and has 4+ years' experience in marketing and communication.

