March 16, 2023

Testing Talks with Michael Bolton

Mathilde Lelong

Testing Talks is a series of interviews highlighting well-known personalities from the world of software quality and testing. In this new episode, we had the pleasure of talking with Michael Bolton. Here is a digest of his interview.

Can you introduce yourself?

My name is Michael Bolton. I have been involved in software development and testing for 35 years now. And it is weird, because I cannot remember if I am even 35 years old yet. But I am, I guess. Before that, I worked in a theater. (...) Through the 1990s, I worked for a company called Quarterdeck, where I was a tech support person, a tester, a program manager, a developer, a documenter, then a program manager again. Then in 1998, I went independent. In 2001, I met James Bach, and in 2003 we became colleagues in teaching and presenting Rapid Software Testing, which he had devised and developed. In 2006, I became co-author of that material, and I have been presenting Rapid Software Testing approaches, training and writings all around the world ever since. It has been 20 years now that I have been involved in Rapid Software Testing.

How would you build a successful testing strategy? What methodologies would you recommend?

In building a successful testing methodology, or in applying one, I would suggest Rapid Software Testing. It is designed to work in any context, because the methodology and the approach are context-driven. The point is that testing depends on the situation. The situation also affects our choices, and both of these things can change over time. There is a paper you can find on James' site, satisfice.com, which answers the question of how to build a successful testing strategy. But our principal motivator is a focus on trouble. A focus on problems. And that is a different take on testing than I suspect many of your listeners might be engaged in, because a developer-centric view of testing is not the same as a tester's view of testing. Developers are fundamentally makers. Their idea is to make things that make your troubles go away. A tester is the kind of person who is going to seek and find trouble wherever they look. The difference between developer-oriented testing and tester-oriented testing is that developers try to determine if their work is what they meant to create. (...)

A good testing strategy will incorporate both kinds of testing. (...) A lot of developer testing is not really strongly focused on rich, challenging and complex data. A lot of developer testing is not focused on developing a strong understanding of how people actually use the product. When the rubber meets the road, developers do not have time to set up test environments, platforms, different mobile devices, Safari, Windows and that type of thing.

A good testing strategy encompasses, from our perspective, an understanding of your environment and your context: the mission, the information that is available to you, your relationships as a tester with the rest of the team, the equipment and tools that are available or that you will need to develop and build, and the schedule. These have a powerful influence on the strategy. The kind of strategy applied to testing a game is very different from the one applied to a banking application, which is different again from a medical device.

In fact, a strategy is risk-focused, developed on the belief that there are problems in the product, and about the product, that matter. It is practical, achievable, and product- and project-specific. Those are some of the most important elements of a strategy. There is another document that I refer to in Rapid Software Testing called the Heuristic Test Strategy Model. You could think of it as a metastrategy: a strategy for developing test strategies. So another really important element of any strategy for testing a product is that it will be unique to that product and to that set of conditions and circumstances.
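
To make those context factors concrete, here is a minimal sketch, in Python, of how a team might record them for two very different products. The class and every field value below are illustrative assumptions, not part of Rapid Software Testing or the Heuristic Test Strategy Model.

```python
from dataclasses import dataclass

@dataclass
class StrategyContext:
    """Context factors the interview names as shaping a test strategy."""
    product: str
    mission: str               # what testing is meant to find out
    information: list[str]     # what is available to the tester
    relationships: list[str]   # the tester's ties to the rest of the team
    tools: list[str]           # equipment available, or still to be built
    schedule: str              # time constraints that bound the work

banking = StrategyContext(
    product="banking application",
    mission="find problems that threaten money movement and compliance",
    information=["regulatory requirements", "transaction specifications"],
    relationships=["developers", "compliance officers"],
    tools=["transaction data generator", "log analyzer"],
    schedule="audited quarterly release",
)

game = StrategyContext(
    product="video game",
    mission="find problems that threaten the player's experience",
    information=["design documents", "player feedback"],
    relationships=["designers", "community managers"],
    tools=["replay capture", "frame-rate profiler"],
    schedule="continuous patching",
)
# Identical structure, very different content: the strategy is unique
# to each product and each set of circumstances.
```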

What would you recommend to transition from manual to automated tests?

Give up on the idea. Testing is not manual and testing is not automated. What is the strategy for transitioning from manual development to automated development? There is not one, because development is not an automated or manual process. It is a cognitive process, a social process, an intellectual process. Testing is the same; management is the same. Nobody talks about transitioning from manual management to automated management. Nobody talks about going from manual research to automated research. Let us think instead about that concept that people call automated testing. We should instead say automated output checking: checking the output of some mechanistic process, using mechanistic means. That check can be automated. But the testing work around the check requires intention, design and encoding: a human explains to the machine how to perform the elements of the task. Then the check itself can be accomplished by the machine, automated, over and over again. Testing is the application of an oracle, a principle or a means by which we recognize a problem.
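
As a minimal sketch of that division of labor, consider an output check written with pytest (the tooling and the product function are illustrative assumptions, not anything the interview prescribes). The human supplies the intention and the oracle as an assertion; the machine merely executes it.

```python
# test_discount.py
import pytest

# Stand-in for a real product function under test (hypothetical).
def discounted_price(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

def test_discount_applies_percentage():
    # The assertion encodes a human decision rule, an oracle:
    # "a 20% discount on 100.00 should yield 80.00". pytest runs it
    # mechanically; deciding that this is the right expectation, and
    # what a failure would mean, remains human work.
    assert discounted_price(100.00, 20) == pytest.approx(80.00)
```

Run it with `pytest test_discount.py`. A red result is an output mismatch, not yet a diagnosis, which is exactly the point made next.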

And just because a check returns red does not mean that there is a problem in the product. There might be a problem in our conception of the check, a mismatch between what the check does and what it was supposed to do. The check and the product could each be right and still disagree. So stop thinking in terms of automated testing and start thinking of automated checking as a tool that we use. (...) The other thing that drives me crazy about so-called automated testing is that it absolutely ignores all of the other ways in which we could use tools to help us test: to generate data, to configure and reconfigure the machines, to analyze, parse, search and sort data and output, to model the product in various interesting ways.
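
One of those other tool uses, generating data, might look like this minimal sketch (the field, the tricky values and the ranges are invented for illustration): a few lines of Python that produce richer, more varied input than a fixed script would ever exercise.

```python
import random
import string

def name_samples(n: int) -> list[str]:
    """Varied, boundary-hugging samples for a hypothetical 'name' field."""
    tricky = ["", " ", "O'Brien", "素早い", "a" * 255, "<script>x</script>"]
    randoms = [
        "".join(random.choices(string.ascii_letters + " '-", k=random.randint(1, 40)))
        for _ in range(n)
    ]
    return tricky + randoms

for sample in name_samples(5):
    # Feed these into the product; observing and evaluating what
    # happens remains the tester's work, not the tool's.
    print(repr(sample))
```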

All these things are tool applications, and they are normal for testers. This is not a transition. This is just about deepening your skill and perspectives on how to use testing tools. So this has been a trigger question for essentially my entire career. (...) Testing is so much more than that. Testing is a heuristic process, not an algorithmic one.

What are the key takeaways of your book ‘Rapid Software Testing’?

(...) Number one, software development and software testing are fundamentally human social processes. And they go awry, they go off the rails, for all the reasons that human social processes go off the rails: misunderstandings driven by economic considerations and ambition, influenced by risk and reward. (...) The tester plays a crucial role on the project. While everyone else is focused on building the product and making it successful, the tester ensures that it meets the expectations of both humans and machines.

As testers, we focus on trouble between these two. In rapid software testing, we are trying to do that in the fastest, most efficient and least expensive way, while still providing excellent results in achieving the mission of testing, which is finding problems that matter before it is too late. (...) The tester is the centerpiece of the testing process. Not the tools, not the documentation, not the process model, but the tester and his or her skills.

There is a series of three fairly short blog posts that James and I wrote on the subject. They are hosted on my site, developsense.com. Look for "Rapid Software Testing premises" and you will find them.

What advice do you always give about agile testing?

I always try to give credit to my friend Scott Barber, who says that rapid software testing, and testing in general, needs to respond to whatever context it finds itself in. One of the things that the Manifesto for Agile Software Development focuses on is individuals and interactions over processes and tools. I believe what they really meant was "process models," since interactions can be seen as processes, whether they are interactions between individuals or something else, and software development is itself a process. But I think what they are talking about is individuals and interactions over process models and tools, because we are constantly developing new things, and every description of a process is essentially a story about it. (...)

But it is interesting to note that working software is not our business in rapid software testing, which will shake a lot of people up. Our business is for people to be aware of the software and to have knowledge about it. If the software works, that is great. If it does not work, we want to be informed about it, and we want others to know too. So instead of focusing only on working software, we could suggest a small change to the Agile manifesto. (...)

I don't see that very much in the testing world, and it would be really good to see more of it, because that is how we are going to find deeper problems. On the other hand, that is why customer support is a great migration path for people in a development organization. If you have experience in customer support, you have a whole set of insights about how customers interact with and use a product, and how they experience problems, and that informs what testing is really all about. To us, from a tester's perspective, testing is experiencing, exploring and experimenting with the product. And as I say, in the Agile testing world the focus is often on the product's functions: checking the output. That is, of course, important, but it is by no means the only thing to check. Now, there is a fourth element in the manifesto.

(...) The product is changing all the time, and therefore so is our testing strategy; it needs to shift and adapt to what is going on in the project. Our approach to testing in a project is broader than what is usually considered in certain developer-driven notions of testing in agile contexts.

What do you think about AI in testing?

A friend of mine, Leonard Zurman, a programmer who engaged me at one point on a project based on machine learning, has a t-shirt that says: "I'm not so much worried about artificial intelligence. It is natural stupidity that I'm worried about." We are very dazzled by claims about AI. There is a lot of talk these days, from certain testers in certain circles, about how AI is going to change everything. But really, when I look more deeply into it, what they are talking about is not AI at all. It is software. Regular, good old-fashioned software. People are talking about AI as a marketing term; it is not really an engineering term, in my opinion. On the one hand, your question might be about how we would test AI. My reaction to that would be: we should test AI with enormous skepticism, and here is why. When we are applying machine learning models especially, we are throwing spaghetti against the wall and choosing which spaghetti sticks. Essentially, we have an algorithm that attempts to make classifications, and sometimes classification is a terrible thing for it to try to do.

However, it is impossible to make accurate predictions of human behavior based solely on past data. And the data can arrive laden with some kinds of bias simply from where it is being obtained: it is being obtained from electronic input. Then we are getting algorithms to generate other algorithms, and we do not know what those generated algorithms actually do. It is not like a program that we wrote intentionally, where we can look at a line of code and say: there is a problem, we can fix that line of code, we can fix this thing that generates an erroneous conclusion or erroneous output. We cannot do that with machine learning, unless we have someone who is really, really good at the equivalent of assembly language who can find out what parameter prompted the product to produce this output, and that is not very likely to happen. AI is essentially lifetime employment for testers as critics, because we do not have any means by which we can check, review and read the code. And that means we are going to have testers who are focused on being able to exploit and use the product in ways that cause it to spit out its problems.
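
That contrast between intentional code and learned behavior can be sketched in a few lines (synthetic data; scikit-learn chosen purely for illustration): the hand-written rule has a line we can point at and fix, while the fitted model's behavior lives in opaque numbers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A hand-written classifier: when it misbehaves, there is a line to fix.
def approve_loan(income_k: float, debt_k: float) -> bool:
    return income_k - debt_k > 20  # the logic sits right here, readable

# A learned classifier on tiny synthetic data (income, debt in thousands):
# its "logic" is smeared across fitted parameters, with no line to point at.
X = np.array([[50, 10], [20, 15], [80, 5], [30, 25]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

print(model.coef_, model.intercept_)  # numbers, not inspectable intent
print(model.predict([[40, 30]]))      # a verdict with no readable reason
```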

For our kind of testers, that is feasible. For the kind of testers who depend on a step-by-step, formally described, procedurally structured process of automated checking, that method is hopeless in the domain of AI. So here is one thing you could do when applying AI to testing: you can use it as a jiggler. You could use ChatGPT, for instance, to generate a sample of test ideas. But do not use them as-is; use them critically. Use them to stimulate or jiggle your own ideas. (...)
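
As a rough sketch of that "jiggler" use, assuming the OpenAI Python client and an API key are available (the model name and the prompt are illustrative choices, not recommendations from the interview):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "List 10 test ideas for a flight-booking date picker, "
                   "including unusual locales, time zones and boundary dates.",
    }],
)

# Treat the output as raw material to critique, not as a test plan:
# each idea still needs a human to judge whether it maps to a real risk.
print(response.choices[0].message.content)
```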

Sure, it depends on the application of that idea, on our wonderfully human ways of taking something and making sense of it. AI is like that, too. AI is not something that we should take for granted, because of the way its output is actually being generated. Concretely, the kind of AI I am talking about now is these large language models that generate things that look human. There is a paper by Stephen Wolfram that I recommend, which essentially explains that these models link together sequences of content that are statistically likely to look coherent. (...)

And so if you use a large language model as a trustworthy way of thinking about testing, the result will be disastrous. So we need reasoning, and we need a conscious rejection of the idea that this thing is smart. It is not. Emily Bender has the best line for it: “It is a stochastic parrot”. It does not really understand what it is saying. And as humans, we've got to be really careful of anthropomorphizing it, of making it more than it is.

What should I focus on when starting a testing career? 

Many people arrive in the testing world from somewhere else. Universities, colleges and other formal education don't spend a lot of time talking about testing. (...) So where you start is totally up to you. And I have noticed that people very often fall into this career by happenstance and by accident. And that is great, because we need a wide variety of people from a wide variety of disciplines, with different cultural, ethnic or educational backgrounds, and with different temperaments, skills and preferences. All of those things make a richer testing community. And that is important because what we need more than anything else in testing is an outsider's perspective. Testing that is based only on what developers think, how developers think and how code is made lacks that perspective.

(...) My principal suggestions would be to build your knowledge of testing and build your understanding of it by immersing yourself in testing communities. Take whatever interests you and figure out a way to link it back to critical inquiry about products and projects. That is what I would recommend.

Your final word for this interview?

I encourage people to look into our material and our offerings for testing: developsense.com, my site, and satisfice.com, James Bach's site. But dear testers, please, please focus on problems. That is what we've got to do. We are the only people on the project whose mission is to focus on risk and trouble, and to learn from every bug. (...) Testing doesn't make the product better on its own. But we provide a powerful set of headlights, microscopes, telescopes, satellite dishes and parabolic microphones by which we can understand the product better and help convey that understanding to the people who make it better.


About the author

Mathilde Lelong

Mathilde is Content & Community Manager at Agilitest and has 4+ years experience in marketing and communication.
