November 3, 2022

Top testing challenges in the software industry

Marc Hage Chahine

Testing is constantly changing, and with these changes come new challenges that testers must face. ‘Unfortunately’, testing, the software industry and even our society are changing so fast that new challenges appear while existing ones are not yet fully resolved.

Today's testers face both historical and current challenges. Tomorrow's testers will probably have to face the same ones, while taking on new challenges that we can already see coming.

In this article, I will share a selection of challenges that are, in my opinion, the main testing challenges of today and tomorrow.

The historical testing challenges

The place of testing and the tester

This issue keeps coming up, and I expect it to go on indefinitely. It is true that today the vast majority of companies, software vendors and development teams need to test their products. And for this, specialists are needed… testers.

From this point of view, real progress has been made. Nevertheless, this is only the beginning, and there is still a long way to go before the tester and their interventions are fully accepted, and before it is understood that testing is not reserved exclusively for testers.

I have noticed new names being used for the tester's job: QA or QE, to name a few. In addition to conveying a positive image, these changes avoid a shortcut that is made too often: ‘testing is for testers’. Testing is a measure of quality, and quality is everyone's responsibility!

Test automation

I've been hearing about GUI test automation since I started in this profession, in 2011. At that time, I was already told that automation was a topic that had been around for many years. I was introduced to QTP and told about its advantages, and that it was not ‘capture and replay’.

Since then, the tools have evolved considerably. They are more accessible, more powerful, easier to use, ‘cross-technology’… and some rely on AI to facilitate automation. Nevertheless, test automation is still struggling to become widespread, and manual testing hasn't disappeared. The most commonly cited reasons are:

  • Lack of time,
  • Lack of automation skills,
  • Lack of resources.

Even if these reasons can be true, they are not the only ones. Other reasons have emerged:

  • Methodology issues to initiate automation,
  • Ever-changing software and technology,
  • Environmental and data issues,
  • The desire to automate primarily (or even only) GUI tests, when it is often simpler and faster to automate lower-level tests such as API tests (a minimal example follows below).

Finally, automation does not only mean automating test execution; it also covers other activities such as reporting, test design, and environment and data management.
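
To illustrate the point about lower-level tests, here is a minimal sketch of an API-level check written with pytest and requests; the base URL, endpoint, payload and response fields are hypothetical. A single HTTP call like this is usually faster to write and far more stable than driving the same scenario through the GUI.

```python
# Minimal sketch of an API-level test with pytest + requests.
# The base URL, endpoint and expected fields are hypothetical examples.
import requests

BASE_URL = "https://api.example.com"

def test_create_order_returns_201_and_an_id():
    payload = {"product_id": 42, "quantity": 1}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)

    # One HTTP call replaces many slow, brittle GUI interactions.
    assert response.status_code == 201
    body = response.json()
    assert "order_id" in body
    assert body["quantity"] == 1
```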

Training for testing

It is obvious that test training is not sufficient in France. According to the latest World Quality Report (WQR), testers represent about 30% of Agile team members:

QE in Agile teams (WQR 2022-23)

However, I see very few training courses to become a tester, whereas there are many more to become a developer. It is therefore urgent for universities and schools to offer dedicated courses, as the institutions of the QTL Sup association do.

Post-high-school training is important but not sufficient, for several reasons. The two main ones are the time required for training and the evolution of practices. To address the second problem, various training courses exist today, the most widespread being the ISTQB courses.

There are also retraining programs that are working better and better. In my opinion, we must keep moving in this direction while offering more hands-on training that does not necessarily rely on specific tools.

Current testing challenges

Adapting to Agile

This challenge must seem obvious to everyone, and it does not only affect testing. However, just because it is obvious does not mean it is trivial. Adapting to Agile is perhaps the most complex challenge, because it is specific to each company, each environment, each team! You can be perfectly adapted to Agile in your team, then arrive in another team and not be adapted to the Agile system in place.

To meet this challenge, the tester needs communication, listening, coaching, mentoring and questioning skills, while remaining curious. Integrating into an Agile team on a human level is not easy. An Agile tester, as a member of a QA team, must make themselves as useful as possible within that team. This leads to growth in skills and knowledge on subjects that can be very technical or very functional.

So far, I have only talked about the tester's adaptation to Agile. But testing itself must also adapt: more frequent execution, a product (rather than project) vision, multidisciplinary teams... In the end, we could almost say that in Agile, we must test more (in terms of frequency and scope), better, and in less time.

Technical and functional skills (API, BDD, data)

This challenge is a consequence of the previous one. Testing takes many forms, from shift left (with BDD, ATDD and TDD) to shift right (production testing), including the management of environments and data.

This multiplicity of needs also requires a multiplicity of skills, which is hard to find in a single person. This is why more and more presentations on Quality Engineering (QE) are emerging, where the skills are sought at team level, relying on the complementarity between its members. This does not remove the need for individuals to build up different skills; quite the contrary.

This arrangement also has the advantage of facilitating skills development within the team, through collaborative practices such as pairing learners with experts.
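
As an illustration of the BDD side of shift left, here is a minimal sketch of step definitions using the behave library; the scenario, the 10% loyalty discount and the step wording are hypothetical, and a matching .feature file would be needed to actually run it.

```python
# Minimal BDD sketch with behave; a matching .feature file would contain:
#   Given a customer with a loyalty card
#   When the customer buys items worth 100 euros
#   Then the total to pay is 90 euros
# The business rule (10% loyalty discount) is a hypothetical example.
from behave import given, when, then

@given("a customer with a loyalty card")
def step_loyal_customer(context):
    context.customer = {"loyalty_card": True}

@when("the customer buys items worth 100 euros")
def step_purchase(context):
    discount = 0.10 if context.customer["loyalty_card"] else 0.0
    context.total = 100 * (1 - discount)

@then("the total to pay is 90 euros")
def step_check_total(context):
    assert context.total == 90
```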

Environments and data

This challenge could be considered a historical one. However, a few years ago the stakes were much smaller: the test team had its own environments and data, which it managed on its own.

This is no longer the case with Agile. More environments are often required, and the data must be present, consistent and able to guarantee the validity of what is being tested. Likewise, developments must comply with regulations such as the GDPR (for example by using anonymized data), which greatly amplifies the challenges related to data and environments.
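
As a simple illustration of the GDPR point, here is a minimal sketch, using only the Python standard library, that pseudonymizes personal fields in an extract before loading it into a test environment; the field names and records are hypothetical.

```python
# Minimal sketch: pseudonymize personal data before using it in a test
# environment. Field names and records are hypothetical examples.
import hashlib

PERSONAL_FIELDS = {"name", "email"}

def pseudonymize(record: dict) -> dict:
    """Replace personal fields with a stable, non-reversible token."""
    clean = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS:
            token = hashlib.sha256(str(value).encode()).hexdigest()[:12]
            clean[key] = f"{key}_{token}"
        else:
            clean[key] = value
    return clean

production_extract = [
    {"name": "Alice Martin", "email": "alice@example.com", "orders": 3},
]
test_data = [pseudonymize(r) for r in production_extract]
print(test_data)  # personal fields replaced, business fields preserved
```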

CI/CD

Challenges in continuous testing, continuous integration and continuous deployment are closely related to automation… while being quite different, especially because of their different objectives.

Continuous integration aims to validate the code changes or evolutions made by the developer. It takes place at branch merge time, when the developer integrates their code into the ‘common’ code. At this stage, many checks are performed; they generally include static analysis (with Sonar-type tools) and unit tests.

It is also possible to add some automated functional tests, provided they do not slow the process down too much. Testers therefore need to find the right balance between a test selection that gives a sufficient level of confidence in quality and a pipeline that stays fast, because even automated test execution takes time.
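
One common way to keep that balance is to tag a small subset of functional tests and run only that subset at merge time, keeping the full suite for a later stage. Here is a minimal sketch with pytest markers; the application code and test names are hypothetical.

```python
# Minimal sketch of splitting the suite with pytest markers.
# The marker would be declared in pytest.ini:
#   [pytest]
#   markers =
#       smoke: fast functional checks run on every merge
import pytest

def authenticate(user: str, password: str) -> bool:
    # Hypothetical stand-in for the application under test.
    return user == "demo_user" and password == "demo_password"

@pytest.mark.smoke
def test_login_with_valid_credentials():
    # Fast, high-value check run by CI at every merge: pytest -m smoke
    assert authenticate("demo_user", "demo_password")

def test_login_with_wrong_password():
    # Part of the wider suite, run later (e.g. nightly): pytest -m "not smoke"
    assert not authenticate("demo_user", "wrong")
```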

Continuous deployment goes further than continuous integration, because the goal is no longer to deliver to a test environment but to production. The constraints remain the same as for continuous integration (even if the time constraint is a little less strong), but the quality requirement goes beyond simple functionality. It is therefore essential to think about non-functional aspects by integrating, for example, performance, security or accessibility tests, or any other significant criteria.
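
As one example of such a non-functional check, here is a minimal sketch of a response-time assertion using requests; the URL and the 500 ms budget are hypothetical.

```python
# Minimal sketch of a response-time check in a deployment pipeline.
# The URL and the 500 ms budget are hypothetical examples.
import requests

def test_homepage_responds_within_budget():
    response = requests.get("https://www.example.com/", timeout=5)

    assert response.status_code == 200
    # `elapsed` measures the time between sending the request and
    # receiving the response headers.
    assert response.elapsed.total_seconds() < 0.5
```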

Testing AI

AI and intelligent software are two topics that are being talked about more and more. In fact, it is no longer just ‘hearsay’ but a reality. There is more and more software using AI and, in many cases, Machine Learning. Voice assistants are perfect and fairly well-tested examples of applications using AI.

We can also cite conversational bots on social networks that turn extremist within a few hours, software that recommends releasing hardened criminals but not people who pose no real risk of re-offending, or voice assistants that are unable to understand what is said to them!

Testing AI is complex because the expected result is not fully determined in advance. It is therefore necessary to orient one's testing strategy (which includes testing the data used to train the AI) so that the tests can be adapted to the constraints at hand.
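
One way to adapt, sketched below under assumptions (the classify() function and the labelled sample are hypothetical stand-ins for a real model and dataset), is to assert on aggregate behaviour over a labelled sample rather than on exact individual outputs.

```python
# Minimal sketch: instead of asserting one exact output, check that the
# model reaches a minimum accuracy on a labelled evaluation sample.
# `classify` and the sample are hypothetical stand-ins for a real model.

def classify(text: str) -> str:
    # Hypothetical model: a trivial keyword rule standing in for an AI.
    return "positive" if "great" in text.lower() else "negative"

LABELLED_SAMPLE = [
    ("This product is great", "positive"),
    ("Great value for money", "positive"),
    ("Totally disappointing", "negative"),
    ("Never buying this again", "negative"),
]

def test_model_accuracy_on_labelled_sample():
    correct = sum(1 for text, label in LABELLED_SAMPLE if classify(text) == label)
    accuracy = correct / len(LABELLED_SAMPLE)
    # The threshold is a project decision, not a universal constant.
    assert accuracy >= 0.75
```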

Complementing AI

Software using AI is not limited to consumer software, as there are obviously applications that aim to facilitate the work of testers. Among these applications, it is relevant to mention:

  • Applications related to regression management using user traces to determine the most sensitive paths,
  • Algorithms for pre-analyzing failed tests or prioritizing tests,
  • Data anonymization or even the creation of synthetic data,
  • Quality measurement tools.

The possibilities are limited only by our imagination.

The emergence of these tools will free up time for the tester. Testers will have to adapt by focusing on high-value tasks, whether methodological (strategy, exploratory testing, design of specific cases…) or technical (writing specific tests, setting up additional tools, analyzing AI results and improving them…).

Testing with people in mind

This is perhaps the most important point for me!

Software is becoming more and more essential in our lives. It has growing power and affects us directly. At a time when scandals are multiplying, we must be aware of this and take into account the impact of certain products and features on users' daily lives.

At the same time, the human factor must become a quality criterion in its own right. This criterion could be made up of two sub-criteria:

  • Ethics: why do we test? Is the main interest the good of users and humans or, on the contrary, a purely lucrative goal (adding unwanted ads, collecting data…)?
  • Impact: the road to hell is paved with good intentions, as recommendation algorithms demonstrate. The idea is to propose content that users will surely appreciate, but this can very quickly lock them into a ‘fake world’ of opinions.

To sum up

The present and the future of testing are rich. Testing challenges are numerous and varied; they push testers to be inventive, to adapt and to think. The list proposed here is far from exhaustive, but I hope it reflects the current effervescence of this field.

In addition to a classification by ‘temporality’, it seems obvious to me that we could use another scale for testing challenges, one based on methodology and technique. Indeed, depending on the challenge, the skills required are not the same. The map below summarizes the main challenges encountered.

[Diagram: map of the main testing challenges]

Have you encountered other testing challenges? I invite you to mention them in the comments or to place them on this same diagram.


About the author

Marc Hage Chahine

Agilitest Partner – Passionate about software testing at Qestit - ISTQB Certified (Foundation, Agile, Test Manager)

