Test automation can fail for many reasons. Today we will look at one of the major causes of these failures: a lack of test robustness. Tests that lack robustness inevitably lead to the failure of the automation effort.
First of all, it is important to remember what a "robust" test is. A test is robust if, when it fails, the failure is caused by an actual error in what the test is supposed to check, not by an external problem such as:
- a random event
- changing data
- an excessively long display time
- connectivity issues with partners
- any other external cause
Analyzing the failure of a non-robust test is generally quite easy, because the problems are often known and recurrent. But these failures waste analysis time, and can even lead teams to stop analyzing a test at all when it fails. A common practice with non-robust tests is to re-execute them before analysis. This increases the number of executions and requires a verification step before each re-execution. That verification can be manual or automatic; in the automatic case, it requires developing an additional tool. In either case, replaying does not guarantee that failed tests will succeed. To avoid this double execution, Agilitest records a video of each test run, which allows immediate analysis of the conditions of failure.
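As a minimal sketch of the replay-before-analysis practice described above (this is not Agilitest's mechanism, and `run_test` is a placeholder for whatever executes a single test), the key point is to keep the full history of outcomes, because a green replay does not prove the test is robust:

```python
def rerun_before_analysis(run_test, max_runs=2):
    """Re-execute a failing test before flagging it for manual analysis.

    Returns the list of outcomes rather than a single boolean, so that a
    fail-then-pass sequence stays visible: a pass on replay does NOT
    prove the test is robust.
    """
    outcomes = []
    for _ in range(max_runs):
        passed = run_test()
        outcomes.append(passed)
        if passed:
            break
    return outcomes

# A flaky test that fails once, then passes on replay.
attempts = iter([False, True])
print(rerun_before_analysis(lambda: next(attempts)))  # [False, True]
```

Keeping `[False, True]` instead of collapsing it to "passed" is what preserves the flakiness signal for later analysis.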
It is also good to remember that more than one test is executed in an automated test campaign! Finally, a good practice in Agile (and even more so with a continuous integration pipeline) is to block deliveries when automated tests fail.
Multiplying tests multiplies the risk that at least one of them fails for lack of robustness. This probability of having one or more failing tests grows even when each individual test is considered "rather robust".
Let's take the example of a test campaign whose tests seem "fairly robust" at first glance, each with a robustness of 98%:
As you can see, even with 98% robust tests and a particularly small campaign of 10 tests, at least one test fails for the wrong reasons in more than 1 case out of 6 (about 18%).
With only 50 tests, the whole campaign passes only about one time out of three. From 100 tests onwards, it becomes very unlikely that all your tests pass.
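The arithmetic behind these figures is simple, assuming the tests fail independently of one another: the probability that the whole campaign passes for the right reasons is 0.98 raised to the number of tests.

```python
# Probability that a campaign of n independent tests, each 98% robust,
# passes without any spurious failure.
for n in (10, 50, 100):
    p_all_pass = 0.98 ** n
    print(f"{n:>3} tests: campaign fully passes {p_all_pass:.0%} of the time "
          f"(spurious failure in {1 - p_all_pass:.0%} of runs)")
```

For 10 tests this gives about 82% success, i.e. a spurious failure in roughly 18% of runs; for 50 tests about 36%; for 100 tests only about 13%.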
These repeated failures can then generate many problems that, in the short or medium term, lead to big failures such as:
- Failed tests no longer block deliveries
- Failed tests are no longer analyzed: why look every time a test fails for a bad reason?
- Test campaigns stop being run altogether
- Systematic replays that become very time-consuming
You get the idea: to ensure truly robust test campaigns, and therefore truly efficient ones, you need particularly stable tests whose robustness is close to 100%!
How to improve the robustness of your tests?
The problem of test robustness is quite complex to manage because it depends on many parameters. Nevertheless, there are some good practices that can increase it considerably. Here are some of them:
Have dedicated environments for automated testing
Test failures due to an uncontrolled environment are a very common cause of error. A totally dedicated and controlled environment ensures control over the data, an up-to-date system, and an environment not impacted by other tests or other teams. This considerably reduces randomness, and therefore the risk of spurious failures.
Write tests that correctly verify the results
An automated test is performed by a robot, not a human. Some imprecision that a human tester tolerates is unacceptable for a robot. An automated test must therefore verify the presence of an element that is unique and always present on the page, an element that guarantees the software is indeed in the desired state. If no single element qualifies, a combination of several elements can be used.
Agilitest is designed to ensure that verifications are relevant and provide sufficient proof, thanks to its capture tool. The capture tool can check uniqueness, search at a particular location and, if needed, perform image verification. In short, it is a feature that allows functional testers to create technically robust tests.
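To illustrate the idea of a unique, always-present marker element (a hand-rolled sketch using the standard library, not how Agilitest's capture tool works internally), here is a check that passes only when exactly one element on the page carries the expected `id`:

```python
from html.parser import HTMLParser

class MarkerCounter(HTMLParser):
    """Counts elements whose id attribute matches the expected marker."""
    def __init__(self, marker_id):
        super().__init__()
        self.marker_id = marker_id
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.marker_id:
            self.count += 1

def page_is_in_expected_state(html, marker_id):
    """Robust check: exactly one element must carry the marker id."""
    parser = MarkerCounter(marker_id)
    parser.feed(html)
    return parser.count == 1

good = '<div id="order-confirmed">Thank you!</div>'
ambiguous = '<span id="status"></span><span id="status"></span>'
print(page_is_in_expected_state(good, "order-confirmed"))  # True
print(page_is_in_expected_state(ambiguous, "status"))      # False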
Have a well-defined data creation and management policy
Failures due to spurious or missing data are particularly common.
Here is an example to convince you:
An application lets me book tickets for a concert, and 10 tickets are available. A first test reserves one seat: it passes. A second test then tries to buy 10 seats: it fails if the reservation made by the first test has not been cancelled.
Adopting strict and well-framed processes is therefore a great help to improve the robustness of the tests.
One example is to create and delete data directly within each test.
The reader may refer to another article on the subject, which deals with managing a large volume of tests, and therefore with robustness.