September 9, 2020

Why lack of analysis in automated tests leads to failure

Marc Hage Chahine


Test automation can fail for many reasons. Today we will look at a recurring cause of these failures: the lack of analysis of the tests after they are replayed.

A lack of analysis of automated tests after each replay leads to automation failure

Implementing automated tests reduces the cost of execution and enables a very good testing practice, which is one of the fundamental principles of testing: "early testing".

Testing more frequently is one of the reasons why it is advisable to automate your tests. Nevertheless, multiplying test campaigns is useless if the results of these campaigns, and more specifically the tests executed during them, are not analyzed. In fact, not analyzing test campaigns is even worse than not executing them at all, because execution alone gives the illusion that quality is assured when in fact it is not.

In the end, you can have test campaigns executed several times a day but never analyzed, because analyzing each campaign can take several hours. This leads to a vicious circle in which automated tests no longer guarantee anything, because their results are no longer analyzed and deliveries go ahead regardless. The result is the deployment of versions containing critical anomalies that should have been detected, and therefore the failure of the automation.

This drift is rarely deliberate and stems from the common misconception that, with test automation, test campaigns are free. Unfortunately, this is not the case: only the execution of the tests is almost free with automation, and execution is only one part of a test campaign, as we can see in this example:

Workload comparison: manual testing vs. automated testing

Multiplying executions means multiplying analyses, and the cost of analyzing failed tests is far from negligible. This analysis can even be more expensive with an automated campaign than with a manual one, for the simple reason that there is no human eye behind the execution.

To avoid this problem, it is possible to put a few good practices in place.

How to avoid the problem of non-analysis of automated test campaigns?

To avoid this problem, each campaign's results obviously need to be analyzed, to know why each test failed and to take the necessary actions following that failure, such as creating a defect report, replaying the test or updating the test. This is easy to say and harder to do, which is why the team can put several best practices in place:

Limit the number of tests in the campaigns

The cost of analyzing a test campaign must be kept at a level the team can afford. The more tests there are, the more expensive the analysis. It is therefore sometimes necessary to run fewer tests. This can be done in several ways, such as creating several campaigns, adapting the tests to be run according to the campaign, or maintaining fewer tests.
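As an illustration only, here is a minimal sketch of one way to split a large campaign, using TestNG groups (TestNG is mentioned later in this article); the class and group names are hypothetical:

```java
import org.testng.annotations.Test;

// Hypothetical example: tagging tests with groups so that smaller, targeted
// campaigns can be run and analyzed instead of one oversized campaign.
public class CheckoutTests {

    @Test(groups = {"smoke"})
    public void userCanLogIn() {
        // critical path: included in every campaign
    }

    @Test(groups = {"regression"})
    public void discountIsAppliedToLoyalCustomers() {
        // broader coverage: run only in the nightly regression campaign
    }
}
```

A campaign can then include only the "smoke" group (for example with `-groups smoke` on the TestNG command line, or in a suite file), which keeps the number of results to analyze at a level the team can actually handle.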

Limit the number of campaign replays

An analysis must be done after each campaign. It is better to run one campaign per day and analyze it than three campaigns without analyzing them.

Implement systems to facilitate the analysis of failed tests

This is particularly important with test automation. Tests must be able to provide evidence of what happened, not just a status. When testers analyze a test, they must be able to know what happened, how to reproduce it, and understand the reason for the failure.
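To make the idea concrete, here is a hedged sketch (not Agilitest's own mechanism) of a TestNG listener that records basic evidence whenever a test fails; a real setup would also attach the screenshots, videos or logs produced by the test tool:

```java
import org.testng.ITestListener;
import org.testng.ITestResult;

// Hypothetical sketch: record what happened for every failed test (class,
// method, parameters, exception) so the analysis does not start from a bare
// "failed" status. Assumes TestNG 7+, where the other listener methods have
// default no-op implementations.
public class FailureEvidenceListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        System.err.printf("FAILED: %s.%s%n",
                result.getTestClass().getName(),
                result.getMethod().getMethodName());
        System.err.println("Parameters: "
                + java.util.Arrays.toString(result.getParameters()));
        if (result.getThrowable() != null) {
            result.getThrowable().printStackTrace(System.err);
        }
    }
}
```

Such a listener can be registered with the @Listeners annotation or in the suite file, so every failure comes with at least a minimal trail to start the investigation from.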

Agilitest provides tools to make this easier, with videos or screenshots for each test step and failure. You can get a complete test campaign report, including video, HTML and PDF output for each test.

All the reports produced by Agilitest are customizable, and it is also possible to have a report that groups failures by different criteria to facilitate the analysis.

The HTML reports in particular allow you to jump directly into Agilitest to edit the faulty line, which reduces the maintenance cost.

Offer automatic replay of certain tests

This is not an ideal solution, but it is not uncommon to have instabilities related to the environment or to partners on the test platforms. In this case the tests fail for the wrong reasons and have to be replayed. It is therefore preferable to replay them automatically, which can be useful if you have had hardware failures, and it will reassure you about the quality of your software.
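As an illustration of the principle (this is not how Agilitest does it internally), a TestNG retry analyzer can replay a test a limited number of times before reporting it as failed; the class and test names here are hypothetical:

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

// Hypothetical sketch of automatic replay: a failing test is retried a
// limited number of times before being reported as failed, which filters
// out failures caused by a temporarily unstable environment.
public class RetryOnce implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 1;
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        return attempts++ < MAX_RETRIES; // true means "run this test again"
    }
}

class PartnerApiTests {
    // Only tests known to depend on fragile infrastructure should use a
    // retry analyzer; applying it everywhere would hide real defects.
    @Test(retryAnalyzer = RetryOnce.class)
    public void ordersAreForwardedToThePartner() {
        // test body calling the partner platform
    }
}
```

The limit matters: replaying indefinitely would simply mask real defects instead of filtering out environmental noise.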

It is important to note that Agilitest is a robust solution that checks that a test has genuinely failed before reporting an error. This reduces the number of spurious failures that are useless to analyze (flaky tests).

Add tests to continuous integration and make them blocking

This practice ensures the effectiveness of regression campaigns. If a test considered to be blocking fails, the deployment cannot take place, which forces an analysis.

As a rule, if the test campaign is not intended to lead to an automatic deployment, we recommend replaying all the tests rather than stopping at the first failure, in order to have a more complete view of the quality of the software.

The Agilitest tool addresses this need by interfacing very easily with tools such as Jenkins, through the TestNG standard.
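As a generic sketch of the principle (the exact wiring depends on your CI server, and the suite file name here is hypothetical), a campaign can be made blocking by propagating the test result as the exit code of the build step:

```java
import org.testng.TestNG;

// Hypothetical sketch: run a TestNG suite from a CI build step and make the
// build fail (non-zero exit code) when any test fails, so that deployment
// cannot proceed without someone analyzing the failures.
public class BlockingCampaign {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        testng.setTestSuites(java.util.Collections.singletonList("regression-suite.xml"));
        testng.run();
        // A non-zero exit code marks the CI job as failed and blocks deployment.
        System.exit(testng.hasFailure() ? 1 : 0);
    }
}
```

Most CI servers, including Jenkins, treat a non-zero exit code as a failed build, which is exactly what makes the campaign blocking.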


About the author

Marc Hage Chahine

Agilitest Partner – Passionate about software testing at Qestit - ISTQB Certified (Foundation, Agile, Test Manager)
