Test automation can fail for many reasons. Today we will look at one of the major causes of these failures, tests that do not verify what is required.
Tests that do not verify what is required inevitably lead to automation failure
First of all, it is important to remember that the problem of tests that do not check what is required is not exclusive to automation. However, the phenomenon is more visible with automation because during execution there is no human behind the screen, only a robot that does exactly what it is told.
There are several reasons why a test does not verify what is desired:
- An automated check that looks for an object that is not always present
- An automated check on an object that appears several times on the page
- A check on an object that is present and unique on the page, but that appears on several pages
- A poorly designed test
The first case makes the test unstable: it will sometimes pass and sometimes fail. For example, imagine a promotion that appears every Thursday and replaces the usual prices, say pizzas at 5€ instead of 10€. If we check for the 10€ price when we arrive on the pizza purchase page, that price will not be present on Thursdays, so the test will fail every Thursday.
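This unstable check can be sketched in a few lines of Python. The page contents and function names below are hypothetical, purely to illustrate the difference between a check that assumes the regular price is always shown and one that knows which price should apply on a given day:

```python
# Hypothetical page contents for the article's pizza example
REGULAR_PAGE = "Pizza Margherita - 10€"
THURSDAY_PAGE = "Promotion! Pizza Margherita - 5€"  # Thursday layout

def brittle_check(page_text: str) -> bool:
    # Brittle: assumes the 10€ price is always displayed,
    # so it fails every Thursday when the promotion replaces it
    return "10€" in page_text

def expected_price(weekday: int) -> str:
    # Monday == 0, so weekday 3 is the article's Thursday promotion
    return "5€" if weekday == 3 else "10€"

def robust_check(page_text: str, weekday: int) -> bool:
    # Robust: verifies the price that should apply on that day
    return expected_price(weekday) in page_text
```

The brittle check passes on the regular page but fails on the Thursday page, while the robust check passes in both situations because it encodes the condition under which each price is expected.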
The second case is harder to detect, especially since it rarely causes problems in manual testing. Staying with the promotion example, imagine searching for the term "Promotion" on the pizza price page on Thursday. Nothing prevents another insert on the page from also mentioning a promotion, in which case the term appears twice. If the Thursday promotion then disappears, the automation does not notice, because the other occurrence still satisfies the check.
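A simple guard against this trap is to assert that the search term matches exactly once, rather than at least once. A minimal sketch, with a hypothetical page where "Promotion" appears both in the Thursday banner and in an unrelated newsletter insert:

```python
# Hypothetical page: "Promotion" appears twice, in the Thursday
# banner and in an unrelated newsletter insert
THURSDAY_PAGE = (
    "Promotion! Pizzas at 5€ every Thursday. "
    "Subscribe to receive each Promotion by email."
)

def count_occurrences(page_text: str, needle: str) -> int:
    return page_text.count(needle)

def assert_unique(page_text: str, needle: str) -> None:
    # Fail loudly if the term is absent OR present more than once
    count = count_occurrences(page_text, needle)
    if count != 1:
        raise AssertionError(
            f"expected exactly one occurrence of {needle!r}, found {count}"
        )
```

With this stricter assertion, the duplicated term is reported immediately instead of silently masking the disappearance of the Thursday banner.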
The third case is usually due to the omission of a parameter: a check that seemed relevant may no longer be. Keeping the Thursday promotion example, it is easy to imagine that the term "promotion" with the 5€ price appears on every page of the site or application on Thursday. Searching for this price or the term "promotion" is then useless, because it proves nothing! It is like checking that the URL contains "www" to verify that I am on the site https://www.agilitest.com/.
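The URL comparison at the end of the paragraph can be made concrete. Below, the `/pizzas` path is a hypothetical page of the site, used only to contrast a check that matches every page with one that identifies a specific page:

```python
from urllib.parse import urlparse

def weak_check(url: str) -> bool:
    # Proves nothing: "www" appears in the URL of every page of the site
    return "www" in url

def strict_check(url: str) -> bool:
    # Verify the exact host AND the exact page
    # ("/pizzas" is a hypothetical path for the pizza purchase page)
    parsed = urlparse(url)
    return parsed.netloc == "www.agilitest.com" and parsed.path == "/pizzas"
```

The weak check accepts any page of the site (and of many other sites), while the strict check only accepts the one page the test actually cares about.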
The last case is simply due to poorly thought-out tests. Staying with the pizza promotion, the check could target something unrelated or ephemeral, such as an advertisement.
Good tests are built from good checks. Good tests are the building blocks of good test campaigns, and good test campaigns are a prerequisite for successful test automation. It is therefore essential to ensure the quality of the test checks: that they actually demonstrate what you want to demonstrate.
How do you ensure the quality of test checks?
Even if the stakes are higher with automation, this problem is not new. Some good practices therefore apply beyond automation; here are a few of them:
Doing test reviews
This good practice is potentially the most important one, because it covers the four cases above and more generally improves the quality of the tests. An external point of view always helps! It is surprising that specification reviews and code reviews are so widespread while test reviews are not. Tests are deliverables designed and written by humans; to err is human, so test reviews seem an obvious step.
Ensuring the uniqueness of an object in our search
This is essential! The object you are looking for must be unique within the space you are searching; otherwise its absence would go unnoticed. Agilitest is a very good tool to ensure this uniqueness thanks to its capture tool, which allows you to:
- Reduce the search area, and thus potential noise or unanticipated changes
- Avoid searches based on fragile XPaths
- Ensure the uniqueness of the object in the search field in one click, as we can see with this image:
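The fragility of absolute XPaths mentioned above can be illustrated with a small sketch. The two page snippets and function names below are hypothetical; the point is that a locator tied to the full document structure breaks on a redesign, while a locator based on a stable attribute survives it:

```python
import xml.etree.ElementTree as ET

# Hypothetical page before and after a layout redesign
ORIGINAL = "<html><body><div><span id='price'>10€</span></div></body></html>"
REDESIGNED = (
    "<html><body><header/><div><div>"
    "<span id='price'>10€</span>"
    "</div></div></body></html>"
)

def find_by_absolute_path(page: str):
    # Fragile: depends on the exact nesting of the document
    return ET.fromstring(page).find("./body/div/span")

def find_by_id(page: str):
    # Robust: depends only on a stable attribute of the element
    root = ET.fromstring(page)
    return next(
        (el for el in root.iter("span") if el.get("id") == "price"), None
    )
```

The absolute path finds the price in the original page but returns nothing after the redesign, whereas the attribute-based search finds it in both versions.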
Run the test several times under different conditions
Doing this ensures that the item you are looking for is present under a wide range of conditions. For the article's example, we can imagine a test on every day of the week, with and without a user account, with additional discounts...
Here again, Agilitest can help you on some points. Indeed, test data can easily be modified and managed with the tool by creating CSV or JSON files used as test data.
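A minimal data-driven sketch in Python, in the spirit of such CSV data sets. The column names, pages, and expected prices are hypothetical, chosen to match the Thursday promotion example:

```python
import csv
import io

# Hypothetical CSV data set: one row per condition to exercise
TEST_DATA = """weekday,expected_price
Monday,10€
Thursday,5€
"""

# Hypothetical page content observed on each day
PAGES = {
    "Monday": "Pizza Margherita - 10€",
    "Thursday": "Promotion! Pizza Margherita - 5€",
}

def load_cases(csv_text: str) -> list:
    return list(csv.DictReader(io.StringIO(csv_text)))

def run_price_checks(pages: dict, cases: list) -> list:
    # Return the list of conditions under which the check failed
    failures = []
    for case in cases:
        page = pages[case["weekday"]]
        if case["expected_price"] not in page:
            failures.append(case["weekday"])
    return failures
```

Running the same check over every row of the data file is what surfaces conditions (here, Thursday) that a single hard-coded run would miss.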
Check that in case of error the test fails
This point is comparable to the TDD practice of verifying that a test fails before development and passes after it.
This good practice should become a reflex, because it demonstrates that the test reports the information you actually want. It helps avoid many problems quickly and efficiently.
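The reflex can be sketched as deliberately injecting a fault and confirming the check reports it. The page contents and function names are hypothetical:

```python
def price_check(page_text: str) -> bool:
    # The check under scrutiny: does the page show the 10€ price?
    return "10€" in page_text

def check_can_fail() -> bool:
    healthy = "Pizza Margherita - 10€"
    broken = "Pizza Margherita - "  # injected fault: price missing
    # The check earns trust only if it passes on the healthy page
    # AND fails on the deliberately broken one
    return price_check(healthy) and not price_check(broken)
```

A check that also "passes" on the broken page verifies nothing; exercising it against an injected fault is the cheapest way to prove it can actually fail.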