Test automation can fail for many reasons. In this article we look at a recurring cause of failure: automation that costs too much, often because the team tries to automate everything.
Too high a cost of automation leads to automation failure
Implementing automated tests is an investment, and like any investment, it is expected to yield a return. This return on investment (assuming you automate to earn money) must be achievable!
It is worth noting that this principle applies to testing in general: spending 50,000€ on tests for an application that will only bring in 40,000€ is a serious mistake. At that price (if only the price is taken into account), it is better not to release the application at all!
Let's go back to test automation. The cost of automation can be simplified with this equation: CA = I + n*X, where:
- CA = Cost of automation
- I = Investment (costs of preliminary studies, licenses, training, writing tests, setting up environments...)
- X = cost of a campaign (execution (often close to 0), analysis, maintenance...)
- n = number of campaigns
The cost of manual testing can be simplified with a similar equation: CM = I’ + n*X’, where:
- CM = Cost of manual testing
- I’ = Investment in manual tests (writing, design...)
- X’ = cost of a campaign (execution, analysis, maintenance...)
- n = number of campaigns
The investment cost for automation is higher than that for manual testing. To ensure a positive return on investment, you need, in addition to a reasonable investment, a unit cost per automated campaign that is lower than the manual one (otherwise the curves will never cross), and a number of campaigns "n" large enough for the curves to actually cross.
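To make that crossing point concrete, the two cost equations above can be sketched in a few lines of Python. The figures are purely illustrative, and `break_even_campaigns` is a hypothetical helper written for this article, not part of any tool mentioned here.

```python
import math

# Simplified cost model from the article:
#   CA = I  + n * X    (automated testing)
#   CM = I' + n * X'   (manual testing)
# Automation pays off once CA < CM, i.e. n > (I - I') / (X' - X),
# which is only possible when X < X'.

def break_even_campaigns(i_auto, x_auto, i_manual, x_manual):
    """Smallest number of campaigns n for which automation is cheaper.
    Returns None when the curves never cross (X >= X')."""
    if x_auto >= x_manual:
        return None  # each automated campaign costs at least as much: never profitable
    return math.floor((i_auto - i_manual) / (x_manual - x_auto)) + 1

# Made-up figures: 20,000€ to automate vs 2,000€ to script manually,
# 300€ per automated campaign vs 1,500€ per manual one.
n = break_even_campaigns(20_000, 300, 2_000, 1_500)
print(n)  # → 16: from the 16th campaign onward, automation is cheaper
```

With these (invented) numbers, a suite re-run monthly breaks even in just over a year, while a suite re-run once a year never gets close.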
This shows that a successful test automation project means knowing how far you can go in terms of investment, and automating only tests that will be re-run often enough to become profitable. Wanting to automate all the GUI tests will, in the vast majority of cases, prove to be a very costly investment that never pays off.
How to ensure the profitability of test automation?
In order to avoid this problem of high costs, it is necessary to keep the investment cost of automated tests at an acceptable level, but also to automate tests that will be re-run often enough to be profitable. To ensure this profitability, the team can put in place certain good practices such as the following:
Automate tests that are intended to be re-run:
The main criterion of profitability for automated tests is that they will be re-executed: the cost of writing an automated test is generally much higher than that of a manual test. To limit the overall cost of testing, it is also possible to reduce the cost of manual tests that will not be re-executed by not scripting them and running them as exploratory tests instead.

We then build a test strategy where regression tests are automated and validation of new functionalities is done in exploratory mode. This stabilizes the software's new features and buys a little more time to automate the tests covering them. For Agile teams, it can make sense to write automated non-regression tests only once all the developments of an epic have been completed, rather than at the user-story level, where the granularity is too fine.
Architect tests in a way that limits maintenance:
As we saw in the first part, a very important aspect of making your tests profitable is the cost of each campaign. This cost must be as low as possible in order to reach the break-even point quickly and then maximize the profit. A large part of the cost of automated test campaigns is the maintenance of test cases. To limit this maintenance, it is essential to think carefully about the architecture of the automation and to factor out the steps common to several tests, so that only one script needs to be modified when a change affects several tests. Agilitest has of course thought about this problem, which is why it offers "sub-scripts" that can be called by each test and also parameterized. For example, a form page with 10 fields to fill in could be turned into a single sub-script with as many parameters, reused in 20+ tests.
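As an illustration only (Agilitest itself is a no-code tool), the sub-script idea can be sketched in plain Python: one parameterized helper stands in for the shared sub-script, and every test reuses it, so a change to the form means editing a single place. `fill_form` and the recorded action tuples are hypothetical stand-ins for the tool's actual actions.

```python
def fill_form(actions, values):
    """Shared, parameterized "sub-script": fill every form field from a dict.
    `actions` is a list standing in for the tool's recorded steps."""
    for field, value in values.items():
        actions.append((field, value))  # stand-in for "type value into field"
    return actions

# Two "tests" reusing the same sub-script with different parameters:
signup = fill_form([], {"name": "Alice", "email": "alice@example.org"})
profile = fill_form([], {"name": "Bob", "city": "Paris"})

print(len(signup), len(profile))  # → 2 2
```

If the form gains a mandatory field, only `fill_form` changes; the 20+ tests that call it are untouched, which is exactly the maintenance saving described above.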
Automate tests for which the automation cost is acceptable:
Some tests are technically complicated to automate. This may be because the tool can do it only with difficulty or not at all (such as passing a captcha), because it is hard to make the test stable and reliable, because the result is difficult to quantify and a human eye is preferred, because the path is complex (such as accessing a secure machine), or for any other reason. In this case, the investment cost of writing the test may be too high to be profitable, and it is therefore preferable (when possible) not to automate it. It is then enough to accept a "stabilization" cost: replaying all the manual tests before allowing the version to be deployed.
Limit the cost of analysis of automated tests:
Here again, we are talking about the unit cost of a campaign. As we saw in a previous article, analyzing test results is essential to the success of automation. This analysis is not free and can quickly become very expensive if the right tools are not in place. An analysis cost that is too high can push the unit cost of a campaign beyond any hope of a return on investment. To avoid this, you need to be able to understand quickly and simply why a test is failing, and therefore to have the execution information at hand to analyze the problem as fast as possible. The Agilitest team is particularly aware of this problem, which is why it provides screenshots for each step as well as videos of the executed tests. Seeing what really happened when a test failed is a real help in analyzing the failure when you did not witness the execution, which is usually the case with automated tests.
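A minimal, stdlib-only sketch of this idea: capture diagnostic context the moment a step fails (here a traceback; a real tool would attach screenshots or video), so the analyst can see what happened without re-running the test. `run_step` is a hypothetical helper, not an Agilitest API.

```python
import traceback

def run_step(name, action, artifacts):
    """Run one test step; on failure, record the step name and the traceback."""
    try:
        action()
        artifacts.append((name, "passed", None))
    except Exception:
        artifacts.append((name, "failed", traceback.format_exc()))
        raise  # still fail the test: the artifact is for analysis, not recovery

artifacts = []
try:
    run_step("open login page", lambda: None, artifacts)
    run_step("submit credentials", lambda: 1 / 0, artifacts)  # deliberately fails
except ZeroDivisionError:
    pass

print([(name, status) for name, status, _ in artifacts])
# → [('open login page', 'passed'), ('submit credentials', 'failed')]
```

The point is that the failing campaign leaves behind enough context to be analyzed cheaply the next morning, keeping the per-campaign analysis cost low.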
Running a profitable test automation project is not always easy. You have to keep an eye on several parameters and know where to draw the line on what you automate. That stopping point depends on many factors and on the context, which is why it is generally preferable to study the scope to be automated, define a strategy for the tests to automate, and follow up on these tests throughout their life.