Are we really moving from manual to automated tests?
In theory, thanks to BDD, tests can be automated very quickly, even before their very first execution.
This theory comes with a number of good practices, such as automating as early as possible, within the very sprint in which the user stories are developed in an Agile approach.
The theory is all very well, but practice often looks very different! One obvious reason is context.
Automating from the outset is all very well... but what can you do when you're working on existing software that doesn't have automated tests?
Similarly, automating a feature may not be relevant when major changes are planned for that same feature.
In practice, we regularly see tests being automated rather late, based on a test repository used for regression purposes.
In this case, the usual practice is to automate existing regression tests, as these have already proven their effectiveness.
Note: in some contexts there is no scripted or defined regression campaign. In that case, test automation can be an opportunity to address this, by defining a test repository for that campaign.
Automating manual regression tests
Now that we know why teams need to automate manual tests, it's important to know how!
Automation is an investment. As with any investment, we expect a return as quickly as possible.
Determine the value/effort ratio of tests to be automated, in order to prioritize test automation.
Similarly, automating manual tests can be seen as an Agile development effort, where the tests to be automated are the user stories and the campaign to be automated is the product. Seen this way, it becomes clear that the tests to be automated must be prioritized in order to maximize value delivery.
This prioritization is done in a fairly "classic" way, by combining the effort required to automate a test with the value that its automation is expected to bring.
All that remains is to define how to evaluate this "value".
Calculating the value can be complex because, depending on the reasons for automating (saving time, saving money, implementing continuous testing...), it can vary greatly. It depends on a number of factors, such as:
- Time spent running tests
- Frequency of test execution
- The business value of the test
- Test execution complexity...
However, it is essential to estimate the value correctly: as in Agile, it can lead to a decision not to automate certain tests if the value is insufficient relative to the effort required to automate them.
It is therefore just as essential to estimate the effort required to automate the tests. This effort is even greater for the first tests, when an entire architecture has to be put in place. Indeed, it is highly recommended to break tests down into reusable sub-scripts, in order to limit maintenance and simplify future test automation. The effort required to automate a test will depend on:
- The quality of the manual test: is it well written, clear, with steps and expected results?
- The complexity of the actions to be automated
- The complexity of the path: is it modal? Is it long or short?
- Potential dependencies
- Data sensitivity
- Potential impact on other tests...
This combination of effort and value can then be used to prioritize the backlog of tests to be automated. As with a "product" backlog, this will certainly evolve, as will the prioritization of its constituent elements.
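As an illustration, the value/effort ratio described above can be turned into a simple score for ordering the backlog. This is a minimal sketch under stated assumptions: the factors, their weighting, and the test names are purely illustrative and should be adapted to your own context and indicators.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    minutes_per_run: float   # time spent running the test manually
    runs_per_month: int      # execution frequency
    business_value: int      # e.g. 1 (low) to 5 (high)
    effort_days: float       # estimated automation effort

def priority_score(tc: TestCase) -> float:
    # Value combines time saved per month with business importance;
    # dividing by effort gives a simple value/effort ratio.
    time_saved = tc.minutes_per_run * tc.runs_per_month
    value = time_saved * tc.business_value
    return value / tc.effort_days

backlog = [
    TestCase("login", 5, 40, 5, 1.0),
    TestCase("export report", 20, 4, 3, 3.0),
    TestCase("admin settings", 10, 1, 2, 2.0),
]

# Highest value/effort ratio first
for tc in sorted(backlog, key=priority_score, reverse=True):
    print(f"{tc.name}: score {priority_score(tc):.0f}")
```

As with any backlog prioritization, the scores are only a starting point for discussion, and both the weights and the estimates should be revisited as the campaign evolves.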
Note: complexity depends on the automation tool and the skills of the team.
Now that prioritization is done, it's time to start automating the tests, provided the automation tool has been selected and the team has the necessary skills (with training if necessary).
It may be a good idea to continue working in the Scrum way, with milestone dates and objectives for the tests to be automated. As with Scrum, it may be necessary to work on "technical" tasks to facilitate automation and future maintenance.
The first tests are therefore likely to take longer to automate than the following ones, especially if you want to follow best practice in automated test development. That practice limits maintenance by using sub-scripts, which then become the equivalent of keywords and enable keyword-driven automation, also known as Keyword Driven Testing (KDT).
It can also be worthwhile to parameterize these keywords and combine this KDT automation with data-driven testing for elements such as authentication or form filling.
As you can see, one test to be automated does not necessarily mean one automated test.
In the end, one test to be automated may result in:
- Several sub-scripts
- Several sub-scripts assembled into a single test
- Several tests to better target objectives and facilitate analysis
- One dataset for a data-driven test (e.g. on login)
- Several datasets to offer better coverage or adapt to certain events
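To make the idea concrete, here is a minimal sketch of keyword-driven testing combined with data-driven execution. The page interactions are stubbed with a step log; in a real suite they would call your automation tool (Selenium, Playwright, etc.), and the keyword names, field names, and datasets are purely illustrative assumptions.

```python
# Low-level keywords: reusable sub-scripts that hide the raw actions.
def open_page(url: str, log: list) -> None:
    log.append(f"open {url}")

def fill_field(field: str, value: str, log: list) -> None:
    log.append(f"fill {field}={value}")

def click(button: str, log: list) -> None:
    log.append(f"click {button}")

# Higher-level keyword built from the lower-level ones: this is the
# sub-script that many tests can reuse instead of re-scripting login.
def login(user: str, password: str, log: list) -> None:
    open_page("/login", log)
    fill_field("user", user, log)
    fill_field("password", password, log)
    click("submit", log)

# Data-driven: the same keyword sequence runs over several datasets,
# covering both the nominal case and an error case.
datasets = [
    {"user": "alice", "password": "secret", "expect": "ok"},
    {"user": "bob", "password": "wrong", "expect": "error"},
]

for data in datasets:
    steps: list = []
    login(data["user"], data["password"], steps)
    print(data["user"], "->", len(steps), "steps, expecting", data["expect"])
```

This also illustrates the point above: one manual login test can become one reusable keyword plus several datasets, rather than a single automated script.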
Monitoring automated tests
The true value of automation should be apparent as soon as the first automated tests are made available.
It is therefore essential to define indicators that show the contribution of these tests, so that the eventual return on investment can be evaluated and monitored. It is thanks to this monitoring that decisions can be made, such as:
- Continuing automation
- Stopping automation (this may be for a number of reasons, such as the achievement of set objectives or a return on investment deemed insufficient)
- Changing strategy in order to improve
- Changing tools to better meet the need...
This follow-up is essential. It must be carried out from an operational point of view, with the team working on the technical aspects, but also from a "project" point of view, with an evaluation of the concrete benefits of automation.
This is the only way to know and demonstrate that automating your manual testing campaign is worthwhile and successful. It is also the only way to know whether it is preferable to stop or to move in a different direction, in order to achieve the objectives that test automation should fulfil.
The automation of manual regression campaigns is a very common need, and one that must be addressed. Because this automation is not carried out "as soon as possible", it brings a certain complexity, particularly when it comes to prioritizing the tests to be automated.
In addition to the usual good practices, one way to ensure successful automation is to manage the construction of your automated test campaign as a "product". This requires real prioritization work, with an assessment of value and effort, as well as regular follow-up, with constant adjustment of the strategy and of the monitoring indicators.