April 27, 2022

What is a regression test?

Marc Hage Chahine

The ISTQB proposes this definition of a regression test: ‘testing of a previously tested program, after a modification, to ensure that defects have not been introduced or uncovered in unmodified parts of the software as a result of the changes made. These tests are performed when the software or its environment is modified.’

This definition highlights important principles, such as testing software that has already been tested before, after that same software has been modified. We will come back to these notions in the rest of the article. Nevertheless, like any definition, it cannot answer all our terminological and operational questions.

Let's start with the terminological aspect of a regression test

As you have probably noticed, it is difficult to talk about ‘regression testing’ in France without being asked: ‘Are you talking about non-regression testing?’

We are touching on a sensitive subject that is quite specific to France. 

The term most often used outside of France is ‘regression tests’. This terminology implies that regressions can exist even if we do not find any. This refers to the well-known testing principle: ‘exhaustive testing is impossible’.

In France, the term ‘non-regression’ is used almost exclusively, even when working in an English-speaking environment. This terminology bothers me, because it implies that after running these tests we can guarantee there is no regression, which contradicts the principle mentioned above. It leads to misunderstandings, and I have often received reproachful remarks such as: ‘you did the non-regression on this application, so why didn't you detect this regression?’, delivered with real disappointment, because the person genuinely believed there would be no regression at the production launch.

Now let's talk about the operational context 

Definition of a regression campaign

A regression test rarely stands alone! It is part of a coherent set of several tests, each with its own characteristics and objectives. This set represents the regression campaign (or ‘non-regression’ campaign).

The objective of a regression campaign is to check that the changes made to the software (or to its environment) have not caused major bugs in the functionality already in place, in other words, that these changes have not caused major regressions.
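To make this concrete, here is a minimal sketch in Python (all names are hypothetical, for illustration only): a regression test simply re-checks, after a change, behavior that had already been validated before the change.

```python
# A minimal, hypothetical sketch of a regression test: it re-checks
# behavior that was already validated before the latest change.

def apply_discount(price: float, percent: float) -> float:
    """Existing, previously tested feature: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_discount_behavior_unchanged():
    # These expected values were established when the feature first shipped;
    # the regression test asserts the new release still produces them.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(59.99, 25) == 44.99

test_discount_behavior_unchanged()
print("regression checks passed")
```

The point is not that the assertions are clever, but that they pin down existing behavior so that an unintended change makes them fail.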

You can notice two important points in this description of the purpose of a regression campaign:

  • First, in the ISTQB definition, regression campaigns (and thus regression testing) exist only on existing parts of the software. The use of the term ‘previously tested’ emphasizes this idea. The purpose of a regression campaign is not to find out whether there are bugs in the new features, or even whether existing bugs have been fixed. The goal of a regression campaign is to detect significant changes in the behavior of the software.
  • The second point is that the goal is never to find all the regressions. The goal is to detect major regressions, those that will impact our users. This is particularly important to remember, because a regression campaign can quickly become very expensive to run, maintain or evolve. Also, the goal is not to check in detail that the whole product matches what we expect... those verifications were done upstream.

Let's picture our software as a structure built from wooden bricks. The regression campaign, made up of regression tests, aims to verify that adding one or more new bricks has not compromised the part already built: for example, by collapsing one section, unbalancing another, or shifting the position of certain bricks... and this significantly.

There is still one ‘vague’ element here that cannot be made more precise, and that is the notion of ‘major’ or ‘significant’. To be properly identified, this element must necessarily be filtered through the context. A bug can be major for one product and totally minor for another... as one of the principles of testing states: testing is context dependent.

Now that we have defined the objectives of regression tests and of the campaign they make up, I think it is important to talk about its construction and maintenance.

Build and maintain your regression test campaign

In order to fully understand the construction and maintenance of a regression campaign, here is a helpful illustration:

The diagram was designed for an Agile cycle, but can also be adapted to a V-cycle, in particular its second and third columns.

The cycle starts with modifications being made. It is essential to validate that these changes do what we want. To do this, we test them in depth, whether they are new features, bug fixes or technical updates.

Once this validation is done, it is important to execute the regression campaign and analyze its results. When the changes reach the product in production, they become part of the product that must be covered by the regression campaign. Regression tests covering these changes should then be added as needed to enrich the campaign.
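A simple way to picture this enrichment step is a campaign that tests join once their feature is validated. Here is a hypothetical sketch (the decorator and test names are invented for illustration):

```python
# A hypothetical sketch of enriching a regression campaign: once a change
# is validated and deployed, the tests covering it join the campaign.

regression_campaign = []  # the growing set of regression tests

def regression_test(func):
    """Decorator: register a validated test in the regression campaign."""
    regression_campaign.append(func)
    return func

@regression_test
def test_login():  # legacy feature, already covered
    assert "admin".isalnum()

# A new feature was just validated in depth; its key checks are now
# added so that future releases keep covering it.
@regression_test
def test_new_export_feature():
    assert "report.csv".endswith(".csv")

def run_campaign():
    for test in regression_campaign:
        test()
    return len(regression_campaign)

print(f"executed {run_campaign()} regression tests")
```

In practice a test framework's own tagging mechanism (such as pytest markers) plays the role of this decorator, but the idea is the same: the campaign is an explicit, growing set.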

However, enriching the regression campaign is not enough to maintain it: it is obviously also necessary to maintain the regression tests that make up the campaign! Similarly, enrichment and maintenance quickly become insufficient, and it soon becomes necessary to evolve the test campaign if one wants to avoid the pesticide paradox (another test principle!).

To combat the pesticide paradox, it is essential to evolve your test campaign... which is often easier said than done. There are several ways to evolve your regression campaign by working on its tests. For example, it is possible to:

  • Propose new tests on legacy features (often related to bugs that were identified and fixed)
  • Modify the data or the path of tests that no longer detect bugs (while keeping their objective)

This is all well and good, but it regularly becomes operationally impossible to do! The reason is simple: with the process described, a snowball effect sets in, and the regression campaign grows bigger and bigger, with more and more tests. This causes problems on several levels:

  • The execution time is getting longer and longer 
  • Maintenance time is getting longer and longer too
  • The analysis time is also getting longer and longer
  • The cost of a campaign keeps increasing

To avoid this, there are many technical solutions, such as automation, parallelization, or good test factoring... These technical solutions can however be insufficient, and it is often necessary to remove some tests from the regression campaign. This can be done in two ways:

  • Prioritizing tests: always execute the highest-priority tests, then select the remaining regression tests based on a risk study (this decreases execution time, analysis time and cost, but does not reduce maintenance time).
  • Removing some regression tests outright from the campaign.
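The first approach can be sketched like this (a hypothetical example; the test names, priorities and risk scores are invented): the highest-priority tests always run, and the remaining execution budget is filled by descending risk score.

```python
# A hypothetical sketch of risk-based selection: always run the
# highest-priority tests, then fill the remaining execution budget
# with the riskiest of the optional tests.
from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    priority: int      # 1 = must run in every campaign
    risk_score: float  # from a risk study: impact x likelihood

campaign = [
    RegressionTest("checkout_payment", 1, 0.9),
    RegressionTest("user_login",       1, 0.8),
    RegressionTest("profile_avatar",   3, 0.2),
    RegressionTest("order_history",    2, 0.6),
    RegressionTest("newsletter_optin", 3, 0.1),
]

def select_tests(tests, budget):
    """Keep all priority-1 tests, then add the riskiest up to the budget."""
    mandatory = [t for t in tests if t.priority == 1]
    optional = sorted((t for t in tests if t.priority != 1),
                      key=lambda t: t.risk_score, reverse=True)
    return mandatory + optional[:max(0, budget - len(mandatory))]

selected = select_tests(campaign, budget=3)
print([t.name for t in selected])
# → ['checkout_payment', 'user_login', 'order_history']
```

Note that the unselected tests still exist and still need maintenance, which is exactly why this approach reduces execution and analysis time but not maintenance time.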

In the end, the regression tests of a regression campaign largely correspond to tests performed during the validation of the product, complemented by original tests linked to the life of the software.

Should regression tests be automated or manual?

The subject of regression test automation was a real question that, before the generalization of Agile methodologies, was often answered ‘no’ for budgetary reasons. The democratization of iterative development has multiplied the need to execute regression tests through regression campaigns which, ideally, are executed at least once per User Story, and therefore several times per sprint of a few weeks.

Regression automation then becomes a necessity... especially if we want to move towards continuous deployment. 

The only remaining obstacles to automating regression campaigns now concern whether it is possible to do so at all.

This impossibility can have two causes:

A technical impossibility (or a prohibitive cost) to execute certain tests

This impossibility is often linked to the testing tool used... I remember choosing not to automate a regression test, in a campaign containing 200 of them, because even though it was technically feasible, it would have required many days of development for a test that was done manually in 30 seconds.

A solution like Agilitest allows you to automate on many technologies and to run multichannel tests. Scenarios that are impossible to automate are rare (even if they exist, for example some advanced graphical tests). From experience, I can even say that with Agilitest it is possible to automate tests, and even processes, that previously seemed almost impossible to automate.

An impossibility linked to the team's skills

This impossibility is now much more frequent than technical impossibility (cost considerations aside). Automation is often technical, especially with tools not designed for testers with a functional, not very technical profile. Fortunately, there are now accessible and efficient tools on the market that overcome these difficulties. Among them is, of course, Agilitest, which was initially designed for these tester profiles! Feedback from Agilitest customers is very good, and Agilitest is adopted as soon as it is tried (no unsubscribes recorded to date).

If a regression campaign has to be run entirely or partly manually, it is a good idea to limit as much as possible the number of recurring regression tests to execute, and to complement these campaigns with exploratory test campaigns aimed at detecting regressions.

To sum up

A regression test is above all a test of a part of the product that has already been tested, whose role is to detect changes in behavior caused by modifications to the software. This test is part of a specific campaign whose goal is to detect major regressions in order to avoid deploying them.

These tests, which used to be essentially manual, must now, with Agile development, be automated in the vast majority of cases, at least in part, because they must be executed very regularly. The less automated these tests are, the less often they are executed and the later regressions are detected, which, in addition to going against the ‘test early’ principle, leads to losses of money and motivation.


About the author

Marc Hage Chahine

Agilitest Partner – Passionate about software testing at Qestit - ISTQB Certified (Foundation, Agile, Test Manager)
