Automated Testing Ice cream cone

Automated testing

Automation concerns unit tests more than system tests

Description

The Automated Testing Ice Cream Cone (ATICC) is an antipattern which finds its roots in the “Shift Left” principle.

It is best to address testing issues as early as possible, because the later testing is done, the harder bugs are to detect and the more costly the fixes.

Shift Left Testing leverages preventive testing to reduce the cost of bugs [Moustier 2019-1]

This shift towards requirement engineering has some impact in terms of testing strategies…

According to the ISTQB, there are three levels of testing [ISTQB 2018]:

  • Component/Unit Testing (UT): the term covers some 24 different definitions [Fowler 2014], all about low-level testing of small parts; such tests are usually written by Developers and run faster than higher-level ones. To keep units isolated, test doubles are added
  • Integration Testing (IT): once every part works as specified thanks to UTs, IT ensures those parts are able to work together; test doubles delimit the area within which IT takes place [Fowler 2018], so these tests still require some technical artifacts
  • System/End-to-End Testing (E2E): once every test double has been removed, the whole system can be assessed

Practicing testing at every level reveals that the more isolated the tests are, the closer to the code they are, the faster they run, and the sooner they can be made available. From a Shift Left perspective, it then becomes clear that there should mainly be UTs, and that the higher the level, the fewer tests there should be; this draws a pyramid.
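The speed and isolation that put UTs at the base of the pyramid can be illustrated with a minimal sketch: a hypothetical `checkout` unit depends on a payment gateway, which the test replaces with a test double so the check runs in memory, with no network or database.

```python
from unittest.mock import Mock

# Hypothetical unit under test: computes an order total and charges a gateway.
def checkout(items, gateway):
    total = sum(price for _, price in items)
    gateway.charge(total)  # external dependency, isolated by a test double below
    return total

# Unit test: the gateway is a test double, so the test needs no network
# and no database, and runs in milliseconds.
gateway = Mock()
assert checkout([("book", 12), ("pen", 3)], gateway) == 15
gateway.charge.assert_called_once_with(15)
```

The same behavior checked through an E2E script would need a deployed UI, a real gateway environment, and far more execution time, which is exactly why the pyramid keeps most tests at this level.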

Test Pyramid vs test slowness and isolation [Vocke 2018]

When applied to test automation, the pyramid looks the same, except that at the top of the stack some manual testing must complete the existing scripts with unpredictable situations, discovered notably through Exploratory testing. This copes with the “Pesticide paradox” test principle, which states that test scripts left unchanged eventually become ineffective at finding new bugs [ISTQB 2018].

The Test Automation Pyramid also includes some manual testing [Moustier 2019-1]

The pyramid becomes a pattern which can be adapted in many different ways [Meerts 2019], and the ATICC happens when the pyramid is reversed. This phenomenon appears when test automation mainly focuses on E2E testing, with fewer ITs and even fewer UTs.

The ATICC is an antipattern notably because:

  • most test scripts are written in reactive mode - they can only be adapted once the release is made available
  • E2E tests are usually fragile since they depend on the UI, are expensive to write, and are time-consuming to run because of network and database dependencies [Fowler 2012]
  • a lot of bugs stay hidden from an E2E point of view because of the poor testability of the application’s lower layers, which generates low confidence in the product
  • it would have been easier to detect bugs as they are introduced, with no round trips between development, delivery and system testing
  • taking care of lower-level testing before E2E would reduce the delivery cost
  • it worsens the First Pass Yield
  • most issues remain unnoticed during the delivery process, feedback opportunities are lost, and the testing budget gets cut to catch up with the schedule [Cohn 2009]
  • it generally reveals silos between testing levels and generates a mini waterfall [Pereira 2014]

Illustration of the ATICC [Moustier 2019-1]

Sometimes, the ATICC antipattern is also nicknamed the “Testing Cupcake Antipattern”: Development Teams do provide a significant amount of UTs, so there is a large base, but automation is still focused on the higher testing levels, with a lot of manual testing [Pereira 2014].

Software Testing Cupcake Antipattern [Pereira 2014]

The Test Automation Pyramid idea is often credited to Mike Cohn’s “Succeeding With Agile: Software Development Using Scrum”, which emphasizes integrating testing throughout the process instead of doing it later on [Cohn 2009]; however, the oldest artifact dates back to 2006 [Gheorghiu 2006], and Martin Fowler even mentions a conversation with Lisa Crispin around 2003-2004 [Fowler 2017] [Moustier 2019-1].

Actually, the foolish concept of waiting for the product to be ready came from Winston Royce’s paper [Royce 1970], which tried to find an organizational solution to massive development needs. His simple idea stems from the belief that “you cannot build the roof before the foundations”; unfortunately:

  • with software development, the rendering of the house (the UI) can be done before the foundations (the application sub-levels)
  • Royce himself mentioned this way of working would be risky and would invite failures

Royce mentioned the waterfall concept would not work but apparently no one read the comment underneath the diagram [Moustier 2020]

The industry had to wait for Takeuchi and Nonaka’s publication [Nonaka 1986] and for Mike Cohn in 2009 to make this foundation-roof fallacy obvious to nearly everybody.

To avoid the ATICC, there are many recipes; they mostly involve Building In Quality techniques [SAFe 2021-27] [Cohn 2009].

Impact on the testing maturity

One of the drawbacks of automated UI tests is the latency due to the browser; fortunately, it is possible to run “headless” tests (aka “subcutaneous tests”) [Fowler 2017], i.e. instead of depending on a Selenium driver that interacts with the rendered HTML, the robot directly interprets the HTML, which can save 10% of the execution time [SauceLabs 2019].
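A subcutaneous test can be sketched as follows; the application layer and names are hypothetical, the point being that the test exercises the same logic the page would trigger while skipping browser rendering entirely.

```python
# Hypothetical application layer sitting just below the UI: the same function
# the search page of a web shop would call when a user submits a query.
def search_products(catalog, query):
    return [name for name in catalog if query.lower() in name.lower()]

# Subcutaneous test: no browser, no Selenium driver, no rendered HTML -
# the logic under the UI is checked directly, so the test stays fast and stable.
catalog = ["Blue Mug", "Red Mug", "Notebook"]
assert search_products(catalog, "mug") == ["Blue Mug", "Red Mug"]
assert search_products(catalog, "pen") == []
```

Tests written this way survive cosmetic UI changes, which is precisely what makes E2E scripts fragile.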

But some situations require a somewhat different approach, especially when the architecture involves microservices. At Spotify, for instance, a honeycomb shape is used [Fowler 2021].

In fact, since the creation of the test automation pyramid, the concept has been refined according to context, as the original vision seems too simplistic and can therefore be misleading [Vocke 2018]. For example, in the aircraft industry, the Testing Pyramid approach is the backbone of the certification process, which is classically based on previously certified hardware. Unfortunately, this certification process takes a long time, notably because the material properties of substructures do not correlate with those of larger structures. Nowadays, a trend known as “Breaking the Pyramid” consists in applying “Virtual Testing” [You 2019]: realistic simulators of hardware parts are provided to accelerate developments. This Virtual Testing is much like providing a stub to enable “Contract-Driven Development”. It is named “Model-Based System Engineering” (MBSE) [Galiber 2019] [SAFe 2021-19] and improves the project’s agility.
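The stub-as-contract idea behind Virtual Testing can be sketched as follows; the sensor contract and all names are hypothetical. The simulator implements the same interface as the real part, so development can proceed against the contract long before the certified hardware is available.

```python
from abc import ABC, abstractmethod

# Hypothetical contract shared by the real hardware driver and its simulator.
class AltitudeSensor(ABC):
    @abstractmethod
    def read_feet(self) -> int: ...

# Virtual Testing stub: a simulator honoring the same contract,
# usable long before the certified hardware part exists.
class SimulatedAltitudeSensor(AltitudeSensor):
    def __init__(self, altitude_feet: int):
        self._altitude = altitude_feet

    def read_feet(self) -> int:
        return self._altitude

# Code under development depends only on the contract, never on real hardware.
def altitude_alarm(sensor: AltitudeSensor, floor_feet: int = 1000) -> bool:
    return sensor.read_feet() < floor_feet

assert altitude_alarm(SimulatedAltitudeSensor(500)) is True
assert altitude_alarm(SimulatedAltitudeSensor(30000)) is False
```

Swapping `SimulatedAltitudeSensor` for the real driver later requires no change to `altitude_alarm`, which is what lets the pyramid be “broken” without losing confidence.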

In the e-commerce industry, the traditional pyramid pattern can also be challenged when a special focus is put on the Customers’ experience [Craske 2020]. More generally, this pattern can definitely be rethought thanks to the Swiss cheese model [Reason 2006] when considering Context-Driven Testing (CDT), for instance when developments are based on off-the-shelf components or software packages, for which unit testing is not applicable.

The Swiss Cheese Model - Illustration from [Moustier 2019-1]

The ATICC antipattern is often perceived from a functional point of view; however, Non-Functional Requirement (NFR) testing should also be considered. If such testing occurs only at the end of the delivery process, the very same issue arises in yet another situation. Only a CDT approach with shared rationales may lead to successful and optimal releases.

Test pyramid pattern applied to NFR

Agilitest’s standpoint on this practice

Since Agilitest is a #nocode technology, it is really easy to fall into the ATICC antipattern, especially when combined with Conway’s law effects and independent scripting Teams. This is why it is particularly important to pay special attention to this matter and to have a clear vision of your project’s context, in order to limit the impact of frequent UI changes or of the silo issues explained beforehand.



© Christophe Moustier - 2021