Automated Testing Ice Cream Cone
Automation concerns unit tests more than system tests
The Automated Testing Ice Cream Cone (ATICC) is an antipattern rooted in the “Shift Left” principle.
Testing issues are best addressed as early as possible, because the later testing is done, the harder bugs are to detect and the more costly they are to fix.
This shift toward requirements engineering has consequences for testing strategies.
According to the ISTQB, there are three levels of testing [ISTQB 2018]:
- Component/Unit Testing (UT): low-level testing of small parts, for which some 24 different definitions exist [Fowler 2014]; these tests are usually written by developers and run faster than the higher levels. To keep units isolated, their dependencies are replaced with test doubles.
- Integration Testing (IT): once every part works as specified thanks to UTs, IT checks that those parts work together; test doubles delimit the area within which the integration is exercised [Fowler 2018], so these tests still require some technical artifacts.
- System/End-to-End Testing (E2E): once every test double has been removed, the whole system can be assessed.
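To illustrate the role of a test double at unit level, here is a minimal Python sketch; the `CurrencyApi` dependency and `PriceService` class are invented for this example:

```python
from unittest.mock import Mock

class PriceService:
    """Computes a price in a target currency via an external rate provider."""
    def __init__(self, currency_api):
        self.currency_api = currency_api  # dependency to be doubled in UT

    def price_in(self, amount_eur, currency):
        rate = self.currency_api.rate("EUR", currency)
        return round(amount_eur * rate, 2)

# Unit test: the real currency API is replaced by a test double,
# so the test is fast, deterministic, and network-independent.
fake_api = Mock()
fake_api.rate.return_value = 1.10
service = PriceService(fake_api)
assert service.price_in(100, "USD") == 110.0
fake_api.rate.assert_called_once_with("EUR", "USD")
```

Because the double stands in for the network call, this unit test can run in milliseconds and be written before any real rate provider exists.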
Practicing testing at every level reveals that the more isolated tests are, the closer they are to the code, the faster they run, and the sooner they can be made available. From a Shift Left perspective, it then becomes clear that there should mainly be UTs, with fewer tests at each higher level; drawn as layers, this forms a pyramid.
When applied to test automation, the pyramid looks the same, except that at the top of the stack some manual testing must complete the existing scripts with unpredictable situations, discovered notably through Exploratory Testing. This copes with the “Pesticide paradox” testing principle, which states that test scripts left unchanged become ineffective at finding new bugs after a while [ISTQB 2018].
The pyramid has become a pattern that can be adapted in many different ways [Meerts 2019], and the ATICC appears when the pyramid is inverted: test automation mainly focuses on E2E testing, with fewer ITs and even fewer UTs.
The ATICC is an antipattern notably because:
- most test scripts are written in reactive mode: they are adapted only once the release is made available
- E2E tests are usually fragile since they are UI-dependent, expensive to write, and time-consuming to run [Fowler 2012] because they depend on the network and databases
- many bugs stay hidden from an E2E point of view because of the poor testability of the application’s lower layers, which generates low confidence in the product
- it would have been easier to detect bugs as they are introduced, with no round trips between development, delivery, and system testing
- taking care of lower-level testing before E2E would reduce the delivery cost
- it worsens the First Pass Yield
- most issues go unnoticed during the delivery process, feedback opportunities are lost, and the testing budget gets cut to catch up with the schedule [Cohn 2009]
- it generally reveals silos between testing levels and generates a mini waterfall [Pereira 2014]
The ATICC antipattern is sometimes nicknamed the “Software Testing Cupcake”: Development Teams do provide a significant amount of UTs, so there is a large base, but automation still focuses on the higher testing levels, with a lot of manual testing on top [Pereira 2014].
The Test Automation Pyramid is often credited to Mike Cohn’s “Succeeding With Agile: Software Development Using Scrum”, which emphasizes integrating testing throughout the process instead of doing it later on [Cohn 2009]; however, the oldest known artifact dates back to 2006 [Gheorghiu 2006], and Martin Fowler mentions a conversation with Lisa Crispin around 2003-2004 [Fowler 2017] [Moustier 2019-1].
Actually, the foolish idea of waiting for the product to be ready before testing it comes from Winston Royce’s paper [Royce 1970], which sought an organizational solution to massive development needs. His simple idea stems from the belief that “you cannot build the roof before the foundations”; unfortunately:
- in software development, the rendering of the house (the UI) can be done before the foundations (the application’s lower levels)
- Royce himself warned that this way of working would be risky and would invite failures
The industry had to wait for Takeuchi and Nonaka’s publication [Nonaka 1986], and then for Mike Cohn in 2009, for this foundation-roof fallacy to become obvious to nearly everybody.
Many recipes help avoid the ATICC; they mostly involve Built-In Quality techniques [SAFe 2021-27] [Cohn 2009] such as:
- Test Harnesses
- Collaboration notably through 3 Amigos [Pereira 2014] or PanTesting
- Automation of observables
- Definition of Done (DoD)
- Living Documentation
- Pair Programming / Mob Programming
- Testability as soon as possible
- Testability of the Product Backlog
Impact on the testing maturity
One of the drawbacks of automated UI tests is the latency due to the browser; fortunately, it is possible to run “headless” tests (aka “subcutaneous tests”) [Fowler 2017], i.e. instead of depending on a Selenium driver that interacts with the rendered HTML, the robot directly interprets the HTML, which can save 10% of the execution time [SauceLabs 2019].
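As a rough illustration of the idea of interpreting HTML directly rather than driving a rendered page, here is a sketch using Python’s standard-library parser; the page content and the `LinkCollector` class are invented, and in a real run the HTML would come from an HTTP response rather than an inline string:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Interprets raw HTML directly, with no browser rendering step."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the target of every anchor tag encountered in the stream.
        if tag == "a":
            self.links.append(dict(attrs).get("href"))

# Inline sample standing in for a fetched page.
html = '<html><body><a href="/cart">Cart</a><a href="/login">Login</a></body></html>'
parser = LinkCollector()
parser.feed(html)
assert parser.links == ["/cart", "/login"]
```

Skipping the rendering step is what makes such checks faster than a full browser-driven run, at the cost of not exercising the actual UI rendering.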
But some situations call for something slightly different, especially when the architecture involves microservices; for instance, Spotify uses a honeycomb shape [Fowler 2021].
In fact, since the creation of the test automation pyramid, the concept has been refined to fit the context, the original vision seeming too simplistic and therefore potentially misleading [Vocke 2018]. For example, in the aircraft industry, the Testing Pyramid approach is the backbone of the certification process, which is classically based on previously certified hardware. Unfortunately, certification takes a long time, notably because the material properties of substructures do not correlate with those of larger structures. A trend known as “Breaking the Pyramid” has therefore emerged, which consists in applying “Virtual Testing” [You 2019]: realistic simulators of hardware parts are provided to accelerate development. Virtual Testing is much like providing a stub to enable “Contract-Driven Development”; the approach is named “Model-Based Systems Engineering” (MBSE) [Galiber 2019] [SAFe 2021-19] and increases the project’s agility.
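The stub-for-a-contract idea can be sketched as follows; the `AltimeterContract` interface, the simulator, and the readings are all invented for this illustration:

```python
from abc import ABC, abstractmethod

class AltimeterContract(ABC):
    """Contract that both the real sensor driver and its simulator must honor."""
    @abstractmethod
    def altitude_m(self) -> float: ...

class SimulatedAltimeter(AltimeterContract):
    """Virtual-testing stand-in: returns scripted values instead of hardware reads."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def altitude_m(self) -> float:
        return next(self._readings)

def climb_detected(sensor: AltimeterContract) -> bool:
    """Application logic developed against the contract, not the hardware."""
    first, second = sensor.altitude_m(), sensor.altitude_m()
    return second > first

# Development and testing can proceed before the certified hardware exists.
assert climb_detected(SimulatedAltimeter([100.0, 150.0])) is True
assert climb_detected(SimulatedAltimeter([150.0, 100.0])) is False
```

Once the real hardware driver implements the same contract, the simulator can be swapped out without changing the application logic.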
In the e-commerce industry, the traditional pyramid pattern can also be challenged when a special focus is placed on the customers’ experience [Craske 2020]. More generally, this pattern can definitely be rethought through the Swiss cheese model [Reason 2006] when considering Context-Driven Testing (CDT), for instance when development is based on off-the-shelf components or software packages, for which unit testing is not applicable.
The ATICC antipattern is often perceived from a functional point of view; however, Non-Functional Requirements (NFR) testing should also be considered. If such testing occurs only at the end of the delivery process, the very same issue will recur in yet another form. Only CDT with shared rationales may lead to successful and optimal releases.
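One possible way to shift an NFR check left is to assert a budget at unit level instead of discovering performance issues only in late E2E runs; in this minimal sketch, the `checkout_total` function and the latency budget are arbitrary placeholders:

```python
import time

def checkout_total(prices):
    """Toy business function standing in for real application code."""
    return sum(prices)

# NFR check shifted left: a latency budget asserted alongside the functional check.
BUDGET_SECONDS = 0.5  # arbitrary budget chosen for this illustration
start = time.perf_counter()
result = checkout_total(range(100_000))
elapsed = time.perf_counter() - start

assert result == 4999950000  # functional expectation
assert elapsed < BUDGET_SECONDS, f"latency budget exceeded: {elapsed:.3f}s"
```

Failing such a check during development gives immediate feedback on a performance regression, long before a dedicated load-testing phase.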
Agilitest’s standpoint on this practice
Since Agilitest is a #nocode technology, it can easily fall into the ATICC antipattern, especially when combined with Conway’s law effects and independent scripting teams. This is why it is particularly important to pay close attention to this matter and to have a clear vision of your project’s context, in order to limit the impact of frequent UI changes or the silo issues explained beforehand.
To go further
- [Cohn 2009] : Mike Cohn - 2009 - “Succeeding With Agile: Software Development Using Scrum” - isbn:9788131732267
- [Craske 2020] : Antoine Craske - APR 2020 - “The Traditional Test Automation Pyramid, Pitfalls and Anti-patterns” - https://laredoute.io/blog/the-traditional-test-pyramid-pitfalls-and-anti-patterns/
- [Fowler 2014] : Martin Fowler - MAY 2014 - “UnitTest” - https://martinfowler.com/bliki/UnitTest.html
- [Fowler 2017] : Martin Fowler - NOV 2017 - “TestPyramid” - https://martinfowler.com/bliki/TestPyramid.html
- [Fowler 2018] : Martin Fowler - JAN 2018 - “IntegrationTest” - https://martinfowler.com/bliki/IntegrationTest.html
- [Fowler 2021] : Martin Fowler - JUN 2021 - “On the Diverse And Fantastical Shapes of Testing - Pyramids, honeycombs, trophies, and the meaning of unit testing” - https://martinfowler.com/articles/2021-test-shapes.html
- [Galiber 2019] : Flavius Galiber, III - SEP-OCT 2019 - “Lean-Agile MBSE: Best Practices & Challenges” - SAFe Summit 2019 - https://vimeo.com/372960506
- [Gheorghiu 2006] : Grig Gheorghiu - FEB 2006 - “Thoughts on giving a successful talk” - http://agiletesting.blogspot.com/2006/02/thoughts-on-giving-successful-talk.html
- [ISTQB 2018] : ISTQB - 2018 - “Certified Tester Foundation - Level Syllabus” - https://www.istqb.org/downloads/category/2-foundation-level-documents.html
- [Meerts 2019] : Joris Meerts - 2019 - “Here Be Pyramids” - https://www.testingreferences.com/here_be_pyramids.php
- [Moustier 2019-1] : Christophe Moustier - JUN 2019 - “Le test en mode agile” - ISBN 978-2-409-01943-2
- [Nonaka 1986] : Hirotaka Takeuchi, Ikujiro Nonaka - JAN 1986 - “The New New Product Development Game” - Harvard Business Review - https://hbr.org/1986/01/thenew-new-product-development-game
- [Pereira 2014] : Fabio Pereira - JUN 2014 - “Introducing the Software Testing Cupcake (Anti-Pattern)” - https://www.thoughtworks.com/insights/blog/introducing-software-testing-cupcake-anti-pattern
- [Reason 2006] : James T. Reason - OCT 2006 - “Revisiting the ‘Swiss Cheese’ Model of Accidents” - Eurocontrol - http://www.eurocontrol.int/eec/gallery/content/public/document/eec/report/2006/017_Swiss_Cheese_Model.pdf
- [Royce 1970] : Winston Royce - AUG 1970 - “Managing the Development of Large Software Systems” - https://www.praxisframework.org/files/royce1970.pdf
- [SAFe 2021-19] : SAFe - FEB 2021 - “Model-Based Systems Engineering” - https://www.scaledagileframework.com/model-based-systems-engineering/
- [SAFe 2021-27] : SAFe - FEB 2021 - “Built-in Quality” - https://www.scaledagileframework.com/built-in-quality/
- [SauceLabs 2019] : SauceLabs - APR 2019 - “Why, When and How to Use Headless Testing” - https://saucelabs.com/sauce-labs/resources/whitepapers/sl-wp-headless-testing-v2.pdf
- [Scott 2019] : Alister Scott - c. 2019 - “Testing Pyramids & Ice-Cream Cones” - https://watirmelon.blog/testing-pyramids/
- [Vocke 2018] : Ham Vocke - FEB 2018 - “The Practical Test Pyramid” - https://martinfowler.com/articles/practical-test-pyramid.html
- [You 2019] : Shawn You & Shawn Gao & Arlin Nelson - DEC 2019 - “Breaking the Testing Pyramid with Virtual Testing and Hybrid Simulation” - https://www.researchgate.net/publication/341660077_Breaking_the_Testing_Pyramid_with_Virtual_Testing_and_Hybrid_Simulation