In the cable industry, wires are laid on a bench that mimics the final integration layout: cables are harnessed on a board with predefined attachment waypoints that match the actual attachment locations [Whitney Blake 2016]. This way of working is called “wire harnessing”, most likely because the resulting cables look like the harnesses put on a horse.
According to the ISTQB, a Test Harness (TH) is “a test environment comprised of stubs and drivers needed to execute a test” [ISTQB 2015]. The Drivers, associated with automated testing tools, follow a test scenario and inject data into a Component Under Test (CUT), while the Stubs (aka “Test Doubles”, TD) mock the components the CUT depends on. These TD are used to isolate the CUT from external and uncontrolled behaviors [Ribeiro Rocha 2008]. In the wire harness example, the whole harness board acts as a Stub of the operating environment, and the waypoints on the board provide repeatable and isolated Drivers that ensure the produced cables will fit the operating environment.
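As a minimal sketch of this Driver/Stub split (all class and method names are hypothetical), a TH pairs a Driver that feeds the CUT with a Stub that replaces its dependency:

```python
# Hypothetical CUT: converts an amount using a rate provider it depends on.
class CurrencyConverter:
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider  # dependency to be stubbed

    def convert(self, amount, currency):
        return amount * self.rate_provider.rate(currency)

# Stub (Test Double): isolates the CUT from the real, uncontrolled rate service.
class RateProviderStub:
    def rate(self, currency):
        return 2.0  # hardcoded, repeatable value

# Driver: follows a test scenario and injects data into the CUT.
def drive_conversion_scenario():
    cut = CurrencyConverter(RateProviderStub())
    return [cut.convert(amount, "EUR") for amount in (1, 10, 50)]

print(drive_conversion_scenario())  # [2.0, 20.0, 100.0]
```

Because the Stub always answers 2.0, the Driver's scenario is repeatable and independent of any real rate service.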
A TH may be:
- part of a project deliverable, although it is usually kept separate from the production source code
- shared and reused across multiple projects
A TH notably enables:
- Simulating corner cases and situations that are hard to configure in a production-like environment
- Increasing the probability that regressions are caught, because the TH exercises corner cases
- Faster test scripts, since stubs are generally faster than the components they mock, especially when the mocked parts are reached over the network
- Repeatability of subsequent test runs, because the TH prevents any data variation from the mocked parts, and thus false positives
- Offline testing, because remote parts such as web services are mocked locally
- Detecting missing features between the CUT and the test intention [André 2013]
- Testability: since the TH generates Drivers and TD, interfaces are required on the CUT to pilot it, and likewise on the components the CUT invokes
- Modularity improvement [Gao 2003]
- Integration testing with mocked parts, avoiding big bang approaches
When it comes to TD, developers have coined several terms to distinguish different capabilities, with subtle differences and confusing similarities between the definitions [Meszaros 2011][Sheel 2021]:
- Stubs: TD that return hardcoded values to method calls - they always return the same output regardless of the input
- Mocks: improved stubs whose responses vary dynamically according to a scenario
- Fakes: concrete implementations that work like the actual one - a simplified version of the production code without its heavyweight dependencies
- Spies: TD that track when functions were called and with which arguments
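The four flavors above can be sketched with Python's `unittest.mock` module (the `get_user`/`save` names and the data are made up for illustration):

```python
import unittest.mock as mock

# Stub: always returns the same hardcoded value, whatever the input.
stub = mock.Mock()
stub.get_user.return_value = {"name": "Ada"}

# Mock: responses vary dynamically according to the scenario.
dyn = mock.Mock()
dyn.get_user.side_effect = lambda uid: {"name": "Ada"} if uid == 1 else None

# Fake: a working but simplified implementation (an in-memory "database").
class FakeUserRepo:
    def __init__(self):
        self._users = {}
    def save(self, uid, user):
        self._users[uid] = user
    def get_user(self, uid):
        return self._users.get(uid)

# Spy: records which calls were made and with which arguments.
spy = mock.Mock()
spy.get_user(42)
assert spy.get_user.call_args == mock.call(42)

fake = FakeUserRepo()
fake.save(1, {"name": "Ada"})
print(stub.get_user(999), dyn.get_user(2), fake.get_user(1))
```

Note that a single `Mock` object can play all four roles; the terms describe how the TD is configured and inspected, not four different classes.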
These terms give an idea of the extended usage of TD; they could also include Simulators with controllable features, as an extension of Fakes. Simulators are not only sophisticated aircraft cockpits; they are any TD that simulates a backend to prevent a badly coded CUT from having dramatic consequences. This applies to any device or asset that is expensive, rare, too slow, or whose real-life features are uncontrollable or unreachable. Therefore, in most mature fields such as insurance [DOCOsoft 2015], stock markets [Kriger 2015], or transportation [BugRaptors 2019], ready-made solutions, services, or certifications exist to test CUTs against a platform or a standard.
For large software, a TH usually results in provisioning those TD within a framework composed of tools and methods. Such a framework is made available to tackle B2B interoperability issues [Jardim-Gonçalves 2007] with dynamically generated stubs [Ribeiro Rocha 2008].
Simulators, just like any TD, aim at learning something about the CUT. Aircraft simulators test the Pilot and tell the Auditors about the learning curve of the one who is learning, the Pilot.
Impact on the testing maturity
A TH can be engineered in a Model-Based Testing (MBT) process [Ribeiro Rocha 2008]:
- start from a Sequence Diagram, an Activity Diagram, or any representation of the sequences in which operations execute
- convert the diagram into a sequence graph
- select paths from this graph to engineer test cases
- specify the test cases corresponding to each selected path
- identify the data inputs needed to cause each scenario path to be taken
- implement the test cases in the programming language of choice
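The path-selection step can be sketched as follows, assuming a hypothetical sequence graph for a shopping flow (node names and the simple-path strategy are illustrative, not prescribed by the MBT literature):

```python
# Nodes are operations; edges are allowed execution orders.
graph = {
    "login": ["browse"],
    "browse": ["add_to_cart", "logout"],
    "add_to_cart": ["checkout", "browse"],
    "checkout": ["logout"],
    "logout": [],
}

def select_paths(graph, start, end, max_len=5):
    """Enumerate simple paths: each path becomes a test-case skeleton."""
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == end:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue
        for nxt in graph[node]:
            if nxt not in path:  # keep paths simple (no node revisits)
                stack.append(path + [nxt])
    return paths

for p in sorted(select_paths(graph, "login", "logout")):
    print(" -> ".join(p))
```

Each printed path ("login -> browse -> logout", "login -> browse -> add_to_cart -> checkout -> logout") maps to one test case, for which the data inputs forcing that path still have to be identified.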
In a Model-Based approach, a TH can also be generated following Model-Driven Design (MDD) [André 2013]. Even if such a way of working is easily understood management-wise, MDD introduces some pitfalls [Den Haan 2009]:
- MDD actually introduces a lot of rigidity
- Models are only as flexible as the MDD tool
- The roles of project members are quite different with a ‘Meta-Team’ building the model-driven software factory which generates some distance between Developers and the Business
- The modeling tool/approach is "almost" finished at project start - the silo created by the Meta-Team usually produces a model that cannot be 100% tested, which can have huge impacts on large projects
- From the model, the Requirements Team cannot tell what is allowed and what is not, which introduces wrong assumptions
- MDD requires experience
- People tend to focus on the new tool they are using with all its cool features which draws them away from both business and technical reality
To mitigate this waterfall-like effect, everything needs to be started at the same time, notably with tools such as Swagger or Postman [Clayton 2022]. Such platforms enable generating, virtually from a single input:
- an API specification, from which TD and Drivers can be generated
- the API documentation
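The "one input, several outputs" idea can be sketched with a toy, OpenAPI-like spec (the spec layout, route, and example data are hypothetical; real Swagger/Postman tooling works from full OpenAPI documents):

```python
# A single spec feeds both the Test Double and the documentation.
spec = {
    "/users/{id}": {
        "get": {
            "summary": "Fetch a user by id",
            "example_response": {"id": 1, "name": "Ada"},
        }
    }
}

def build_stub(spec):
    """Generate a TD answering every documented route with its example."""
    def stub(path, method):
        return spec[path][method.lower()]["example_response"]
    return stub

def build_docs(spec):
    """Generate one documentation line per documented operation."""
    return [f"{m.upper()} {p}: {op['summary']}"
            for p, methods in spec.items() for m, op in methods.items()]

api_stub = build_stub(spec)
print(api_stub("/users/{id}", "GET"))  # {'id': 1, 'name': 'Ada'}
print(build_docs(spec))
```

Keeping the stub and the documentation derived from the same source prevents them from drifting apart, which is the point of spec-first platforms.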
Test-Driven Development (TDD) can be a good opportunity to handle a TH. TDD exists in two “schools”:
- the Detroit school: the classic TDD approach, bottom-up and inside-out [Beck 2002] [Henke 2018]
- the London school: a top-down, outside-in approach [Freeman 2009]
For both schools, component dependencies are managed with TD, and tests are run once components can be deployed [Heineman 2009].
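A London-school test, for instance, replaces the CUT's collaborator with a TD and verifies the interaction rather than the collaborator's behavior (the `OrderService`/`charge` names are made up for illustration):

```python
import unittest.mock as mock

# Hypothetical CUT: an order service collaborating with a payment gateway.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

# The dependency is a TD; the test checks *how* the CUT talks to it.
gateway = mock.Mock()
gateway.charge.return_value = "receipt-42"

service = OrderService(gateway)
receipt = service.place_order(100)

gateway.charge.assert_called_once_with(100)
print(receipt)  # receipt-42
```

This lets the outside-in design proceed before any real payment gateway exists; the real component only needs to honor the `charge` contract the mock captured.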
In CI/CD, integration testing with a TH is sometimes simply skipped; integration issues are then actually experienced by some users (e.g. with Canary Releasing or Crowd Testing) to tackle corner cases in a stochastic approach.
In critical systems, parts have self-test features to enable diagnosis under operating conditions, known as Built-In Self-Test (BIST) [Agrawal 2003]. This strategy can be used for deployed components shared with different 3rd parties that require robustness to bear unknown usages [Mariani 2004]. This Shift Right approach is based on an isolated testing mode: a signal/command is sent to the targeted BIST-abled components to switch them into self-test mode [Agrawal 2003]. It means the TH is also part of the production code - a “Design for Testability and Built-In Self-Test” approach which comes in several flavors well known to hardware manufacturers [Ben Jone 2022].
A possible adaptation of BIST to software solutions could be a test fixture, say a special REST API resource. This resource would put the asset in maintenance mode and then block actual calls with some TD while running the self-tests.
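A minimal sketch of such a fixture, with the HTTP layer stripped away (the `Asset` class, the maintenance flag, and the "echo" check are all hypothetical):

```python
# BIST-like fixture: a special entry point switches the asset to maintenance
# mode, blocks real calls with a TD, runs self-tests, then restores service.
class Asset:
    def __init__(self, backend):
        self.backend = backend
        self.maintenance = False

    def handle(self, request):
        if self.maintenance:
            return "503 maintenance"  # actual calls are blocked
        return self.backend(request)

    def self_test(self):
        self.maintenance = True
        real_backend = self.backend
        self.backend = lambda request: "stubbed"  # TD replaces the backend
        try:
            results = {"echo": self.backend("ping") == "stubbed"}
        finally:
            self.backend = real_backend  # always restore normal operation
            self.maintenance = False
        return results

asset = Asset(backend=lambda r: f"real:{r}")
print(asset.self_test())     # {'echo': True}
print(asset.handle("ping"))  # real:ping
```

The `try`/`finally` matters: the asset must return to normal operation even when a self-test fails, otherwise the fixture itself becomes an outage.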
Finally, TDs may also be enriched from production data to improve autotests in the Jidoka way.
Agilitest’s standpoint on this practice
Agilitest is clearly a Driver: the automation tool enables driving web, MS Windows, and mobile applications, as well as web services.
Most Agilitest customers are business-oriented thanks to #nocode scripting. Creating TD usually requires some technical skills, which implies close relations between so-called “Functional Testers” and Developers - the social part of Testability.
To discover the whole set of practices, click here.
To go further
- [Agrawal 2003] : Vishwani D. Agrawal & Charles R. Kime & Kewal K. Saluja - MAR 1993 - “A tutorial on built-in self-test. I. Principles” - IEEE Design & Test of Computers - https://www.cs.colostate.edu/~malaiya/530/agrawalBIST1.pdf
- [André 2013] : Pascal André & Jean-Marie Mottu & Gilles Ardourel - 2013 - “Building Test Harness from Service-based Component Models” - https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.403.2119&rep=rep1&type=pdf
- [Beck 2002] : Kent Beck - 2002 - “Test Driven Development: By Example” - isbn:9780321146533
- [Ben Jone 2022] : Wen Ben Jone - c. 2022 - “Built-in Self-Test (BIST)” - https://eecs.ceas.uc.edu/~jonewb/BIST2.pdf
- [BugRaptors 2019] : BugRaptors - NOV 2019 - “Testing of EDI Based Applications”- https://www.bugraptors.com/blog/testing-of-edi-based-applications
- [Clayton 2022] : Tom Clayton - JAN 2022 - “15 Best Swagger Alternatives 2022” - https://rigorousthemes.com/blog/best-swagger-alternatives/
- [Den Haan 2009] : Johan Den Haan - JUN 2009 - “8 Reasons Why Model-Driven Development Is Dangerous” - http://www.theenterprisearchitect.eu/blog/2009/06/25/8-reasons-why-model-driven-development-is-dangerous/
- [DOCOsoft 2015] : DOCOsoft - SEP 2015 - “DOCOsoft Claims Write Back with ACORD certification” - http://www.docosoft.com/news-events/latest-news/2015/09/24/docosoft-acord-video
- [Freeman 2009] : Steve Freeman & Nat Pryce - 2009 - “Growing Object-Oriented Software, Guided by Tests” - isbn:0321699750
- [Gao 2003]: Jerry Zeyu Gao & H.-S. Jacob Tsao & Ye Wu - DEC 2003 - “Testing and Quality Assurance for Component-Based Software” - ISBN: 9781580534802
- [Heineman 2009] : George Heineman - FEB 2009 - “Unit testing of Software Components with intercomponent dependencies” - https://ftp.cs.wpi.edu/pub/techreports/pdf/09-02.pdf
- [Henke 2018] : Mark Henke - NOV 2018 - “London TDD Vs. Detroit TDD - You're Missing the Point” - https://blog.ncrunch.net/post/london-tdd-vs-detroit-tdd.aspx
- [ISTQB 2015] : ISTQB - https://glossary.istqb.org/en/term/test-harness-2
- [Kriger 2015] : Anna-Maria Kriger - NOV 2015 - “EXTENT-2015: A Test Harness for Algo Trading Systems” - https://www.youtube.com/watch?v=zTaQIn0kkLA
- [Mariani 2004] : Leonardo Mariani & Mauro Pezzè & David Willmor - SEP 2004 - “Generation of Integration Tests for Self-Testing Components” - https://www.researchgate.net/publication/220703532_Generation_of_Integration_Tests_for_Self-Testing_Components
- [Meszaros 2011] : Gerard Meszaros - FEB 2011 - “Test Double Patterns” - http://xunitpatterns.com/Test%20Double%20Patterns.html
- [Moustier 2019-1] : Christophe Moustier – JUN 2019 – « Le test en mode agile » - ISBN 978-2-409-01943-2
- [Ribeiro Rocha 2008] : Camila Ribeiro Rocha & Eliane Martins - FEB 2008 - “A Method for Model Based Test Harness Generation for Component Testing” - https://www.researchgate.net/publication/220327614_A_Method_for_Model_Based_Test_Harness_Generation_for_Component_Testing
- [Sheel 2021] : Ankhur Sheel - JUL 2021 - “What is the difference between Stubs, Mocks, Fakes and Spies” - https://www.ankursheel.com/blog/difference-stubs-mocks-fakes-spies
- [Whitney Blake 2016] : Whitney Blake - SEP 2016 - “Harness Boards & Assembly Test Design at Whitney Blake Company “ - https://www.youtube.com/watch?v=O9AqvPt33eo