Testability as soon as possible

Technical and social testability must be introduced as soon as possible

Why you should test as soon as possible

« Testing is an infinite process of comparing the invisible to the ambiguous
so as to avoid the unthinkable happening to the anonymous »
- [Bach 06-2006]

It is well known that exhaustive testing is not possible [Guru99 2014] [Radid 2018-2], mostly because of the limited amount of time dedicated to testing, not to mention the limits of the Tester’s creativity in imagining unpredictable situations, even though there are some very simple situations where full testing seems possible because the cases are finite. There are virtually five kinds of situations to consider when thinking about testability [Rodríguez 2013]:

  • (a) finite - the system is limited to a finite number of possibilities, e.g. when facing a finite state machine such as an ATM
  • (b) infinite, but any arbitrarily high degree of partial completeness can be finitely achieved - say with partition testing [XXXX] (sketched after this list)
  • (c) countably infinite - say when all input parameters are based on integers
  • (d) uncountably infinite - say when an input parameter is based on floating-point values
  • (e) no existing test suite would even tell whether the system is testable
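
As a minimal sketch of situation (b) (the shipping_fee rule and its thresholds below are hypothetical, chosen only for illustration), partition testing picks one representative per equivalence class instead of enumerating an infinite input domain:

```python
def shipping_fee(order_total_cents: int) -> int:
    """Hypothetical business rule: invalid below 0, 4.99 under 50.00, free above."""
    if order_total_cents < 0:
        raise ValueError("negative order total")
    return 0 if order_total_cents >= 5000 else 499

# The input domain is infinite, but three equivalence classes cover it:
# negative values, values below the free-shipping threshold, values at or above it.
representatives = {-1: ValueError, 2500: 499, 5000: 0}

for value, expected in representatives.items():
    if expected is ValueError:
        try:
            shipping_fee(value)
            raise AssertionError("a ValueError was expected")
        except ValueError:
            pass  # the representative of the invalid class behaves as expected
    else:
        assert shipping_fee(value) == expected
```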

Considering any of those situations, testability actually relies on four aspects:

  • What is visibly testable from a user point of view - the “extrinsic testability”
  • What is testable but not visible - the “intrinsic testability”
  • What are the technical means offered to test the System Under Test (SUT) - the “technical testability”
  • How the knowledge is made available to any means of testing - the “social testability” - should it be something shared during a coffee break or some well-known process

If any of those aspects is missing, Testers will face situation (e).

Testability aspects [Moustier 2020]

Moreover, there are some situations that impede testability; one of them is named “implicit dependency”: say a given component retrieves the date directly from the OS; not only will this reduce the reusability of the component [Horré 2011], but it will also prevent you from running a test case that is supposed to happen on a different date whenever this component is involved. Therefore, testability is something that should be included at least at design time [SAFe 2021-27], with explicit dependencies [DevIQ 2014] or with dependency injection, which supports the Dependency Inversion Principle, one of the SOLID principles [Martin 2002] [Moustier 2019-1]. Actually, to enable built-in quality, testability is part of the “Architecturally Significant Requirements” (ASR) [Wikipedia 2021] and must be addressed as soon as possible to ease both design and testing.
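
As a minimal sketch of such an explicit dependency (the function names and the weekend rule are hypothetical, not taken from the source), injecting a clock provider removes the implicit OS dependency and lets a test run “on” any date:

```python
from datetime import date
from typing import Callable

# Implicit dependency: the component reads the date straight from the OS,
# so a test cannot pretend to run on another day.
def is_weekend_report_due_implicit() -> bool:
    return date.today().weekday() >= 5

# Explicit dependency (injection): the clock is a parameter with a sensible
# default, so production code is unchanged but a test can inject any date.
def is_weekend_report_due(today: Callable[[], date] = date.today) -> bool:
    return today().weekday() >= 5

# In a test, the date becomes controllable:
assert is_weekend_report_due(lambda: date(2021, 6, 5)) is True   # a Saturday
assert is_weekend_report_due(lambda: date(2021, 6, 7)) is False  # a Monday
```

The same pattern removes any other implicit dependency, such as direct calls to the network, the file system or a random number generator.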

As seen before, testability and social interactions are intertwined: if no one knows about a way to test something, Testers will face situation (e). Testability also gives confidence, because you can easily assess the reliability of a testable part of the product, provided you have enough time for it or you invested in automated test scripts beforehand.

Speaking of testability and automation, Lean Management includes something named “Jidoka” (自働化) [Monden 2011], which can be translated as “autonomation”, a portmanteau of “autonomy” and “automation”, and relies on four principles:

  1. Detect the abnormality.
  2. Stop.
  3. Fix or correct the immediate condition.
  4. Investigate the root cause and install a countermeasure.

The idea behind this is to make a robot more robust to failures and let the Operator who monitors the robots spend less time on maintenance. Jidoka becomes extremely relevant when it comes to false positives in automated test scripts. If a script is able to handle situations that may generate false positives, the integration pipeline becomes more robust and can deliver the product more often, virtually without any human intervention that would slow down the delivery flow. Introducing Jidoka inside scripts can easily be done with multiple retries when a component does not answer, but also with embedded sensors (see the sketch after the list below) that could

  • consolidate the script with hints or evidence of an error on some intermediate component (e.g. a failing PING command on a server may confirm that a widget would have been displayed had the server been up and running)
  • inform the Operator that a script failed because the server did not respond, hence neither the script nor the product is failing but the environment, which avoids wasting time debugging the script
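
A minimal sketch of both ideas, under assumed names (run_with_jidoka, check_widget_displayed and the server address are hypothetical, not Agilitest APIs): the wrapper retries a flaky step and, if it keeps failing, uses a PING sensor to qualify the verdict so the Operator knows whether to blame the product or the environment:

```python
import subprocess
import time

def ping(host: str) -> bool:
    """Sensor: True if the host answers one ICMP ping ('-c' is the Linux/macOS count flag)."""
    return subprocess.call(["ping", "-c", "1", host],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def run_with_jidoka(step, server: str, retries: int = 3, delay_s: float = 2.0) -> str:
    """Retry a test step; on persistent failure, use a sensor to qualify the verdict."""
    for _attempt in range(retries):
        try:
            step()
            return "PASS"
        except AssertionError:
            time.sleep(delay_s)  # wait before retrying a possibly transient failure
    # The step still fails: check the environment before blaming the product
    if not ping(server):
        return "ENVIRONMENT DOWN"  # tells the Operator not to debug the script
    return "FAIL"  # the server answers, so this looks like a genuine defect

# Hypothetical usage with a UI check that raises AssertionError when the widget is missing:
# verdict = run_with_jidoka(lambda: check_widget_displayed("login-button"), "app.example.com")
```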

A lack of extrinsic (or at least easily accessible) testability would also [Meaney 2018b] [Moustier 2020]:

  • make testing heavier to engineer and run
  • slow down the feedback, which is an impediment, especially in a DevOps environment
  • generate doubts and lower motivation and morale

As a consequence, low testability will inevitably raise the cost of quality and lower the quality of the product.

Finally, testability is sometimes introduced as an afterthought. This approach has pros and cons: the pro is that introducing testability is always good; but, as with any ASR, late testability often implies heavy refactoring that provides no customer-facing added value, hence the imperative need for early testability.

Impact on the testing maturity

Providing testability as soon as possible leads to built-in quality, which notably implies:

  • Introducing acceptance criteria in User Stories to provide extrinsic or easily accessible testability means that can be used in automated test scripts with the ATDD practice
  • Including SAFe’s leading indicators in epics [SAFe 2021-34], which requires designing observables to monitor how good the solution is
  • Coding with mockable modules to provide test harnesses (a sketch follows this list)
  • Designing the product to enable Non-Functional Requirement (NFR) testing (e.g. measuring performance on a fully integrated system with Internet calls may not provide a good measure, while isolated components help spot where genuine performance issues are)
  • Developing code in TDD, which helps design testable units
  • Updating the DoD with testability checks (e.g. the code review could introduce a “No Implicit Dependency” criterion)
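
As a minimal sketch of the “mockable modules” item above (PaymentGateway, CheckoutService and their methods are hypothetical names), keeping an external dependency behind a small interface lets a test harness substitute a mock for the real service:

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """Hypothetical external dependency, kept behind a small interface."""
    def charge(self, amount_cents: int) -> bool: ...

class CheckoutService:
    def __init__(self, gateway: PaymentGateway) -> None:
        self.gateway = gateway  # injected, hence mockable in a test harness

    def pay(self, amount_cents: int) -> str:
        return "paid" if self.gateway.charge(amount_cents) else "declined"

# Test harness: a mock replaces the real gateway, no network call needed
class AlwaysDeclines:
    def charge(self, amount_cents: int) -> bool:
        return False

assert CheckoutService(AlwaysDeclines()).pay(1000) == "declined"
```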

When it comes to legacy code, it may be useful to derive a testability map from an architecture diagram. This map spots testability issues and draws trust zones, just like the STRIDE analysis in security design, which defines “trust boundaries” [Hernan 2015].


Agilitest’s standpoint on this practice

Early testability will undoubtedly ease scripting test cases with Agilitest. This practice will prevent multiple occurrences of the same test case, provided that the social part of testability is taken into account. This kind of “sharing principle” is also very useful when it comes to reusing parts of a script, since Agilitest is able to create subscripts with parameters to help share test procedures.

Regarding the technical side of testability, early testability is also most valuable; for instance, if all widgets in a web page embed some ID, the script editor will find them very quickly, and the same will be true at runtime. However, Agilitest also comes with some AI that probabilistically finds widgets from their criteria.

Moreover, even though Agilitest is UI-centered on multiple platforms (Windows, iOS, macOS, Android), it is also able to interact with REST API services, which can provide some low-level sensors to enable Jidoka.

To discover the whole set of practices, click here.


To go further

  • [Monden 2011] : Yasuhiro Monden - OCT 2011 - “Toyota Production System: An Integrated Approach to Just-In-Time” - ISBN 9781439820971
© Christophe Moustier - 2021