Compatibility tests
Active testing
Interoperability, coexistence with other systems, compatibility with standards, replaceability
What are compatibility tests?
Compatibility Testing (CT) happens when the solution is made available to End Users (EUs) on different platforms that may alter the EUs’ experience.
CT aims to assess those alterations under several possible configurations, such as different:
- Browsers,
- Databases,
- Operating Systems (OSs),
- Mobile devices,
- Networks,
- Devices,
- …
with different versions and combinations. The classic example of CT is browser or mobile compatibility: the delivered solution must be tested against the main platforms and versions, as illustrated below.
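As an illustration, a browser compatibility matrix can be expressed as parametrized automated tests. The following is a minimal sketch assuming pytest and Selenium WebDriver are available; the application URL and the expected page title are hypothetical.

```python
# Minimal browser-compatibility matrix: one test run per configured browser.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
    "edge": webdriver.Edge,
}

@pytest.fixture(params=list(BROWSERS))
def browser(request):
    driver = BROWSERS[request.param]()  # start the browser for this matrix cell
    yield driver
    driver.quit()

def test_home_page_loads(browser):
    browser.get("https://example.com")  # hypothetical application under test
    assert "Example" in browser.title   # same expectation on every browser
```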
To achieve this, there are two kinds of CT (illustrated after this list):
- Backward Compatibility Testing (BCT): ensures that new releases will still run on older versions of the platforms
- Forward Compatibility Testing (FCT): assesses whether upcoming versions of the platforms will still support the current or new releases of the solution
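The distinction can be encoded directly in the test assets. The sketch below models BCT and FCT as version-range checks; all version numbers and the run_suite_on placeholder are hypothetical.

```python
# BCT and FCT expressed as version-range checks over a platform version.
MIN_SUPPORTED = (12, 0)              # oldest platform version the release supports

BCT_VERSIONS = [(12, 0), (13, 0)]    # older platform versions still in the field
FCT_VERSIONS = [(15, 0)]             # announced upcoming platform version (beta)

def run_suite_on(version):
    # Placeholder for "deploy and run the compatibility suite on that
    # platform version"; simulated here by a simple version-range check.
    return version >= MIN_SUPPORTED

def test_backward_compatibility():
    # BCT: the new release must still run on the older platform versions
    assert all(run_suite_on(v) for v in BCT_VERSIONS)

def test_forward_compatibility():
    # FCT: upcoming platform versions must still support the release
    assert all(run_suite_on(v) for v in FCT_VERSIONS)
```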
The platform onto which the component is installed may differ along three dimensions (sketched in code after this list):
- Hardware - the deployed component may run on different kinds of devices such as laptops, mobile phones, and tablets, or even as embedded software, possibly on dedicated chips
- Software - the delivered component may rely on existing software parts such as an OS, a browser, or an API with which the component must comply
- Network - the deployed components communicate through various means, including hardware, software, and protocols with different parameters.
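To make these dimensions tangible, here is a minimal sketch of a platform descriptor; all field values are hypothetical examples.

```python
# A platform configuration captured along the three dimensions above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Platform:
    hardware: str    # e.g. "laptop", "tablet", "embedded chip"
    software: str    # e.g. "Windows 11 + Chrome 120"
    network: str     # e.g. "Wi-Fi 6", "4G, high latency"

# One cell of the compatibility matrix to be tested:
target = Platform(hardware="tablet",
                  software="Android 14 + Chrome 120",
                  network="4G, high latency")
print(target)
```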
The impact of the platform starts with the installation, which can be assisted by processes and possibly automated.
Impact on testing maturity
The installation process is a direct consequence of CT because the targeted platforms influence deployment. Once automated, this installation process should integrate checks that prevent unsupported configurations. This is one of the verification tasks that should be included in a “Continuous Deployment” pipeline.
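For instance, such a guard could be the very first step of an automated installer. This is a minimal sketch, assuming the supported platforms are known; the OS names and version bounds are hypothetical.

```python
# Install-time guard: refuse to proceed on unsupported configurations.
import platform
import sys

SUPPORTED = {
    "Windows": ("10", "11"),  # hypothetical accepted Windows releases
    "Linux": None,            # any version accepted here, checked elsewhere
}

def check_platform():
    system, release = platform.system(), platform.release()
    if system not in SUPPORTED:
        sys.exit(f"Unsupported OS: {system}")
    allowed = SUPPORTED[system]
    if allowed is not None and release not in allowed:
        sys.exit(f"Unsupported {system} version: {release}")

if __name__ == "__main__":
    check_platform()  # first step of the installation pipeline
    print("Platform check passed, proceeding with installation")
```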
In a DevOps environment, the challenge is to update the targeted environments, the installation process, the CT, and the delivered components at the same time.
As a consequence, the whole Team must be familiar with these four parts and know how to manage each of them:
- at Environment level, the Team needs to
- set the environment up
- define the accepted versions of the platform
- interact with the platform
- observe its behavior through the available interfaces
- at Installation level, to
- define and configure the installation process, which can possibly be automated
- test the installation process against the accepted versions of the platform
- at CT level, to
- define the assets to submit to CT and test the delivered product against the accepted versions of the platform
- limit the number of possible configurations, notably thanks to a Pareto analysis (see the sketch after this list)
- define automatable CT for both BCT and FCT
- at Component level, to
- prepare CT with some testability items for built-in quality [SAFe 2021-26][SAFe 2021-27]
- include some variability assumptions in the design
Naturally, this reinforces the need to empower the whole Team on product quality.
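The Pareto analysis mentioned above can be sketched as follows: configurations are ranked by their share of the installed base, and the smallest set covering roughly 80% of the field is kept for CT. All occurrence figures are hypothetical.

```python
# Pareto analysis: keep the fewest configurations covering ~80% of users.
field_share = {                       # hypothetical installed-base shares
    ("Android 14", "Chrome"): 0.34,
    ("iOS 17", "Safari"):     0.27,
    ("Windows 11", "Edge"):   0.15,
    ("Android 13", "Chrome"): 0.10,
    ("macOS 14", "Safari"):   0.08,
    ("Linux", "Firefox"):     0.06,
}

def pareto_configs(shares, coverage=0.80):
    kept, total = [], 0.0
    for config, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        kept.append(config)
        total += share
        if total >= coverage:         # stop once the target coverage is reached
            break
    return kept, total

configs, covered = pareto_configs(field_share)
print(f"{len(configs)} configurations cover {covered:.0%} of the field: {configs}")
```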
To complete the CT maturity overview, note that the way platforms are handled can be roughly modeled in four levels of CT maturity:
- Reactive: the environment is discovered at the same time as the components to be delivered are engineered - this happens when prototypes are created - compatibility issues are met mainly when the product is deployed
- Ad hoc: those parts are combined in an ad hoc mode - CT is driven by compliance with a single objective, manufacturing the product - issues may be discovered during the development phase
- Specified: conventions are established between several teams with dependencies - architectural design patterns are proposed to scale the use of the environment
- Standardized: when the field is sufficiently mature, standards and certifications emerge from the trade - the environment is protected against known misuses and usual problems - this level may even involve legal provisions that differ from one country to another - sometimes, when a protocol has been established, plugfests are organized to improve compatibility between vendors
CT levels
The levels described above highlight a possible Shift-Left Testing (SLT) strategy: analyzing the requirements of the platform and environment to define the platform configurations End Users will face.
To multiply feedback on CT, you should also include some Shift-Right Testing (SRT) techniques, notably Alpha and Beta testing and Dark or Canary Releasing, to perform crowd testing [Leicht 2017] and get feedback from the field. However, SRT is only possible when the delivered product is not subject to certifications.
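As an example of SRT, Canary Releasing can route a small, deterministic share of users to the new build, so that compatibility feedback comes from real field configurations. A minimal sketch, with a hypothetical 5% canary share:

```python
# Deterministic canary assignment: each user always lands in the same bucket.
import hashlib

CANARY_SHARE = 0.05  # hypothetical fraction of users exposed to the new release

def is_canary(user_id: str) -> bool:
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 255          # stable value in [0, 1] per user
    return bucket < CANARY_SHARE

print(is_canary("user-42"))  # same user, same answer on every call
```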
An exhaustive approach to the compatibility matter could be to:
- Start from a deployment diagram
- Identify the assets (devices, servers, cabinets, databases, networks, display devices, …) and OSs
- Build a list of assets and OSs by available versions and number of occurrences in the field
- Assess the added value of each part to prioritize the testing effort (see the sketch after this list)
- Propose a strategy and an action plan along with budgets
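The inventory and prioritization steps can be sketched as follows: each asset is listed by version with its field occurrences and an added-value score, then ranked to focus the testing budget. All names and figures are hypothetical.

```python
# Asset inventory ranked by a simple priority = occurrences * added value.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    version: str
    occurrences: int     # installed base in the field
    added_value: float   # business-value weight, e.g. from product management

    @property
    def priority(self) -> float:
        return self.occurrences * self.added_value

inventory = [
    Asset("Oracle DB", "19c", 120, 0.9),
    Asset("PostgreSQL", "16", 40, 0.7),
    Asset("Windows Server", "2022", 200, 0.5),
]

for asset in sorted(inventory, key=lambda a: a.priority, reverse=True):
    print(f"{asset.name} {asset.version}: priority {asset.priority:.0f}")
```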
Among the many possible strategies, a few often emerge from a risk-based testing approach, such as targeting:
- Significant CT scenarios to cover platform-specific pitfalls
- A “mobile first” strategy that emphasizes compatibility with mobile platforms to discover mobile-specific issues
- Compliance with a few strategic partners or protocols
This waterfall approach needs to be sliced into smaller chunks to fit alternative and lightweight ways of working, e.g.:
- addressing issues as they appear, in a simple reactive approach
- addressing any of the phases sprint-wise through the agile testing quadrants [Marick 2003][Crispin 2011][Bach 2014][Crispin 2021], to enable such testing practices little by little and help the Team raise its skills on this NFR
- applying the subsidiarity principle and decentralizing decision-making, with the help of management, to optimize the testing effort
- applying “Model-Based Systems Engineering” (MBSE) [SAFe 2021-19][Moustier 2020][Galiber 2019], especially if the system is huge or monolithic, which would require a holistic vision at product level
Agilitest’s standpoint on this practice
As an attempt to help with CT, Agilitest provides automation means that can plug into different application types and devices at script design time, such as:
- Mobile apps (both iOS and Android)
- HTML apps (any browser can be selected)
- Desktop apps on MS Windows
- REST / SOAP API backends
The current limitation is the targeted test environment, which must be reachable from the injection services on which Agilitest is installed. This constraint rules out public mobile and browser farms such as Sauce Labs or BrowserStack and promotes a test lab that hosts devices and HTML browsers, possibly with a BYOD strategy to ease the flow, although that raises some GDPR issues.
To go further
- [Bach 2014]: James Bach - SEP 2014 - “The Real Agile Testing Quadrant” - http://www.developsense.com/presentations/2014-06-Dublin-RSTAgileTesting.pdf
- [Crispin 2011]: Lisa Crispin - NOV 2011 - “Using the Agile Testing Quadrants” - https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/
- [Crispin 2021]: Lisa Crispin & Janet Gregory - JAN 2021 - “Applying the Agile Testing Quadrants to Continuous Delivery and DevOps Culture – Part 2 of 2” - https://agiletester.ca/applying-the-agile-testing-quadrants-to-continuous-delivery-and-devops-culture-part-2-of-2/
- [Galiber 2019]: Flavius Galiber, III - SAFe Summit 2019, 29 SEP - 04 OCT 2019 - “Lean-Agile MBSE: Best Practices & Challenges” - https://vimeo.com/372960506
- [Leicht 2017]: Niklas Leicht & Jan Marco Leimeister & Ivo Blohm - MAR 2017 - “Leveraging the Power of the Crowd for Software Testing” - https://www.researchgate.net/publication/31568453
- [Marick 2003]: Brian Marick - 22 AUG 2003 - “Agile testing directions: tests and examples” - http://www.exampler.com/old-blog/2003/08/22/#agile-testing-project-2
- [Moustier 2020]: Christophe Moustier - OCT 2020 - “Conduite de tests agiles pour SAFe et LeSS” - ISBN: 978-2-409-02727-7
- [SAFe 2021-19]: SAFe - FEB 2021 - “Model-Based Systems Engineering” - https://www.scaledagileframework.com/model-based-systems-engineering/
- [SAFe 2021-26]: SAFe - FEB 2021 - “Core Values” - https://www.scaledagileframework.com/safe-core-values/
- [SAFe 2021-27]: SAFe - FEB 2021 - “Built-in Quality” - https://www.scaledagileframework.com/built-in-quality/