Observables are tangible feedback from which measures can be generated.
Measurement and automation are key in DevOps. From a DevOps perspective, feedback and measurement are necessities rooted in the core values of the practice:
- This is the “M” in SAFe’s “CALMR” [SAFe 2021-43] and in the well-known “CALMS” coined by Jez Humble [Buchanan 2021]
- Gene Kim, Jez Humble and Patrick Debois (the “father” of DevOps [DevOps Cafe 2010]) also formulated DevOps principles about feedback
- Google’s take on DevOps, named “Site Reliability Engineering” (SRE), also places a strong emphasis on feedback and alerting [Beyer 2016]
These indicators provide values that give us a point of view, both on a situation and on its projection, which helps to Focus on the entire solution.
When measures are taken manually, producing metrics can be quite time-consuming and unreliable. Since DevOps also strongly relies on automation (the “A” in CALMR/CALMS), metrics are automated to facilitate the calculation of KPIs that can be found at different levels [CFTL 2021]:
- The quality of the product and its associated services
- The quality of the Delivery and the associated Time-to-Market (TTM)
- The state of automation of manual actions, including scripted tests
Each level influences customer satisfaction and reinforces the idea that testing goes far beyond the simple execution of automated scripts [Bach 2014][Bach 2016]. Observables enable the discovery of issues that could not be covered by testing; they are therefore part of the Shift Right approach.
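To illustrate how such a metric can be automated rather than computed by hand, here is a minimal sketch, with hypothetical deployment data, that computes a TTM-style “lead time for change” from commit and release timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit time, production release time).
# In a real pipeline, these would be pulled from your CI/CD tool's API.
deployments = [
    (datetime(2021, 9, 1, 9, 0), datetime(2021, 9, 1, 11, 30)),
    (datetime(2021, 9, 2, 14, 0), datetime(2021, 9, 3, 10, 0)),
    (datetime(2021, 9, 6, 8, 0), datetime(2021, 9, 6, 9, 15)),
]

def lead_times(records):
    """Lead time for change: elapsed time from commit to production."""
    return [released - committed for committed, released in records]

def median_lead_time(records) -> timedelta:
    """Median is less sensitive to one pathological deployment than the mean."""
    times = sorted(lead_times(records))
    return times[len(times) // 2]

print(median_lead_time(deployments))  # 2:30:00 for the sample data above
```

Wired into the pipeline, such a script turns a manual, unreliable measure into an observable that is refreshed on every delivery.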
Possible families of observables:
- Product KPIs
   - Test report indicators, to get a view of compliance with requirements
   - KPIs on the hypotheses to be tested, as per the Lean Startup approach, to target actual market needs instead of blindly delivering a product compliant with irrelevant requirements
   - Customer/user satisfaction indicators, to ensure the market will validate your assumptions about its needs
- Delivery efficiency KPIs
   - Code quality measurements, to get a view of the technical debt and the changes to come
   - Release pipeline indicators, to ensure the Team’s ability to deliver new releases as fast as possible
   - Productivity and TTM indicators, to know how fast market needs can be made available and start producing income from the time invested in feature ideation
   - Employee satisfaction indicators, to make sure your human production means will be able to deliver over the long term [Moustier 2019-2]
- Automation KPIs
   - The quality of what is automated, to ensure your test automation is good enough to deliver steady-quality products that will not be undermined by false positives from automated test scripts
   - The efficiency of the service provided by the automation, to see how well automation serves the product
   - The smooth running and speed of the automation activity, to monitor its ability to process services as fast as possible
These observable-based KPIs should be selected according to the context of your project; they should also be balanced, in the spirit of the balanced scorecard system.
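A balanced selection means no KPI family is left blind. A minimal sketch of such a coverage check, using the three families above as perspectives (the KPI names are illustrative assumptions):

```python
# Perspectives to balance, taken from the families of observables above.
PERSPECTIVES = {"product", "delivery", "automation"}

# Hypothetical KPI selection for a project: KPI name -> family it observes.
selected_kpis = {
    "crash-free sessions": "product",
    "lead time for change": "delivery",
    "flaky test rate": "automation",
}

def uncovered(kpis: dict, perspectives: set) -> set:
    """Return the perspectives that no selected KPI observes."""
    return perspectives - set(kpis.values())

print(uncovered(selected_kpis, PERSPECTIVES))  # empty set -> balanced selection
```

Run against a selection that only tracks delivery speed, the check would immediately flag the product and automation perspectives as blind spots.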
Impact on the testing maturity
In the SAFe context, epics are formalized with a “Leading Indicators” (LI) section. These LI describe a set of metrics to measure how close the delivered value is to expectations. At some point, those metrics are implemented through User Stories (US) that code the generation of observables and the metrics that go along with them [SAFe 2021-34].
Actually, any US may contribute to developing such metrics, provided that the Definition of Ready (DoR) includes an analysis of opportunities to generate observables from the LI. This opportunity analysis can be supported by dependency management, notably with the impact mapping technique [Adzic 2013]:
- from a goal, actors (e.g. humans or machines) are identified
- for each actor, impacts are defined
- for each impact, “deliverables” are defined, such as test objectives and, of course, possible observables
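The goal/actors/impacts/deliverables structure above lends itself to a simple tree that can be walked during DoR analysis to spot the observables to generate. A minimal sketch, with an entirely illustrative map:

```python
# Hypothetical impact map: goal -> actors -> impacts -> deliverables.
# Deliverables prefixed with "observable:" mark measurement opportunities.
impact_map = {
    "goal": "Reduce churn by 5%",
    "actors": {
        "end user": {
            "finds answers faster": [
                "improved search feature",
                "observable: search success rate",
            ],
        },
        "helpdesk agent": {
            "closes tickets sooner": [
                "knowledge base integration",
                "observable: ticket resolution time",
            ],
        },
    },
}

def observables(imap: dict) -> list:
    """Walk the map and collect every deliverable flagged as an observable."""
    found = []
    for impacts in imap["actors"].values():
        for deliverables in impacts.values():
            found += [d for d in deliverables if d.startswith("observable:")]
    return found

print(observables(impact_map))
```

The collected list is exactly what the DoR analysis needs: candidate observables that a US can implement alongside its functional deliverables.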
Besides the feedback on how good your product is, observables should also be used to support the Operations Teams, such as the Helpdesk that monitors the health of the product, but also the ideation Teams, which may use the outcome of those observables to support decision making on improving, fixing, preventing, or eventually throwing the product away [SAFe 2021-34].
Therefore, it appears that observables lead to alerting. Observables and appropriate alerting secure innovation through feedback, provided that:
- alerting (Andon) is efficient enough for immediate corrective action
- the delivery process is fast (less than 10 minutes - see SMED)
- incidents are limited to a small scope of Users to avoid impacting the whole User base (see Canary Releasing)
Under those premises, some teams reduce the amount of integration testing and let Users trigger “corner cases” whose tests would have taken a lot of time to engineer.
Observables are a DevOps pillar. This pillar enables a testing approach that bets on heavy automation, a necessary consequence of accelerating deliveries while taking care of Customers.
Agilitest’s standpoint on this practice
Click here to ask for Agilitest's standpoint.
To go further
- [Adzic 2013]: Gojko Adzic - 2013 - “Impact Mapping” - ISBN: 9784798136707
- [Bach 2014]: James Bach - 2014 - “Test Cases are Not Testing: Toward a Performance Culture” - CAST 2014 Keynote - https://www.youtube.com/watch?v=JLVP_Z5AoyM
- [Bach 2016]: James Bach & Michael Bolton - 2016 - “A Context-Driven Approach to Automation in Testing” - https://www.satisfice.com/download/a-context-driven-approach-to-automation-in-testing
- [Beyer 2016]: Betsy Beyer, Chris Jones, Jennifer Petoff and Niall Richard Murphy - 2016 - “Site Reliability Engineering: How Google Runs Production Systems” - O’Reilly Media - ISBN-13: 978-1491929124 - https://landing.google.com/sre/sre-book/toc/index.html
- [Buchanan 2021]: Ian Buchanan - accessed in SEP 2021 - “CALMS Framework” - https://www.atlassian.com/devops/frameworks/calms-framework
- [CFTL 2021]: Comité Français du Test Logiciel - collective work - “Automatisation des activités de test - l’automatisation au service des testeurs” - ISBN: 9782956749011 - see p. 105
- [DevOps Cafe 2010]: DevOps Cafe - SEP 2010 - “Love it or hate it, the term DevOps is here to stay... and we know who to thank” - http://devopscafe.org/show/2010/9/15/episode-12.html
- [Moustier 2019-2]: Christophe Moustier - OCT 2019 - “Vélocité, Bière et Sexe” - https://fan-de-test.fandom.com/fr/wiki/V%C3%A9locit%C3%A9,_Bi%C3%A8re_et_Sexe
- [SAFe 2021-34]: SAFe - SEP 2021 - “Epic” - https://www.scaledagileframework.com/epic/
- [SAFe 2021-43]: SAFe - FEB 2021 - “CALMR” - https://www.scaledagileframework.com/calmr/