Monitoring of the solution
Active testing: Alerts to mobilize immediate human action / Tickets for deferred processing / Logs for information
Definition of solution monitoring
Monitoring the solution is usually an Ops technique to anticipate issues before Customers notice them. This approach is closely related to Service Level Objectives (SLOs), which are somewhat more demanding than the Service Level Agreements (SLAs) whose violation usually generates penalties. Google’s vision of DevOps, nicknamed “Site Reliability Engineering” (SRE), uses this approach extensively [Beyer 2018]. SRE introduces a concept named the Error Budget, which helps to take a step back on SLA issues. This margin can be consumed when the situation gets tough and replenished when the Product Owner (PO) needs it.
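As a rough illustration of the error budget idea (the function names and the 30-day window are assumptions, not taken from the SRE book), the budget is simply the unreliability an SLO still tolerates:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Downtime allowed by an availability SLO over a rolling window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still available (negative means SLO breach)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% availability SLO over 30 days allows about 43.2 minutes of downtime.
print(error_budget_minutes(0.999))
# After 10 minutes of downtime, roughly 77% of the budget is left.
print(budget_remaining(0.999, 10.0))
```

While the budget remains positive, the team can "spend" it on risky releases; once it runs low, the PO can freeze risky changes until reliability work replenishes it.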
This kind of budget management can be driven by an Andon signal triggered at an upper threshold. However, fixing only the item that triggered the alert may keep the team constantly under pressure. Instead, the Team usually drives the amount of negative effects (say, the number of active bugs) down until a lower threshold is reached. This technique generates a hysteresis-like behavior, notably used in real-time systems to avoid overusing the compensation mechanism.
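The two-threshold mechanism can be sketched as a small state machine; the class name and the bug-count thresholds below are purely illustrative:

```python
class HysteresisAndon:
    """Andon signal with hysteresis: raised at an upper threshold,
    cleared only once a lower threshold is reached again."""

    def __init__(self, upper: int, lower: int):
        assert lower < upper, "lower threshold must sit below the upper one"
        self.upper = upper
        self.lower = lower
        self.alert = False

    def update(self, active_bugs: int) -> bool:
        if not self.alert and active_bugs >= self.upper:
            self.alert = True   # pull the Andon: the team swarms on bugs
        elif self.alert and active_bugs <= self.lower:
            self.alert = False  # enough slack regained, back to normal work
        return self.alert

andon = HysteresisAndon(upper=20, lower=5)
print(andon.update(21))  # alert raised
print(andon.update(15))  # still in alert: between thresholds, keep fixing
print(andon.update(5))   # alert cleared at the lower threshold
```

The gap between the two thresholds is what prevents the team from oscillating in and out of "firefighting mode" on every single new bug.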
Impact on the testing maturity
Monitoring the solution once it has been deployed is a Shift Right technique that leads the Team to learn more about what they delivered. This Customer feedback, combined with internal testing in a Shift Left approach, generates a double-loop learning structure [Argyris 1977] [Moustier 2020].
This nested-loops structure can be thought of as Ries' "Lean Startup" [Ries 2012], which starts with assumptions built from internal feedback and loops back to market feedback to validate them. The market feedback then either reinforces the choices made or leads to a pivot, abandoning what appears wrong from a Customer, and thus economic, point of view.
This Lean Startup approach has been adopted by SAFe with “Leading Indicators” (LIs) to be provided in every Epic description [SAFe 2021-34]. Those LIs are observables to be designed and implemented along with the product, which is facilitated by early testability.
When building an MVP, these LIs can be collected manually to avoid investing too much in a solution that generates no income yet; but after a few Minimal Marketable Releases (MMRs), a “full automation paradigm” should lead to automating the observables, including feedback from the resources, the domain, and the business.
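One hypothetical way to automate such observables is to have the product itself emit domain events and derive the leading indicators from them; every metric and event name below is invented for illustration:

```python
from collections import Counter

class LeadingIndicators:
    """Minimal in-process collector for Epic-level leading indicators.

    A real product would push these events to a metrics backend;
    here they are simply counted in memory.
    """

    def __init__(self):
        self.counts = Counter()

    def record(self, event: str) -> None:
        self.counts[event] += 1

    def conversion_rate(self) -> float:
        """Share of sign-ups that ended in a purchase (0.0 if no data yet)."""
        signups = self.counts["signup"]
        return self.counts["purchase"] / signups if signups else 0.0

li = LeadingIndicators()
for _ in range(10):
    li.record("signup")
li.record("purchase")
print(li.conversion_rate())  # 0.1
```

During the MVP stage, the same indicator could be computed by hand from a spreadsheet; the point of the "full automation paradigm" is that the computation eventually ships with the product.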
Agilitest’s standpoint on this practice
Agilitest usually helps monitor the quality of the product before it is delivered, thanks to test automation. However, some organizations running SaaS solutions launch daily smoke tests in the morning when arriving at work. This kind of operation provides basic monitoring of the deployed solution, at least from a domain perspective. Tweaking this initial use of Agilitest to feed a health dashboard can be a good start when product monitoring becomes relevant.
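A first step could be to collapse the verdicts of those morning smoke tests into a single traffic-light status for the dashboard; the status names, thresholds, and test names below are assumptions, not an Agilitest feature:

```python
def health_status(results: dict) -> str:
    """Collapse smoke-test verdicts (name -> passed?) into a traffic light."""
    if not results:
        return "UNKNOWN"  # no run yet this morning
    failed = [name for name, passed in results.items() if not passed]
    if not failed:
        return "GREEN"
    # assumption: one failing journey degrades the service, several mean an outage
    return "ORANGE" if len(failed) == 1 else "RED"

print(health_status({"login": True, "search": True, "checkout": True}))
print(health_status({"login": True, "search": False, "checkout": True}))
```

Wiring such a summary to an actual dashboard is then a matter of publishing the status wherever the Ops team already looks.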
To discover the whole set of practices, click here.
To go further
- [Argyris 1977] : Chris Argyris - SEP 1977 - “Double Loop Learning in Organizations” - Harvard Business Review - https://hbr.org/1977/09/double-loop-learning-in-organizations
- [Beyer 2018] : Betsy Beyer, Niall Richard Murphy, David K. Rensin, Kent Kawahara & Stephen Thorne - JUL 2018 - “The Site Reliability Workbook: Practical Ways to Implement SRE” - ISBN 9781492029458 - https://sre.google/workbook/table-of-contents/
- [Moustier 2020] : Christophe Moustier - OCT 2020 - “Conduite de tests agiles pour SAFe et LeSS” - ISBN 978-2-409-02727-7
- [Ries 2012] : Eric Ries - 2012 - “Lean Startup” - Pearson - ISBN 978-2744065088 - http://zwinnalodz.eu/wp-content/uploads/2016/02/The-Lean-Startup-.pdf
- [SAFe 2021-34] : SAFe - FEB 2021 - “Epic” - https://www.scaledagileframework.com/epic/