Monitoring of the solution

Alerts to mobilize immediate human action / Tickets for deferred processing / Logs for information

Definition of solution monitoring

Monitoring the solution is usually an Ops technique to anticipate issues before Customers notice them. This stratagem is close to Service Level Objectives (SLO), which are a bit more demanding than the Service Level Agreement (SLA) that usually generates penalties when breached. Google’s vision of DevOps, nicknamed “Site Reliability Engineering” (SRE), uses this approach extensively [Beyer 2018]. SRE introduces a concept named the Error Budget, which helps to take a step back on SLA issues. This margin can be consumed when the situation gets tough and replenished when the Product Owner (PO) needs it to be.
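
To make the Error Budget idea concrete, here is a minimal sketch, assuming a 99.9% availability SLO over a 30-day window (both figures are illustrative, not taken from the source): the budget is simply the time the solution is allowed to miss its objective before the SLA itself is at risk.

```python
# Minimal error-budget sketch (SLO value and window are illustrative assumptions).
# An SLO of 99.9% over 30 days leaves a 0.1% "error budget" the team may spend.

SLO = 0.999                      # availability objective agreed with the business
WINDOW_MINUTES = 30 * 24 * 60    # 30-day rolling window

error_budget_minutes = (1 - SLO) * WINDOW_MINUTES   # ~43.2 minutes of tolerated downtime

def remaining_budget(downtime_minutes: float) -> float:
    """Share of the error budget still available (1.0 = untouched, 0.0 = exhausted)."""
    return max(0.0, 1.0 - downtime_minutes / error_budget_minutes)

if __name__ == "__main__":
    for downtime in (0, 10, 43.2, 60):
        print(f"{downtime:5.1f} min of downtime -> {remaining_budget(downtime):.0%} of budget left")
```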

This kind of “budget management” can be driven by an Andon that is triggered on a higher threshold. However, fixing only the item that triggered the alert may leave the team constantly under pressure. Usually, the Team keeps lowering the amount of negative effects (say, the number of active bugs) until a lower threshold is reached. This technique generates a hysteresis-like phenomenon, notably used in Real-Time systems to avoid overuse of the compensation system.

Hysteresis-based phenomenon [Moustier 2020]
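
The two-threshold behaviour illustrated above can be sketched as follows; this is a minimal sketch assuming the monitored quantity is the number of active bugs, and both threshold values are illustrative.

```python
# Hysteresis-like Andon sketch (threshold values are illustrative assumptions):
# the alert is raised above HIGH_THRESHOLD and only cleared once the count drops
# back below LOW_THRESHOLD, so the team is not paged on every small bounce.

HIGH_THRESHOLD = 20   # e.g. number of active bugs that pulls the Andon
LOW_THRESHOLD = 5     # the team keeps fixing until it gets back under this level

def update_andon(active_bugs: int, andon_on: bool) -> bool:
    """Return the new Andon state given the current bug count and the previous state."""
    if not andon_on and active_bugs > HIGH_THRESHOLD:
        return True          # raise the alert
    if andon_on and active_bugs < LOW_THRESHOLD:
        return False         # clear it only once the backlog is really low again
    return andon_on          # otherwise keep the current state (the hysteresis band)

if __name__ == "__main__":
    andon = False
    for bugs in (3, 12, 25, 18, 9, 4, 15):
        andon = update_andon(bugs, andon)
        print(f"{bugs:3d} active bugs -> Andon {'ON' if andon else 'off'}")
```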

Impact on the testing maturity

Monitoring the solution once it has been deployed is a Shift Right technique which leads the Team to learn more about what it delivered. This Customer feedback, combined with some internal testing in a Shift Left approach, generates a double-loop learning structure [Argyris 1977] [Moustier 2020].

This nested-loop structure can be thought of as Ries’ “Lean Startup” [Ries 2012], which starts with assumptions built from internal feedback and loops back to market feedback to validate those assumptions. The market feedback then either reinforces the choice or leads to a pivot, abandoning what appears to be wrong from a Customer and thus economic point of view.

Lean Startup combined with ATDD & TDD [Moustier 2020]

This Lean Startup approach has been adopted by SAFe with “Leading Indicators” (LI’s) to be provided in every Epic description [SAFe 2021-34]. Those LI’s are observables to be thought out and implemented along with the product, which is facilitated by early testability.

When building an MVP, these LI’s can be collected manually to avoid investing too much in a solution that does not yet generate income; but after a few Minimal Marketable Releases (MMR), a “full automation paradigm” should lead to the automation of observables that include feedback from resources, the domain and the business.
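
Once manual collection no longer scales, the same LI’s can be wired into a small collector. The sketch below is only an illustration: every metric name, probe and the push_metric() destination is a hypothetical placeholder; the point is that resource, domain and business feedback end up in the same place.

```python
# Sketch of automating Leading Indicators once the MVP stage is passed.
# All metric names, probes and the push_metric() backend are hypothetical.

import time
from typing import Callable, Dict

def push_metric(name: str, value: float) -> None:
    """Placeholder for whatever monitoring backend the team uses (dashboard, TSDB, ...)."""
    print(f"{int(time.time())} {name}={value}")

# Each Leading Indicator is just a named probe returning a number.
LEADING_INDICATORS: Dict[str, Callable[[], float]] = {
    "resource.cpu_load": lambda: 0.42,           # feedback from resources (infrastructure)
    "domain.orders_rejected": lambda: 3,         # feedback from the domain (functional health)
    "business.signup_conversion": lambda: 0.051, # feedback from the business (value delivered)
}

def collect_once() -> None:
    for name, probe in LEADING_INDICATORS.items():
        push_metric(name, probe())

if __name__ == "__main__":
    collect_once()   # would typically be scheduled (cron, CI job, ...) rather than run by hand
```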

Agilitest’s standpoint on this practice

Agilitest usually helps to monitor the quality of the product before it is delivered, thanks to test automation. However, some organizations built around SaaS solutions run daily smoke tests in the morning when people arrive at work. This kind of operation can provide some basic monitoring of the deployed solution, at least from a domain perspective. Tweaking Agilitest’s initial use to feed a health dashboard can be a good start when product monitoring becomes relevant.
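
When the team decides to turn those morning smoke tests into basic monitoring, the results only need to be published somewhere visible. The following is a minimal sketch under explicit assumptions: the command used to launch the suite and the dashboard endpoint are placeholders to adapt to how the Agilitest project is actually executed in your context.

```python
# Sketch of turning a daily smoke-test run into basic solution monitoring.
# SMOKE_TEST_COMMAND and DASHBOARD_URL are assumptions, not real Agilitest artifacts.

import datetime
import json
import subprocess
import urllib.request

SMOKE_TEST_COMMAND = ["./run-smoke-tests.sh"]           # hypothetical launcher for the suite
DASHBOARD_URL = "https://dashboard.example.com/health"  # hypothetical health endpoint

def run_smoke_tests() -> bool:
    """Run the suite and report success/failure from its exit code."""
    result = subprocess.run(SMOKE_TEST_COMMAND)
    return result.returncode == 0

def publish(healthy: bool) -> None:
    """Push a minimal health record so the deployed solution shows up on a dashboard."""
    payload = json.dumps({
        "source": "daily-smoke-tests",
        "healthy": healthy,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }).encode()
    request = urllib.request.Request(DASHBOARD_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)

if __name__ == "__main__":
    publish(run_smoke_tests())
```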

To discover the whole set of practices, click here.

To go further

© Christophe Moustier - 2021