Milestones are evaluated objectively

State of mind
Evaluate the solution at each increment, not the intermediate deliverables
In agile, milestones include, among other things, the moments when the product is delivered to the customer. There are other kinds of milestones, such as the delivery of each User Story (US). SAFe adds further milestones, such as the Program Increment (PI), which classically frames four consecutive two-week sprints [SAFe 2021-14].
The point of this practice is that each of these milestones must be framed by verifiable criteria that make it possible to say whether the milestone has been reached.
In agility, this practice must absolutely be combined with an incremental approach, otherwise we fall back into a waterfall and its tunnel effect [SAFe 2021-5].
Application to test maturity
At the US level, this practice appears in Scrum through the notion of "Definition of Done" [Schwaber 2020]: a list of criteria common to all User Stories, to which specific acceptance criteria are added for each one. The purpose of these criteria is:
- to define the work to be done in advance, so as to know when a US is complete and to facilitate its evaluation during the sprint
- to know the exact scope of what is delivered, notably through the examples described, tested and passed
- to inform the clients of each US of what has been checked, so that the limits of reliability are clearly known
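To make the idea concrete, a Definition of Done can be seen as an explicit, machine-checkable list of criteria. The sketch below is purely illustrative: the criterion names, the `UserStory` class and the `is_done` helper are hypothetical, not part of Scrum or of any tool.

```python
# Hypothetical sketch: a Definition of Done as an explicit, verifiable checklist.
# All criterion names below are illustrative examples, not an official list.
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str
    results: dict = field(default_factory=dict)  # criterion name -> passed?

# Criteria common to every US: the team's Definition of Done.
DEFINITION_OF_DONE = ["code reviewed", "unit tests pass", "acceptance examples pass"]

def is_done(story: UserStory, specific_criteria: list) -> bool:
    """A US is done only when every common AND specific criterion is verified."""
    return all(story.results.get(c, False)
               for c in DEFINITION_OF_DONE + specific_criteria)

story = UserStory("Export invoices as PDF",
                  results={"code reviewed": True,
                           "unit tests pass": True,
                           "acceptance examples pass": True,
                           "PDF opens in standard readers": True})
print(is_done(story, ["PDF opens in standard readers"]))  # → True
```

The value is less in the code than in the discipline it encodes: a US with any unchecked criterion, common or specific, is simply not done.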
Mirroring what is done on a US, such criteria must therefore also be found on the various ideation objects, such as:
- the epics
- SAFe's capabilities and features [SAFe 2021-15].
- the content of sprints and PIs
but also :
- the code, hence the relevance of TDD, which relies on the same types of criteria
- the individual validated components and their interactions
- the delivered system and its conditions of acceptance by the customer (the "acceptance" proper), or more generally of operability by the operations teams, the "Ops".
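At the code level, the TDD practice mentioned above rests on exactly this principle: the verifiable criterion (a test) exists alongside the code it frames. A minimal sketch using Python's built-in unittest module; the `apply_discount` function and its rules are invented for illustration.

```python
# Minimal TDD-style sketch: each test encodes one verifiable criterion
# that tells us objectively whether the code "works". Names are illustrative.
import unittest

def apply_discount(price: float, rate: float) -> float:
    """Illustrative production code, written to satisfy the tests below."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountCriteria(unittest.TestCase):
    """Each test method is one explicit, verifiable acceptance criterion."""

    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 0.10), 90.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountCriteria)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

Whether the milestone is a US, a feature or a PI, the mechanism is the same: the criteria are executable, so "done" is a matter of observation, not opinion.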
This need for criteria is all the more important because, given the quantity of objects that accumulate, unclear criteria quickly leave the team no longer knowing what works and what does not; technical debt [Cunningham 1992] [Moustier 2019-1] then builds up, resulting in a complex tangle of problems that are difficult to solve.
Agilitest's position on this practice
Agilitest facilitates the automation of functional tests at the system and component level.
This automation makes it possible to demonstrate, at any time, the proper functioning of the delivered parts, provided that the functional acceptance criteria have been identified, scripted and included in the test platform.
However, this correct functioning only shows that the product conforms to functional expectations; it in no way dispenses with the verifications related to non-functional requirements, often tied to operability conditions, such as [ISO 25010 2011]:
- software performance
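Such non-functional criteria can be made just as objective as functional ones, for example by turning a response-time budget into an automated check. A hedged sketch, assuming a latency budget agreed with the Ops teams; the budget value, `handle_request` and the helper functions are all hypothetical.

```python
# Illustrative sketch: an objective, automated check of a performance budget.
# The budget value and the operation under test are hypothetical examples.
import time

RESPONSE_BUDGET_S = 0.2  # hypothetical budget: 200 ms, agreed with the Ops teams

def handle_request() -> str:
    """Stand-in for the operation whose performance is being verified."""
    time.sleep(0.001)
    return "ok"

def measure(operation) -> float:
    """Time one execution of the operation."""
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start

def within_budget(operation, budget_s: float, runs: int = 10) -> bool:
    """Objective criterion: the worst observed latency stays under the budget."""
    worst = max(measure(operation) for _ in range(runs))
    return worst <= budget_s

print(within_budget(handle_request, RESPONSE_BUDGET_S))
```

Once expressed this way, a performance requirement becomes a milestone criterion like any other: it either passes or it does not.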
To go further
- [Cunningham 1992] : Ward Cunningham - "The WyCash Portfolio Management System" - OOPSLA'92 Experience Report - 26/MAR/1992 - http://c2.com/doc/oopsla92.html
- [ISO 25010 2011] : BSI Standards Publication - "Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models" - BS ISO/IEC 25010:2011
- [Moustier 2019-1] : Christophe Moustier - JUN 2019 - "Le test en mode agile" - ISBN 978-2-409-01943-2
- [SAFe 2021-14] : SAFe - FEB 2021 - "Program Increment" - https://www.scaledagileframework.com/program-increment/
- [SAFe 2021-5] : SAFe - FEB 2021 - "Principle #5 – Base milestones on objective evaluation of working systems" - https://www.scaledagileframework.com/base-milestones-on-objective-evaluation-of-working-systems/
- [SAFe 2021-15] : SAFe - FEB 2021 - "Features and Capabilities" - https://www.scaledagileframework.com/features-and-capabilities/
- [Schwaber 2020] : Ken Schwaber and Jeff Sutherland - "Le Guide Définitif de Scrum : Les Règles de Jeu" - NOV 2020 - https://scrumguides.org/docs/scrumguide/v2020/2020-Scrum-Guide-French.pdf