What is the Definition of Ready (DoR)?
Once a team matures in handling its User Stories (US), it knows how much information a Product Backlog Item (PBI) usually needs to carry to be estimated confidently during Sprint Planning.
For instance, they may have realized that:
- the US should have been presented at Sprint Refinement in “3 Amigos” mode
- the US should be formatted in full Gherkin to enable ATDD with tools such as Cucumber or Behave (see the sketch after this list)
- the US should state its Non-Functional Requirements (NFR) to enable deployment and give the Ops people confidence
- the US should embed some test automation to enable DevOps
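To make the Gherkin point concrete, here is a minimal ATDD sketch with Behave: a hypothetical login scenario and its Python step definitions. The feature text and the `context.app` test double are illustrative assumptions, not a prescribed format:

```python
# steps/login_steps.py - Behave step definitions for a hypothetical
# "login.feature" file written in Gherkin:
#
#   Feature: User login
#     Scenario: Successful login
#       Given a registered user "alice"
#       When "alice" logs in with password "s3cret"
#       Then the dashboard is displayed
#
from behave import given, when, then

@given('a registered user "{name}"')
def step_registered_user(context, name):
    # assumption: context.app is a test double wired up in environment.py
    context.app.register(name, password="s3cret")

@when('"{name}" logs in with password "{password}"')
def step_login(context, name, password):
    context.page = context.app.login(name, password)

@then('the dashboard is displayed')
def step_dashboard(context):
    assert context.page == "dashboard"
```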
All those items should be gathered in a list nicknamed the “Definition of Ready” (DoR). A DoR aims to reduce uncertainty, which leads to better estimates at Sprint Planning time.
This list is a kind of checklist the Team and the PO use to gather enough details before development of the US starts, so that the Team can be as autonomous as possible; it mirrors the Definition of Done (DoD), which tells whether a US is effectively done. However, while the DoD is mandatory for a Scrum Team [Schwaber 2020], the DoR is not.
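As an illustration of the checklist idea, here is a minimal sketch in Python that encodes a DoR as data so a PBI can be checked before entering Sprint Planning; all criteria names are illustrative:

```python
# A DoR encoded as a plain checklist; every criterion below is illustrative.
DOR = [
    "refined in 3 Amigos mode",
    "written in Gherkin (Given/When/Then)",
    "NFR identified",
    "test data identified",
    "automated tests planned",
]

def is_ready(pbi: dict) -> bool:
    """A PBI is ready when every DoR criterion is ticked."""
    return all(pbi.get(criterion, False) for criterion in DOR)

us = {criterion: True for criterion in DOR}
us["test data identified"] = False
print(is_ready(us))  # False: this US goes back to refinement
```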
A DoR should not be mandatory [Cohn 2016] [Moustier 2019-1] because:
- at Sprint Refinement time, or even during the Sprint, some details may still be missing; they can be provided Just-in-Time, which is a Lean Management practice
- requiring everything to be fully specified before starting increases the Work In Progress (WIP), raises the lead time of PBIs (as Little’s Law predicts: at constant throughput, lead time grows with WIP) and drifts back to Waterfall
On the other hand, since the DoR provides nearly everything needed to achieve a US, it also appears that the DoR should improve the First Pass Yield (FPY), i.e. the ability to reach the DoD on the first try, which is another Lean Management good practice…
Actually, since a US should fit within a Sprint, a balance must be found between [Moustier 2019-1][Moustier 2019-2]:
- the DoR, which facilitates predictability
- the DoD, which hardens the US
- the FPY, which is a proxy for the flow of the US
- and the morale of the Team (e.g. tracked with a Niko-Niko calendar), which is a proxy for the Team’s sustainability
Impact on the testing maturity
A good DoR is a reminder of what is expected to deliver a PBI within an iteration and eventually get it deployed. For that matter, the four quadrants [Marick 2003][Bach 2014][Crispin 2021] provide a high-level set of activities that should be covered by the PBI.
Yet, those four quadrants may not be enough to cover the needs around a deliverable PBI. Other criteria may be added, such as [Moustier 2021]:
- the well-known “INVEST” criteria (Independent, Negotiable, Valuable, Estimable, Small, Testable) [Wake 2003] [Moustier 2019-1]
- Testability - notably the planned automated tests, NFR testing, test data to use or generate, tests inherited from parent PBIs such as Epic Leading Indicators [SAFe 2021-34], exploratory testing to perform, observables to implement, and GDPR or WCAG concerns
- Communication matters - notably points of contact who can provide extra information or who expect the PBI delivery, documents to generate, planned Pair or Mob Programming to handle knowledge transfer and raise quality, a review in 3 Amigos mode with an Ops person, alert management, and training to receive or provide
- Deployment matters - notably the expected delivery or activation dates, deployment strategies, and monitoring in production
- US Format matters - notably links to parent or related PBIs in the ticketing tool, links to any business objectives [SAFe 2021-30], a BDD/Gherkin-style format, and ubiquitous language compliance
Agilitest’s standpoint on DoR
Planning the test scripts to be automated along with the US definitely helps assess how big the US is, and therefore improves the predictability of Sprint Planning outcomes.
Moreover, if the DoR leads the Team to think about business-relevant test data for the tested scenarios, automation efficiency improves drastically. For that matter, Agilitest handles test data either directly within the script or in a CSV file. Using a simple CSV file avoids complex test setups and provides a robust way to reduce false positives (“flaky tests”) caused by testing-environment issues when databases are involved.
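As a sketch of that data-driven pattern (this is not Agilitest’s actual CSV binding, which is configured within the tool, but the general idea in plain Python), the snippet below feeds a login check from CSV rows; the column names and the `login` stand-in are assumptions:

```python
import csv
import io

# Inline stand-in for a "users.csv" file; in practice the file sits next
# to the test scripts and is versioned with them.
USERS_CSV = """login,password,expected_page
alice,s3cret,dashboard
bob,wrong-pass,login_error
"""

def login(user: str, password: str) -> str:
    # Stand-in for the system under test: one known credential pair.
    return "dashboard" if (user, password) == ("alice", "s3cret") else "login_error"

# One test run per CSV row: the data varies, the scenario stays the same.
for row in csv.DictReader(io.StringIO(USERS_CSV)):
    assert login(row["login"], row["password"]) == row["expected_page"], row
print("all data-driven checks passed")
```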
To go further
- [Bach 2014]: James Bach - “The Real Agile Testing Quadrant” - SEP 2014 - http://www.developsense.com/presentations/2014-06-Dublin-RSTAgileTesting.pdf
- [Cohn 2016]: Mike Cohn - “The Dangers of a Definition of Ready” - 09/AUG/2016 - https://www.mountaingoatsoftware.com/blog/the-dangers-of-a-definition-of-ready
- [Crispin 2021]: Lisa Crispin & Janet Gregory - “Applying the Agile Testing Quadrants to Continuous Delivery and DevOps Culture – Part 2 of 2” - JAN 2021 - https://agiletester.ca/applying-the-agile-testing-quadrants-to-continuous-delivery-and-devops-culture-part-2-of-2/
- [Marick 2003]: Brian Marick - “Agile testing directions: tests and examples” - 22/AUG/2003 - http://www.exampler.com/old-blog/2003/08/22/#agile-testing-project-2
- [Moustier 2019-1]: Christophe Moustier - “Le test en mode agile” - JUN 2019 - ISBN 978-2-409-01943-2
- [Moustier 2019-2]: Christophe Moustier - “Le Zen de la vélocité” - https://www.researchgate.net/publication/339340044_Le_Zen_de_la_velocite or https://fan-de-test.fandom.com/fr/wiki/V%C3%A9locit%C3%A9,_Bi%C3%A8re_et_Sexe
- [Moustier 2021]: Christophe Moustier - “DoR Cards” - JUL 2021 - http://dx.doi.org/10.13140/RG.2.2.27028.22409
- [SAFe 2021-30]: SAFe - “PI Objectives” - FEB 2021 - https://www.scaledagileframework.com/pi-objectives/
- [SAFe 2021-34]: SAFe - “Epic” - FEB 2021 - https://www.scaledagileframework.com/epic/
- [Schwaber 2020]: Ken Schwaber and Jeff Sutherland - “The Scrum Guide: The Definitive Guide to Scrum: The Rules of the Game” - NOV 2020 - https://scrumguides.org/docs/scrumguide/v2020/2020-Scrum-Guide-French.pdf or https://scrumguides.org/docs/scrumguide/v2020/2020-Scrum-Guide-US.pdf
- [Wake 2003]: Bill Wake - “INVEST in Good Stories, and SMART Tasks” - 2003 - https://xp123.com/articles/invest-in-good-stories-and-smart-tasks/