Definition of Done (DoD)

Active testing

Have a definition of a finished US (ready to be deployed)

What is the Definition of Done (DoD)?

“The ‘Definition of Done’ (DoD) is a formal description of the state of the Increment when it meets the quality measures required for the product” [Schwaber 2020].

While Scrum was created in 1995, this concept was only introduced into Scrum in 2009 [Verheyen 2020], apparently from a pattern proposed in 2002 by Dan Rawsthorne [Rawsthorne 2015]. The DoD creates transparency for everyone on delivered artifacts. It is based on a shared understanding, at product level, of what must be completed for those artifacts to be considered done [Madan 2019]; therefore, if several Teams are working on the same product, they may have different DoDs, but their combined work must satisfy a product-level DoD that makes the product releasable.

Even if this approach tends to result in a staggered SDLC, it will always be preferable to releasing mediocre code because, as SAFe likes to say, “You can't scale crappy code” [SAFe 2021-42], and the DoD helps to detect the smell ahead of time. Without it, code is sent to production without knowing the impact it will have on operations. However, staggering is not a curse: moving towards a Feature-Team-based organization reduces this bottleneck, and there are plenty of agile practices that help to do everything at the same time.

Impact on the testing maturity

The DoD concept is often confused with “Acceptance Criteria” (AC); actually, while ACs vary from User Story (US) to US, the DoD sits at a higher level [Purushotham 2020]. It is also applicable to any Product Backlog Item (PBI), as well as to any event such as Sprint Refinement and Planning, Demos or Retrospectives, to make sure each ceremony is complete [Dalton 2018].

The DoD consists of 3 main components [Madan 2019], as illustrated by the sketch after this list:

  • Business or Functional requirements: the ACs
  • Subjective or measured Quality Matters, such as test coverage, open bugs, etc.
  • Non-Functional Requirements
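
As an illustration, the measurable part of these components can be turned into an automated quality gate, for instance in a CI pipeline. The following Python sketch is only an assumption of how such a gate could look; the QualityReport fields and the thresholds are hypothetical, not prescribed values:

    from dataclasses import dataclass

    @dataclass
    class QualityReport:
        acceptance_criteria_met: bool  # Business or Functional requirements (the ACs)
        test_coverage: float           # measured quality matter, e.g. 0.87 for 87%
        open_bugs: int                 # measured quality matter
        nfr_checks_passed: bool        # Non-Functional Requirements (performance, security...)

    def meets_dod(report: QualityReport,
                  min_coverage: float = 0.80,  # hypothetical threshold
                  max_open_bugs: int = 0) -> bool:
        # The PBI is "Done" only when every DoD component is satisfied
        return (report.acceptance_criteria_met
                and report.test_coverage >= min_coverage
                and report.open_bugs <= max_open_bugs
                and report.nfr_checks_passed)

    # A US whose ACs pass but which still has an open bug is not "Done"
    assert meets_dod(QualityReport(True, 0.87, 1, True)) is False

The point of such a gate is not the exact thresholds, which each Team must agree on, but that every component of the DoD is checked before anything is declared “Done”.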

These components can be explored through a “Done Thinking Grid” [Gupta 2008] to get into more detail and pinpoint items on a checklist more specifically.

Gupta’s “Done Thinking Grid” [Gupta 2008]

A survey of publications on the DoD managed to spot 62 criteria covering 9 categories, mostly focused on regulatory compliance [Silva 2017].

Distribution of criteria per category [Silva 2017]

Indeed, the DoD will have different values depending on who expresses the expectations; thus, different levels of “Done” can be defined [Rawsthorne 2015]:

  • A US must be “Done/Done”: it gets “Done” and is then verified in terms of the DoD.
  • A Feature must be “Done/Done/Done”: all its composing US are “Done/Done” and the PO validates that the Feature is good enough.
  • A Product must be “Done/Done/Done/Done”: there are enough “Done/Done/Done” Features to be useful to users and the release is ready from the Ops point of view.

These feedback loops on the “Done” statuses are much like Argyris’ double-loop learning process [Argyris 1977] [Moustier 2020]: doing things right at level N leads to aiming at a higher level. They reflect the maturity of the Team, which updates the DoD from both internal and external constraints. The more the Team practices Gemba Walks or acts as an X-Team, the more accurate and relevant the DoD becomes [Moustier 2020]. However, setting a DoD with very high standards from the beginning, with Teams that are not ready for it, is not a good idea. Just as with games, the Team needs a couple of wins and good first-pass yields to get into it. Therefore, you should measure happiness within the Team, notably with a simple Niko-Niko Calendar.

Whatever the content of your DoD, the thing to keep in mind is the “shared understanding” dimension. The DoD should therefore never be pushed by a single person; it should rather emerge from the people who will be responsible for applying its criteria. To facilitate this approach, a set of cards named “DoD Kards” has been created to let the whole team, along with the Product Owner, decide on the DoD to apply [Velasquez 2017].

Agilitest’s standpoint on this practice

From an economic and DevOps perspective, automation is a must-have; automating test scripts is inevitable and should therefore be part of any DoD at US level. Moreover, when it comes to test script automation, the scripts should be treated as production code, which implies extra DoD criteria to keep the scripts maintainable and sustainable, such as “FIRST” [Tekguard 2018] [Moustier 2019-1] or the use of test harnesses to enable testability and fast test execution, as the sketch below illustrates.
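
To make “FIRST” (commonly expanded as Fast, Independent, Repeatable, Self-validating, Timely) more concrete, here is a minimal Python/pytest sketch; the apply_discount function and the test names are hypothetical, invented for this illustration:

    import pytest

    def apply_discount(price: float, rate: float) -> float:
        # Hypothetical production code under test
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(price * (1 - rate), 2)

    @pytest.fixture
    def catalog_price() -> float:
        # Independent & Repeatable: each test builds its own data,
        # with no shared state, database or network dependency
        return 100.0

    def test_discount_is_applied(catalog_price):
        # Fast: pure in-memory logic, runs in milliseconds
        # Self-validating: the assertion decides pass/fail, no manual check
        assert apply_discount(catalog_price, 0.25) == 75.0

    def test_invalid_rate_is_rejected(catalog_price):
        # Timely: written along with the production code it covers
        with pytest.raises(ValueError):
            apply_discount(catalog_price, 1.5)

Scripts written this way stay cheap to run on every commit, which is what makes “automated tests pass” a realistic DoD criterion rather than a bottleneck.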

To discover the whole set of practices, click here.

Related cards

To go further

© Christophe Moustier - 2021