Merging the development / test automation ecocycles

Automated testing

The more remote the activities, the less agile we are

When improving automated testing, the question “who should write the automated test scripts?” emerges in low-maturity organizations. The obvious answer to this question is to apply the automation test pyramid [Segal 2022] with

  • a lot of Unit Tests done by Developers
  • some Integration Tests also done by Developers
  • fewer End-to-End (E2E) tests done by QAs at the UI level
Automation Testing Pyramid [Fowler 2012]
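The pyramid’s shape can be made concrete with a quick sanity check over a hypothetical suite inventory (the counts below are purely illustrative, not a recommendation):

```python
# Hypothetical test-suite inventory illustrating the automation test pyramid:
# many fast unit tests, fewer integration tests, very few E2E tests.
suite = {"unit": 500, "integration": 80, "e2e": 12}  # assumed counts

total = sum(suite.values())
shares = {layer: count / total for layer, count in suite.items()}

# A pyramid-shaped suite keeps each layer smaller than the one below it.
assert suite["unit"] > suite["integration"] > suite["e2e"]
print({layer: f"{share:.0%}" for layer, share in shares.items()})
```

Such a check can be run in CI to warn a Team whose suite is drifting towards the Ice Cream Cone shape discussed below.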

Then, tools such as 

provide some abstraction layers that make test automation technically affordable to non-developers. In this context, test automation can actually be driven by

  • people’s skills, letting mature Teams self-organize
  • or a Team structure decided by some hierarchy.

Whatever the driver of the organization, the question is about the interaction between the people who develop production code and the people who write test scripts. This question can be covered with 4 main strategies [Axelrod 2018]:

  • Strategy A: Promoting Manual Testers or Inexperienced Programmers to Automation Developers as a start
  • Strategy B: Having a Dedicated Automation Team with strong development skills
  • Strategy C: Giving the Developers the ownership of the automation along with the production code
  • Strategy D: Sharing the ownership within the Team

In the graphs below, the size of each area shows the range of possible concerns, which depends on the agile maturity.

Those strategies may or may not rely on a shared testing framework that enables collective ownership of automation (e.g. KDT, low-code, FitNesse-like or #nocode). This contributes a lot to sharing concerns related to testing and coding matters.

Whatever the strategy, if a testing platform is shared between Developers and Testers, both coding and testing concerns will be shared towards a common ownership.

Strategy A:

This opportunistic approach leads to poorly maintainable test scripts whenever those Testers do not have a Developer background: the produced code introduces neither modularity nor reusability, so every script fix has to be propagated throughout the whole test script asset.

Strategy B:
The automation Team is empowered with enough resources to develop and run automated scripts; however, this Team and the production code Developers use different libraries and design patterns. Obviously, in that case, a silo has been introduced, which usually falls into the Conway’s law pitfall [Conway 1968] with an Ice Cream Cone syndrome and bottlenecks in the delivery process.

Although this configuration is supposed to address the inattentional blindness cognitive bias by having both the Developers’ and Testers’ Teams engineer tests, it generates possible duplicate tests. The resulting useless tests at the QAs’ level led to 4 proposed rules [Cruden 2015]:

  1. Only work on test automation if you have the skills to work on production code if required.
  2. Only work on production code if you have the skills to work on test automation if required.
  3. In case of 1) or 2) not being true, pair with someone who meets the requirements of 1) and 2) until it is. 
  4. The team must own and contribute to test code, not certain individuals or disciplines within it.

Those issues should lead the organization to move to more mature strategies.

Strategy C:
Developers’ involvement in test scripts is key to avoiding Strategy B’s issues. Although it may first appear more convenient to let Developers create new features and Testers produce the tests, the aforementioned drawbacks make the Developer who wrote the feature the best person to write its automation [Segal 2022]. Devs are naturally “wired” to manage scripts as production code, applying design patterns to scripts such as the “Page Object Model” (POM), or even better “Screenplay”, which is the SOLID principles applied to POM [Kutz 2022]. Devs’ perspective on scripting is very structuring [Axelrod 2018].
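A minimal POM sketch, assuming a generic driver object with hypothetical `type`, `click` and `text` calls, could look like this:

```python
# Page Object Model sketch: each page's selectors and interactions live in one
# class, so tests express intent and UI changes are absorbed in one place.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver          # any WebDriver-like object (assumption)

    def login_as(self, user, password):
        self.driver.type("#user", user)
        self.driver.type("#pass", password)
        self.driver.click("#login")
        return HomePage(self.driver)  # navigation returns the next page object

class HomePage:
    def __init__(self, driver):
        self.driver = driver

    def greeting(self):
        return self.driver.text("#greeting")
```

A test then reads as intent, e.g. `LoginPage(driver).login_as("alice", "secret").greeting()`; Screenplay pushes the same idea further by splitting such page objects into small, single-responsibility "tasks" and "questions".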

Still, this strategy leads Developers to trust that the QAs know what to test at the higher level, while the QAs trust that the Developers are testing ‘the right stuff’ at the lower level. Here again, QAs focus on their own tasks, naturally stop assisting Devs, and turn towards Business Analysts.

Strategy D:
From previous strategies, it appears both Developers and Testers need guidance from their colleagues to provide effective test automation [Axelrod 2018]. This guidance becomes particularly relevant when

  • ATDD is involved, with Acceptance Criteria generated notably in a 3 Amigos session
  • TDD is driven from ATDD scripts
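This ATDD-to-TDD flow can be illustrated with a hypothetical Acceptance Criterion turned into an executable Given/When/Then check, which then drives a lower-level unit test (the discount rule and all names are invented for the example):

```python
def apply_discount(total, loyalty_years):
    """Production code grown test-first from the acceptance script below."""
    return round(total * 0.9, 2) if loyalty_years >= 5 else total

def test_acceptance_loyal_customer_gets_10_percent_off():
    # Given a customer with 5 years of loyalty
    # When they check out a 100.0 basket
    # Then they pay 90.0
    assert apply_discount(100.0, loyalty_years=5) == 90.0

def test_unit_boundary_just_below_threshold():
    # TDD-level corner case derived from the acceptance rule above
    assert apply_discount(100.0, loyalty_years=4) == 100.0
```

The acceptance check is the shared contract from the 3 Amigos session; the boundary test is the kind of lower-level case TDD adds underneath it.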

Even if ATDD and development could be split between Dev and Automation Teams, strong communication would be required to align scripts and production code, and separate Teams would definitely hinder this combination promoted by SAFe [SAFe 2021-27], LeSS [Yin 2022] and DAD [Ambler 2012].

Under this circumstance, test independence [ISTQB 2018] may rely on Testers who verify high-level scenarios and challenge scripts to

  • ask Developers to introduce some Jidoka in test scripts
  • add some corner cases they eventually find relevant to automate, with the help of Developers

As an extension of this strategy, Testers should also support Unit Tests, notably

  • with the help of tools such as FitNesse, letting Testers provide test data
  • or by letting Developers explain the context and the actions performed in their Unit Tests, so that improvements may arise as soon as possible, as per the Shift Left testing approach.
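A FitNesse-style split of responsibilities can be sketched as follows: Testers maintain a plain data table while Developers maintain the fixture that replays it (the VAT rule and all figures are invented for the example):

```python
def vat_amount(price, rate):
    """Unit under test (illustrative business rule)."""
    return round(price * rate, 2)

# Table owned by the Testers: inputs and expected outputs, no code knowledge
# required to add a row.
TESTER_TABLE = [
    # price, rate, expected
    (100.00, 0.20, 20.00),
    ( 49.99, 0.10,  5.00),
    (  0.00, 0.20,  0.00),
]

def run_table(table):
    """Fixture owned by the Developers: replays every tester-provided row."""
    return [(price, rate, vat_amount(price, rate) == expected)
            for price, rate, expected in table]
```

Adding a corner case then costs the Tester one table row, while the Developer keeps ownership of how rows are executed.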

This builds a shared ownership of testing and automation across the Team.

However, simply applying this strategy is not enough; it also depends on how the Team applies agile testing practices, hence the wide spectrum of this strategy when automation is shared between different Teams without a shared tool.

This cohesion can be facilitated notably through a Swiss Cheese model analysis by Team Members:

  • defining the testing opportunities, Swiss Cheese model wise
  • skipping the rightmost slices that cannot be addressed (e.g. non-testable parts in a given context, notably due to an unreachable testing mean such as a private API)
  • skipping the rightmost testing opportunities on the less risky parts (domain/business wise) and leaving them unchecked
  • eventually introducing multiple checks on the same premise for the most critical parts (domain/business wise), to enable testing from different contexts
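The “multiple checks on the same premise” idea can be sketched as independent slices probing one invented business rule; a defect only escapes if every slice misses it (all names and layers below are illustrative):

```python
def decrement_stock(stock, qty):
    """Unit under test: the rule every slice is probing (no negative stock)."""
    if qty > stock:
        raise ValueError("insufficient stock")
    return stock - qty

def unit_slice():
    # Slice 1 (unit level): ordering more than available must be rejected.
    try:
        decrement_stock(1, 2)
        return False              # hole: the defect went through this slice
    except ValueError:
        return True               # this slice caught the violation

def service_slice():
    # Slice 2 (service level): the same premise probed from another context.
    try:
        decrement_stock(5, 10)
        return False
    except ValueError:
        return True

def defect_escapes(slice_results):
    """Swiss Cheese logic: the defect escapes only if every slice missed it."""
    return not any(slice_results)
```

Each slice is deliberately redundant on purpose: the critical rule is worth several holes’ worth of coverage.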

The outcome of this strategy is that collaboration is key [Laurent 2018].

The underlying need is “Social Testability”: the ability to share testability knowledge at a higher level, as opposed to an intimate knowledge of the application’s internals, which Testers may not have access to [Segal 2022].

Strategy C relies on Developers as good candidates to write automated tests because

  • they already have an intimate knowledge of the application internals
  • they can quickly adapt testability to enable testing

However, there should not be a Team inside the Team, as per the Scrum Guide [Schwaber 2020], in order to prevent bottlenecks [Segal 2022]. Therefore, Strategy D relies on T-Shaped people as an objective. To enable this approach, Google’s SRE model proposes strong collaboration between Devs & Ops and a 50% cap on the amount of time dedicated to new features in order to enhance shared ownership [Beyer 2016]; test-oriented profiles may then be taken into account in a similar way.

Impact on the testing maturity

Although Strategy B seems to comply with the ISTQB advice regarding Testers’ independence [ISTQB 2018], as long as the testing maturity is not high enough, organizational independence creates more issues than independent testing may discover. On one hand, independence intrinsically pushes the organization to multiply the types of tests on the solution; but independence is not only a matter of separate Teams, it is actually a question of critical thinking, which proposes to establish the necessary distance from the topic [Bolton 2017].

It has been shown that independent testing improves testing by 16% [Sunyaev 2015], but it also appears that Shift Left opportunities such as Built-In Self Tests (BIST) are missed, and thus knowledge of lower-level tests is missed too [Segal 2022]. Moreover, Sunyaev’s paper does not take agile testing practices into account, while those practices aim to prevent bugs rather than test the product; it therefore does not prove that this 16% improvement would not be higher with collaborative testing, although measuring that would involve an extremely complex and multifactorial KPI. Finally, even if redundant engineering would seem to introduce stronger reliability [Chen 1978], an experiment highlighted that this is a fallacy, since separate Teams actually introduce the same issues [Knight 1986].

Actually, to improve test results, Strategies B and D should be combined, as is done with security testing. This approach involves both internal and external Teams that have full, partial or no knowledge of the system to attack, i.e. to test.

Security Testing approaches - Picture from [Moustier 2019-1]


When it comes to coordinating development and automation matters at scale, PanTesting can provide you with some tools to progressively merge both (eco)cycles and eventually redesign the organization to exploit the natural tendency to follow Conway’s law and head towards Strategy D [Skelton 2019].

Agilitest’s standpoint on this practice

Agilitest is able to address E2E testing at the UI level on browsers, mobile and MS Windows applications, or closer to the integration level thanks to REST API calls. Since Agilitest is a #nocode technology, any of the strategies described here can be involved.

When it comes to automating scripts at scale, Agilitest can be plugged into a Git source control server and a Jenkins-based toolchain at no cost, since the script runner is an open-source component. These capabilities enable organizations to cope with combinations of all the strategies and support the company’s growth.

To discover the whole set of practices, click here.

To go further

  • [Ambler 2012] : Scott W. Ambler and Mark Lines - “Disciplined Agile Delivery: A Practitioner’s Guide to Agile Software Delivery in the Enterprise” - IBM Press - 2012 - ISBN: 978-0-13-281013-5
  • [Axelrod 2018] : Arnon Axelrod - 2018 - “Complete Guide to Test Automation: Techniques, Practices, and Patterns for Building and Maintaining Effective Software Projects” - ISBN: 9781484238318
  • [Beyer 2016] : Betsy Beyer, Chris Jones, Jennifer Petoff and Niall Richard Murphy - “Site Reliability Engineering: How Google Runs Production Systems” - O’Reilly Media - 2016 - ISBN-13: 978-1491929124
  • [Bolton 2017] : Michael Bolton & James Bach - MAR 2017 - “Critical Thinking for Testers”
  • [Skelton 2019] : Matthew Skelton & Manuel Pais - 2019 - “Team Topologies: Organizing Business and Technology Teams for Fast Flow” - ISBN: 9781942788829
© Christophe Moustier - 2021