A charter defines an exploration session of the application, to search for bugs outside the existing test cases
Cem Kaner coined the term “Exploratory Testing” (ET) in 1984 [Kaner 2008][Bach 2015-1]. Since then, it has become a must in agile software testing. This practice is to be distinguished from ad hoc testing, for ET combines simultaneous learning, test design and test execution based on what you have learnt so far [Bach 2004].
Over 15 years, ET was progressively refined through Kaner and Bach’s collaboration, which started in 1995 [Bach 2015-1][Bach 2015-2], until James and his brother, Jonathan Bach, arrived at what James names “Session-Based Test Management” (SBTM) [Bach 2000]. SBTM appears to be “ET 2.0” [Bach 2015-2]. It incorporates planning, structuring, guiding and tracking the test effort with good tool support when conducting ET [Ghazi 2017].
The “do everything at the same time” principle combined with a timebox approach makes ET an agile approach [Nonaka 1986][Moustier 2019-1]. Moreover, it also shortens the feedback loop between test engineering and test execution. It emphasizes adaptability and learning, while the classical testing approach implies accountability and decidability [Bach 2004]. Since it is basically an adaptive testing approach with emergent test scenarios and test techniques, it suits complex projects, just as agility does [Snowden 2007].
The Cynefin framework by [Snowden 2014]
James’ insights and thoughts on testing lead to the following observations:
“Test cases are not testing” [Bach 2014]
“Exploratory testing is not a testing technique. It's a way of thinking about testing - Any testing technique can be used in an exploratory way” [Bach 2004].
Impact on the testing maturity
ET intrinsically addresses two of the testing principles [ISTQB 2018]:
“the Pesticide Paradox” - since ET is not scripted, the same scenario will never be run the same way twice [Bach 2006]
“Testing is context dependent” - it is fed by what you have learnt so far [Bach 2004]
The second aspect is so important that James Bach founded a “testing school” named “Context-Driven Testing” (CDT) - https://context-driven-testing.com/ - with 7 principles which help raise the bar on ET:
The value of any practice depends on its context.
There are good practices in context, but there are no best practices.
People, working together, are the most important part of any project’s context.
Projects unfold over time in ways that are often not predictable.
The product is a solution. If the problem isn’t solved, the product doesn’t work.
Good software testing is a challenging intellectual process.
Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
CDT led James Bach to establish a “Heuristic Test Strategy Model” [Bach 2020]:
This model shapes the so-called “Test Strategy”, also known as the “Quality Story” or “Charter”, which is required by SBTM. The charter is notably necessary
to keep track of what was tested at very high level
to define the testing mission for the test session and a high level plan that determines what should be tested, how it should be tested and the associated limitations [Ghazi 2017]
to let the Product Owner (PO) manage their ROI and agree on the proposed testing effort
This charter makes a big difference from ad hoc testing. It may cover [Bach 2004]:
Early coverage heuristics [Moustier 2021-2][Meerts 2012]
Active reading of reference materials
Failure mode lists
Project risk lists
A survey also gathered typical test charter content [Ghazi 2017]:
C01: Test Setup: Description of the test environment.
C02: Test Focus: Part of the system to be tested.
C03: Test Level: Unit, Function, System test, etc.
C04: Test Techniques: Test techniques used to carry out the tests.
C05: Risks: Product risk analysis.
C06: Bugs Found: Bugs found previously.
C07: Purpose: Motivation why the test is being carried out.
C08: System Definition: Type of system (e.g. simple/ complex).
C09: Client Requirements: Requirements specification of the client.
C10: Exit Criteria: Defines the “done” criteria for the test.
C11: Limitations: What the product must never do, e.g. sending data as plain text is strictly forbidden.
C12: Test Logs: Test logs to record the session results.
C13: Data and Functional Flows: Data and work flow among components.
C14: Specific Areas of Interest: Where to put extra focus during the testing.
C15: Issues*: Charter specific issues or concerns to be investigated.
C16: Compatibility Issues: Hardware and software compatibility and interoperability issues.
C17: Current Open Questions: Existing questions that refer to the known unknowns.
C18: Information Sources: Documents and guidelines that hold information regarding the features, functions and systems being tested.
C19: Priorities: Determines what the tester spends most and least time on.
C20: Quality Characteristics: Quality objectives for the project.
C21: Test Results Location: Test results location for developers to verify.
C22: Mission Statement: One liner describing the mission of the test charter.
C23: Existing Tools: Existing software testing tools that would aid the tests.
C24: Target: What is to be achieved by each test.
C25: Reporting: Test session notes.
C26: Models and Visualizations: People, mind maps, pictures related to the function to be tested.
C27: General Fault: Test related failure patterns of the past.
C28: Coverage: The charter’s boundary in relation to what it is supposed to cover.
C29: Engineering Standards: Regulations, rules and standards used, if any.
C30: Oracles: Expected behavior of the system (either based on requirements or a person).
C31: Logistics: How and when resources are used to execute the test strategy, e.g. how people in projects are coordinated and assigned to testing tasks.
C32: Stakeholders: Stakeholders of the project and how their conflicting interests would be handled.
C33: Omitted Things: Specifies what will not be tested.
C34: Difficulties: The biggest challenges for the test project.
C35: System Architecture: Structure, interfaces and platforms concerning the system, and its impact on system integration.
(*) You may notice that using already discovered issues actually exploits the “Defects cluster together” ISTQB principle [ISTQB 2018].
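As a sketch only, a minimal SBTM charter combining a few of the surveyed items could be captured as a simple data structure; the field names loosely echo the survey codes, and all values are illustrative assumptions, not taken from a real project:

```python
# Minimal sketch of an SBTM test charter as a data structure.
# Field names loosely follow the survey items (C01, C02, C07, C10, C22, C33);
# all values are illustrative assumptions.
charter = {
    "mission": "Explore the checkout flow to find payment-related bugs",  # C22
    "test_setup": "Staging environment with a sandbox payment gateway",   # C01
    "focus": "Payment and order confirmation screens",                    # C02
    "purpose": "A new payment provider was integrated last sprint",       # C07
    "exit_criteria": "90-minute timebox elapsed or blocker found",        # C10
    "omitted": ["Invoicing", "Refunds"],                                  # C33
}

def is_runnable(c):
    """A session at least needs a mission and 'done' criteria to start."""
    return bool(c.get("mission")) and bool(c.get("exit_criteria"))
```

Such a lightweight structure is enough for the PO to review the proposed effort before the session starts.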
Another way to build a charter is to involve the “Persona” concept [Hendrickson 2013][Moustier 2019-1], which consists of providing a forged profile with enough information to impersonate the mindset of a user stereotype. Ideally, this stereotype should come from marketing, but extra personae may be generated for specific testing purposes (say, “Kevin, a 28-year-old geek who loves to try ready-made script-kiddie exploits to break the security of systems”).
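A persona can be sketched as a small record; “Kevin” comes from the example above, while the field names are assumptions chosen for illustration:

```python
from dataclasses import dataclass

# Sketch of the Persona concept: a forged profile with just enough
# information to impersonate a user stereotype during a test session.
# Field names are illustrative assumptions.
@dataclass
class Persona:
    name: str
    age: int
    profile: str        # the mindset to impersonate
    testing_goal: str   # what this persona drives the tester to probe

kevin = Persona(
    name="Kevin",
    age=28,
    profile="Geek who loves to try ready-made exploits",
    testing_goal="Break the security of the system",
)
```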
As said, ET relies strongly on tools; James Bach lists some of them in [Bach 2004].
When it comes to handling ET in a group, there is a prototype tool [Leveau 2020] capable of
sharing exploration actions on a web site
showing which widgets were already involved in previous explorations and which have never been used
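The core idea of that group-ET support can be sketched with plain set operations: pool the widgets touched by each tester’s session, then highlight those never exercised. Widget identifiers here are hypothetical:

```python
# Sketch of the group-ET idea from [Leveau 2020]: share which widgets
# previous sessions already touched and surface those never used.
# All widget identifiers are hypothetical.
all_widgets = {"login_btn", "search_box", "cart_icon", "checkout_btn"}

session_logs = [
    {"search_box", "cart_icon"},   # tester A's exploration
    {"login_btn", "search_box"},   # tester B's exploration
]

explored = set().union(*session_logs)   # everything any session touched
never_used = all_widgets - explored     # candidates for the next session
```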
Since SBTM, ET has been further improved with “Thread-Based Test Management” (TBTM) [Bach 2010]. TBTM is an activity-based approach, comparable with conversation threads that arise, are interrupted, change, etc. While SBTM sessions are timeboxed and committed to completing a task (the charter), TBTM is a generalisation of SBTM that works even in chaotic and difficult environments with interruptions and opportunities [Gatien 2012]. To handle TBTM, a simple mind-mapping tool can be used with icons, font sizes, colors, notes or any visual feature to represent
paused tests, to be resumed later
the importance of some thread
people assigned to some thread
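The bookkeeping behind such a mind map can be sketched as a tiny thread record; the class and field names are assumptions mirroring the visual cues just listed (importance, assignee, pause notes):

```python
from dataclasses import dataclass, field

# Sketch of TBTM bookkeeping: unlike a timeboxed SBTM session, a test
# thread can be paused on interruption and resumed later.
# Class and field names are illustrative assumptions.
@dataclass
class TestThread:
    topic: str
    importance: int = 1     # e.g. rendered as font size in the mind map
    assignee: str = ""      # person currently working the thread
    paused: bool = False
    notes: list = field(default_factory=list)

    def pause(self, note: str) -> None:
        self.paused = True
        self.notes.append(note)

    def resume(self) -> None:
        self.paused = False

t = TestThread(topic="Session timeout handling", importance=3, assignee="Ana")
t.pause("Interrupted by a hotfix review")
t.resume()
```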
Regarding the efficiency of ET, surveys have been conducted and several benefits were identified [Itkonen 2005]:
Writing a complete test suite is long and hard, while ET avoids this effort
ET is a way of testing the product from an end user’s perspective
ET enables testing larger combinations of features together, because traditional test cases are based on requirements (usually only one per test case, to ease traceability)
ET helps to find usability problems in the software
ET allows viewing the feature as a whole, while following a script focuses the tester on the scenario
ET is most useful for giving fast feedback to developers regarding newly developed features
ET is a natural choice when features are versatile
ET can be viewed as a way to learn about the product
ET goes deeper into the tested feature
Retesting a fixed bug with ET is not done the same way twice, which enables finding other bugs
ET helps to find more important defects in a shorter amount of time than traditional test cases would
ET is actually so efficient that the FDA has approved its use for medical devices [Bach 2011][FDA 2013]. As a bottom line on ET’s efficiency, in 2015 James Bach decommissioned the term “Exploratory Testing”, since “testing” actually is ET - even classical scripted testing has some exploratory phase, at least to discover how things could be prepared and run afterwards [Bach 2015-2].
Agilitest’s standpoint on this practice
Agilitest is a tool for automating test scripts, which is so far the right testing strategy in iterative SDLCs such as agile frameworks. This is even more true when it comes to DevOps.
However, even if automated, test cases are not testing [Bach 2014]. ET should actually be part of any test automation strategy. While automated scripts mostly ensure regression testing, ET tackles bugs in the latest release to be delivered. This approach can be seen in the well-known test automation pyramid topped with a manual “cloud” dedicated to ET, as opposed to its antipattern named the “ice cream cone” [Akram 2020].