A subcategory of usability testing that focuses on making your web and mobile apps usable by as many people as possible. It makes apps accessible to people with disabilities, such as vision impairment, hearing loss, and other physical or cognitive conditions.
An iterative approach to project management and software development. An agile team delivers work in small but consumable increments.
Agile method (in software testing)
A methodology for turning a vision of a business need into software solutions. It describes software development approaches that employ continual planning, learning, improvement, team collaboration, evolutionary development and early delivery, and that encourage flexible responses to change.
Agile software management
A project management approach that develops software increments in frequent iterations as requirements change.
Refers to a result that differs from the expected one. The expectation can come from a document or from the tester's own knowledge and experience. An anomaly may also point to a usability problem, since the software under test can behave exactly as specified and still be hard to use. Sometimes an anomaly turns out to be a defect (bug).
A list of all desired product features, based on customer requirements, that evolves over time. This list lets you establish a chronology for implementing these features.
Basis Path Testing
A structured or white-box testing technique used to design test cases to examine all possible execution paths at least once.
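As a sketch of the idea, consider a small hypothetical `classify` function: its cyclomatic complexity tells you how many independent paths a basis path test suite must cover.

```python
# Minimal basis path testing sketch; `classify` is an illustrative function.

def classify(x):
    # Two decision points => cyclomatic complexity V(G) = 2 + 1 = 3,
    # so three independent paths must each be executed at least once.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

# One test case per basis path:
assert classify(-5) == "negative"   # path 1: first branch taken
assert classify(0) == "zero"        # path 2: second branch taken
assert classify(7) == "positive"    # path 3: no branch taken
print("all 3 basis paths covered")
```

Each assertion forces execution down a different independent path, so together they achieve full path coverage of this function.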
The process of intentionally adding known defects to the application in order to monitor the detection and removal rate. This method improves the quality of the product, and is also used to determine the reliability of a test set or test suite.
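A common way to exploit seeding is the capture-recapture estimate: assume the detection rate measured on the seeded defects also applies to the genuine ones. The function below is an illustrative sketch of that arithmetic, not a standard library API.

```python
def estimate_remaining_defects(seeded, seeded_found, real_found):
    # If we recovered seeded_found of the seeded defects, assume the same
    # detection rate applies to genuine defects.
    detection_rate = seeded_found / seeded
    estimated_real_total = real_found / detection_rate
    return estimated_real_total - real_found

# 20 defects seeded, 16 recovered, 40 genuine defects found:
# detection rate 0.8 => roughly 50 genuine defects in total, ~10 still latent.
print(estimate_remaining_defects(20, 16, 40))
```

The estimate is only as good as the assumption that seeded defects resemble real ones in detectability.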
A set of tests performed on a new build to verify the testability before it is passed to the test team.
Code coverage test
The code coverage test measures the proportion of the code exercised by the tests.
Code inspection is a type of formal review that helps to avoid the multiplication of defects in a job, and to highlight process improvements.
A systematic review that finds and removes code vulnerabilities such as memory leaks. This type of review is usually done by peers, without management involvement. Technical reviews can be informal or formal and can have several purposes (discussion, decision making, evaluation of alternatives, defect finding, and technical problem solving).
Software engineering practice that involves continual integration of new development code into the existing codebase.
An approach that encourages testing as early as possible. It helps ensure stability and prevent regressions.
A short communication session in which the Scrum team checks status and progress and reports problems.
A systematic process that identifies and corrects bugs or defects in software so that it behaves as expected. Debugging is harder in complex systems, because a change in one system or interface can cause bugs to appear in another.
Definition of Done (DoD)
A set of predetermined criteria that a product must meet when it is finished. See DoD in details.
Definition of Ready (DoR)
A DoR aims to reduce uncertainty, which leads to better estimates at Sprint Planning time. It is a checklist the Team and Product Owner use to make sure a User Story carries enough detail before development starts, so that the team can work as autonomously as possible, just as the Definition of Done (DoD) tells whether a US is effectively done. See DoR in details.
Multidisciplinary approach that combines software development and IT operations. Helps to create a more agile work process.
A technique used to verify that the flow of an application behaves as expected, from start to finish. It identifies system dependencies and ensures that data integrity is maintained.
Tests performed to determine if the application under test can withstand continuous loads. Also known as 'Soak Testing'.
A test technique that validates a system's ability to allocate additional resources and transfer operations to backup systems in the event of a server failure. This determines whether a system is capable of handling additional resources, such as additional CPUs or servers, during critical failures or when the system reaches a performance threshold.
FIRST is an acronym which stands for:
• ‘Fast’ (tests should run quickly to give fast feedback, so that running the test scripts can easily be part of the development activity. Moreover, when integrated into a pipeline, keep in mind that the whole CI/CD process should last less than 10 minutes).
• ‘Independent’ (tests should not depend on each other. You should be able to run each test independently and run the tests in any order. This is probably why some authors name this characteristic ‘Isolated’, or even the ‘Hermetic Test’ or ‘Test Is an Island’ pattern).
• ‘Repeatable’ (tests should be repeatable at any time, in any environment).
• ‘Self-Validating’ (tests should have a boolean output. No interpretation should be required: either they pass or they fail).
• ‘Timely’ (tests need to be written in a timely fashion, the sooner the better, even before the production code). See FIRST testing in details.
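The properties above can be sketched with a tiny test, assuming a hypothetical `apply_discount` function under test:

```python
def apply_discount(price, percent):
    # Hypothetical function under test.
    return price * (1 - percent / 100)

# Fast: pure in-memory computation, milliseconds to run.
# Independent: each test builds its own data; order does not matter.
# Repeatable: no network, files or clocks, so the same result anywhere.
# Self-validating: boolean assertions, pass or fail, no interpretation.
# Timely: written alongside (or before) the production code.

def test_ten_percent_off():
    assert abs(apply_discount(100.0, 10) - 90.0) < 1e-9

def test_zero_percent_is_identity():
    assert abs(apply_discount(50.0, 0) - 50.0) < 1e-9

test_ten_percent_off()
test_zero_percent_is_identity()
print("FIRST-style tests passed")
```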
A test technique used to test the features or functionality of the system or software. These tests should cover all scenarios, including failure paths and edge cases.
Gemba sessions help to find improvement opportunities, because innovation does not take place in the meeting room but where the value is created. It is crucial to do Gemba walks. Sitting at your desk provides only explicit knowledge, while talking with people and looking at things where they happen helps to spot hints: this is tacit knowledge. See Gemba sessions in details.
GUI Software Testing
Hybrid Integration Testing
Hybrid integration testing exploits the advantages of both top-down and bottom-up integration testing. These hybrid integration tests are adopted if the customer wants to work on a working version of the application as soon as possible, in order to produce a basic working system in the early stages of the development cycle. These tests are primarily focused on the middle-tier target layer and are selected based on the system characteristics and code structure.
Test environment comprised of stubs and drivers that need to execute a test. The test harness allows you to run a set of tests. It captures the inputs of the application under test, and provides flexibility and support for debugging. It helps developers measure code coverage, increase productivity (because automation is in place), and improve software quality. It can also manage complex conditions that testers have difficulty simulating. Test harness in details.
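The stub-and-driver structure can be sketched as follows; `PaymentGatewayStub`, `checkout` and `driver` are illustrative names, not a real API.

```python
# Minimal test harness sketch: a stub replaces an unavailable dependency,
# and a driver executes the unit under test.

class PaymentGatewayStub:
    """Stub: stands in for the real payment service."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

def checkout(cart_total, gateway):
    # Unit under test: delegates the charge to its gateway dependency.
    receipt = gateway.charge(cart_total)
    return receipt["status"] == "ok"

def driver():
    # Driver: feeds inputs to the unit under test and collects results.
    results = [checkout(total, PaymentGatewayStub()) for total in (10, 250, 999)]
    return all(results)

print(driver())  # → True
```

Because the stub is deterministic, the harness can exercise conditions (e.g. a gateway failure) that would be hard to reproduce against the real service.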
Independent team that participates in non-developer testing activities to avoid author bias. This often makes them more effective at finding defects and failures.
Checks the operation, performance and reliability of modules once they have been integrated.
A fixed or time-boxed period of time, generally spanning two to four weeks, during which an Agile team develops a deliverable, potentially shippable product. A typical Agile project consists of a series of iterations, along with a planning meeting prior to development and a retrospective meeting at the end of the iteration.
Process of breaking down projects into more manageable components known as iterations. In Agile, iterations are essential methodologies for producing a potentially shippable deliverable or product.
Japanese term for automation that corrects the problems of robotization. Jidoka is used to improve robots: it detects each abnormal situation and adapts the robots' behavior with the appropriate manipulations. See Jidoka on the pipeline in details.
Key Performance Indicator (KPI)
Indicators used to evaluate the efficiency of the software process. They are important parameters: their values are analyzed, and the measurement results are used for process improvement.
Keyword Driven Testing
Functional automation test framework that uses a table format to define keywords or action words for each function we want to perform.
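A minimal sketch of the idea, assuming hypothetical keywords (`open_app`, `enter_text`, `verify_text`): each table row names an action word plus its arguments, and a dispatcher maps keywords to implementations.

```python
# Keyword-driven testing sketch; all names are illustrative assumptions.

state = {}

def open_app(name):
    state["app"] = name

def enter_text(field, value):
    state[field] = value

def verify_text(field, expected):
    assert state.get(field) == expected, f"{field!r} != {expected!r}"

KEYWORDS = {"open_app": open_app, "enter_text": enter_text, "verify_text": verify_text}

# The "table": one keyword plus its arguments per row. In practice this
# would live in a spreadsheet or CSV maintained by non-programmers.
test_table = [
    ("open_app", "login_screen"),
    ("enter_text", "username", "alice"),
    ("verify_text", "username", "alice"),
]

for keyword, *args in test_table:
    KEYWORDS[keyword](*args)
print("all keywords executed")
```

The benefit is that new test cases are new table rows, with no new code.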
A methodology that allows an organization to optimize its resources and efforts to create value for the customer. See Lean approach in details.
System used to simulate load in order to run performance tests. It may involve either concurrency testing or SQL performance testing. The load generator, also called the "host system" or "load generation system", sends queries to the system under test remotely.
A performance testing technique that measures system response under a variety of load conditions. These tests are performed for normal and peak load conditions. Article about load testing: "Load testing - You can pay attention to this", by Jolivé Hodehou.
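The core mechanics can be sketched with a thread pool of virtual users; `handle_request` is a stand-in for a real request, and real load tests use dedicated tooling rather than this toy loop.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for a real request to the system under test.
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time
    return time.perf_counter() - start

# 20 concurrent virtual users issue 100 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(100)))

print(f"avg latency: {sum(latencies) / len(latencies):.4f}s")
print(f"max latency: {max(latencies):.4f}s")
```

Raising `max_workers` simulates heavier concurrency; comparing average and peak latency across load levels is the essence of a load profile.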
Testing process performed manually to find defects without the use of automation tools or scripts. A test plan document is used to guide the testing process to achieve complete test coverage.
Model-based testing is a software testing technique where test cases are derived from a model that describes the functional aspects of the system under test. The model can generate both online and offline tests.
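As a sketch, a finite-state model of a hypothetical door lists the valid transitions, and test sequences are derived automatically from the model instead of being written by hand (offline generation).

```python
# Model-based testing sketch; the door model and system are illustrative.

MODEL = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

class Door:
    """System under test (here it trivially agrees with the model)."""
    def __init__(self):
        self.state = "closed"
    def apply(self, event):
        self.state = MODEL[(self.state, event)]

def derive_tests(model, start, length):
    # Offline generation: enumerate every valid event path of `length` steps.
    paths = [([], start)]
    for _ in range(length):
        paths = [(events + [event], target)
                 for events, state in paths
                 for (source, event), target in model.items()
                 if source == state]
    return paths

for events, expected in derive_tests(MODEL, "closed", 3):
    door = Door()
    for event in events:
        door.apply(event)
    assert door.state == expected
print(f"{len(derive_tests(MODEL, 'closed', 3))} derived sequences passed")
```

Against a real implementation the model and the system are built separately, so divergences between them surface as assertion failures.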
Modularity Driven Testing
An automation testing framework in which small independent modules of automation scripts are developed for the application under test. These individual scripts are built together to form a test that performs a particular test case.
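The module-composition idea can be sketched as follows; the login, search and logout modules are hypothetical examples.

```python
# Modularity-driven testing sketch: small independent script modules are
# combined to form a complete test case.

def login_module(session, user):
    session["user"] = user
    return session

def search_module(session, query):
    session["results"] = [f"{query}-result-{i}" for i in range(3)]
    return session

def logout_module(session):
    session.pop("user", None)
    return session

def test_search_while_logged_in():
    # The test case is assembled from reusable modules.
    session = login_module({}, "alice")
    session = search_module(session, "kiwi")
    assert session["user"] == "alice"
    assert len(session["results"]) == 3
    logout_module(session)
    assert "user" not in session

test_search_while_logged_in()
print("modular test passed")
```

Because each module is self-contained, a change in (say) the login flow is fixed in one place rather than in every test case that logs in.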
A software testing technique in which tests are performed on the system under test in a random fashion. The input data used for testing is also randomly generated and entered into the system.
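A minimal monkey-test loop, assuming a hypothetical `parse_age` function: random strings are thrown at it, and anything other than a controlled `ValueError` counts as a crash.

```python
import random
import string

def parse_age(text):
    # Unit under test: should reject nonsense with ValueError, not crash.
    value = int(text)
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

random.seed(42)  # seeded, so the monkey run is repeatable
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_age(junk)
    except ValueError:
        pass  # expected rejection of invalid input
    # Any other exception type would propagate and fail the run.
print("survived 1000 random inputs")
```

Seeding the generator keeps the run repeatable, so a crashing input can be reproduced and minimized.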
A testing technique that uses the structure of the code to guide the testing process. At a very high level, it becomes a process of rewriting the source code in small increments to help eliminate redundancies. These redundancies can cause failures in the software if not corrected, and can easily slip through the testing phase undetected.
Ensures that the application under test does not fail when given unexpected input. These tests try to break the system in order to verify its behavior and response.
No-code test automation
Process that runs automated tests without requiring code to be written. It reduces the time and cost of manual testing.
Verifies system attributes such as memory leaks, performance or robustness. These non-functional tests are done at all test levels.
An OKR has two parts: the Objective and the Key Results. The Objective is what is to be achieved. It must be meaningful, based on observables, and inspiring. It must be an ambitious goal: reaching 100% means it was too easy and may not sustain continued effort. Objectives must also be shareable across the organization to avoid creating performance silos. The Key Results are evidence of achieving the objective: they show progression toward it and how close we are. They must be SMART and binary (done or not done). See OKR in details.
A testing technique that verifies the operational readiness, prior to release, of a product or application under test as part of the software testing life cycle.
A software testing technique in which two people test the same functionality in the same place at the same time, continuously exchanging ideas. This generates more ideas and leads to better testing of the application under test.
A model for agile testing at scale. It is composed of Testability, which provides the technical and social means to enable testing; the Theory of Constraints, which enables flow at the system level; Panarchy, an interaction model between subsystems [Gunderson 2002] that leads to an evolutionary maturity standard towards an organization without silos; and double-loop learning, which proposes intertwined feedback loops that lead to a "good product" rather than merely a "compliant product", since there is an understanding of what is to be tested. See PanTesting in details.
Structural testing method based on the source code or algorithm, not on the specifications. It can be applied at different levels of granularity.
A software testing technique that observes the system without interaction.
A technique for showing that a product or application under test does what it is supposed to do. It checks that the application raises no error when none is expected, and raises an error when one is expected.
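Both sides of that check can be sketched against a hypothetical `safe_sqrt` wrapper: a valid input must succeed, and an invalid input must fail in the expected way.

```python
import math

def safe_sqrt(x):
    # Hypothetical function under test.
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)

# No error expected for valid input:
assert safe_sqrt(9) == 3.0

# An error expected for invalid input:
try:
    safe_sqrt(-1)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for negative input")
print("positive checks passed")
```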
Audit and reporting procedures used to provide stakeholders with the data needed to make informed decisions. It allows monitoring of processes and products throughout the SDLC.
Functional tests that are performed when there is not enough time to write and execute the tests.
Running a previously failed test against a new software build to see if the problem is resolved. A retest should be performed after each defect correction, under the same conditions.
A type of non-functional testing technique that determines how quickly the system can recover from a hardware failure or malfunction. It forces software failure to verify successful recovery.
Re-run tests that are affected by code changes. These tests should be run as much as possible throughout the software development life cycle. The final regression tests validate the version that has not been modified for a certain period of time. Normal regression tests check if the build has not broken other parts of the application.
A release plan is a tactical document designed to capture and track planned features for an upcoming product release. Most of the time, it is an internal working document, which is intended for product and development teams.
Scenario testing uses realistic stories to test a complex system. Scenarios must be credible and easy to evaluate.
A lightweight, iterative and incremental framework for project management, with an initial emphasis on software development, used for developing, delivering, and sustaining complex products.
Checks for smoke from hardware components once the hardware is powered up. In the context of software testing, it tests the basic functionality of a build. If it fails, the build is declared unstable and is not tested further until it passes the smoke test.
Refers to a top-down visualization, or roadmap, of the product backlog. The story map starts with a goal or specific functionality, which is then broken down into user stories.
Ensures that the required level of quality is achieved by suggesting improvements to the product development process. It aims to develop a culture within the team. It must be independent of project management to ensure independence from cost and schedule compliance.
Visual representation of user stories broken down into tasks or work units.
Implementation of a project's test strategy, which defines how testing will be performed. There are two different techniques. Proactive technique, where the test design process is initiated as early as possible, in order to find and fix defects before the build is created. And the reactive technique, in which the tests are not started until the design and coding are completed.
Software test automation uses specialized tools to monitor test execution and compare actual results to expected results. Testing tools help perform regression testing, but also automate data configuration generation, product installation, GUI interaction, defect logging, etc.
A document that contains a set of test data, expected results and postconditions. It is developed for a particular test scenario to verify compliance with a specific requirement. It is the starting point for test execution.
The practice of designing and building tests for functional, working code, and then building code that will pass those tests. See TDD in details.
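The red-green rhythm can be sketched with a hypothetical `slugify` helper: the tests are written first, then just enough code is written to make them pass.

```python
# TDD sketch: tests first (red), then the implementation (green).

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_surrounding_whitespace_is_dropped():
    assert slugify("  Agile Testing ") == "agile-testing"

# Implementation written after the tests, just enough to satisfy them:
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_spaces_become_hyphens()
test_surrounding_whitespace_is_dropped()
print("TDD cycle complete: red, then green")
```

A refactoring step would follow, with the tests acting as a safety net.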
A short program fragment written for testing and verifying a piece of code once it is completed. It is the first level of testing a software development product.
A testing technique used to identify the presence of defects in a product or software under test using the graphical user interface (GUI).
A non-technical description of a software system requirement written from the customer’s or end-user’s point of view.
Software development life cycle methodology. It describes the activities to be performed and the results to be produced during the product life cycle, and is known as the ‘verification and validation model’.
The process of evaluating software during the development process or at the end of the development process. It determines whether it meets the specified business requirements. Validation testing ensures that the product actually meets the customer's needs. It can be defined as the demonstration that the product fulfills its intended purpose when deployed in an appropriate environment.
Allows testers to create virtual users to increase the load on the application under test. The virtual user generator captures requests to create a virtual user and can read the user's operations.
Software testing technique that evaluates the amount of risk involved in the system, in order to reduce the probability of failure events.
A software testing technique for applications that are hosted on the web, in which the application's interfaces and other functionality are tested.
It consists of limiting the number of jobs in progress in order to ensure a steady flow of work delivered to the customer. In an agile team, the absence of WIP monitoring often results in an accumulation of in-progress tasks in a Kanban column. See WIP in details.
Routing a record through all possible paths to ensure that each workflow process accurately reflects the business process. This type of testing is valid for workflow-based applications.
An X-Team decenters itself and immerses itself in a more systemic approach [SAFe 2021-2], concerning itself with the environment in which it operates. This environment includes many aspects, like the other teams in the organisation alongside which the team evolves, the product to be produced,… See X-Teams in details.