Software Testing Dictionary

In this article, I have collected the many buzzwords and key terms used in software testing. I hope this software testing glossary serves as a handy reference for software testing professionals.

acceptance criteria The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]
acceptance testing Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]
accessibility testing Testing to determine the ease by which users with disabilities can use a component or system.
automated testware Testware used in automated testing, such as tool scripts.
availability The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
balanced scorecard A strategic tool for measuring whether the operational activities of a company are aligned with its objectives in terms of business vision and strategy. See also corporate dashboard, scorecard.
beta testing Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
black box test design technique Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
boundary value An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
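For illustration, here is a minimal sketch in Python, assuming a hypothetical accepts_age validator whose documented valid range is 18 to 65; the boundary values are each edge of the range plus the nearest value on either side:

    # Hypothetical validator under test: accepts ages 18-65 inclusive.
    def accepts_age(age: int) -> bool:
        return 18 <= age <= 65

    # Each edge of the range plus the smallest increment on either side.
    for age, expected in [(17, False), (18, True), (19, True),
                          (64, True), (65, True), (66, False)]:
        assert accepts_age(age) == expected, f"unexpected result for age={age}"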
bug A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A bug, if encountered during execution, may cause a failure of the component or system.
call graph An abstract representation of calling relationships between subroutines in a program.
cause-effect diagram A graphical representation used to organize and display the interrelationships of various possible root causes of a problem. Possible causes of a real or potential defect or failure are organized in categories and subcategories in a horizontal tree-structure, with the (potential) defect or failure as the root node.
checklist-based testing An experience-based test design technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.
classification tree method A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains.
code coverage An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
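Real projects use a dedicated coverage tool, but the underlying idea can be sketched in plain Python with sys.settrace, recording which lines of a function a test actually executes:

    import sys

    def covered_lines(func, *args):
        """Run func and record which of its lines execute (statement coverage)."""
        executed = set()
        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is func.__code__:
                executed.add(frame.f_lineno)
            return tracer
        sys.settrace(tracer)
        try:
            func(*args)
        finally:
            sys.settrace(None)
        return executed

    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"

    # A single test with n=1 never reaches the 'return "negative"' line,
    # so statement coverage of classify() is incomplete.
    print(covered_lines(classify, 1))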
complexity The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.
condition A logical expression that can be evaluated as True or False, e.g. A>B. See also condition testing.
coverage The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
data-driven testing A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword-driven testing.
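Assuming pytest, a lightweight version of this keeps the table of inputs and expected results in a parametrize decorator, so a single test function acts as the control script for every row:

    import pytest

    # The data table: each row is (input_a, input_b, expected_sum).
    CASES = [
        (1, 2, 3),
        (0, 0, 0),
        (-1, 1, 0),
    ]

    @pytest.mark.parametrize("a, b, expected", CASES)
    def test_add(a, b, expected):
        assert a + b == expected  # one control script, many data rows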
debugging The process of finding, analyzing and removing the causes of failures in software.
decision table testing A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal04] See also decision table.
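As a sketch, each row of the decision table can be executed directly; the discount policy below is hypothetical:

    # Hypothetical policy under test: members get 10% off, and orders of
    # 100 or more get a further 5% off.
    def discount(is_member: bool, total: float) -> float:
        pct = 0.10 if is_member else 0.0
        if total >= 100:
            pct += 0.05
        return pct

    # Decision table: one row per combination of conditions (causes).
    TABLE = [  # (is_member, total, expected_discount)
        (False,  50, 0.00),
        (False, 100, 0.05),
        (True,   50, 0.10),
        (True,  100, 0.15),
    ]
    for is_member, total, expected in TABLE:
        assert abs(discount(is_member, total) - expected) < 1e-9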
defect-based test design technique A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy.
defect management The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]
defect taxonomy A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
dynamic testing Testing that involves the execution of the software of a component or system.
entry criteria The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]
equivalence partitioning A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
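A sketch, assuming a hypothetical shipping_cost function whose specification defines three weight partitions; one representative per partition suffices in principle:

    # Hypothetical function: parcels up to 1 kg cost 5, up to 10 kg cost 9,
    # heavier parcels are rejected.
    def shipping_cost(kg: float) -> int:
        if kg <= 0:
            raise ValueError("weight must be positive")
        if kg <= 1:
            return 5
        if kg <= 10:
            return 9
        raise ValueError("too heavy")

    # One representative from each partition, including an invalid one.
    assert shipping_cost(0.5) == 5        # valid partition (0, 1]
    assert shipping_cost(5) == 9          # valid partition (1, 10]
    try:
        shipping_cost(25)                 # invalid partition (10, inf)
    except ValueError:
        pass
    else:
        raise AssertionError("overweight parcel was accepted")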
exit criteria The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
expected result The behavior predicted by the specification, or another source, of the component or system under specified conditions.
experience-based test design technique Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
exploratory testing An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
functional requirement A requirement that specifies a function that a component or system must perform. [IEEE 610]
functional testing Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.
keyword-driven testing A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data-driven testing.
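A minimal sketch of the idea: the data file supplies rows of keywords plus arguments, and small supporting functions interpret each keyword (the keywords and the target here are hypothetical):

    # Supporting scripts: one small function per keyword.
    def do_open(state, name):
        state["doc"] = name

    def do_type(state, text):
        state["text"] = state.get("text", "") + text

    def do_check(state, expected):
        assert state["text"] == expected, state["text"]

    KEYWORDS = {"open": do_open, "type": do_type, "check": do_check}

    # The "data file": one row per step, keyword plus argument.
    steps = [("open", "letter.txt"), ("type", "hello"), ("check", "hello")]

    # The control script interprets each row via the keyword table.
    state = {}
    for keyword, arg in steps:
        KEYWORDS[keyword](state, arg)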
load testing A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.
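A rough sketch of stepping up the load, assuming a hypothetical handle_request operation; real load tests would target the deployed system with a dedicated tool:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(i):          # hypothetical operation under test
        time.sleep(0.01)
        return i

    # Step the number of parallel users up and observe throughput.
    for users in (1, 5, 10, 20):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=users) as pool:
            list(pool.map(handle_request, range(100)))
        elapsed = time.perf_counter() - start
        print(f"{users:>2} parallel users: {100 / elapsed:.0f} requests/s")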
negative testing Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer]
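For instance, assuming pytest, a negative test deliberately feeds invalid input and asserts that the component rejects it:

    import pytest

    def parse_port(value: str) -> int:   # hypothetical function under test
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError(f"invalid port: {port}")
        return port

    def test_rejects_out_of_range_port():
        with pytest.raises(ValueError):
            parse_port("70000")          # invalid on purpose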
non-functional testing Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.
pairwise testing A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing.
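A small sketch that checks whether a hand-picked suite covers every value pair of every parameter pair; generating such suites is usually left to a dedicated tool:

    from itertools import combinations, product

    # Three parameters, each with its discrete values.
    PARAMS = {"os": ["linux", "mac"], "browser": ["ff", "chrome"], "lang": ["en", "de"]}

    def covers_all_pairs(tests):
        """True if every value pair of every parameter pair appears in some test."""
        for p1, p2 in combinations(PARAMS, 2):
            needed = set(product(PARAMS[p1], PARAMS[p2]))
            seen = {(t[p1], t[p2]) for t in tests}
            if needed - seen:
                return False
        return True

    suite = [
        {"os": "linux", "browser": "ff",     "lang": "en"},
        {"os": "linux", "browser": "chrome", "lang": "de"},
        {"os": "mac",   "browser": "ff",     "lang": "de"},
        {"os": "mac",   "browser": "chrome", "lang": "en"},
    ]
    assert covers_all_pairs(suite)   # 4 tests instead of 2*2*2 = 8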
pass A test is deemed to pass if its actual result matches its expected result.
priority The level of (business) importance assigned to an item, e.g. a defect.
product risk A risk directly related to the test object. See also risk.
project risk A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk.
regression testing Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
re-testing Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
risk-based testing An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.
risk impact The damage that will be caused if the risk becomes an actual outcome or event.
risk likelihood The estimated probability that a risk will become an actual outcome or event.
root cause A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]
severity The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]
smoke test A subset of all defined/planned test cases that cover the main functionality of a component or system, run to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.
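One common convention with pytest (assuming a custom smoke marker registered in pytest.ini) tags the crucial cases so every daily build can run just those:

    import pytest

    @pytest.mark.smoke                 # run the smoke subset: pytest -m smoke
    def test_application_starts():
        ...

    @pytest.mark.smoke
    def test_user_can_log_in():
        ...

    def test_obscure_edge_case():      # full suite only, not the smoke run
        ...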
static testing Testing of a software development artifact, e.g., requirements, design or code, without execution of these artifacts, e.g., reviews or static analysis.
stress testing A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing, load testing.
test case A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]
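The ingredients of this definition map naturally onto a small record type; a sketch:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        objective: str                 # why the test exists
        preconditions: list[str]       # execution preconditions
        inputs: dict                   # input values
        expected: object               # expected result
        postconditions: list[str] = field(default_factory=list)

    tc = TestCase(
        objective="verify login with valid credentials (REQ-12)",
        preconditions=["user 'alice' exists", "service is running"],
        inputs={"user": "alice", "password": "s3cret"},
        expected="dashboard page",
    )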
test data Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
test execution The process of running a test on the component or system under test, producing actual result(s).
user story A high-level user or business requirement commonly used in agile software development, typically consisting of one or more sentences in everyday or business language that capture the functionality a user needs and any non-functional criteria, and that also include acceptance criteria. See also agile software development, requirement.
user story testing A black box test design technique in which test cases are designed based on user stories to verify their correct implementation. See also user story.
user test A test whereby real-life users are involved to evaluate the usability of a component or system.
