Software Testing Glossary

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [As per IEEE 610]



Accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system.



Ad Hoc Testing: Testing carried out using no recognized test case design technique.



Agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.



Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.



Assertion Testing: A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
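    
    A minimal sketch of the idea in Python, assuming a hypothetical compute_average function; the inserted assertions state relationships between program variables and their truth is checked as the program executes.
    
        def compute_average(values):
            # Assertion inserted for dynamic analysis: the input must be non-empty.
            assert len(values) > 0, "precondition violated: empty input"
            total = sum(values)
            average = total / len(values)
            # Assertion about the relationship between variables: the average of a
            # list lies between its minimum and maximum element.
            assert min(values) <= average <= max(values), "postcondition violated"
            return average
    
        # The truth of the assertions is determined as the program executes.
        print(compute_average([2, 4, 6]))  # passes both assertions, prints 4.0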

--------------------------------------------------------------------------------
Back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

Background testing: The execution of normal functional testing while the SUT is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned.

Benchmarks: Programs that provide performance comparison for software, hardware, and systems.

Benchmarking: A specific type of performance test with the purpose of determining performance baselines for comparison.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

Big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Breadth test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail.

Bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

Boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.
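    
    A minimal Python sketch of the technique, assuming a hypothetical accepts_age function whose valid range is 18 to 65 inclusive; test values sit on each edge of the partition and at the smallest incremental distance on either side.
    
        # Hypothetical function under test: valid ages are 18..65 inclusive.
        def accepts_age(age):
            return 18 <= age <= 65
    
        # Boundary values for the partition [18, 65] and their expected outcomes.
        test_values = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
    
        for value, expected in test_values.items():
            assert accepts_age(value) == expected, f"boundary value {value} failed"
        print("all boundary value tests passed")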

Branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

Branch testing: A white box test design technique in which test cases are designed to execute branches.

Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.
--------------------------------------------------------------------------------
Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers practices for planning, engineering and managing software development and maintenance. [CMM]

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]

CAST: Acronym for Computer Aided Software Testing. See also test automation.

Cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

Clean test: A test whose primary purpose is validation; that is, tests designed to demonstrate the software's correct working. (Syn: positive test)

Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection

Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code review.

Coexistence Testing: Testing how well an application coexists with other applications on the same platform. Coexistence alone isn't enough: it also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It is probably an exponentially hard problem rather than a square-law problem.

Compatibility bug: A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code.

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Compliance testing: The process of testing to determine the compliance of the component or system.

Concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]

Condition Coverage. A test coverage criterion requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.

Configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]


Conformance directed testing. Testing that seeks to establish conformance to requirements or specification.

CRUD Testing. Build a CRUD matrix and test all object creations, reads, updates, and deletions.

Cyclomatic Complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L – N + 2P, where:
- L = the number of edges/links in the graph
- N = the number of nodes in the graph
- P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
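
As a worked example (assuming a control-flow graph for a single if/else with distinct entry and exit nodes, so 5 nodes, 5 edges, and one connected part):

    v(G) = L – N + 2P = 5 – 5 + 2×1 = 2

giving two independent paths, one through each branch.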

--------------------------------------------------------------------------------
Data-Driven testing: An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script.
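    
    A minimal sketch using pytest (assumed available) and a hypothetical login function; the data that drives the script would normally live in an external file such as a CSV or spreadsheet, represented here by a small in-line table for brevity.
    
        import pytest
    
        # In practice this table is read from an external source (CSV, spreadsheet,
        # database); the test script itself stays unchanged when the data changes.
        LOGIN_CASES = [
            ("alice", "correct-password", True),
            ("alice", "wrong-password", False),
            ("", "any-password", False),
        ]
    
        def login(user, password):
            # Hypothetical system under test, included so the sketch is runnable.
            return user == "alice" and password == "correct-password"
    
        @pytest.mark.parametrize("user,password,expected", LOGIN_CASES)
        def test_login(user, password, expected):
            assert login(user, password) == expected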

Data flow testing: Testing in which test cases are designed based on variable usage within the code.


Database testing. Check the integrity of database field values.

Debugging: The process of finding, analyzing and removing the causes of failures in software.

Decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

Decision table testing: A black box test design techniques in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.

Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-ofcode, number of classes or function points).

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form.

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness.

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding.

Defect Masking. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed.

Definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational uses (e.g. multiplication) and predicate uses that direct the execution of a path.


Deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.



Depth test. A test case that exercises some part of a system to a significant level of detail.



Decision Coverage. A test coverage criterion requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.



Dirty testing: Negative testing.



Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.



Dynamic testing: Testing that involves the execution of the software of a component or system.




--------------------------------------------------------------------------------



End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.



Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.



Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this is considered a negative test assertion or condition.
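    
    A minimal Python sketch of the integer example above, assuming a hypothetical parse_quantity function; one representative value is taken from the valid (integer) partition and one from an invalid (non-integer) partition.
    
        def parse_quantity(raw):
            # Hypothetical function under test: expects a string holding an integer.
            return int(raw)
    
        # One representative value per equivalence class, rather than many values
        # from the same class.
        assert parse_quantity("42") == 42          # valid partition: integer input
    
        try:
            parse_quantity("forty-two")            # invalid partition: non-integer
        except ValueError:
            pass                                   # negative test assertion holds
        else:
            raise AssertionError("non-integer input was unexpectedly accepted")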



Error: A human action that produces an incorrect result. [After IEEE 610]



Error Guessing: Another common approach to black-box validation (black-box testing is when everything other than the source code may be used for testing; this is the most common approach to testing). Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by the engineer.



Error seeding. The purposeful introduction of faults into a program to test the effectiveness of a test suite or other quality assurance program.



Exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.



Exception Testing. Identify error messages and exception handling processes and the conditions that trigger them.
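    
    A minimal sketch with pytest (assumed available), checking both the condition that triggers an exception and the associated error message for a hypothetical withdraw function.
    
        import pytest
    
        def withdraw(balance, amount):
            # Hypothetical function under test.
            if amount > balance:
                raise ValueError("insufficient funds")
            return balance - amount
    
        def test_overdraft_raises_with_expected_message():
            # Identify the condition that triggers the exception and the message
            # produced by the exception-handling process.
            with pytest.raises(ValueError, match="insufficient funds"):
                withdraw(balance=100, amount=150)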



Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.



Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing.



Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test.




--------------------------------------------------------------------------------



Failure: Actual deviation of the component or system from its expected delivery, service or result.



Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis, identifying possible modes of failure and attempting to prevent their occurrence.



Finite state machine: A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]



Follow-up testing: Varying a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances.



Free Form Testing. Ad hoc or brainstorming using intuition to define test cases.



Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach.



Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.



Functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.




--------------------------------------------------------------------------------



Glass box testing: See white box testing.



Gray box testing: The testing approach which is a mixture of Black box and White box testing. Gray box testing examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:
1) A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
2) The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.




--------------------------------------------------------------------------------



High-level tests. These tests involve testing whole, complete products.




--------------------------------------------------------------------------------



Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.



Incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.



Incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.



Incident: Any event occurring during testing that requires investigation.



Inspection: A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]



Interface Tests: Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control; therefore, simulation can provide the characteristics or behaviors for a specific function.



Internationalization testing (I18N): Testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth.



Interoperability Testing: Testing which measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.



Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.



Interface testing: An integration test type that is concerned with testing the interfaces between components or systems.



Interoperability: The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality.



Interoperability testing: The process of testing to determine the interoperability of a software product. See also functionality testing.




--------------------------------------------------------------------------------



Latent bug: A bug that has been dormant (unobserved) in two or more releases.



LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.



Load testing: Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.



Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.



Load isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.




--------------------------------------------------------------------------------



Maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.



Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations.



Monkey Testing. Inputs are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs; that is, a given test requires an input vector with, say, five components, and in low-IQ testing these would be generated independently. In high-IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.
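    
    A minimal "low IQ" monkey test sketch in Python: each component of the input vector is drawn independently from a probability distribution meant to mimic expected usage. The process_order function is a hypothetical system under test.
    
        import random
    
        def process_order(quantity, priority):
            # Hypothetical system under test; should never raise for valid input.
            return quantity * (2 if priority == "high" else 1)
    
        random.seed(0)
        for _ in range(1000):
            # Low-IQ monkey: each input component is generated independently from
            # a distribution reflecting expected usage (small quantities dominate).
            quantity = random.choices([1, 2, 5, 50], weights=[60, 25, 10, 5])[0]
            priority = random.choice(["low", "normal", "high"])
            process_order(quantity, priority)  # any unhandled exception is a failure
        print("monkey run completed without crashes")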



Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.



Memory leak: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.



Metrics: A measurement scale and the method used for measurement.



Moderator: The leader and main person responsible for an inspection or other review process.



Mutation testing/Mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
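    
    A minimal sketch of the idea: a slight variant (mutant) of a hypothetical is_adult function is created, and the test suite is judged by whether it can distinguish the mutant from the original.
    
        def is_adult(age):
            return age >= 18          # original program
    
        def is_adult_mutant(age):
            return age > 18           # mutant: ">=" replaced by ">"
    
        def run_suite(fn):
            # A thorough suite includes the boundary value 18 and therefore
            # "kills" the mutant; a suite without it would let the mutant survive.
            return fn(17) is False and fn(18) is True and fn(30) is True
    
        assert run_suite(is_adult) is True          # suite passes on the original
        assert run_suite(is_adult_mutant) is False  # suite discriminates the mutant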



Multiple Condition Coverage. A test coverage criterion which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. [G. Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.




--------------------------------------------------------------------------------



Negative testing. A testing method whose primary purpose is falsification; that is, tests designed to break the software.




--------------------------------------------------------------------------------



Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.



Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: orthogonal arrays were first introduced by Plackett and Burman in 1946 and were implemented by G. Taguchi in 1987.



Oracle (Test Oracle). A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.




--------------------------------------------------------------------------------



Pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.



Pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.



Penetration testing: The process of attacking a host from outside to ascertain remote security vulnerabilities.



Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements.



Preventive Testing: Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements.



Portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]




--------------------------------------------------------------------------------



Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]



Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]



Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.



Our definition of Quality: Achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.




--------------------------------------------------------------------------------



Race condition defect: Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
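    
    A minimal Python sketch of a data race: two threads perform read-modify-write updates on a shared counter with no mechanism preventing simultaneous access, so updates can be lost. Whether lost updates actually occur in a given run depends on the interpreter and thread scheduling.
    
        import threading
    
        counter = 0  # shared variable
    
        def increment_many(times):
            global counter
            for _ in range(times):
                # Two accesses to a shared variable, at least one a write, with no
                # lock: the read-add-store sequence of the two threads can interleave.
                counter += 1
    
        threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    
        # Expected 200000; a smaller value indicates updates lost to the race.
        print("counter =", counter)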



Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.



Reengineering: The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).



Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution.



Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.



Release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]



Reliability testing. Verifies the probability of failure-free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t, the probability that the object will not fail within time t.
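
In symbols, if T denotes the (random) time to failure, then R(t) = P(T > t); under the common simplifying assumption of a constant failure rate λ, this becomes R(t) = e^(–λt).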

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly.



Range Testing: For each input, identifies the range of values over which the system behavior should be the same.



Resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]



Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.



Risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).



Risk management: An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.



Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails.



Root cause analysis: Analyzing the underlying factors that caused a non-conformance and possibly should be permanently eliminated through process improvement.




--------------------------------------------------------------------------------



Sanity Testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.



Scalability testing: A subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time.



Scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.



Security testing: Testing to determine the security of the software product.



Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test.



Severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]



Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]



Skim Testing: A testing technique used to determine the fitness of a new build or release.



Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.



Specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]



Specification-based test. A test whose inputs are derived from a specification.



Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; this should be considered a type of load test.



State transition: A transition between two states of a component or system.



State transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.



STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.



State-based testing: Testing with test cases developed by modeling the system under test as a state machine.



State Transition Testing. Technique in which the states of a system are first identified and then test cases are written to exercise the triggers that cause a transition from one state to another.
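    
    A minimal sketch: the states of a hypothetical order object are identified first, then test cases exercise the triggers that cause transitions between states, including an invalid transition.
    
        # States and the triggers (events) that move between them.
        TRANSITIONS = {
            ("new", "pay"): "paid",
            ("paid", "ship"): "shipped",
            ("shipped", "deliver"): "delivered",
        }
    
        def apply_event(state, event):
            # Hypothetical system under test: returns the next state, or raises on
            # an invalid trigger for the current state.
            try:
                return TRANSITIONS[(state, event)]
            except KeyError:
                raise ValueError(f"invalid transition: {event} from {state}")
    
        # Valid transition test cases.
        assert apply_event("new", "pay") == "paid"
        assert apply_event("paid", "ship") == "shipped"
    
        # Invalid transition test case: shipping an unpaid order must be rejected.
        try:
            apply_event("new", "ship")
        except ValueError:
            pass
        else:
            raise AssertionError("invalid transition was not rejected")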



Static testing. Analysis of source code, carried out without executing it, to expose potential defects.



Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.



Stealth bug. A bug that removes information useful for its diagnosis and correction.



Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them.



Stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610]



Structural Testing. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.



Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
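    
    A minimal sketch: a stub replaces a called component (here, a hypothetical payment gateway) so that the component that depends on it can be tested in isolation.
    
        class PaymentGatewayStub:
            # Skeletal, special-purpose implementation that replaces the real,
            # called component and simply returns canned output.
            def charge(self, amount):
                return {"status": "approved", "amount": amount}
    
        def place_order(amount, gateway):
            # Component under test: depends on (calls) the payment gateway.
            result = gateway.charge(amount)
            return result["status"] == "approved"
    
        # The component is exercised against the stub, not the real gateway.
        assert place_order(25, PaymentGatewayStub()) is True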



System integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).



System testing: The process of testing an integrated system to verify that it meets specified requirements.




--------------------------------------------------------------------------------



Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]



Test conditions: The set of circumstances that a test invokes.



Test Coverage: The degree to which a given test or set of tests addresses all specified test cases for a given system or component.



Test Criteria: Decision rules used to determine whether a software item or software feature passes or fails a test.



Test data: The actual (set of) values used in the test or that are necessary to execute the test.



Test Documentation: (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, and test report.



Test Driver: A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.



Test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]



Test Harness: A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.



Test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.



Test Item: A software item which is the object of testing.



Test Log: A chronological record of all relevant details about the execution of a test.



Test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.



Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code.
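    
    A minimal sketch in which the oracle is an existing, trusted implementation (Python's built-in sorted) used to predict expected results for a hypothetical my_sort under test; the oracle is not the code under test itself.
    
        import random
    
        def my_sort(items):
            # Hypothetical implementation under test (insertion sort).
            out = []
            for item in items:
                i = 0
                while i < len(out) and out[i] < item:
                    i += 1
                out.insert(i, item)
            return out
    
        random.seed(1)
        for _ in range(100):
            data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
            # The oracle (built-in sorted) supplies the expected result to compare
            # against the actual result of the software under test.
            assert my_sort(data) == sorted(data)
        print("all oracle comparisons passed")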



Test Plan: A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organizes the elements of the test life cycle, including resource requirements, project schedule, and test requirements.



Test Rig: A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered.



Test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a program (one or more projects).



Test Stub: A dummy software component or object used (during development and testing) to simulate the behavior of a real component. The stub typically provides test output.



Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.



Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.



Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]



(TPI) Test Process Improvement: A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.



Thread Testing: A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a user or an operator might interact with the system during its normal use.



Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.



Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.



Traceability matrix: A two-dimensional matrix that maps the requirements from the requirement specifications to the test cases developed and executed, together with the status of each execution. This helps in determining test coverage for a release.


--------------------------------------------------------------------------------



Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.



Use Case: A use case is a description of a system’s behavior as it responds to a request that originates from outside of that system. The use case technique is used in software engineering to capture the functional requirements of a system. Use cases describe the interaction between a primary Actor (the initiator of the interaction) and the system itself, represented as a sequence of simple steps. Actors are something or someone which exist outside the system under study, and that take part in a sequence of activities in a dialogue with the system to achieve some goal. They may be end users, other systems, or hardware devices. Each use case is a complete series of events, described from the point of view of the Actor.



Use case testing: A black box test design technique in which test cases are designed to execute user scenarios.




--------------------------------------------------------------------------------



V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.



Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]



Verification: Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]



Vertical traceability: The tracing of requirements through the layers of development documentation to components.



Volume testing: Testing where the system is subjected to large volumes of data.


--------------------------------------------------------------------------------



Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]



White box testing: Testing based on an analysis of the internal structure of the component or system.



Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.