QA and test glossary

There is often confusion and misunderstanding about testing terminology.
That is why a glossary can be useful.

Testing [Myers] The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product.
[ANSI]
(1) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
(2) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software items.

Glossary of Testing Types

Testing Type Definition
Development [IEEE] Testing conducted during the development of a system or component, usually in the development environment by the developer. Contrast with acceptance testing, operational testing.
  • Unit
  • Component
[IEEE] Testing conducted to verify the implementation of the design for one software element (e.g., a unit or module) or a collection of software elements.
[NIST] Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements.
Smoke Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing, in which the device passed the test if it didn't catch fire the first time it was turned on.
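To make this concrete, here is a minimal sketch of what a smoke test might look like in JUnit; InventoryApp and its methods are hypothetical stand-ins, not part of any cited definition.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // A minimal smoke test: it only checks that the most crucial functions run
    // at all, leaving the finer details to deeper test levels.
    public class SmokeTest {

        // Stand-in for the real application, so the sketch is self-contained.
        static class InventoryApp {
            boolean isHealthy() { return true; }
            String findItem(String id) { return "item:" + id; }
        }

        @Test
        public void applicationStartsUp() {
            assertTrue("application should be healthy after startup",
                    new InventoryApp().isHealthy());
        }

        @Test
        public void crucialLookupWorks() {
            // No fine-grained checks: a smoke test only asserts that the happy path runs.
            assertNotNull("looking up a known item should return something",
                    new InventoryApp().findItem("known-item-id"));
        }
    }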
Alpha [PressMan] Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment with the developer observing and recording errors and usage problems.
Beta [PressMan] Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer.
System [IEEE] The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.
[C&A] The testing of a complete system prior to delivery. The purpose of system testing is to identify defects that will only surface when a complete system is assembled. That is, defects that cannot be attributed to individual components or the interaction between two components. System testing includes testing of performance, security, configuration sensitivity, startup and recovery from failure modes.
  • Black-Box
  • Functional
  • Input/Output Driven
[BCS SIGIST]
(1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions.
(2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Contrast with white-box testing.
[ANSI] Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to the selected inputs and execution conditions.
  • White-Box
  • Glass-Box
  • Structural
  • Logic-driven
[IEEE]
(1) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing.
(2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function.
[ANSI] Testing that takes into account the internal mechanism of a system or component. Types include branch testing, path testing, statement testing.
Contrast with black-box testing.
Integration An orderly progression of testing in which software elements, hardware elements, or both are combined and tested to evaluate their interactions, until the entire system has been integrated.
Regression [NIST] Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.
[BCS SIGIST] Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Performance Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Stress/Load [IEEE] Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
[Beizer] Testing in which a system is subjected to unrealistically harsh inputs or load with inadequate resources with the intention of breaking it.
[ANSI] Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. See boundary value testing.
[Pressman] Stress tests are designed to confront programs with abnormal situations. ... Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. ... Essentially, the tester attempts to break the program. (pp. 652-653)
  • Negative
  • Dirty
[Beizer] Testing which encompasses upper and lower limits, and circumstances which pose the greatest chance of finding errors. See boundary value testing, stress testing. Tests aimed at showing that software does not work; these are often the most effective tests.
Boundary value [GCSSDT] A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above, the defined limits of an output domain. See stress testing.
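As an illustration, here is a sketch of boundary-value tests for a hypothetical validator that accepts ages from 1 to 100; the tests exercise values at, just below, and just above each limit.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Boundary-value tests for a hypothetical age validator (valid range 1..100).
    public class AgeValidatorBoundaryTest {

        private boolean isValidAge(int age) { // stand-in for the code under test
            return age >= 1 && age <= 100;
        }

        @Test
        public void lowerBoundary() {
            assertFalse(isValidAge(0));   // just below the lower limit
            assertTrue(isValidAge(1));    // at the lower limit
            assertTrue(isValidAge(2));    // just above the lower limit
        }

        @Test
        public void upperBoundary() {
            assertTrue(isValidAge(99));   // just below the upper limit
            assertTrue(isValidAge(100));  // at the upper limit
            assertFalse(isValidAge(101)); // just above the upper limit
        }
    }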
Branch [NBS] Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch (outcome) be executed at least once. Contrast with path testing, statement testing.
Path Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes. One path from each class is then tested. Contrast with branch testing, statement testing.
Statement Testing to satisfy the criterion that each statement in a program be executed at least once during program testing. Syn: statement coverage. Contrast with branch testing, path testing.
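The difference between branch, path, and statement coverage is easiest to see on a small function. The following sketch (illustrative names only) notes which inputs satisfy each criterion.

    // Sketch: contrasting statement, branch, and path coverage on a tiny function.
    public class CoverageExample {

        static int classify(int x) {
            int result = 0;
            if (x > 0) {          // decision A
                result += 1;
            }
            if (x % 2 == 0) {     // decision B
                result += 2;
            }
            return result;
        }
        // Statement coverage: the single input x = 2 executes every statement
        // (both 'if' bodies are entered).
        // Branch coverage: x = 2 and x = -1 together take both the true and the
        // false branch of each decision.
        // Path coverage: all four A/B combinations are needed, e.g. x = 2
        // (A true, B true), x = 3 (A true, B false), x = -2 (A false, B true),
        // and x = -1 (A false, B false).
    }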
Smart [Beizer] Tests that, based on theory or experience, are expected to have a high probability of detecting specified classes of bugs; tests aimed at specific bug types.
  • Parallel
  • Oracle
Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered the standard of comparison.
Operational [ANSI] Testing conducted to evaluate a system or component in its operational environment.
Penetration [NSA] The portion of security testing in which the evaluators attempt to circumvent the security features of a system. The evaluators may be assumed to use all system design and implementation documentation, which may include listings of system source code, manuals, and circuit diagrams. The evaluators work under the same constraints applied to ordinary users.
[IETF] Penetration testing may be performed under various constraints and conditions. However, for a TCSEC evaluation, testers are assumed to have all system design and implementation documentation, including source code, manuals, and circuit diagrams, and to work under no greater constraints than those applied to ordinary users.
Acceptance Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. Contrast with development testing, operational testing.

Generic Glossary

Term Definition
Test Components
Test An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.
[XQual] Unitary element considered for testing. A test includes one or several test cases.
[ANSI]
(1) An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some aspect of the system or component.
(2) To conduct an activity as in (1).
(3) A set of one or more test cases.
(4) A set of one or more test procedures.
(5) A set of one or more test cases and procedures.
[Beizer] Subtests are grouped into tests, which must be run as a set, typically because the outcome of one subtest is the input or the initial condition for the next subtest in the test. Tests can be run independently of one another but are typically defined over the same database. (p.447)
Test Case Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
[XQual] A test case is a series of steps whose successful execution leads to considering one "item" as working.
[ANSI]
(1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement [do178b?].
(2) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
[C&A] A document describing a single test instance in terms of input data, test procedure, test execution environment and expected outcome. Test cases also reference test objectives such as verifying compliance with a particular requirement or execution of a particular program path.
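As a sketch only, here is one possible way to capture a test case as data, following the elements named in the definitions above (identifier, objective, preconditions, inputs, expected result); all names and values are hypothetical.

    // Sketch (Java 16+ record): a test case captured as data.
    public record TestCase(
            String id,             // e.g. "TC-042"
            String objective,      // e.g. verify compliance with a requirement
            String preconditions,  // execution conditions that must hold beforehand
            String inputs,         // the test inputs
            String expectedResult  // the predicted outcome
    ) {
        public static void main(String[] args) {
            TestCase tc = new TestCase(
                    "TC-042",
                    "Verify compliance with requirement REQ-17 (login lockout)",
                    "User account exists and is unlocked",
                    "Three consecutive wrong passwords for user 'alice'",
                    "Account is locked and an audit event is recorded");
            System.out.println(tc);
        }
    }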
Test Suite
[XQual] A set of tests organised by the test operator.
[Beizer] A test suite is a set of related tests, usually pertaining to a group of features or a software component and usually defined over the same database. Suites are combined into groups. (p. 448) A group of tests with a common purpose and database, usually run as a group.
Test Documents
Project Plan [NIST] A management document describing the approach taken for a project. The plan typically describes work to be done, resources required, methods to be used, the configuration management and quality assurance procedures to be followed, the schedules to be met, the project organization, etc. Project in this context is a generic term. Some projects may also need integration plans, security plans, test plans, quality assurance plans, etc. See test plan.
Test Plan [ANSI] Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning.
A document describing the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
[BCS SIGIST] A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.
Test Procedure [ANSI]
(1) Detailed instructions for the set-up, execution, and evaluation of results for a given test case.
(2) A document containing a set of associated instructions as in (1).
(3) Documentation specifying a sequence of actions for the execution of a test.
[NIST] A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for each defined test.
Test Report [C&A] A document describing the conduct and results of the testing carried out for a system or system component.
A document that summarizes the outcome of testing in terms of items tested, summary of results (e.g. defect density), effectiveness of testing and lessons learned.
[ANSI] A document that describes the conduct and results of the testing carried out for a system or component.
Others
Ad Hoc [COBUILD] Something that is ad hoc or that is done on an ad hoc basis happens or is done only when the situation makes it necessary or desirable, rather than being arranged in advance or being part of a general plan.
System Under Test (SUT) [ATIS] The real open system in which the Implementation Under Test (IUT) resides.
Oracle See the Oracle entry below.
Reliability [PSS] The probability of a given system performing its mission adequately for a specified period of time under the expected operating conditions.
[CIGITAL] Software reliability is the probability that software will provide failure-free operation in a fixed environment for a fixed interval of time. Probability of failure is the probability that the software will fail on the next input selected. Software reliability is typically measured per some unit of time, whereas probability of failure is generally time independent. These two measures can be easily related if you know the frequency with which inputs are executed per unit of time. Mean-time-to-failure is the average interval of time between failures; this is also sometimes referred to as Mean-time-before-failure.
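For instance, if inputs are executed at a rate of r per unit of time and each input fails independently with probability of failure p, then the failure rate is roughly r × p per unit of time, so the mean-time-to-failure is roughly 1 / (r × p).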
Analysis
(1) To separate into elemental parts or basic principles so as to determine the nature of the whole.
(2) A course of reasoning showing that a certain result is a consequence of assumed premises.
[ANSI]
(3) The methodical investigation of a problem, and the separation of the problem into smaller related units for further detailed study.
Anomaly [IEEE] Anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents. See bug, defect, error, exception, fault.
[COBUILD] An anomaly is a rule or practice that is different from what is normal or usual, and which is therefore unsatisfactory.
[ANSI] Anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents.
Assertion [NIST] A logical expression specifying a program state that must exist or a set of conditions that program variables must satisfy at a particular point during program execution.
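For example, Java's built-in assert statement expresses such a condition; a sketch with illustrative pre- and postconditions follows. Note that the JVM checks these only when started with -ea.

    // Sketch: assertions specify conditions that must hold at particular points
    // during execution. Checked only when run with: java -ea AssertionExample
    public class AssertionExample {

        static double squareRoot(double x) {
            assert x >= 0 : "precondition violated: x must be non-negative, got " + x;
            double r = Math.sqrt(x);
            assert Math.abs(r * r - x) < 1e-9 : "postcondition violated for x = " + x;
            return r;
        }

        public static void main(String[] args) {
            System.out.println(squareRoot(2.0));  // prints 1.414...
            System.out.println(squareRoot(-1.0)); // AssertionError when assertions are enabled
        }
    }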
Baseline [NIST] A specification or product that has been formally reviewed and agreed upon, that serves as the basis for further development, and that can be changed only through formal change control procedures.
Batch [IEEE] Pertaining to a system or mode of operation in which inputs are collected and processed all at one time, rather than being processed as they arrive, and a job, once started, proceeds to completion without additional input or user interaction. Contrast with conversational, interactive, on-line, real-time.
Benchmark A standard against which measurements or comparisons can be made.
Bug [GCSSDT] A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.
Certification [PSS] The comprehensive evaluation of the technical and nontechnical security features of an AIS and other safeguards, made in support of the accreditation process, that establishes the extent to which a particular design and implementation meet a specified set of security requirements.
Code Review [IEEE] A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Contrast with code audit, code inspection, code walkthrough.
  • Coding Standards
  • Development Standards
  • Programming Standards
Written procedures describing coding [programming] style conventions specifying rules governing the use of individual constructs provided by the programming language, and naming, formatting, and documentation requirements which prevent programming errors, control complexity and promote understandability of the source code.
Completeness [NIST] The property that all necessary parts of the entity are included. Completeness of a product is often used to express the fact that all requirements have been met by the product.
Complexity [IEEE]
(1) The degree to which a system or component has a design or implementation that is difficult to understand and verify.
(2) Pertaining to any of a set of structure based metrics that measure the attribute in (1).
Control Flow Diagram [IEEE] A diagram that depicts the set of all possible sequences in which operations may be performed during the execution of a system or program. Types include box diagram, flowchart, input-process-output chart, state diagram. Contrast with data flow diagram.
Correctness [IEEE]
(1) The degree to which software is free from faults in its specification, design, and coding.
(2) The degree to which software, documentation, and other items meet specified requirements.
(3) The degree to which software, documentation, and other items meet user needs and expectations, whether specified or not.
Crash [IEEE] The sudden and complete failure of a computer system or component.
  • Criticality
  • Severity
[IEEE] The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system.
  • Data Corruption
  • Data Contamination
[ISO] A violation of data integrity.
Data Validation [ISO]
(1) A process used to determine if data are inaccurate, incomplete, or unreasonable. The process may include format checks, completeness checks, check key tests, reasonableness checks and limit checks.
(2) The checking of data for correctness or compliance with applicable standards, rules, and conventions.
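A minimal sketch of the checks named in definition (1) follows: a completeness check, a format check, and a limit check; all field names and limits are illustrative.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Pattern;

    // Sketch: completeness, format, and limit checks on one hypothetical record.
    public class DataValidationExample {

        private static final Pattern ISO_DATE = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");

        static List<String> validate(String orderDate, String customerId, int quantity) {
            List<String> problems = new ArrayList<>();
            if (customerId == null || customerId.isBlank())
                problems.add("completeness check failed: customerId is missing");
            if (orderDate == null || !ISO_DATE.matcher(orderDate).matches())
                problems.add("format check failed: orderDate must be YYYY-MM-DD");
            if (quantity < 1 || quantity > 10000)
                problems.add("limit check failed: quantity must be between 1 and 10000");
            return problems; // an empty list means the record passed validation
        }

        public static void main(String[] args) {
            System.out.println(validate("13/01/2024", "", 0)); // reports all three problems
        }
    }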
Dead Code Program code statements which can never execute during program operation. Such code can result from poor coding style, or can be an artifact of previous versions or debugging efforts. Dead code can be confusing, and is a potential source of erroneous software changes.
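Two common forms of dead code are sketched below, both typical artifacts of debugging sessions or of earlier versions; all names are illustrative.

    // Sketch: two common forms of dead code.
    public class DeadCodeExample {

        static final boolean TRACE = false; // leftover debugging switch

        static int area(int w, int h) {
            if (TRACE) { // this branch can never execute: TRACE is always false
                System.out.println("area(" + w + ", " + h + ")");
            }
            return w * h;
        }

        // No remaining callers anywhere in the program: dead code from an
        // earlier version, and a potential source of erroneous changes if a
        // maintainer edits it believing it still runs.
        private static int oldArea(int w, int h) {
            return (w + 1) * (h + 1);
        }
    }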
Debugging [Myers] Determining the exact nature and location of a program error, and fixing the error.
Defect Nonconformance to requirements. See anomaly, bug, error, exception, fault, failure.
Design Specification [NIST] A specification that documents how a system is to be built. It typically includes system or component structure, algorithms, control logic, data structures, data set [file] use information, input/output formats, interface descriptions, etc. Contrast with design standards, requirement.
Diagnostic [IEEE] Pertaining to the detection and isolation of faults or failures. For example, a diagnostic message, a diagnostic manual.
Disaster Recovery [RICE] The ability or process to restore and reestablish processing in the event of a disaster.
Environment [ANSI]
(1) Everything that supports a system or the performance of a function.
(2) The conditions that affect the performance of a system or function.
Error [ISO] A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See anomaly, bug, defect, exception, fault.
[ANSI]
(1) The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
(2) An incorrect step, process, or data definition. Also: fault.
(3) An incorrect result. Also: failure.
(4) A human action that produces an incorrect result. Also: mistake.
[Krsul] An error is a mistake made by a developer. It might be a typographical error, a misreading of a specification, a misunderstanding of what a subroutine does, and so on (IEEE 1990). An error might lead to one or more faults. Faults are located in the text of the program. More precisely, a fault is the difference between the incorrect program and the correct version (IEEE 1990).
Evaluation [COBUILD] Evaluation is a decision about the significance, value, or quality of something, based on careful study of its good and bad features.
[Common Criteria] Assessment of a PP [Protection Profile], an ST [Security Target] or a TOE [Target of Evaluation], against defined criteria.
Exception [IEEE] An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception, protection exception, underflow exception.
Failure [IEEE] The inability of a system or component to perform its required functions within specified performance requirements. See bug, crash, exception, fault.
[BCS SIGIST] Deviation of the software from its expected delivery or service.
[ANSI] (after Fenton) The inability of a system or component to perform its required functions within specified performance requirements.
Fault [ANSI]
(1) An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner.
(2) An incorrect step, process, or data definition in a computer program.
See anomaly, bug, defect, error, exception.
[BCS SIGIST] A manifestation of an error in software. A fault, if encountered may cause a failure.
Fault Injection [CIGITAL] The hypothesized errors that software fault injection uses are created in one of three ways:
(1) adding code to the code under analysis,
(2) changing the code that is there, or
(3) deleting code from the code under analysis. Code that is added to the program for the purpose of either simulating errors or detecting the effects of those errors is called "instrumentation code". To perform fault injection, some amount of instrumentation is always necessary, and although this can be added manually, it is usually performed by a tool.
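A hand-made sketch of approach (2) follows: a comparison operator is deliberately mutated to simulate an off-by-one error, so one can observe whether the test suite detects the injected fault. Names are illustrative, and real fault injection is usually tool-driven, as noted above.

    // Sketch of approach (2): the comparison was deliberately changed from
    // '<' to '<=' to simulate an off-by-one error.
    public class FaultInjectionExample {

        static boolean withinLimit(int value, int limit) {
            // original code:   return value < limit;
            // injected fault:  the comparison operator was mutated
            return value <= limit;
        }

        public static void main(String[] args) {
            // A boundary-value test detects the injected fault:
            System.out.println(withinLimit(10, 10)); // prints true; the original prints false
        }
    }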
Fault Tolerance [NSA] The ability of a system or component to continue normal operation despite the presence of hardware or software faults.
Formal [Common Criteria] Expressed in a restricted syntax language with defined semantics based on well-established mathematical concepts.
Function [ISO]
(1) A mathematical entity whose value, namely, the value of the dependent variable, depends in a specified manner on the values of one or more independent variables, with not more than one value of the dependent variable corresponding to each permissible combination of values from the respective ranges of the independent variables.
(2) A specific purpose of an entity, or its characteristic action.
Functional Design [IEEE]
(1) The process of defining the working relationships among the components of a system.
(2) The result of the process in (1).
Functional Requirements [IEEE] A requirement that specifies a function that a system or system component must be able to perform.
Inspection A manual testing technique in which program documents [specifications (requirements, design), source code, or user's manuals] are examined in a very formal and disciplined manner to discover errors, violations of standards, and other problems. Checklists are a typical vehicle used in accomplishing this technique. See static analysis, code audit, code inspection, code review, code walkthrough.
Instruction [IEEE]
(1) A program statement that causes a computer to perform a particular operation or set of operations.
[ISO]
(2) In a programming language, a meaningful expression that specifies one operation and identifies its operands, if any.
Instrumentation [NBS] The insertion of additional code into a program in order to collect information about program behavior during program execution. Useful for dynamic analysis techniques such as assertion checking, coverage analysis, tuning.
[COBUILD] Instrumentation is a group or collection of instruments, usually ones that are part of the same machine.
[ANSI] Devices or instructions installed or inserted into hardware or software to monitor the operation of a system or component.
[BCS SIGIST] The insertion of additional code into the program in order to collect information about program behaviour during program execution.
Integrity [NSA] Assuring information will not be accidentally or maliciously altered or destroyed.
[PSS] Sound, unimpaired or perfect condition.
Interface [ISO]
(1) A shared boundary between two functional units, defined by functional characteristics, common physical interconnection characteristics, signal characteristics, and other characteristics, as appropriate. The concept involves the specification of the connection of two devices having different functions.
(2) A point of communication between two or more processes, persons, or other physical entities.
(3) A peripheral device which permits two or more devices to communicate.
Malicious Code [IETF] Hardware, software, or firmware that is intentionally included or inserted in a system for a harmful purpose.
[PSS] Hardware, software, or firmware that is intentionally included in a system for an unauthorized purpose; e.g., a Trojan horse.
Measure [IEEE] A quantitative assessment of the degree to which a software product or process possesses a given attribute.
Module
(1) In programming languages, a self-contained subdivision of a program that may be separately compiled.
(2) A discrete set of instructions, usually processed as a unit, by an assembler, a compiler, a linkage editor, or similar routine or subroutine.
(3) A packaged functional hardware unit suitable for use with other components.
Platform The hardware and software which must be present and functioning for an application program to run [perform] as intended. A platform includes, but is not limited to, the operating system or executive software, communication software, microprocessor, network, input/output hardware, any generic software libraries, database management, user interface software, and the like.
Program [ISO]
(1) A sequence of instructions suitable for processing. Processing may include the use of an assembler, a compiler, an interpreter, or another translator to prepare the program for execution. The instructions may include statements and necessary declarations.
[ISO]
(2) To design, write, and test programs.
[ANSI]
(3) In programming languages, a set of one or more interrelated modules capable of being executed.
(4) Loosely, a routine.
(5) Loosely, to write a routine.
Quality Assurance (QA) [ISO]
(1) The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements.
(2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and procedures.
(3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of confidence in data integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output.
(4) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the product are working as expected individually and collectively.
Real-Time [IEEE] Pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results can be used to control, monitor, or respond in a timely manner to the external process. Contrast with batch.
Relational Database Database organization method that links files together as required. Relationships between files are created by comparing data such as account numbers and names. A relational system can take any two or more files and generate a new file from the records that meet the matching criteria. Routine queries often involve more than one data file; e.g., a customer file and an order file can be linked in order to ask a question that relates to information in both files, such as the names of the customers that purchased a particular product.
Requirement [IEEE]
(1) A condition or capability needed by a user to solve a problem or achieve an objective.
(2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents.
(3) A documented representation of a condition or capability as in (1) or (2). See: design requirement, functional requirement, implementation requirement, interface requirement, performance requirement, physical requirement.
Risk [IEEE] A measure of the probability and severity of undesired effects. Often taken as the simple product of probability and consequence.
[PSS] The probability that a particular threat will exploit a particular vulnerability of the system.
[IETF] An expectation of loss expressed as the probability that a particular threat will exploit a particular vulnerability with a particular harmful result.
Security [NSA] A condition that results from the establishment and maintenance of protective measures that ensure a state of inviolability from hostile acts or influences.
[SPIDT] The subfield of information science concerned with ensuring that information systems are imbued with the condition of being secure, as well as the means of establishing, testing, auditing, and otherwise maintaining that condition.
[IETF]
(1) Measures taken to protect a system.
(2) The condition of a system that results from the establishment and maintenance of measures to protect the system.
(3) The condition of system resources being free from unauthorized access and from unauthorized or accidental change, destruction, or loss.
[Common Criteria] Security is concerned with the protection of assets from threats, where threats are categorised as the potential for abuse of protected assets. All categories of threats should be considered; but in the domain of security greater attention is given to those threats that are related to malicious or other human activities.
Simulator [IEEE] A device, computer program, or system that behaves or operates like a given system when provided a set of controlled inputs. Contrast with emulator. A simulator provides inputs or responses that resemble anticipated process parameters. Its function is to present data to the system at known speeds and in a proper format.
Emulator [IEEE] A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. Contrast with simulator.
Specification [IEEE] A document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristics of a system or component, and often, the procedures for determining whether these provisions have been satisfied. Contrast with requirement. See formal specification, functional specification, performance specification, interface specification, design specification, coding standards, design standards.
Test Bed [ANSI] An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
[LIS] Any system whose primary purpose is to provide a framework within which other systems can be tested. Test beds are usually tailored to a specific programming language and implementation technique, and often to a specific application. Typically a test bed provides some means of simulating the environment of the system under test, of test-data generation and presentation, and of recording test results.
According to the Dictionary of Computing, Valerie Illingworth (ed.), 1996.
Oracle [BCS SIGIST] A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.
[Beizer] (after Adrion) Any (often automated) means that provides information about the (correct) expected behavior of a component (HOWD86). Without qualification, this term is often used synonymously with input/outcome oracle.
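As an illustration, a simple oracle can be a slow-but-trusted reference implementation whose outputs serve as the predicted outcomes for the code under test; the functions below are hypothetical.

    import java.util.Random;

    // Sketch: a reference implementation acts as the oracle, producing the
    // predicted outcome for each randomly generated input.
    public class OracleExample {

        static int sumFast(int n) {      // hypothetical code under test
            return n * (n + 1) / 2;
        }

        static int sumReference(int n) { // the oracle: obviously correct, if slow
            int total = 0;
            for (int i = 1; i <= n; i++) total += i;
            return total;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            for (int i = 0; i < 1000; i++) {
                int n = rnd.nextInt(10000);
                int expected = sumReference(n); // predicted outcome
                int actual = sumFast(n);        // actual outcome
                if (expected != actual)
                    throw new AssertionError("mismatch for n = " + n);
            }
            System.out.println("all 1000 random inputs agree with the oracle");
        }
    }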
Trace [IEEE]
(1) A record of the execution of a computer program, showing the sequence of instructions executed, the names and values of variables, or both. Types include execution trace, retrospective trace, subroutine trace, symbolic trace, variable trace.
(2) To produce a record as in (1).
(3) To establish a relationship between two or more products of the development process; e.g., to establish the relationship between a given requirement and the design element that implements that requirement.
Precondition [BCS SIGIST] Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.
Validation [ANSI] The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
[FDA]
(1) Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes. Contrast with data validation.

XQual-Specific Glossary

Term Meaning
Agent The agent is part of XStudio but can also be executed on its own. This component includes all the available launchers and knows which launcher to use to run a specific test.
The agent is the intermediary between the physical tests and XStudio.
Launcher The launcher is an external component of XStudio: a piece of Java software that MUST implement a specific interface (provided by XStudio). Its role is to take a test (retrieved from the agent), execute it, and provide results and complementary information to the agent.
A list of available launchers is available.
Refer to the Develop a launcher section for more details.
Trigger A trigger is an external component of XStudio: a piece of Java software that MUST implement a specific interface (provided by XStudio). Its role is to execute an action or a suite of actions when an event occurs. An event can be a change of status of an object, for instance.
Refer to the Develop a trigger section for more details.
PDNL PDNL stands for Procedure Definition Natural Language and is a simple way to describe manual test procedures using the keywords testcase, do, check, and and.
It can be automatically converted to scripted tests.
Refer to the Exploratory Session section for more details.