The test plan

74.1 Purpose

The ultimate objective of testing is to ensure that the system performs as designed and, by extension, to ensure that it meets the user’s needs. More specifically, testing is the process of exercising the system and its components to locate, investigate, and correct errors and bugs.

74.2 Strengths, weaknesses, and limitations

The strengths and weaknesses of specific techniques will be discussed in context.

74.3 Inputs and related ideas

Chapter 75 discusses test data. Virtually every component described in Part VI must be tested. General system design principles are discussed in Chapter 72. Inspections (Chapter 23) support a form of testing that can be performed on logical components. The joint application design technique (Chapter 14) can be used to develop test procedures. Gantt charts (Chapter 20) and project networks (Chapter 21) can be used to plan, document, and manage a test schedule. The requirements specification (Chapter 35) is an important source of functional and performance requirements, and serves as a base for establishing a test plan. Version controls (Chapter 80) are used to ensure that the appropriate version of the code is tested.

74.4 Concepts

The ultimate objective of testing is to ensure that the system performs as designed and, by extension, to ensure that it meets the user’s needs. Consequently, user involvement is crucial in the testing process.

More specifically, testing is the process of exercising the system and its components to locate, investigate, and correct errors and bugs. The goals of testing include ensuring that all system components work properly, finding errors and identifying their causes, revising or modifying the software and other components to eliminate errors, tracking the status of errors, and adjusting system performance and/or operating procedures as appropriate.

74.4.1 The test plan

Effective testing does not just happen; it must be carefully planned. A complete test plan incorporates testing strategies, testing procedures, test data, and a testing schedule. Test data is covered in Chapter 75. The other three elements of a test plan are discussed in this chapter.

Tests are performed on the system’s physical components. Consequently, although the logical models and design documentation prepared during the analysis and design stages of the system development life cycle can (and should) be evaluated and inspected, testing does not begin until the implementation stage begins. However, the test plan should be developed in parallel with the design stage of the system development life cycle so it is ready when implementation begins. The requirements specification (Chapter 35) is an important source of functional and performance requirements, and serves as a base for establishing a test plan.

Finally, note that the test plan is constrained by resources (the computing platform, hardware, software, and peripherals), personnel, and time (in the form of the project schedule). The test plan designer’s objective is to test the system as thoroughly and as effectively as possible within the constraints.

74.4.2 Testing strategies

Testing can be performed top down, bottom up, and/or middle out. Top-down testing starts at the top (with the broad, control modules) and works through the module hierarchy level by level until the bottom level (the detailed computational modules) is reached. Bottom-up testing starts at the bottom and works up through the hierarchy to the top. The middle-out (or hybrid) approach starts in the middle of the hierarchy and moves bi-directionally toward both the top and the bottom.

With white-box testing, the objective is to directly verify and review the logical structure, flow, and/or sequence of a proposed system by focusing on such internal components as the code. White-box testing is employed when the system is developed internally and the program structure, sequence, and coding are completely understood.
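
For example, a white-box test is designed from the code itself: the tester reads the implementation and chooses cases that exercise every branch. The sketch below is illustrative only; it assumes a hypothetical classify_credit routine written in Python.

import unittest

def classify_credit(balance, late_payments):
    """Hypothetical routine under test; its internal branches drive case selection."""
    if late_payments > 2:
        return "poor"
    if balance > 10000:
        return "review"
    return "good"

class ClassifyCreditWhiteBoxTest(unittest.TestCase):
    """One case per branch, so every path through the code is executed."""
    def test_late_payment_branch(self):
        self.assertEqual(classify_credit(500, 3), "poor")

    def test_high_balance_branch(self):
        self.assertEqual(classify_credit(10001, 0), "review")

    def test_default_branch(self):
        self.assertEqual(classify_credit(500, 0), "good")

if __name__ == "__main__":
    unittest.main()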

Black-box testing, in contrast, ignores the internal contents of the module, program, subsystem, or system and considers only the inputs and the outputs. Black-box testing is ideal for functional testing or for testing external operations. Often, sophisticated test cases and associated test data sets are designed to support black-box testing.
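
By way of contrast, a black-box test is designed entirely from the specification. The sketch below assumes a hypothetical shipping_charge function whose specification promises that orders of $100 or more ship free and all others cost a flat $7.50; the test pairs inputs with promised outputs and never looks inside the function.

import unittest

def shipping_charge(order_total):
    """Hypothetical unit under test: orders of $100 or more ship free."""
    return 0.0 if order_total >= 100.0 else 7.50

class ShippingChargeBlackBoxTest(unittest.TestCase):
    def test_specified_behavior(self):
        # Each case pairs an input with the output the specification promises.
        cases = [
            (50.00, 7.50),     # ordinary order below the threshold
            (99.99, 7.50),     # boundary value just under the threshold
            (100.00, 0.0),     # boundary value at the threshold
            (250.00, 0.0),     # ordinary order above the threshold
        ]
        for order_total, expected in cases:
            self.assertEqual(shipping_charge(order_total), expected)

if __name__ == "__main__":
    unittest.main()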

Gray-box testing is a hybrid approach for testing both the functions and the contents of major programs and/or modules that are likely to be internally maintained, modified, or customized later.

The correct testing strategy is a function of the specific test to be performed. A given test plan might incorporate several different strategies.

74.4.3 Test procedures

Test procedures are needed to define the process for creating the test data, determine the testing sequence, specify test logistics, and document the test results. There is no standard format or style for developing test procedures, although many organizations have implemented broad testing standards and rely on a joint application design (JAD) session to define the specific procedures for a given project.

The individual tests, the test criteria, and the associated test data must be defined. The testing environment and the necessary resources must be specified. The people who will conduct each test and the people who will evaluate the results must be identified. (They should be different.) The test scope and test boundaries must be clearly defined, and the tests should overlap to make sure every aspect of the system is covered. The criteria for passing each test and the person or persons responsible for the pass/fail decision must be specified.

Library control procedures are used to create and maintain a test data library and the relevant testing software. Change control procedures are used to record, assess, control, and track all requests for change both during and after the testing process. Typically, when a module or program is changed, the original version, the revised version, and information about the change (the initial proposal, justifications for the change, approvals, the anticipated impact, etc.) are maintained by the change control procedures. Reporting control procedures are used to document all test results. Version controls (Chapter 80) are used to ensure that the appropriate version of the code is tested and passed on to subsequent steps.
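
As a sketch of what a change control procedure might record, the Python fragment below defines one tracked change request; the field names are illustrative, not a standard.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    """One tracked change: the original and revised version identifiers
    are retained along with the information that justified the change."""
    request_id: str
    module: str
    original_version: str            # version identifier before the change
    revised_version: str             # version identifier after the change
    proposal: str                    # the initial proposal
    justification: str
    approvals: list = field(default_factory=list)
    anticipated_impact: str = ""
    submitted: date = field(default_factory=date.today)
    status: str = "open"             # e.g., open, approved, applied, closed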

74.4.4 Testing levels

Testing occurs throughout the implementation process. Figure 74.1 shows how the various levels of testing build on each other.

74.4.4.1 Scaffolding

Scaffolding is software written specifically to test the system. Scaffolding is generally used to simulate the system environment or to initiate a calling sequence to trigger the execution of selected modules in the proper order. It is not part of the final system. It exists for a limited period of time, and its only purpose is to support testing. Scaffolding software typically contains test drivers (to call modules in the proper order) and stubs (simplified versions of selected modules). Using stubs simplifies system behavior and makes it easier to detect certain types of errors.
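
The sketch below shows both ingredients in miniature, using hypothetical module names: a stub stands in for a tax module that is not yet ready, and a driver initiates the calling sequence and checks the result.

def compute_tax_stub(order):
    """Stub: a simplified stand-in for the real tax module. Returning a
    fixed value keeps the behavior predictable, so errors elsewhere are
    easier to isolate."""
    return 5.00

def price_order(order, compute_tax):
    """Module under test; the tax routine is passed in so the scaffolding
    can substitute the stub for the real module."""
    return order["subtotal"] + compute_tax(order)

def test_driver():
    """Driver: calls the module under test in the proper order and
    verifies the result."""
    order = {"subtotal": 100.00}
    total = price_order(order, compute_tax_stub)
    assert total == 105.00, f"expected 105.00, got {total}"
    print("price_order passed with the stubbed tax module")

if __name__ == "__main__":
    test_driver()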

Figure 74.1  Testing levels.

74.4.4.2 Unit testing

As the name implies, unit testing (or module testing) is conducted on a single program, a single module, or a single component. The idea is to use test data to check the behavior of a unit without regard for its interfaces or relationships with other units. Unit testing is typically performed at the developer’s site by the responsible programmers.
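
A minimal unit test sketch, assuming a hypothetical reorder_point routine: the supplier module it normally collaborates with is replaced by a mock so the unit is checked in isolation.

import unittest
from unittest.mock import Mock

def reorder_point(inventory, threshold, supplier):
    """Hypothetical unit: reorder when stock falls below the threshold."""
    if inventory < threshold:
        supplier.place_order()
        return True
    return False

class ReorderPointUnitTest(unittest.TestCase):
    def test_orders_when_below_threshold(self):
        supplier = Mock()      # isolates the unit from the real supplier module
        self.assertTrue(reorder_point(5, 10, supplier))
        supplier.place_order.assert_called_once()

    def test_no_order_at_threshold(self):
        supplier = Mock()
        self.assertFalse(reorder_point(10, 10, supplier))
        supplier.place_order.assert_not_called()

if __name__ == "__main__":
    unittest.main()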

74.4.4.3 Integration testing

Integration testing focuses on two or more individual modules that are grouped to form a partial system. Integration testing uses test data to evaluate the individual units, their interfaces with each other, and their combined behavior. Like unit testing, integration testing is usually performed at the developer’s site by the responsible programmers and/or systems analysts.
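
A minimal sketch of the idea, assuming two hypothetical modules: the test exercises the interface between them, confirming that the output of one is an acceptable input to the other.

import unittest

def parse_order(record):
    """Hypothetical input module: turns a raw record into an order dict."""
    item, qty, unit_price = record.split(",")
    return {"item": item, "qty": int(qty), "unit_price": float(unit_price)}

def extend_price(order):
    """Hypothetical pricing module: consumes what parse_order produces."""
    return order["qty"] * order["unit_price"]

class OrderPipelineIntegrationTest(unittest.TestCase):
    def test_parsed_record_prices_correctly(self):
        # The interface under test: parse_order's output feeds extend_price.
        order = parse_order("widget,3,2.50")
        self.assertEqual(extend_price(order), 7.50)

if __name__ == "__main__":
    unittest.main()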

Integration testing can be performed top down, bottom up, or middle out. A sequence of related tests can be conducted using a phased, incremental, or evolutionary approach (Chapter 72). Black-box, white-box, and gray-box testing are all possible. Note, however, that no matter which strategy is chosen, integration testing must be performed every time the relevant partial system is changed in any way.

74.4.4.4 Function testing

Function testing is performed on one or more partial systems that have already been integration tested. The objective is to use test data and simulated data (including range constraints, format constraints, etc.) to test a user-defined function. Typically, the function tests are performed at the developer’s site by the programmers and/or the systems analysts.

74.4.4.5 System testing

The system test is conducted on the entire system using both test data and real, user-supplied data. Generally, the system test is performed at the developer’s site using the developer’s own hardware and software.

74.4.4.6 User acceptance testing

User acceptance testing is a complete system test performed at the user’s site, on the user’s hardware/software platform, under the user’s control, using real data provided by the user.

An alpha test is a controlled environment test. Often, the designers demonstrate key system functions, perhaps selected by the users, and the users manipulate the system under developer guidance. Typically, the systems requirements, the general design philosophy, and selected portions of the code and documentation are reviewed.

The purpose of a beta test is to allow real users who are unfamiliar with the technical details to “try out” a preliminary, pre-release beta version of the system. The job of the beta testers is to exercise the system, identify its strengths and weaknesses, document any errors they find, and report their impressions back to the technical experts. Note that a beta test is conducted by (selected) real users who use real hardware, real software, and real (unplanned) data to work on real and imagined problems with any frequency and in any sequence. Often, an automated testing tool or procedure is employed to ensure that all the system’s functions and operations are tested and none are ignored.

Gamma testing checks such additional details as the system’s compatibility with the old system and its performance under peak demand, using such tools as data transform analysis, operator sequence analysis, and symbolic manipulation analysis. Data transform analysis tests whether data generated on one platform can be transformed without difficulty into input for another platform. Operator sequence analysis checks whether the system can still be operated correctly and reliably when the input sequence of different tasks or jobs is changed. Symbolic manipulation analysis tests the system’s ability to operate given different symbolic inputs, such as audio and/or video signals.

74.4.4.7 Regression testing

Regression testing complements unit, integration, function, or system testing and is usually performed by technical personnel. The idea is to use old test cases and test data on an updated or modified version of a system to ensure that the changes have not affected the system’s ability to perform its fundamental tasks. Usually, the old system is tested to establish a benchmark, the changes are made, the new system is tested and a new set of benchmark values generated, and the new and old benchmark values are compared.
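
The benchmark-and-compare cycle can be sketched as follows; the two rounding routines are hypothetical stand-ins for the old and modified versions of the same component.

def run_benchmark(version, test_cases):
    """Apply every saved test case to one version of the routine."""
    return {inputs: version(*inputs) for inputs in test_cases}

def regression_check(old_version, new_version, test_cases):
    """Compare the new version's results against the old benchmark and
    report any case whose result the change has altered."""
    benchmark = run_benchmark(old_version, test_cases)
    return [inputs for inputs, expected in benchmark.items()
            if new_version(*inputs) != expected]

def old_round(x):
    return int(x + 0.5)      # version before the change

def new_round(x):
    return round(x)          # rewrite; Python's round() uses banker's rounding

cases = [(0.5,), (1.5,), (2.3,), (2.7,)]
print(regression_check(old_round, new_round, cases))   # flags (0.5,), where behavior changed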

74.4.4.8 Systems performance testing

Systems performance testing focuses on system behavior. Peak load testing is intended to ensure that the system can handle the stress of a peak load demand. Recovery testing simulates emergency situations such as power failures, equipment malfunctions, database failures, and so on.
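
A minimal peak load sketch, assuming a hypothetical handle_request transaction and illustrative limits: a burst of concurrent requests is driven through the system, and the test fails if any request is dropped or the burst exceeds the allowed time.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Hypothetical transaction under test."""
    time.sleep(0.01)                  # stand-in for real work
    return n

def peak_load_test(workers, requests, max_seconds):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handle_request, range(requests)))
    elapsed = time.perf_counter() - start
    assert len(results) == requests, "some requests were dropped"
    assert elapsed <= max_seconds, f"peak load took {elapsed:.2f}s (limit {max_seconds}s)"
    print(f"{requests} requests in {elapsed:.2f}s with {workers} workers")

if __name__ == "__main__":
    peak_load_test(workers=50, requests=500, max_seconds=5.0)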

74.4.4.9 Audit testing

The purpose of an audit test is to verify that the system is free of errors. The audit is usually conducted by an organization’s quality assurance group or by qualified outsiders. The idea is to have objective technical experts with no personal ties to the system conduct an in-depth, white-box evaluation of the system and its components.

74.4.4.10 Testing the hardware and the procedures

Many large organizations employ specialists to test, install, and maintain hardware. A small firm might hire a consultant or rely on the analyst for hardware testing. Some organizations rely on the equipment manufacturer’s representatives.

Basic electronic functions are normally tested first. Many hardware components come with their own diagnostic routines, which should be run. Modern electronic equipment is highly reliable; if a component survives the first several hours of operation, it usually continues to work until the end of its useful life. However, start-up failures are common, so many hardware test plans include a burn-in period; for example, a disk drive might be tested by repetitively reading and writing a set of test data for several hours. Stress tests are also a good idea; for example, the system might be run at or near its environmental (temperature and humidity) limits.
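
The disk drive example might be scripted along these lines; the file name, pattern, and duration are placeholders, and a real burn-in would run for hours rather than seconds.

import os
import time

def burn_in(path, hours, pattern=b"\xAA" * 4096, copies=256):
    """Repeatedly write a known pattern and read it back for the duration
    of the burn-in period, verifying the data on every pass."""
    deadline = time.time() + hours * 3600
    passes = 0
    while time.time() < deadline:
        with open(path, "wb") as f:
            f.write(pattern * copies)         # write the test pattern
            f.flush()
            os.fsync(f.fileno())              # force it out to the device
        with open(path, "rb") as f:
            if f.read() != pattern * copies:  # read back and verify
                raise IOError(f"data mismatch on pass {passes}")
        passes += 1
    print(f"burn-in complete: {passes} verified passes")

if __name__ == "__main__":
    burn_in("burnin.dat", hours=0.001)        # about 3.6 seconds, for demonstration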

Manual procedures, auditing procedures, and security procedures are easily overlooked when the test plan is created. Initially, a draft procedure might be tested in a controlled environment, with technical personnel reading the instructions and doing exactly what they say. (The results can be humorous.) Next come controlled user tests, with selected users walking through the procedures and suggesting improvements. Finally come live tests with real users and real data.

74.4.5 The test schedule

The test schedule defines the order in which key tests are performed. Gantt charts (Chapter 20) and project networks (Chapter 21) are useful planning and project management tools. Figure 74.2 summarizes the dependency relationships between the various types of tests, a key factor in planning a test schedule. For example, all unit testing must be completed before the integration test is performed for a given subsystem, and all the subsystems that contribute to performing a given function must be tested before the function is tested.
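
Because the dependencies summarized in Figure 74.2 form a directed graph, a workable test order can be derived mechanically. The sketch below uses hypothetical unit and subsystem names and Python’s standard graphlib module (Python 3.9 or later).

from graphlib import TopologicalSorter

# Each test lists the tests that must be completed before it can run;
# the names are hypothetical, but the structure mirrors Figure 74.2.
dependencies = {
    "integration: subsystem A": ["unit: A1", "unit: A2"],
    "integration: subsystem B": ["unit: B1"],
    "function: order entry":    ["integration: subsystem A",
                                 "integration: subsystem B"],
    "system test":              ["function: order entry"],
    "user acceptance test":     ["system test"],
}

# static_order() yields a schedule in which every test follows
# everything it depends on.
for test in TopologicalSorter(dependencies).static_order():
    print(test)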

Finally, remember that testing is part of the system development life cycle. Consequently, the test schedule is a subset of the system development schedule.

Figure 74.2  The dependency relationships between the various types of tests.

74.5 Key terms

Alpha test —
A controlled environment test in which the designers demonstrate key system functions, perhaps selected by the users, and the users manipulate the system under developer guidance.
Audit test —
An objective, in-depth, white-box evaluation of the system and its components to verify that the system is free of errors.
Beta test —
A test conducted by (selected) real users who use real hardware, real software, and real (unplanned) data to work on real and imagined problems with any frequency and in any sequence.
Black-box testing —
A testing strategy that ignores the internal contents of the module, program, subsystem, or system and considers only the inputs and the outputs.
Bottom-up testing —
A testing strategy that starts at the bottom and works up through the hierarchy to the top.
Burn-in period —
An initial period during which a hardware component is run continuously in an attempt to find and eliminate start-up errors.
Change control procedures —
A set of procedures for recording, assessing, controlling, and tracking all requests for change both during and after the testing process.
Function test —
A test performed on one or more partial systems that have already been integration tested; the objective is to use test data and simulated data to test a user-defined function.
Gamma test —
A test of such details as the system’s compatibility with the old system and the system’s performance under peak demand.
Gray-box testing —
A hybrid of white-box and black-box testing in which both the functions and the contents of major programs and/or modules that are likely to be internally maintained, modified, or customized later are tested.
Integration test —
A test conducted on an aggregate of two or more components or modules that focuses on the individual units, their interfaces with each other, and their combined behavior.
Library control procedures —
A set of procedures for creating and maintaining a test data library and the relevant testing software.
Middle-out testing (hybrid testing) —
A testing strategy that starts in the middle of the hierarchy and moves bi-directionally toward both the top and the bottom.
Peak load test —
A test designed to ensure that the system can handle the stress of a peak load demand.
Recovery test —
A test that simulates emergency situations such as power failures, equipment malfunctions, database failures, and so on.
Regression test —
A form of test in which old test cases and test data are applied to a modified version of a system to ensure that the changes have not affected the system’s ability to perform its fundamental tasks.
Reporting control procedures —
A set of procedures for documenting all test results.
Scaffolding —
Software written specifically to support testing.
Stress test —
A test conducted under extreme conditions.
System performance test —
A test that focuses on system behavior.
System test —
A test conducted on the entire system that uses both test data and real, user-supplied data.
Test plan —
A plan for conducting the necessary tests that incorporates testing strategies, test procedures, test data, and a test schedule.
Testing —
The process of exercising the system and its components to locate, investigate, and correct errors and bugs.
Top-down testing —
A testing strategy that starts at the top (with the broad, control modules) and works through the module hierarchy level by level until the bottom level (the detailed computational modules) is reached.
Unit test (module test) —
A test conducted on a single program or a single module.
White-box testing —
A testing strategy in which the tester directly verifies and reviews the logical structure, flow, and/or sequence of a proposed system.
74.6 Software

Visual Test (Rational Software), ATF (Softbridge), FERRET (Azor Inc.), and QARun (Compuware) support GUI-related testing. Chariot (Ganymede Software), ITE and SDTF (Applied Computer Technology), and FastBench Agent Tester (NETMANSYS) are communication-related software tools. AdaTEST (IPL), C-Cover (Bullseye Testing Technology), and Code Wizard (Parasoft) are used with C and C++. Web testing tools include Webexam and Webload (Radview Software), WebART (OCLC), and TestWorks/Web (Software Research). Performance-related software tools include Silkperformer (Segue), QAStress (Compuware), Loadrunner and Astra Sitetest (Mercury Interactive Corporation), and Benchmark Factory (Client/Server Solutions Inc.).

