Performance analysis

78.1 Purpose

This chapter briefly discusses system evaluation and explains three significant performance criteria: reliability, productivity, and quality.

78.2 Strengths, weaknesses, and limitations

Not applicable.

78.3 Inputs and related ideas

Control charts are discussed in Chapter 10. The system requirements against which performance is measured are documented in the requirements specification (Chapter 35). Performance analysis yields valuable information to support future project planning (Part III) and future cost/benefit analysis (Chapter 38). During the system evaluation process, the physical components (as implemented) are compared to the design specifications, so any or all of the design topics discussed in Part VI might be relevant. Performance analysis is closely related to testing (Chapters 74 and 75), implementation (Chapter 76), and to the other chapters in Part VIII.

78.4 Concepts

This chapter briefly discusses system evaluation and explains three significant performance criteria: reliability, productivity, and quality.

78.4.1 System evaluation

After the system is released to the user, it should be evaluated to determine how well it meets the user’s needs (as defined in the requirements specification) and conforms to the design specifications. Davis and Olson1 suggest three categories for system evaluation. Economic evaluation focuses on comparing the project’s actual time, cost, and benefits to the estimates prepared after the analysis stage and/or during design. The objective is to improve the estimating process. Technical evaluation deals with the technology and the system design and considers such factors as reliability, productivity, quality, efficiency, and effectiveness. Operational evaluation focuses on such operational elements as system controls, interface design, and security design and considers such factors as integration, flexibility, compatibility, user friendliness, and system efficiency.

Numerous hardware and software tools are available to support system evaluation. IBM’s system management facility (SMF) is an example of a job accounting system that can be used to project patterns of growth, manage and plan capacity, and assess system efficiency. A software monitor is a benchmarking program that can be used to measure program efficiency, measure execution performance, keep track of resources used, and so on. System access monitoring can be used to measure such parameters as throughput, turnaround time, access time, and response time. A hardware monitor consists of specially designed circuitry that can be used to measure such parameters as average seek time, rotational delay, arm movement time, and so on.
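
To illustrate the kind of data a system access monitor collects, the brief Python sketch below times a series of simulated requests and reports the mean and median response time and the throughput. It is only a sketch: the send_request function is a hypothetical stand-in for whatever transaction or access a real monitor would watch, not part of any actual monitoring product.

    import time
    import statistics

    def sample_response_times(send_request, n_samples=100):
        """Time repeated calls to a request function and return the elapsed times."""
        elapsed = []
        for _ in range(n_samples):
            start = time.perf_counter()
            send_request()                      # the monitored operation (hypothetical)
            elapsed.append(time.perf_counter() - start)
        return elapsed

    def summarize(elapsed):
        """Reduce raw timings to parameters a system access monitor might report."""
        total = sum(elapsed)
        return {
            "samples": len(elapsed),
            "mean_response_s": statistics.mean(elapsed),
            "median_response_s": statistics.median(elapsed),
            "throughput_per_s": len(elapsed) / total if total else 0.0,
        }

    if __name__ == "__main__":
        # A 10-millisecond sleep stands in for a monitored transaction.
        times = sample_response_times(lambda: time.sleep(0.01), n_samples=20)
        print(summarize(times))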

78.4.2 Reliability

Reliability is the probability that a given component or system will perform as expected (or will not fail) for a given period of time. Reliability is typically measured or estimated using probabilities or such statistical parameters as means, modes, and medians. For example, the mean time between failures (MTBF) is the sum of the mean time to fail (the average time between initial use and failure) and the mean time to repair.
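
As a small worked example, the Python fragment below computes MTBF as the sum of the mean time to fail and the mean time to repair; the failure and repair figures are hypothetical.

    # Hypothetical observations, in hours: how long each unit ran before
    # failing, and how long each repair took.
    time_to_fail_hours = [120.0, 95.5, 140.2, 110.3]
    time_to_repair_hours = [2.5, 4.0, 3.1, 2.9]

    mttf = sum(time_to_fail_hours) / len(time_to_fail_hours)      # mean time to fail
    mttr = sum(time_to_repair_hours) / len(time_to_repair_hours)  # mean time to repair
    mtbf = mttf + mttr                                            # mean time between failures

    print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")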

There are many mathematically based reliability models2. For example, the reliability growth modeling technique expresses cumulative failures as a function of execution time. The reliability cost model describes the relationship between associated costs and failure intensity. System reliability can also be measured by analyzing the relationship between completion date and failure intensity. See Everett and Musa2 for additional details.

Reliability is an extremely important performance criterion on most computer-based systems, and explicit reliability targets or requirements are often documented in the requirements specification. After the system begins operation, all failures and their associated repair times should be documented in detail. Based on the failure and repair data, the mean time between failures should be computed regularly and compared to the target, perhaps by plotting a control chart. Less than acceptable system performance suggests a need for corrective maintenance and/or system enhancement. An evaluation of the causes of system failure can help to improve the reliability of future systems.
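
One way to track the computed MTBF against the documented target is sketched below in Python; the monthly figures, the target, and the choice of three-standard-deviation control limits around the observed mean are all assumptions made for illustration.

    from statistics import mean, stdev

    # Hypothetical monthly MTBF values (hours) computed from the failure and
    # repair log, plus the target taken from the requirements specification.
    monthly_mtbf_hours = [118, 124, 131, 109, 97, 122, 128, 92]
    target_mtbf_hours = 110

    # Control limits around the observed mean (one reasonable control-chart scheme).
    center = mean(monthly_mtbf_hours)
    sigma = stdev(monthly_mtbf_hours)
    lower, upper = center - 3 * sigma, center + 3 * sigma

    for month, mtbf in enumerate(monthly_mtbf_hours, start=1):
        status = "ok" if lower <= mtbf <= upper else "out of control"
        target = "meets target" if mtbf >= target_mtbf_hours else "below target"
        print(f"month {month}: MTBF {mtbf} h ({target}, {status})")

    print(f"average MTBF {center:.1f} h vs. target {target_mtbf_hours} h")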

78.4.3 Productivity

Productivity is defined as output per unit of labor, or more generally, output per unit of input. Increased productivity reduces development time and development cost.

The first step in increasing productivity is to measure productivity. After the system is released to the user, all (or most) of the costs and other resources expended on creating the system (the inputs) and all of the system’s components, features, and facilities (the outputs) are known. Given the inputs and the outputs, various productivity measures can be computed and compared to similar numbers for other systems.

Software productivity is sometimes measured by computing lines of code per unit of time (for example, lines of code per programmer day). The number of lines of code is taken from the program source listings. Programmer time is taken from the appropriate labor or payroll statistics. Comparing the resulting ratio to the same ratio for other projects can show if the organization’s productivity is increasing, decreasing, or staying the same. Explaining discrepancies between projects and linking those discrepancies to other measurable project characteristics (e.g., the computing platform, response time requirements, system type, programming language, etc.) can help to improve the cost estimating process on future projects. Tracking productivity can also help the organization determine if new technology (e.g., a fourth-generation language) really does lead to productivity gains.
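
A minimal sketch of the computation, using hypothetical project figures, appears below; note that comparing lines of code per programmer-day across languages should be done cautiously, since a fourth-generation language packs more function into each line.

    # Hypothetical per-project figures: delivered lines of code and programmer-days.
    projects = {
        "payroll (COBOL)":   {"loc": 48_000, "programmer_days": 640},
        "order entry (4GL)": {"loc": 12_500, "programmer_days": 150},
        "inventory (COBOL)": {"loc": 36_000, "programmer_days": 510},
    }

    for name, figures in projects.items():
        loc_per_day = figures["loc"] / figures["programmer_days"]
        print(f"{name}: {loc_per_day:.1f} lines of code per programmer-day")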

Not all system development activities involve code, of course. Other, more general measures of productivity include effort months per user supported, effort months per project or task completed, and reported defects or repairs per user supported.

The distribution of actual costs can also be significant. Assume, for example, that historically 20 percent of post-analysis costs were spent on design, 40 percent on coding, and 40 percent on debugging and testing. On a new project that uses a fourth-generation language, 45 percent of post-analysis costs were spent on design, 30 percent on coding, and 25 percent on testing and debugging. If design effort is language independent (roughly the same in absolute terms on both projects), design can account for a larger share of the total only if coding and testing costs shrank, so those numbers suggest that the fourth-generation language increased productivity. Additionally, the distribution of costs as a function of language is a useful guide to future cost estimating.
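
The arithmetic behind such a comparison is simple, as the Python sketch below shows; the cost figures are hypothetical and stand in for whatever the organization's accounting records report.

    # Hypothetical post-analysis costs (any consistent unit will do).
    historical = {"design": 200, "coding": 400, "testing_debugging": 400}
    new_4gl_project = {"design": 180, "coding": 120, "testing_debugging": 100}

    def distribution(costs):
        """Convert raw phase costs to percentages of total post-analysis cost."""
        total = sum(costs.values())
        return {phase: 100.0 * amount / total for phase, amount in costs.items()}

    for label, costs in (("historical", historical), ("4GL project", new_4gl_project)):
        shares = ", ".join(f"{phase} {pct:.0f}%" for phase, pct in distribution(costs).items())
        print(f"{label}: {shares}")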

78.4.4 Quality

Quality can be defined (narrowly) as conformance to requirements. In a broader sense, quality implies that the requirements match user needs and that the system meets the requirements. Quality measures are sometimes implemented late in the system development life cycle, but they should be considered during the analysis stage, and specific quality requirements should be documented in the requirements specification.

Quality is often measured by counting defects, where a defect is any failure to meet requirements. The number of defects (perhaps categorized by severity) discovered during testing is a measure of programming and debugging quality. The number of defects discovered after the system is released is a measure of overall system quality. Cost per defect is computed by dividing total debugging or maintenance costs by the total number of defects discovered, and is also used as a measure of productivity. Specific targets (or limits) can be included in the requirements specification and used to define control limits for a control chart. Declining numbers suggest improving quality. Comparing defect statistics for a project developed using traditional programming languages to a project developed using a fourth-generation language will show if the newer technology improves quality.
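
The defect measures described above reduce to a few simple calculations over the defect log, as in the Python sketch below; the defect records and cost figures are invented for illustration.

    from collections import Counter

    # Hypothetical defect log: (severity, phase in which the defect was found).
    defects = [
        ("critical", "testing"), ("minor", "testing"), ("minor", "testing"),
        ("major", "production"), ("minor", "production"), ("critical", "production"),
    ]
    debugging_cost = 42000.0    # total debugging cost (hypothetical)
    maintenance_cost = 18000.0  # total post-release repair cost (hypothetical)

    by_severity = Counter(severity for severity, _ in defects)
    found_in_testing = sum(1 for _, phase in defects if phase == "testing")
    found_after_release = len(defects) - found_in_testing

    print("defects by severity:", dict(by_severity))
    print("found during testing (programming/debugging quality):", found_in_testing)
    print("found after release (overall system quality):", found_after_release)
    print(f"cost per defect: {(debugging_cost + maintenance_cost) / len(defects):.2f}")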

Quality assurance is a four-step process: review, study, implementation, and documentation. The first step, review, involves collecting quality-related information and identifying quality factors. Key quality factors include correctness, reliability, efficiency, integrity, usability, maintainability, testability, flexibility, portability, reusability, and interoperability.3 During the study step, a quality framework is identified by selecting and ranking the quality factors and choosing measurable quality attributes for each one. In the implementation step, related quality attributes are grouped (e.g., error tolerance, consistency, accuracy, and simplicity might be grouped under reliability) and conflicts between attributes (e.g., execution efficiency versus instrumentation, conciseness versus completeness) are resolved. During the documentation step, the quality attributes are expressed as measurable system parameters, the appropriate quality information is collected, and quality is tracked. Note that the documentation step can reveal new quality factors, which leads back to the review step. In other words, quality assurance is a continuous process.
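
A quality framework of the sort described above can be represented as a simple data structure, as in the Python sketch below; the particular factors, rankings, attribute groupings, and conflict pairs shown are illustrative assumptions, not a prescribed set.

    # Hypothetical quality framework: each selected factor is ranked and tied to
    # measurable attributes so that quality can be tracked during documentation.
    quality_framework = [
        # (rank, quality factor, measurable attributes grouped under it)
        (1, "reliability",     ["error tolerance", "consistency", "accuracy", "simplicity"]),
        (2, "usability",       ["operability", "training"]),
        (3, "maintainability", ["modularity", "self-descriptiveness"]),
    ]

    # Conflicting attribute pairs to be resolved during the implementation step.
    conflicts = [("execution efficiency", "instrumentation"),
                 ("conciseness", "completeness")]

    for rank, factor, attributes in sorted(quality_framework):
        print(f"{rank}. {factor}: track {', '.join(attributes)}")

    for first, second in conflicts:
        print(f"resolve trade-off: {first} vs. {second}")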

78.5 Key terms
Defect —
Any failure to meet requirements.
Economic evaluation —
A type of system evaluation that focuses on comparing the project’s actual time, cost, and benefits to the estimates prepared after the analysis stage and/or during design.
Hardware monitor —
Specially designed circuitry that can be used to measure such parameters as average seek time, rotational delay, arm movement time, and so on.
Mean time between failures (MTBF) —
A measure of reliability; the sum of the mean time to fail and the mean time to repair.
Operational evaluation —
A type of system evaluation that focuses on such operational elements as system controls, interface design, and security design and considers such factors as integration, flexibility, compatibility, user friendliness, and system efficiency.
Productivity —
Output per unit of labor; more generally, output per unit of input.
Quality —
Conformance to requirements; in a broader sense, quality implies that the requirements match user needs and that the system meets the requirements.
Quality assurance —
Goals, procedures, and techniques for measuring and ensuring quality.
Quality factor —
A parameter that implies quality, such as correctness, reliability, efficiency, integrity, usability, maintainability, testability, flexibility, portability, reusability, and interoperability.
Reliability —
The probability that a given component or system will perform as expected (or will not fail) for a given period of time.
Software monitor —
A benchmarking program that can be used to measure program efficiency, measure execution performance, keep track of resources used, and so on.
System access monitoring —
Software and hardware used to measure such parameters as throughput, turnaround time, access time, and response time.
Technical evaluation —
A type of system evaluation that deals with the technology and the system design and considers such factors as reliability, productivity, quality, efficiency, and effectiveness.
78.6 Software

IBM’s system management facility (SMF) is an example of a job accounting system that can be used to project patterns of growth, manage and plan capacity, and assess system efficiency. A software monitor is a benchmarking program that can be used to measure program efficiency, measure execution performance, keep track of resources used, and so on. System access monitoring software can be used to measure such parameters as throughput, turnaround time, access time, and response time.

78.7 References
78.7.1 Citations
1.  Davis, G. B., and Olson, M. H., Management Information Systems: Conceptual Foundations, Structure, and Development, 2nd ed., McGraw-Hill, New York, 1985.
2.  Everett, W. W., and Musa, J. D., Software reliability and productivity, Software Engineering Productivity Handbook, Keyes, J., Ed., McGraw-Hill, New York, 1993, chap. 8.
3.  McCall, J. A., Richards, P. K., and Walters, G. F., Factors in Software Quality Assurance: RADC-TR-77-369, Rome Air Development Center, Griffiss Air Force Base, Rome, NY, November 1977.
78.7.2 Suggestions for additional reading
  1. Burch, J. G., Systems Analysis, Design, and Implementation, Boyd & Fraser, Boston, MA, 1992.
  2. Davis, W. S., Business Systems Analysis and Design, Wadsworth, Belmont, CA, 1994.
  3. Dewitz, S. D., Systems Analysis and Design and the Transition to Objects, McGraw-Hill, New York, 1996.
  4. Hoffer, J. A., George, J. F., and Valacich, J. S., Modern Systems Analysis and Design, Benjamin/Cummings, Redwood City, CA, 1996.
  5. Keyes, J., Ed., Software Engineering Productivity Handbook, McGraw-Hill, New York, 1993.
  6. Lamb, D. A., Software Engineering: Planning for Change, Prentice-Hall, Englewood Cliffs, NJ, 1988.
  7. Pressman, R. S., Software Engineering: A Practitioner’s Approach, 2nd ed., McGraw-Hill, New York, 1987.
  8. Swanson, E. B., Information Systems Implementation: Bridging the Gap between Design and Utilization, Irwin, Homewood, IL, 1988.
  9. Vincent, J., Waters, A., and Sinclair, J., Software Quality Assurance: Practice and Implementation, Vol. I, Prentice-Hall, Upper Saddle River, NJ, 1988.
 10. Vincent, J., Waters, A., and Sinclair, J., Software Quality Assurance: A Program Guide, Vol. II, Prentice-Hall, Upper Saddle River, NJ, 1988.
