Operation and maintenance: System controls

77.1 Purpose

This chapter discusses several system control tools and techniques that are commonly used in information systems.

77.2 Strengths, weaknesses, and limitations

Where appropriate, the strengths and weaknesses associated with specific controls will be discussed in context.

77.3 Inputs and related ideas

General system principles are discussed in Chapters 1 and 72. Key information for defining system controls is documented in the requirements specification (Chapter 35). Effective controls must be designed into a system, so controls are an important consideration throughout the design process (Part VI). Control charts are discussed in Chapter 10. Technical inspections are discussed in Chapter 23. Several techniques for screening input data are described in Chapter 46. Security is discussed in Chapter 71. Configuration management and version controls are discussed in Chapter 80.

77.4 Concepts

In addition to inputs, processes, interfaces, and outputs, a system also includes control and feedback mechanisms that together allow the system to determine if it is achieving its purpose. Feedback is the return of a portion of the system’s output to its input. If the feedback suggests a deviation from the expected value (the control), the system reacts by attempting to adjust itself.

The subsections that follow describe several system control tools and techniques that are commonly used in information systems.

77.4.1 Auditing

An audit is a study of a system or a process designed to ensure that the established procedures and controls are followed. Note that the point of an audit is not to correct errors. For example, a well-conducted audit might not catch an incorrect value input by a data entry clerk, but it should flag an attempt by an unauthorized person to change that value.

One technique used by auditors is to follow an audit trail. Sometimes, the auditor starts with selected input transactions and traces the data through the system until they are eventually output. Alternatively, the auditor can start with selected outputs and trace the values back to the source data. Good systems are designed to maintain such audit trails.

Regression testing is a technique that compares the results obtained when the system is being audited to the results obtained under normal conditions. Parallel simulation involves testing both the live system and a simulated system with the same data. With both techniques, any discrepancies are analyzed to determine the accuracy and reliability of the system. Finally, experimental design is used to audit system accuracy by building a pilot prototype and testing it using controlled sample data.
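
As a rough illustration, the Python sketch below runs the same transactions through a live routine and an independently coded simulation and collects any discrepancies for the auditor to analyze. The process_live and process_simulated functions are hypothetical stand-ins, not part of any particular system.

```python
# Parallel simulation sketch: feed identical data to the production routine and
# to an independently written simulation, then report any discrepancies.

def process_live(transaction):
    # Placeholder for the production calculation.
    return round(transaction["quantity"] * transaction["unit_price"], 2)

def process_simulated(transaction):
    # Placeholder for the auditor's independently coded simulation.
    return round(transaction["quantity"] * transaction["unit_price"], 2)

def parallel_simulation(transactions):
    discrepancies = []
    for t in transactions:
        live, simulated = process_live(t), process_simulated(t)
        if live != simulated:
            discrepancies.append((t["id"], live, simulated))
    return discrepancies

sample = [{"id": 1, "quantity": 3, "unit_price": 9.99},
          {"id": 2, "quantity": 10, "unit_price": 2.50}]
print(parallel_simulation(sample))   # an empty list means the two systems agree
```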

In addition to being significant in their own right, audits are an important supplement to virtually all system controls.

77.4.2 Information processing controls

Information processing controls consist of input controls, processing controls, and output controls.

The objective of input controls is to screen out and (if possible) correct bad data before they enter the system. Validity tests are used to ensure that each input field is the right type (numeric, alphabetic), that the value of a given field is within upper and lower bounds, and that fixed-length fields (e.g., social security number, telephone number) are the right length. Exception tests are used to screen such “exceptional” values as a zero (0) in a field that will be used as a divisor. Reasonableness tests are used to screen invalid values (e.g., anything but F or M in a single-character sex or gender field).
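
The following sketch shows how these three kinds of input tests might be coded. The field names, bounds, and nine-digit social security format are illustrative assumptions, not requirements taken from any particular system.

```python
import re

def validity_tests(record):
    """Validity tests: right type, value within bounds, fixed-length fields the right length."""
    errors = []
    if not record["hours"].isdigit():                  # type test: field must be numeric
        errors.append("hours must be numeric")
    elif not 0 <= int(record["hours"]) <= 80:          # bounds test (limits are illustrative)
        errors.append("hours out of range (0-80)")
    if not re.fullmatch(r"\d{9}", record["ssn"]):      # fixed-length test
        errors.append("social security number must be 9 digits")
    return errors

def exception_test(record):
    """Exception test: flag values such as a zero in a field that will be used as a divisor."""
    return ["divisor field is zero"] if int(record["units"]) == 0 else []

def reasonableness_test(record):
    """Reasonableness test: anything but F or M in a one-character sex or gender field is invalid."""
    return [] if record["sex"] in ("F", "M") else ["sex code must be F or M"]

record = {"hours": "42", "ssn": "123456789", "units": "0", "sex": "X"}
print(validity_tests(record) + exception_test(record) + reasonableness_test(record))
```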

Record control is a simple processing control technique that involves counting and verifying the existence of every record in a database. Error controls are used to see if a program or routine can handle an unexpected response or input. Interrupt controls involve intentionally restarting, abandoning, or abnormally terminating a system or program to determine if it is capable of recovering. Transmission controls are used to ensure that there are no missing, incorrectly converted, or wrongly transmitted data. Additionally, audit trails (Section 77.4.1) are valuable processing control tools.
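
As a minimal illustration, the sketch below pairs a record count check with a checksum-based transmission check; the record layout and payload are invented for the example, and a real transmission control would typically use a protocol-level checksum rather than an application-level hash.

```python
import hashlib

def record_control(expected_count, records):
    """Record control: verify that every expected record is present."""
    return len(records) == expected_count

def transmission_control(sent: bytes, received: bytes) -> bool:
    """Transmission control: compare checksums to detect missing or altered data."""
    return hashlib.sha256(sent).digest() == hashlib.sha256(received).digest()

records = [{"id": 1}, {"id": 2}, {"id": 3}]
print(record_control(3, records))                            # True: no records are missing
print(transmission_control(b"PAYROLL0423", b"PAYROLL0423"))  # True: the data arrived intact
```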

Distribution controls are designed to ensure that all outputs are distributed to the right location at the right time. Quantity controls verify that the correct number of copies is generated. Reconciliation controls are focused on ensuring that the right amount of data is output to support daily statistical analysis and decision-making activities. Finally, control totals (Section 77.4.3) help to detect other types of output exceptions and errors.


77.4.3 Operational controls

The purpose of operational controls is to provide an early warning in the event of system malfunction. The idea is to collect data about system performance (feedback), compare the feedback to established standards (the controls), and sound an alarm if reality differs from expectation.

It is impossible to directly monitor everything that happens on a computer-based information system, but control totals are both effective and relatively easy to generate. A control total is an accumulated sum, a count, or a similar value that summarizes the results of numerous computations or transactions.

For example, consider the process of printing paychecks. In many companies, the necessary computations are performed by the payroll program and stored on disk or magnetic tape. The output from that program also includes such control totals as the number of checks to be printed, the sum of the computed net pays for all those checks, and so on. Later, when the check printing routine runs, it independently computes the same counts and totals. If the control values generated by both programs match, it is reasonable to assume that no one modified the payroll data between the time the computations were made and the checks were printed.
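
A minimal sketch of the idea, assuming hypothetical pay records with a net_pay field: the same control totals are computed twice, once by the "payroll program" and once by the "check printing routine," and then compared.

```python
def control_totals(pay_records):
    """Accumulate the control totals described above: a check count and a net-pay sum."""
    return {"check_count": len(pay_records),
            "net_pay_total": round(sum(r["net_pay"] for r in pay_records), 2)}

# Totals computed by the (hypothetical) payroll program ...
payroll_totals = control_totals([{"net_pay": 1250.00}, {"net_pay": 987.65}])

# ... and recomputed independently by the check printing routine.
printing_totals = control_totals([{"net_pay": 1250.00}, {"net_pay": 987.65}])

if payroll_totals == printing_totals:
    print("Control totals match; the payroll data appear unchanged.")
else:
    print("Control totals differ; investigate before printing checks.")
```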

Control totals are sometimes monitored on control charts (Chapter 10). For example, the number of inventory transactions per day might be a useful control total for an inventory system. If the daily count lies between the upper and lower control limits (numbers that should appear in some form on the requirements specification), it is reasonable to assume that the inventory system is functioning as expected. If, however, the transaction count is out of control, management should look for the reason why. Note that the control total does not indicate what is wrong, merely that something is wrong.
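
In code, the comparison against the control limits is a simple range check; the limits and daily counts below are invented for illustration and would normally come from the requirements specification.

```python
def out_of_control(daily_count, lower_limit, upper_limit):
    """Return True if the control total falls outside the control limits."""
    return not (lower_limit <= daily_count <= upper_limit)

LOWER, UPPER = 150, 600   # illustrative control limits
for day, count in [("Mon", 312), ("Tue", 58), ("Wed", 498)]:
    if out_of_control(count, LOWER, UPPER):
        print(f"{day}: {count} transactions is out of control; find the reason why.")
```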

Inventory controls help to ensure that the necessary software, hardware, and peripherals are properly maintained and connected for operation. Documentation controls focus on the documentation library. Scheduling controls are used to monitor input or output timings and provide early warning of increasing queue lengths. Service controls measure such parameters as response rate, throughput rate, and turnaround time.

Other operational controls are designed to ensure that backup and recovery (and other operating procedures) are followed; logs and transaction counts are common tools. Finally, audits are used to verify that the correct procedures are followed.

77.4.4 Personnel controls

Not all system activities take place on the computer, so personnel controls are essential. One underlying principle is the segregation of functions. For example, at a university, the registrar registers students for classes, the finance department bills the students, and the bursar collects the payments. When functions are segregated, it is relatively easy to design reports and controls that, in essence, allow the different functional groups to check on each other and allow an auditor to verify that the appropriate procedures were followed.

To cite another example, imagine that the requirements for an inventory system specify that the warehouse is responsible for controlling inventory and the shipping department is responsible for delivering the orders to the customers. Given such a structure, comparing daily inventory transactions to daily orders shipped might serve as an effective control on the performance of both groups.
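
Such a cross-check might be automated along these lines; the dates and counts are purely illustrative.

```python
from collections import Counter

inventory_issues = Counter({"2024-03-01": 140, "2024-03-02": 162})   # warehouse's daily counts
orders_shipped = Counter({"2024-03-01": 140, "2024-03-02": 158})     # shipping's daily counts

for day in sorted(set(inventory_issues) | set(orders_shipped)):
    if inventory_issues[day] != orders_shipped[day]:
        print(f"{day}: {inventory_issues[day]} inventory transactions vs. "
              f"{orders_shipped[day]} orders shipped; ask both groups why.")
```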

77.4.5 Ensuring data integrity

Data integrity is ensured by carefully controlling and managing data entry, data maintenance, and data access from the time the data first enter the system until they are of no further use. The process can be compared to a chain of evidence in a criminal trial. Unless the police can account for a piece of evidence from the instant it is collected to the instant it is presented in court, that evidence is inadmissible. Similarly, unless the system can account for a particular data element from the instant it is captured until the instant it is no longer needed, that data element cannot be trusted.

Only authorized personnel (as defined in the system requirements) should be allowed to enter data, and clear, unambiguous, verifiable, easily monitored data entry procedures are a must. Relatively few individuals should be authorized to modify data, and steps must be taken to verify the identity of anyone who attempts to change a data value. Similar restrictions must be used to limit data access to authorized personnel. The key is building such controls into the system rather than simply adding them on after the system is completed.

Data integrity controls start with data entry, so the input controls described in Section 77.4.2 are an important component. Often, transactions, errors, and corrections are counted and plotted on a control chart, and data entry procedures might be monitored electronically. Ensuring that only authorized personnel enter, access, and modify data is a security function (Section 77.4.6). Detailed logs of all changes to a database allow an auditor to verify that the appropriate change procedures were followed.
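
A minimal sketch of such a change log, with an assumed list of authorized users and hypothetical field names; a real system would persist the log where it cannot be altered by the people whose work it records.

```python
from datetime import datetime, timezone

change_log = []                                # detailed log of all changes, for the auditor
AUTHORIZED_TO_MODIFY = {"jlee", "mgarcia"}     # assumed list drawn from the requirements

def change_value(user, record_id, field, old_value, new_value):
    """Apply a change only for authorized users, and log every change for later audit."""
    if user not in AUTHORIZED_TO_MODIFY:
        raise PermissionError(f"{user} is not authorized to modify data")
    change_log.append({"when": datetime.now(timezone.utc).isoformat(),
                       "who": user, "record": record_id, "field": field,
                       "old": old_value, "new": new_value})

change_value("jlee", 1042, "quantity_on_hand", 17, 15)
print(change_log[-1])   # the log entry lets an auditor verify the change procedure
```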

77.4.6 Security controls

Security (Chapter 71) involves procedures and other safeguards designed to protect the hardware, software, data, and other system resources from unauthorized access, use, modification, or theft. Once a system’s security is breached, the data are particularly vulnerable because they are so easy to copy or change. It is impossible to ensure data integrity if unauthorized people can bypass the normal controls and access the system.

Physical security is concerned with denying physical access to a system. For example, mainframe computers are often located in controlled-access rooms, personal computers are sometimes placed in locked cabinets when they are not in use, and network connections can be deactivated when an office is closed. Typical physical security controls include counting and logging all attempts to access the system or facility. Procedures are needed for tracking keys and entry codes, changing codes regularly, and so on.

Logical security is implemented by the computer itself. Typically, each user is assigned a unique identification code and a password that must be entered each time he or she logs onto the system. On some systems, additional passwords are required to access more secure data or to execute sensitive programs. Logical security controls might include counts of successful and unsuccessful log-ons, detailed records of attempted break-ins, statistics on password changes (on time, late), and so on. Procedures are needed for screening out easy-to-guess passwords, ensuring that passwords are changed regularly, quickly removing disallowed passwords from the system, and so on. Audits help to verify that the appropriate procedures were followed.
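
The sketch below illustrates two of these logical security controls: a running count of successful and unsuccessful log-ons and a simple screen for easy-to-guess passwords. The deny list and length rule are illustrative assumptions, not a recommended password policy.

```python
logon_stats = {"successful": 0, "unsuccessful": 0}     # logical security control totals

COMMON_PASSWORDS = {"password", "123456", "qwerty"}    # illustrative deny list only

def screen_password(candidate):
    """Reject easy-to-guess passwords before they are accepted."""
    return len(candidate) >= 8 and candidate.lower() not in COMMON_PASSWORDS

def record_logon(user_id, password_ok):
    """Maintain counts of successful and unsuccessful log-ons and flag possible break-ins."""
    logon_stats["successful" if password_ok else "unsuccessful"] += 1
    if not password_ok:
        print(f"Possible break-in attempt: failed log-on for {user_id}")

record_logon("wsdavis", password_ok=False)
print(logon_stats)   # e.g., {'successful': 0, 'unsuccessful': 1}
```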

77.4.7 Software development controls

Software development controls are essential. Undocumented or rogue code can cause debugging, testing, and maintenance nightmares. Hackers and crackers routinely exploit Trojan horses, undocumented trap doors, and known bugs to gain access to computer systems, and disgruntled programmers have been known to insert destructive logic into their code.

The first key is insisting that all programmers follow well-defined coding standards. Special software tools can help. For example, a static code analyzer is a program that scans (but does not execute) the code and flags such potential errors as synonyms (different names for the same data element), poor structure, inconsistent usage, dead code (modules that cannot be executed), unreferenced variables, and other deviations from coding standards. A clean code analyzer output is a prerequisite to code approval.
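
Python's standard ast module is enough to sketch one such check, flagging variables that are assigned but never referenced; a production static code analyzer would apply many more rules than this.

```python
import ast

def unreferenced_variables(source: str) -> set:
    """Flag names that are assigned but never used, one check a static code analyzer performs."""
    tree = ast.parse(source)
    stored, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            (stored if isinstance(node.ctx, ast.Store) else loaded).add(node.id)
    return stored - loaded

sample = """
total = price * quantity
unused_rate = 0.07
print(total)
"""
print(unreferenced_variables(sample))   # {'unused_rate'}
```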

Another key is to conduct technical inspections of all software. A programmer is unlikely to insert unauthorized code into a program if the code is subject to inspection by his or her peers. Inspections can also help to ensure that the programmer does not deviate from the approved design.

Version control (Chapter 80) provides a mechanism for enforcing software development controls. Only the current version of a program is approved for production. Programmers are not permitted to directly access the production version, and all modifications are made to a test version. Before the test version becomes the current version, it must generate a clean compilation and a clean code analysis, pass a technical inspection, pass the appropriate acceptance tests, and be approved by the configuration approval board.
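
A sketch of how that promotion gate might be expressed in code, with each prerequisite reduced to a flag; the gate names are invented for the example, and in practice each flag would be set by the corresponding tool, inspection, or board decision.

```python
def approve_for_production(status: dict) -> bool:
    """A test version becomes the current version only when every gate is passed."""
    gates = ("clean_compilation", "clean_code_analysis",
             "technical_inspection", "acceptance_tests", "board_approval")
    return all(status.get(gate, False) for gate in gates)

test_version = {"clean_compilation": True, "clean_code_analysis": True,
                "technical_inspection": True, "acceptance_tests": True,
                "board_approval": False}
print(approve_for_production(test_version))   # False: not yet approved by the board
```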

Functional segregation helps, too. For example, in most mainframe environments, computer operators are not allowed to modify software and programmers are not allowed to operate the computer. Unless that standard is enforced, a programmer might be able to sit down at the console and make unauthorized changes to a program that cannot be detected by management or by the normal controls. In fact, in many computer centers, banning “via the console” debugging was one of the very first software development controls.

Audits can help to verify that the appropriate procedures are followed.

77.4.8 Communication controls

Encryption is a technique for encoding data, transmitting the data, and then decoding the data at the receiving end for processing. Line monitoring involves attaching special circuitry to the communication link to diagnose problems. For example, using loop back analysis, all data received by a destination node are automatically looped back to the transmitting node and compared with the original data.
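
A loop back analysis reduces to checking the data echoed by the destination node against the original transmission, as in this sketch; the payload is invented, and real implementations perform the comparison in the communication hardware or software rather than in application code.

```python
def loop_back_check(sent: bytes, looped_back: bytes) -> bool:
    """Compare the data echoed by the destination node with the original transmission."""
    return sent == looped_back

original = b"INV-TRANSACTION-00017"
echoed = b"INV-TRANSACTION-00017"    # what the destination node looped back
print("line OK" if loop_back_check(original, echoed) else "transmission error detected")
```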

77.5 Key terms
Audit —
A study of a system or a process designed to ensure that the established procedures and controls are followed.
Configuration approval board —
A committee that reviews change requests and proposed adaptive and perfective maintenance tasks, authorizes work to begin, and schedules the work.
Control —
An expected value that can be compared with feedback. If the feedback suggests a deviation from the expected value (the control), the system reacts by attempting to adjust itself.
Control total —
An accumulated sum, a count, or a similar value that summarizes the results of numerous computations or transactions.
Data integrity —
The state of a database that is protected against loss or contamination; data integrity is ensured by carefully controlling and managing data entry, data maintenance, and data access from the time the data first enter the system until they are of no further use.
Distribution control —
An output control designed to ensure that all outputs are distributed to the right location at the right time.
Documentation control —
An operational control that focuses on the documentation library.
Encrypt —
To convert to a secret code.
Error control —
A system control designed to determine if a program or routine can handle an unexpected response or input.
Exception test —
A test used to screen such “exceptional” values as a zero (0) in a field that will be used as a divisor.
Experimental design —
An auditing technique used to audit system accuracy by building a pilot prototype and testing it using controlled sample data.
Feedback —
The return of a portion of the system’s output to its input.
Information processing control —
An input, processing, or output control.
Input controls —
Tests used to screen out and (if possible) correct bad data before they enter the system.
Interrupt control —
A control or test to determine if a system or program is capable of recovering after it is intentionally restarted, abandoned, or abnormally terminated.
Inventory control —
A type of operational control that helps to ensure that the necessary software, hardware, and peripherals are properly maintained and connected for operation.
Line monitoring —
A communication control technique that involves attaching special circuitry to the communication link to diagnose problems.
Logical security —
Security precautions implemented by the computer itself.
Loop back analysis —
The process of automatically returning all received messages to the transmitting node where they are compared with the original data.
Operational control —
A control intended to provide an early warning in the event of system malfunction.
Parallel simulation —
An auditing technique that involves testing both the live system and a simulated system with the same data.
Physical security —
Techniques and procedures concerned with denying physical access to a system.
Processing control —
A test or technique that measures and controls a processing activity.
Reasonableness test —
A test used to screen invalid values (e.g., anything but F or M in a single-character sex or gender field).
Reconciliation control —
An output control designed to ensure that the right amount of data is output to support daily statistical analysis and decision-making activities.
Record control —
A simple processing control technique that involves counting and verifying the existence of every record in a database.
Regression testing —
An auditing technique that compares the results obtained when the system is being audited to the results obtained under normal conditions.
Scheduling control —
An operational control that is used to monitor input or output timings and provide an early warning of increasing queue lengths.
Security —
Procedures and other safeguards designed to protect the hardware, software, data, and other system resources from unauthorized access, use, modification, or theft.
Service controls —
Operational controls that measure such parameters as response rate, throughput rate, and turnaround time.
Software development controls —
A set of controls imposed on the software development process. Examples include static code analyzers, technical inspections, version controls, and so on.
Static code analyzer —
A program that scans (but does not execute) the code and flags such potential errors as synonyms, poor structure, inconsistent usage, dead code, unreferenced variables, and other deviations from coding standards.
Transmission control —
A processing control designed to ensure that there are no missing, incorrectly converted, or wrongly transmitted data.
Validity test —
A test used to ensure that each input field is the right type (numeric, alphabetic), that the value of a given field is within upper and lower bounds, that fixed-length fields (e.g., social security number, telephone number) are the right length, and so on.
Version control —
A set of tools and procedures used to track and manage multiple versions of the system and its components.
77.6 Software

Not applicable.

77.7 References
1.  Davis, G. B. and Olson, M. H., Management Information Systems: Conceptual Foundations, Structure, and Development, 2nd ed., McGraw-Hill, New York, 1985.
2.  Davis, W. S., Business Systems Analysis and Design, Wadsworth, Belmont, CA, 1994.
3.  Gilhooley, I., Defining the scope of DP controls, in A Practical Guide to EDP Auditing, Hannon, J., Ed., Auerbach, New York, 1982, 15.
4.  Powers, M. J., Cheney, P. H., and Crow, G., Structured Systems Development: Analysis, Design, and Implementation, 2nd ed., Boyd & Fraser, Boston, MA, 1990.
5.  Stamper, D. A., Business Data Communications, 4th ed., Benjamin/Cummings, Redwood City, CA, 1994.
