Process design

61.1 Purpose

This chapter briefly discusses several general process design principles and guidelines.

61.2 Strengths, weaknesses, and limitations

Not applicable.

61.3 Inputs and related ideas

Processes are designed in the context of a system. The information to support process design is collected during the problem definition (Part II) and analysis (Part IV) stages of the system development life cycle, and the processes are identified during the high-level design stage (Part V). Problem analysis techniques are discussed in Chapter 15. Process design tools and techniques include data flow diagrams (Chapter 24), Warnier-Orr diagrams (Chapter 33), system flowcharts (Chapter 37), logic flowcharts (Chapter 55), Nassi-Shneiderman diagrams (Chapter 56), decision trees (Chapter 57), decision tables (Chapter 58), pseudocode (Chapter 59), structured English (Chapter 60), and structure charts (Chapter 63). Such concepts as decomposition, cohesion, coupling, and span of control are discussed in Chapter 62.

61.4 Concepts

This section considers the factors that influence process design, the information needed to define a process and its data, and several general design guidelines.

61.4.1 Factors that influence process design

Before starting process design, the analyst or designer must carefully consider the process in the context of the system. For example, on-line, batch, and real-time systems (Chapter 73) are inherently different, and process timings must be consistent with system timings.

A complex process might be converted into several smaller, more manageable subprocesses using decomposition techniques (Chapter 15). However, numerous small processes create numerous interfaces, which can lead to operational complexity. One option is to merge several subprocesses into a single process using factoring techniques (Chapter 15).

In some cases, abstraction might be used to transform an abstract problem into a form that is more easily understood by the user. Abstraction is a problem-solving technique that focuses on investigating the most critical aspects of a problem and using the results to suggest a solution. Such tools as searching, generate-and-test, and justification building (Chapter 15) can be used to abstract a problem.

Transform-oriented processes create and/or derive new information based on the input data. For example, a payroll process calculates an employee’s net income given such input as hours worked, pay rate, federal, state, and city income tax rates, and so on. Such processes can usually be divided into input (afferent), output (efferent), and process (transform) modules to form a high-level control structure (Chapter 62).
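
To make the structure concrete, the hypothetical Python sketch below splits a payroll process into afferent, transform, and efferent modules under a simple control module; the field names and the flat 20 percent tax rate are illustrative assumptions, not details taken from this chapter.

def get_time_card(record):
    # afferent module: gather and prepare the input data
    return {"hours": record["hours"], "rate": record["rate"]}

def compute_net_pay(hours, rate, tax_rate=0.20):
    # transform module: derive new information (net pay) from the input
    gross = hours * rate
    return gross - gross * tax_rate

def write_pay_stub(employee_id, net_pay):
    # efferent module: structure and transmit the output
    print(f"Employee {employee_id}: net pay {net_pay:.2f}")

def payroll_control(record):
    # high-level control module linking the afferent, transform, and efferent parts
    card = get_time_card(record)
    net = compute_net_pay(card["hours"], card["rate"])
    write_pay_stub(record["id"], net)

payroll_control({"id": "E-101", "hours": 40, "rate": 15.0})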

Transaction-oriented processes transmit or route the right information to the right process and are typically decomposed to form a case structure (Chapter 62). For example, after receiving a customer order, an order routing process might be used to check the order type and then transmit backorders to a back order subprocess, new customer orders to a new customer subprocess, orders from existing customers to a customer verification subprocess, and so on.
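
The routing logic described above amounts to a case structure. The following sketch (the order types and handler names are assumed for illustration) shows one way it might look in Python.

def handle_backorder(order):
    # backorder subprocess
    print("backorder:", order["id"])

def handle_new_customer(order):
    # new customer subprocess
    print("new customer order:", order["id"])

def handle_existing_customer(order):
    # customer verification subprocess
    print("verify existing customer:", order["id"])

def route_order(order):
    # transaction-oriented control: check the order type and transmit the
    # order to the appropriate subprocess (a case structure)
    if order["type"] == "backorder":
        handle_backorder(order)
    elif order["type"] == "new":
        handle_new_customer(order)
    else:
        handle_existing_customer(order)

route_order({"id": "A-17", "type": "new"})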

The available technology also affects process design. For example, CASE tools, screen and form generators, and prototyping fundamentally change the process of designing processes. To cite one example, the CASE repository (Chapter 5) might serve as a database for designing a new process based on similar existing processes.

61.4.2 Process content

Before starting detailed process design, the analyst or designer must define both static and dynamic information for each process. Static information includes such attributes as the process name, the process number (and other process identifiers), any algorithms or logic associated with the process, the inputs to the process, and the outputs from the process. Dynamic information includes such attributes as the processing cycle (daily, weekly, monthly, quarterly, annually), the nature of the output (query, periodic report), any parameters that vary over time (e.g., the price of gasoline), and any other parameters not subject to the organization’s control (e.g., Internal Revenue Service regulations).
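
One way to record this information is a simple process-definition record. The Python dataclass below is a sketch only; the attribute names and sample values are assumptions.

from dataclasses import dataclass, field

@dataclass
class ProcessDefinition:
    # static information: unlikely to change once defined
    name: str
    number: str
    algorithm: str
    inputs: list
    outputs: list
    # dynamic information: time-related or outside the organization's control
    cycle: str = "monthly"                    # daily, weekly, monthly, quarterly, annually
    output_nature: str = "periodic report"    # query or periodic report
    external_parameters: dict = field(default_factory=dict)

payroll = ProcessDefinition(
    name="Compute net pay",
    number="2.3",
    algorithm="net pay = gross pay - withholding",
    inputs=["hours worked", "pay rate"],
    outputs=["net pay"],
    cycle="weekly",
    external_parameters={"federal income tax rate": 0.20},
)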

61.4.3 Process data

Once the process contents are defined, the analyst or designer must check the process’s data flows. Except for constants, input data elements must flow from either a source or a prior process, and output data elements must flow to a following process. Any data stores associated with the process must be mapped to a file or a database. A data flow diagram (Chapter 24) is an excellent tool for checking data flows.
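
The same consistency rule can be checked mechanically. The sketch below uses invented process and data element names, and for simplicity it ignores the ordering of processes when matching inputs to prior outputs.

# hypothetical catalog of processes and of external sources/destinations
processes = {
    "verify order": {"inputs": ["customer order"], "outputs": ["valid order"]},
    "fill order":   {"inputs": ["valid order"],    "outputs": ["packing slip"]},
}
sources = {"customer order"}   # data elements flowing in from external sources
sinks = {"packing slip"}       # data elements flowing out to external destinations

produced = sources | {d for p in processes.values() for d in p["outputs"]}
consumed = sinks | {d for p in processes.values() for d in p["inputs"]}

for name, p in processes.items():
    for d in p["inputs"]:
        if d not in produced:
            print(f"{name}: input '{d}' has no source or prior process")
    for d in p["outputs"]:
        if d not in consumed:
            print(f"{name}: output '{d}' does not flow to a following process")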

61.4.4 Process design guidelines

Listed below are several general process design principles and guidelines.

61.4.4.1 Stepwise refinement

Stepwise refinement is a top-down strategy for dealing with complex or abstract processes. The basic idea is to study the process and define it at a conceptual level, analyze the conceptual knowledge and describe it at a logical level, and then transform the logical information into corresponding physical specifications. The structured analysis and design methodology (Chapter 3) utilizes stepwise refinement.
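
In code, stepwise refinement typically shows up as a top-level routine written first in terms of named steps that are filled in on later passes. The fragment below is a hypothetical illustration; the invoicing example and all of its names are assumptions.

def produce_invoices(orders):
    # first pass (conceptual level): the process is defined in terms of named steps
    for order in orders:
        priced = price_order(order)
        print_invoice(priced)

def price_order(order):
    # later pass (logical level): the pricing step is refined into detail
    order["total"] = sum(item["qty"] * item["price"] for item in order["items"])
    return order

def print_invoice(order):
    # final pass (physical level): the output format is specified
    print(f"Invoice {order['id']}: total {order['total']:.2f}")

produce_invoices([{"id": "INV-1", "items": [{"qty": 2, "price": 9.50}]}])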

61.4.4.2 Modularization

A complex process is normally implemented as a set of linked, single-task modules (Chapter 62), with a high-level control module calling subordinate modules in the proper order. The control modules and their subordinates must be designed in such a way that they can be easily linked and can share common information. Only those data elements directly relevant to its subtask should be passed to or returned by a submodule, and a called module should store no global data elements. A called module should always return control to the calling module.

Coupling is a measure of module interdependence. Generally, coupling is a function of the amount of data passed between the calling module and its subordinate, and more data implies tighter coupling. A major objective of process design is to reduce coupling. Structured walkthroughs and inspections (Chapter 23) can help.
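
For example, the two hypothetical routines below compute the same result, but the second is more loosely coupled because it receives only the data elements it actually needs (the record layout and field names are assumed).

def reorder_point_tight(inventory_record):
    # tighter coupling: the whole record is passed, so this module depends on
    # the record's structure even though it needs only two fields
    return inventory_record["weekly_demand"] * inventory_record["lead_time_weeks"]

def reorder_point_loose(weekly_demand, lead_time_weeks):
    # looser coupling: only the data elements directly relevant to the subtask
    return weekly_demand * lead_time_weeks

print(reorder_point_loose(40, 2))   # 80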

Cohesion is a measure of a module’s completeness. A well-designed module performs a single, complete task. If a module must be decomposed, each submodule should perform a single, complete subtask.

Span of control is another important criterion. Generally, a super-process should control no more than seven subordinate subprocesses.

61.4.4.3 Information hiding

The information hiding principle suggests that all information not directly relevant to a given process should be hidden from that process. Only essential data elements should be passed to a process when the process is called. No subprocess should be allowed to access or modify any global data element that is not explicitly passed to it. If a given process utilizes local data elements, the local data should be known only within that process. A called process should be designed to react only when the correct information is passed to it.
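
A minimal sketch of the principle (the pricing example and its names are assumptions) keeps local data hidden inside the process and exposes only the parameters a caller must supply.

def make_pricing_process():
    # local data known only within this process
    discount_table = {"retail": 0.00, "wholesale": 0.15}

    def price(list_price, customer_class):
        # the process reacts only to the information explicitly passed to it
        return list_price * (1.0 - discount_table.get(customer_class, 0.0))

    return price

price = make_pricing_process()
print(price(100.0, "wholesale"))   # 85.0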

61.5 Key terms

Abstraction — A problem-solving technique that focuses on investigating the most critical aspects of a problem and using the results to suggest a solution.
Afferent process — A process that gathers and prepares input data.
Cohesion — A measure of a module’s completeness.
Control structure — A hierarchical model of the flow of control through a program. The control structure resembles a military chain of command or an organization chart. At the top is a main control module that calls secondary control structures. At the bottom are the computational routines, each of which implements a single algorithm.
Coupling — A measure of a module’s independence; fewer parameters flowing into or out from a module imply looser coupling.
Decomposition — A problem analysis paradigm that calls for breaking a problem into more manageable subproblems and then attacking the subproblems.
Dynamic information — Time-related parameters, or process information that can change; for example, the processing cycle, the nature of the output, any parameters that vary over time, and any other parameters not subject to the organization’s control.
Efferent process — A process that structures and/or transmits output data.
Factoring — Merging several small, isolated, overlapping, or related problems to form a meta-problem.
Functional decomposition — A program design methodology in which the program is broken down (or decomposed) into modules based on the processes or tasks they perform.
Information hiding — A principle that suggests that all information not directly relevant to a given process should be hidden from that process.
Span-of-control (breadth) — A measure of the number of modules directly controlled by a higher-level routine.
Static information — Process information that is not likely to change; for example, the process name, the process number, necessary algorithms, inputs, and outputs.
Stepwise refinement — A top-down strategy for dealing with complex or abstract processes.
Transaction-oriented process — A process that transmits or routes the right information to the right process.
Transform process — A process that converts the input data to output form.
Transform-oriented process — A process that creates and/or derives new information based on the input data.

61.6 Software

Not applicable.

