Security

71.1 Purpose

The objective of security is to protect the hardware, software, data, and other system resources from unauthorized, illegal, or unwanted access, use, modification, or theft. This chapter describes several information system security risks, outlines some strategies for countering them, and briefly discusses how security is designed into a system.

71.2 Strengths, weaknesses, and limitations

This chapter focuses on concepts and principles. Where appropriate, the strengths and weaknesses of various approaches will be discussed in context.

71.3 Inputs and related ideas

During the problem definition (Part II) and analysis (Part IV) stages of the system development life cycle, the system’s security exposures and risks are identified. The costs associated with appropriate countermeasures are a part of the cost/benefit analysis (Chapter 38). At the end of the analysis stage, the necessary security measures are documented in the requirements specification (Chapter 35). Virtually any system component can present a security risk. Consequently, security is an important consideration in the design of almost every system component and is relevant to most of the chapters in Part VI. System controls, including security controls, are discussed in Chapter 77.

71.4 Concepts

The objective of security is to protect the hardware, software, data, and other system resources from unauthorized, illegal, or unwanted access, use, modification, or theft. In a traditional information system constructed around a centralized mainframe, the computer and most of its peripherals are locked in a restricted-access room. Such lock-and-key security is not very useful on a modern network, however. The combination of large numbers of users and physically unsecured peripherals, cables, communication lines, and access points makes modern network-based systems particularly tempting targets. The Internet complicates the problem.

This chapter describes several information system security risks, outlines some strategies for countering them, and briefly discusses how security is designed into a system.

71.4.1 Security threats

To an expert, an item is considered secure if the cost of breaking security (including the risk of getting caught) exceeds the item's value. To some people, however, such things as military secrets or a corporation's strategic data are effectively priceless, and no realistic cost exceeds a priceless item's value. Consequently, perfect information system security may be an impossible goal.

A good way to visualize security threats is to imagine the system as a chain and look for weak links. Exposures can come from people, hardware, and/or software.

71.4.1.1 People

Recently, hackers and crackers have received a great deal of publicity. Originally, a hacker was an expert programmer with a knack for creating elegant software. Today, however, the term is more commonly applied to someone who illegally breaks into computer systems. Within the programming community, hackers are viewed as relatively harmless, while crackers, people who break into computers (generally over a communication line) with malicious intent, are viewed as criminals. In popular usage, hacker (the more common term) is applied to both benign and malicious intruders.

In spite of all the publicity about hackers and crackers, such insiders as employees, former employees, consultants, clients, and customers commit most security violations. Unlike hackers, insiders have relatively free access to the system. Industrial spies have been known to approach insiders with offers of money in exchange for sensitive information or software. Disgruntled information system employees (both current and former) are particularly dangerous.

Even honest insiders can represent a security risk. People are not always careful about protecting their passwords, security codes, telephone numbers, equipment, and workplaces. For example, hackers have been known to guess casually selected passwords, obtain passwords and other security information by going through paper waste (dumpster diving), or pass themselves off as authorized users and convince an employee to give them the information they need (social engineering).

71.4.1.2 Hardware

The personal computer or workstation is one of the weakest links in network security. Users upload and download data to and from the Internet, share public domain software, and share common peripherals, any of which can constitute a security threat. Unauthorized access to the server’s public access files and peripherals (magnetic tape, printers, plotters, and so on) complicates security.

The physical network is also vulnerable. Intruders have been known to tap a cable or a telephone line or intercept satellite and microwave communications. Dial-in access is particularly difficult to control because an incoming call can originate anywhere. In fact, hackers and crackers sometimes run programs that dial thousands of numbers in sequence and note only the numbers that return a modem tone (power dialing). Those numbers are later used as possible access points to a system.

The theft of laptop computers is a growing problem. In addition to the value of the hardware and software, a laptop’s hard disk might hold corporate data, passwords, access codes, and other sensitive information.

71.4.1.3 Software

Execution errors and inaccurate input data generated by both authorized and unauthorized users present a special challenge for network security design. Given the number of concurrent users and active tasks, backtracking to the point when the affected information was last correct is very difficult. Additionally, unauthorized access, whether malicious or benign, makes it difficult to certify the integrity of a database, particularly if there is any chance that its contents were modified.

Other software problems are a bit more dramatic. A time bomb is a program that executes on a particular date or when a particular condition is met. A Trojan horse is a seemingly harmless program that invites an unsuspecting user to try it. Some time bombs and Trojan horses set off logic bombs, programs that (symbolically) blow up in memory, perhaps trashing a hard disk or selected data.

A rabbit is a program that replicates itself until no memory is left and no other programs can run. For example, one well-known rabbit creates two copies of itself and then starts them. A few microseconds later there are four rabbits running. Then eight, then sixteen, and so on until the rabbit is out of control.
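
The doubling described here is exponential growth, and a short calculation shows how quickly it exhausts a machine. The generation count and memory figures in the sketch below are hypothetical, chosen only to illustrate the arithmetic; they are not from the chapter.

```python
# Sketch: exponential growth of a self-replicating "rabbit" program.
# The per-copy footprint and total memory are hypothetical illustrations.
processes = 1
memory_per_process_kb = 64          # assumed footprint of one rabbit copy
total_memory_kb = 8 * 1024 * 1024   # assumed 8 GB of available memory

generation = 0
while processes * memory_per_process_kb < total_memory_kb:
    processes *= 2                  # each rabbit starts two new copies
    generation += 1

print(f"Memory exhausted after {generation} generations "
      f"({processes:,} rabbit processes).")
```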

A virus is a program that is capable of replicating itself and spreading between computers. Like its biological namesake, a virus is a parasite that attaches itself to another program to survive and propagate. (The boot routine found on every diskette is a common target.) Viruses typically spread to other computers through infected diskettes or downloaded copies of infected programs.

A virus needs a host. A worm, in contrast, is a program that is capable of spreading under its own power. One common technique is to send out small, virus-like scout programs from a source computer. Once the scout is established on the target computer, it sends a message back to the source computer requesting transmission of the rest of the worm.

In addition to the logic needed to replicate and establish itself on a new computer, a virus or worm can also carry a payload that holds a logic bomb, a time bomb, a rabbit, or some other type of destructive code. Viruses and worms have been known to erase disks, crash programs, and modify data.

71.4.2 Countermeasures

There are numerous tools and techniques for countering a security threat.

71.4.2.1 Physical security

Physical security is concerned with denying physical access to the system, preventing the physical destruction of the system, and keeping the system available. For example, mainframe computers are often located in controlled-access rooms and personal computers are sometimes cabled to work tables or placed in locked cabinets when they are not in use. Access to a secure area can be controlled by issuing identification cards, badges, keys, or personal identification numbers (PINs) to authorized personnel, and surveillance cameras are becoming increasingly common. Modern biometric devices can be used to identify an individual via retinal scan, fingerprint analysis, voiceprint, or signature analysis.

The Internet is a significant source of security intrusions. Consequently, many organizations use firewalls (Figure 71.1) to insulate their internal network from the Internet (or from other public networks). A firewall is a set of hardware, software, and data that sits between the internal network and the Internet, screens all incoming and/or outgoing transactions, and allows only authorized transactions to get through. Often, the firewall is implemented on a physically separate computer, with a public host (the computer that is linked to the Internet) outside the firewall and the internal server inside the firewall. Additionally, critical software can be kernelized, or partitioned to make unauthorized access more difficult.

Figure 71.1  A firewall insulates the internal network from the Internet.
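
As a simple illustration of the screening idea, the sketch below checks each incoming transaction against a list of allowed services and source networks; anything not explicitly authorized is refused. The rules and addresses are hypothetical, not part of the chapter.

```python
# Minimal sketch of firewall-style screening: only transactions that match
# an explicit "allow" rule get through. Rules and addresses are hypothetical.
from ipaddress import ip_address, ip_network

ALLOW_RULES = [
    {"network": ip_network("10.0.0.0/8"), "port": 80},      # internal web traffic
    {"network": ip_network("192.168.1.0/24"), "port": 25},  # internal mail host
]

def is_authorized(source_ip: str, dest_port: int) -> bool:
    """Return True only if some allow rule matches the transaction."""
    addr = ip_address(source_ip)
    return any(addr in rule["network"] and dest_port == rule["port"]
               for rule in ALLOW_RULES)

print(is_authorized("10.1.2.3", 80))     # True  - matches the first rule
print(is_authorized("203.0.113.9", 80))  # False - outside address, dropped
```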

A disaster plan is essential in the event of such physical threats as fire, flood, earthquake, or power loss. Environmental controls might be needed to regulate heat, moisture, dust, and so on. Backup copies of all software and data and redundant hardware components are important elements of a recovery plan.

71.4.2.2 Logical security

Logical security is implemented by the system as it runs. For example, on most network-based systems, each authorized user is assigned a unique identification code and a password. In some cases, additional passwords are required to access certain critical data or to execute sensitive programs. Often, access privileges are assigned in layers, with most users restricted to read-only access, a smaller group given the authority to change selected data (perhaps subject to independent verification), and only a few people assigned system operator (sysop) status (which implies the authority to access and change anything). Typically, the operating system checks a user profile or an access control matrix to verify a given user’s access privileges.
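
A minimal sketch of such a check follows. The user names, resources, and privilege levels are hypothetical, but the idea is the one described above: the system looks up the requesting user in an access control matrix before honoring a request.

```python
# Sketch of an access control matrix lookup. Users, resources, and
# privilege levels are hypothetical illustrations.
ACCESS_MATRIX = {
    # (user, resource):            privileges
    ("jsmith", "payroll_file"):    {"read"},
    ("mjones", "payroll_file"):    {"read", "write"},          # change authority
    ("sysop",  "payroll_file"):    {"read", "write", "admin"}, # system operator
}

def check_access(user: str, resource: str, action: str) -> bool:
    """Grant the request only if the matrix lists that privilege."""
    return action in ACCESS_MATRIX.get((user, resource), set())

print(check_access("jsmith", "payroll_file", "read"))   # True  (read-only user)
print(check_access("jsmith", "payroll_file", "write"))  # False (denied)
```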

Just having a valid user code and password does not necessarily prove that a user is who he or she claims to be. Authentication, the process of verifying the user’s identity, often relies on remembered information (such as a PIN or a mother’s maiden name) or variations of the biometric devices described in Section 71.4.2.1.
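
One common way to check remembered information such as a password or PIN without storing it in readable form is to keep only a salted hash and recompute it at logon. The sketch below uses Python's standard library for illustration; the stored values and the example PIN are hypothetical.

```python
# Sketch: authenticating a user against a salted hash of remembered
# information (a password or PIN). The secret "1234" is a hypothetical example.
import hashlib, hmac, os

def make_record(secret: str) -> tuple[bytes, bytes]:
    """Store a random salt and the derived hash, never the secret itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def authenticate(secret: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = make_record("1234")         # e.g., a remembered PIN
print(authenticate("1234", salt, stored))  # True
print(authenticate("9999", salt, stored))  # False
```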

Callback is another useful authentication tool. After a user logs on from a remote workstation, the host computer verifies the user code and password, breaks the connection (hangs up), looks up the authorized telephone number for that user’s workstation, and then redials the workstation.
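
The callback sequence can be summarized in a short sketch. The directory of authorized numbers and the dialing function below are hypothetical placeholders for whatever the host actually uses.

```python
# Sketch of the callback authentication sequence described above.
# AUTHORIZED_NUMBERS and dial_out() are hypothetical placeholders.
AUTHORIZED_NUMBERS = {"jsmith": "555-0101", "mjones": "555-0102"}

def dial_out(number: str) -> None:
    print(f"Redialing authorized workstation at {number} ...")

def callback_logon(user: str, password_ok: bool) -> None:
    if not password_ok:
        return                                 # step 1: verify user code and password
    # step 2: break the connection (hang up) -- omitted in this sketch
    number = AUTHORIZED_NUMBERS.get(user)      # step 3: look up the authorized number
    if number:
        dial_out(number)                       # step 4: redial the workstation

callback_logon("jsmith", password_ok=True)
```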

Viruses can be difficult to detect or remove, so the best defense is prevention. Personnel should not accept “free” software (on diskette, CD-ROM, or via the network) unless the source is known to be clean. Anti-virus software is designed to recognize certain code patterns (called virus signatures) and sound an alarm when a virus is detected. Such software should be used to screen all foreign disks, CD-ROMs, and downloaded software (including software from “legitimate” sources) before they are released for use. On many systems, anti-virus software runs continuously in the background.
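
In its simplest form, signature scanning is a search for known byte patterns in a file or downloaded program. The sketch below is a toy illustration of that idea, not a real anti-virus engine; the signatures and the sample data are made up.

```python
# Toy sketch of signature scanning: search program contents for known byte
# patterns (virus signatures). The signatures below are made-up examples.
SIGNATURES = {
    "example_virus_a": bytes.fromhex("deadbeef"),
    "example_virus_b": b"EVIL_PAYLOAD",
}

def scan(contents: bytes) -> list[str]:
    """Return the names of any signatures found in the scanned contents."""
    return [name for name, pattern in SIGNATURES.items() if pattern in contents]

sample = b"...header..." + bytes.fromhex("deadbeef") + b"...rest of program..."
hits = scan(sample)
if hits:
    print("ALERT: possible infection:", ", ".join(hits))
```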

Other techniques are intended to provide recovery information or legal documentation when a security breach does occur. A transaction log is a list of all of a system's recent transactions. A comparator is a software routine that compares the contents of a file or a record before and after a transaction and reports any differences. Audit trails and audit procedures can help, too.
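
A comparator can be as simple as a field-by-field difference report. The sketch below compares a before image and an after image of a record; the record layout and values are hypothetical.

```python
# Sketch of a comparator: report every field that changed between the
# before and after images of a record. The record layout is hypothetical.
def compare(before: dict, after: dict) -> dict:
    """Return {field: (old, new)} for every field whose value changed."""
    return {field: (before.get(field), after.get(field))
            for field in before.keys() | after.keys()
            if before.get(field) != after.get(field)}

before = {"account": "1001", "balance": 250.00, "limit": 500.00}
after  = {"account": "1001", "balance": 175.00, "limit": 500.00}

for field, (old, new) in compare(before, after).items():
    print(f"{field}: {old} -> {new}")   # balance: 250.0 -> 175.0
```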

71.4.2.3 Personnel security

People cause most security problems. Consequently, although they are expensive and sometimes controversial, such personnel controls as pre-employment screens, periodic background checks, and rotating job assignments are necessary. A basic accounting principle suggests that no single individual should ever be allowed to place an order and pay the resulting bill. Similarly, systems are often designed to segregate such related functions as data entry and data verification by assigning the responsibility to different departments.

Standard operating procedures, policies, and/or security manuals are an important part of any security plan, and training is crucial. Employees must understand how to implement the security procedures. Perhaps more important, they must know why a given security procedure is necessary.

For example, given a choice, most people select an easy to remember (and thus easy to guess) password that they never change. Standard procedures can be implemented by the system to force users to change their passwords at regular intervals. The password selection software can be designed to help the user select a better password by rejecting dictionary words, requiring a minimum password length, requiring a combination of letters and digits, and so on. Additionally, explaining why security is necessary and outlining some of the tricks hackers use to guess passwords can help encourage employees to do a better job.
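
A minimal sketch of such password screening logic follows. The specific rules (a minimum length, a mix of letters and digits, a dictionary check) mirror the examples above; the small word list stands in for a real dictionary file and is purely hypothetical.

```python
# Sketch of password screening: reject dictionary words, short passwords,
# and passwords that lack a mix of letters and digits. The word list is a
# hypothetical stand-in for a real dictionary.
COMMON_WORDS = {"password", "letmein", "welcome", "secret"}

def acceptable(password: str, min_length: int = 8) -> bool:
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_WORDS:
        return False                      # reject dictionary words
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return has_letter and has_digit       # require letters and digits

print(acceptable("password"))   # False - dictionary word
print(acceptable("k9x2mq7t"))   # True  - long enough, letters and digits
```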

71.4.2.4 Encryption

To make sensitive information difficult to read even if a message is intercepted, the data can be encrypted (converted to a secret code), transmitted, and then decrypted at the other end of the line. The U.S. National Bureau of Standards' Data Encryption Standard (DES), a single-key (symmetric) system that was long considered very difficult to break, has been used for secure government transmissions and for most electronic funds transfers. Another popular encryption program, PGP (Pretty Good Privacy), is based on public/private key techniques; it was created without government support and is available on the Internet.

As the name implies, a two-key or public/private key system uses two keys. The recipient’s public key, which is published or readily available on-line, is used to encrypt the message. Once the message is received, only the secret private key can be used to decrypt it.
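
The sketch below illustrates the two-key idea using RSA from the third-party Python cryptography package (an assumption; any comparable library would serve): anyone can encrypt with the published public key, but only the holder of the matching private key can decrypt.

```python
# Sketch of public/private (two-key) encryption using RSA.
# Assumes the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt with the public key ...
ciphertext = public_key.encrypt(b"wire transfer: $1,000 to account 1001", oaep)

# ... but only the holder of the private key can decrypt.
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext.decode())
```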

71.4.3 Security design

Every organization should have established security standards and guidelines that apply to all information systems. Such standards help to ensure that security is not overlooked during the system development process and provide the designer with a security template.

An important element of any set of security standards is the recognition that not all systems, or even all components of the same system, require the same level of security. For example, using retinal scans to control access to a file of press releases is silly, and such inappropriate precautions can destroy the credibility of legitimate security. The standards should identify several levels of security risks and suggest security precautions consistent with the risk.

To an expert, an item is considered secure if the cost of breaking security exceeds the item’s value. Consequently, the appropriate level of security for a given system or component is a function of the value of that system or component to those who might be tempted to access or steal it. The objective is to balance cost and risk. During the problem definition and analysis stages of the system development life cycle, the security exposures and risks should be identified. The costs associated with appropriate countermeasures should be a part of the cost/benefit analysis. At the end of the analysis stage, the necessary security measures should be documented in the requirements specification.
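
One common way to make the cost/risk balance concrete (a standard risk formula, not one given in this chapter) is to estimate an annualized loss expectancy and compare it with the annual cost of the proposed countermeasure. The figures below are hypothetical.

```python
# Sketch of a cost/risk comparison using annualized loss expectancy (ALE),
# a standard risk formula; the dollar figures and rates are hypothetical.
single_loss_expectancy = 200_000    # estimated cost of one security breach ($)
annual_rate_of_occurrence = 0.25    # expected breaches per year
countermeasure_cost = 30_000        # annual cost of the proposed safeguard ($)

ale = single_loss_expectancy * annual_rate_of_occurrence   # $50,000 per year
print(f"Annualized loss expectancy: ${ale:,.0f}")

if countermeasure_cost < ale:
    print("Countermeasure is justified: expected loss exceeds its cost.")
else:
    print("Countermeasure costs more than the risk it removes.")
```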

Security cannot be added onto a system; it must be designed into the system. A system-wide security plan should be created early in the design process. Once the approach to system security is selected, appropriate security features should be designed into the hardware, the software, the data, and the procedures. Virtually any system component can represent a security risk. Consequently, security is an important consideration in the design of almost every system component.

During the operation and maintenance stage of the system development life cycle, system controls (Chapter 77) play an important role in supporting system security. In addition to the security controls, operational controls, data integrity controls, and auditing controls can provide an early warning of security problems.

71.5 Key terms
Anti-virus software —
Software designed to recognize certain code patterns (called virus signatures) and sound an alarm when a virus is detected.
Authentication —
The process of verifying the user’s identity.
Biometric device —
A system component that can identify an individual based on such biological criteria as a retinal scan, a fingerprint analysis, a voice print, or a signature analysis.
Callback —
An authentication tool in which the host computer verifies the user code and password, breaks the connection (hangs up), looks up the authorized telephone number for that user’s workstation, and then redials the workstation.
Comparator —
A software routine that compares the contents of a file or a record before and after a transaction and reports any differences.
Cracker —
A person who breaks into computers (generally over a communication line) with malicious intent.
Data Encryption Standard (DES) —
A single-key (symmetric) encryption standard used for secure government transmissions and for most electronic funds transfers.
Dumpster diving —
Searching for passwords and other security information by going through paper waste.
Encrypt —
To convert to a secret code.
Firewall —
A set of hardware, software, and data that sits between the network and the Internet (or other public network), screens all incoming and/or outgoing transactions, and allows only authorized transactions to get through.
Hacker —
Originally, an expert programmer with a knack for creating elegant software; today, the term is more commonly applied to someone who illegally breaks into computer systems.
Kernel —
A unit of code or a routine that is physically and/or logically isolated from other software and consequently protected.
Logic bomb —
A program that (symbolically) blows up in memory.
Logical security —
Security features implemented by the system as it runs.
Password —
A secret word or string of characters used to uniquely identify a given user.
PGP (pretty good privacy) —
A popular public/private key encryption algorithm that was created without government support and is available on the Internet.
Physical security —
A set of security features concerned with denying physical access to the system, preventing the physical destruction of the system, and keeping the system available.
Power dialing —
Running a program that dials thousands of numbers in sequence and notes only the numbers that return a modem tone.
Public/private key system —
An encryption system that uses two keys; the message is encrypted using the published public key and decrypted using the secret private key.
Rabbit —
A program that replicates itself until no memory is left and no other programs can run.
Security —
Hardware, software, and procedures intended to protect the hardware, software, data, and other system resources from unauthorized, illegal, or unwanted access, use, modification, or theft.
Social engineering —
The act of pretending to be an authorized user and attempting to convince an employee or other human source to divulge sensitive information.
Time bomb —
A program that executes on a particular date or when a particular condition is met.
Transaction log —
A list of a system’s transactions.
Trojan horse —
A seemingly harmless program that invites an unsuspecting user to try it.
Virus —
A program that is capable of replicating itself and spreading between computers by attaching itself to another program.
Worm —
A program that is capable of spreading from one computer to another under its own power.
71.6 Software

The following World Wide Web sites are excellent sources of information on various types of security software:

Encryption software
http://www.pgp.com

Firewall software
http://www.sctc.com

Security software products
http://www.datafellows

