Computers: Read only memory

Read only memory

Read only memory, or ROM, is a permanent store on microchip, normally used for holding programs. In this type of memory the tiny transistor switches cannot be turned ON or OFF, but are permanently encoded at the time of the chip's manufacture to produce the required program. These chips are called read only because it is not possible to write new programs or data to them.

The advantages of using ROM chips instead of storing data on disk and reading it into RAM as required are:

• It is more convenient to have frequently-used software immediately available inside the computer instead of on disk.

• When the computer is running, all of the RAM is left free for data (though note that for the PC the 640K limit applies to both ROM and RAM).

If the computer has a hard disk (see next chapter), the first of these is of little account, as the software can be loaded almost as quickly from the hard disk. However, ROM-based software is useful (sometimes essential) in the case of laptop and hand-held computers which lack a hard disk (or sometimes any sort of disk).

The disadvantages of ROM-based software are:

• The relatively high cost compared to disk storage.

• It may be difficult to upgrade to later versions of the software.

Some ROMs can be erased and reprogrammed. These are called EPROMs (short for Erasable Programmable ROM). You can recognize one of these by the small glass window in the surface of its casing, below which is the actual microchip (see Figure 1.1). The contents of the chip can be erased by exposing it to ultraviolet light for about 20 minutes, and it can then be reprogrammed by loading a new program into it.

[Figure 1.1]

The memory map

I've said that memory on the PC is limited to 640K. This is not strictly true: the CPU is able to address up to 1 Mbyte. However, the area of memory above 640K (the top 384K, since 1 Mbyte is 1024K) is reserved for system tasks such as controlling the output to the monitor. The 640K is the amount of memory available for programs and data, including the operating system itself, which takes up about 40K. (The exact amount depends on the version of the operating system.)
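To make the arithmetic concrete, here is a minimal sketch in C of the memory map just described. The region names and the segment:offset example are my own illustrative assumptions, not part of any DOS standard; the only facts it relies on are the 640K and 1 Mbyte figures above and the real-mode rule that a linear address is the segment multiplied by 16 plus the offset.

```c
/* Sketch of the PC's 1 Mbyte real-mode memory map (illustrative names). */
#include <stdio.h>

#define KB            1024UL
#define CONVENTIONAL  (640UL * KB)                    /* 0K-640K: programs and data */
#define UPPER_MEMORY  (384UL * KB)                    /* 640K-1024K: system tasks   */
#define ADDRESS_SPACE (CONVENTIONAL + UPPER_MEMORY)   /* 1 Mbyte in total           */

/* Real-mode addressing: a 16-bit segment and a 16-bit offset are combined
   as segment * 16 + offset, which is what limits the CPU to about 1 Mbyte. */
static unsigned long linear_address(unsigned int segment, unsigned int offset)
{
    return (unsigned long)segment * 16UL + offset;
}

int main(void)
{
    printf("Conventional memory: %lu K\n", CONVENTIONAL / KB);   /* 640 K  */
    printf("Upper memory area:   %lu K\n", UPPER_MEMORY / KB);   /* 384 K  */
    printf("Total address space: %lu K\n", ADDRESS_SPACE / KB);  /* 1024 K */

    /* 0xB800:0000 is the traditional start of the colour text screen buffer,
       one of the 'system tasks' that live above the 640K boundary.          */
    printf("Text screen buffer:  %lu K\n", linear_address(0xB800, 0) / KB);
    return 0;
}
```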

It is in fact possible to add many Mbytes of 'expanded' memory. The CPU is not able to access this directly, but what it can do is swap a 64K chunk of this memory into a 64K 'page frame' located in the top 384K of ordinary memory, and access that. By rapidly swapping different 64K pages of expanded memory in and out of this page frame area, it can in effect reach the whole of the expanded memory.

For this to work, the expanded memory has to conform to the so-called LIM (Lotus-Intel-Microsoft) standard, and any software which wishes to use this facility has to be written to this standard. It sounds complicated, but it works well enough. In fact the software I am using to write this book, and the text itself, are residing in the expanded memory area of my computer, leaving virtually the entire 640K of ordinary memory free for other things. This means that I can run other software at the same time, should I so wish, and jump instantly between my writing work and other computing activities.
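As a rough illustration of the page-frame mechanism described above, the following C sketch simulates it in software. This is an assumption-laden toy, not the LIM interface itself: a real expanded-memory board remaps addresses in hardware and is driven through an EMS device driver, whereas this sketch simply copies 64K pages in and out of a buffer standing in for the page frame, and names such as map_page and NUM_PAGES are invented for the example.

```c
/* Toy simulation of LIM-style expanded memory paging (not the real EMS API). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE (64UL * 1024UL)   /* one 64K page                          */
#define NUM_PAGES 16                /* 16 x 64K = 1 Mbyte of expanded memory */

static unsigned char expanded[NUM_PAGES][PAGE_SIZE]; /* the expanded memory     */
static unsigned char page_frame[PAGE_SIZE];          /* 64K window the CPU sees */
static int mapped_page = -1;                         /* page currently mapped   */

/* Swap a chosen 64K page of expanded memory into the page frame,
   writing the previously mapped page back out first. */
static void map_page(int page)
{
    if (page < 0 || page >= NUM_PAGES) {
        fprintf(stderr, "no such page\n");
        exit(EXIT_FAILURE);
    }
    if (mapped_page >= 0)
        memcpy(expanded[mapped_page], page_frame, PAGE_SIZE);
    memcpy(page_frame, expanded[page], PAGE_SIZE);
    mapped_page = page;
}

int main(void)
{
    map_page(3);                                   /* work on page 3          */
    strcpy((char *)page_frame, "text kept in expanded memory");
    map_page(7);                                   /* switch to another page  */
    map_page(3);                                   /* ...and back again       */
    printf("%s\n", (char *)page_frame);            /* the text is still there */
    return 0;
}
```

The program only ever touches the 64K page frame, yet by swapping pages in and out it can reach the whole 1 Mbyte of simulated expanded memory, which is the essence of how software written to the LIM standard works within the CPU's addressing limit.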


Types of computer

Computers can be classified in a variety of ways. Traditionally, they have been divided into mainframe, mini, and microcomputers, but with the increasing power of microcomputers this distinction is becoming blurred.

The largest type of computer is the mainframe, which takes its name from the big metal cabinet or 'frame' which was originally required to house the central processing unit. In the past, a mainframe might occupy a large room and cost millions of pounds (and be less powerful than modern PCs!). Even today it must be housed in a number of sizeable cabinets, and costs are of the order of £100,000 and upwards. A mainframe can cope with the data processing requirements of a large business, having the following advantages over smaller computers:

• It processes data at higher speeds, and so handles large jobs more quickly.

• The disk drives can store much more data than is possible in a smaller system, and they can therefore handle larger files.

• Its operating system allows a number of people to use it simultaneously, through a technique called multiprogramming (see page 207). They are connected to it by keyboard-and-screen units called terminals or visual display units (VDUs).

Minicomputers are cut-down mainframes, often costing between £10,000 and £20,000, and able to handle the work of smaller organizations. A mini will be smaller than a mainframe, its storage capacity will be smaller, and it will not be able to support so many users at the same time.

Microcomputers are the desktop machines that have swept the computer scene in the last decade. They include hand-held devices and home computers, as well as business machines. The latter are called personal computers, because they are intended for the use of a single individual rather than for shared use by a number of people.

Today, personal computers are often networked, meaning that they are connected by cable to each other and to central facilities such as large hard disks and printers. Networked PCs can, in many organizations, perform the type of task that required mainframe or minicomputers in the past. They can share large files of data, and, with the processing speeds of modern microprocessors, execute large data processing tasks at high speeds.

Personal computers are therefore taking over much of the computing work in many organizations, leaving mainframes and minis for specialist tasks requiring massive processing such as airline bookings systems, banking systems, and factory control. Even these areas may eventually be taken over by the next generation of RISC-based microcomputers that are now beginning to appear.
