Computers: Digital signal processors


Digital signal processors (DSPs) are used in voice recognition systems (page 59), computer video applications such as interactive compact disc (page 169), complex mathematical calculations, music synthesis, as well as in more standard bits of equipment such as disk controllers and modems. They allow the high-speed processing of digital signals from audio, video, and other sources.

DSPs are microchips optimized to carry out, at high speeds and with a high degree of accuracy, complex numerical calculations. They incorporate a number of enhancements to increase the processing speed. These may include dual arithmetic logic units, separate program and data memories, and high-speed memory access. This makes them suitable for numerically-intensive processing applications such as those listed above.
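The kind of numerically-intensive work a DSP accelerates can be illustrated with a digital filter, whose inner loop is a repeated multiply-accumulate - the operation DSP chips are designed to perform in as little as one clock cycle. This is a minimal sketch in Python; the function name, signal, and coefficients are illustrative only:

```python
# A finite impulse response (FIR) filter: each output sample is a
# weighted sum of recent input samples. The multiply-accumulate in
# the inner loop is the operation a DSP's arithmetic units are
# optimized to perform at high speed.
def fir_filter(signal, coefficients):
    outputs = []
    for i in range(len(signal)):
        acc = 0.0
        for j, c in enumerate(coefficients):
            if i - j >= 0:
                acc += c * signal[i - j]  # multiply-accumulate
        outputs.append(acc)
    return outputs

# A 3-point moving average is the simplest such filter.
smoothed = fir_filter([3.0, 6.0, 9.0, 9.0, 9.0], [1/3, 1/3, 1/3])
```

On a general-purpose CPU each multiply-accumulate takes several instructions; a DSP's dual arithmetic units and separate program and data memories let it fetch a coefficient, fetch a sample, multiply, and accumulate in parallel.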

The first high-speed DSP was produced by AT&T in 1978. Since then Motorola, Texas Instruments, and others have produced DSPs, and these chips are now incorporated in a wide range of devices.

Processing speeds

The main way in which a computer's power is judged is the speed at which it runs. This depends upon two factors:

• The speed of its internal clock, measured in millions of cycles per second (megahertz, or MHz for short).

• The average number of clock cycles it requires to execute an instruction.

For example, a PC with an 80386 chip may have a clock speed of 20 or 25 MHz and will require about 4.5 clock cycles to perform an instruction. Dividing the clock speed by the number of cycles per instruction gives a processing speed of around 5 million instructions per second (MIPS). (Compare this with the human brain - its neurons conduct electrical pulses at a frequency of about 1 kilohertz, which is snail-like in comparison.)
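The division described above is trivial to express in code. This sketch (the function name is my own) reproduces the 80386 figures:

```python
def mips(clock_mhz, cycles_per_instruction):
    """Millions of instructions per second: the clock speed in MHz
    divided by the average clock cycles needed per instruction."""
    return clock_mhz / cycles_per_instruction

# An 80386 at 25 MHz, needing about 4.5 cycles per instruction,
# gives a speed in the region of 5 MIPS.
speed = mips(25, 4.5)
```

Note that MHz divided by cycles gives millions of instructions per second directly, because both the clock rate and the result are expressed in millions.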

The purpose of the internal clock is to ensure that all the devices in the computer act in unison. It does this by sending electrical pulses through the system. The speed at which the clock is able to run is limited not only by the speed of the CPU but also by the speed of the other components. So replacing an 8086 processor in an old PC by an 80386 processor does not mean that the PC will be able to run at 20 or 25 MHz.

The original IBM PC with its 8088 chip supported a clock speed of 4.77 MHz. To execute the average instruction required about 15 clock cycles, so its processing speed was 0.3 MIPS. The latest PCs using the 80486 chip have clock speeds of 30 or 35 MHz and execute an instruction in just over 2 clock cycles, giving a processing speed of around 15 MIPS. The i860 RISC chip has a clock speed of around 40 MHz and executes about 1 instruction per clock cycle, giving a processing speed of around 40 MIPS - about 130 times as fast as the original PC!
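Applying the same division to the three chips above reproduces the speed-up quoted. The cycle counts here are the approximate figures from the text, not exact benchmarks:

```python
# Approximate clock speed (MHz) and average cycles per instruction
# for the chips discussed in the text.
chips = {
    "original PC": (4.77, 15),    # about 0.3 MIPS
    "80486":       (35.0, 2.2),   # about 15 MIPS
    "i860 RISC":   (40.0, 1.0),   # about 40 MIPS
}

mips = {name: clock / cycles for name, (clock, cycles) in chips.items()}

# The i860 versus the original PC: roughly 40 / 0.3, i.e. well
# over 100 times faster.
speedup = mips["i860 RISC"] / mips["original PC"]
```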

For much office software, e.g. character-based word processing and record keeping, the internal processing speed of the computer may not be very important, because this kind of software does not make heavy demands on the processor (i.e. it involves relatively few instructions per period of time). On the other hand, graphics software, speech processing, and some engineering and mathematical applications make heavy demands on the processor and so are best run on fast computers.

Increasingly, standard office applications such as word processing are being run within graphical environments such as 'Windows' (see Chapter 5), and fast PCs, preferably based on the 80386 processor or above, are best for this. In fact, the PC world seems to be splitting into two camps: those with slower and cheaper computers running character-based software, and those with the more expensive models running graphics-based software within the Windows environment. As I shall explain later, for certain applications a graphics environment is highly desirable; however, for many run-of-the-mill office applications there is little point in using this environment and, indeed, certain advantages in remaining in the character-based world.

Computer memory

'Memory' is an area of storage within the computer where programs and data are held ready for processing by the CPU. The significant feature of memory, compared to disk storage, is that the CPU can access it at extremely high speeds, and any delays caused by moving data in and out of memory are therefore minimized. When you 'load' a file from disk, you are in fact copying it into an area of memory. However, compared to disk, memory is expensive and limited. The typical PC has less than 4 Mbyte of memory, but 40, 80, or 120 Mbyte of hard disk capacity, and access to an indefinite number of floppy disks.

Computer memory is of two types, RAM and ROM.

Random access memory

Random access memory, or RAM, is a temporary store for holding programs and data loaded from disk, or typed at the keyboard, or input from some other device. The term 'random access' means that the data can be picked out of the memory in any order, and contrasts with 'sequential access', which is the kind of access you get with magnetic tape and some other storage devices, where the data has to be read in sequence, starting at the beginning and working through to the end.
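The contrast between the two kinds of access can be sketched in a few lines of Python; the list stands in for memory, and the function name is my own:

```python
# Random access: any element can be fetched directly by its
# address, in a single step, regardless of where it sits.
memory = ["alpha", "beta", "gamma", "delta"]
direct = memory[2]

# Sequential access: on a tape-like store, every earlier item
# must be read and skipped before the target is reached.
def read_sequential(tape, target_index):
    steps = 0
    for index, item in enumerate(tape):
        steps += 1
        if index == target_index:
            return item, steps
    raise IndexError("ran off the end of the tape")

item, steps = read_sequential(memory, 2)  # takes three reads
```

The further along the tape the data sits, the more reads are needed, whereas the random-access fetch always takes the same time.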

Nowadays, a RAM device is normally a silicon chip, made up of thousands of tiny (transistor) switches, each of which can be either ON or OFF, and so represents a binary digit (1 or 0). Memory of this type is volatile, meaning that its contents are lost when the power is turned off.

In early mainframe computers, core store memory was normally used. The memory devices in this case are tiny magnetic rings, threaded onto a matrix of criss-crossing wires. The direction of magnetization in a ring is determined by the current flowing through the wires, one direction representing a binary 0, the other a binary 1. Because the rings remain magnetized even when the power is turned off, the data is retained. So this type of memory is called non-volatile.

In both types of memory, the individual devices - transistors or rings - are laid out in rectangular arrays, each one occupying a location or address that can be identified by its row and column numbers. These numbers are, of course, in binary digital form. Each item of data stored in memory therefore has associated with it a memory address.

When the CPU reads an item of data from memory, it has to do two things:

1 Look up the address of the data in memory.

2 Read the data, i.e. the sequence of 0s and 1s, at that address.

The numbers identifying memory addresses travel in electronic form down an address bus inside the computer, while those representing the data travel down a data bus.
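The two-step read described above can be modelled with a small rectangular array of cells addressed by row and column; the class and method names are illustrative only:

```python
class Memory:
    """A toy memory: a rectangular array of cells, each holding a
    fixed-width pattern of bits, addressed by row and column."""

    def __init__(self, rows, cols):
        self.cols = cols
        self.cells = ["00000000"] * (rows * cols)

    def address(self, row, col):
        # Step 1: combine the row and column numbers into a single
        # address - the number placed on the address bus.
        return row * self.cols + col

    def read(self, row, col):
        # Step 2: fetch the sequence of 0s and 1s at that address -
        # the pattern that travels back along the data bus.
        return self.cells[self.address(row, col)]

    def write(self, row, col, bits):
        self.cells[self.address(row, col)] = bits

ram = Memory(rows=4, cols=4)
ram.write(2, 3, "01000001")   # store a bit pattern at row 2, column 3
value = ram.read(2, 3)        # read it back via the same address
```

Real hardware decodes the row and column numbers electrically rather than by arithmetic, but the principle - address first, then data - is the same.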
