Computers: RISC microprocessors

RISC microprocessors

In most CPUs, the control unit can handle a wide range of instructions. Most of these instructions are, however, infrequently used. RISC stands for reduced instruction set computer, and in RISC microprocessors the control unit handles only the 20% most frequently used instructions. The remaining 80%, when needed, can be obtained by combining two or more of the instructions which are available. The design of RISC chips is such that the frequently used instructions are carried out very rapidly, far faster than on conventional chips. (A conventional chip is a 'complex instruction set computer', or CISC.)
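The idea of composing a complex operation from simpler ones can be sketched in Python. The shift-and-add routine below is a toy illustration, not any real chip's instruction set: a CISC-style CPU might offer a single multiply instruction, where a RISC-style CPU would build the same result from its fast shift and add instructions.

```python
def mul_cisc(a, b):
    """One 'complex' instruction does all the work."""
    return a * b

def mul_risc(a, b):
    """Compose the same result from simple shift/add steps
    (shift-and-add multiplication of non-negative integers)."""
    result = 0
    while b:
        if b & 1:          # lowest bit of b set: add the current shifted a
            result += a
        a <<= 1            # shift a left (doubling it)
        b >>= 1            # shift b right to examine the next bit
    return result

print(mul_cisc(6, 7))  # 42
print(mul_risc(6, 7))  # 42
```

Because each step is so simple, a RISC design can execute shifts and adds in very few clock cycles, which is how the composed sequence can still run quickly.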

The first microcomputer using RISC technology was the Acorn Archimedes, launched in Britain in 1987. Costing under £1000, this machine ran several times faster than other contemporary microcomputers, and was able to run applications involving intensive processing, such as graphics applications, at a speed never before seen in computers in this price range.

Today, RISC chips are available from both Intel and Motorola. Intel's main RISC chip, the i860, is designed to work alongside its 80x86 chip series, and so can be incorporated in standard PCs. Its latest chip in the 80x86 series, the 80586 (not yet in production at the time of writing), will incorporate RISC technology. Also, new chips are available which incorporate both RISC and CISC technologies. Motorola's new 68040 processor incorporates both, and computers that use it are able to run at very high speeds.

The transputer

Short for transistor computer, the transputer contains, on a single chip, both the CPU and some memory, as well as communications channels to link it to other transputers. It is a RISC chip, so processing is very fast, and the fact that transputers can be linked means that they can process instructions in parallel.

The type of processing carried out in conventional computers is called serial processing. In this, the instructions contained in a program are carried out one after the other. This works well enough in office administration applications such as word processing and record keeping, which make relatively light demands on the CPU, but it is quite inadequate for very heavy processing tasks such as image recognition or speech recognition (see later).

To illustrate the problem, imagine how you would get on if you tried to recognize a face by breaking it down into a series of points and examining each in turn (i.e. serially). The task would take ages. What the brain does, in fact, is examine all the points simultaneously, so that you recognize the face instantly. This is called parallel processing.
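The contrast between the two approaches can be sketched with Python's standard library. The task here (summing the squares of a chunk of numbers) is only a stand-in for examining one region of an image, and the names are illustrative; in CPython, threads mainly illustrate the programming model rather than true simultaneous execution, which would need separate processors such as linked transputers.

```python
from concurrent.futures import ThreadPoolExecutor

def examine(chunk):
    # Stand-in for inspecting one part of an image.
    return sum(x * x for x in chunk)

def serial(chunks):
    # One chunk after another, as a conventional serial CPU would.
    return [examine(c) for c in chunks]

def parallel(chunks):
    # All chunks handed out at once, as an array of linked
    # processors (or the brain) would examine them.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(examine, chunks))

chunks = [range(0, 100), range(100, 200), range(200, 300)]
print(serial(chunks))
print(parallel(chunks))   # same results, produced concurrently
```

Both versions produce identical results; the difference is only in how the work is scheduled, which is exactly why heavy tasks benefit from parallel hardware while light office tasks do not.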

Because the transputer is a parallel processing device, any equipment that is based upon it will be able to operate more like the human brain, capable of carrying out the kind of complex recognition tasks that we take in our stride. The hope is that the next generation of IT equipment will be able to recognize and act upon speech, drawn images, handwritten instructions, and so on, as readily as we can.

Neural networks

Neural networks attempt to take computing even closer to the human brain. Even with parallel processing, computers are vastly outperformed by the brain at tasks such as image recognition. On the face of it, this is surprising, since current computers process data about a million times faster than the brain!

The reason is that the brain is able to learn from experience. In effect, it uses its experiences to build up generalized sets of rules, and then uses these to discern the essential characteristics in a mass of otherwise irrelevant data. This is what allows it to recognize instantly a face, a voice, an object, etc. What happens at the physical level is that successive experiences of a similar type build up and strengthen the connections between particular neurons. (Neurons are the filament-like nerve cells that carry electrical pulses of data in the brain.)

Neural networks are an attempt to mimic this learning activity of the brain. They consist of layers of simulated neurons on a silicon chip. The connections between these and the input and output nodes on the chip are strengthened or weakened according to the 'experiences' of the chip. Alternatively, neural networks can be software simulations running on conventional computers; these are much cheaper, but much slower, than specially built chips.

To use a neural network, you first have to train it by repeatedly inputting samples of information it has to recognize, 'telling' it what that information is by indicating what its outputs should be. Eventually it builds up its internal connections so that it can reliably produce the desired output from other samples of information. In this way you can, for example, train a network to recognize letters of the alphabet written in a variety of hands. You present it with a series of a, b, c, etc., indicating at the same time the appropriate sequence of bits that correspond to each.
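The training loop described above can be sketched in software with a single simulated neuron (a perceptron) in pure Python. The 3x3 'letter' bitmaps, learning rate, and epoch count are illustrative assumptions, not part of any real product: each pass strengthens or weakens the connection weights in proportion to the error between the desired and actual output.

```python
def train(samples, epochs=20, rate=0.1):
    """Repeatedly present (inputs, target) samples, adjusting
    connection weights until the neuron reproduces the targets."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output
            # 'Experience' strengthens or weakens each connection
            # in proportion to its input's contribution.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Two tiny 3x3 bitmaps standing in for handwritten letters:
# the neuron should output 1 for an 'L' shape and 0 for a 'T' shape.
L_shape = [1,0,0, 1,0,0, 1,1,1]
T_shape = [1,1,1, 0,1,0, 0,1,0]
w, b = train([(L_shape, 1), (T_shape, 0)])
print(predict(w, b, L_shape), predict(w, b, T_shape))  # 1 0
```

A real network would use many such neurons in layers and far larger training sets, but the principle — desired outputs shaping connection strengths through repetition — is the same.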

Neural networks are starting to be used in a variety of applications, including checking airport baggage for explosives and weapons, listening to car engines to spot defects, and picking out trends in financial trading data.
