


algorithm (instruction fetch time) and many of the references to Mp for temporary data are eliminated by hardwiring. A hardwired algorithm can easily outperform a stored program by a factor of 10 to 100. The scarcity of these processors in systems stems mainly from a lack of market demand.

It is not clear that the special algorithm processors meet our criteria for being a processor, because of the rather limited functions they perform. In fact, some so-called processors are just K's, or D's, since they have no instruction location counter and interpret only a single instruction at a time, requesting each new instruction from a superior component.

Algorithms that have been hardwired (or proposed) include the fast Fourier transform using the Cooley-Tukey algorithm; cross-correlation, autocorrelation, and convolution processing; polynomial and power-series evaluation; floating-point array processing; and neural network simulation.1 Programmable processors with specialized data paths to support these algorithms have also been constructed (e.g., Floating Point Systems' AP-120B Array Processor, with pipelined 38-bit floating-point add and multiply units that can produce a result every 167 ns).
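To make the Cooley-Tukey structure concrete, the sketch below (ours, not from the text) gives the radix-2 decimation-in-time recursion in Python. A hardwired or pipelined array processor realizes the same butterfly operations directly in logic rather than fetching them as instructions, which is where the factor-of-10-to-100 advantage cited above comes from.

import cmath

def fft(x):
    # Radix-2 Cooley-Tukey FFT (decimation in time); len(x) must be a power of 2.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                 # transform of the even-indexed samples
    odd = fft(x[1::2])                  # transform of the odd-indexed samples
    result = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle           # butterfly, upper half
        result[k + n // 2] = even[k] - twiddle  # butterfly, lower half
    return result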

Fault-Tolerant Processors

Gaining wide acceptance are processors constructed primarily to tolerate failures. Initially these processors were devoted to special-purpose aerospace control (e.g., the JPL Self-Test and Repair Computer, Chap. 27). Fault tolerance is now being applied to such commercial activities as communications (e.g., telephone switching, Chap. 28; data switching, Chap. 23) and transaction processing (Chap. 29). The PMS structure, ISP, and implementation details of these processors depend strongly on such specifications as assumed fault type, assumed fault extent (e.g., local or global), and specified system goal (e.g., availability, reliability, or data integrity). The basic concepts of fault tolerance as well as several example systems are presented in Sec. 6 of Part 2.

 

Memory Access

The most useful classification of memories is according to their accessing algorithm.2 These are queue (access according to first-in-first-out discipline); stack (access according to first-in-last-out discipline); linear (e.g., a tape with forward read and rewind); bilinear (e.g., a tape with forward and backward read); cyclic (e.g., a drum); random (e.g., core); and content and associative. All these memories are explicitly addressed except the stack and queue, which deliver an implicitly specified i-unit on each read.
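The distinction between explicit and implicit addressing can be illustrated with a short Python sketch (ours, not from the text; the class and method names are hypothetical). A read from a random-access memory names a location; a read from a stack or queue names nothing and simply delivers whatever i-unit the discipline dictates.

from collections import deque

class RandomAccessMemory:
    # Explicitly addressed: every read and write names a location.
    def __init__(self, size):
        self.cells = [0] * size
    def read(self, address):
        return self.cells[address]
    def write(self, address, value):
        self.cells[address] = value

class StackMemory:
    # Implicitly addressed: read delivers the most recently written i-unit
    # (first-in-last-out).
    def __init__(self):
        self.cells = []
    def write(self, value):
        self.cells.append(value)
    def read(self):
        return self.cells.pop()

class QueueMemory:
    # Implicitly addressed: read delivers the oldest stored i-unit
    # (first-in-first-out).
    def __init__(self):
        self.cells = deque()
    def write(self, value):
        self.cells.append(value)
    def read(self):
        return self.cells.popleft()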

Memory size and basic operation times (i.e., the time constants in the access algorithm) are important too, of course. But once a distinction is made between Mp and Ms, then for any given technological era there have existed characteristic sizes and speeds for memories of a specified access algorithm. Where there has been variation, either it has been linear with size (e.g., buying two boxes of magnetic core Mp versus buying one) or there has been a narrow range of cost/performance tradeoff (as in data rate for magnetic tapes, in which modest increases in density and tape speed can be bought for substantially increased price). Table 5 shows the relative price, size, and performance of various memories. The memory-size versus information-rate plot (Fig. 20) shows the clustering of memories and their suitability for a particular function.

From a technology standpoint, Mp's have been constrained to either cyclic- or random-access memories (although one can easily construct any type from random-access memories). Similarly, Ms's have been constrained to be cyclic or linear, although quasi-random access has been achieved with some disks and magnetic-card memories (random by block and linear or cyclic within a block). Any Ms's can be part of almost any computer structure. Thus there is no large effect of Ms structure on the main design features of computer systems, and they are not discussed to any extent in the remainder of the book. Our discussion of memory type here deals exclusively with Mp and Mps.
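The remark that any access type can easily be constructed from random-access memories can be made concrete with a small sketch (ours, and only illustrative): a queue, for example, reduces to an explicitly addressed block of cells plus two implicit pointers that embody the first-in-first-out access algorithm.

class QueueOnRandomAccessMemory:
    # A first-in-first-out memory built from an explicitly addressed
    # (random-access) block of cells plus two implicit pointers.
    def __init__(self, size):
        self.cells = [0] * size   # the underlying random-access memory
        self.size = size
        self.head = 0             # location of the oldest stored i-unit
        self.count = 0            # number of i-units currently stored

    def write(self, value):
        if self.count == self.size:
            raise OverflowError("queue full")
        self.cells[(self.head + self.count) % self.size] = value
        self.count += 1

    def read(self):
        if self.count == 0:
            raise IndexError("queue empty")
        value = self.cells[self.head]
        self.head = (self.head + 1) % self.size
        self.count -= 1
        return value

The caller never supplies an address; the head and count pointers are the access algorithm, which is exactly what distinguishes the queue discipline from the random-access memory it is built on.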

Stack and Queue Memories (M.stack, M.queue)

Data elements in a stack and queue are not accessed explicitly, as we have noted. The stack has some unique properties that aid in the compilation and evaluation of nested arithmetic expressions. Although there are no machines employing stacks exclusively for primary memory, there are stacks in some arithmetic processors. Chapter 9 presents a processor with a stack memory (i.e., with stacks in the processor state). Several other processors provide support for stacks through stack manipulation instructions (e.g., the PDP-11 and VAX-11).
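The property that makes a stack useful for evaluating nested arithmetic expressions is that, once an expression is in postfix (reverse Polish) form, operands surface from the stack in exactly the order the operators need them. The following is a minimal sketch in Python rather than in any processor's ISP, and is ours, not from the text.

def evaluate_postfix(tokens):
    # Evaluate a postfix (reverse Polish) expression using an operand stack.
    stack = []
    operators = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    for token in tokens:
        if token in operators:
            right = stack.pop()    # implicit access: top of stack
            left = stack.pop()
            stack.append(operators[token](left, right))
        else:
            stack.append(float(token))
    return stack.pop()

# (2 + 3) * 4 in postfix form:
print(evaluate_postfix(['2', '3', '+', '4', '*']))   # prints 20.0

A processor that keeps such stacks in its processor state, like the one presented in Chap. 9, performs the corresponding pushes and pops in hardware.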

Cyclic-Access Memories (Mp.cyclic)

Nearly all the first-generation (vacuum-tube) computers had Mp.cyclic. The Mp.cyclic (acoustic delay line, magnetostrictive delay line, or magnetic drum) provided an inexpensive, simple, producible memory. By the second generation the cost of an Mp.random (though still more expensive than an Mp.cyclic) was about equal to that of the processor logic. The incremental cost for an Mp.random in a large system was then small, whereas the performance gain could be a factor of up to 3,000 (access time of 10 μs versus 30 to 30,000 μs). Some of the first-generation machines were reimplemented

1Chasm: A Macromodular Computer for Analog Neuron Models [Molnar, 1967].

2Access for writing should be distinguished from access for reading. Memories are conceivable with arbitrarily different read and write access algorithms (e.g., random read and cyclic write). However, in general, the two access algorithms are tightly coupled, and normally only the read access algorithm is given.
