Chapter 26

NOVA: a list-oriented computer [1]

Joseph E. Wirsching

Since the advent of the internally stored program computer, those of us concerned with problems involving massive amounts of computation have taken a one-operation, one-operand approach. But there is a very large class of problems involving massive amounts of computation that may be thought of as one-operation, many-operand in nature. Some familiar examples are numerical integration, matrix operations, and payroll computation.

This article proposes a computer, called NOVA, designed to take advantage of the one-operation, many-operand concept. NOVA would use rotating memory instead of high-cost random access memory, reduce the number of program steps, and reduce the number of memory accesses needed to fetch those steps. In addition, it is shown that NOVA could execute typical problems of the one-operation, many-operand type in times comparable to those of modern high-speed random access computers.

Rotating memories were used in early computers because of low cost, reliability, and ease of fabrication. These machines have been replaced by machines with more costly random access memories primarily to increase computing speed as the result of a decrease in access time to both operands and instructions.

The NOVA approach

Let us take two simple examples and use them to compare conventional computing techniques with those proposed for NOVA.

Example 1. Consider two lists (a's and b's) of which the corresponding pairs are to be added. With a conventional computer this is done with a program that adds the first a to the first b, the second a to the second b, etc., and counts the operations. The working part of such a program might consist of the following instructions:

  Fetch a
  Add b
  Store (a + b)
  Count, Branch, and Index

In general, the four or more instructions must be brought from the memory to the instruction register once for each pair in the lists. This seems to be a great waste when only one arithmetic operation is involved. Indeed it is, when one considers that the majority of computing work consists of the performance of highly repetitive operations that are merely combinations of the simple example given. Attempts have been made to alleviate this waste by incorporating "instruction stacks" and "repeat" commands into the instruction execution units of more recent computers.
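The contrast the article draws can be sketched in modern terms. The following Python fragment (purely illustrative; nothing NOVA itself would run) shows the conventional loop, where the fetch/add/store/count steps are re-executed for every pair, against the one-operation, many-operand view, where a single "add these lists" command covers the whole pair of lists:

```python
# Conventional one-operation, one-operand style: the four-step loop
# body runs once per pair, so the instructions are revisited for
# every element of the lists.
def add_lists_conventional(a, b):
    result = []
    for i in range(len(a)):   # Count, Branch, and Index
        x = a[i]              # Fetch a
        x = x + b[i]          # Add b
        result.append(x)      # Store (a + b)
    return result

# One-operation, many-operand style: one whole-list "add" command;
# the per-element stepping is internal to the operation, not spelled
# out in the program.
def add_lists_many_operand(a, b):
    return [ai + bi for ai, bi in zip(a, b)]
```

Both produce the same list of sums; the difference the article cares about is how many times the instructions themselves must be fetched and interpreted.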

Example 2. Consider three lists (a's, b's and c's), where we wish to compute (a + b) × c for each trio. There are two distinct methods by which this can be accomplished: first, by forming (a + b) × c for each trio of numbers in the list, or second, by forming a new list consisting of (a + b) for each a and b, and then multiplying each c by the corresponding member of the new list. Clearly the second method is wasteful of memory space and wasteful of programming steps.
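The two methods for Example 2 can likewise be sketched in Python (again purely as a modern illustration, with invented helper names):

```python
# Method 1: fused -- each trio is processed in a single pass, with
# no intermediate list held in memory.
def method_fused(a, b, c):
    return [(ai + bi) * ci for ai, bi, ci in zip(a, b, c)]

# Method 2: two passes -- an intermediate (a + b) list is formed
# first, costing extra memory space and an extra program step, then
# multiplied element-wise by c.
def method_two_pass(a, b, c):
    sums = [ai + bi for ai, bi in zip(a, b)]    # the new (a + b) list
    return [s * ci for s, ci in zip(sums, c)]
```

The results are identical; the two-pass method's cost is the storage for the intermediate list and the second full traversal.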

Next, let us take a look at the memory requirements for these two examples. First, the instructions are kept in a high-speed random access memory, and while the bulk of the variables need not be kept in a random access memory, they must be brought to one before the algorithm can be performed. This extra transfer may entail more instructions to perform the logistics. Thus the simplicity of the overall program is directly related to the size of the memory. The variables (a's, b's, etc.) are usually stored in consecutive memory locations. Except for indexing this ordering of the data is not exploited.

In NOVA, lists of variables are kept on tracks of a rotating bulk memory. When called for, the lists of variables are streamed through an arithmetic unit and the results are immediately placed on another track for future use. This process takes maximum advantage of the sequential ordering of the variables. Instructions need only be brought to the instruction execution unit once for each pair of lists rather than once for each operand; thus the instructions need not be stored in a random access memory but may also be stored on the rotating bulk memory. This departure from the requirement for random access memory significantly

[1] Datamation, vol. 12, no. 12, pp. 41-43, December 1966.
