
Section 4

Multiple-Processor Systems¹


With the advent of larger-scale integrated circuits, it is possible to construct highly complex system building blocks. Indeed, design with mass-produced processors and memories as primitive components is now a viable, if not the only, approach to providing the advanced functionality that increasingly sophisticated users require. We are entering an era where multiple-processor systems are not only an everyday occurrence but also a necessity.

For the purposes of this discussion we will consider a multiple-processor system to be composed of two or more processors that are capable of independent instruction execution and able to exchange information through some interconnection mechanism. Thus array processors (such as the Illiac IV) and associative processors (such as STARAN) are excluded from the present discussion.

The purpose of this section is threefold. First, the reasons motivating multiple-processor PMS structures are explored. Second, the issues in interconnecting multiple processors are illustrated. This represents a continuation of the interconnect-bus-switching discussion in Chap. 6 on computer space structure. This discussion of interconnection demonstrates that there is a continuum from processors sharing a common memory (those termed tightly coupled multiprocessors) to processors communicating via messages but cooperating on one task (termed loosely coupled, distributed multiple processors) and on to independent computer systems interconnected to share information (termed networks). The third, and last, purpose of this section is to provide examples of tightly coupled multiprocessors. Networks, a very mature interconnection technology, are described in Sec. 5.


Motivations for Developing Multiple-Processor Systems

The earliest multiple-processor systems were local computer networks designed to make efficient use of large uniprocessors by segmenting particular functions among particular machines. As an example, front-end processors would be dedicated to batch input and terminal control. Other processors might handle I/O spooling, as did the IBM attached support processors (Chap. 52). Subsequently, geographically distributed networks evolved. There are several reasons for justifying a particular network. The following list is adapted from Roberts [1967]:

Load sharing. A problem (program and data) initiated at one computer that is temporarily overloaded is sent to another for processing. The cost of transshipment must clearly be less than the costs of delay in getting the problem processed. Load sharing implies highly similar facilities at the nodes of the network.

Data sharing. A program is run at a node that has access to a large, specialized data base, such as a specialized automated library. It is less costly to bring the program to the data than to bring the data to the program.

Program sharing. Data are sent to a computer that has a specialized program. This might happen because of the size of the program (hence, fundamentally the same reason as data sharing), but it might also happen because the knowledge (i.e., initialization and error rituals) to run the program is available at one computer but not at another.

Specialized facilities. Within the network there need exist only one of various rarely used facilities, such as large random-access memories, special display devices, or special-purpose array processors.

Message switching. There may be a communication task of such magnitude that sophisticated switching and control are worthwhile.

Reliability. If some components fail, others can be used in their place, thus permitting the total system to degrade gracefully. (At the present state of the art, peripheral computers are needed to isolate the periphery from the unreliability of the network, and vice versa.)

Peak computer power. Large parts of the total system can be devoted for short periods to a single task, if there are important real-time constraints to be met. This depends on being able to fractionate the task into independent subtasks.

Communication multiplexing. Efficient use of communication facilities is obtained by multiplexing a number of low-data-rate users. This may not be a reason for a network per se but may justify a larger network, provided that there is some reason for having one in the first place.

Better communication. A community of users (e.g., a scientific or engineering community) that could mutually use the same programs and data bases and converse about these directly (i.e., not by writing about them but in the context of mutual use) might become a much more productive community, with less duplication of work and faster communication of results.

Better load distribution through preprocessing. Some tasks require very high-data-rate communication with a computer. By doing preprocessing in a smaller computer, a reduced information rate can be sent to the more general system.
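The load-sharing criterion at the head of this list reduces to a simple cost comparison: ship the job to another node only when transshipment costs less than waiting out the local overload. A minimal sketch of that decision rule (the function name and the cost figures are hypothetical illustrations, not from the source):

```python
def should_offload(queue_delay_cost, transship_cost):
    """Load-sharing decision rule: offload a job only when shipping the
    program and data to another node is cheaper than the delay of
    processing it locally.  Costs are scalar estimates in any common
    unit (e.g., seconds or dollars)."""
    return transship_cost < queue_delay_cost

# Hypothetical figures: the local queue would delay the job by an
# estimated 120 s; shipping it to a lightly loaded peer costs 30 s.
print(should_offload(queue_delay_cost=120, transship_cost=30))  # True

# If transshipment were dearer than the delay, the job stays local.
print(should_offload(queue_delay_cost=20, transship_cost=30))   # False
```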

¹Parts of this section introduction are based on an unpublished research paper, "The Multiple-Processor Design Space," by Daniel P. Siewiorek.


