Concurrency: Single-Processor Systems
At any given time, technology determines the major time constants (e.g., memory access time, microprocessor cycle time) that dictate the performance of an implementation. The simple two-parameter model involving microcycle time and memory pause time presented in Chap. 5 has been applied to three computer families and shown to be a good predictor of the performance of minicomputers and maxicomputers (see Chaps. 39 and 52).
Computer implementations can exceed the performance available through technology alone by introducing concurrency into the organization. The degree of concurrency is the number of operations that are happening simultaneously. The concurrency in a structure is also a measure of its complexity; a highly concurrent structure implies a control structure together with multiple data paths (and operations) that can be simultaneously active.
The impact of concurrency on software varies from none to the need for totally new programming styles. Instruction prefetch and interleaved memory are two examples of hardware concurrency that are totally transparent to the software. Some concurrency techniques impact only the operating system (e.g., processor-I/O overlap) or impact user software in minor ways (e.g., the imprecise interrupts in the IBM System/360 Model 91). At the extreme, concurrency structures may not only require dedicated programming but also require entirely new algorithms (as do associative and multiple processors, for example). In general, only the first two levels of software impact are acceptable for general-purpose computing. The extreme level is usually acceptable only for solving special-purpose problems where the computer is actually a support processor to a general-purpose computer.
Table 1 lists the dimensions of the concurrency space. There are two
major approaches to achieving concurrency: overlap of heterogeneous functional
units and parallelism of homogeneous functional units.
Consider the traditional view of a computer with processor, memory, and I/O. The earliest computers employed the processor to control I/O. Since the speed differential between electronic and mechanical technologies was two orders of magnitude, the processor was inefficiently utilized. When a small amount of logic was moved into the I/O device, the processor only had to start the
I/O operation and then continue non-I/O processing. Periodic polling of the state of I/O devices was used to determine I/O completion.
So that time would not have to be spent periodically polling I/O devices, the concept of an interrupt was introduced, whereby the I/O device signals the processor upon completion by forcing a change in the processor state. The processor state change invokes an interrupt-handling program. Interrupt schemes can be categorized by priority and number of levels: