The ISP does not constitute a distinct system level. Rather, it describes the interface between two levels, the register-transfer level and the programming level. It is used to define the components of the programming level (instructions, operations, and sequences of instructions) in terms of the next lower level. In principle, and usually in fact, the language of the lower level is used to describe the components and modes of connection one level up. In many ways ISP is a register-transfer language (in symbolic rather than graphical form; but, as we noted in Chap. 1, there appear always to be two such isomorphic notations at each system level). However, ISP has been extended by allowing the instruction-expression to be a general linguistic expression for a computation, just as if ISP were FORTRAN or ALGOL. This is what permits us to speak of ISP as not necessarily determining the exact set of physical registers and transfer paths. The instruction-expressions describe the functions to be performed without entirely committing to the RT structure.
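
For illustration only (this sketch is ours, not part of the ISP notation; the names add_instruction and state are invented), the distinction can be seen by writing an instruction-expression as a single statement of its net effect on the processor state, and then one possible register-transfer realization of the same expression that commits to an explicit memory-buffer register:

    # Hypothetical sketch: an instruction-expression states the net effect
    # on processor state; the register-transfer realization is left open.

    def add_instruction(state, address):
        """AC <- AC + M[address] : the functional effect only."""
        state["AC"] = (state["AC"] + state["M"][address]) & 0xFFFF
        return state

    # One possible register-transfer realization of the same expression,
    # committing to an explicit memory-buffer register MB:
    def add_instruction_rt(state, address):
        state["MB"] = state["M"][address]                    # MB <- M[address]
        state["AC"] = (state["AC"] + state["MB"]) & 0xFFFF   # AC <- AC + MB
        return state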

If the ISP is the interface language between the RT and programming levels, what is its relationship to PMS, which is one level above? Every PMS component has associated with it a set of operations and a control structure for getting those operations executed in connection with the arrival of various external signals. As we noted earlier in the chapter, there is an ISP description for each operation in its context of control. That is, ISP is the interface language for describing all PMS components in terms of the register-transfer level, not just P. It happens that only one of these PMS components, the processor, carries with it an entire new system level, the programming level. All the other components have no analog of the programming level and interface directly to the register-transfer level (or, in simple cases, even to the logic-circuit level). Precisely because of this simplicity, we have not bothered to develop ISP descriptions of components other than processors.
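
As an illustration (ours, and hedged accordingly; the component and its operation names are invented), a simple transducer control other than a processor can be described directly in register-transfer terms as a few operations together with the rule for evoking them when external signals arrive:

    # Illustrative sketch: a controller K for a character transducer,
    # described purely at the register-transfer level (no programming level).
    # The names ReaderControl, buffer, ready, on_char are invented.

    class ReaderControl:
        """K(T.reader): holds one character and a ready flag."""
        def __init__(self):
            self.buffer = 0        # M(1 w; 8 b) within the controller
            self.ready = False     # status flip-flop

        def on_char(self, char):   # evoked by the external 'character' signal
            self.buffer = char & 0xFF   # buffer <- character
            self.ready = True           # ready <- 1

        def read(self):            # evoked by the processor's read request
            self.ready = False          # ready <- 0
            return self.buffer          # transmit buffer to the requester

    k = ReaderControl()
    k.on_char(ord("A"))
    assert k.read() == ord("A") and not k.ready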

The second question, namely, the relation between the ISP and PMS descriptions of the same processor, arises from the ability to represent PMS components recursively as PMS structures made up of more elementary PMS components. Thus, Mp(32 kw, 16 b) can be considered as compounded of 32k memories, M(1 w, 16 b), with an addressing switch, S.random. Indeed, if one carries this to the limit, where the M's are single-bit memories (flip-flops), the S's are one-bit gates, a couple of specific K's are defined for AND and OR, etc., then it is possible to draw a PMS diagram isomorphic to any logic circuit. Thus, a processor (P) can be represented as a PMS structure involving M's, K's, D's, S's, etc., at varying levels of detail. Since we also have a description of this same P in ISP, it is appropriate to consider the correspondence.
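
The decomposition can be pictured roughly as follows (an illustrative sketch only; WordMemory and PrimaryMemory are invented names, not PMS terms):

    # Sketch of Mp(32 kw, 16 b) decomposed into 32k one-word memories
    # M(1 w, 16 b) selected through a random-addressing switch.

    class WordMemory:                 # M(1 w, 16 b)
        def __init__(self):
            self.value = 0
        def read(self):
            return self.value
        def write(self, v):
            self.value = v & 0xFFFF   # 16-bit word

    class PrimaryMemory:              # Mp(32 kw, 16 b) = 32k M's + S.random
        def __init__(self, words=32 * 1024):
            self.cells = [WordMemory() for _ in range(words)]
        def select(self, address):    # the addressing switch S.random
            return self.cells[address]

    mp = PrimaryMemory()
    mp.select(0o100).write(0o1234)
    assert mp.select(0o100).read() == 0o1234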

First of all, every memory in the ISP description corresponds to a memory in the PMS description. The data-operations in ISP imply corresponding D's in PMS, and every occurrence of transmit (←) implies a corresponding link between the M's and D's on the right-hand side and the M on the left, which is being written into. That the instructions of the ISP are evoked only under certain conditions implies that a control (K.operation-decode) exists in the PMS structure. Similarly, the simple two-state stored-program model (instruction-fetch, instruction-execute) for the interpreter implies an interpreter control (K.interpreter). The action-sequence of each instruction, if it contains any semicolons or next's, requires additional K and possibly additional M (if the structure involves embedded operations such as (A + B) × (C + D)). Thus, for every ISP component there is an implied component in the PMS structure of the processor.
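
The correspondence can be made concrete with a toy interpreter (illustrative only; the two-operation instruction set and the word format are invented for the example). The decode table plays the role of K.operation-decode, and the fetch-execute loop that of K.interpreter:

    # Toy stored-program interpreter (a sketch, not any particular ISP).
    # M, PC, AC correspond to memories; the decode table to K.operation-decode;
    # the fetch/execute cycle to K.interpreter.

    M  = [0] * 64          # primary memory Mp
    PC = 0                 # program counter
    AC = 0                 # accumulator

    def add(addr):         # AC <- AC + M[addr]
        global AC
        AC = (AC + M[addr]) & 0xFFFF

    def store(addr):       # M[addr] <- AC
        M[addr] = AC

    decode = {1: add, 2: store}        # K.operation-decode

    def step():                        # one instruction-fetch / execute cycle
        global PC
        op, addr = divmod(M[PC], 64)   # invented format: op*64 + address
        PC += 1
        decode[op](addr)               # K.interpreter evokes the operation

    # Example: M[10] holds 5; execute ADD 10 then STORE 11.
    M[10] = 5
    M[0] = 1 * 64 + 10
    M[1] = 2 * 64 + 11
    step(); step()
    assert M[11] == 5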

The PMS diagram model for a computer shown initially on page 17 has the "natural units" implied by the ISP description (with the exception of the instruction-format part), as suggested on page 24. The data-operations D are therefore implied each time an operation is written. Each process implies a control, which we lump into the single K of the figure. The model also shows both the arrival of instructions and the flow of data between the processor (P) and memory (Mp).

There are several memories within Pc which are not explicitly shown on page 17. These include the temporary memory within D and within K for carrying out complex arithmetic operations. The interpreter control has temporary memory as well, of course. Finally, other kinds of memories have been omitted to simplify the model. In multiprogrammed computers a mapping control and memory would be used, and in pipelined or highly parallel processors there would be temporary memory for various kinds of buffering (e.g., of instructions and data). The Appendix lists the various memories of the processor.
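
A rough inventory of these memories, grouped as a single record (a sketch only; the field names are ours, not the Appendix's list), might be:

    # Illustrative grouping of the processor memories mentioned above.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ProcessorMemories:
        # explicit processor state (M.processor_state)
        accumulator: int = 0
        program_counter: int = 0
        # temporary memory within D and K for complex arithmetic
        d_temporaries: List[int] = field(default_factory=list)
        # temporary memory of the interpreter control (e.g., instruction register)
        instruction_register: int = 0
        # mapping memory for multiprogrammed computers
        relocation_map: Dict[int, int] = field(default_factory=dict)
        # buffering for pipelined or highly parallel processors
        instruction_buffer: List[int] = field(default_factory=list)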

K(P), the control for the processor above, controls data movement between Mp and M.processor_state and evokes the data-operations of D. Functionally, K(P) can be broken into several parts, each of which is responsible for a part of the overall instruction interpretation and execution process and corresponds to a part of the ISP description. This decomposition is allowed in PMS, and if we made it, each component would contain an independent control for its own domain, e.g., K(D), K(Mp), and K(Instruction-set interpreter). More elaborate processor structures imply having controls for functions like multiprogram mapping. The K(Instruction-set interpreter) is the supervisory component that causes the other processor K's to be utilized in a complex processor. In an ISP description of a C, the interpreter usually
