

hierarchy management and multiprocessing in separate subsections and then illustrate their combination and interaction via examples from existing computers.

Memory Hierarchy Management

Because of the variations in cost and performance among memory technologies, contemporary memory systems are composed of several different technologies. Figure 2 depicts the physical structure of contemporary memory systems. Usually the fastest, and most expensive, technology is used in the registers in the Pc. Ideally one would like to execute programs as if all data existed in Pc registers. When more data are required, slower, larger, and lower-cost storage, such as Mp, is added. Larger program and data storage and medium-term storage can be provided by Ms. Finally, Mt provides archival or long-term storage. Other forms of memory, such as caches and extended bulk storage, have been added between the previously discussed levels in the storage hierarchy in an attempt to bridge the gap between larger, slower storage at higher levels and smaller, faster storage at the next lower level. Typical access time, transfer time, size, and technology for each level in the hierarchy are also shown in Fig. 2. It should be noted that random-access memories are usually employed through the M.extended level, thus making access and transfer time identical. At the Ms and Mt levels there is an access delay, usually due to physical motion, that is several orders of magnitude larger than the information transfer time. Hence these devices tend to be block-oriented, so that multiple data are transferred for each access.
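Since Fig. 2 is not reproduced here, the following sketch models the kind of information it conveys. The level names follow the PMS notation above, but the access times, transfer times, and sizes are illustrative assumptions only, not values taken from the figure.

#include <stdio.h>
#include <stddef.h>

/* One level of the memory hierarchy.  The numbers below are
   illustrative, order-of-magnitude assumptions, not figures
   taken from Fig. 2. */
struct level {
    const char *name;        /* Pc registers, Mp, Ms, Mt, ...         */
    double      access_s;    /* time to begin obtaining a datum       */
    double      transfer_s;  /* time to move one word once accessed   */
    double      size_words;  /* approximate capacity in words         */
};

int main(void)
{
    struct level hierarchy[] = {
        /* name              access   transfer  size     */
        { "Pc registers",    1e-7,    1e-7,     16    },
        { "Mp (primary)",    1e-6,    1e-6,     64e3  },
        { "Ms (secondary)",  50e-3,   5e-6,     10e6  },
        { "Mt (tertiary)",   60.0,    5e-6,     1e9   },
    };

    /* For the random-access levels (registers, Mp) access and transfer
       times are identical; for Ms and Mt the access delay dwarfs the
       per-word transfer time, which is why those devices move blocks
       of data per access. */
    for (size_t i = 0; i < sizeof hierarchy / sizeof hierarchy[0]; i++) {
        struct level *l = &hierarchy[i];
        printf("%-16s access %.1e s, transfer %.1e s, ~%g words\n",
               l->name, l->access_s, l->transfer_s, l->size_words);
    }
    return 0;
}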

An important breakpoint in the memory hierarchy occurs when the number of available addressable units exceeds the number of unique addresses producible by the processor. Prior to that point there are automatic techniques that can be used to make the multiple levels in the hierarchy appear as one, the so-called one-level store. Beyond that point the meaning of an address has to be changed and the programmer has to modify the address space in an overt action such as a call to an operating system.
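A hypothetical worked example of this breakpoint, with figures chosen only for illustration: a processor that forms 16-bit addresses can name at most 2^16 = 65,536 units, so a configuration holding more addressable units than that has passed the breakpoint.

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures for illustration only. */
    unsigned address_bits   = 16;                    /* processor address width   */
    unsigned long namable   = 1UL << address_bits;   /* 2^16 = 65,536 addresses   */
    unsigned long installed = 256UL * 1024;          /* addressable units present */

    if (installed <= namable)
        printf("One-level store possible: every unit can be named directly.\n");
    else
        printf("Breakpoint passed: %lu units but only %lu names;\n"
               "addresses must be reinterpreted, or the program must change\n"
               "its address space by an overt call to the operating system.\n",
               installed, namable);
    return 0;
}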

Table 1 lists the dimensions of the memory hierarchy region of computer space. The first dimension is that of mapping functions. Figure 3 graphically depicts the translation from processor-generated addresses (usually called the address space or name space) to physical memory (usually called the memory space or physical space). Consider a particular program, PROGRAM-1, one of many that might wish to reside in the Mp. PROGRAM-1 assumes a set of addresses, some explicitly and some implicitly, in the addressing algorithm it uses. PROGRAM-1 requires a memory space whose addresses satisfy all these requirements, the implicit and explicit ones (explicit addresses present in the program and data, and implicit relations between addresses due to addressing algorithms, e.g., that programs are laid out sequentially in Mp, or that the elements of an array are to be accessed by indexing and hence must occupy consecutive addresses). Once the address requirements are met, the program does not care how these addresses are realized. Let us call this address space required by PROGRAM-1 its virtual memory, Mv. Thus, each program has its own virtual memory. (You might say each program has its own Mp, except, as we shall see, this Mp may be many times bigger than any actual Mp and still be entirely feasible.)
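A small sketch of an implicit address requirement, not drawn from the text: indexed access to an array presumes that its elements occupy consecutive addresses in the program's virtual memory, whatever actual addresses eventually realize them.

#include <stdio.h>

int main(void)
{
    int a[8];

    /* Explicit requirement: the program names the address of a[0].
       Implicit requirement: a[i] is reached by indexing from that
       address, so a[0]..a[7] must occupy consecutive (virtual)
       addresses.  The program does not care what physical addresses
       in Mp realize them. */
    for (int i = 0; i < 8; i++)
        printf("a[%d] lies at virtual offset %ld from a[0]\n",
               i, (long)((char *)&a[i] - (char *)&a[0]));
    return 0;
}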

Actually, to run PROGRAM-1 requires that it be placed in the real Mp in such a way that the real addresses of Mp containing it satisfy all the requirements, that is, that it be a faithful image of the virtual memory. Thus there must be some memory mapping that maps the virtual addresses into the actual memory. Once PROGRAM-1 is placed in Mp there must be some process that takes each virtual address (as it occurs to be processed in an instruction) and finds the actual address in Mp, so that the correct contents can be obtained.
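A minimal sketch of such a mapping, assuming the simplest possible scheme (a single contiguous base-and-bound relocation, chosen here only for illustration): every virtual address generated by PROGRAM-1 is translated to an actual Mp address before the contents are fetched.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical map for one program: its virtual memory Mv is placed
   as one contiguous block of Mp starting at `base` and `length`
   words long.  Real mapping hardware (pages, segments) is richer;
   this only illustrates the virtual-to-actual translation step. */
struct map { unsigned long base, length; };

unsigned long translate(struct map m, unsigned long z)   /* z: virtual address */
{
    if (z >= m.length) {                 /* reference outside Mv */
        fprintf(stderr, "addressing error at virtual address %lu\n", z);
        exit(EXIT_FAILURE);
    }
    return m.base + z;                   /* actual address in Mp */
}

int main(void)
{
    struct map program_1 = { 40960, 4096 };   /* illustrative placement */
    printf("virtual 0    -> actual %lu\n", translate(program_1, 0));
    printf("virtual 1000 -> actual %lu\n", translate(program_1, 1000));
    return 0;
}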

This might seem simply a complicated and abstract way to view matters, but it becomes essential as soon as we realize that the computer can have hardware memory mappings other than the familiar direct-addressing structure of Mp. What we have really done is to divorce the addressing required by the programs from that provided by the physical computer, so that we can redesign the addressing (via the memory mapping) to meet new design requirements that were not apparent when the original random-addressing schemes were created.

Let us make the notion of memory mapping more precise. The program contains virtual addresses, z (that is, symbols in the program that denote addresses are taken to denote addresses in Mv). During the execution of the program, whenever there is a reference to an address z (either explicitly via an address calculation or implicitly via, say, getting the next instruction), a
 
 
