

core technology. Stored in the cache are address-data (AD) pairs, each consisting of an Mp address and a copy of the contents of the Mp location corresponding to that address.

The operation of the cache is as follows. When the Pc addresses Mp, the address is first compared against the addresses stored in the cache. If there is a match, the access is performed on the data portion of the matched AD pair. This is called a hit and is performed at the fast access time of the cache. If there is no match - called a miss - Mp is accessed as usual. On a miss, however, an AD pair corresponding to the latest access is generally stored in the cache, usually displacing some other AD pair. This replacement procedure tends to keep the contents of the cache matched to the Mp locations most frequently accessed by the Pc. Because programs typically have the property of locality (i.e., over short periods of time most accesses are to a small group of Mp locations), even relatively small caches have a majority of Pc accesses resulting in hits. The performance of a cache is described by its miss ratio - the fraction of all Pc references which result in misses.
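The lookup, replacement, and miss-ratio behavior described above can be sketched in a few lines of Python. This is an illustrative simulation only (the function and variable names are not from the text, and least-recently-used replacement is assumed merely as one concrete policy):

```python
# Sketch of the cache procedure: a small fully associative cache of AD
# pairs with least-recently-used replacement, tracking the miss ratio.
from collections import OrderedDict

def miss_ratio(references, cache_size, memory):
    """Simulate Pc references through a cache; return the miss ratio."""
    cache = OrderedDict()              # address -> data (the AD pairs)
    misses = 0
    for addr in references:
        if addr in cache:              # hit: fast access to the D portion
            cache.move_to_end(addr)
        else:                          # miss: access Mp as usual
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)   # displace the LRU pair
            cache[addr] = memory[addr]      # store the latest AD pair
    return misses / len(references)

# Locality at work: 100 references confined to 4 locations mostly hit.
mem = {a: a * 10 for a in range(100)}
refs = [0, 1, 2, 3] * 25
print(miss_ratio(refs, cache_size=4, memory=mem))   # prints 0.04
```

Only the first touch of each of the four locations misses; the remaining 96 references hit, giving a miss ratio of 0.04.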


There are a number of possible cache organizational parameters. These include:

1. The size of the cache in terms of data storage.

2. The amount of data corresponding to each address in the AD pair.

3. The amount of data moved between Mp and the cache on a miss.

4. The form of address comparison used.

5. The replacement algorithm which decides which AD pair to replace after a miss.

6. The time at which Mp is updated on write accesses.

The most obvious form of cache organization is fully associative, with the data portion of the AD pair corresponding to the basic addressable unit of memory (typically a byte or word), as indicated by the system architecture. On a miss, this basic unit is brought into the cache from Mp. However, for several reasons, this is not always the most attractive organization. First, because procedures and data structures tend to be sequential, it is often desirable, on a miss, to bring a block of adjacent Mp words into the cache. This effectively gives instruction and data prefetching. Second, associating a larger amount of data with each address increases the fraction of the cache storage which is used to store data rather than addresses. The number of words moved between Mp and the cache is termed the block size. The block size is also typically the size of the data in the AD pair* and is assumed to be that for this discussion.
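The prefetching effect of a larger block size can be illustrated with a small sketch (the helper name is hypothetical; an unbounded cache is assumed so that only block size, not replacement, affects the count):

```python
# Sketch of the block-size effect: on a miss, the whole aligned block of
# adjacent Mp words is brought in, so sequential references to its
# neighbors then hit without further Mp accesses.
def misses_with_block(references, block_size):
    resident = set()                   # blocks currently in the cache
    misses = 0
    for addr in references:
        block = addr // block_size     # aligned block containing addr
        if block not in resident:
            misses += 1
            resident.add(block)        # fetch the entire block from Mp
    return misses

seq = list(range(64))                  # a sequential reference stream
print(misses_with_block(seq, 1))       # prints 64: every word misses
print(misses_with_block(seq, 4))       # prints 16: 3 of every 4 prefetched
```

For a purely sequential stream, a block size of b cuts the miss count by a factor of b, which is why sequential procedures and data structures favor multi-word blocks.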

In a fully associative cache, any AD pair can be stored in any cache location. This implies that either a single hardware comparator must compare the Mp address serially against the address portions of the AD pairs - which is too slow - or there must be a hardware comparator for each cache location - which is too expensive. An alternative form of cache organization which allows for an intermediate number of comparators is termed set associative.
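The set-associative compromise can be sketched as follows (names and the particular 2-way, 4-set geometry are illustrative, not from the text): the set is selected by indexing, and only the few AD pairs within that set need hardware comparison.

```python
# Sketch of set-associative mapping: the block address is split into a
# set index (used directly, no comparison) and a tag compared against
# only the AD pairs within the selected set.
def lookup(cache, addr, block_size, num_sets):
    """Probe a set-associative cache; return (hit, data)."""
    block = addr // block_size
    set_index = block % num_sets       # set chosen by indexing
    tag = block // num_sets            # remaining address bits
    for t, data in cache[set_index]:   # one comparator per AD pair in set
        if t == tag:
            return True, data          # hit
    return False, None                 # miss: Mp is accessed as usual

# A 2-way set-associative cache: 4 sets, each holding up to 2 AD pairs
# whose D portion is a 4-word block.
cache = [[] for _ in range(4)]
cache[0].append((1, "words 16-19"))    # addr 16: block 4 -> set 0, tag 1
print(lookup(cache, 17, 4, 4))         # prints (True, 'words 16-19')
print(lookup(cache, 33, 4, 4))         # prints (False, None): block 8 absent
```

With this geometry only two comparators are needed regardless of total cache size, sitting between the one-comparator (serial) and one-per-location (fully associative) extremes.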

A set associative cache consists of a number of sets which are accessed by indexing rather than by association. Each of the sets contains one or more AD pairs (of which the data portion is a block). There are as many hardware comparators as there are AD pairs in a set. The


* In a few complex cache organizations, such as that used in the IBM 360/85, the size of the D portion of the AD pair (called a sector in the 360/85) is larger than the block size. That possibility will be ignored in this discussion.
