Label taped to the video case reads: "Cache memories are used in almost all large and medium-size computers in order to reduce the average main memory access time. Cache memories are also in use with, or being built for (or on), high-performance microprocessors. In this talk, we provide an overview of the issues in cache memory design, concentrating on some recent work by the speaker. Particular attention is given to cache workloads, cache consistency mechanisms, and the miss ratio as a function of line size and cache size. Other topics may include: cache fetch algorithms (demand vs. prefetch), placement (set associative, direct mapping, etc.) and replacement (LRU, FIFO, etc.) algorithms, store-through vs. copy-back updating of main memory, cold-start vs. warm-start miss ratios, the effect of input/output through the cache, virtual address caches, user/supervisor caches, multilevel caches, the behavior of split instruction/data caches, and translation lookaside buffers."
Stanford Computer Forum Distinguished Lecture Series
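Several of the topics the label lists, such as set-associative placement, LRU replacement, and the miss ratio as a function of cache and line size, can be illustrated with a minimal simulator sketch. This is an illustrative toy, not the speaker's model; the function name and parameter defaults are assumptions chosen for the example.

```python
from collections import OrderedDict

def simulate_cache(addresses, cache_size=64, line_size=4, assoc=2):
    """Simulate a set-associative cache with LRU replacement and
    return the miss ratio for a sequence of byte addresses.

    cache_size and line_size are in bytes; assoc is the number of
    ways per set (assoc=1 gives direct mapping).
    """
    num_lines = cache_size // line_size
    num_sets = num_lines // assoc
    # One OrderedDict per set: keys are tags, insertion order tracks
    # recency (least recently used entry is first).
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for addr in addresses:
        block = addr // line_size      # strip the byte-offset bits
        index = block % num_sets       # which set this block maps to
        tag = block // num_sets        # remaining high-order bits
        s = sets[index]
        if tag in s:
            s.move_to_end(tag)         # hit: mark most recently used
        else:
            misses += 1                # miss: demand-fetch the line
            if len(s) >= assoc:
                s.popitem(last=False)  # evict the LRU line
            s[tag] = True
    return misses / len(addresses)

# A sequential scan of 64 byte addresses with 4-byte lines touches 16
# distinct blocks; each block misses once and then hits 3 times.
print(simulate_cache(list(range(64))))  # → 0.25
```

Varying `line_size`, `cache_size`, and `assoc` here reproduces in miniature the kind of miss-ratio-versus-parameter study the abstract describes.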