Sunday, 26 December 2021

Cache Memory

Cache memory is intended to give a memory speed approaching that of the fastest memories available, while at the same time providing a large memory size at the price of less expensive types of semiconductor memory. There is a relatively large and slow main memory together with a smaller, faster cache memory that contains a copy of portions of main memory.

When the processor attempts to read a word of memory, a check is made to determine whether the word is in the cache. If so, the word is delivered to the processor. If not, a block of main memory, consisting of a fixed number of words, is read into the cache and then the word is delivered to the processor.

If frequently accessed programs and data are placed in a fast memory, the average access time can be reduced. This type of small, fast memory is called cache memory, and it is placed between the CPU and main memory.
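
As a rough illustration of the benefit, a commonly used simplified formula (assuming every miss costs exactly one main memory access) is

 t_avg = h × t_cache + (1 − h) × t_main

where h is the hit ratio (the fraction of accesses found in the cache), t_cache is the cache access time and t_main is the main memory access time. For example, with h = 0.9, t_cache = 10 ns and t_main = 100 ns, the average access time is 0.9 × 10 + 0.1 × 100 = 19 ns, far closer to cache speed than to main memory speed.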


When the CPU needs to access memory, the cache is examined first. If the word is found in the cache, it is read from the cache; this is called a cache hit. If the word is not found in the cache, a cache miss occurs and main memory is accessed to read the word. A block of words containing the one just accessed is then transferred from main memory to cache memory.

When a cache hit occurs, the data and address buffers are disabled and the communication is only between processor and cache with no system bus traffic. When a cache miss occurs, the desired word is first read into the cache and then transferred from cache to processor.
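
The flow just described can be sketched in a few lines of C. The sketch below is a simplified software model, not real hardware: every name is invented for illustration, there is only a single 4-word cache line, and writes and replacement policy are ignored.

    #include <stdio.h>
    #include <stdbool.h>

    #define BLOCK_WORDS 4

    static int main_memory[1024];         /* large, slow memory            */
    static int cache_data[BLOCK_WORDS];   /* small, fast memory (one line) */
    static int cached_block = -1;         /* which memory block is cached  */

    static bool cache_lookup(int addr, int *word)
    {
        if (addr / BLOCK_WORDS != cached_block)
            return false;                 /* cache miss                    */
        *word = cache_data[addr % BLOCK_WORDS];
        return true;                      /* cache hit: no bus traffic     */
    }

    static void fetch_block(int addr)
    {
        cached_block = addr / BLOCK_WORDS;
        for (int i = 0; i < BLOCK_WORDS; i++)
            cache_data[i] = main_memory[cached_block * BLOCK_WORDS + i];
    }

    static int read_word(int addr)
    {
        int word;
        if (cache_lookup(addr, &word))    /* cache hit                     */
            return word;
        fetch_block(addr);                /* cache miss: read whole block  */
        cache_lookup(addr, &word);        /* now guaranteed to hit         */
        return word;
    }

    int main(void)
    {
        main_memory[42] = 7;
        printf("%d\n", read_word(42));    /* miss: block 10 is fetched     */
        printf("%d\n", read_word(43));    /* hit: word 43 is in same block */
        return 0;
    }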

Locality of Reference: 

References to memory at any interval of time tend to be confined within a few localized areas of memory. This property is called locality of reference. It arises because program loops and subroutine calls are encountered frequently. When a program loop is executed, the CPU executes the same portion of the program repeatedly. Similarly, when a subroutine is called, the CPU fetches the starting address of the subroutine and executes the subroutine program. Thus loops and subroutines localize references to memory.

This principle states that memory references tend to cluster: over a long period of time the clusters in use change, but over a short period of time the processor works primarily with a fixed cluster of memory references.

Spatial Locality: 

It refers to the tendency of execution to involve a number of memory locations that are clustered. It reflects the tendency of a program to access data locations sequentially, such as when processing a table of data.

Temporal Locality: 

It refers to the tendency of a processor to access memory locations that have been used recently. For example, an iterative loop executes the same set of instructions repeatedly.
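
Both forms of locality are visible in ordinary code. In the C fragment below (purely illustrative), the loop body and the variable sum are reused on every iteration (temporal locality), while the elements of a[] are accessed sequentially (spatial locality):

    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        int a[N], sum = 0;

        for (int i = 0; i < N; i++)   /* sequential writes: spatial locality   */
            a[i] = i;

        for (int i = 0; i < N; i++)   /* loop body repeated: temporal locality */
            sum += a[i];              /* sequential reads: spatial locality    */

        printf("%d\n", sum);
        return 0;
    }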

Cache Memory Mapping Functions: 

The transformation of data from main memory to cache memory is referred to as the memory-mapping process.

Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.

There are three different mapping functions in common use:

  • Direct Mapping
  • Associative Mapping
  • Set Associative Mapping

Direct Mapping:

It is the simplest technique: each block of main memory maps into only one possible cache line, that is, a given block of main memory can be placed in exactly one place in the cache.

Associative memories are expensive compared to random-access memories because of the added logic associated with each cell; direct mapping avoids this cost by implementing the cache with ordinary random-access memory. The mapping is expressed as

i = j modulo m

where i = cache line number, j = main memory block number, and m = number of lines in the cache.
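
For example, with m = 512 lines, main memory block j = 1537 maps to cache line i = 1537 modulo 512 = 1; blocks 1, 513, 1025, 1537, and so on all compete for that same cache line.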

The mapping function is easily implemented using the address. 

Consider, for example, a main memory of 32K words (requiring a 15-bit address) and a cache of 512 words. The 15-bit CPU address is divided into two fields: the nine least significant bits constitute the index field and the remaining six bits form the tag field.

 The number of bits in the index field is equal to the number of address bits required to access the cache memory.

In the general case, there are 2^k words in cache memory and 2^n words in main memory. The n-bit memory address is divided into two fields: k bits for the index field and n − k bits for the tag field. The direct-mapping cache organization uses the n-bit address to access main memory and the k-bit index to access the cache.

Each word in cache consists of the data word and its associated tag. 

When a request for a particular word arrives, the index portion of its address is used to access the cache. If the tag stored at that location is equal to the tag field of the requested address, a cache hit occurs and the data is returned; otherwise a miss occurs.
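
This tag comparison is simple bit manipulation. The sketch below is illustrative C only, using the example sizes from above (9-bit index and 6-bit tag out of a 15-bit address); filling the cache line on a miss is omitted:

    #include <stdio.h>
    #include <stdbool.h>

    #define INDEX_BITS 9                     /* 2^9 = 512 cache words        */
    #define CACHE_SIZE (1 << INDEX_BITS)
    #define INDEX_MASK (CACHE_SIZE - 1)

    struct line {
        bool     valid;
        unsigned tag;                        /* remaining 6 address bits     */
        unsigned data;                       /* one data word per line       */
    };

    static struct line cache[CACHE_SIZE];

    /* Returns true on a cache hit; the word is copied into *word. */
    static bool direct_mapped_lookup(unsigned addr, unsigned *word)
    {
        unsigned index = addr & INDEX_MASK;  /* 9 least significant bits     */
        unsigned tag   = addr >> INDEX_BITS; /* remaining 6 bits             */

        if (cache[index].valid && cache[index].tag == tag) {
            *word = cache[index].data;       /* cache hit                    */
            return true;
        }
        return false;                        /* cache miss: go to main memory */
    }

    int main(void)
    {
        /* preload one line to demonstrate a hit */
        cache[5].valid = true;
        cache[5].tag   = 3;
        cache[5].data  = 1234;

        unsigned word;
        unsigned addr = (3u << INDEX_BITS) | 5;  /* tag 3, index 5 */
        if (direct_mapped_lookup(addr, &word))
            printf("hit: %u\n", word);           /* prints hit: 1234 */
        return 0;
    }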

Associative Mapping:

The fastest and most flexible cache organization uses an associative memory. The associative memory stores both the address and the content (data) of the memory word. This permits any location in the cache to store any word from main memory. In the classic example, the 15-bit address value is written as a five-digit octal number and its corresponding 12-bit word as a four-digit octal number. A CPU address of 15 bits is placed in the argument register and the associative memory is searched for a matching address.

If the address is found, the corresponding 12-bit data word is read and sent to the CPU. If no match occurs, main memory is accessed for the word, and the address together with its data is then transferred to the associative cache memory.
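
In hardware the associative search happens in parallel across all entries; in software it can only be approximated by scanning every line. A sketch in illustrative C (the cache size of 128 lines is an arbitrary choice):

    #include <stdio.h>
    #include <stdbool.h>

    #define CACHE_LINES 128                    /* arbitrary illustrative size */

    struct entry {
        bool     valid;
        unsigned address;                      /* full 15-bit address stored  */
        unsigned data;                         /* 12-bit data word            */
    };

    static struct entry cache[CACHE_LINES];

    /* Any line may hold any word, so the full address is compared against
       every entry; real associative hardware does this in parallel. */
    static bool associative_lookup(unsigned addr, unsigned *word)
    {
        for (int i = 0; i < CACHE_LINES; i++) {
            if (cache[i].valid && cache[i].address == addr) {
                *word = cache[i].data;         /* cache hit */
                return true;
            }
        }
        return false;                          /* cache miss */
    }

    int main(void)
    {
        cache[7] = (struct entry){ true, 02777, 01234 };  /* octal, as in the text */

        unsigned word;
        if (associative_lookup(02777, &word))
            printf("hit: %04o\n", word);       /* prints hit: 1234 */
        return 0;
    }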

Set-Associative Mapping:

It is a compromise between direct and associative mapping that exhibits the strengths of both while reducing their disadvantages. The cache is divided into v sets, each of which has k lines, so the number of cache lines is

m = v × k
i = j modulo v


where i = cache set number, j = main memory block number, m = number of lines in the cache, and v = number of sets. A given block thus maps directly to a particular set, but can occupy any line in that set (associative mapping is used within the set).

The cache control logic interprets a memory address as three fields: tag, set, and word. The d set bits specify one of v = 2^d sets, and the s bits of the tag and set fields together specify one of the 2^s blocks of main memory.

The most common set associative mapping is 2 lines per set, and is called two-way set associative. It significantly improves hit ratio over direct mapping, and the associative hardware is not too expensive.
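
A two-way set-associative lookup combines the two previous sketches: the set bits index directly into one set, and the tag is then compared associatively against the k = 2 lines of that set. The sketch below is illustrative C with arbitrarily chosen sizes; the word-offset bits and miss handling are omitted:

    #include <stdio.h>
    #include <stdbool.h>

    #define SET_BITS 8                         /* d = 8, so v = 2^8 = 256 sets   */
    #define NUM_SETS (1 << SET_BITS)
    #define WAYS     2                         /* k = 2: two-way set associative */

    struct line {
        bool     valid;
        unsigned tag;
        unsigned data;
    };

    static struct line cache[NUM_SETS][WAYS];

    static bool set_associative_lookup(unsigned addr, unsigned *word)
    {
        unsigned set = addr & (NUM_SETS - 1);  /* set field selects one set     */
        unsigned tag = addr >> SET_BITS;       /* tag field compared in the set */

        for (int way = 0; way < WAYS; way++) { /* associative search in the set */
            if (cache[set][way].valid && cache[set][way].tag == tag) {
                *word = cache[set][way].data;  /* cache hit */
                return true;
            }
        }
        return false;                          /* cache miss */
    }

    int main(void)
    {
        /* two blocks with the same set field coexist in the two ways */
        cache[9][0] = (struct line){ true, 1, 111 };
        cache[9][1] = (struct line){ true, 2, 222 };

        unsigned word;
        if (set_associative_lookup((2u << SET_BITS) | 9, &word))
            printf("hit: %u\n", word);         /* prints hit: 222 */
        return 0;
    }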
