Wednesday, 12 January 2022

Cache Coherence problem and solutions

Cache Coherence

For higher performance in a multiprocessor system, each processor will usually have its own cache. Cache coherence refers to the problem of keeping the data in these caches consistent. The main problem is dealing with writes by a processor.

There are two general strategies for dealing with writes to a cache:
  1. Write-through - all data written to the cache is also written to memory at the same time.
  2. Write-back - when data is written to a cache, a dirty bit is set for the affected block. The modified block is written to memory only when the block is replaced.

Write-through caches are simpler, and because main memory always holds the latest data they largely sidestep the coherence problem, but they increase bus traffic significantly. Write-back caches are more common where higher performance is desired. The MSI cache coherence protocol is one of the simpler write-back protocols.
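
To make the difference concrete, here is a minimal sketch in Python (the names CacheLine, write_through, write_back and evict are illustrative only, not from any real library) showing how the two policies treat a single cache block:

class CacheLine:
    def __init__(self):
        self.data = None
        self.dirty = False        # only used by the write-back policy

def write_through(line, memory, addr, value):
    # Every write updates both the cache line and main memory.
    line.data = value
    memory[addr] = value          # memory is always up to date

def write_back(line, addr, value):
    # Writes stay in the cache; the dirty bit records the pending update.
    line.data = value
    line.dirty = True             # memory is now stale for this block

def evict(line, memory, addr):
    # On replacement, a dirty write-back line must be flushed to memory.
    if line.dirty:
        memory[addr] = line.data
        line.dirty = False

# Example: one write-back write, then eviction.
memory = {0x10: 0}
wb_line = CacheLine()
write_back(wb_line, 0x10, 42)     # memory[0x10] is still 0 here
evict(wb_line, memory, 0x10)      # now memory[0x10] == 42

The key point is that under write-back, main memory holds stale data between the write and the eviction, and this is exactly the window in which another processor or an I/O device can read an out-of-date value.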

Conditions for incoherence 

The cache coherence problem exists in multiprocessors with private caches because of the need to share writable data. Read-only data can safely be replicated without cache coherence enforcement mechanisms.

In general, there are three sources of the inconsistency problem:

  1. Sharing of writable data
  2. Process migration
  3. I/O activity

Another configuration that may cause a consistency problem is direct memory access (DMA) activity by an I/O processor (IOP) connected to the system memory.

Solutions to the Cache Coherence problem

Various solutions are available for the cache coherence problem. Here we discuss some of them briefly.

Solution-I: A simple scheme is to disallow private caches for each processor and have a single shared cache memory associated with main memory. Every data access is made to the shared cache.

Disadvantage:

This method violates the principle of keeping the cache close to the CPU and increases the average memory access time. In effect, this scheme solves the problem by avoiding it.

Solution-II: Snoopy Protocols:

  • Snoopy protocols distribute the responsibility for maintaining cache coherence among all of the cache controllers in a multiprocessor system.
  • A cache must recognize when a line that it holds is shared with other caches.
  • When an update action is performed on a shared cache line, it must be announced to all other caches by a broadcast mechanism.
  • Each cache controller is able to “snoop” on the network to observe these broadcast notifications and react accordingly.
  • Snoopy protocols are ideally suited to a bus-based multiprocessor, because the shared bus provides a simple means for broadcasting and snooping.
  • Two basic approaches to the snoopy protocol have been explored: write-invalidate and write-update (write-broadcast).
  • With a write-invalidate protocol, there can be multiple readers but only one writer at a time.
  • Initially, a line may be shared among several caches for reading purposes.
  • When one of the caches wants to perform a write to the line, it first issues a notice that invalidates that line in the other caches, making the line exclusive to the writing cache. Once the line is exclusive, the owning processor can make local writes until some other processor requires the same line (a minimal sketch of this behaviour appears after this list).
  • With a write-update protocol, there can be multiple writers as well as multiple readers. When a processor wishes to update a shared line, the word to be updated is distributed to all the others, and caches containing that line can update it.
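
The write-invalidate idea can be summarised with a small, simplified model. The sketch below assumes a single cache line, an MSI-style set of states (Invalid, Shared, Modified) and a Bus object that stands in for the shared-bus broadcast mechanism; all class and function names here are illustrative, not part of any real protocol specification:

from enum import Enum

class State(Enum):            # simplified MSI states for one cache line
    INVALID = 0
    SHARED = 1
    MODIFIED = 2

class Cache:
    def __init__(self, name):
        self.name = name
        self.state = State.INVALID
        self.data = None

    def snoop_invalidate(self):
        # React to an invalidation broadcast from another cache.
        self.state = State.INVALID
        self.data = None

class Bus:
    # Stand-in for the shared bus that carries the broadcasts.
    def __init__(self, caches):
        self.caches = caches

    def broadcast_invalidate(self, writer):
        for c in self.caches:
            if c is not writer:
                c.snoop_invalidate()

def write(cache, bus, value):
    # Write-invalidate: gain exclusive ownership, then write locally.
    if cache.state != State.MODIFIED:
        bus.broadcast_invalidate(cache)   # other copies become INVALID
        cache.state = State.MODIFIED      # line is now exclusive to this cache
    cache.data = value                    # subsequent writes stay local

# Example: two caches share a line, then one of them writes it.
c0, c1 = Cache("P0"), Cache("P1")
bus = Bus([c0, c1])
c0.state = c1.state = State.SHARED        # both hold a read-only copy
c0.data = c1.data = 7
write(c0, bus, 99)
assert c0.state == State.MODIFIED and c1.state == State.INVALID

A write-update protocol would replace broadcast_invalidate with a broadcast that carries the new value, so the other caches overwrite their copies instead of discarding them.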
