CPU Cache Memory: Points to Remember

Principle of 'locality of reference': at any given time, the CPU tends to access memory within a localized region of the address space, revisiting recently used locations (temporal locality) and their neighbours (spatial locality).

When the requested instruction or data is found in the cache, the access is called a hit; otherwise it is a miss.
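Both ideas can be made concrete with a toy model. The 8-word line size and the idealized never-evicting cache below are assumptions for illustration, not real hardware parameters:

```python
LINE_SIZE = 8  # words per cache line (assumed value)

def hit_rate(addresses):
    """Return the fraction of accesses that hit, assuming an ideal
    cache that keeps every line it has ever loaded (no eviction)."""
    loaded = set()
    hits = 0
    for addr in addresses:
        line = addr // LINE_SIZE  # which cache line this word falls in
        if line in loaded:
            hits += 1             # hit: the line is already cached
        else:
            loaded.add(line)      # miss: the whole line is loaded
    return hits / len(addresses)

# Sequential access (good spatial locality): only 1 miss per 8 words.
print(hit_rate(range(64)))   # → 0.875
```

Walking an array in order misses once per line and then hits seven times, which is exactly why locality makes caching pay off.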

When cache memory sits alongside the main memory, the arrangement is called a 'look-aside cache architecture'. In this arrangement, both main memory and cache 'see' the bus cycle at the same time. When the CPU starts a read cycle, the cache monitors the address on the bus; this activity of the cache is called a snoop operation. On a miss, main memory supplies the data onto the data bus, and the cache copies the data as it passes by; this activity of the cache is called a snarf operation.
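The snoop/snarf sequence might be modeled as follows. This is a minimal sketch; `LookAsideCache` and the dict-based memory are hypothetical names, not a real bus protocol:

```python
class LookAsideCache:
    """Toy look-aside cache sitting on the same bus as main memory."""

    def __init__(self):
        self.lines = {}  # address -> cached data

    def read(self, addr, main_memory):
        # Snoop: the cache watches the address the CPU places on the bus.
        if addr in self.lines:
            return self.lines[addr], "hit"
        # Miss: main memory drives the data onto the data bus ...
        data = main_memory[addr]
        # ... and the cache snarfs (copies) it as it goes by.
        self.lines[addr] = data
        return data, "miss"

memory = {0x100: 42}
cache = LookAsideCache()
print(cache.read(0x100, memory))  # → (42, 'miss')  first access
print(cache.read(0x100, memory))  # → (42, 'hit')   snarfed copy is reused
```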

Dirty data: data that the CPU has modified in the cache but that has not yet been updated in main memory.

Stale data: data that has been modified in main memory while the cache still holds an old, not-yet-updated copy of it.
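The two conditions can be shown side by side in a few lines. This is a hypothetical single-line model; the address and values are made up for illustration:

```python
ADDR = 0x200
memory = {ADDR: 1}
cache = {"value": 1, "dirty": False}  # cache starts consistent with memory

# CPU writes to the cache only -> the cached copy becomes DIRTY.
cache["value"] = 5
cache["dirty"] = True
print(cache["value"] != memory[ADDR])  # → True (memory is behind the cache)

# The dirty line is written back, then main memory is updated by another
# bus master (e.g. a DMA device) behind the cache's back -> the cached
# copy is now STALE.
memory[ADDR] = cache["value"]
cache["dirty"] = False
memory[ADDR] = 9                       # update that bypasses the cache
print(cache["value"], memory[ADDR])    # → 5 9 (cache is behind memory)
```

Dirty and stale are mirror images: dirty means the cache is ahead of memory, stale means memory is ahead of the cache.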

Two policies for handling data during a write cycle:

  1. Write back: the CPU writes the data to the cache only; the cache writes the data back to main memory later (for example, when the line is replaced).
  2. Write through: the CPU's write passes through the cache to main memory, so both copies are updated on every write.

Of the two, write-back is faster, since most writes touch only the fast cache.
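The two policies can be contrasted in a short sketch. This is a hypothetical dict-based model; `flush` stands in for the write-back that real caches perform on eviction:

```python
def write_through(cache, memory, addr, value):
    # Every CPU write updates both the cache and main memory immediately.
    cache[addr] = value
    memory[addr] = value

def write_back(cache, dirty, addr, value):
    # The CPU updates only the fast cache and marks the line dirty;
    # main memory is not touched on the write itself.
    cache[addr] = value
    dirty.add(addr)

def flush(cache, dirty, memory):
    # Deferred write-back: copy all dirty lines out to main memory.
    for addr in dirty:
        memory[addr] = cache[addr]
    dirty.clear()

cache, memory, dirty = {}, {0x10: 0}, set()
write_back(cache, dirty, 0x10, 7)
print(memory[0x10])        # → 0 (memory is stale until the flush)
flush(cache, dirty, memory)
print(memory[0x10])        # → 7
```

Note the trade-off the sketch makes visible: write-back leaves memory stale between the write and the flush, which is exactly the dirty-data situation defined above.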

Nowadays, CPUs have multilevel cache memory built into the CPU chip itself, which makes data access much faster. Intel currently uses a three-level cache hierarchy, named L1, L2, and L3.
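A multilevel lookup can be sketched like this; the cycle counts are illustrative assumptions, not Intel's published latencies:

```python
LATENCY = {"L1": 1, "L2": 4, "L3": 12, "RAM": 100}  # assumed cycle costs

def access(levels, memory, addr):
    """Search the cache levels in order; on a full miss, fetch from RAM
    and fill every level so the next access hits in L1.
    Returns (value, total cycles spent)."""
    cycles = 0
    for name, store in levels:
        cycles += LATENCY[name]
        if addr in store:
            return store[addr], cycles
    cycles += LATENCY["RAM"]
    value = memory[addr]
    for _, store in levels:
        store[addr] = value  # fill the hierarchy on the way back
    return value, cycles

levels = [("L1", {}), ("L2", {}), ("L3", {})]
print(access(levels, {7: 99}, 7))  # → (99, 117)  cold miss walks the whole hierarchy
print(access(levels, {7: 99}, 7))  # → (99, 1)    warm access hits in L1
```

The point of the hierarchy is visible in the cycle counts: a cold miss pays for every level plus RAM, while a warm access costs only the small, fast L1.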