Cache Optimizations That Reduce Hit Time

Hit Time is defined as the time required to access a level of the memory hierarchy, including the time needed to determine whether the access is a hit or a miss.

Small and Simple First-Level Caches

A cache hit involves the following three steps:

  • addressing the tag memory using the index portion of the address
  • comparing the tag read from the tag memory with the tag portion of the address
  • selecting the correct data item if the cache is set associative.

The time required for these steps can be reduced by using a smaller cache, so the tag memory is faster to access, and by lowering the associativity, which cuts the number of tag comparisons. In fact, direct-mapped caches can overlap the tag check with the transmission of the data, since there is only one candidate block per index. Thus, a smaller and simpler cache improves Hit Time, as the sketch below illustrates.
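The following C sketch walks through the hit-check logic for an assumed direct-mapped cache with 64-byte blocks and 256 sets; the sizes and names are illustrative, not taken from any particular processor. It shows how the index field selects a single tag entry and how one comparison decides hit or miss, which is why the data can be forwarded while the tag check completes.

/* Minimal sketch (assumed parameters): a direct-mapped cache with
 * 64-byte blocks and 256 sets. The index selects one tag entry and
 * a single comparison decides hit or miss. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define BLOCK_BITS 6                        /* 64-byte blocks -> 6 offset bits */
#define INDEX_BITS 8                        /* 256 sets       -> 8 index bits  */
#define NUM_SETS   (1u << INDEX_BITS)

typedef struct {
    bool     valid;
    uint32_t tag;
} cache_line;

static cache_line cache[NUM_SETS];

/* Step 1: use the index field to address the tag memory.
 * Step 2: compare the stored tag with the tag field of the address.
 * (Step 3, way selection, is not needed: a direct-mapped cache has one
 *  way, so the data can be sent on while the tag check completes.)   */
bool is_hit(uint32_t addr)
{
    uint32_t index = (addr >> BLOCK_BITS) & (NUM_SETS - 1);
    uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);
    return cache[index].valid && cache[index].tag == tag;
}

int main(void)
{
    uint32_t addr = 0x12345678;
    uint32_t index = (addr >> BLOCK_BITS) & (NUM_SETS - 1);

    /* install the block, then probe it and a conflicting address */
    cache[index].valid = true;
    cache[index].tag   = addr >> (BLOCK_BITS + INDEX_BITS);

    printf("0x%08x -> %s\n", (unsigned)addr, is_hit(addr) ? "hit" : "miss");
    printf("0x%08x -> %s\n", (unsigned)(addr + 0x100000),
           is_hit(addr + 0x100000) ? "hit" : "miss");   /* same index, different tag */
    return 0;
}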

Way Prediction

In this optimization, extra bits are added to each block of the cache. These are called block predictor bits, and they predict which block (way) in the set to try first on the next access to that set. If the prediction is correct, the access completes with the fast Hit Time; if it is incorrect, the other blocks in the set are checked in later cycles, which lengthens the hit. Typical prediction accuracies are shown in the table below, followed by a sketch of the mechanism.

Set-Associativity    Prediction Accuracy
2-way                > 90%
4-way                > 80%
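The C sketch below shows one way such a predictor can work, assuming a 2-way set-associative cache with one predictor bit per set; the cache geometry, the predicted_way array, and the fast/slow hit classification are illustrative assumptions rather than a description of any specific hardware.

/* Minimal sketch (assumed organization): a 2-way set-associative cache
 * with one predictor bit per set. The predicted way is checked first;
 * a correct prediction gives the fast hit, a wrong one costs an extra
 * probe of the other way and updates the predictor. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define BLOCK_BITS 6
#define INDEX_BITS 7
#define NUM_SETS   (1u << INDEX_BITS)
#define WAYS       2

typedef struct {
    bool     valid;
    uint32_t tag;
} cache_line;

static cache_line cache[NUM_SETS][WAYS];
static uint8_t    predicted_way[NUM_SETS];   /* the block predictor bits */

typedef enum { FAST_HIT, SLOW_HIT, MISS } access_result;

access_result access_cache(uint32_t addr)
{
    uint32_t index = (addr >> BLOCK_BITS) & (NUM_SETS - 1);
    uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);
    uint8_t  guess = predicted_way[index];

    /* Try the predicted way first: only one tag comparison
     * sits on the critical path when the guess is right. */
    if (cache[index][guess].valid && cache[index][guess].tag == tag)
        return FAST_HIT;

    /* Wrong guess: probe the other way in a later cycle. */
    uint8_t other = 1 - guess;
    if (cache[index][other].valid && cache[index][other].tag == tag) {
        predicted_way[index] = other;        /* remember for next time */
        return SLOW_HIT;
    }
    return MISS;
}

void fill(uint32_t addr, uint8_t way)
{
    uint32_t index = (addr >> BLOCK_BITS) & (NUM_SETS - 1);
    cache[index][way].valid = true;
    cache[index][way].tag   = addr >> (BLOCK_BITS + INDEX_BITS);
}

int main(void)
{
    const char *name[] = { "fast hit", "slow hit", "miss" };
    uint32_t a = 0x1000;
    uint32_t b = a + (NUM_SETS << BLOCK_BITS);   /* same set, different tag */

    fill(a, 0);
    fill(b, 1);

    printf("%s\n", name[access_cache(a)]);   /* predictor starts at way 0: fast hit */
    printf("%s\n", name[access_cache(b)]);   /* wrong way predicted:       slow hit */
    printf("%s\n", name[access_cache(b)]);   /* predictor updated:         fast hit */
    return 0;
}

The design point the sketch illustrates is that a correct prediction costs only one tag comparison, while a misprediction adds an extra probe and flips the predictor bit, which is where the slower hit time in the incorrect case comes from.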
