
Domino cache: an energy-efficient data cache for modern applications

Naderan-Tahan, M. ; Sharif University of Technology | 2018

  1. Type of Document: Article
  2. DOI: 10.1145/3174848
  3. Publisher: Association for Computing Machinery, 2018
  4. Abstract:
  5. The energy consumption of processing modern workloads is a challenge in data centers. Due to the large datasets of cloud workloads, the miss rate of the L1 data cache is high, and with respect to energy-efficiency concerns, such misses are costly for memory instructions because lower levels of the memory hierarchy consume more energy per access than the L1. Moreover, large last-level caches are not performance effective, in contrast to their role for traditional scientific workloads. The aim of this article is to propose a large L1 data cache, called Domino, that reduces the number of accesses to lower levels in order to improve energy efficiency. In designing Domino, we focus on two components that occupy on-chip area but are not energy efficient, which makes them good candidates for donating their area to enlarge the L1 data cache. Domino is a highly associative cache that extends the conventional cache by borrowing the prefetcher and last-level-cache storage budget and using it as additional ways for the data cache. In Domino, the additional ways are separated from the conventional cache ways; hence, the critical path of the first access is not altered. On a miss in the conventional part, Domino searches the added ways in a mixed parallel-sequential fashion to trade off latency against energy consumption (a minimal sketch of this lookup flow follows the record below). Results on the CloudSuite benchmark suite show that read and write misses are reduced by 30%, along with a 28% reduction in snoop messages. The overall energy consumption per access is then reduced by 20% on average (maximum 38%) as a result of filtering accesses to the lower levels. © 2018 ACM
  6. Keywords:
  7. Cloud workloads ; Associative storage ; Budget control ; Buffer storage ; Cache memory ; Computer architecture ; Digital storage ; Energy utilization ; Green computing ; Associative cache ; Cache ; Energy ; Energy efficient ; Last-level caches ; Modern applications ; Prefetching ; Scientific workloads ; Energy efficiency
  8. Source: ACM Transactions on Design Automation of Electronic Systems ; Volume 23, Issue 3, April 2018 ; 1084-4309 (ISSN)
  9. URL: https://dl.acm.org/citation.cfm?doid=3184476.3174848
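The abstract describes the Domino lookup flow only in prose. The C++ fragment below is a minimal, hypothetical sketch of that flow, not code from the paper: the structure names, the group_size parameter, and the way the borrowed ways are grouped are illustrative assumptions, and the paper's actual way counts, grouping, and replacement policy are not reproduced here.

```cpp
// Hypothetical sketch of the Domino lookup flow described in the abstract.
// All names and sizes are illustrative, not taken from the paper.
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

struct Way {
    bool     valid = false;
    uint64_t tag   = 0;
};

struct DominoSet {
    std::vector<Way> conventional;   // original L1 ways: first-access critical path unchanged
    std::vector<Way> borrowed;       // extra ways carved out of the prefetcher / LLC budget
};

// Probe the conventional ways first; only on a miss, search the borrowed ways
// group by group (parallel within a group, sequential across groups) so that
// latency and per-access energy are traded off instead of probing all ways at once.
std::optional<size_t> domino_lookup(const DominoSet& set, uint64_t tag,
                                    size_t group_size /* ways probed in parallel */) {
    // 1. Conventional part: same lookup as a normal set-associative L1.
    for (size_t i = 0; i < set.conventional.size(); ++i)
        if (set.conventional[i].valid && set.conventional[i].tag == tag)
            return i;

    // 2. Miss in the conventional part: walk the borrowed ways in groups.
    for (size_t base = 0; base < set.borrowed.size(); base += group_size) {
        size_t end = std::min(base + group_size, set.borrowed.size());
        for (size_t i = base; i < end; ++i)          // modeled as one parallel probe
            if (set.borrowed[i].valid && set.borrowed[i].tag == tag)
                return set.conventional.size() + i;
        // Each additional group costs another sequential step: extra latency,
        // but energy is only spent on the groups actually searched.
    }
    return std::nullopt;  // overall miss: the request goes to the lower levels
}
```

The sequential walk over groups models the trade-off the abstract mentions: a hit in an early group adds little energy and latency, while only a full miss pays for probing every borrowed group before the access falls through to the lower levels of the hierarchy.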