Management of Storage Resources, Power, and Performance in Enterprise Storage Systems Using Prefetching Method
, M.Sc. Thesis Sharif University of Technology ; Asadi, Hossein (Supervisor)
Abstract
Nowadays, enterprise storage systems are widely deployed in data centers serving databases, servers, and banking institutions. Due to the mechanical activity of disk drives, these systems are among the most power-consuming components in data centers. Prefetching and RAID, which are employed to enhance performance, are widely used in data storage systems. These two methods, however, impose a significant energy overhead on the system due to increased disk activity. In this thesis, we first elaborate on the shortcomings of prefetching and RAID methods. Then, we propose a new energy-efficient and RAID-compatible prefetching method (LPTAP), which aims at improving both performance and energy consumption. To evaluate...
Evaluating Data Prefetching Methods and Proposing an Energy-aware First Level Cache for Cloud Workloads
, Ph.D. Dissertation Sharif University of Technology ; Sarbazi Azad, Hamid (Supervisor)
Abstract
The data generation rate far exceeds the technology scaling rate, to the extent that a 40x gap between the two is projected for 2020. On one hand, unlike traditional HPC clusters, processors in data centers are not fully utilized; on the other hand, unlike traditional embedded processors, they are not idle most of the time. Therefore, the energy consumption of such processors is an important issue; otherwise, dealing with huge volumes of data will become problematic in the near future. In this dissertation, we show that while the first-level data cache encounters a high miss rate, traditional approaches such as data prefetching, which were efficient for...
Designing Instruction Prefetcher with Low Area Overhead for Server Workloads
, M.Sc. Thesis Sharif University of Technology ; Sarbazi Azad, Hamid (Supervisor) ; Lotfi Kamran, Pejman (Co-Supervisor)
Abstract
L1 instruction cache misses create a crucial performance bottleneck for server applications. Server applications extensively use operating system services and, as such, have a large instruction footprint that dwarfs the instruction cache size. Meanwhile, fast-access requirements preclude enlarging the instruction cache to hold the whole instruction footprint of current server workloads. Prior works proposed hardware prefetching schemes to eliminate or reduce the effect of instruction cache misses. They exploit the fact that server applications' instruction sequences are repetitive; by recording such sequences and prefetching based on them, L1 instruction misses can be reduced. While they...
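The record-and-replay idea this abstract alludes to can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual design: it logs instruction-cache miss addresses and, when a previously seen miss recurs, replays the addresses that followed it last time as prefetch candidates. All names (`HistoryPrefetcher`, `on_miss`, `stream_length`) are illustrative assumptions.

```python
class HistoryPrefetcher:
    """Records the sequence of instruction-cache miss addresses and,
    when a previously seen miss recurs, prefetches the blocks that
    followed it in the recorded history (a temporal stream)."""

    def __init__(self, stream_length=4):
        self.history = []         # global log of miss addresses, in order
        self.index = {}           # miss address -> last position in the log
        self.stream_length = stream_length

    def on_miss(self, addr):
        # Look up the address before appending: if this miss was seen
        # before, replay the addresses that followed it last time.
        prefetches = []
        if addr in self.index:
            pos = self.index[addr]
            prefetches = self.history[pos + 1 : pos + 1 + self.stream_length]
        # Record the new occurrence for future replays.
        self.index[addr] = len(self.history)
        self.history.append(addr)
        return prefetches
```

A real prefetcher bounds the history and index in hardware tables; the unbounded structures here are exactly the area overhead that work in this space tries to reduce.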
Proposing a Scalable and Energy-aware Architecture for Register File of GPUs
, Ph.D. Dissertation Sharif University of Technology ; Sarbazi-Azad, Hamid (Supervisor)
Abstract
Graphics Processing Units (GPUs) employ large register files to accommodate all active threads and accelerate context switching. Unfortunately, register files are a scalability bottleneck for future GPUs due to long access latency, high power consumption, and large silicon area provisioning. In this thesis, we propose the Latency-Tolerant Register File (LTRF) architecture to achieve low latency in a two-level hierarchical structure. We observe that compile-time interval analysis enables us to divide GPU program execution into intervals with an accurate estimate of a warp’s aggregate register working-set within each interval. The key idea of LTRF is to prefetch the estimated register...
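The two-level prefetching idea in this abstract can be sketched as follows. This is a simplified software analogy, not the LTRF hardware design: at each interval boundary, the registers in the compiler-estimated working set are "prefetched" from a large slow level into a small fast level, so that all accesses within the interval hit the fast level. The class and method names are illustrative assumptions.

```python
class TwoLevelRegisterFile:
    """Toy model of a hierarchical register file: a large slow main
    store plus a small fast level holding one interval's working set."""

    def __init__(self):
        self.main = {}    # large, slow register storage: reg -> value
        self.cache = {}   # small, fast level for the active interval

    def start_interval(self, working_set):
        # Write back the previous interval's registers, then prefetch
        # the compile-time-estimated working set of the new interval.
        self.main.update(self.cache)
        self.cache = {r: self.main.get(r, 0) for r in working_set}

    def read(self, reg):
        # Within an interval, every access hits the fast level.
        return self.cache[reg]

    def write(self, reg, value):
        self.cache[reg] = value
```

Because the working set is known per interval ahead of time, the slow level's latency is paid only at interval boundaries and can be overlapped with execution, which is the latency-tolerance property the abstract refers to.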