Professor, Department of Computer Science and Engineering
University of Michigan
Power Management from Smartphones to Data Centers
Power has become a first-class design constraint in computing platforms, from the smartphone in your pocket to warehouse-scale computers in the cloud. Historically, semiconductor innovation has repeatedly provided more transistors (Moore's Law) at roughly constant power per chip by scaling down the supply voltage each generation. Unfortunately, voltage scaling has ended due to stability limits, and chip power densities are now increasing each generation on a trajectory that outstrips improvements in our ability to dissipate heat. To continue extracting value from Moore's Law, we need system-level approaches that improve efficiency and deliver more performance within tight energy, power, and thermal constraints.
In the first part of this talk, I will discuss Computational Sprinting, a technique that improves the responsiveness of smartphone platforms by transiently exceeding sustainable thermal limits: firing up numerous "dark silicon" cores to complete a sub-second burst of computation while buffering the resulting heat in a phase-change material embedded in the chip's heat sink. I will then shift focus to warehouse-scale computing and discuss power management for online data-intensive services. These applications, such as web search, social networking, and ad serving, must process terabytes of data at interactive time scales, making them a challenging target for power management.