Abstract: As we reach the limits of Moore’s law (the doubling of transistor density roughly every 18 months) and the end of Dennard scaling (constant power density as transistors shrink), it has become necessary to explore new paradigms to shape future computer architectures and software systems. A promising approach is Approximate Computing, which trades computation accuracy for improved performance and energy efficiency, relying on an application’s ability to tolerate some loss of accuracy in its output. Approximate computing has mostly been applied in non-HPC domains such as image processing, machine learning, and visualization; because scientific applications have stringent accuracy requirements, these techniques have seen limited use in HPC applications.
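To make the accuracy-for-performance trade-off concrete, here is a minimal sketch of loop perforation, one common approximate computing technique (not necessarily the one covered in the talk): a reduction skips a fixed fraction of its iterations, doing proportionally less work at the cost of a small output error. The function names and the stride parameter are illustrative, not from the source.

```python
import random

def mean_exact(xs):
    # Exact reduction: visits every element.
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=4):
    # Loop perforation: sample every `stride`-th element,
    # doing roughly 1/stride of the work for an approximate result.
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

random.seed(0)
data = [random.gauss(100.0, 10.0) for _ in range(100_000)]

exact = mean_exact(data)
approx = mean_perforated(data, stride=4)
rel_err = abs(exact - approx) / abs(exact)
print(f"exact={exact:.3f} approx={approx:.3f} rel_err={rel_err:.2%}")
```

For a well-behaved statistical reduction like this, the relative error stays small while the iteration count drops 4x; whether a given HPC kernel tolerates such perforation is exactly the kind of question the empirical studies described below address.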
For widespread adoption of approximate computing in HPC, we need to address several challenges, including understanding how approximations affect output quality and providing programming-language support. In this talk, I will describe some of our work on developing tools and methods to study the amenability of HPC applications to approximate computing techniques. I will also detail a framework that enables programmers to conduct empirical studies of the trade-offs of applying various approximate computing techniques to their HPC codes.