Abstract: As Moore’s Law reaches the end of its life, there is an urgent need for alternative computing methods to meet the soaring demands of modern computing (e.g., those arising from machine learning applications). In this talk, I will elaborate on our efforts to use “time” as a new dimension for accelerating computing in modern CMOS integrated circuits. I will first discuss our developments in mixed-signal time-domain computing, where we use “time” as an information carrier to solve modern machine learning problems such as time series analysis, image recognition, and artificial neural networks. Second, I will show how to leverage instruction-level timing information in GPU processors to save energy in modern computing tasks, including machine learning jobs.
Bio: Jie Gu is an assistant professor at Northwestern University. He is currently working on energy-efficient computing techniques for the design of modern microprocessors and machine learning accelerators.