LANS Seminar
Abstract: Modern deep learning has driven revolutionary advances across dozens of scientific fields, with notable breakthroughs in protein folding, image recognition, and natural language processing. However, a large gap remains between the compute efficiency of deep neural networks running on GPUs and other digital accelerators and the extraordinary efficiency of biological brains. Physics-based computing paradigms such as analog in-memory computing have the potential to dramatically increase the power efficiency and scalability of deep neural network architectures; however, core challenges remain. In this talk, I will discuss the advantages, challenges, and state of the art of analog in-memory computing.