Machine learning has made impressive inroads in areas including genetics, medicine, computer vision, and automatic language translation. Most of these accomplishments, however, rest on algorithms developed for specific machine learning tasks. Lagging behind is a strong theoretical foundation that explains why machine learning algorithms succeed.
To address this gap, Purdue University is hosting a conference on “Approximation Theory and Machine Learning.” Two researchers from Argonne were invited to give presentations at the conference, September 29–30, 2018, in West Lafayette, Indiana.
Sven Leyffer, a senior computational mathematician in Argonne’s Mathematics and Computer Science (MCS) division, spoke on the connections between optimization and machine learning. How can techniques from optimization under uncertainty help us understand the robustness of deep learning? What effects do different model formulations, such as mixed-integer optimization and conic constraints, have? “Our goal is to highlight both the limitations and promises of these novel optimization methodologies and formulations for machine learning,” Leyffer said.
David Bindel, an associate professor at Cornell University currently on a year’s sabbatical in the MCS division, presented work he and his colleagues have been doing on Gaussian processes – a key part of the modern machine learning arsenal, but one for which standard factorization approaches to the underlying linear algebra problems scale poorly. In his talk, Bindel discussed recent work on more scalable kernel methods that combine preconditioned Krylov subspace methods, stochastic estimators, and tricks with structured matrices.
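To see why Krylov methods help here, consider the core Gaussian process computation: solving the linear system K α = y, where K is the (symmetric positive-definite) kernel matrix. A dense Cholesky factorization costs O(n³), whereas a Krylov method such as conjugate gradients needs only matrix-vector products with K, which can exploit structure in the kernel. The following is an illustrative sketch, not code from Bindel's talk; the helper names (`rbf_kernel`, `conjugate_gradient`) and all parameter choices are our own assumptions.

```python
import numpy as np

# Illustrative sketch (not from the talk): solve the GP system K @ alpha = y
# with conjugate gradients, a Krylov subspace method. Each iteration needs
# only a matrix-vector product with K, avoiding an O(n^3) factorization.

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix between point sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=1000):
    # Textbook CG for a symmetric positive-definite operator given as matvec.
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Observation noise regularizes K and keeps it well-conditioned.
K = rbf_kernel(X, X) + 1e-2 * np.eye(len(X))

alpha = conjugate_gradient(lambda v: K @ v, y)
print(np.allclose(K @ alpha, y, atol=1e-6))
```

In practice the matvec `K @ v` would itself be accelerated (e.g., via structured or low-rank kernel approximations) and preconditioned, which is where the methods described in the talk come in; plain dense products, as here, are only for illustration.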
The recorded talks and slides from the conference on Approximation Theory and Machine Learning are available online: https://www.youtube.com/playlist?list=PLYXnvrTLTswVSwGbDOsabTPJZlxw7Ubzu