Abstract: While formal reasoning in the machine learning domain has achieved remarkable success in certifying properties specified over highly complex, non-linear functions (e.g., DNNs), lifting this formal reasoning to these functions' derivatives has been severely understudied. Moreover, because gradient computation and, more broadly, differentiable programming form the backbone of modern machine learning and are equally ubiquitous in other applications such as Scientific Computing and Optimization, the immediate needs of practitioners have rapidly outpaced formal development on the programming languages side.
To address these challenges, I present a framework for abstract interpretation of Differentiable Programming and Automatic Differentiation (AD), and show how this idea allows us to cleanly, formally, and compositionally reason about both a function (e.g., a DNN, scientific model, or optimization objective) and its derivatives in a sound manner, even in the face of points of non-differentiability. I also show how this framework can be generalized to derivatives of arbitrary order, as well as instantiated with more expressive abstract domains to further improve both generality and precision. I will conclude by showing how to specifically leverage the structure of AD computations to optimally synthesize AD abstractions, thus maximizing the precision of the abstract interpretation.
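To give a flavor of the idea, the following is a minimal illustrative sketch (my own construction, not the framework presented in the talk): forward-mode AD lifted to the interval abstract domain, where each abstract value carries an interval enclosure of both the function and its derivative. The `IntervalDual` class and the handling of ReLU's non-differentiable point at 0 (over-approximating the derivative with [0, 1] when the input interval straddles 0) are assumptions made for this example.

```python
# Illustrative sketch: forward-mode AD over the interval abstract domain.
# Each abstract value encloses both the function value and its derivative,
# so the analysis remains sound even at non-differentiable points.

class IntervalDual:
    def __init__(self, lo, hi, dlo, dhi):
        self.lo, self.hi = lo, hi      # enclosure of the value
        self.dlo, self.dhi = dlo, dhi  # enclosure of the derivative

    @staticmethod
    def variable(lo, hi):
        # the input variable x ranging over [lo, hi]; dx/dx = 1
        return IntervalDual(lo, hi, 1.0, 1.0)

    def __add__(self, other):
        return IntervalDual(self.lo + other.lo, self.hi + other.hi,
                            self.dlo + other.dlo, self.dhi + other.dhi)

    def __mul__(self, other):
        def iprod(alo, ahi, blo, bhi):
            # interval multiplication: take min/max of endpoint products
            ps = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
            return min(ps), max(ps)
        vlo, vhi = iprod(self.lo, self.hi, other.lo, other.hi)
        # product rule (u*v)' = u'*v + u*v', each term via interval arithmetic
        t1lo, t1hi = iprod(self.dlo, self.dhi, other.lo, other.hi)
        t2lo, t2hi = iprod(self.lo, self.hi, other.dlo, other.dhi)
        return IntervalDual(vlo, vhi, t1lo + t2lo, t1hi + t2hi)

def relu(x):
    # ReLU is non-differentiable at 0: if the input interval straddles 0,
    # soundly over-approximate the derivative factor with [0, 1].
    if x.lo >= 0:
        return IntervalDual(x.lo, x.hi, x.dlo, x.dhi)
    if x.hi <= 0:
        return IntervalDual(0.0, 0.0, 0.0, 0.0)
    return IntervalDual(0.0, x.hi, min(0.0, x.dlo), max(1.0, x.dhi))

# f(x) = relu(x) * x, analyzed over x in [-1, 2]
x = IntervalDual.variable(-1.0, 2.0)
y = relu(x) * x
print("value in", [y.lo, y.hi], "derivative in", [y.dlo, y.dhi])
```

For x in [-1, 2] the true range of f is [0, 4] and of f' is [0, 4]; the computed enclosures (value in [-2, 4], derivative in [-1, 4]) are sound but imprecise supersets, which is exactly the precision loss that more expressive abstract domains and optimal abstraction synthesis aim to reduce.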
With this work, we take the first step toward unlocking the potential to define, analyze, and verify a brand-new set of formal properties expressed over derivatives, for a broad and general class of programs.
Bio: Jacob Laurel is a PhD student at the University of Illinois Urbana-Champaign. He completed a double degree in Electrical Engineering and Mathematics at the University of Alabama at Birmingham.