Abstract: Machine learning is being deployed across a growing number of high-stakes, mission-critical applications, including defense, transportation, and medicine. Yet when it comes to AI systems in safety-critical contexts, significant challenges remain. The current generation of ML models tends to be greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. They are brittle because they are susceptible to an emerging class of counter-AI attacks. They are opaque because, unlike traditional programs with their formal, debuggable code, AI systems are black boxes whose outputs cannot be explained, raising doubts about their reliability and biases. And they are shallow because their apparent competence often masks a limited, surface-level understanding of the tasks they perform.
In this talk, we will explore the AI assurance challenges that must be overcome for society to fully benefit from advances in machine learning.
Bio: Mikel Rodriguez is the director of the Artificial Intelligence and Autonomy Innovation Center at MITRE Labs and leads the AI Red Team for the Department of Defense. He obtained his PhD at the University of Central Florida's Center for Research in Computer Vision.