Trustworthy AI for Resilient and Adaptive Aerospace Autonomy

By Shahriar Talebi (Harvard University)

Talk Abstract: Future aerospace systems must operate efficiently and safely under unpredictable conditions. Advances in AI, sensing, and actuation open new opportunities for autonomy, yet current methods struggle with limited data, real-time constraints, and the need for safety-critical guarantees. This talk presents a Trustworthy AI framework that integrates geometry, control theory, and machine learning to enhance decision-making in aerospace applications. I will discuss how exploiting problem structure improves efficiency, how risk-aware control provides resilience in extreme scenarios modeled by heavy-tailed noise, and how data-driven multi-agent coordination supports large-scale autonomy. By combining learning, control, and geometry, this approach paves the way for safe, efficient, and scalable autonomy in next-generation aerospace systems.

Speaker Bio: Shahriar Talebi is currently affiliated with the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), the Faculty of Arts and Sciences (FAS), and the NSF AI Institute in Dynamic Systems (Dynamics AI). He received his Ph.D. in control theory from the University of Washington, Seattle, in 2023, under the supervision of Professor Mehran Mesbahi, with a focus on constrained decision-making and control in complex systems. His dissertation, “Constrained Policy Synthesis: Riemannian Flows, Online Regulation, and Distributed Games,” introduced novel approaches to policy optimization under constraints. Concurrently, he earned an M.Sc. in Mathematics, specializing in differential geometry, under Professor John M. Lee. His research develops rigorous mathematical frameworks that integrate geometry, machine learning, and control to study data-driven decision-making in complex systems, focusing on geometric methods for inference and decision-making under uncertainty and on scalable control frameworks.