By Ming Yin (Princeton University)
Talk Abstract: Reinforcement Learning has become the go-to approach for many sequential decision-making problems. In particular, offline reinforcement learning is the central framework for real-life applications where online interaction is not feasible. In such cases, data is often scarce, and sample complexity is a major concern. In this talk, I will introduce the primary challenges of offline RL and highlight recent efforts to address them. We will explore how various techniques can enhance sample efficiency and adapt to the complexity of individual problems. Additionally, we will examine the relationship between these methodologies and practical applications, and outline potential avenues for future work.
Speaker Bio: Ming Yin is a Postdoctoral Associate in the Electrical and Computer Engineering Department at Princeton University. He holds dual PhDs in Computer Science and Statistics from the University of California, Santa Barbara. His research spans the theory, algorithms, and applications of Machine Learning and Artificial Intelligence, with a particular emphasis on Reinforcement Learning and Generative AI. His work has been published in top venues such as NeurIPS, ICML, and ICLR, including an Oral presentation at AISTATS. Ming’s contributions have earned him multiple Rising Star Awards and recognition as a Best Paper Finalist at CVPR 2024. He has served as a Senior Program Committee member for many leading Machine Learning conferences and as an Area Chair for NeurIPS and ICML. Beyond academia, he gained industry experience through two summers at Amazon AWS AI.