Understanding Last-Layer Retraining Methods for Fair Classification: Theory and Algorithms
Last-layer retraining (LLR) methods have emerged as an efficient framework for ensuring fairness and robustness in deep models. In this talk, we present an overview of existing methods and provide theoretical guarantees for several prominent ones. We then show that these naive methods fail under label noise in either the class or domain annotations. To address this, we present a new robust LLR method in the framework of two-stage corrections and demonstrate that it achieves state-of-the-art performance under domain label noise with minimal data overhead. Finally, we demonstrate that class label noise causes catastrophic failures even with robust two-stage methods, and propose a drop-in label correction that outperforms existing methods at very low computational and data cost.
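To give a sense of the basic idea, the following is a minimal illustrative sketch of last-layer retraining (not the speaker's specific method): the backbone network is frozen, and only a new linear head is refit on its features. Here the "backbone features" are simulated with random data, and the head is a logistic-regression layer trained by gradient descent; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for features produced by a frozen backbone:
# n examples, d-dimensional embeddings, binary labels.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

# Last-layer retraining: the backbone is untouched; we fit only a new
# linear head (logistic regression) on the (typically small, curated)
# retraining set.
w = np.zeros(d)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y) / n)           # gradient step on weights
    b -= lr * np.mean(p - y)                # gradient step on bias

acc = np.mean(((X @ w + b) > 0) == (y > 0.5))
```

Because only a d-dimensional linear head is optimized, this step is cheap relative to full fine-tuning, which is what makes LLR attractive when the retraining data is group-balanced or otherwise curated for fairness.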
Lalitha Sankar is a Professor in the School of Electrical, Computer and Energy Engineering at Arizona State University. She joined ASU as an assistant professor in fall 2012 and was an associate professor from 2018 to 2023. Her research lies at the intersection of the information and data sciences, drawing on signal processing, learning theory, and control theory to design machine learning algorithms with fairness, privacy, and robustness guarantees. She also applies these methods to complex networks, including the electric power grid and healthcare systems.
Lunch will be provided starting at 12:15pm