TAG in AI Robustness, Explainability and Safety 2026
A Workshop at the Joint Math Meetings (JMM 2026), Seattle, WA, January 4, 2026
Answering questions about the safety, robustness, and explainability of AI models is becoming increasingly critical. Mathematics helps us understand AI failure modes and make AI more transparent and reliable. This special session features mathematics research that analyzes and addresses AI assurance concerns, showcasing how areas such as algebraic geometry, probability theory, and computational topology provide the insights required for AI systems to meet the needs of real-world applications.
Organizers:
Scott Mahan, Pacific Northwest National Laboratory
Eric Yeats, Pacific Northwest National Laboratory
Henry Kvinge, Pacific Northwest National Laboratory
Tim Doster, Pacific Northwest National Laboratory
Alexander Cloninger, UCSD
Location: Room 102A, Walter E. Washington Convention Center
Motivating coherence-driven inference via sheaves
8:00 a.m.
Steve Huntsman
Explicit loss minimizers and geometric generalization bounds in deep networks
8:30 a.m.
Thomas Chen
Curvature Tuning: Provable Training-free Model Steering From a Single Parameter
9:00 a.m.
Randall Balestriero
Poisoning Large Language Models with Model Editing
9:30 a.m.
David Shriver, Keltin Grimes, Marco Christiani, Marissa Connor
Retraining Emulation: A General Framework for Machine Unlearning
10:00 a.m.
Yiran Jia, Eric Yeats, Scott Mahan
The Measure of Deception: An Analysis of Data Forging in Machine Unlearning
10:30 a.m.
Rishabh Dixit
Detecting Collateral Damage in Unlearning for Diffusion-Based Image Generation Models
11:00 a.m.
Aaron Jacobson, Scott Mahan
Machine Unlearning via Information Theoretic Regularization
11:30 a.m.
Shizhou Xu, Thomas Strohmer