TAG in Safety, Explainability and Robustness 2025
A Workshop at the Joint Math Meetings (JMM 2025), Seattle, WA, January 8, 2025
This special session showcases research that applies ideas from topology, algebra, and geometry to the goal of increasing the safety, robustness, or explainability of modern machine learning. We will feature research that (i) proposes novel approaches to machine learning by drawing on tools and ideas from topology, algebra, and geometry or (ii) uses mathematics to illuminate how and why existing state-of-the-art models work as well as they do in some situations but fail in others.
Organizers:
Henry Kvinge - Pacific Northwest National Laboratory
Tegan Emerson - Pacific Northwest National Laboratory
Tim Doster - Pacific Northwest National Laboratory
Scott Mahan - Pacific Northwest National Laboratory
Sarah McGuire - Michigan State University
8:00 AM
Berfin Simsek, Flatiron Institute & NYU; Johanni Brea and Amire Bendjeddou, EPFL
Room 613 (Level Six, Seattle Convention Center Arch at 705 Pike); all talks take place in this room.
8:30 AM
Analysis of internal activations to indicate undesirable behaviors in large language models
Jonathan H Tu, Pacific Northwest National Laboratory
9:00 AM
Herman Chau, University of Washington
9:30 AM
Vitaliy A Kurlin, University of Liverpool (UK)
10:00 AM
10:30 AM
Diss-lECT: Dissecting Data with local Euler Characteristic Transforms
Bastian Rieck, University of Fribourg
11:00 AM
Elizabeth Diane Coda, Pacific Northwest National Laboratory (PNNL); UC San Diego
11:30 AM
Andrew Lee, St. Thomas Aquinas College; Harlin Lee, University of North Carolina at Chapel Hill; Jose Perea, Northeastern University; Nikolas Schonsheck, Rockefeller University; and Madeleine Weinstein, University of Puget Sound
1:00 PM
Uniform convergence guarantees for adversarially robust learning
Rachel Morris and Ryan W. Murray, North Carolina State University
1:30 PM
Convergence rates for deterministic generative diffusion models
Matt Jacobs, UCSB, Isla Vista, CA
2:00 PM
Zehua Lai, University of Texas at Austin; Lek-Heng Lim, University of Chicago; and Yucong Liu, Georgia Institute of Technology
2:30 PM
POLICE: Provable Linear Constraint Enforcement for Deep Networks
Randall Balestriero, Brown University
3:00 PM
3:30 PM
Critical points of ReLU neural networks: Analytics and Empirics
Marissa Masden, University of Puget Sound
4:00 PM
Eliza O'Reilly, Johns Hopkins University; Ricardo Baptista, Caltech; and Yangxinyu Xie, The University of Texas at Austin
4:30 PM
An interpretation for the role of depth in a deep neural network
Patricia Munoz Ewald and Thomas Chen, University of Texas at Austin