2nd Annual TAG in Machine Learning
A Workshop at the 40th International Conference on Machine Learning, Honolulu, HI, July 28th, 2023, in Room 317B
Call for Papers
Paper Length and Format
Full paper submissions must be at most 8 pages in length (excluding references and supplementary materials) and anonymized for double-blind review. We will follow the ICML general conference submission criteria for papers; for details, please see the ICML Call For Papers. Note that reviewers will not be required to review the supplementary materials, so make sure that your paper is self-contained. For extended non-archival abstracts, please use the same template but limit the submission to 4 pages, inclusive of references. The submission site will provide an option to differentiate full papers from extended abstracts.
(NEW) Topological Deep Learning Challenge
Dr. Melanie Weber
Harvard University
Melanie is an Assistant Professor of Applied Mathematics and of Computer Science at Harvard University. Her research focuses on utilizing geometric structure in data for the design of efficient Machine Learning and Optimization methods. In 2021-2022, she was a Hooke Research Fellow at the Mathematical Institute in Oxford. Previously, she received her PhD from Princeton University (2021), held visiting positions at MIT and the Simons Institute in Berkeley, and interned in the research labs of Facebook, Google, and Microsoft. In addition to her academic work, she is the Chief Scientist of the Legal Artificial Intelligence startup Claudius.
The problem of identifying geometric structure in heterogeneous, high-dimensional data is a cornerstone of Representation Learning. In this talk, we study the problem of data geometry from the perspective of Discrete Geometry. We start by reviewing discrete notions of curvature with a focus on discrete Ricci curvature. Then we discuss how curvature is linked to mesoscale structure in graphs, which gives rise to applications of discrete curvature in node clustering and community detection. For downstream machine learning and data science applications, it is often beneficial to represent graph-structured data in a continuous space, which may be Euclidean or Non-Euclidean. We show that discrete curvature allows for characterizing the geometry of a suitable embedding space both locally and in the sense of global curvature bounds, which have implications for graph-based learning.
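As a small illustration of the ideas above (not code from the talk): the simplest combinatorial variant of Forman-Ricci curvature for an unweighted graph without 2-cells assigns each edge (u, v) the value 4 − deg(u) − deg(v), and edges with strongly negative curvature tend to be the "bridges" between communities, which is what makes discrete curvature useful for community detection. A minimal sketch, with the function name and toy graph our own:

```python
# Minimal sketch: combinatorial Forman-Ricci curvature for an unweighted,
# undirected graph without 2-cells: curvature(u, v) = 4 - deg(u) - deg(v).
# Edges with the most negative curvature tend to bridge communities.
from collections import defaultdict

def forman_curvature(edges):
    """Return {edge: curvature} for an undirected, unweighted graph."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# Toy example: two triangles joined by a single "bridge" edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("c", "d"),                        # bridge between the two triangles
         ("d", "e"), ("e", "f"), ("d", "f")]
curv = forman_curvature(edges)
# The bridge ("c", "d") has the lowest curvature: 4 - 3 - 3 = -2.
```

Ollivier-Ricci curvature, the talk's main focus, replaces this combinatorial count with an optimal-transport comparison of neighborhood distributions, but it exhibits the same qualitative sign behavior on bridges versus intra-community edges.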
Dr. Michael Bronstein
University of Oxford
Michael Bronstein joined the Department of Computing as Professor in 2018. He has served as a professor at USI Lugano, Switzerland since 2010 and has held visiting positions at Stanford, Harvard, MIT, TUM, and Tel Aviv University. Michael received his PhD with distinction from the Technion (Israel Institute of Technology) in 2007. His main expertise is in theoretical and computational geometric methods for machine learning and data science, and his research encompasses a broad spectrum of applications ranging from computer vision and pattern recognition to geometry processing, computer graphics, and biomedicine. Michael has authored over 200 papers and a book, and holds over 35 granted patents. He has been awarded five ERC grants, two Google Faculty Research awards, two Amazon ML Research awards, the Facebook Computational Social Science award, the Dalle Molle prize, the Royal Society Wolfson Merit award, and the Royal Academy of Engineering Silver Medal. He is a PI and ML Lead in Project CETI, a TED Audacious Prize-winning collaboration aimed at understanding the communication of sperm whales. During 2017-2018 he was a fellow at the Radcliffe Institute for Advanced Study at Harvard University, and since 2017 he has been a Rudolf Diesel fellow at TU Munich. He was invited as a Young Scientist to the World Economic Forum, an honour bestowed on forty of the world's leading scientists under the age of forty. Michael is a Member of Academia Europaea; a Fellow of IEEE, IAPR, ELLIS, and BCS; an alumnus of the Technion Excellence Program and the Academy of Achievement; and an ACM Distinguished Speaker. In addition to his academic work, Michael's industrial experience includes technological leadership in multiple startup companies, including Novafora, Videocites, Invision (acquired by Intel in 2012), and Fabula AI (acquired by Twitter in 2019). Following the acquisition of Fabula, he joined Twitter as Head of Graph Learning Research.
He previously served as Principal Engineer at Intel Perceptual Computing (2012-2019) and was one of the key developers of the Intel RealSense 3D camera technology. He is also an angel investor and supporter of multiple early-stage startups.
Title: Graph Rewiring in GNNs
Dr. Yusu Wang
University of California, San Diego
Yusu Wang is currently Professor in the Halicioglu Data Science Institute at the University of California, San Diego, where she also serves as the Associate Director for Research for the NSF National AI Institute TILOS. She obtained her PhD degree from Duke University in 2004, and from 2004-2005 she was a post-doctoral fellow at Stanford University. Yusu Wang primarily works in the fields of computational geometry, computational and applied topology, and applications to data analysis. She received the DOE Early Career Principal Investigator Award in 2006 and the NSF CAREER Award in 2008. She is currently a member of the Computational Geometry Steering Committee, and also serves on the editorial boards for the SIAM Journal on Computing (SICOMP) and the Journal of Computational Geometry (JoCG).
Machine learning, and especially the use of neural networks, has shown great success in a broad range of applications. Recently, neural approaches have also shown promise in tackling (combinatorial) optimization problems in a data-driven manner. On the other hand, for many problems, especially geometric optimization problems, many beautiful geometric ideas and algorithmic insights have been developed in fields such as theoretical computer science and computational geometry. Our goal is to infuse geometric and algorithmic ideas into the design of neural frameworks so that they can be more effective and generalize better. In this talk, I will give two examples in this direction. The first is a mixed neural-algorithmic framework for the Steiner Tree problem in Euclidean space, leveraging the celebrated PTAS algorithm by Arora. Interestingly, here the model complexity can be made independent of the input point set size. The second is a neural architecture for approximating the Wasserstein distance between point sets, whose design and analysis use a geometric coreset idea.
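As background for the second example (an illustrative exact baseline, not the neural architecture from the talk): for two point sets of equal size carrying uniform mass, the Wasserstein-1 distance reduces to a minimum-cost bipartite matching, which the Hungarian algorithm solves exactly. A minimal sketch, with the function name our own:

```python
# Exact Wasserstein-1 distance between two equal-size point sets with
# uniform weights: optimal transport reduces to min-cost bipartite matching.
# This is the quantity a neural architecture would learn to approximate.
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_1(X, Y):
    """X, Y: (n, d) arrays of n points in R^d, each with mass 1/n."""
    # Pairwise Euclidean cost matrix, shape (n, n).
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return cost[rows, cols].mean()             # average matched distance

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 1.0], [1.0, 1.0]])
print(wasserstein_1(X, Y))  # each point travels distance 1, so W1 = 1.0
```

The exact solver scales cubically in the number of points, which is one motivation for fast learned approximations.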
Dr. Tess Smidt
Massachusetts Institute of Technology
3D Euclidean symmetry-equivariant neural networks (E(3)NNs) are emerging as an effective machine learning paradigm in molecular modeling, protein design, computer graphics, and beyond. In this talk, I'll discuss the fundamental building blocks of E(3)NNs and how these pieces are combined to create the growing zoo of E(3)NNs available today.
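The defining property these networks enforce by construction is equivariance: rotating the input and then applying the network gives the same result as applying the network and then rotating the output. A toy numerical check of that property (using a hand-built equivariant map, not an actual E(3)NN layer):

```python
# Toy check of rotation equivariance, f(x R^T) = f(x) R^T, the property
# E(3)-equivariant networks guarantee by construction. Here f scales each
# 3D vector by a function of its rotation-invariant norm, which is one of
# the simplest equivariant operations.
import numpy as np
from scipy.spatial.transform import Rotation

def f(points):
    norms = np.linalg.norm(points, axis=-1, keepdims=True)
    return points * np.tanh(norms)   # invariant scaling => equivariant map

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                      # 5 points in R^3
R = Rotation.random(random_state=42).as_matrix() # random 3D rotation

lhs = f(x @ R.T)   # rotate inputs, then apply f
rhs = f(x) @ R.T   # apply f, then rotate outputs
assert np.allclose(lhs, rhs)
```

E(3)NNs extend this idea beyond vectors to features of arbitrary rotation order (scalars, vectors, tensors), combined through tensor products so that every layer preserves the symmetry.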
Dr. Tegan Emerson
Tegan Emerson received her PhD in Mathematics from Colorado State University. She was a Jerome and Isabella Karle Distinguished Scholar Fellow in optical sciences at the Naval Research Laboratory from 2017-2019. In 2014 she had the honor of being a member of the American delegation at the Heidelberg Laureate Forum. Dr. Emerson is now a Senior Data Scientist and Team Leader in the Data Sciences and Analytics Group at Pacific Northwest National Laboratory. In addition to that role, Dr. Emerson also holds joint appointments as affiliate faculty in the Departments of Mathematics at Colorado State University and the University of Texas at El Paso. Her research interests include geometric and topological data analysis, dimensionality reduction, algorithms for image processing and materials science, deep learning, and optimization.
Dr. Henry Kvinge
Henry Kvinge received his PhD in Mathematics from UC Davis, where his research focused on the intersection of representation theory, algebraic combinatorics, and category theory. After two years as a postdoc in the Department of Mathematics at Colorado State University, where he worked on the compressive sensing-based algorithms underlying single-pixel cameras, he joined PNNL as a senior data scientist. These days his work focuses on leveraging ideas from geometry and representation theory to build more robust and adaptive deep learning models and frameworks.
Dr. Tim Doster
Pacific Northwest National Laboratory
Tim Doster is a Senior Data Scientist at the Pacific Northwest National Laboratory. He received the B.S. degree in computational mathematics from the Rochester Institute of Technology in 2008 and the Ph.D. degree in applied mathematics and scientific computing from the University of Maryland, College Park, in 2014. From 2014 to 2016, he was a Jerome and Isabella Karle Distinguished Scholar Fellow before becoming a Permanent Research Scientist in the Applied Optics division with the U.S. Naval Research Laboratory. During his time with the U.S. Naval Research Laboratory he won the prestigious DoD Laboratory University Collaboration Initiative (LUCI) grant. His research interests include machine learning, harmonic analysis, manifold learning, remote sensing, few-shot learning, and adversarial machine learning.
Dr. Bastian Rieck
Bastian Rieck, M.Sc., Ph.D. is the Principal Investigator of the AIDOS Lab at the Institute of AI for Health at Helmholtz Munich, focusing on topology-driven machine learning methods in biomedicine. Bastian is also a faculty member of TUM, the Technical University of Munich, and a member of ELLIS, the European Laboratory for Learning and Intelligent Systems. Wearing yet another hat, he serves as the co-director of the Applied Algebraic Topology Research Network. Bastian received his M.Sc. degree in mathematics, as well as his Ph.D. in computer science, from Heidelberg University in Germany. He is a big proponent of scientific outreach and enjoys blogging about his research, academia in general, and software development.
Dr. Nina Miolane
University of California, Santa Barbara
Nina Miolane is an Assistant Professor at the University of California, Santa Barbara, where she directs the BioShape Lab. Her research investigates the hidden geometries of life: how the shapes of neuronal activity, proteins, cells, and organs relate to their healthy and pathological biological functions. Her team co-develops Geomstats, an open-source software package for differential geometry and machine learning. Prof. Miolane graduated from Ecole Polytechnique and Imperial College, received her Ph.D. from Inria, was a postdoctoral fellow at Stanford, and previously worked as a software engineer. Her research funding includes an NIH R01 grant, the NSF SCALE MoDL program, Google Season of Codes, and the Noyce Initiative UC Partnerships in Computational Transformation Program grant. Prof. Miolane was the recipient of the France-Stanford Award for Applied Science and the L'Oréal-UNESCO For Women in Science Award, and was a co-winner of the C3.ai COVID-19 Grand Challenge.
Dr. Sophia Sanborn
University of California, Santa Barbara
Sophia Sanborn is a Postdoctoral Scholar in the Department of Electrical and Computer Engineering at UC Santa Barbara. Her research lies at the intersection of applied mathematics, machine learning, and computational neuroscience. In her work, Dr. Sanborn uses methods from group theory and differential geometry to model neural representations in biology and construct artificial neural networks that reflect and respect the symmetries and geometry of the natural world. She received her Ph.D. in 2021 from UC Berkeley in the Redwood Center for Theoretical Neuroscience and is the recipient of the Beinecke Scholarship, the NSF GRFP, and the PIMS Postdoctoral Fellowship.
Mathilde Papillon
University of California, Santa Barbara
Mathilde Papillon is a Physics PhD student in the BioShape Lab at UC Santa Barbara where she develops novel deep learning methods leveraging geometry and topology. She harnesses these models to study relational data, with a special focus on full-body human movement. Mathilde obtained her BSc in Honours Physics from McGill University, and also works as a data scientist in sports analytics.