All Seminars

Title: Game Theory, AI, and Optimal Transportation
Seminar: Computational Math
Speaker: Levon Nurbekyan of the University of California, Los Angeles
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-24 at 10:00AM
Venue: MSC W201
Abstract:
The modern world constitutes a network of complex socioeconomic interactions that lead to increasingly challenging decision-making problems for participating agents such as households, governments, firms, and autonomous systems. Consequently, we need refined mathematical models and solution techniques to address these difficulties.

In this talk, I will demonstrate how to apply mathematical game theory, optimal control, and statistical physics to model large systems of interacting agents and discuss novel dimension reduction and machine learning techniques for their solution.

An intriguing aspect of this research is that the mathematics of interacting agent systems provides a foundation for fast and robust core machine learning algorithms and their analysis. For example, I will demonstrate how to solve the regularity problem in normalizing flows based on their "crowd-motion" or optimal transportation interpretation.

Yet another essential utility of optimal transportation in data science is that it provides a metric in the space of probability measures. I will briefly discuss the application of this metric for robust solution methods in inverse problems appearing in physical applications.
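
For reference, the metric in question is typically the quadratic Wasserstein (optimal transportation) distance between probability measures $\mu$ and $\nu$,
\[
W_2(\mu,\nu)^2 \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int \|x-y\|^2 \, d\pi(x,y),
\]
where $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$; the precise variant used in the talk's applications may differ.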

I will conclude by discussing future research towards socioeconomic applications, data science, and intelligent autonomous systems.
Title: Zarankiewicz problem, VC-dimension, and incidence geometry
Seminar: Discrete Mathematics
Speaker: Cosmin Pohoata of Yale University
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-21 at 10:00AM
Venue: Zoom and W201
Abstract:
The Zarankiewicz problem is a central problem in extremal graph theory, which lies at the intersection of several areas of mathematics. It asks for the maximum number of edges in a bipartite graph on $2n$ vertices, where each side of the bipartition contains $n$ vertices, and which does not contain the complete bipartite graph $K_{s,t}$ as a subgraph. One of the many reasons this problem is rather special among Turán-type problems is that the extremal graphs in question, whenever available, always seem to have to be of an algebraic nature, in particular witnesses to basic intersection theory phenomena. The most tantalizing case is by far the diagonal problem, for which the answer is unknown for most values of $s = t$, and where it is a complete mystery what the extremal graphs could look like. In this talk, we will discuss a new phenomenon related to an important variant of this problem, which is the analogous question in bipartite graphs with bounded VC-dimension. We will present several new consequences in incidence geometry, which improve upon classical results.

This is based on joint work with Oliver Janzer.
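
For reference (conventions for $z$ vary slightly across the literature), the Zarankiewicz number and the classical upper bound alluded to above can be written as
\[
z(n,n;s,t) \;=\; \max\{\, e(G) : G \subseteq K_{n,n},\ K_{s,t} \not\subseteq G \,\}, \qquad
z(n,n;s,t) = O_{s,t}\!\left(n^{2-1/s}\right) \text{ for } s \le t,
\]
the latter being the Kővári–Sós–Turán theorem.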
Title: Coloring hypergraphs of small codegree, and a proof of the Erdős–Faber–Lovász conjecture
Seminar: Discrete Mathematics
Speaker: Tom Kelly of The University of Birmingham
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-19 at 10:00AM
Venue: Zoom
Abstract:
A long-standing problem in the field of graph coloring is the Erdős–Faber–Lovász conjecture (posed in 1972), which states that the chromatic index of any linear hypergraph on $n$ vertices is at most $n$, or equivalently, that a nearly disjoint union of $n$ complete graphs on at most $n$ vertices has chromatic number at most $n$. In joint work with Dong Yeap Kang, Daniela Kühn, Abhishek Methuku, and Deryk Osthus, we proved this conjecture for every sufficiently large $n$. Recently, we also solved a related problem of Erdős from 1977 on the chromatic index of hypergraphs of small codegree. In this talk, I will survey the history behind these results and discuss some aspects of the proofs.
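
In symbols, the conjecture (now a theorem for all sufficiently large $n$) states that if $H$ is a linear hypergraph on $n$ vertices, i.e., any two edges of $H$ share at most one vertex, then
\[
\chi'(H) \le n,
\]
where $\chi'(H)$ is the chromatic index: the least number of colors needed to color the edges of $H$ so that intersecting edges receive distinct colors.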
Title: Tensors and Training: Optimal Multidimensional Representations and Efficient Deep Learning
Seminar: Computational Math
Speaker: Elizabeth Newman of Emory University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-18 at 10:00AM
Venue: MSC W201
Abstract:
The explosion of available data and the revolution in computing technologies have created a critical need for both compressed representations of large, real-world data and powerful data-driven algorithms. In this talk, we will address these needs in two distinct ways: by obtaining optimal multidimensional approximations, and by designing efficient deep learning algorithms.

The traditional approach to dimensionality reduction and feature extraction is the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. In the first half of this talk, we will show that high-dimensional datasets are more compressible when treated as tensors (multiway arrays). We will obtain these representations using a tensor algebra under which notions of rank and the tensor SVD are consistent with their matrix counterparts. This framework yields provably optimal approximations, and we will support this theory with empirical studies.
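
A minimal numerical sketch of one such tensor algebra, the t-product of Kilmer and Martin, in which a tensor SVD is obtained from matrix SVDs of the frontal slices in the Fourier domain. The choice of this particular algebra and the function names below are our illustrative assumptions (the talk may use a more general transform-based product):

import numpy as np

def t_svd(A):
    # A is an n1 x n2 x n3 array. Take the FFT along the third dimension,
    # compute an ordinary SVD of each frontal slice in the transform domain,
    # and transform back. Truncating the resulting factors gives an optimal
    # low "tubal-rank" approximation in the Frobenius norm (the t-product
    # analog of Eckart-Young).
    n1, n2, n3 = A.shape
    Ahat = np.fft.fft(A, axis=2)
    k = min(n1, n2)
    Uhat = np.zeros((n1, k, n3), dtype=complex)
    Shat = np.zeros((k, k, n3), dtype=complex)
    Vhat = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3):
        U, s, Vh = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
        Uhat[:, :, i] = U
        Shat[:, :, i] = np.diag(s)
        Vhat[:, :, i] = Vh.conj().T
    # the factors are real up to round-off when A is real
    to_real = lambda T: np.real(np.fft.ifft(T, axis=2))
    return to_real(Uhat), to_real(Shat), to_real(Vhat)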

Deep neural networks (DNNs), flexible models composed of simple layers parameterized by weights, have been successful high-dimensional function approximators in countless applications. However, training DNNs (i.e., finding a good set of weights) is notoriously challenging, requiring significant time and computational resources. In the second half of this talk, we will describe two approaches for training separable DNNs, the commonly used architecture in which the weights of the final layer are applied linearly. We will leverage this linearity using partial optimization in a deterministic setting and iterative sampling in a stochastic setting. We will demonstrate empirically that both approaches yield faster convergence to more accurate DNN models while requiring less hyperparameter tuning.
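
A schematic sketch of the partial-optimization idea for a separable model: since the final weights enter linearly, they can be eliminated by a linear least-squares solve at every step, and only the nonlinear weights are updated by gradient descent. The toy feature map, the names, and the simplified (Kaufman-style) gradient below are our illustrative assumptions, not the speaker's exact algorithm:

import numpy as np

def features(K, X):
    # illustrative one-hidden-layer feature map; X is d x N, K is m x d
    return np.tanh(K @ X)

def train_step(K, X, Y, lr=1e-2, reg=1e-6):
    Z = features(K, X)                                   # m x N features
    # partial optimization: optimal linear weights in closed form,
    # W* = Y Z^T (Z Z^T + reg I)^{-1}
    W = Y @ Z.T @ np.linalg.inv(Z @ Z.T + reg * np.eye(Z.shape[0]))
    R = W @ Z - Y                                        # residual at W*
    # gradient of 0.5 * ||R||_F^2 with respect to K, treating W as fixed
    # (a Kaufman-style simplification of the variable-projection gradient)
    G = ((W.T @ R) * (1.0 - Z**2)) @ X.T
    return K - lr * G, W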

We will conclude with a discussion about new ideas to bring these two powerful data-based techniques together.
Title: Novel Methods for Parameter Estimation and Inverse Problems: from Big Data to Surrogate Data
Seminar: Computational Math
Speaker: Matthias Chung of Virginia Tech
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-14 at 10:00AM
Venue: MSC W201
Abstract:
Emerging fields such as data analytics, machine learning, and uncertainty quantification rely heavily on efficient computational methods for solving inverse problems. With growing model complexities and ever-increasing data volumes, existing inference methods have reached the limits of their applicability, and novel methods are urgently needed. In this talk, we discuss modern challenges in parameter estimation and inverse problems and examine novel approaches to overcome such challenges.

We focus on massive least-squares problems, where the size of the forward process exceeds the storage capacity of computer memory or the data are simply not available all at once, and on inference for dynamical systems with noisy data, model uncertainties, and unknown mechanisms. We present sampled limited-memory approaches, in which an approximation of the global curvature of the underlying least-squares problem is used to speed up initial convergence while automatically addressing potential ill-posedness. This research is a fundamental building block for accelerating machine learning approaches. We then discuss a novel surrogate data approach that merges mathematical models and stochastic processes to ultimately provide stringent uncertainty estimates. We demonstrate the benefits of the proposed methods for a wide range of application areas, including medical imaging and systems biology.
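
A rough illustrative stand-in (our own, not the speaker's algorithm) for the sampled-curvature idea on a least-squares problem whose rows cannot all be held in memory: an initial row sample builds an approximate Hessian that preconditions later mini-batch gradient steps. Here row_stream is a hypothetical callable returning sampled rows of A and the corresponding entries of b:

import numpy as np

def sampled_ls(row_stream, n, sample_size=500, steps=1000, reg=1e-6):
    # build a curvature estimate from an initial row sample
    A0, b0 = row_stream(sample_size)
    H = A0.T @ A0 / sample_size + reg * np.eye(n)   # sampled Hessian proxy
    Hinv = np.linalg.inv(H)
    x = np.zeros(n)
    for k in range(1, steps + 1):
        Ak, bk = row_stream(64)                     # fresh mini-batch of rows
        g = Ak.T @ (Ak @ x - bk) / len(bk)          # sampled gradient
        x = x - (1.0 / k) * (Hinv @ g)              # preconditioned step
    return x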
Title: Containers, cluster expansion, phase transitions and algorithms
Seminar: Discrete Math
Speaker: Aditya Potukuchi of the University of Illinois at Chicago
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-13 at 10:00AM
Venue: MSC W201
Abstract:
The talk will revolve around the following problem:

Given a d-regular bipartite graph G on n vertices, output approximately the number of independent sets in G.

This problem is well-studied in combinatorics and theoretical computer science. In combinatorics, we would like asymptotic expressions for certain families of graphs. In algorithms, this problem captures the difficulty of several counting and sampling problems and its computational complexity is currently unknown. I will describe some recent algorithmic progress on this problem, in particular, running time improvement and efficient algorithms if G satisfies some weak expansion conditions. The techniques combine graph container methods with cluster expansion methods from statistical physics and involve understanding certain phase transitions. I will also talk about using these techniques to approximately count the number of independent sets of a given size in the discrete hypercube, and point towards future applications. The talk is targeted at a broad audience, and no specialized knowledge of any of the mentioned topics is assumed.
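
To make the counting problem concrete, here is a brute-force counter (exponential time, only for tiny instances) applied to the 6-cycle, a 2-regular bipartite graph. It illustrates the quantity being approximated, not the algorithms discussed in the talk:

from itertools import combinations

def count_independent_sets(n, edges):
    # count all independent sets (including the empty set) in a graph
    # on vertices 0..n-1; exponential in n, for illustration only
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            S = set(subset)
            if all(adj[v].isdisjoint(S) for v in S):
                count += 1
    return count

# the 6-cycle has 18 independent sets
print(count_independent_sets(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]))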

The talk is based on joint work with Matthew Jenssen and Will Perkins.
Title: Fokker-Planck Equations and Machine Learning
Seminar: Computational Math
Speaker: Yuhua Zhu of Stanford University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-12 at 10:00AM
Venue: MSC W201
Abstract:
As the continuous limit of many discretized algorithms, PDEs can provide a qualitative description of an algorithm's behavior and give principled theoretical insight into many mysteries in machine learning. In this talk, I will give a theoretical interpretation of several machine learning algorithms using Fokker-Planck (FP) equations. In the first, we provide a mathematically rigorous explanation of why resampling outperforms reweighting in correcting biased data when stochastic gradient-type algorithms are used in training. In the second, we propose a new method to alleviate the double-sampling problem in model-free reinforcement learning, where the FP equation is used for the error analysis of the algorithm. In the last, inspired by an interacting particle system whose mean-field limit is a nonlinear FP equation, we develop an efficient gradient-free method that finds the global minimum exponentially fast.
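
For reference, the Fokker-Planck equation referred to here is the evolution equation for the density $\rho(x,t)$ of a diffusion $dX_t = b(X_t)\,dt + \sqrt{2D}\,dW_t$,
\[
\partial_t \rho \;=\; -\nabla \cdot (b\,\rho) + D\,\Delta \rho,
\]
where the precise drift $b$ and diffusion $D$ depend on the algorithm being modeled.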
Title: Thresholds
Seminar: Discrete Math
Speaker: Jinyoung Park of Stanford University
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-11 at 4:00PM
Venue: Online
Abstract:
Thresholds for increasing properties of random structures are a central concern in probabilistic combinatorics and related areas. In 2006, Kahn and Kalai conjectured that for any nontrivial increasing property on a finite set, its threshold is never far from its "expectation-threshold," which is a natural (and often easy to calculate) lower bound on the threshold. In this talk, I will first introduce the Kahn-Kalai Conjecture with some motivating examples and then talk about the recent resolution of a fractional version of the Kahn-Kalai Conjecture due to Frankston, Kahn, Narayanan, and myself. Some follow-up work, along with open questions, will also be discussed.
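
For reference, the conjecture asserts the existence of a universal constant $K$ such that, for every nontrivial increasing property $\mathcal{F}$,
\[
p_c(\mathcal{F}) \;\le\; K\, q(\mathcal{F}) \log \ell(\mathcal{F}),
\]
where $p_c$ is the threshold, $q$ the expectation-threshold, and $\ell(\mathcal{F})$ the maximum size of a minimal element of $\mathcal{F}$; the fractional version proved by Frankston, Kahn, Narayanan, and Park replaces $q$ with its fractional relaxation $q_f$.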
Title: A Journey to the World of Computational Inverse Problems
Seminar: Computational Math
Speaker: Julianne Chung, PhD, of Virginia Tech
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-10 at 10:00AM
Venue: MSC W201
Abstract:
In this talk, we take a journey to the world of computational inverse problems, where we highlight important connections to mathematics, statistics, machine learning, and applications. The main goal of an inverse problem is to extract underlying parameters or information from available, noisy observations. However, solving state-of-the-art inverse problems of interest poses enormous computational challenges.

We present new tools for tackling these challenges. We describe generalized hybrid projection methods, which are iterative methods for solving large-scale inverse problems, and we show how approximations provided by the iterative method can be used for subsequent uncertainty quantification. Then, for problems where training or calibration data are readily available, we describe recent advances in exploiting machine learning techniques for estimating regularization parameters. Examples from atmospheric inverse modeling and image processing are discussed.
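
A toy sketch of the last idea, learning a regularization parameter from training data: given pairs of true solutions and noisy observations for a known forward operator, pick the parameter that minimizes the average reconstruction error. The operator A, the Tikhonov form of the regularizer, and the grid search below are illustrative stand-ins rather than the methods of the talk:

import numpy as np

def tikhonov(A, b, lam):
    # Tikhonov-regularized solution x = argmin ||A x - b||^2 + lam ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def learn_lambda(A, training_pairs, grid=np.logspace(-6, 2, 50)):
    # pick the parameter minimizing the mean squared reconstruction error
    # over training pairs (x_true, b)
    errors = [np.mean([np.linalg.norm(tikhonov(A, b, lam) - x_true) ** 2
                       for x_true, b in training_pairs])
              for lam in grid]
    return grid[int(np.argmin(errors))]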
Title: Brill-Noether Theory of k-Gonal Curves
Seminar: Number Theory
Speaker: Kaelin Cook-Powell of Emory University
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-11-30 at 4:00PM
Venue: MSC W301
Abstract:

Given a curve $C$, the Brill-Noether variety $W^r_d(C)$ parameterizes line bundles on $C$ of degree $d$ and rank at least $r$. When $C$ is general in the moduli space $\mathcal{M}_g$ of smooth genus $g$ curves, these varieties exhibit a number of "desirable" geometric properties and their dimension can be computed explicitly in terms of $g$, $r$, and $d$. However, these varieties exhibit bizarre behaviour when one considers curves that are not general in $\mathcal{M}_g$. Our goal will be to understand how one can still study line bundles on these non-generic curves, called $k$-gonal curves. We begin with a study of the Brill-Noether varieties $W^r_d(C)$ and then consider a new variety $W^{\mu}(C)$ that parameterizes line bundles governed by the discrete invariant $\mu$.
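
For reference, the explicit dimension alluded to above is the Brill-Noether number
\[
\rho(g,r,d) \;=\; g - (r+1)(g - d + r),
\]
and for a general curve $C$ of genus $g$, $W^r_d(C)$ is nonempty if and only if $\rho \ge 0$, in which case it has dimension $\min(\rho, g)$.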

Using machinery from tropical geometry and Berkovich spaces, we may encode families of line bundles as a special family of tableaux known as $k$-uniform displacement tableaux. We will discuss how $k$-uniform displacement tableaux on rectangular partitions parameterize $W^r_d(C)$. Furthermore, we will push this combinatorial analysis to a family of partitions known as $k$-cores to parameterize the varieties $W^{\mu}(C)$ explicitly in terms of $k$-uniform displacement tableaux.