All Seminars

Title: Tensors and Training: Optimal Multidimensional Representations and Efficient Deep Learning
Seminar: Computational Math
Speaker: Elizabeth Newman of Emory University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-18 at 10:00AM
Venue: MSC W201
Abstract:
The explosion of available data and the revolution in computing technologies have created a critical need for both compressed representations of large, real-world data and powerful data-driven algorithms. In this talk, we will address these needs in two distinct ways: by obtaining optimal multidimensional approximations, and by designing efficient deep learning algorithms.

The traditional approach to dimensionality reduction and feature extraction is the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. In the first half of this talk, we will show that high-dimensional datasets are more compressible when treated as tensors (multiway arrays). We will obtain these representations using a tensor algebra under which notions of rank and the tensor SVD are consistent with their matrix counterparts. This framework yields provably optimal approximations, and we will support this theory with empirical studies.
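As a concrete illustration of one such tensor algebra, the sketch below computes a truncated tensor SVD under the t-product of Kilmer and Martin, in which frontal slices are decomposed in the Fourier domain; the specific framework and the function name tsvd_truncate are my assumptions, not details taken from the talk.

```python
# Minimal truncated t-SVD sketch (assumes the t-product algebra of Kilmer and
# Martin; the tensor framework in the talk may differ in details).
import numpy as np

def tsvd_truncate(A, k):
    """Rank-k t-SVD approximation of a third-order tensor A (n1 x n2 x n3)."""
    A_hat = np.fft.fft(A, axis=2)                # move tube fibers to the Fourier domain
    X_hat = np.zeros_like(A_hat)
    for i in range(A.shape[2]):                  # truncated matrix SVD of each frontal slice
        U, s, Vh = np.linalg.svd(A_hat[:, :, i], full_matrices=False)
        X_hat[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]
    return np.real(np.fft.ifft(X_hat, axis=2))   # transform back

A = np.random.rand(32, 32, 16)                   # toy data
A5 = tsvd_truncate(A, 5)
print("relative error:", np.linalg.norm(A - A5) / np.linalg.norm(A))
```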

Deep neural networks (DNNs), flexible models composed of simple layers parameterized by weights, have been successful high-dimensional function approximators in countless applications. However, training DNNs (i.e., finding a good set of weights) is notoriously challenging, requiring significant time and computational resources. In the second half of this talk, we will describe two approaches for training separable DNNs, the commonly used architecture in which the weights of the final layer are applied linearly. We will leverage this linearity using partial optimization in a deterministic setting and iterative sampling in a stochastic setting. We will demonstrate empirically that both approaches yield faster convergence to more accurate DNN models while requiring less hyperparameter tuning.
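A minimal sketch of the partial-optimization idea for a separable model, assuming a variable-projection-style elimination of the final linear weights (the feature map phi and all names below are hypothetical, not the speaker's code):

```python
# Sketch of partial optimization for a separable model y ~ phi(X; theta) @ W:
# the final linear weights W are eliminated by an exact (regularized)
# least-squares solve, so an outer optimizer only needs to update theta.
import numpy as np

def phi(X, theta):
    return np.tanh(X @ theta)                    # toy nonlinear feature map

def reduced_loss(theta, X, Y, lam=1e-3):
    Z = phi(X, theta)                            # hidden features, n x h
    # inner problem: W = argmin_W ||Z W - Y||^2 + lam ||W||^2, solved in closed form
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)
    return 0.5 * np.linalg.norm(Z @ W - Y) ** 2

X, Y = np.random.randn(200, 10), np.random.randn(200, 3)
theta = np.random.randn(10, 20)
print(reduced_loss(theta, X, Y))                 # value seen by the outer optimizer
```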

We will conclude with a discussion about new ideas to bring these two powerful data-based techniques together.
Title: Novel Methods for Parameter Estimation and Inverse Problems: from Big Data to Surrogate Data
Seminar: Computational Math
Speaker: Matthias Chung of Virginia Tech
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-14 at 10:00AM
Venue: MSC W201
Abstract:
Emerging fields such as data analytics, machine learning, and uncertainty quantification heavily rely on efficient computational methods for solving inverse problems. With growing model complexities and ever-increasing data volumes, existing inference methods are being pushed beyond their limits of applicability, and novel methods are urgently needed. In this talk, we discuss modern challenges in parameter estimation and inverse problems and examine novel approaches to overcome these challenges.

We focus on massive least-squares problems, where the size of the forward process exceeds the storage capacity of computer memory or the data is simply not available all at once, and on inference for dynamical systems with noisy data, model uncertainties, and unknown mechanisms. We present sampled limited-memory approaches, in which an approximation of the global curvature of the underlying least-squares problem is used to speed up initial convergence while automatically addressing potential ill-posedness. This research is a fundamental building block for accelerating machine learning approaches. Then, we discuss a novel surrogate data approach that merges mathematical models and stochastic processes to ultimately provide stringent uncertainty estimates. We demonstrate the benefits of the proposed methods for a wide range of application areas, including medical imaging and systems biology.
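A rough sketch of the sampled limited-memory idea under my own assumptions: one batch of rows supplies an approximate, regularized curvature matrix that then preconditions subsequent sampled gradient steps. This is illustrative only and is not the speaker's algorithm.

```python
# Sketch of a sampled least-squares iteration: a sampled curvature (Gram)
# matrix built from one batch of rows preconditions later sampled gradient
# steps, so the full matrix A never needs to be held in memory at once.
import numpy as np

rng = np.random.default_rng(0)
m, n, batch = 100_000, 50, 500
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

idx0 = rng.choice(m, batch, replace=False)
H = A[idx0].T @ A[idx0] / batch + 1e-2 * np.eye(n)   # sampled curvature + regularization

x = np.zeros(n)
for _ in range(200):
    idx = rng.choice(m, batch, replace=False)        # rows arrive one batch at a time
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch     # sampled gradient
    x -= np.linalg.solve(H, g)                       # curvature-preconditioned step
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```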
Title: Containers, cluster expansion, phase transitions and algorithms
Seminar: Discrete Math
Speaker: Aditya Potukuchi of University of Illinois - Chicago
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-13 at 10:00AM
Venue: MSC W201
Abstract:
The main focus of the talk is the following problem:

Given a d-regular bipartite graph G on n vertices, output approximately the number of independent sets in G.

This problem is well-studied in combinatorics and theoretical computer science. In combinatorics, we would like asymptotic expressions for certain families of graphs. In algorithms, this problem captures the difficulty of several counting and sampling problems, and its computational complexity is currently unknown. I will describe some recent algorithmic progress on this problem, in particular running-time improvements and efficient algorithms when G satisfies some weak expansion conditions. The techniques combine graph container methods with cluster expansion methods from statistical physics and involve understanding certain phase transitions. I will also talk about using these techniques to approximately count the number of independent sets of a given size in the discrete hypercube, and point towards future applications. The talk is targeted at a broad audience, and no specialized knowledge of any of the mentioned topics is assumed.

The talk is based on joint work with Matthew Jenssen and Will Perkins.
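To fix the quantity being approximated, the brute-force reference count below enumerates independent sets of a small bipartite graph; such exact enumeration is precisely what becomes infeasible for the large d-regular bipartite graphs the algorithmic results target.

```python
# Brute-force count of independent sets in a small graph (exponential time).
from itertools import combinations

def count_independent_sets(vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            # S is independent if no pair of its vertices forms an edge
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                count += 1
    return count

# 3-regular bipartite example: the complete bipartite graph K_{3,3}
V = range(6)
E = [(i, j) for i in range(3) for j in range(3, 6)]
print(count_independent_sets(V, E))   # 15 = 2 * 2^3 - 1
```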
Title: Fokker-Planck Equations and Machine Learning
Seminar: Computational Math
Speaker: Yuhua Zhu of Stanford University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-12 at 10:00AM
Venue: MSC W201
Abstract:
As the continuous limit of many discretized algorithms, PDEs can provide a qualitative description of an algorithm's behavior and give principled theoretical insight into many mysteries in machine learning. In this talk, I will give a theoretical interpretation of several machine learning algorithms using Fokker-Planck (FP) equations. In the first, we provide a mathematically rigorous explanation of why resampling outperforms reweighting in correcting biased data when stochastic gradient-type algorithms are used in training. In the second, we propose a new method to alleviate the double sampling problem in model-free reinforcement learning, where the FP equation is used to carry out error analysis for the algorithm. In the last, inspired by an interacting particle system whose mean-field limit is a non-linear FP equation, we develop an efficient gradient-free method that finds the global minimum exponentially fast.
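A toy illustration of the resampling-versus-reweighting setup from the first part, under my own simplified model of biased data; the talk's actual analysis proceeds through Fokker-Planck limits of the training dynamics, which this sketch does not attempt.

```python
# Toy comparison of reweighting vs. resampling when training on biased data
# with a stochastic gradient method; both target the same balanced objective,
# but reweighting carries large importance weights.  Illustrative setup only.
import numpy as np

rng = np.random.default_rng(1)
# Two groups with means 0 and 4; group 1 is under-represented 9:1 in the data.
data = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(4.0, 1.0, 100)])
group = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])
w = np.where(group == 1, 0.5 / 0.1, 0.5 / 0.9)      # importance weights to a 50/50 target

def draw_reweighted():
    i = rng.integers(len(data))
    return data[i], w[i]                            # biased draw, corrected by a weight

def draw_resampled():
    i = rng.choice(len(data), p=w / w.sum())
    return data[i], 1.0                             # corrected draw, unit weight

def sgd(draw, steps=5000, lr=0.05):
    theta = 0.0
    for _ in range(steps):
        x, weight = draw()
        theta -= lr * weight * (theta - x)          # gradient of the weighted squared loss
    return theta

print("target 2.0 | reweighting:", round(sgd(draw_reweighted), 2),
      "| resampling:", round(sgd(draw_resampled), 2))
```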
Title: Thresholds
Seminar: Discrete Math
Speaker: Jinyoung Park of Stanford University
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-11 at 4:00PM
Venue: Online
Abstract:
Thresholds for increasing properties of random structures are a central concern in probabilistic combinatorics and related areas. In 2006, Kahn and Kalai conjectured that for any nontrivial increasing property on a finite set, its threshold is never far from its "expectation-threshold," which is a natural (and often easy to calculate) lower bound on the threshold. In this talk, I will first introduce the Kahn-Kalai Conjecture with some motivating examples and then talk about the recent resolution of a fractional version of the Kahn-Kalai Conjecture due to Frankston, Kahn, Narayanan, and myself. Some follow-up work, along with open questions, will also be discussed.
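For reference, the fractional result of Frankston, Kahn, Narayanan, and Park can be stated as follows, in what I take to be their standard notation:

```latex
% p_c: threshold of the increasing family F; q_f: fractional expectation
% threshold; ell(F): maximum size of a minimal element of F; K: a universal constant.
\[
  p_c(\mathcal{F}) \;\le\; K \, q_f(\mathcal{F}) \, \log \ell(\mathcal{F}).
\]
```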
Title: A Journey to the World of Computational Inverse Problems
Seminar: Computational Math
Speaker: Julianne Chung of Virginia Tech
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-10 at 10:00AM
Venue: MSC W201
Abstract:
In this talk, we take a journey to the world of computational inverse problems, where we highlight important connections to mathematics, statistics, machine learning, and applications. The main goal of an inverse problem is to extract some underlying parameters or information from available and noisy observations. However, there are enormous computational challenges when solving state-of-the-art inverse problems of interest.

We present new tools for tackling these challenges. We describe generalized hybrid projection methods, which are iterative methods for solving large-scale inverse problems, and we show how approximations provided by the iterative method can be used for subsequent uncertainty quantification. Then, for problems where training or calibration data are readily available, we describe recent advances in exploiting machine learning techniques for estimating regularization parameters. Examples from atmospheric inverse modeling and image processing are discussed.
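A hand-rolled sketch of the hybrid idea, under my own assumptions: Golub-Kahan bidiagonalization projects the least-squares problem onto a small Krylov subspace, and Tikhonov regularization is applied to the projected problem at every iteration. It is illustrative only and not the generalized hybrid methods of the talk.

```python
# Hybrid projection sketch: project with Golub-Kahan bidiagonalization, then
# solve a small Tikhonov-regularized problem in the projected space each step.
import numpy as np

def hybrid_gk(A, b, k=20, lam=1e-1):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
        # projected Tikhonov problem: min ||B_j y - beta e1||^2 + lam^2 ||y||^2
        stacked = np.vstack([B[:j + 2, :j + 1], lam * np.eye(j + 1)])
        rhs = np.concatenate([beta * np.eye(j + 2, 1).ravel(), np.zeros(j + 1)])
        y = np.linalg.lstsq(stacked, rhs, rcond=None)[0]
    return V @ y                                     # lift the projected solution

A = np.random.rand(200, 100)
x_true = np.ones(100)
b = A @ x_true + 1e-2 * np.random.randn(200)
x = hybrid_gk(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```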
Title: Brill-Noether Theory of k-Gonal Curves
Seminar: Number Theory
Speaker: Kaelin Cook-Powell of Emory University
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-11-30 at 4:00PM
Venue: MSC W301
Abstract:

Given a curve $C$, the Brill-Noether variety $W^r_d(C)$ parameterizes line bundles on $C$ of degree $d$ and rank at least $r$. When $C$ is general in the moduli space $\mathcal{M}_g$ of smooth genus $g$ curves, these varieties exhibit a number of ``desirable'' geometric properties, and their dimension can be computed explicitly in terms of $g, r,$ and $d$. However, these varieties exhibit bizarre behavior when one considers curves that are not general in $\mathcal{M}_g$. Our goal will be to understand how one can still study line bundles on these non-generic curves, called $k$-gonal curves. We begin with a study of the Brill-Noether varieties $W^r_d(C)$ and then consider a new variety $W^{\mu}(C)$ that parameterizes line bundles governed by the discrete invariant $\mu$.
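For context, the explicit dimension alluded to above is given by the Brill-Noether number (standard notation, not taken from the abstract): for $C$ general of genus $g$, $\dim W^r_d(C) = \rho(g,r,d)$ whenever this quantity is nonnegative, where

```latex
\[
  \rho(g, r, d) \;=\; g - (r+1)(g - d + r).
\]
```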

Using machinery from tropical geometry and Berkovich spaces, we may encode families of line bundles using a special class of tableaux known as $k$-uniform displacement tableaux. We will discuss how $k$-uniform displacement tableaux on rectangular partitions parameterize $W^r_d(C)$. Furthermore, we will push this combinatorial analysis to a family of partitions known as $k$-cores to parameterize the varieties $W^{\mu}(C)$ explicitly in terms of $k$-uniform displacement tableaux.

Title: An optimal Bayesian estimator for a stochastic problem in Diffuse Optical Tomography
Seminar: Numerical Analysis and Scientific Computing
Speaker: Anuj Abhishek of University of North Carolina at Charlotte
Contact: Elizabeth Newman, elizabeth.newman@emory.edu
Date: 2021-11-19 at 12:30PM
Venue: MSC W201
Abstract:
Studying coefficient inverse problems in a stochastic setting has gained increasing prominence over the past couple of decades. In this talk, we will present some results obtained for a Bayesian estimator built from noisy data in a simplified one-parameter Diffuse Optical Tomography (DOT) model. We establish the rate of convergence of such an estimator in the supremum-norm loss and show that it is optimal. This work extends the approach proposed by Abraham and Nickl in a recent article (On Statistical Calderón Problems) and applies it to the DOT setting. We also present some preliminary numerical simulations in support of our theoretical findings. This is joint work with Dr. Taufiquar Khan (UNCC) and Dr. Thilo Strauss (Robert Bosch GmbH).
Title: Generation and propagation of moments for the binary-ternary Boltzmann equation
Seminar: Analysis and Differential Geometry
Speaker: Ioakeim Ampatzoglou of Courant Institute
Contact: Maja Taskovic, maja.taskovic@emory.edu
Date: 2021-11-18 at 3:00PM
Venue: ATWOOD 360
Abstract:
The binary-ternary Boltzmann equation is a recently derived kinetic equation describing the evolution of the probability density of a non-ideal gas out of equilibrium. In this talk we focus on the spatially homogeneous (space-invariant) equation and discuss the generation and propagation of polynomial and exponential moments of a solution. We will then employ these properties to discuss global-in-time existence and uniqueness. This is joint work with Maja Taskovic.
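In standard kinetic-theory notation (my assumption, not taken from the abstract), the polynomial moments in question are

```latex
\[
  m_k[f](t) \;=\; \int_{\mathbb{R}^d} f(t, v)\, |v|^{2k}\, dv, \qquad k \ge 0,
\]
```

with generation meaning that $m_k(t)$ becomes finite for every $t>0$ even when only low-order moments are finite initially, and propagation meaning that moments finite at $t=0$ remain finite for all later times.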
Title: Non-orientable enumerative problems in $\mathbf{A}^{1}$-homotopy theory
Seminar: Number Theory
Speaker: Andrew Kobin of Emory University
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-11-16 at 4:00PM
Venue: MSC W301
Abstract:
Many enumerative problems in classical algebraic geometry, such as counting lines on a smooth cubic surface, admit a solution over an arbitrary ground field $k$ (of characteristic $\not = 2$) using Morel and Voevodsky's $\mathbf{A}^{1}$-homotopy theory. Recently, several authors have formulated and solved such ``enriched'' enumerative problems using Kass and Wickelgren's ``enriched'' Euler class, which takes values in the Grothendieck--Witt group of $k$ and is only defined when the associated vector bundle is orientable. In joint work with Libby Taylor, we extend Kass--Wickelgren's construction to non-orientable vector bundles using a stacky construction. This allows us to enrich a larger class of enumerative problems, including the count of lines meeting $6$ planes in $\mathbf{P}^{4}$.
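As a touchstone for such enriched counts, Kass and Wickelgren's arithmetic count of the lines on a smooth cubic surface over a field $k$ of characteristic $\not = 2$ is

```latex
\[
  15\,\langle 1 \rangle \;+\; 12\,\langle -1 \rangle \;\in\; \mathrm{GW}(k),
\]
```

whose rank recovers the classical count of $27$ complex lines and whose signature recovers the signed count of $3$ real lines.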