All Seminars

Title: Multifidelity linear regression for scientific machine learning from scarce data
Seminar: Numerical Analysis and Scientific Computing
Speaker: Elizabeth Qian of Georgia Tech
Contact: Elizabeth Newman, elizabeth.newman@emory.edu
Date: 2024-04-04 at 10:00AM
Venue: MSC W201
Abstract:
Machine learning (ML) methods have garnered significant interest as potential tools for learning surrogate models of complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, training data are scarce due to the cost of generating data from traditional high-fidelity simulations. ML models trained on scarce data have high variance and are sensitive to the vagaries of the training data set. We propose a new multifidelity training approach for scientific machine learning that exploits the scientific context in which data of varying fidelities and costs are available; for example, high-fidelity data may be generated by an expensive, fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data to define new multifidelity Monte Carlo estimators for the unknown parameters of linear regression models, and we provide theoretical analyses that guarantee accuracy and improved robustness to small training budgets. Numerical results show that multifidelity learned models achieve order-of-magnitude lower expected error than standard training approaches when high-fidelity data are scarce.
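For intuition, here is a minimal sketch (not the speaker's estimator) of the control-variate idea behind multifidelity estimation, applied to the coefficients of a one-dimensional linear regression: a few expensive high-fidelity samples are corrected by many cheap low-fidelity samples. The toy models f_hi and f_lo, the weight alpha, and the sample sizes are illustrative assumptions.

    # Hedged sketch of a control-variate-style multifidelity estimate of linear
    # regression coefficients. Not the speaker's exact estimator; f_hi, f_lo,
    # and alpha are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def f_hi(x):   # expensive "high-fidelity" response (toy stand-in)
        return 3.0 * x + 1.0 + 0.05 * rng.standard_normal(x.shape)

    def f_lo(x):   # cheap "low-fidelity" response, biased but correlated
        return 2.7 * x + 1.2 + 0.05 * rng.standard_normal(x.shape)

    def ols(x, y):  # ordinary least-squares fit of y ~ a*x + b
        A = np.column_stack([x, np.ones_like(x)])
        return np.linalg.lstsq(A, y, rcond=None)[0]

    # Few expensive samples, many cheap samples (cheap set contains the expensive inputs).
    x_hi = rng.uniform(0, 1, 5)
    x_lo = np.concatenate([x_hi, rng.uniform(0, 1, 500)])

    beta_hi = ols(x_hi, f_hi(x_hi))          # high-variance estimate from scarce data
    beta_lo_big = ols(x_lo, f_lo(x_lo))      # low-variance but biased estimate
    beta_lo_small = ols(x_hi, f_lo(x_hi))    # low-fidelity estimate on the shared inputs

    alpha = 1.0  # control-variate weight; in practice chosen from estimated correlations
    beta_mf = beta_hi + alpha * (beta_lo_big - beta_lo_small)
    print(beta_mf)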
Title: Improving Sampling and Function Approximation in Machine Learning Methods for Solving Partial Differential Equations
Defense: Dissertation
Speaker: Xingjian Li of Emory University
Contact: Xingjian Li, xingjian.li@emory.edu
Date: 2024-03-29 at 9:30AM
Venue: White Hall 200
Abstract:
Numerical solution of partial differential equations (PDEs) remains one of the main focuses of scientific computing. Deep learning and neural-network-based methods for solving PDEs have gained much attention and popularity in recent years. The universal approximation property of neural networks allows for a cheaper approximation of functions in high dimensions compared to many traditional numerical methods. Reformulating PDE problems as optimization tasks also enables straightforward implementation and can sometimes circumvent stability concerns common to classic numerical methods that rely on explicit or semi-explicit time discretization. However, low accuracy and convergence difficulties remain challenges for deep-learning-based schemes, and fine-tuning neural networks can be time-consuming.

In our work, we present some of our findings on machine learning methods for solving certain PDEs. We divide the work into two parts. In the first, we focus on the popular Physics-Informed Neural Networks (PINNs) framework, specifically for problems in dimension at most three. We present an alternative optimization-based algorithm using a B-spline function approximator and accurate numerical integration with a grid-based sampling scheme. Implemented with popular machine learning libraries, our approach serves as a direct substitute for PINNs, and a performance comparison between the two methods over a wide selection of examples shows that, for low-dimensional problems, our proposed method can improve both accuracy and reliability relative to PINNs. In the second part, we focus on a general class of stochastic optimal control (SOC) problems. By leveraging the underlying theory, we propose a neural network solver that solves the SOC problem and the corresponding Hamilton–Jacobi–Bellman (HJB) equation simultaneously. Our method utilizes the stochastic Pontryagin maximum principle and is thus unique in its sampling strategy; this, combined with a modified loss function, enables us to tackle high-dimensional problems efficiently.
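As background, here is a minimal sketch of a vanilla PINN-style loss for a toy 1D Poisson problem, -u''(x) = pi^2 sin(pi x) with u(0) = u(1) = 0 (exact solution sin(pi x)). The problem, network size, collocation sampling, and optimizer settings are illustrative assumptions, not the thesis's configuration.

    # Hedged sketch of a vanilla PINN loss for -u''(x) = pi^2 sin(pi x), u(0)=u(1)=0.
    # All hyperparameters here are illustrative assumptions.
    import math
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.rand(128, 1, requires_grad=True)            # interior collocation points
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        f = (math.pi ** 2) * torch.sin(math.pi * x)
        pde_loss = ((-d2u - f) ** 2).mean()                    # PDE residual term
        xb = torch.tensor([[0.0], [1.0]])
        bc_loss = (net(xb) ** 2).mean()                        # boundary conditions u(0)=u(1)=0
        loss = pde_loss + bc_loss
        opt.zero_grad()
        loss.backward()
        opt.step()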
Title: Quantitative stability of traveling waves
Seminar: Analysis and Differential Geometry
Speaker: Christopher Henderson of University of Arizona
Contact: Maja Taskovic, maja.taskovic@emory.edu
Date: 2024-03-29 at 10:00AM
Venue: MSC W301
Abstract:
In their original paper, Kolmogorov, Petrovsky, and Piskunov demonstrated stability of the minimal-speed traveling wave with an ingenious compactness argument based, roughly, on the decreasing steepness of the profile. This proof is extremely flexible, yet entirely non-quantitative. On the other hand, more modern PDE proofs of this fact for general reaction-diffusion equations are highly tailored to the particular equation, fairly complicated, and often not sharp in the rate of convergence. In this talk, which will be elementary and self-contained, I will introduce a natural quantity, the shape defect function, that allows a simple approach to quantifying convergence to the traveling wave for a large class of reaction-diffusion equations. Connections to the calculus of variations and generalizations to other settings will be discussed. This is joint work with Jing An and Lenya Ryzhik.
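For concreteness, consider the classical Fisher-KPP example (the talk treats a broader class of reaction-diffusion equations): the equation $\partial_t u = \partial_{xx} u + u(1-u)$ admits traveling-wave solutions $u(t,x) = U(x - ct)$ for every speed $c \ge c_* = 2$, and the stability question above asks how quickly front-like initial data converge to the minimal-speed wave $U_{c_*}$; the shape defect function is a tool for quantifying this convergence.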
Title: Homogeneous Substructures in Ordered Matchings
Seminar: Combinatorics
Speaker: Andrzej Rucinski of Adam Mickiewicz University, Poznan
Contact: Liana Yepremyan, liana.yepremyan@emory.edu
Date: 2024-03-29 at 4:00PM
Venue: MSC W201
Abstract:
An ordered matching M_n is a partition of a linearly ordered set of size 2n into n pairs (called edges). Taking the linear ordering into account, every pair of edges forms one of three patterns: AABB, ABBA, or ABAB. A submatching with all pairs of edges forming the same pattern is called a clique. In my talk, I will first show an Erdős–Szekeres-type result guaranteeing a large clique in every matching M_n. Then I will move on to a random (uniform) setting and investigate the largest size of a clique of a given type (pattern) present in almost all matchings. Finally, I will attempt to generalize these results to r-uniform hypermatchings, that is, partitions of a linearly ordered set of size rn into n r-element subsets. This is joint work with Andrzej Dudek and Jarek Grytczuk.
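To make the three patterns concrete, here is a small sketch (the helper name and representation are my own) that classifies the pattern formed by two disjoint edges, each given by its pair of positions in the linear order.

    # Hedged illustration: classify the pattern formed by two disjoint edges of an
    # ordered matching, using the labels AABB, ABBA, ABAB from the abstract.
    def pattern(e, f):
        (a1, a2), (b1, b2) = sorted(e), sorted(f)
        if a1 > b1:  # make (a1, a2) the edge with the leftmost endpoint
            (a1, a2), (b1, b2) = (b1, b2), (a1, a2)
        if a2 < b1:
            return "AABB"   # one edge entirely to the left of the other (alignment)
        if b2 < a2:
            return "ABBA"   # one edge nested inside the other (nesting)
        return "ABAB"       # the edges interleave (crossing)

    # Example: in the matching {(1,2), (3,6), (4,5)}, the edge (4,5) is nested in (3,6).
    print(pattern((1, 2), (3, 6)))   # AABB
    print(pattern((3, 6), (4, 5)))   # ABBA
    print(pattern((1, 4), (2, 6)))   # ABAB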
Title: Degeneracy of eigenvalues and singular values of parameter dependent matrices
Seminar: Numerical Analysis and Scientific Computing
Speaker: Alessandro Pugliese of Georgia Tech/University of Bari
Contact: Manuela Manetta, manuela.manetta@emory.edu
Date: 2024-03-28 at 10:00AM
Venue: MSC W201
Abstract:
Hermitian matrices have real eigenvalues and an orthonormal set of eigenvectors. Do smooth Hermitian matrix-valued functions have smooth eigenvalues and eigenvectors? Starting from this question, we will first review known results on smooth eigenvalue and singular value decompositions of matrices that depend on one or several parameters, and then focus on our contribution: topological tools to detect and approximate the parameter values at which eigenvalues or singular values of a matrix-valued function are degenerate (i.e., repeated or zero). The talk will be based on joint work with Luca Dieci (Georgia Tech) and Alessandra Papini (Univ. of Florence).
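For a sense of the objects involved, here is a naive brute-force scan (not the topological method of the talk) that locates near-degenerate eigenvalues of a toy two-parameter Hermitian family; the family A(s, t) and the grid resolution are illustrative assumptions.

    # Hedged sketch: brute-force search for near-degenerate eigenvalues of a toy
    # two-parameter Hermitian family with a conical intersection at (0, 0).
    import numpy as np

    def A(s, t):
        return np.array([[s, t + 1j * t],
                         [t - 1j * t, -s]], dtype=complex)

    def min_gap(M):
        w = np.linalg.eigvalsh(M)      # real eigenvalues, sorted ascending
        return np.min(np.diff(w))

    grid = np.linspace(-1.0, 1.0, 201)
    gaps = np.array([[min_gap(A(s, t)) for t in grid] for s in grid])
    i, j = np.unravel_index(np.argmin(gaps), gaps.shape)
    print("smallest gap %.3e near (s, t) = (%.2f, %.2f)" % (gaps[i, j], grid[i], grid[j]))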
Title: Collective migration model on a viscoelastic collagen network
Seminar: Analysis and Differential Geometry
Speaker: Andrei Tarfulea of Louisiana State University
Contact: Maja Taskovic, maja.taskovic@emory.edu
Date: 2024-03-27 at 10:00AM
Venue: White Hall 110
Abstract:
We explore a model of self-generated directional cell migration on viscoelastic substrates in the absence of apparent intrinsic polarity. Mathematically, this takes the form of a reaction-diffusion equation for the network deformation, along with a moving cell-cluster source term that itself moves according to the local network deformation. This creates a strange form of nonlinear interaction. We show global well-posedness and conditional existence/absence of traveling waves, and we address the stability of traveling waves.
Title: A few steps towards the Erdős–Hajnal conjecture
Seminar: Combinatorics
Speaker: Tung Nguyen of Princeton University
Contact: Liana Yepremyan, liana.yepremyan@emory.edu
Date: 2024-03-26 at 10:00AM
Venue: MSC W201
Abstract:
A cornerstone of Ramsey theory says that every $n$-vertex graph contains a clique or independent set of size logarithmic in $n$, which is asymptotically optimal for almost all graphs. The Erdős–Hajnal conjecture from 1977 predicts a very different situation in graphs with forbidden induced subgraphs; more precisely, the conjecture asserts that for every graph $H$, there exists $c=c(H)>0$ such that every $n$-vertex graph with no induced copy of $H$ has a clique or independent set of size at least $n^c$. This conjecture remains open, and we will discuss recent progress on it in the talk.
Title: Local heights on hyperelliptic curves for quadratic Chabauty
Seminar: Algebra
Speaker: Juanita Duque-Rosero of Boston University
Contact: Andrew Kobin, ajkobin@emory.edu
Date: 2024-03-26 at 4:00PM
Venue: MSC W303
Abstract:
The method of quadratic Chabauty is a powerful tool to determine the set of rational points on curves. A key input for this method is the values of local height functions. In this talk, we will discuss an algorithm to compute these local heights at odd primes v not equal to p for hyperelliptic curves. Through applications, we will see how this work extends the reach of quadratic Chabauty to curves previously deemed inaccessible. This is joint work with Alexander Betts, Sachi Hashimoto, and Pim Spelier.
Title: Integer distance sets
Seminar: Discrete Analysis
Speaker: Rachel Greenfeld of Institute for Advanced Study
Contact: Cosmin Pohoata, cosmin.pohoata@emory.edu
Date: 2024-03-25 at 5:30PM
Venue: MSC W301
Abstract:
A set S in the Euclidean plane is an integer distance set if the distance between any pair of its points is an integer. Interestingly, all integer distance sets known so far have all but at most four of their points on a single line or circle. It has long been suspected, going back to Erdős, that every integer distance set must be of this special form. In recent work, joint with Marina Iliopoulou and Sarah Peluse, we developed a new approach to the problem, which enabled us to make the first progress towards confirming this suspicion. In the talk, I will discuss the study of integer distance sets, its connections to other problems, and our new developments.
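As a toy illustration (my own, not from the talk), the vertices of a 3-4-5 right triangle form a non-collinear integer distance set; the helper below, whose name is an assumption, simply checks the pairwise-integer-distance property for a finite planar point set.

    # Hedged illustration: check whether a finite planar point set has pairwise integer distances.
    from math import dist, isclose

    def is_integer_distance_set(points, tol=1e-9):
        return all(isclose(dist(p, q), round(dist(p, q)), abs_tol=tol)
                   for i, p in enumerate(points) for q in points[i + 1:])

    print(is_integer_distance_set([(0, 0), (3, 0), (0, 4)]))   # True: distances 3, 4, 5
    print(is_integer_distance_set([(0, 0), (1, 1)]))           # False: distance sqrt(2)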
Title: Structure-conforming Operator Learning via Transformers
Seminar: Numerical Analysis and Scientific Computing
Speaker: Shuhao Cao of University of Missouri-Kansas City
Contact: Yuanzhe Xi, yuanzhe.xi@emory.edu
Date: 2024-03-21 at 10:00AM
Venue: MSC W201
Abstract:
GPT, Stable Diffusion, AlphaFold 2, and other state-of-the-art deep learning models all use a neural architecture called the "Transformer". Since the publication of Google's "Attention Is All You Need" paper, the Transformer has become the ubiquitous architecture in deep learning. At the Transformer's heart and soul is the "attention mechanism". In this talk, we shall dissect the attention mechanism through the lens of traditional numerical methods, such as Galerkin methods and hierarchical matrix decomposition. We will report numerical results on designing attention-based neural networks according to the structure of problems in traditional scientific computing, such as the inverse problem for the Neumann-to-Dirichlet operator (EIT) and multiscale elliptic problems. Progress within different communities will be briefly surveyed to address some open problems on the mathematical properties of the attention mechanism in Transformers.
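For readers unfamiliar with the attention mechanism, here is a minimal single-head sketch of scaled dot-product attention, the operation the talk reinterprets via Galerkin methods and hierarchical matrices; the shapes, the batch-free setting, and the random weights are simplifying assumptions.

    # Hedged sketch of single-head scaled dot-product attention.
    import numpy as np

    def attention(Q, K, V):
        # Q, K, V: (n_tokens, d) arrays of query, key, and value vectors
        scores = Q @ K.T / np.sqrt(Q.shape[-1])          # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # weighted average of values

    rng = np.random.default_rng(0)
    X = rng.standard_normal((16, 8))                     # 16 tokens of dimension 8
    Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
    out = attention(X @ Wq, X @ Wk, X @ Wv)
    print(out.shape)                                     # (16, 8)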