All Seminars

Title: Bayesian Modeling and Computation for Structural and Functional Neuroimaging
Seminar: Numerical Analysis and Scientific Computing
Speaker: Andrew Brown of Clemson University
Contact: Deepanshu Verma and Julianne Chung, deepanshu.verma@emory.edu
Date: 2024-04-25 at 10:00AM
Venue: MSC W201
Abstract:
Since its advent about 30 years ago, magnetic resonance imaging (MRI) has revolutionized medical imaging due to its ability to produce high-contrast images non-invasively, without the use of radiation or injection. In neuroimaging in particular, MRI has become a very popular and useful tool both in clinical settings (e.g., in vivo measurements of anatomical structures) and in psychology (e.g., studying neuronal activations over time in response to an external stimulus). Despite the applicability and history of MR-based neuroimaging, however, considerable challenges remain in the analysis of the associated data. In this talk, I will discuss two recent projects in which collaborators and I use fully Bayesian statistical modeling to draw inference about both brain structure and brain function. The former work illustrates how prior information can be used to improve our ability to delineate the hippocampus in patients with Alzheimer’s disease. The latter work discusses an approach that makes use of the full complex-valued data produced by an MR scanner not only to improve our ability to identify task-related activation in functional MRI, but also to differentiate between types of activation that might carry different biological meaning. Along the way, I will mention some computational techniques we employ to facilitate Markov chain Monte Carlo (MCMC) algorithms to approximate the posterior distributions of interest.
Title: Reduced Unitary Whitehead Groups over Function Fields of $p$-adic Curves
Defense: Dissertation
Speaker: Zitong Pei of Emory University
Contact: Zitong Pei, zitong.pei@emory.edu
Date: 2024-04-22 at 11:00AM
Venue: MSC E406
Abstract:
The study of the Whitehead group of semi-simple simply connected groups is classical, with an abundance of open questions concerning the triviality of these groups. The Kneser-Tits conjecture on the triviality of these groups was answered in the negative by Platonov for general fields. There is a relation between reduced Whitehead groups and $R$-equivalence classes in algebraic groups.

Let $G$ be an algebraic group over a field $F$. The $R$-equivalence, defined by Manin, is the equivalence relation on $G(F)$ given by $x \sim y$ for $x, y \in G(F)$ if there exists an $F$-rational map ${\mathbb A}^1_F \dashrightarrow G$ defined at $0$ and $1$ and sending $0$ to $x$ and $1$ to $y$. Let $RG(F)$ be the equivalence class of the identity element in $G(F)$. Then $RG(F)$ is a normal subgroup of $G(F)$, and the quotient $G(F)/RG(F)$ is called the group of $R$-equivalence classes of $G(F)$. It is well known that for a semi-simple simply connected isotropic group $G$ over $F$, the Whitehead group $W(G, F)$ is isomorphic to the group of $R$-equivalence classes. Thus the group of $R$-equivalence classes can be thought of as a Whitehead group for general algebraic groups. The group of $R$-equivalence classes is very useful in studying the rationality problem for algebraic groups: the problem of determining whether the variety of an algebraic group is rational or stably rational.

Suppose that $D_0$ is a central division $F_0$-algebra. If the group $G(F_0)$ of rational points is given by $SL_n(D_0)$ for some $n > 1$, then $W(G, F_0)$ is the reduced Whitehead group of $D_0$. Let $F$ be a quadratic field extension of $F_0$ and let $D$ be a central division $F$-algebra. Suppose that $D$ has an involution $\tau$ of the second kind such that $F^{\tau} = F_0$. If the hermitian form $h_{\tau}$ induced by $\tau$ is isotropic and the group $G(F_0)$ is given by $SU(h_\tau, D)$, then $W(G, F_0)$ is isomorphic to the reduced unitary Whitehead group of $D$.

We start from the fundamental facts on reduced unitary Whitehead groups of central simple algebras, then introduce the patching techniques. Finally, let $F/F_0$ be a quadratic field extension of the function field of a $p$-adic curve. Let $A$ be a central simple algebra over $F$. Assume that the period of $A$ is two and that $A$ has a unitary $F/F_0$ involution. We provide a proof of the triviality of the reduced unitary Whitehead group of $A$.
Title: Quantifying the geometry of immune response and infection
Seminar: Numerical Analysis and Scientific Computing
Speaker: Manuchehr Aminian of Cal Poly Pomona
Contact: Manuela Girotti, manuela.girotti@emory.edu
Date: 2024-04-18 at 10:00AM
Venue: MSC W201
Abstract:
In improving outcomes for infection in humans and animals, it is important to understand how the body responds to an infection, whether infection has happened at all, and how this varies from individual to individual. Traditionally, this is a simple measurement -- does someone have a fever or not? With more precise, high-frequency measurements of macro-scale data (e.g. body temperature time series) and micro-scale data (e.g. protein or RNA data from biological samples, i.e. "omics"), we can develop and study the efficacy of more sophisticated algorithms and diagnostics. I will present past and ongoing work applying ideas from geometric data analysis and machine learning to classification questions such as early prediction of infection, model-free learning of time-series patterns and anomaly detection, and "inverse" problems such as prediction of time since infection. We will introduce the algorithmic ideas for newcomers, as well as our quantitative results on data from clinical studies of humans challenged with influenza-like illnesses and from Collaborative Cross mouse studies, in work with our collaborators at Colorado State University and Texas A&M University.
Title: Joint Athens-Atlanta Number Theory Seminar
Seminar: Algebra
Speaker: Jiuya Wang and Andrew Obus of University of Georgia and The City University of New York
Contact: Andrew Kobin, andrew.jon.kobin@emory.edu
Date: 2024-04-16 at 4:00PM
Venue: Atwood 240
Abstract:
Title: Are there sparse codes with large convex embedding dimension?
Seminar: Combinatorics
Speaker: Amzi Jeffs of Carnegie Mellon University
Contact: Liana Yepremyan, liana.yepremyan@emory.edu
Date: 2024-04-11 at 10:00AM
Venue: MSC E406
Abstract:
How can you arrange a collection of convex sets in Euclidean space? This question underpins the study of "convex codes," a vein of research that began in 2013 motivated by the study of hippocampal place cells in neuroscience. Classifying convex codes is exceedingly difficult, even in the plane, and gives rise to a number of striking examples and neat geometric theorems. We will focus on a particular open question about how the sparsity of a code relates to its embedding dimension, and some recent partial progress.
Title: The Fermi-Pasta-Ulam-Tsingou paradox: history, numerical and analytical results, and some ideas (involving Neural Networks)
Seminar: Numerical Analysis and Scientific Computing
Speaker: Guido Mazzuca of Tulane University
Contact: Manuela Girotti, manuela.girotti@emory.edu
Date: 2024-04-11 at 10:00AM
Venue: MSC W201
Abstract:
In this presentation, I tell the story of the Fermi-Pasta-Ulam-Tsingou (FPUT) paradox from its discovery to the present day. While focusing on recent developments, I introduce the concept of adiabatic invariants, a generalization of conserved quantities, as a means to resolve the FPUT paradox within a probabilistic framework. Additionally, I shed light on unresolved issues that can be approached through various methodologies, including the potential use of Neural Networks.
Zoom option: https://emory.zoom.us/j/94678278895?pwd=bDFxK2RaOTZRMjA5bzQ4UUtxNWJsZz09
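For context, the system in the original numerical experiment is a chain of $N$ unit masses with weakly nonlinear nearest-neighbor coupling; in the $\alpha$-model (sign and normalization conventions vary across the literature),
$$\ddot{q}_j = (q_{j+1} - 2q_j + q_{j-1}) + \alpha\left[(q_{j+1} - q_j)^2 - (q_j - q_{j-1})^2\right], \qquad j = 1, \dots, N,$$
with fixed ends $q_0 = q_{N+1} = 0$. The paradox is that energy fed into a single low Fourier mode recurs there almost periodically instead of equipartitioning among the modes, as statistical mechanics would suggest.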
Title: Sensitivity analysis in forward and inverse problems
Seminar: Numerical Analysis and Scientific Computing
Speaker: John Darges of North Carolina State University
Contact: Matthias Chung, matthias.chung@emory.edu
Date: 2024-04-09 at 10:00AM
Venue: MSC W201
Abstract:
Global sensitivity analysis (GSA) offers a flexible framework for understanding the structural importance of uncertain parameters in mathematical models. We focus on forward and inverse problems arising in uncertainty quantification and the computation of measures of variance-based sensitivity. The models involved in these problems are often computationally expensive to evaluate. Traditional methods for sensitivity analysis then come at an unreasonable cost. A preferred workaround is to create a surrogate model that is less cumbersome to evaluate. Surrogate methods that accelerate GSA are proposed and studied. A new class of surrogate models is introduced, using random weight neural networks for surrogate-assisted GSA, presenting analytical formulas for Sobol' indices. The proposed algorithm enhances accuracy through weight sparsity selection, as shown by its application to forward problems derived from ordinary differential equation systems. We also tackle sensitivity analysis in Bayesian inverse problems. A framework for variance-based sensitivity analysis of Bayesian inverse problems with respect to prior hyperparameters is introduced, along with an efficient algorithm combining importance sampling and surrogate modeling. The approach is demonstrated on a nonlinear Bayesian inverse problem from epidemiology, showcasing its effectiveness in quantifying uncertainty in posterior statistics.
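For newcomers, the first-order Sobol' index of a parameter is the fraction of output variance explained by that parameter alone. A minimal pick-freeze Monte Carlo estimator on a toy two-parameter model (generic textbook GSA, not the surrogate-accelerated estimators of the talk; the model and sample size are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model: Y = X1 + 0.1*X2, so X1 should dominate the variance.
    return x[:, 0] + 0.1 * x[:, 1]

n = 100_000
A = rng.uniform(-1, 1, size=(n, 2))
B = rng.uniform(-1, 1, size=(n, 2))

def first_order_sobol(i):
    # "Pick-freeze": copy A but replace column i with B's column i.
    AB = A.copy()
    AB[:, i] = B[:, i]
    yA, yB, yAB = model(A), model(B), model(AB)
    var = np.var(np.concatenate([yA, yB]))
    # Saltelli-style estimator of S_i = Var(E[Y | X_i]) / Var(Y).
    return np.mean(yB * (yAB - yA)) / var

print(first_order_sobol(0))  # close to 1/1.01 ≈ 0.99 for this toy model
```

The appeal of surrogate methods is that each index requires many model evaluations like these, which is exactly what becomes prohibitive for expensive simulations.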
Title: Counting 5-isogenies of elliptic curves over the rationals
Seminar: Algebra
Speaker: Santiago Arango-Piñeros of Emory University
Contact: Andrew Kobin, ajkobin@emory.edu
Date: 2024-04-09 at 4:00PM
Venue: MSC W303
Abstract:
We study the asymptotic order of growth of the number of 5-isogenies of elliptic curves over the rationals, with bounded naive height. This is forthcoming work in collaboration with Changho Han, Oana Padurariu, and Sun Woo Park.
Title: Multifidelity linear regression for scientific machine learning from scarce data
Seminar: Numerical Analysis and Scientific Computing
Speaker: Elizabeth Qian of Georgia Tech
Contact: Elizabeth Newman, elizabeth.newman@emory.edu
Date: 2024-04-04 at 10:00AM
Venue: MSC W201
Abstract:
Machine learning (ML) methods have garnered significant interest as potential methods for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, training data are scarce due to the cost of generating data from traditional high-fidelity simulations. ML models trained on scarce data have high variance and are sensitive to vagaries of the training data set. We propose a new multifidelity training approach for scientific machine learning that exploits scientific contexts in which data of varying fidelities and costs are available; for example, high-fidelity data may be generated by an expensive, fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data to define new multifidelity Monte Carlo estimators for the unknown parameters of linear regression models, and provide theoretical analyses that guarantee accuracy and improved robustness to small training budgets. Numerical results show that multifidelity learned models achieve order-of-magnitude lower expected error than standard training approaches when high-fidelity data are scarce.
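As a generic illustration of the multifidelity idea (a standard control-variate mean estimator on synthetic models, not the regression estimators of the talk; all model and sample choices here are illustrative), a few expensive high-fidelity samples can be combined with many cheap low-fidelity samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(x):
    # Stand-in for an expensive high-fidelity model.
    return np.sin(x) + 0.05 * x**2

def f_lo(x):
    # Cheap low-fidelity model, strongly correlated with f_hi.
    return np.sin(x)

n_hi, n_lo = 100, 10_000
x_hi = rng.normal(size=n_hi)   # few expensive evaluations
x_lo = rng.normal(size=n_lo)   # many cheap evaluations

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)       # low-fidelity model at the same inputs
y_lo = f_lo(x_lo)

# Control-variate coefficient estimated from the paired samples.
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Multifidelity estimator: high-fidelity sample mean, corrected by the
# difference between the cheap and the paired low-fidelity means.
est = y_hi.mean() + alpha * (y_lo.mean() - y_lo_paired.mean())
print(est)  # close to E[f_hi] = 0.05 under N(0,1) inputs
```

The stronger the correlation between fidelities, the more variance the correction removes for a fixed high-fidelity budget, which is the same mechanism the talk exploits for regression parameters.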
Title: Improving Sampling and Function Approximation in Machine Learning Methods for Solving Partial Differential Equations
Defense: Dissertation
Speaker: Xingjian Li of Emory University
Contact: Xingjian Li, xingjian.li@emory.edu
Date: 2024-03-29 at 9:30AM
Venue: White Hall 200
Abstract:
Numerical solution of partial differential equations (PDEs) remains one of the main focuses of scientific computing. Deep learning and neural network based methods for solving PDEs have gained much attention and popularity in recent years. The universal approximation property of neural networks allows for a cheaper approximation of functions in high dimensions compared to many traditional numerical methods. Reformulating PDE problems as optimization tasks also enables straightforward implementation and can sometimes circumvent stability concerns common to classic numerical methods that rely on explicit or semi-explicit time discretization. However, low accuracy and convergence difficulties remain challenges for deep learning based schemes, and fine-tuning neural networks can be time-consuming.

In our work, we present some of our findings using machine learning methods to solve certain PDEs. We divide the work into two parts. In the first half, we focus on the popular Physics-Informed Neural Networks (PINNs) framework, specifically for problems of dimension at most three. We present an alternative optimization-based algorithm using a B-spline polynomial function approximator and accurate numerical integration with a grid-based sampling scheme. Implemented with popular machine learning libraries, our approach serves as a direct substitute for PINNs, and through performance comparisons between the two methods over a wide selection of examples, we find that for low-dimensional problems our proposed method can improve both accuracy and reliability compared to PINNs. In the second half, we focus on a general class of stochastic optimal control (SOC) problems. By leveraging the underlying theory, we propose a neural network solver that solves the SOC problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation simultaneously. Our method utilizes the stochastic Pontryagin maximum principle and is thus unique in its sampling strategy; this, combined with a modified loss function, enables us to tackle high-dimensional problems efficiently.
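A toy version of the optimization reformulation (a polynomial basis standing in for B-splines, a simple ODE standing in for a PDE, and plain least squares standing in for neural-network training; every choice here is illustrative, not the dissertation's method): fit $u(x) = \sum_k c_k x^k$ so that the residual of $u' = u$ with $u(0) = 1$ vanishes on a sampling grid.

```python
import numpy as np

# Collocation grid on [0, 1] ("grid-based sampling").
x = np.linspace(0.0, 1.0, 50)
deg = 8  # polynomial degree of the function approximator

# Basis phi_k(x) = x^k and its derivative k * x^(k-1).
Phi = np.vander(x, deg + 1, increasing=True)             # u(x_i)
dPhi = np.hstack([np.zeros((x.size, 1)),
                  Phi[:, :-1] * np.arange(1, deg + 1)])  # u'(x_i)

# Residual equations u'(x_i) - u(x_i) = 0, plus the initial
# condition u(0) = 1 weighted heavily so it is enforced.
A = np.vstack([dPhi - Phi, 100.0 * Phi[:1]])
b = np.concatenate([np.zeros(x.size), [100.0]])

c, *_ = np.linalg.lstsq(A, b, rcond=None)
u1 = Phi[-1] @ c
print(u1)  # should be close to e ≈ 2.71828, since u(x) = exp(x)
```

Because the basis is fixed and the residual is linear in the coefficients, this reduces to one linear solve; with a neural network in place of the basis, the same residual loss must instead be minimized iteratively, which is where the accuracy and convergence issues mentioned above arise.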