Machine Learning meets Partial Differential Equations

Abstract

One of the most promising areas in artificial intelligence is deep learning, a form of machine learning that uses neural networks containing many hidden layers. Recent advances have led to breakthroughs in applications such as speech and image recognition. However, more theoretical insight is needed to create a rigorous scientific basis for designing and training deep neural networks, increasing their scalability, and interpreting their reasoning. In the first part of my talk, I will outline several recent advances made by interpreting some deep learning techniques as discretizations of (partial) differential equations. This viewpoint has opened the door to applications of established mathematical theory and numerical algorithms to machine learning in novel ways. I will provide several representative examples of this promising direction toward more reliable and efficient deep learning. In the second part of my talk, I will showcase how deep learning promises new ways to solve classical mathematical problems at an ever-larger scale. As an example, I will demonstrate how deep learning can solve the century-old optimal mass transport problem in high dimensions. Using this example, I will also discuss how applied mathematics paradigms in research and teaching are being reshaped in the era of machine learning.
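For context on the first part of the talk: one well-known instance of this viewpoint interprets a residual network as a forward Euler discretization of an ODE dy/dt = sigma(K(t)y + b(t)), so that each layer update y_{k+1} = y_k + h * sigma(K_k y_k + b_k) is one time step. The sketch below is an illustrative toy example of that connection only; the step size h, the tanh activation, and the random weights are assumptions chosen here for illustration and are not taken from the talk.

```python
# Minimal sketch: a stack of residual blocks viewed as forward Euler steps
# of the ODE dy/dt = sigma(K(t) y + b(t)).  Illustrative assumptions: tanh
# activation, step size h, and random (untrained) weights.
import numpy as np

def residual_forward(y0, weights, biases, h=0.1):
    """Propagate features y0 through residual blocks (explicit Euler steps)."""
    y = y0
    for K, b in zip(weights, biases):
        y = y + h * np.tanh(K @ y + b)  # one layer = one Euler step of size h
    return y

rng = np.random.default_rng(0)
d, depth = 4, 8                                   # feature dimension, number of layers
Ks = [0.1 * rng.standard_normal((d, d)) for _ in range(depth)]
bs = [0.1 * rng.standard_normal(d) for _ in range(depth)]
print(residual_forward(rng.standard_normal(d), Ks, bs))
```

Seen this way, the depth of the network plays the role of time, which is what allows stability theory and numerical integration methods for differential equations to be brought to bear on network design and training.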

Date
Jul 6, 2020 5:00 PM
Lars Ruthotto
Winship Distinguished Research Associate Professor of Mathematics and Computer Science