Over the past ten years, optimal transport has become a fundamental tool in statistics and machine learning: the 2-Wasserstein metric provides a new notion of distance for classifying distributions and a rich geometry for interpolating between them. In parallel, optimal transport has gained mathematical significance by providing new tools for studying stability and limiting behavior of partial differential equations, through the theory of 2-Wasserstein gradient flows.
In fact, the success of optimal transport in each of these contexts ultimately relies on the same fundamental property of the 2-Wasserstein metric: as originally discovered by Otto, the 2-Wasserstein metric is unique among classical optimal transport metrics in that it has a formal Riemannian structure. In my talk, I will introduce the theory of optimal transport, explain the special geometric structure of the 2-Wasserstein metric, and illustrate the essential role it plays in how optimal transport is used in both machine learning and partial differential equations.
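For readers who want the objects behind the abstract, a standard way to state them is the following (a sketch of the usual definitions, not material from the talk itself). The 2-Wasserstein distance between probability measures $\mu, \nu$ on $\mathbb{R}^d$ with finite second moments is

$$W_2(\mu,\nu)^2 \;=\; \min_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} |x-y|^2 \, d\pi(x,y),$$

where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$. The formal Riemannian structure attributed to Otto is usually seen through the Benamou--Brenier dynamical formulation,

$$W_2(\mu,\nu)^2 \;=\; \min_{(\rho_t, v_t)} \left\{ \int_0^1 \!\! \int_{\mathbb{R}^d} |v_t(x)|^2 \, d\rho_t(x) \, dt \;:\; \partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0,\; \rho_0 = \mu,\; \rho_1 = \nu \right\},$$

which exhibits $W_2$ as a geodesic distance on the space of probability measures, with tangent vectors given by velocity fields $v_t$; this is the structure underlying 2-Wasserstein gradient flows of PDEs.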