## Applied Mathematics

•  Ben Adcock, Simon Fraser University
•  The troublesome kernel: instabilities in deep learning for inverse problems
•  Zoom link: https://sites.google.com/view/minds-seminar/home
•  06/04/2020
•  2:30 PM - 3:30 PM

(Part of the One World MINDS seminar: https://sites.google.com/view/minds-seminar/home)

Due to their stunning success in traditional machine learning applications such as classification, techniques based on deep learning have recently begun to be actively investigated for problems in computational science and engineering. One of the key areas at the forefront of this trend is inverse problems, and specifically, inverse problems in imaging. The last few years have witnessed the emergence of many neural network-based algorithms for important imaging modalities such as MRI and X-ray CT. These claim to achieve competitive, and sometimes even superior, performance compared to current state-of-the-art techniques. However, there is a problem: techniques based on deep learning are typically unstable. For example, small perturbations in the data can lead to a myriad of artifacts in the recovered images. Such artifacts can be hard to dismiss as obviously unphysical, meaning that this phenomenon has potentially serious consequences for the safe deployment of deep learning in practice.

In this talk, I will first showcase the instability phenomenon empirically in a range of examples. I will then focus on its mathematical underpinnings, the consequences of these insights for potential remedies, and the future possibilities for computing genuinely stable neural networks for inverse problems in imaging.

This is joint work with Vegard Antun, Nina M. Gottschling, Anders C. Hansen, Clarice Poon, and Francesco Renna.

Papers:
•  https://www.pnas.org/content/early/2020/05/08/1907377117
•  https://arxiv.org/abs/2001.01258

## Contact

Department of Mathematics
Michigan State University