 Gitta Kutyniok, TU Berlin
 Understanding Deep Neural Networks: From Generalization to Interpretability; zoom link @ https://sites.google.com/view/mindsseminar/home
 06/18/2020
2:30 PM - 3:30 PM

(Part of One World MINDS seminar:
https://sites.google.com/view/mindsseminar/home)
Deep neural networks have recently seen an impressive comeback with
applications both in the public sector and the sciences. However,
despite their outstanding success, a comprehensive theoretical
foundation of deep neural networks is still missing.
A main goal in deriving a theoretical understanding of deep neural
networks is to analyze their generalization ability, i.e., their
performance on unseen data. For graph convolutional neural networks,
which are today heavily used, for instance, in recommender systems,
even the generalization capability to graph signals unseen in the
training set, typically coined transferability, had not been
rigorously analyzed. In this talk, we will prove that spectral graph
convolutional neural networks are indeed transferable, thereby also
debunking a common misconception about this type of graph
convolutional neural network.
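Spectral graph convolutions filter a signal in the eigenbasis of the graph Laplacian, and the same spectral filter can then be applied on a different graph, which is the setting in which transferability is studied. The following is a minimal NumPy sketch of this idea on a toy graph; all names are illustrative and this is not the speaker's implementation:

```python
import numpy as np

def spectral_conv(adj, signal, filter_fn):
    """Filter a graph signal in the spectral domain of the normalized Laplacian.

    adj       : symmetric adjacency matrix of the graph
    signal    : one value per node
    filter_fn : spectral filter g(lambda), applied to the Laplacian eigenvalues
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)   # graph Fourier basis
    coeffs = eigvecs.T @ signal              # graph Fourier transform of the signal
    return eigvecs @ (filter_fn(eigvals) * coeffs)  # filter and transform back

# Toy example: 4-cycle graph, alternating signal, heat-kernel-style filter.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 1.0, -1.0])
y = spectral_conv(adj, x, lambda lam: np.exp(-lam))
```

Because `filter_fn` is defined on eigenvalues rather than on a fixed graph, the identical filter can be reused on any other graph's Laplacian; the transferability result concerns how stable the output is under such a change of graph.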
If such theoretical approaches fail, or if one is simply given a
trained neural network without knowledge of how it was trained,
interpretability approaches become necessary. These aim to "break
open the black box" in the sense of identifying the input features
that are most relevant for the observed output. Aiming to derive a
theoretically founded approach to this problem, we introduced a
novel approach based on rate-distortion theory, coined
Rate-Distortion Explanation (RDE), which not only provides
state-of-the-art explanations but also allows first theoretical
insights into the complexity of such problems. In this talk, we
will discuss this approach and show that it also gives a precise
mathematical meaning to the previously vague notion of the relevant
parts of the input.
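The rate-distortion viewpoint can be caricatured as follows: search for a small set of input components such that fixing them (the "rate") keeps the model output stable (low "distortion") while all remaining components are randomized. The sketch below illustrates this on a toy linear model; the greedy search and all names are my own illustration under stated assumptions, not the RDE algorithm from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a fixed linear model in which only the first two inputs matter.
w = np.array([3.0, 2.0, 0.0, 0.0, 0.0])
def f(x):
    return w @ x

x = np.array([1.0, -1.0, 0.5, 0.2, -0.3])  # the input to be explained

def expected_distortion(mask, n_samples=500):
    """Mean squared deviation of f(x) when the unmasked entries are resampled."""
    base = f(x)
    d = 0.0
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)   # reference distribution for obfuscation
        x_mix = np.where(mask, x, z)       # keep masked entries, randomize the rest
        d += (f(x_mix) - base) ** 2
    return d / n_samples

# Greedy selection under a rate budget of 2 components:
# at each step, add the component that yields the lowest distortion.
mask = np.zeros_like(x, dtype=bool)
for _ in range(2):
    best = min((i for i in range(len(x)) if not mask[i]),
               key=lambda i: expected_distortion(mask | np.eye(len(x), dtype=bool)[i]))
    mask[best] = True

print(np.flatnonzero(mask))  # indices selected as relevant
```

On this toy model the procedure recovers the two components carrying nonzero weight, which matches the intuition that a relevant part of the input is one whose fixing alone already determines the output.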