Comparing Neural Representations
Heiko Schütt, University of Luxembourg, 2.28.0.108, 10:15 - 11:45
In computational neuroscience and machine learning, we develop increasingly complex deep neural network models to mimic the human brain and to solve many applied problems for us. In either field, a central tool for understanding what these models do is to compare their internal representations to each other. Here, I will discuss statistical issues with such comparisons, which arise from the high dimensionality of the representations and the complex variability across people and across neural networks. We will cover standard comparison methods such as linear encoding models and representational similarity analysis, and how we can perform valid statistical inference on the results of such analyses. These analysis methods rest on novel corrections to bootstrapping methods that handle two random factors simultaneously and take cross-validation into account adequately. Additionally, I will discuss a novel approach to comparing representations based on a Bayesian treatment of linear encoding models, which my group continues to develop actively.
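
To give a concrete sense of the two standard comparison methods named above, here is a minimal sketch, not the speaker's implementation: a cross-validated ridge encoding model that predicts measured responses from model features, and a representational similarity analysis that correlates the two systems' representational dissimilarity matrices. All data shapes, variable names, and the ridge/Spearman choices are illustrative assumptions.

# Minimal sketch of two representation-comparison methods (illustrative only):
# (1) a cross-validated linear (ridge) encoding model and
# (2) representational similarity analysis (RSA).

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Illustrative data: responses of two "systems" (e.g. a model layer and a set
# of measured neurons) to the same 100 stimuli.
n_stim = 100
model_feats = rng.standard_normal((n_stim, 50))            # model representation
neural_resp = (model_feats @ rng.standard_normal((50, 30))
               + 0.5 * rng.standard_normal((n_stim, 30)))  # noisy "neural" data

# --- (1) Cross-validated ridge encoding model -------------------------------
def encoding_cv_r2(X, Y, n_folds=5, alpha=1.0):
    """Predict each measured channel from model features; return the mean
    out-of-sample R^2 across folds and channels (closed-form ridge)."""
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    scores = []
    for test in folds:
        train = np.setdiff1d(np.arange(len(X)), test)
        Xtr, Xte = X[train], X[test]
        # closed-form ridge solution: (X'X + alpha I)^-1 X'Y
        W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]),
                            Xtr.T @ Y[train])
        resid = ((Y[test] - Xte @ W) ** 2).sum(axis=0)
        total = ((Y[test] - Y[train].mean(axis=0)) ** 2).sum(axis=0)
        scores.append(1 - resid / total)
    return float(np.mean(scores))

print("encoding-model cross-validated R^2:",
      round(encoding_cv_r2(model_feats, neural_resp), 3))

# --- (2) Representational similarity analysis --------------------------------
# Compare the two systems via their representational dissimilarity matrices
# (RDMs): pairwise distances between the representations of all stimuli.
rdm_model = pdist(model_feats, metric="correlation")
rdm_neural = pdist(neural_resp, metric="correlation")
rho, _ = spearmanr(rdm_model, rdm_neural)
print("RSA (Spearman correlation of RDMs):", round(rho, 3))

The statistical questions raised in the talk concern how to put error bars and significance statements on numbers like these when both the stimuli and the people (or networks) are random samples, and when cross-validation is part of the estimate.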