The Center for Cognitive Computation (CCC) invites you to a lecture by Ádám Gosztolai (École Polytechnique Fédérale de Lausanne)
Interpretable representations of neural dynamics using geometric deep learning
It is increasingly recognised that computations in the brain and in artificial neural networks can be understood as outputs of a high-dimensional dynamical system formed by the activity of large neural populations. Yet revealing the structure of the underlying latent dynamical processes from data, and interpreting their relevance to computational tasks, remains a fundamental challenge. A prominent line of research has observed that task-relevant neural activity often takes place on low-dimensional smooth subspaces of the state space called neural manifolds. However, we still lack theoretical frameworks for unsupervised representations of neural dynamics that are interpretable in terms of behavioural variables, comparable across systems, and decodable to behaviour with high accuracy.
To address these challenges, we introduce Manifold Representation Basis Learning (MARBLE), a fully unsupervised representation-learning framework for non-linear dynamical systems. Our approach combines empirical dynamical modelling and geometric deep learning to transform neural activations during a set of trials into statistical distributions of local flow fields (LFFs). Our central insight is that LFFs vary continuously over the neural manifold, allowing for unsupervised learning, and are preserved under different manifold embeddings, allowing the comparison of neural computations across networks and animals.
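As a rough illustration of the core ingredient described above, the sketch below shows one simple way local flow fields can be estimated from sampled trajectories: finite differences along each trial approximate the vector field, and the flow vectors within a small neighbourhood of an anchor point form an empirical local distribution. This is a minimal stand-in, not the authors' implementation; the function names and the neighbourhood construction are illustrative assumptions.

```python
import numpy as np

def estimate_flow(trajectory, dt=1.0):
    """Finite-difference estimate of the vector field along one trial.

    trajectory: (T, d) array of neural states sampled at interval dt.
    Returns (T-1, d) flow vectors anchored at the first T-1 samples.
    Illustrative only -- MARBLE's actual construction is more involved.
    """
    return np.diff(trajectory, axis=0) / dt

def local_flow_distribution(points, flows, anchor, radius):
    """Collect flow vectors whose anchor points lie within `radius` of
    `anchor` -- a crude stand-in for the statistical distributions of
    local flow fields (LFFs) described in the abstract."""
    dists = np.linalg.norm(points - anchor, axis=1)
    return flows[dists < radius]

# Toy example: a planar rotation, whose flow field is tangential.
t = np.linspace(0, 2 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)  # circular trajectory
flows = estimate_flow(traj, dt=t[1] - t[0])
anchors = traj[:-1]

lff = local_flow_distribution(anchors, flows,
                              anchor=np.array([1.0, 0.0]), radius=0.2)
# Near (1, 0) the rotational flow points roughly in the +y direction.
mean_flow = lff.mean(axis=0)
```

Because such local flow vectors are intrinsic to the dynamics on the manifold, they change continuously across it and do not depend on how the manifold happens to be embedded, which is what the abstract identifies as the key to comparing computations across networks and animals.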
We show that MARBLE offers a well-defined similarity metric between neural systems that is expressive enough to compare computations and to detect fine-grained changes in dynamics caused by task variables, e.g., decision thresholds and gain modulation. Being unsupervised, MARBLE is uniquely suited to biological discovery. Indeed, we show that it discovers more interpretable neural representations in several motor, navigation and cognitive tasks than generative models such as LFADS or (semi-)supervised models such as CEBRA. Intriguingly, this interpretability comes with significantly higher decoding performance than the state of the art. Our results suggest that exploiting manifold structure yields a new class of algorithms with higher performance and the ability to assimilate data across experiments.