Inferring the dynamics of learning from sensory decision-making behavior
The dynamics of learning in natural and artificial environments are of great interest to both neuroscientists and artificial intelligence researchers. However, standard analyses of animal training data either treat behavior as fixed or track only coarse performance statistics (e.g., accuracy and bias), providing limited insight into how behavioral strategies evolve over the course of learning. To overcome these limitations, we propose a dynamic psychophysical model that efficiently tracks trial-to-trial changes in behavior over the course of training. In this talk, I will describe recent work based on a dynamic logistic regression model that captures the time-varying dependence of behavior on stimuli and other task covariates. We applied our method to psychophysical data from both human subjects and rats learning a sensory discrimination task, successfully tracking the dynamics of psychophysical weights during training and capturing day-to-day and trial-to-trial fluctuations in behavioral strategy. We also leveraged the model's flexibility to investigate why rats frequently make mistakes on easy trials, demonstrating that so-called "lapses" often arise from sub-optimal weighting of task covariates. Finally, I will describe recent work on adaptive optimal training, which combines ideas from reinforcement learning and adaptive experimental design to infer animal learning rules from behavior and to use these rules to speed up animal training.
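The core generative model described above can be illustrated with a minimal sketch: on each trial, the probability of one choice is a logistic function of weighted task covariates, and the weights drift from trial to trial. The covariate set (stimulus, constant bias, previous choice), the random-walk step size, and all variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed setup: three task covariates (e.g., stimulus strength,
# a constant bias term, and the previous choice), with psychophysical
# weights evolving as a Gaussian random walk across trials.
n_trials, n_weights = 500, 3
sigma = 0.05  # random-walk step size (illustrative value)

X = rng.normal(size=(n_trials, n_weights))  # per-trial covariates
X[:, 1] = 1.0                               # constant bias column

# Time-varying weights: w_t = w_{t-1} + Gaussian noise
w = np.zeros((n_trials, n_weights))
for t in range(1, n_trials):
    w[t] = w[t - 1] + sigma * rng.normal(size=n_weights)

# Per-trial choice probability and simulated binary choices
p_right = sigmoid(np.sum(X * w, axis=1))
choices = (rng.random(n_trials) < p_right).astype(int)
```

In this framing, "inferring the dynamics of learning" amounts to recovering the weight trajectory `w` from the observed covariates and choices, which lets day-to-day and trial-to-trial strategy changes be read off directly from the estimated weights.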