Meta-training neural networks to control themselves
Animals learn to adapt to levels of uncertainty in the environment by monitoring errors and engaging control processes. Recently, deep networks have been proposed as theories of animal perception, cognition and learning, but there is no theory that allows us to incorporate error monitoring or control into neural networks. Here, we asked whether it is possible to meta-train deep RL agents to adapt to the level of controllability of the environment. We found that this was only possible if we encouraged them to compute action prediction errors (APEs) – error signals similar to those generated in mammalian medial PFC. APE-trained networks meta-learned policies in an “observe vs. bet” bandit task that closely resembled those of humans. We also show that biases in this error computation lead the network to display pathologies of control characteristic of psychological disorders, such as compulsivity and learned helplessness.
Date: 16 May 2024, 14:30 (Thursday, 4th week, Trinity 2024)
Venue: To be announced
Speakers: Chris Summerfield (University of Oxford), Kai Sandbrink (University of Oxford)
Organising department: Medical Sciences Division
Organiser: Dr Rui Ponte Costa (University of Oxford)
Part of: Oxford Neurotheory Forum
Booking required?: Not required
Audience: Members of the University only
Editor: Rui Costa