Action selection performance of a reconfigurable basal ganglia inspired model with Hebbian–Bayesian Go-NoGo connectivity

Berthet, Pierre and Hellgren-Kotaleski, Jeanette and Lansner, Anders (2012) Action selection performance of a reconfigurable basal ganglia inspired model with Hebbian–Bayesian Go-NoGo connectivity. Frontiers in Behavioral Neuroscience, 6. ISSN 1662-5153

Full text: pubmed-zip/versions/1/package-entries/fnbeh-06-00065/fnbeh-06-00065.pdf (Published Version, 1MB download)

Abstract

Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine-dependent learning. The dopaminergic signal to the striatum, the input stage of the BG, is commonly described as coding a reward prediction error (RPE), i.e., the difference between the actual and the predicted reward. The RPE has been hypothesized to be critical in modulating synaptic plasticity at cortico-striatal synapses in the direct and indirect pathways. We developed an abstract computational model of the BG with a dual-pathway structure functionally corresponding to the direct and indirect pathways, and compared its behavior to biological data as well as to other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian–Bayesian learning rule based on the co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and indirect (NoGo) pathways, as well as a reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model when different configurations of the Go, NoGo, and RP systems were used, e.g., the Go, NoGo, or RP system alone, or combinations of those. Learning performance was evaluated in several learning paradigms: learning-relearning, successive learning, stochastic learning, reversal learning, and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioral data. Our results, however, show that there is no single best way to configure this BG model to handle all of the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and on how much time is available.
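The three ingredients named in the abstract can be made concrete with a small sketch: probability traces of pre- and post-synaptic (co-)activation, log-odds ("Hebbian–Bayesian") weights, and an RPE that both updates the reward prediction and gates which pathway's traces learn. The snippet below is a hypothetical simplification for illustration only, not the paper's implementation; all variable names, constants, and the exact gating scheme (positive RPE trains Go, negative trains NoGo) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
tau, eps = 0.05, 1e-6  # trace rate and probability floor (assumed values)

# Separate (pre, post, joint) probability traces for the Go and NoGo pathways.
traces = {k: [np.full(n_states, 0.5),
              np.full(n_actions, 0.5),
              np.full((n_states, n_actions), 0.25)]
          for k in ("go", "nogo")}
value = np.zeros(n_states)  # reward prediction (the RP / critic part)

def weights(p_pre, p_post, p_joint):
    """Hebbian-Bayesian weight: log odds of co-activation vs. independence."""
    return np.log(np.maximum(p_joint, eps)
                  / np.maximum(np.outer(p_pre, p_post), eps))

def step(state, reward_fn):
    pre = np.eye(n_states)[state]                  # one-hot cortical input
    net = weights(*traces["go"]) - weights(*traces["nogo"])
    probs = np.exp(net[state] - net[state].max())
    probs /= probs.sum()                           # softmax action choice
    action = rng.choice(n_actions, p=probs)
    post = np.eye(n_actions)[action]

    rpe = reward_fn(state, action) - value[state]  # delta = r - V(s)
    value[state] += 0.1 * rpe                      # critic update (assumed rate)

    # Third factor: a positive RPE trains the Go traces, a negative one the
    # NoGo traces, each in proportion to |RPE|.
    key, gain = ("go", rpe) if rpe > 0 else ("nogo", -rpe)
    p_pre, p_post, p_joint = traces[key]
    p_pre += tau * gain * (pre - p_pre)
    p_post += tau * gain * (post - p_post)
    p_joint += tau * gain * (np.outer(pre, post) - p_joint)
    return action, rpe

# Toy usage: action 1 is rewarded in every state; the Go/NoGo balance
# should come to favor it over repeated trials.
for _ in range(500):
    step(int(rng.integers(n_states)), lambda s, a: float(a == 1))
```

Representing a weight as log(P(pre, post) / (P(pre)P(post))) makes it positive for unit pairs that co-activate more often than chance and negative otherwise, which is one way to read the Bayesian-inference flavor the abstract refers to.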

Item Type: Article
Subjects: STM Digital Library > Biological Science
Depositing User: Unnamed user with email support@stmdigitallib.com
Date Deposited: 22 Mar 2023 06:34
Last Modified: 20 Jun 2024 13:21
URI: http://archive.scholarstm.com/id/eprint/690
