
The neural mechanisms of learning from competitors.

Howard-Jones PA
Yoo JH
Leonards U
Demetriou S

Scientific Abstract

Learning from competitors poses a challenge for existing theories of reward-based learning, which assume that rewarded actions are more likely to be executed in the future. Such a learning mechanism would disadvantage a player in a competitive situation: because the competitor's loss is the player's gain, reward might become associated with an action the player should themselves avoid. Using fMRI, we investigated the neural activity of humans competing with a computer in a foraging task. We observed neural activity representing the variables required for learning from competitors: the actions of the competitor (in the player's motor and premotor cortex) and the reward prediction error arising from the competitor's feedback. In particular, regions positively correlated with the competitor's unexpected losses (which were beneficial to the player) included the striatum and regions previously implicated in response inhibition. Our results suggest that learning in such contexts may involve the competitor's unexpected losses activating regions of the player's brain that subserve response inhibition, as the player learns to avoid the actions that produced those losses.
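
The abstract contrasts standard reward-prediction-error learning with what is needed when feedback comes from a competitor. The sketch below is a minimal, illustrative Python implementation of that idea, not the authors' actual model or task: action values are updated from the competitor's outcomes as taken from the competitor's perspective, so a competitor's unexpected loss at an action lowers that action's value and the action tends to be avoided. All names and parameters here (ALPHA, softmax_choice, update_from_competitor, the three-location action set) are assumptions made for illustration.

import math
import random

# Minimal Rescorla-Wagner-style sketch (illustrative only) of learning action
# values in a shared foraging task, both from the player's own feedback and
# from a competitor's feedback.

ALPHA = 0.2          # learning rate (assumed value)
ACTIONS = [0, 1, 2]  # e.g. three foraging locations

def softmax_choice(q, beta=3.0):
    """Choose an action with probability increasing in its estimated value."""
    weights = [math.exp(beta * q[a]) for a in ACTIONS]
    threshold = random.uniform(0, sum(weights))
    cumulative = 0.0
    for action, w in zip(ACTIONS, weights):
        cumulative += w
        if threshold <= cumulative:
            return action
    return ACTIONS[-1]

def update_from_own_outcome(q, action, outcome):
    """Standard reward-prediction-error update from the player's own feedback."""
    rpe = outcome - q[action]
    q[action] += ALPHA * rpe
    return rpe

def update_from_competitor(q, action, competitor_outcome):
    """Update the same action values from the competitor's feedback, coded from
    the competitor's perspective: an unexpected loss suffered by the competitor
    at this action lowers its value for the player, so the action tends to be
    avoided. If the player instead coded the competitor's loss as its own gain,
    the sign would flip and the action would be wrongly reinforced, which is
    the problem the abstract raises."""
    rpe = competitor_outcome - q[action]
    q[action] += ALPHA * rpe
    return rpe

# Example: the competitor forages location 1 and unexpectedly loses.
q = {a: 0.0 for a in ACTIONS}
update_from_competitor(q, action=1, competitor_outcome=-1.0)
# q[1] is now negative, so softmax_choice(q) makes location 1 less likely.

The key choice in this sketch is that update_from_competitor uses the competitor's own outcome rather than the player's relative gain; with the opposite sign convention the player would be drawn toward the very actions it should avoid.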

Citation

2010. NeuroImage, 53(2):790-9.

Related Datasets

Howard-Jones PA, Bogacz R, Yoo JH, Leonards U, Demetriou S

Reinforcement learning behaviour in the presence of a competitor

Similar content

Preprint
Liebana Garcia S, Laffere A, Toschi C, Schilling L, Podlaski J, Fritsche M, Zatka-Haas P, Li Y, Bogacz R, Saxe A, Lak A

Striatal dopamine reflects individual long-term learning trajectories

Preprint
Pinchetti L, Qi C, Lokshyn O, Oliviers G, Emde C, Tang M, M'Charrak A, Frieder S, Menzat B, Bogacz R, Lukasiewicz T, Salvatori T

Benchmarking Predictive Coding Networks -- Made Simple