Optimal decision network with distributed representation.
Scientific Abstract
On the basis of a detailed analysis of reaction times and neurophysiological data from choice tasks, it has been proposed that the brain implements an optimal statistical test during simple perceptual decisions. It has recently been shown how this optimal test can be implemented in biologically plausible models of decision networks, but that analysis was restricted to highly simplified localist models containing abstract units that describe the activity of whole cell assemblies rather than individual neurons. This paper derives the optimal parameters of a decision network model composed of individual neurons, in which the alternatives are represented by distributed patterns of neuronal activity. It is also shown how the optimal weights in the decision network can be learnt via iterative rules using only information accessible to individual synapses. Simulations demonstrate that the network with the optimal synaptic weights achieves better performance, and matches fundamental behavioural regularities observed in choice tasks (Hick's law and the relationship between error rate and decision time) more closely, than a network whose synaptic weights are set according to a standard Hebb rule.
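The abstract refers to an optimal statistical test for choices among multiple alternatives (a multi-hypothesis sequential probability ratio test) and to Hick's law, the growth of decision time with the number of alternatives. The sketch below illustrates that general idea only; it is not the network model derived in the paper, and the accumulator dynamics, gain, noise, and threshold values are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only (not the paper's model): an MSPRT-like race
# between N evidence accumulators, stopping when the leader exceeds its
# strongest competitor by a fixed threshold. Gain, noise, and threshold
# are assumed values chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def decide(n_alternatives, gain=0.2, noise=0.5, threshold=5.0):
    """Return (chosen alternative, number of steps to decision).
    Alternative 0 is taken to be the correct one."""
    x = np.zeros(n_alternatives)           # evidence accumulators
    steps = 0
    while True:
        steps += 1
        drift = np.zeros(n_alternatives)
        drift[0] = gain                    # correct alternative receives extra input
        x += drift + noise * rng.normal(size=n_alternatives)
        s = np.sort(x)
        if s[-1] - s[-2] >= threshold:     # leader beats runner-up by the threshold
            return int(np.argmax(x)), steps

# Hick's-law-style check: decision time should grow with the number of alternatives.
for n in (2, 4, 8):
    results = [decide(n) for _ in range(200)]
    mean_rt = np.mean([t for _, t in results])
    accuracy = np.mean([c == 0 for c, _ in results])
    print(f"N={n}: mean steps {mean_rt:.0f}, accuracy {accuracy:.2f}")
```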
Citation
2007. Neural Netw, 20(5):564-76.
Free Full Text at Europe PMC
PMC7618029