Paper ID: 2411.00393 • Published Nov 1, 2024
Advantages of Neural Population Coding for Deep Learning
Heiko Hoffmann
Abstract
Scalar variables, e.g., the orientation of a shape in an image, are commonly
predicted using a single output neuron in a neural network. In contrast, the
mammalian cortex represents variables with a population of neurons. In this
population code, each neuron is most active at its preferred value and shows
partial activity for other values. Here, we investigate the benefit of using a
population code for the output layer of a neural network. We compare population
codes against single-neuron outputs and one-hot vectors. First, we show
theoretically and in experiments with synthetic data that population codes
improve robustness to input noise in networks of stacked linear layers. Second,
we demonstrate the benefit of population codes for encoding ambiguous outputs,
such as those arising from symmetric objects. Using the T-LESS dataset of
texture-less real-world objects, we show that population codes improve the
accuracy of predicting object orientation from RGB-image input.
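The population code described above can be illustrated with a minimal sketch: a scalar is encoded by a bank of neurons with Gaussian tuning curves, each peaking at its preferred value, and decoded by an activity-weighted average of those preferred values. The number of neurons, the tuning width `sigma`, and the decoding scheme here are illustrative assumptions, not the paper's specific configuration.

```python
import numpy as np

def encode_population(value, preferred, sigma=0.05):
    """Gaussian tuning: each neuron is most active at its preferred
    value and shows partial activity for nearby values."""
    return np.exp(-(value - preferred) ** 2 / (2 * sigma ** 2))

def decode_population(activity, preferred):
    """Decode by the activity-weighted mean of preferred values."""
    return np.sum(activity * preferred) / np.sum(activity)

# 32 neurons tile the interval [0, 1] with evenly spaced preferred values
preferred = np.linspace(0.0, 1.0, 32)

code = encode_population(0.37, preferred)      # distributed activity pattern
estimate = decode_population(code, preferred)  # recovers a value near 0.37
```

Because the output is spread over many neurons, independent noise added to individual activities tends to average out in the decoded estimate, which is one intuition behind the robustness result for stacked linear layers. For a circular variable such as orientation, the same idea applies with periodic tuning curves and a circular mean for decoding.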