Concepedia

Publication | Open Access

Random synaptic feedback weights support error backpropagation for deep learning

Citations: 816
References: 52
Year: 2016

TLDR

The brain’s multilayer neural architecture is representationally powerful, but learning is hindered by the difficulty of attributing errors, especially because backpropagation requires a precise symmetric backward connectivity that is unlikely to exist in biological circuits. The study shows that this precise symmetric connectivity is unnecessary for effective error propagation: errors can instead be propagated by multiplying them with fixed random synaptic weights rather than the exact transposed forward weights. This random-weight mechanism transmits teaching signals across multiple layers, matches backpropagation’s performance, and challenges long-held assumptions about algorithmic constraints on learning in the brain.

Abstract

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron’s axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
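The mechanism lends itself to a short illustration. Below is a minimal sketch in NumPy (not the authors' code; the toy task, layer sizes, and learning rate are assumptions) of the idea the abstract describes: when projecting the output error back to a hidden layer, a fixed random feedback matrix B is used in place of the transpose of the forward weights that backpropagation would require.

```python
# Minimal sketch of learning with random feedback weights (feedback alignment).
# Assumed toy setup: a two-layer network fit to a random linear mapping.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn a random linear map from 30 inputs to 10 outputs.
n_in, n_hidden, n_out = 30, 20, 10
T = rng.normal(size=(n_out, n_in))           # target mapping
X = rng.normal(size=(200, n_in))             # 200 training examples
Y = X @ T.T

# Forward weights (learned) and a fixed random feedback matrix B (never updated).
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
B = rng.normal(scale=0.1, size=(n_hidden, n_out))

lr = 0.01
for step in range(2000):
    # Forward pass with a tanh hidden layer.
    h = np.tanh(X @ W1.T)                    # (200, n_hidden)
    y_hat = h @ W2.T                         # (200, n_out)
    e = y_hat - Y                            # output error

    # Backpropagation would use W2.T here; instead the error is sent
    # backwards through the fixed random matrix B.
    delta_h = (e @ B.T) * (1.0 - h ** 2)     # tanh derivative applied elementwise

    # Gradient-like weight updates.
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

    if step % 500 == 0:
        print(f"step {step:4d}  mse {np.mean(e ** 2):.4f}")
```

In the paper's account, the forward weights come to align with the fixed random feedback weights over the course of training, which is why the random backward projection ends up delivering useful teaching signals to the hidden layers.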
