Publication | Open Access
Predictive learning as a network mechanism for extracting low-dimensional latent space representations
Citations: 77 | References: 64 | Year: 2021
Artificial neural networks have achieved successes in sequential processing and planning, often attributed to the emergence of low-dimensional latent structure in their activity. This study investigates whether learning to predict observations generates representations with easily accessed low-dimensional latent structure, and whether the network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables, with the aim of aiding the analysis of experimental data. The authors trained a recurrent neural network to predict a sequence of observations, then quantified the resulting dynamics with nonlinear measures of intrinsic dimensionality and linear decoding of latent variables, supported by mathematical arguments. Predictive training produced low-dimensional but nonlinearly transformed representations that faithfully encode the latent structure of the sensory environment, as shown by intrinsic dimensionality metrics and successful linear decoding of latent variables.
Abstract

Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
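The sketch below is a minimal illustration of the approach the abstract describes, not the authors' code: a small recurrent network (here a vanilla PyTorch nn.RNN; the toy environment, architecture, and hyperparameters are all assumptions for illustration) is trained on next-step prediction of observations driven by a one-dimensional circular latent variable, and the latent is then linearly decoded from the hidden states.

```python
# Minimal sketch (not the authors' implementation): train an RNN for
# next-step prediction of observations generated from a 1-D latent variable,
# then test linear decodability of that latent from the hidden states.
# Environment, network size, and training settings are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Toy environment: a latent angle on a ring drives high-dimensional observations
T, obs_dim, hid_dim = 500, 32, 64
theta = np.cumsum(rng.normal(0.0, 0.1, size=T))           # latent random walk on a circle
mixing = rng.normal(size=(2, obs_dim))                    # fixed embedding of (cos, sin)
obs = np.tanh(np.stack([np.cos(theta), np.sin(theta)], 1) @ mixing)

x = torch.tensor(obs[None, :-1], dtype=torch.float32)     # inputs  o_t
y = torch.tensor(obs[None, 1:], dtype=torch.float32)      # targets o_{t+1}

# Predictive RNN: predict the next observation from the current one
class PredictiveRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(obs_dim, hid_dim, batch_first=True)
        self.readout = nn.Linear(hid_dim, obs_dim)

    def forward(self, inp):
        h, _ = self.rnn(inp)          # hidden states: the learned representation
        return self.readout(h), h

model = PredictiveRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    pred, _ = model(x)
    loss = ((pred - y) ** 2).mean()   # next-step prediction error
    opt.zero_grad(); loss.backward(); opt.step()

# Linear decodability: read the latent (as cos/sin) out of the hidden states
with torch.no_grad():
    _, h = model(x)
H = h[0].numpy()
targets = np.stack([np.cos(theta[:-1]), np.sin(theta[:-1])], 1)
design = np.c_[H, np.ones(len(H))]                        # add a bias column
W, *_ = np.linalg.lstsq(design, targets, rcond=None)
resid = ((design @ W - targets) ** 2).sum()
r2 = 1 - resid / ((targets - targets.mean(0)) ** 2).sum()
print(f"prediction MSE: {loss.item():.4f}, latent decoding R^2: {r2:.3f}")
```

In this toy setting the decoding R² should approach 1 as the prediction loss falls, mirroring the paper's claim that predictive training yields linearly decodable latent structure; a fuller analysis in the spirit of the paper would also estimate the intrinsic dimensionality of the hidden-state manifold, e.g. with a nonlinear nearest-neighbor estimator.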