Long-term recurrent convolutional networks for visual recognition and description
Closed Access · 2015 · 5.2K Citations · 40 References
Deep convolutional networks dominate image interpretation, but recurrent, temporally deep models promise to handle variable-length sequences by composing spatial and temporal layers and mapping variable-length inputs to variable-length outputs, while remaining trainable by backpropagation. The authors develop an end-to-end trainable recurrent convolutional architecture for large-scale visual learning and evaluate it on video recognition, image description, retrieval, and video narration benchmarks. The architecture couples long-term recurrent units with modern convolutional networks, enabling joint training of temporal dynamics and perceptual representations. The models outperform state-of-the-art recognition and generation systems that are separately defined or optimized, especially when target concepts are complex or training data are limited, and can learn long-term temporal dependencies.
Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
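To make the architecture concrete, below is a minimal sketch of the LRCN idea, assuming a PyTorch-style implementation; the class name, layer sizes, and hyperparameters are illustrative assumptions, not taken from the paper. A small per-frame CNN stands in for the "modern visual convnet", an LSTM models temporal dynamics over the resulting feature sequence, and a single backward pass trains both components jointly.

```python
# Illustrative LRCN-style model (a sketch, not the authors' implementation).
# A CNN encodes each frame, an LSTM models temporal dynamics, and both are
# trained end-to-end; all dimensions here are arbitrary choices.
import torch
import torch.nn as nn

class LRCN(nn.Module):
    def __init__(self, num_classes, feat_dim=256, hidden_dim=256):
        super().__init__()
        # Per-frame convolutional encoder (placeholder for a full convnet).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Recurrent unit over the sequence of frame features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        # Encode every frame independently, then restore the time axis.
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)     # temporal modeling over the sequence
        return self.head(out[:, -1])  # sequence-level prediction

# Joint end-to-end training: one backward pass updates LSTM and CNN alike.
model = LRCN(num_classes=101)
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
loss = nn.functional.cross_entropy(logits, torch.tensor([3, 7]))
loss.backward()
```

Because the CNN sits inside the computation graph, the same backpropagation step that optimizes the recurrent dynamics also refines the perceptual representation, which is the joint training the abstract describes; a sequence-generation variant would replace the classification head with a per-step decoder over output tokens.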