Publication | Closed Access
A Biologically Inspired System for Action Recognition
Citations: 718
References: 66
Year: 2007
Venue: Unknown
Topics: Engineering, Biologically-motivated System, Video Interpretation, Image Analysis, Pattern Recognition, Robot Learning, Video Transformer, Vision Recognition, Cognitive Science, Machine Vision, Action Pattern, Hierarchical Feedforward, Computer Science, Video Understanding, Deep Learning, Computer Vision, Motion Detection, Biologically Inspired System, Object Recognition, Activity Recognition, Motion Processing
TL;DR: A biologically motivated system for action recognition from video is presented, building on hierarchical feedforward architectures for object recognition and on neurobiological models of motion processing in the visual cortex. The system uses a hierarchy of spatio-temporal feature detectors, beginning with motion-direction sensitive units and progressing to position-invariant detectors, and explores various unit types and architectures. Sparse intermediate features combined with simple feature selection yield state-of-the-art accuracy using far fewer features across multiple public action datasets.
Abstract: We present a biologically-motivated system for the recognition of actions from video sequences. The approach builds on recent work on object recognition based on hierarchical feedforward architectures [25, 16, 20] and extends a neurobiological model of motion processing in the visual cortex [10]. The system consists of a hierarchy of spatio-temporal feature detectors of increasing complexity: an input sequence is first analyzed by an array of motion-direction sensitive units which, through a hierarchy of processing stages, lead to position-invariant spatio-temporal feature detectors. We experiment with different types of motion-direction sensitive units as well as different system architectures. As in [16], we find that sparse features in intermediate stages outperform dense ones and that using a simple feature selection approach leads to an efficient system that performs better with far fewer features. We test the approach on different publicly available action datasets, in all cases achieving the highest results reported to date.
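The pipeline described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model: the function names and the two toy stages below are assumptions. Stage 1 stands in for the motion-direction sensitive units with a Reichardt-style correlation (a unit tuned to direction d responds where the previous frame, displaced along d, coincides with the current frame); Stage 2 stands in for the later processing stages with global max pooling, which discards position and yields a small invariant descriptor.

```python
import numpy as np

def shift(img, dy, dx):
    # Translate an image by (dy, dx) with zero padding at the borders.
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

# Four toy preferred directions: right, left, down, up.
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def motion_direction_maps(frames):
    """Stage 1 (toy stand-in for motion-direction sensitive units):
    for each consecutive frame pair and each direction d, the response
    is high where frame t matches frame t-1 displaced along d."""
    maps = []
    for t in range(1, len(frames)):
        per_dir = [shift(frames[t - 1], dy, dx) * frames[t]
                   for dy, dx in DIRECTIONS]
        maps.append(np.stack(per_dir))
    return np.stack(maps)  # shape: (T-1, n_directions, H, W)

def position_invariant_features(maps):
    """Stage 2 (toy stand-in for the later stages): global max pooling
    over time and space gives one position-invariant value per direction."""
    return maps.max(axis=(0, 2, 3))
```

For example, a single bright pixel drifting rightward produces a descriptor whose maximum falls on the rightward-tuned unit, regardless of where in the frame the motion occurs. The real system replaces these toy stages with learned spatio-temporal templates and feature selection.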
| Year | Citations |
|---|---|
| 1998 | 56.5K |
| 2011 | 41.1K |
| 2003 | 6.9K |
| 1973 | 4.4K |
| 1996 | 3.9K |
| 1985 | 3.5K |
| 1999 | 3.4K |
| 2004 | 2.9K |
| 1983 | 2.9K |
| 2006 | 2.5K |