Publication | Closed Access
Where-and-When to Look: Deep Siamese Attention Networks for Video-Based Person Re-Identification
Citations: 250 | References: 59 | Year: 2018
Image Analysis, Machine Learning, Machine Vision, Data Science, Pattern Recognition, Siamese Attention Architecture, Biometrics, Similarity Metrics, Spatial Information, Engineering, Human Identification, Video Transformer, Video Understanding, Deep Learning, Video Retrieval, Video Interpretation, Computer Vision, Video-based Person Re-identification
Video-based person re-identification (re-id) is a central application in surveillance systems and a significant security concern. Matching persons across disjoint camera views from their video fragments is inherently challenging due to large visual variations and uncontrolled frame rates. Two steps are crucial to person re-id: discriminative feature learning and metric learning. However, existing approaches treat the two steps independently and do not make full use of the temporal and spatial information in the videos. In this paper, we propose a Siamese attention architecture that jointly learns spatiotemporal video representations and their similarity metric. The network extracts local convolutional features from regions of each frame and enhances their discriminative capability by focusing on distinct regions when measuring similarity with another pedestrian video. The attention mechanism is embedded into spatial gated recurrent units to selectively propagate relevant features and memorize their spatial dependencies through the network. The model thus learns which parts (where) of which frames (when) are relevant and distinctive for matching persons, and assigns them higher importance. The proposed Siamese model is end-to-end trainable, jointly learning comparable hidden representations for paired pedestrian videos and their similarity value. Extensive experiments on three benchmark datasets demonstrate the effectiveness of each component of the proposed deep network, which outperforms state-of-the-art methods.
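The where-and-when idea described above can be sketched in toy form: attention weights pick informative regions within each frame (where), a gated recurrence accumulates them over frames (when), and a similarity score compares the two video encodings. This is a minimal pure-NumPy illustration with random weights and a single gate, a hypothetical simplification, not the authors' actual spatial-GRU implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_regions(frame_feats, query):
    """Soft attention over one frame's region features ("where").

    frame_feats: (R, D) local features of R regions; query: (D,).
    Returns a (D,) attention-weighted frame feature.
    """
    weights = softmax(frame_feats @ query)
    return weights @ frame_feats

def encode_video(video_feats, query, W_z, W_h):
    """Gated recurrence over frames ("when"); a toy stand-in for the
    paper's spatial gated recurrent units.

    video_feats: (T, R, D); W_z, W_h: (H, D) random projection weights.
    Returns a (H,) hidden video representation.
    """
    h = np.zeros(W_z.shape[0])
    for frame in video_feats:
        x = attend_regions(frame, query)
        z = 1.0 / (1.0 + np.exp(-(W_z @ x)))   # update gate: how much this frame matters
        h = (1.0 - z) * h + z * np.tanh(W_h @ x)
    return h

def similarity(h1, h2):
    """Cosine similarity between two video encodings (the learned metric
    in the paper is more elaborate; cosine is used here for illustration)."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-8))
```

In the actual model, the region features come from a shared CNN, the attention query and gate weights are learned end-to-end on paired videos, and both branches of the Siamese network share all parameters so that their hidden representations remain comparable.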