Publication | Closed Access
Pose-Driven Deep Convolutional Model for Person Re-identification
Citations: 899 | References: 55 | Year: 2017 | Venue: Unknown
Keywords: Convolutional Neural Network, Improved Feature Extraction, Machine Learning, Engineering, Human Pose Estimation, 3D Pose Estimation, Biometrics, Feature Extraction, Image Analysis, Data Science, Pattern Recognition, Machine Vision, Feature Learning, Data Re-identification, Deep Learning, Computer Vision, Global Human Body, Human Identification, Person Re-identification
Feature extraction and matching are two crucial components in person Re-Identification (ReID). The large pose deformations and complex view variations exhibited in captured person images significantly increase the difficulty of learning and matching features. To overcome these difficulties, in this work we propose a Pose-driven Deep Convolutional (PDC) model to learn improved feature extraction and matching models end to end. Our deep architecture explicitly leverages human part cues to alleviate pose variations and learn robust feature representations from both the global image and different local parts. To match the features from the global human body and local body parts, a pose-driven feature weighting sub-network is further designed to learn adaptive feature fusions. Extensive experimental analyses and results on three popular datasets demonstrate significant performance improvements of our model over all published state-of-the-art methods.
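The abstract gives no implementation details for the feature weighting sub-network, so the following is an illustrative sketch only: a toy NumPy stand-in for pose-driven fusion, in which a linear scorer weights the global feature and each part feature before summing them. All function and variable names (`weighted_fusion`, `W`, `b`, etc.) are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def weighted_fusion(global_feat, part_feats, W, b):
    """Fuse a global feature with per-part features via learned weights.

    Toy analogue of a pose-driven feature weighting sub-network:
    a linear layer scores each feature vector, softmax normalizes
    the scores into fusion weights, and the fused descriptor is the
    weighted sum of all feature vectors.
    """
    feats = np.stack([global_feat] + part_feats)  # shape (1+P, D)
    scores = feats @ W + b                        # one raw score per feature
    weights = softmax(scores)                     # weights sum to 1
    return weights @ feats                        # fused (D,) descriptor

# Toy usage: one 4-D global feature and two 4-D part features.
rng = np.random.default_rng(0)
g = rng.standard_normal(4)
parts = [rng.standard_normal(4), rng.standard_normal(4)]
W, b = rng.standard_normal(4), 0.0
fused = weighted_fusion(g, parts, W, b)
print(fused.shape)
```

In the actual PDC model the weights come from a learned sub-network conditioned on pose, and the features come from deep convolutional branches; this sketch only shows the fusion arithmetic.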