Publication | Open Access
Unifying Visual and Vision-Language Tracking via Contrastive Learning
Citations: 36 | References: 44 | Year: 2024
Keywords: Modality-adaptive Box Head, Machine Learning, Engineering, Video Interpretation, Natural Language Processing, Multimodal LLM, Image Analysis, Pattern Recognition, Object Tracking, Contrastive Learning, Specific Modality, Machine Translation, Machine Vision, Different Modalities, Vision Language Model, Moving Object Tracking, Video Understanding, Deep Learning, Computer Vision, Eye Tracking
Single object tracking aims to locate the target object in a video sequence according to the state specified by different modal references, including the initial bounding box (BBOX), natural language (NL), or both (NL+BBOX). Due to the gap between modalities, most existing trackers are designed for only one or a subset of these reference settings and overspecialize on a specific modality. In contrast, we present a unified tracker called UVLTrack, which can simultaneously handle all three reference settings (BBOX, NL, NL+BBOX) with the same parameters. The proposed UVLTrack enjoys several merits. First, we design a modality-unified feature extractor for joint visual and language feature learning and propose a multi-modal contrastive loss that aligns visual and language features into a unified semantic space. Second, we propose a modality-adaptive box head, which makes full use of the target reference to dynamically mine ever-changing scenario features from video contexts and distinguishes the target in a contrastive way, enabling robust performance across all reference settings. Extensive experimental results demonstrate that UVLTrack achieves promising performance on seven visual tracking datasets, three vision-language tracking datasets, and three visual grounding datasets. Code and models will be open-sourced at https://github.com/OpenSpaceAI/UVLTrack.
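To illustrate the idea of a multi-modal contrastive loss that pulls paired visual and language features into a shared semantic space, the sketch below implements a standard symmetric InfoNCE-style objective in NumPy. This is a generic illustration under our own assumptions (function name, batch layout, and temperature value are hypothetical), not the loss actually used in UVLTrack.

```python
import numpy as np

def multimodal_contrastive_loss(visual, language, temperature=0.07):
    """Symmetric InfoNCE-style loss: row i of `visual` and row i of
    `language` form a positive pair; all other rows are negatives.
    Illustrative sketch only, not the paper's exact formulation."""
    # L2-normalize embeddings so dot products are cosine similarities
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = language / np.linalg.norm(language, axis=1, keepdims=True)
    logits = (v @ t.T) / temperature  # (B, B) similarity matrix

    def log_softmax(x, axis):
        # numerically stable log-softmax
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    idx = np.arange(len(v))
    # visual -> language direction (softmax over each row)
    loss_v2t = -log_softmax(logits, axis=1)[idx, idx].mean()
    # language -> visual direction (softmax over each column)
    loss_t2v = -log_softmax(logits, axis=0)[idx, idx].mean()
    return 0.5 * (loss_v2t + loss_t2v)
```

As a sanity check, correctly paired embeddings should yield a lower loss than mismatched ones, since the diagonal of the similarity matrix then carries the positive pairs.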