Concepedia

Publication | Closed Access

SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks

Citations: 2.4K
References: 37
Year: 2019

TLDR

Siamese trackers formulate tracking as convolutional feature cross-correlation, but they lag behind state-of-the-art methods and cannot exploit deep backbones such as ResNet-50. This work shows that the accuracy gap stems from the lack of strict translation invariance, breaks that restriction with a spatial-aware sampling strategy, and successfully trains a ResNet-driven Siamese tracker. Depth-wise and layer-wise aggregation further improves accuracy while reducing model size. Extensive ablation studies confirm the design, and the tracker obtains the best current results on OTB2015, VOT2018, UAV123, and LaSOT; the model will be released to facilitate further study.

Abstract

Siamese network based trackers formulate tracking as convolutional feature cross-correlation between a target template and a search region. However, Siamese trackers still have an accuracy gap compared with state-of-the-art algorithms, and they cannot take advantage of features from deep networks such as ResNet-50 or deeper. In this work we prove that the core reason is the lack of strict translation invariance. Through comprehensive theoretical analysis and experimental validation, we break this restriction with a simple yet effective spatial-aware sampling strategy and successfully train a ResNet-driven Siamese tracker with significant performance gains. Moreover, we propose a new model architecture that performs depth-wise and layer-wise aggregations, which not only further improves accuracy but also reduces model size. We conduct extensive ablation studies to demonstrate the effectiveness of the proposed tracker, which obtains the currently best results on four large tracking benchmarks: OTB2015, VOT2018, UAV123, and LaSOT. Our model will be released to facilitate further studies of this problem.
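The depth-wise cross-correlation mentioned in the abstract can be sketched in a few lines: each channel of the search-region features is correlated with the matching channel of the template features, yielding one response map per channel instead of a single fused map. The sketch below is a minimal NumPy illustration of that operation (the paper's actual layer operates on learned CNN features inside a deep-learning framework); the function name and array shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def depthwise_xcorr(search, template):
    """Depth-wise cross-correlation (illustrative sketch).

    search:   (C, Hs, Ws) feature map of the search region
    template: (C, Ht, Wt) feature map of the target template
    returns:  (C, Hs - Ht + 1, Ws - Wt + 1) per-channel response maps
    """
    C, Hs, Ws = search.shape
    _, Ht, Wt = template.shape
    Ho, Wo = Hs - Ht + 1, Ws - Wt + 1
    out = np.zeros((C, Ho, Wo))
    for c in range(C):                      # one correlation per channel
        for i in range(Ho):
            for j in range(Wo):
                window = search[c, i:i + Ht, j:j + Wt]
                out[c, i, j] = np.sum(window * template[c])
    return out
```

Where the template pattern appears in the search region, the corresponding response map peaks, which is how the tracker localizes the target; keeping the channels separate (rather than summing them) is what reduces parameter count relative to the earlier up-channel correlation.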
