Concepedia

Publication | Closed Access

Self-Supervised Pretraining via Multimodality Images With Transformer for Change Detection

35

Citations

41

References

2023

Year

Abstract

Self-supervised learning has shown remarkable success in image representation learning. Among such methods, masked image modeling and contrastive learning are the most recent and dominant approaches. However, the two behave differently when transferred to various downstream tasks. In this paper, we propose an RGB-elevation contrastive and masked-image-prediction pre-training framework, where the elevation is a normalized digital surface model. We then evaluate the learned representation by transferring the pre-trained model to the change detection task. To this end, we leverage the recently proposed vision transformer’s capability of attending to objects and combine it with a pretext task consisting of masked image modeling and instance discrimination to fine-tune the spatial tokens. In addition, change detection requires information interaction between the two temporal remote sensing images. To address this, we propose a plug-in temporal fusion module based on masked cross attention, and we evaluate its effectiveness on three open change detection datasets by using it to initialize the supervised training weights. Our method improves over supervised learning methods and two mainstream self-supervised learning methods, MoCo and DINO, on the change detection task. Our experiments also achieve state-of-the-art results on four change detection datasets. The code will be available at URL.
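The temporal fusion idea mentioned in the abstract — tokens from one temporal image attending to tokens from the other via masked cross attention — can be sketched roughly as follows. This is a minimal single-head NumPy illustration under assumed shapes and names; it is not the authors' implementation, whose details are not given here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(tokens_t1, tokens_t2, mask=None):
    """Illustrative masked cross attention between two temporal token sets.

    tokens_t1, tokens_t2: (n_tokens, dim) spatial tokens from the two
    temporal images; queries come from t1, keys/values from t2.
    mask: optional (n_tokens, n_tokens) boolean array, True = blocked pair.
    Returns fused tokens of shape (n_tokens, dim).
    """
    d = tokens_t1.shape[-1]
    scores = tokens_t1 @ tokens_t2.T / np.sqrt(d)  # scaled dot-product scores
    if mask is not None:
        scores = np.where(mask, -1e9, scores)      # suppress masked positions
    attn = softmax(scores, axis=-1)                # attention over t2 tokens
    return attn @ tokens_t2                        # aggregate t2 information

rng = np.random.default_rng(0)
a = rng.normal(size=(16, 32))  # hypothetical tokens from temporal image 1
b = rng.normal(size=(16, 32))  # hypothetical tokens from temporal image 2
fused = cross_attention_fusion(a, b)
print(fused.shape)  # (16, 32)
```

In a real "plug-in" module this would be wrapped with learned query/key/value projections and inserted between the two temporal branches before the change-detection head; the sketch only shows the attention-based information exchange itself.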
