Publication | Open Access
GasMono: Geometry-Aided Self-Supervised Monocular Depth Estimation for Indoor Scenes
2023 · 26 Citations · 47 References · Venue: unknown
Topics: Indoor Scenes, Engineering, Machine Learning, Depth Map, 3D Computer Vision, Image Analysis, Pattern Recognition, Robot Learning, Computational Geometry, Geometric Modeling, Machine Vision, Low Texture, Computer Science, Deep Learning, Computer Vision, Large Rotation, 3D Vision, Natural Sciences, Extended Reality, Scene Understanding, Scene Modeling
This paper tackles the challenges of self-supervised monocular depth estimation in indoor scenes, namely large inter-frame rotations and low-texture regions. To deal with the former, we ease the learning process by obtaining coarse camera poses from monocular sequences through multi-view geometry. However, we found that, because of scale ambiguity across the different scenes in the training dataset, naïvely introducing these geometric coarse poses does not improve performance, which is counter-intuitive. To address this problem, we propose to refine the poses during training through rotation and translation/scale optimization. To soften the effect of low texture, we combine the global reasoning of vision transformers with an overfitting-aware, iterative self-distillation mechanism that provides more accurate depth guidance from the network itself. Experiments on the NYUv2, ScanNet, 7-Scenes, and KITTI datasets confirm the effectiveness of each component of our framework, which sets a new state of the art for indoor self-supervised monocular depth estimation and shows outstanding generalization ability. Code and models are available at https://github.com/zxcqlf/GasMono
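The abstract notes that coarse poses recovered by multi-view geometry only help once their translation scale is refined, because such reconstructions are scale-ambiguous across scenes. As a toy illustration of that scale-alignment idea (not the paper's learned refinement, whose details are in the full text; the function name and inputs here are hypothetical), a closed-form least-squares scale aligning a coarse translation to a reference translation can be computed as:

```python
import numpy as np

def align_translation_scale(t_coarse: np.ndarray, t_ref: np.ndarray) -> float:
    """Return the scale s minimizing ||s * t_coarse - t_ref||^2.

    Setting the derivative to zero gives s = <t_coarse, t_ref> / <t_coarse, t_coarse>.
    """
    denom = float(np.dot(t_coarse, t_coarse))
    if denom == 0.0:
        raise ValueError("coarse translation must be non-zero")
    return float(np.dot(t_coarse, t_ref)) / denom

# Example: a coarse translation that is 2x too long is rescaled by s = 0.5.
s = align_translation_scale(np.array([1.0, 2.0, 2.0]), np.array([0.5, 1.0, 1.0]))
print(s)  # 0.5
```

In the paper this scale (together with a rotation residual) is instead optimized jointly with the depth network during training, rather than solved in closed form per pair.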