Publication | Closed Access
GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose
Citations: 1.3K · References: 45 · Year: 2018 · Venue: CVPR
Geometric Learning · Scene Analysis · Engineering · Machine Learning · Optical Flow · Depth Map · Image Analysis · Data Science · Pattern Recognition · Robot Learning · Unsupervised Learning · Machine Vision · Computer Science · Structure From Motion · Deep Learning · Dense Depth · Computer Vision · 3D Vision · Egomotion Estimation · Scene Understanding · Camera Pose · Monocular Depth · Scene Modeling
We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry and are jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of the individual modules and then combined into an image reconstruction loss that reasons about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness to outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experiments on the KITTI driving dataset show that our scheme achieves state-of-the-art results on all three tasks, performing better than prior unsupervised methods and comparably with supervised ones.
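The adaptive geometric consistency loss mentioned in the abstract builds on a forward-backward flow check: a pixel whose forward flow is not cancelled by the backward flow sampled at its warped location is flagged as occluded or an outlier and down-weighted in the reconstruction loss. The sketch below is an illustrative NumPy version of such a check, not the paper's exact formulation; the threshold parameters `alpha` and `beta` and the nearest-neighbour sampling are assumptions for brevity.

```python
import numpy as np

def fb_consistency_mask(flow_fwd, flow_bwd, alpha=3.0, beta=0.05):
    """Forward-backward flow consistency check (illustrative sketch).

    flow_fwd, flow_bwd: (H, W, 2) pixel-displacement fields between two frames.
    Returns a boolean (H, W) mask that is True where the two flows agree,
    i.e. where the backward flow sampled at the forward-warped location
    approximately cancels the forward flow.
    """
    H, W, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Forward-warped coordinates, rounded and clipped to the image
    # (nearest-neighbour sampling keeps the sketch short).
    xw = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, H - 1)
    bwd_at_fwd = flow_bwd[yw, xw]  # backward flow at the warped locations
    # Consistent pixels satisfy flow_fwd + flow_bwd(warped) ≈ 0.
    diff = np.linalg.norm(flow_fwd + bwd_at_fwd, axis=-1)
    mag = np.linalg.norm(flow_fwd, axis=-1) + np.linalg.norm(bwd_at_fwd, axis=-1)
    # Adaptive threshold: larger motions tolerate larger discrepancies.
    return diff < np.maximum(alpha, beta * mag)
```

Pixels where the mask is False would be excluded from (or down-weighted in) the photometric reconstruction loss, which is how occlusions and texture ambiguities are kept from corrupting the unsupervised training signal.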