Publication | Closed Access
UnOS: Unified Unsupervised Optical-Flow and Stereo-Depth Estimation by Watching Videos
2019 · 178 Citations · 45 References · Unknown Venue
Keywords: Convolutional Neural Network, Engineering, Depth Map, Image Analysis, Stereo Vision, Stereo Depth Estimation, Machine Vision, Stereo-depth Estimation, Computer Science, Structure From Motion, Deep Learning, Computer Vision, 3D Vision, Computer Stereo Vision, Unified System, Scene Understanding, Extended Reality, Multi-view Geometry, Stereoscopic Processing
In this paper, we propose UnOS, a unified system for unsupervised optical-flow and stereo-depth estimation with convolutional neural networks (CNNs), which exploits the inherent geometric consistency between the two tasks under a rigid-scene assumption. UnOS significantly outperforms state-of-the-art (SOTA) unsupervised approaches that treat the two tasks independently. Specifically, given two consecutive stereo image pairs from a video, UnOS estimates per-pixel stereo depth, camera ego-motion, and optical flow with three parallel CNNs. From these quantities, UnOS computes a rigid optical flow and compares it against the flow estimated by FlowNet, yielding the pixels that satisfy the rigid-scene assumption. We then encourage geometric consistency between the two estimated flows within these rigid regions, from which we derive a rigid-aware direct visual odometry (RDVO) module. We also propose rigid- and occlusion-aware flow-consistency losses for training UnOS. We evaluate UnOS on the popular KITTI dataset over four related tasks, i.e., stereo depth, optical flow, visual odometry, and motion segmentation.
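The rigid flow mentioned in the abstract follows from standard multi-view geometry: back-project each pixel to 3-D using its estimated depth, transform it by the estimated camera ego-motion, and re-project into the next frame. A minimal NumPy sketch of that computation (an illustration of the underlying geometry, not the authors' code; the function name and interface are hypothetical):

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Per-pixel optical flow induced purely by camera motion.

    depth : (H, W) depth map of frame 1
    K     : (3, 3) camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) from frame 1 to frame 2
    Returns an (H, W, 2) flow field in pixels.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # Back-project to 3-D points in the frame-1 camera coordinate system.
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    # Apply ego-motion and re-project into frame 2.
    proj = K @ (R @ pts + t.reshape(3, 1))
    proj = proj[:2] / proj[2:3]
    # Rigid flow = displacement of each pixel between the two frames.
    return (proj - pix[:2]).reshape(2, H, W).transpose(1, 2, 0)
```

For a static (rigid) pixel this flow should agree with the flow predicted by the flow network; UnOS exploits that agreement both to identify rigid regions and to impose its consistency losses. For example, under pure x-translation tx at constant depth d, the flow is fx*tx/d pixels horizontally.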