Publication | Closed Access
Live dense reconstruction with a single moving camera
Year: 2010 · Venue: Unknown · Citations: 366 · References: 15
Keywords: Engineering, Depth Map, Live Dense Reconstruction, Image Analysis, Virtual Reality, Depth Maps, Computational Geometry, Single Live Camera, Geometric Modeling, Machine Vision, Structure From Motion, Computer Vision, 3D Vision, Natural Sciences, Dense Reconstruction, Extended Reality, 3D Reconstruction, Multi-view Geometry, Scene Modeling
We present a method which enables rapid and dense reconstruction of scenes browsed by a single live camera. We take point-based real-time structure from motion (SFM) as our starting point, generating accurate 3D camera pose estimates and a sparse point cloud. Our main novel contribution is to use an approximate but smooth base mesh generated from the SFM to predict the view at a bundle of poses around automatically selected reference frames spanning the scene, and then warp the base mesh into highly accurate depth maps based on view-predictive optical flow and a constrained scene flow update. The quality of the resulting depth maps means that a convincing global scene model can be obtained simply by placing them side by side and removing overlapping regions. We show that a cluttered indoor environment can be reconstructed from a live hand-held camera in a few seconds, with all processing performed by current desktop hardware. Real-time monocular dense reconstruction opens up many application areas, and we demonstrate both real-time novel view synthesis and advanced augmented reality where augmentations interact physically with the 3D scene and are correctly clipped by occlusions.
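The constrained scene-flow update described above refines each base-mesh vertex so that its reprojection agrees with dense correspondences between a reference frame and nearby predicted views. At its core this is multi-view triangulation of flow-matched pixels. The sketch below shows the two-view case with a standard linear (DLT) triangulation; the function names, intrinsics, and poses are illustrative assumptions, not code or values from the paper:

```python
import numpy as np

def project(K, R, t, X):
    """Project world point X into a camera with world-to-camera pose (R, t)."""
    h = K @ (R @ X + t)
    return h[:2] / h[2]

def triangulate(K, R1, t1, R2, t2, x1, x2):
    """Linear (DLT) triangulation of one point from two pixel observations.

    K: 3x3 intrinsics shared by both views; (Ri, ti): world-to-camera poses;
    xi: observed pixel (u, v) in view i, e.g. a flow-tracked correspondence.
    """
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: re-project a known point into two views and recover it.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.1, 0.0, 0.0])  # small sideways baseline
X_true = np.array([0.2, 0.1, 2.0])
x1, x2 = project(K, R1, t1, X_true), project(K, R2, t2, X_true)
X_est = triangulate(K, R1, t1, R2, t2, x1, x2)
```

In the paper's setting the second observation comes from view-predictive optical flow against a rendered base-mesh prediction rather than feature matching, and the update is constrained to keep the mesh smooth; this sketch only illustrates the underlying geometry.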