Publication | Closed Access
Unifying Planar and Point Mapping in Monocular SLAM
Citations: 33
References: 17
Year: 2010
Venue: Unknown
Keywords: Engineering, Geometry, Field Robotics, Depth Map, Planar Features, Localization, Planar Constraint, Robot Learning, Computational Geometry, Geometric Modeling, Cartography, Machine Vision, Structure From Motion, Planar Structure, Computer Vision, Point Mapping, 3D Vision, Odometry, Natural Sciences, Multi-view Geometry
Planar features in filter-based Visual SLAM systems require an initialisation stage that delays their use within the estimation. In this stage, surface and pose are initialised either by using an already generated map of point features [2, 3] or by using visual cues from frames [4]. This delay is unsatisfactory, especially in scenarios where the camera moves rapidly and visual features are observed for only a limited period. In this paper we present a unified approach to mapping in which points and planes are initialised alongside each other within the same framework. The best structure emerges according to what the camera observes, thus avoiding delayed initialisation for planar features. To do this we use a parameterisation similar to the one used for planar features in [3, 4]. The Inverse Depth Planar Parameterisation (IDPP), as we call it, represents both planes and points. The IDPP is combined with a point-based measurement model into which the planar constraint is introduced. The latter allows us to estimate and grow a planar structure where suitable, or to estimate a 3-D point if visual measurements do not support the constraint. The IDPP contains three main components: (1) a reference camera (RC); (2) the inverse depth, w.r.t. the RC, of a seed 3-D point on the plane; (3) the normal of the plane.
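The three IDPP components described in the abstract can be sketched as follows. This is a minimal geometric illustration, not the paper's implementation: the function names, the NumPy representation, and the ray-plane intersection used to recover depths of other points on the plane are assumptions made for clarity.

```python
import numpy as np

def idpp_point(r_c, m, rho):
    """Recover the seed 3-D point from the IDPP components:
    r_c  -- reference camera centre (3-vector),
    m    -- unit bearing of the seed pixel seen from the RC,
    rho  -- inverse depth of the seed point w.r.t. the RC."""
    return r_c + m / rho

def idpp_plane_depth(r_c, m_j, r_c0, m0, rho, n):
    """Depth along ray m_j (from camera centre r_c) to the plane
    defined by the IDPP seed point and plane normal n, via a
    standard ray-plane intersection (illustrative helper)."""
    p0 = idpp_point(r_c0, m0, rho)      # seed 3-D point on the plane
    return n @ (p0 - r_c) / (n @ m_j)   # signed depth along m_j
```

For a horizontal plane two metres in front of a reference camera at the origin (normal along the optical axis, seed inverse depth 0.5), intersecting any other pixel ray with the plane yields a consistent 3-D point on that plane, which is how a planar structure can grow from the seed point when measurements support the constraint.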