Publication | Open Access
Unsupervised extrinsic calibration of depth sensors in dynamic scenes
Citations: 27
References: 25
Year: 2013
Venue: Unknown
Topics: Engineering, 3D Pose Estimation, Field Robotics, Depth Map, Rough Transform, Localization, Depth Sensors, Inexpensive Depth Sensors, Image Analysis, Calibration, Camera Calibration, Computational Geometry, Machine Vision, Computer Science, Structure From Motion, Computer Vision, 3D Vision, Natural Sciences, Multi-view Geometry
While inexpensive depth sensors are becoming increasingly ubiquitous, field-of-view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically, such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. In doing so, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform, which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.
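The abstract does not specify how the rough transform is computed from the motion-derived correspondences; a standard way to obtain a rigid transform from matched 3D points is the closed-form SVD-based (Kabsch/Umeyama) estimate. The sketch below is an illustrative assumption, not the paper's method; the function name `rough_rigid_transform` is hypothetical.

```python
import numpy as np

def rough_rigid_transform(src, dst):
    """Estimate the rigid transform (R, t) that maps points `src` onto
    corresponding points `dst` (both of shape (N, 3)), using the
    SVD-based Kabsch solution in the least-squares sense."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Flip the last singular direction if needed to avoid a reflection
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a pipeline like the one described, such a closed-form estimate would typically be wrapped in an outlier-robust scheme (e.g. RANSAC over the candidate correspondences) before the occlusion-aware refinement stage.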