Publication | Closed Access
YOLO-6D+: Single Shot 6D Pose Estimation Using Privileged Silhouette Information
Citations: 17
References: 22
Year: 2020
Venue: Unknown
Topics: Engineering, Machine Learning, Human Pose Estimation, 3D Pose Estimation, Biometrics, 3D Computer Vision, Image Analysis, Robot Learning, Computational Geometry, Edge Restrain Loss, Single RGB Image, Machine Vision, Structure From Motion, Deep Learning, 3D Object Recognition, Computer Vision, 3D Vision, Single Shot 6D, Natural Sciences, Scene Understanding, Extended Reality, Object Pose Estimation, Robotics
Estimating the 6D pose of an object from a single RGB image is important for augmented reality and robotic grasping applications. In this work, we introduce YOLO-6D+, a new end-to-end deep network for 6D object pose estimation. In particular, we propose a novel silhouette prediction branch that outputs a predicted segmentation mask, forcing the underlying features to learn the silhouette information of the object. Furthermore, we introduce the edge restrain loss, a new loss function that constrains the 3D shape of the object. We use a two-stage method: we first predict 2D keypoints and then estimate the 6D pose with the PnP algorithm. On the public LINEMOD dataset, we demonstrate that the proposed approach outperforms the state-of-the-art YOLO-based single shot pose estimation approach [1] by 4.09% and 11.72% under the 2D projection metric and the ADD(-S) metric, respectively.