Publication | Open Access
Self-Supervised Siamese Learning on Stereo Image Pairs for Depth Estimation in Robotic Surgery
104 Citations · 0 References · Year: 2017 · Venue: Unknown
Keywords: Convolutional Neural Network, Engineering, Machine Learning, Stereo Imaging, Surgery, Depth Map, Self-supervised Siamese Learning, 3D Computer Vision, Image Analysis, Stereo Vision, Robot Learning, Radiology, Machine Vision, Computer-assisted Surgery, Medical Imaging, Stereo Image Pairs, Medical Image Computing, Deep Learning, Augmented Reality, Computer Vision, 3D Vision, Computer Stereo Vision, Robotic Surgery, Extended Reality, Depth Estimation, Medicine, Stereoscopic Processing
Robotic surgery has become a powerful tool for performing minimally invasive procedures, providing advantages in dexterity, precision, and 3D vision over traditional surgery. One popular robotic system is the da Vinci surgical platform, which allows preoperative information to be incorporated into live procedures using Augmented Reality (AR). Scene depth estimation is a prerequisite for AR, as accurate registration requires 3D correspondences between preoperative and intraoperative organ models. In the past decade, there has been much progress on depth estimation for surgical scenes, for example using monocular or binocular laparoscopes [1,2]. More recently, advances in deep learning have enabled depth estimation via Convolutional Neural Networks (CNNs) [3], but training requires a large image dataset with ground-truth depths. Inspired by [4], we propose a deep learning framework for surgical scene depth estimation that uses self-supervision for scalable data acquisition. Our framework consists of an autoencoder for depth prediction and a differentiable spatial transformer for training the autoencoder on stereo image pairs without ground-truth depths. Validation was conducted on stereo videos collected during robotic partial nephrectomy.
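The self-supervision principle described above can be sketched as follows: for rectified stereo pairs, a predicted disparity map lets a differentiable sampler reconstruct the left image by horizontally warping the right image, and the photometric reconstruction error then serves as the training loss, so no ground-truth depth is needed. This is a minimal NumPy illustration of that warping-and-loss idea, not the authors' implementation; the function names and the simple L1 loss are illustrative assumptions.

```python
import numpy as np

def warp_right_to_left(right, disparity):
    """Reconstruct the left image by sampling the right image at
    x - d(x) with linear interpolation along the horizontal axis
    (the 1-D analogue of a spatial transformer's bilinear sampler).
    `right` and `disparity` are (H, W) float arrays."""
    h, w = right.shape
    xs = np.arange(w)[None, :] - disparity        # sampling positions
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    frac = np.clip(xs - x0, 0.0, 1.0)             # interpolation weight
    rows = np.arange(h)[:, None]
    return (1.0 - frac) * right[rows, x0] + frac * right[rows, x0 + 1]

def photometric_loss(left, right, disparity):
    """Mean absolute reconstruction error between the left image and
    the warped right image -- the self-supervised training signal."""
    return np.mean(np.abs(left - warp_right_to_left(right, disparity)))
```

In a full training loop the disparity would come from the depth-prediction network, and because the sampler is differentiable, the loss gradient can flow back into the network's weights; here a disparity closer to the true stereo shift simply yields a lower loss.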