Publication | Closed Access
General-Purpose Deep Point Cloud Feature Extractor
Citations: 43 · References: 53 · Year: 2018 · Venue: Unknown
Geometric Learning, Engineering, Machine Learning, Point Cloud Processing, Point Cloud, Depth Sensors, 3D Computer Vision, Image Analysis, Data Science, Pattern Recognition, Feature (Computer Vision), Computational Imaging, Robot Learning, Machine Vision, Computer Science, Autonomous Driving, Deep Learning, 3D Object Recognition, Computer Vision, Graph 3D
Depth sensors used in autonomous driving and gaming systems often report back 3D point clouds. The lack of structure in this sensor output prevents these systems from taking advantage of recent advances in convolutional neural networks, which depend on traditional filtering and pooling operations. Analogous to image-based convolutional architectures, recently introduced graph-based architectures afford similar filtering and pooling operations on arbitrary graphs. We adapt these graph-based methods to 3D point clouds to introduce a generic vector representation of 3D graphs, which we call Graph 3D (G3D). We believe we are the first to use large-scale transfer learning on 3D point cloud data, and we demonstrate the discriminative power of our salient latent representation of 3D point clouds on unseen test sets. By using our G3D network (G3DNet) as a feature extractor and pairing G3D feature vectors with a standard classifier, we achieve the best accuracy for a graph network on ModelNet10 (93.1%) and ModelNet40 (91.7%), and performance comparable to other methods on the Sydney Urban Objects dataset. This general-purpose feature extractor can be used as an off-the-shelf component in other 3D scene understanding or object tracking work.
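The pipeline the abstract describes (turn a point cloud into a graph, extract a fixed-length feature vector, then train a standard classifier on those vectors) can be sketched as follows. This is a minimal illustration, not the paper's G3DNet: `extract_features` here is a hypothetical stand-in that pools simple k-nearest-neighbour graph statistics, and the toy sphere/cube data is invented for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def knn_graph(points, k=4):
    # Index of each point's k nearest neighbours (self excluded).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def extract_features(points, k=4):
    # Hypothetical stand-in for a learned graph feature extractor:
    # pool simple statistics of the k-NN graph into one fixed-length vector.
    idx = knn_graph(points, k)
    edges = points[idx] - points[:, None, :]   # (N, k, 3) edge offsets
    radii = np.linalg.norm(points, axis=1)
    return np.concatenate([
        edges.mean(axis=(0, 1)),               # mean edge offset
        edges.std(axis=(0, 1)),                # local neighbourhood spread
        points.mean(axis=0),                   # centroid
        points.std(axis=0),                    # global extent
        [radii.mean(), radii.std()],           # radial shape statistics
    ])

# Toy data: unit spheres vs. solid cubes (both N x 3 point clouds).
rng = np.random.default_rng(0)

def sphere(n=64):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def cube(n=64):
    return rng.uniform(-1.0, 1.0, size=(n, 3))

X = np.array([extract_features(sphere()) for _ in range(20)]
             + [extract_features(cube()) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

# "Standard classifier" on top of the extracted feature vectors.
clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print(clf.score(X, y))
```

The design point mirrors the abstract: once the extractor maps an unordered, variable-size point cloud to a fixed-length vector, any off-the-shelf classifier (here an SVM) can be swapped in on top.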