Publication | Open Access
3D ShapeNets: A deep representation for volumetric shapes
Citations: 4.5K
References: 31
Year: 2015
Venue: CVPR 2015
Geometric Learning · Engineering · Machine Learning · Deep Learning Model · 3D Computer Vision · Image Analysis · Data Science · Pattern Recognition · Depth Maps · Robot Learning · Computational Geometry · Shape Representation · Geometric Modeling · Machine Vision · Geometric 3D · Deep Representation · Computer Science · Deep Learning · 3D Object Recognition · Computer Vision · 3D Vision · Natural Sciences · Scene Modeling
Summary
3D shape is a crucial but underutilized cue in computer vision, and the advent of inexpensive 2.5D depth sensors such as the Microsoft Kinect has heightened the need for robust 3D shape representations, especially for recovering full 3D shapes from 2.5D depth maps. The authors represent a geometric 3D shape as a probability distribution over binary variables on a 3D voxel grid using a Convolutional Deep Belief Network. Their 3D ShapeNets model learns the distribution of complex 3D shapes across categories and poses from raw CAD data, discovers hierarchical part representations, and is trained on the large-scale ModelNet CAD dataset. The representation enables joint object recognition and shape completion from 2.5D depth maps, supports active recognition via view planning, and achieves significant performance gains over the prior state of the art across multiple tasks.
Abstract
3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet, a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
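The representation underlying the abstract is a binary occupancy indicator on a 3D voxel grid: each cell is 1 if the shape's surface passes through it, 0 otherwise. The sketch below shows that voxelization step for a surface point cloud; the `voxelize` helper and the 30×30×30 resolution are illustrative assumptions for this page, not the authors' released code.

```python
import numpy as np

def voxelize(points, grid_size=30):
    """Map an (N, 3) surface point cloud onto a binary occupancy grid.

    Illustrative sketch: normalizes the cloud to the unit cube, then
    marks each cell containing at least one point as occupied.
    """
    pts = np.asarray(points, dtype=float)
    # Normalize each axis into [0, 1]; guard against degenerate (flat) axes.
    mins = pts.min(axis=0)
    spans = pts.max(axis=0) - mins
    spans[spans == 0] = 1.0
    unit = (pts - mins) / spans
    # Quantize to integer cell indices, clamping the max coordinate into range.
    idx = np.minimum((unit * grid_size).astype(int), grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A generative model such as the Convolutional Deep Belief Network described in the abstract is then trained on these binary grids, so that an incomplete grid (e.g. voxels observed from a single 2.5D depth map) can be completed by sampling the unobserved voxels from the learned distribution.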