Publication | Open Access
Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation
Year: 2021 | Citations: 29 | References: 40
Keywords: Engineering, Machine Learning, Computer-aided Design, Unsupervised Volume Segmentation, 3D Computer Vision, Image Analysis, Differentiable Rendering, Data Science, Pattern Recognition, Computational Geometry, NeRF Decoder, Geometric Modeling, Machine Vision, Deep Learning, Medical Image Computing, 3D Object Recognition, Volume Rendering, Computer Vision, Natural Sciences, Scene Understanding, Neural Radiance Fields, 3D Reconstruction, Scene Modeling
We present ObSuRF, a method which turns a single image of a scene into a 3D model represented as a set of Neural Radiance Fields (NeRFs), with each NeRF corresponding to a different object. A single forward pass of an encoder network outputs a set of latent vectors describing the objects in the scene. These vectors are used independently to condition a NeRF decoder, defining the geometry and appearance of each object. We make learning more computationally efficient by deriving a novel loss, which allows training NeRFs on RGB-D inputs without explicit ray marching. After confirming that the model performs on par with or better than the state of the art on three 2D image segmentation benchmarks, we apply it to two multi-object 3D datasets: a multiview version of CLEVR, and a novel dataset in which scenes are populated by ShapeNet models. We find that after training ObSuRF on RGB-D views of training scenes, it is capable not only of recovering the 3D geometry of a scene depicted in a single input image, but also of segmenting it into objects, despite receiving no supervision in that regard.
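The per-object conditioning described in the abstract can be sketched as follows: a shared decoder maps a 3D point plus one object's latent vector to a density and a color, and the scene is composed by summing densities and mixing colors by density weight (a common convention for compositional NeRFs). All dimensions, weights, and function names below are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3D point, latent size 8, hidden 32.
POINT_DIM, LATENT_DIM, HIDDEN = 3, 8, 32

# One shared decoder MLP; random weights stand in for trained parameters.
W1 = rng.normal(0, 0.1, (POINT_DIM + LATENT_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, 4))  # outputs: density logit + RGB

def decode(point, latent):
    """NeRF-style decoder: (3D point, object latent) -> (density, color)."""
    h = np.tanh(np.concatenate([point, latent]) @ W1)
    out = h @ W2
    sigma = np.logaddexp(0.0, out[0])        # softplus keeps density >= 0
    color = 1.0 / (1.0 + np.exp(-out[1:]))   # sigmoid keeps RGB in [0, 1]
    return sigma, color

def compose(point, latents):
    """Scene density is the sum of per-object densities; scene color is the
    density-weighted mixture of per-object colors."""
    sigmas, colors = zip(*(decode(point, z) for z in latents))
    sigma = sum(sigmas)
    color = sum(s * c for s, c in zip(sigmas, colors)) / max(sigma, 1e-8)
    return sigma, color

# An encoder would emit one latent per object; here, two random placeholders.
latents = [rng.normal(size=LATENT_DIM) for _ in range(2)]
sigma, color = compose(np.array([0.1, -0.2, 0.5]), latents)
```

Because each latent conditions the decoder independently, querying a single object's NeRF in isolation (by passing only its latent) directly yields that object's geometry, which is what makes the per-object segmentation fall out of the representation.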