Concepedia

TLDR

The growing demand for scalable 3D content creation in virtual worlds is unmet by existing generative models, which lack geometric detail, are restricted to simple topology, omit texture support, or rely on neural renderers, limiting their usability in standard 3D software. The study introduces GET3D, a generative model that produces explicit textured 3D meshes directly consumable by rendering engines. GET3D is trained on large collections of 2D images by combining differentiable surface modeling, differentiable rendering, and 2D GAN techniques. It generates high-quality textured meshes for cars, chairs, animals, motorbikes, human characters, and buildings, achieving significant improvements over previous methods.

Abstract

As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topology they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D Generative Adversarial Networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.
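
To make the training signal flow concrete, below is a minimal PyTorch sketch of the pattern the abstract describes: a generator emits explicit 3D geometry and texture, a differentiable renderer turns them into 2D images, and a 2D GAN discriminator supplies the only supervision. All names here (LatentMeshGenerator, soft_render) are hypothetical stand-ins; the actual GET3D uses DMTet-style differentiable surface extraction, a proper differentiable rasterizer, and a StyleGAN2-class discriminator, none of which are reproduced here.

```python
# Hedged sketch, not GET3D itself: shows only how adversarial gradients from
# 2D images reach explicit 3D geometry and texture through a renderer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMeshGenerator(nn.Module):
    """Maps a latent code to explicit geometry (vertex positions) and texture
    (per-vertex RGB). Real GET3D predicts an SDF plus a texture field instead."""
    def __init__(self, z_dim=64, n_verts=256):
        super().__init__()
        self.n_verts = n_verts
        self.geom = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                  nn.Linear(256, n_verts * 3))
        self.tex = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_verts * 3))

    def forward(self, z):
        verts = torch.tanh(self.geom(z)).view(-1, self.n_verts, 3)      # in [-1,1]^3
        colors = torch.sigmoid(self.tex(z)).view(-1, self.n_verts, 3)   # RGB in [0,1]
        return verts, colors

def soft_render(verts, colors, res=32, sigma=0.05):
    """Crude differentiable 'renderer': orthographic Gaussian splatting of
    colored vertices onto an image grid. A stand-in for a real differentiable
    rasterizer; it keeps the pipeline end-to-end differentiable."""
    B = verts.shape[0]
    ys = torch.linspace(-1, 1, res, device=verts.device)
    gy, gx = torch.meshgrid(ys, ys, indexing="ij")
    grid = torch.stack([gx, gy], -1).view(1, res * res, 1, 2)           # (1,P,1,2)
    xy = verts[..., :2].unsqueeze(1)                                    # (B,1,V,2)
    w = torch.exp(-((grid - xy) ** 2).sum(-1) / (2 * sigma ** 2))       # (B,P,V)
    img = (w @ colors) / (w.sum(-1, keepdim=True) + 1e-6)               # (B,P,3)
    return img.view(B, res, res, 3).permute(0, 3, 1, 2)                 # (B,3,H,W)

# Plain 2D discriminator: it never sees 3D data, only rendered/real images.
disc = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Flatten(), nn.Linear(64 * 8 * 8, 1))

gen = LatentMeshGenerator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

real_images = torch.rand(8, 3, 32, 32)   # stand-in for a batch of real 2D photos

# One adversarial step with the standard non-saturating GAN losses.
z = torch.randn(8, 64)
fake = soft_render(*gen(z))

d_loss = (F.softplus(disc(fake.detach())) + F.softplus(-disc(real_images))).mean()
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

g_loss = F.softplus(-disc(fake)).mean()
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The design point the sketch illustrates is the abstract's central bridge: because the render step is differentiable, a purely 2D adversarial loss can train a generator whose output is an explicit 3D asset (vertices and colors here; a textured mesh in GET3D), so no 3D ground-truth supervision is needed.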