Concepedia

TLDR

Embedding 3D morphable basis functions into deep neural networks promises powerful models, yet strong regularization is needed to resolve learning ambiguities, limiting the fidelity of face models. The study proposes learning auxiliary proxies to bypass heavy regularization and enhance detailed shape and albedo representation. A dual‑pathway network architecture is introduced to balance global and local modeling, easing the learning process. The resulting model outperforms linear and prior nonlinear counterparts, achieving state‑of‑the‑art 3D face reconstruction by optimizing latent representations alone.

Abstract

Embedding 3D morphable basis functions into deep neural networks opens great potential for models with better representation power. However, faithfully learning those models from an image collection requires strong regularization to overcome ambiguities in the learning process. This critically prevents us from learning the high-fidelity face models needed to represent face images in fine detail. To address this problem, this paper presents a novel approach that learns additional proxies, both as a means to side-step strong regularization and as leverage to promote detailed shape and albedo. To ease the learning, we also propose a dual-pathway network, a carefully designed architecture that balances global and local-based models. By improving the nonlinear 3D morphable model in both learning objective and network architecture, we present a model that captures a higher level of detail than linear or previous nonlinear counterparts. As a result, our model achieves state-of-the-art performance on 3D face reconstruction by solely optimizing latent representations.
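The fitting idea in the last sentence can be sketched in miniature: keep a decoder fixed and recover a face by gradient descent on the latent code alone. The toy decoder below is an assumption for illustration only (it is not the paper's architecture or code): a "global" linear pathway plus a small tanh-squashed "local" pathway, summed, standing in for the dual-pathway nonlinear 3DMM. All names (`W_GLOBAL`, `fit_latent`, etc.) are hypothetical.

```python
# Toy sketch (assumption: NOT the paper's actual model) of reconstruction by
# optimizing the latent representation alone, with decoder weights held fixed.
import math
import random

random.seed(0)
DIM_Z, DIM_S = 2, 6  # latent size, number of "vertex" coordinates

# Fixed decoder weights (pretrained, in the real model; random here).
W_GLOBAL = [[random.uniform(-1, 1) for _ in range(DIM_Z)] for _ in range(DIM_S)]
W_LOCAL = [[random.uniform(-0.2, 0.2) for _ in range(DIM_Z)] for _ in range(DIM_S)]

def decode(z):
    """Global linear pathway plus a tanh-squashed local pathway, summed."""
    glob = [sum(W_GLOBAL[i][j] * z[j] for j in range(DIM_Z)) for i in range(DIM_S)]
    loc = [math.tanh(sum(W_LOCAL[i][j] * z[j] for j in range(DIM_Z))) for i in range(DIM_S)]
    return [g + l for g, l in zip(glob, loc)]

def fit_latent(target, steps=1000, lr=0.05, eps=1e-5):
    """Recover z by minimizing squared error via finite-difference gradient descent."""
    z = [0.0] * DIM_Z

    def loss(zz):
        s = decode(zz)
        return sum((si - ti) ** 2 for si, ti in zip(s, target))

    for _ in range(steps):
        base = loss(z)
        grad = []
        for j in range(DIM_Z):
            zp = list(z)
            zp[j] += eps
            grad.append((loss(zp) - base) / eps)
        z = [zj - lr * gj for zj, gj in zip(z, grad)]
    return z, loss(z)

z_true = [0.7, -0.3]
target = decode(z_true)          # "observed" shape
z_fit, final_loss = fit_latent(target)
```

In the real setting the decoder is a trained nonlinear network and the loss is an image-space reconstruction error minimized with automatic differentiation; the structure of the loop, fixed decoder, gradients taken only with respect to the latent code, is the same.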
