Publication | Closed Access
MVF-Net: Multi-View 3D Face Morphable Model Regression
Citations: 123 · References: 36 · Year: 2019 · Venue: Unknown
Keywords: Engineering, Machine Learning, Biometrics, Human Face, Face Detection, 3D Computer Vision, Facial Recognition System, Image Analysis, Pattern Recognition, Facial Reconstruction, Geometric Modeling, Machine Vision, Other, 3DMM, Multi-view 3D, Deep Learning, Multiple Views, Computer Vision, 3D Vision, Morphable Models, Natural Sciences, 3D Reconstruction, Multi-view Geometry, Scene Modeling
Recent 3D Morphable Model reconstruction has mainly focused on single‑view inputs, which suffer from ambiguities due to insufficient 3D constraints. This work aims to recover a face’s 3D geometry from multiple facial images. We train a CNN to regress 3DMM parameters from multi‑view inputs, enforcing dense correspondences through a self‑supervised view‑alignment loss that uses a differentiable optical‑flow estimator to back‑propagate alignment errors between views. The proposed method yields more accurate 3D shapes, as evidenced by better alignment of synthetic projections and superior performance compared to existing 3DMM approaches in extensive experiments.
We address the problem of recovering the 3D geometry of a human face from a set of facial images in multiple views. While recent studies have shown impressive progress in 3D Morphable Model (3DMM) based facial reconstruction, the settings are mostly restricted to a single view. The single-view setting has an inherent drawback: the lack of reliable 3D constraints can cause unresolvable ambiguities. In this paper, we explore 3DMM-based shape recovery in a different setting, where a set of multi-view facial images is given as input. A novel approach is proposed to regress 3DMM parameters from multi-view inputs with an end-to-end trainable Convolutional Neural Network (CNN). Multi-view geometric constraints are incorporated into the network by establishing dense correspondences between different views, leveraging a novel self-supervised view alignment loss. The main ingredient of the view alignment loss is a differentiable dense optical flow estimator that can backpropagate the alignment errors between an input view and a synthetic rendering from another input view, which is projected to the target view through the 3D shape to be inferred. By minimizing the view alignment loss, better 3D shapes can be recovered, such that the synthetic projections from one view to another better align with the observed images. Extensive experiments demonstrate the superiority of the proposed method over other 3DMM methods.
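To make the view alignment loss concrete, the following is a minimal sketch of its core operation under simplifying assumptions: given a target view, a synthetic rendering projected from another view, and a dense flow field between them (in the paper, produced by a differentiable optical flow estimator), the rendering is backward-warped along the flow and a photometric error is accumulated. The function names `warp_with_flow` and `view_alignment_loss` are illustrative, not from the paper, and the real method operates on differentiable tensors rather than NumPy arrays.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp a grayscale image with a dense flow field.

    image: (H, W) array; flow: (H, W, 2) array of (dy, dx) offsets.
    Each output pixel samples image[y + dy, x + dx] bilinearly,
    with sample coordinates clamped to the image borders.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation from the four neighboring pixels.
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def view_alignment_loss(target_view, synthetic_view, flow):
    """Mean absolute photometric error after warping the synthetic
    rendering onto the target view along the estimated flow."""
    warped = warp_with_flow(synthetic_view, flow)
    return float(np.mean(np.abs(warped - target_view)))
```

If the inferred 3D shape is accurate, the projected rendering aligns with the target view, the residual flow shrinks, and the loss approaches zero; gradients of this error are what the paper backpropagates into the 3DMM parameter regressor.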