Publication | Closed Access
OTAvatar: One-Shot Talking Face Avatar with Controllable Tri-Plane Rendering
50 Citations · 27 References · Year: 2023 · Venue: Unknown
Keywords: Identity Code, Avatar Animation, Image Analysis, Machine Vision, Engineering, Differentiable Rendering, Facial Animation, Biometrics, Virtual Reality, Generalized Face Avatar, Controllable Tri-plane Rendering, Communication, Human Image Synthesis, Deep Learning, Virtual Human, Volume Rendering, Computer Vision, Synthetic Image Generation
Controllability, generalizability and efficiency are the major objectives in constructing face avatars represented by neural implicit fields. However, existing methods have not managed to accommodate all three requirements simultaneously. They either focus on static portraits, restricting the representation to a specific subject, or suffer from substantial computational cost, limiting their flexibility. In this paper, we propose One-shot Talking face Avatar (OTAvatar), which constructs face avatars with a generalized, controllable tri-plane rendering solution, so that each personalized avatar can be constructed from only one portrait as the reference. Specifically, OTAvatar first inverts a portrait image to a motion-free identity code. Second, the identity code and a motion code modulate an efficient CNN that generates a tri-plane-formulated volume, which encodes the subject in the desired motion. Finally, volume rendering produces an image from any view. The core of our solution is a novel decoupling-by-inverting strategy that disentangles identity and motion in the latent code via optimization-based inversion. Benefiting from the efficient tri-plane representation, we achieve controllable rendering of a generalized face avatar at 35 FPS on an A100. Experiments show promising cross-identity reenactment performance on subjects outside the training set, as well as better 3D consistency. The code is available at https://github.com/theEricMa/OTAvatar.
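The three-stage pipeline in the abstract (inversion to an identity code, modulated generation of a tri-plane volume, volume rendering) can be sketched in minimal form. This is an illustrative toy, not the paper's implementation: all dimensions, the random-projection "inverter", the nearest-neighbour plane sampling, and the compositing details are hypothetical stand-ins chosen only to show the data flow.

```python
import numpy as np

# Hypothetical dimensions for a minimal OTAvatar-style pipeline sketch.
ID_DIM, MOT_DIM = 16, 8        # identity / motion code sizes (illustrative)
PLANE_RES, PLANE_CH = 32, 4    # tri-plane spatial resolution and channels

rng = np.random.default_rng(0)

def invert_portrait(portrait):
    """Stand-in for optimization-based inversion: map a portrait image to a
    motion-free identity code (here just a fixed random projection)."""
    w = rng.standard_normal((portrait.size, ID_DIM))
    return portrait.reshape(-1) @ w / portrait.size

def generate_triplane(id_code, mot_code):
    """Stand-in for the modulated CNN: produce three feature planes
    (xy, xz, yz) conditioned on the identity and motion codes."""
    cond = np.concatenate([id_code, mot_code])
    w = rng.standard_normal((cond.size, 3 * PLANE_CH * PLANE_RES * PLANE_RES))
    return np.tanh(cond @ w).reshape(3, PLANE_CH, PLANE_RES, PLANE_RES)

def query_triplane(planes, pts):
    """Sample each plane at the projected 3D points (nearest-neighbour for
    brevity) and sum the three features, as in tri-plane rendering."""
    idx = np.clip(((pts + 1) / 2 * (PLANE_RES - 1)).astype(int),
                  0, PLANE_RES - 1)
    xy, xz, yz = planes
    feat = (xy[:, idx[:, 1], idx[:, 0]] +
            xz[:, idx[:, 2], idx[:, 0]] +
            yz[:, idx[:, 2], idx[:, 1]])
    return feat.T  # (N, PLANE_CH)

def volume_render(planes, n_rays=64, n_samples=16):
    """Toy volume rendering: alpha-composite features along sampled points."""
    pts = rng.uniform(-1, 1, (n_rays * n_samples, 3))
    feats = query_triplane(planes, pts).reshape(n_rays, n_samples, PLANE_CH)
    sigma = np.exp(feats[..., 0])                  # density from channel 0
    alpha = 1 - np.exp(-sigma / n_samples)
    trans = np.cumprod(1 - alpha + 1e-10, axis=1)  # accumulated transmittance
    weights = alpha * np.roll(trans, 1, axis=1)
    weights[:, 0] = alpha[:, 0]
    rgb = 1 / (1 + np.exp(-feats[..., 1:4]))       # colour from channels 1-3
    return (weights[..., None] * rgb).sum(axis=1)  # (n_rays, 3)

portrait = rng.random((8, 8, 3))                   # one reference portrait
id_code = invert_portrait(portrait)                # step 1: inversion
mot_code = rng.standard_normal(MOT_DIM)            # driving motion code
planes = generate_triplane(id_code, mot_code)      # step 2: tri-plane volume
pixels = volume_render(planes)                     # step 3: volume rendering
print(pixels.shape)                                # (64, 3)
```

Because the per-frame cost reduces to one CNN forward pass plus cheap 2D plane lookups, the tri-plane formulation avoids querying a full 3D MLP per sample, which is what makes real-time rates plausible.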