Published on Thu Apr 22 2021

Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation

Hang Zhou, Yasheng Sun, Wayne Wu, Chen Change Loy, Xiaogang Wang, Ziwei Liu
Abstract

While accurate lip synchronization has been achieved for arbitrary-subject audio-driven talking face generation, the problem of how to efficiently drive the head pose remains. Previous methods rely on pre-estimated structural information such as landmarks and 3D parameters, aiming to generate personalized rhythmic movements. However, the inaccuracy of such estimated information under extreme conditions would lead to degradation problems. In this paper, we propose a clean yet effective framework to generate pose-controllable talking faces. We operate on raw face images, using only a single photo as an identity reference. The key is to modularize audio-visual representations by devising an implicit low-dimension pose code. Substantially, both speech content and head pose information lie in a joint non-identity embedding space. While speech content information can be defined by learning the intrinsic synchronization between audio-visual modalities, we identify that a pose code will be complementarily learned in a modulated convolution-based reconstruction framework. Extensive experiments show that our method generates accurately lip-synced talking faces whose poses are controllable by other videos. Moreover, our model has multiple advanced capabilities including extreme view robustness and talking face frontalization. Code, models, and demo videos are available at https://hangz-nju-cuhk.github.io/projects/PC-AVS.
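For readers who want a concrete picture of the "modularized audio-visual representation" described above, the sketch below is a minimal, illustrative PyTorch rendition of the idea: an identity feature from the single reference photo, a speech-content feature, and an implicit low-dimensional pose code are concatenated into one joint latent that modulates the generator's convolutions. This is not the authors' implementation (their code is linked above); every module name and dimension is an assumption made for the example.

```python
# Minimal sketch (not PC-AVS code) of a modulated-convolution generator
# driven by identity + speech-content + low-dimensional pose latents.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedConv2d(nn.Module):
    """StyleGAN2-style modulated convolution: the latent rescales the
    kernel per sample (demodulation omitted for brevity)."""

    def __init__(self, in_ch, out_ch, latent_dim, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.to_scale = nn.Linear(latent_dim, in_ch)
        self.padding = kernel_size // 2

    def forward(self, x, latent):
        b, in_ch, h, w = x.shape
        scale = self.to_scale(latent).view(b, 1, in_ch, 1, 1)      # per-sample input scaling
        weight = (self.weight.unsqueeze(0) * scale).flatten(0, 1)  # (b*out_ch, in_ch, k, k)
        x = x.reshape(1, b * in_ch, h, w)                          # grouped-conv trick
        out = F.conv2d(x, weight, padding=self.padding, groups=b)
        return out.view(b, -1, h, w)


class PCAVSLikeGenerator(nn.Module):
    """Toy generator: identity + speech content + pose code -> face frame."""

    def __init__(self, id_dim=512, content_dim=512, pose_dim=12, img_ch=3):
        super().__init__()
        latent_dim = id_dim + content_dim + pose_dim               # joint latent space
        self.const = nn.Parameter(torch.randn(1, 256, 4, 4))       # learned constant input
        self.blocks = nn.ModuleList([
            ModulatedConv2d(256, 128, latent_dim),
            ModulatedConv2d(128, 64, latent_dim),
            ModulatedConv2d(64, 32, latent_dim),
        ])
        self.to_rgb = nn.Conv2d(32, img_ch, 1)

    def forward(self, id_feat, content_feat, pose_code):
        # The pose code is deliberately low-dimensional (12-d here, an
        # arbitrary choice) so it can act as a bottleneck that carries head
        # pose but neither lip shape nor identity.
        latent = torch.cat([id_feat, content_feat, pose_code], dim=1)
        x = self.const.expand(latent.size(0), -1, -1, -1)
        for block in self.blocks:
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
            x = F.leaky_relu(block(x, latent), 0.2)
        return torch.tanh(self.to_rgb(x))


# Usage with random features standing in for real encoder outputs.
gen = PCAVSLikeGenerator()
frame = gen(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 12))
print(frame.shape)  # torch.Size([2, 3, 32, 32])
```

The key design point the sketch tries to convey is the dimensionality gap: the pose code is orders of magnitude smaller than the content and identity features, which encourages the reconstruction objective to route only pose information through it.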

Mon Feb 24 2020
Computer Vision
Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose
Real-world talking faces are usually accompanied by natural head movement. Most existing talking face video generation methods only consider facial animation with a fixed head pose. To address this challenge, we reconstruct 3D face animation and re-render it into synthesized frames.
Sun Apr 25 2021
Computer Vision
3D-TalkEmo: Learning to Synthesize 3D Emotional Talking Head
Thu Jul 16 2020
Computer Vision
Talking-head Generation with Rhythmic Head Motion
Generating a lip-synced video while moving head naturally is challenging. We propose a 3D-aware generative network along with a hybrid embedding module and a non-linear composition module. Our approach achieves photo-realistic, and temporally coherent talking-head videos.
Mon Dec 17 2018
Computer Vision
Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning
Talking face generation aims to synthesize a face video with precise lip synchronization. Most existing methods mainly focus on disentangling the information in a single image. We propose a novel arbitrary talking face generation framework by discovering the audio-visual coherence.
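Both this entry and PC-AVS above rely on learning audio-visual synchronization, i.e. coherence between lip motion and speech. The snippet below is a minimal, generic InfoNCE-style sync loss offered only as an illustration of that objective; it is not code from either paper, and the encoders, dimensions, and temperature are assumptions.

```python
# Illustrative contrastive audio-visual synchronization loss: lip and audio
# features from the same time window are pulled together, mismatched
# windows in the batch serve as negatives.
import torch
import torch.nn.functional as F


def av_sync_loss(lip_feats, audio_feats, temperature=0.07):
    """InfoNCE-style loss over a batch of (lip, audio) feature pairs.

    lip_feats, audio_feats: (batch, dim) embeddings from a visual and an
    audio encoder; row i of each tensor comes from the same time window.
    """
    lip = F.normalize(lip_feats, dim=1)
    aud = F.normalize(audio_feats, dim=1)
    logits = lip @ aud.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(lip.size(0), device=lip.device)
    # Matched pairs sit on the diagonal; every other column is a negative.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Usage with random embeddings standing in for encoder outputs.
loss = av_sync_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```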
Tue Jun 08 2021
Computer Vision
LipSync3D: Data-Efficient Learning of Personalized 3D Talking Faces from Video using Pose and Lighting Normalization
We present a video-based learning framework for animating personalized 3D talking faces from audio. We introduce two training-time data normalizations that significantly improve data sample efficiency. Human ratings and objective metrics demonstrate that our method outperforms contemporary audio-driven video reenactment benchmarks.
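To make "pose normalization" concrete, the sketch below aligns per-frame facial landmarks to a canonical template with a least-squares similarity transform, a generic way to factor head pose out of the training data. It is only an illustration, not necessarily the normalization used in LipSync3D, and the three-point landmark template is made up.

```python
# Generic training-time pose normalization: map each frame's landmarks onto
# a fixed canonical template via a similarity (Procrustes) transform.
import torch


def similarity_align(landmarks, template):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping `landmarks` (N, 2) onto `template` (N, 2)."""
    mu_l, mu_t = landmarks.mean(0), template.mean(0)
    l_c, t_c = landmarks - mu_l, template - mu_t
    # Optimal rotation from the SVD of the cross-covariance matrix
    # (reflection check omitted for brevity).
    u, s, vh = torch.linalg.svd(t_c.t() @ l_c)
    rot = u @ vh
    scale = s.sum() / (l_c ** 2).sum()
    return lambda pts: scale * (pts - mu_l) @ rot.t() + mu_t


# Toy usage: align jittered, rescaled landmarks back onto the template.
template = torch.tensor([[0.3, 0.4], [0.7, 0.4], [0.5, 0.7]])   # eyes + mouth
frame_lms = template * 1.3 + 0.05 * torch.randn(3, 2) + 0.2     # scaled/shifted
warp = similarity_align(frame_lms, template)
print(warp(frame_lms))  # approximately equal to the template
```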
Wed Oct 02 2019
Machine Learning
Animating Face using Disentangled Audio Representations
All previous methods for audio-driven talking head generation assume the input audio to be clean with a neutral tone. One can easily break these systems by adding background noise to the audio or changing its emotional tone (e.g., to sad). To make talking head generation robust to such variations, we …