Chen Cao
United States
Joined Oct 8, 2011
Face tracking and animation research during my Ph.D.
Authentic Volumetric Avatars From a Phone Scan (SIGGRAPH 2022)
Paper: drive.google.com/file/d/1i4NJKAggS82wqMamCJ1OHRGgViuyoY6R/view?usp=sharing
Authors: Chen Cao, Tomas Simon, Jin Kyu Kim, Gabe Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Lombardi, Shih-en Wei, Danielle Belko, Shoou-i Yu, Yaser Sheikh, Jason Saragih
Abstract: Creating photorealistic avatars of existing people currently requires extensive person-specific data capture, a process that has been primarily employed in the VFX industry due to its complexity. Our work aims to address this drawback by relying only on a short mobile phone capture to obtain a drivable 3D head avatar that matches a person’s likeness faithfully. In contrast to existing approaches, our architecture avoids the complex task of directly modeling the entire manifold of human appearance, aiming instead to generate an avatar model that can be specialized to novel identities using only small amounts of data. The model dispenses with low-dimensional latent spaces that are commonly employed for hallucinating novel identities, and instead, uses a conditional representation that can extract person-specific information at multiple scales from a registered neutral phone scan. We achieve this through the use of a novel universal avatar prior that has been trained on high-quality multi-view video captures of facial performances of hundreds of human subjects. By fine-tuning the model using inverse rendering we achieve increased realism and personalize its range of motion. The output of our approach is not only a high-fidelity 3D head avatar that matches the person’s facial shape and appearance, but one that can also be driven using a jointly discovered shared global expression space with disentangled controls for gaze direction. Via a series of extensive experiments we demonstrate that our avatars are faithful representations of the subject’s likeness. Compared to other state-of-the-art methods for lightweight avatar creation, our approach exhibits superior visual quality and animateability.
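The abstract above outlines the architecture at a high level: an encoder extracts person-specific features at multiple scales from a registered neutral phone scan, a universal prior trained across many subjects decodes those features together with a shared expression code and disentangled gaze controls, and the result is personalized by inverse-rendering fine-tuning. The minimal PyTorch sketch below illustrates that data flow only; every class, shape, and name is a hypothetical stand-in, not the authors' released code.

# Hypothetical sketch of the pipeline described in the abstract.
# All names, shapes, and losses are illustrative assumptions.
import torch
import torch.nn as nn

class IdentityEncoder(nn.Module):
    """Extracts multi-scale identity features from a registered neutral
    scan (simplified here to a 3-channel neutral texture map)."""
    def __init__(self):
        super().__init__()
        self.levels = nn.ModuleList([
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # coarse scale
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # mid scale
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # fine scale
        ])

    def forward(self, neutral_scan):
        feats, x = [], neutral_scan
        for level in self.levels:
            x = torch.relu(level(x))
            feats.append(x)  # one conditioning tensor per scale
        return feats

class UniversalAvatarPrior(nn.Module):
    """Decoder trained across many subjects, conditioned on identity
    features plus a shared expression code and separate gaze controls."""
    def __init__(self, expr_dim=64, gaze_dim=2):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Linear(64 + expr_dim + gaze_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),  # stand-in for the rendered output
        )

    def forward(self, id_feats, expression, gaze):
        # Pool the finest identity features into a global code (a
        # simplification of the paper's multi-scale conditioning).
        id_code = id_feats[-1].mean(dim=(2, 3))
        return self.decode(torch.cat([id_code, expression, gaze], dim=-1))

# Personalization step: fine-tune the prior against the phone capture by
# comparing rendered output with captured frames (inverse rendering).
encoder, prior = IdentityEncoder(), UniversalAvatarPrior()
neutral_scan = torch.randn(1, 3, 256, 256)  # stand-in phone scan
captured = torch.randn(1, 3)                # stand-in target frame
expr, gaze = torch.zeros(1, 64), torch.zeros(1, 2)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(prior.parameters()), lr=1e-4)
for _ in range(3):  # a few illustrative fine-tuning steps
    pred = prior(encoder(neutral_scan), expr, gaze)
    loss = torch.nn.functional.mse_loss(pred, captured)
    opt.zero_grad(); loss.backward(); opt.step()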
Views: 40,191
Videos
Real-time 3D Neural Facial Animation from Binocular Video (SIGGRAPH 2021)
6K views · 3 years ago
We present a method for performing real-time facial animation of a 3D avatar from binocular video. Existing facial animation methods fail to automatically capture the precise and subtle facial motions needed to drive a photo-realistic 3D avatar "in the wild" (i.e., under variable illumination and camera noise). The novelty of our approach lies in a light-weight process for specializing a personalized face...
Real-Time High-Fidelity Facial Performance Capture (SIGGRAPH 2015)
1.5K views · 8 years ago
Real-Time High-Fidelity Facial Performance Capture
Chen Cao, Derek Bradley, Kun Zhou, Thabo Beeler
ACM Transactions on Graphics (SIGGRAPH 2015)
Real-Time Facial Animation with Image-based Dynamic Avatars (SIGGRAPH 2016)
14K views · 8 years ago
Real-Time Facial Animation with Image-based Dynamic Avatars
Chen Cao, Hongzhi Wu, Yanlin Weng, Tianjia Shao, Kun Zhou
ACM Transactions on Graphics (SIGGRAPH 2016)
Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation (SIGGRAPH 2014)
1.1K views · 8 years ago
Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation
Chen Cao, Qiming Hou, Kun Zhou
ACM Transactions on Graphics (SIGGRAPH 2014)
3D Shape Regression for Real-time Facial Animation (SIGGRAPH 2013)
2.7K views · 8 years ago
3D Shape Regression for Real-time Facial Animation
Chen Cao, Yanlin Weng, Stephen Lin, Kun Zhou
ACM Transactions on Graphics (SIGGRAPH 2013)
FaceWarehouse: A 3D Facial Expression Database for Visual Computing (TVCG 2014)
3.5K views · 8 years ago
FaceWarehouse: A 3D Facial Expression Database for Visual Computing
Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, Kun Zhou
IEEE Transactions on Visualization and Computer Graphics, 2014