[SIGGRAPH Asia'24] GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars

  • Published on Jan 15, 2025

Comments • 2

  • @steves5476 · several months ago

    Very nice results! Questions:
    -Do you initialize the density of Gaussians according to perceptual importance? E.g., placing a larger number of smaller Gaussians initially around the eyes, mouth, teeth, and lips? (Could be achieved by making the surface area on the UV map for these regions larger.)
    -Have you considered using something like FLAME for the coarse mesh? I think there are decent polygonal-mesh-based head-tracking approaches that could be used. If they can track the lips, teeth, and tongue, that could help reduce the blurring that occurs around those regions during sudden movements.

    • @kartikteotia2227 · several months ago

      Thanks for your interest!
      1) Yes, our UV map has been custom-designed to place more 3D Gaussians in the mouth interior. Since the 3D Gaussians can move around thanks to the fine-deformation step in our approach, regions requiring more attention can "grab" 3D Gaussians from nearby regions that do not demand higher capacity (see the rough sketch below).
      2) We do in fact use FLAME tracking to regularize our coarse tracking results as part of training (see the second sketch below). There may be modifications to existing open-source approaches that would allow tracking part of the mouth interior, but I am not aware of a public version that can track the mouth interior, including the tongue.
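
      To make 1) more concrete, here is a rough Python sketch of UV-area-weighted initialization (simplified for illustration, not our actual implementation; `verts`, `faces`, and `uvs` are placeholder inputs). Gaussian means are sampled on the mesh with per-face counts proportional to each face's UV area, so regions that are allotted more UV space (e.g. the mouth interior) receive more Gaussians:

      ```python
      import numpy as np

      def uv_area(uvs):
          """Area of each UV triangle; `uvs` has shape (F, 3, 2)."""
          a, b, c = uvs[:, 0], uvs[:, 1], uvs[:, 2]
          return 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                              - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))

      def init_gaussian_means(verts, faces, uvs, n_gaussians=100_000, seed=0):
          """Sample Gaussian means on the mesh, with per-face counts proportional to UV area."""
          rng = np.random.default_rng(seed)
          w = uv_area(uvs)
          probs = w / w.sum()                      # faces with larger UV area get more samples
          face_ids = rng.choice(len(faces), size=n_gaussians, p=probs)
          # Uniform barycentric coordinates inside each selected triangle.
          u, v = rng.random(n_gaussians), rng.random(n_gaussians)
          flip = u + v > 1.0
          u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
          tri = verts[faces[face_ids]]             # (N, 3, 3) triangle corners
          return ((1.0 - u - v)[:, None] * tri[:, 0]
                  + u[:, None] * tri[:, 1]
                  + v[:, None] * tri[:, 2])
      ```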
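
      And a minimal sketch of the kind of regularizer meant in 2) (again only illustrative; the exact loss, vertex correspondence, and weighting used in the paper may differ):

      ```python
      import torch

      def flame_regularization(coarse_verts: torch.Tensor,
                               flame_verts: torch.Tensor,
                               weight: float = 1e-2) -> torch.Tensor:
          """Mean squared distance between coarse-mesh vertices and a per-frame FLAME fit.

          Assumes both tensors have shape (V, 3) and are in vertex correspondence.
          """
          return weight * ((coarse_verts - flame_verts) ** 2).sum(dim=-1).mean()
      ```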