Learning 3D Human Pose Estimation from Dozens of Datasets by Bridging Skeleton Formats (WACV'23)

  • Published Sep 4, 2024
  • István Sárándi, Alexander Hermans, Bastian Leibe (2023). Learning 3D Human Pose Estimation from Dozens of Datasets using a Geometry-Aware Autoencoder to Bridge Between Skeleton Formats. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
    Project page: istvansarandi....
    Paper: arxiv.org/abs/...
    More qualitative results: • [Qualitative Results] ...
    Abstract: Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
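    The core ACAE idea from the abstract can be illustrated with a minimal sketch: both the encoder and decoder are affine combinations of 3D points, i.e., weighted sums whose weights sum to 1, which makes the latent points transform consistently with the skeleton under translation. The matrix sizes and weight parameterization below are illustrative assumptions, not the paper's exact configuration.

    ```python
    import numpy as np

    def affine_weights(raw):
        # Normalize each row to sum to 1 (affine combination).
        # This makes the mapping equivariant to translating the points.
        return raw / raw.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    n_joints, n_latent = 17, 8  # hypothetical skeleton and latent sizes

    # In the actual method these weights are learned; random here for illustration.
    W_enc = affine_weights(rng.random((n_latent, n_joints)))
    W_dec = affine_weights(rng.random((n_joints, n_latent)))

    joints = rng.standard_normal((n_joints, 3))  # one 3D skeleton
    latent = W_enc @ joints  # latent 3D points (dimensionality-reduced landmarks)
    recon = W_dec @ latent   # reconstructed skeleton

    # Translation equivariance: shifting the input shifts the latents identically.
    t = np.array([1.0, 2.0, 3.0])
    assert np.allclose(W_enc @ (joints + t), latent + t)
    ```

    In the paper, one such latent point set is shared across skeleton formats, so reconstructing each format's joints from the common latent points provides the consistency regularization mentioned above.
    
    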

Comments • 5

  • @alexanderokak5112 · 2 months ago · +1

    Very good work

  • @user-er5bx4ev3f · 8 months ago · +1

    Really impressive result! Is this model able to perform in real-time?

    • @Istvan_Sarandi · several months ago

      Yes, depending on the number of people. Up to about 3 people, the model with the smaller backbone can run in real-time. The large backbone version should be around real-time for a single person or slightly below, depending on how beefy the GPU is.

  • @Noah-gw9cg · 1 year ago · +2

    Great stuff! Would something similar work for 2D keypoints?

    • @Istvan_Sarandi · several months ago

      In principle yes, but the inconsistent predictions for different skeletons were a more prominent effect for the depth coordinates. The XY coordinates were quite good even in the baseline of learning separate output heads for each skeleton.