Learning Disentangled Representation for Robust Person Re-identification

  • Published on Oct 14, 2024
  • Presenter: Chanho Eom (엄찬호), Ph.D. candidate researcher, Yonsei University
    Presentation date: January 2020
    For more videos, see NAVER Engineering TV: tv.naver.com/n...
    ○ Overview
    We address the problem of person re-identification (reID), that is, retrieving person images from a large dataset, given a query image of the person of interest. The key challenge is to learn person representations robust to intra-class variations, as different persons can share the same attributes, and the same person's appearance looks different across viewpoint changes. Recent reID methods focus on learning discriminative features that are robust to only a particular factor of variation (e.g., human pose), which requires corresponding supervisory signals (e.g., pose annotations). To tackle this problem, we propose to disentangle identity-related and -unrelated features from person images. Identity-related features contain information useful for specifying a particular person (e.g., clothing), while identity-unrelated ones hold other factors (e.g., human pose, scale changes). To this end, we introduce a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN), that factorizes these features using identification labels without any auxiliary information. We also propose an identity shuffling technique to regularize the disentangled features. Experimental results demonstrate the effectiveness of IS-GAN, largely outperforming the state of the art on standard reID benchmarks including Market-1501, CUHK03, and DukeMTMC-reID. Our code and models will be available online at the time of publication.
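    The identity-shuffling idea described above can be illustrated with a small sketch. The snippet below is an illustrative reconstruction, not the authors' released IS-GAN code: the module names, feature dimensions, and plain L1 reconstruction losses are assumptions, and the adversarial and identification losses of the full model are omitted. It only shows the core regularization: for two images of the same person, swapping their identity-related features and still requiring the generator to reconstruct each input pushes identity information into one branch and everything else into the other.

```python
# Minimal sketch of identity shuffling (hypothetical names, not the official IS-GAN code).
# Two encoders split an image into identity-related and identity-unrelated features;
# a generator maps the concatenated features back to an image.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Tiny CNN encoder producing a flat feature vector (placeholder capacity)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Maps concatenated (identity-related + identity-unrelated) features to an image."""
    def __init__(self, feat_dim=128, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, 3 * img_size * img_size), nn.Tanh())
    def forward(self, f_related, f_unrelated):
        f = torch.cat([f_related, f_unrelated], dim=1)
        return self.net(f).view(-1, 3, self.img_size, self.img_size)

enc_related, enc_unrelated, gen = Encoder(), Encoder(), Generator()

# x_a, x_p: two images of the SAME identity (e.g., different poses/viewpoints).
x_a = torch.randn(8, 3, 64, 64)
x_p = torch.randn(8, 3, 64, 64)

f_ra, f_ua = enc_related(x_a), enc_unrelated(x_a)
f_rp, f_up = enc_related(x_p), enc_unrelated(x_p)

# Self-reconstruction: each image from its own features.
rec_a, rec_p = gen(f_ra, f_ua), gen(f_rp, f_up)

# Identity shuffling: swap identity-related features within the same-identity pair.
# The shuffled outputs should still reconstruct the corresponding inputs,
# which regularizes the identity-related / -unrelated split.
shuf_a, shuf_p = gen(f_rp, f_ua), gen(f_ra, f_up)

loss = sum(nn.functional.l1_loss(r, x)
           for r, x in [(rec_a, x_a), (rec_p, x_p), (shuf_a, x_a), (shuf_p, x_p)])
loss.backward()  # adversarial and identification losses of the full model are omitted here
```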
    ○ Contents
    1. Introduction to “Person re-identification”
    2. Related works
    3. Motivation of IS-GAN
    4. Framework of IS-GAN
    5. Results
    ○ Prerequisites
    Deep learning, Generative Adversarial Nets (GAN)
    ○ Presentation slides
    www.slideshare...
