Matthew Tancik: Neural Radiance Fields for View Synthesis

  • Published Jan 2, 2025

Comments • 7

  • @animeshkarnewar3 • 4 years ago • +2

    Love the way the presentation is arranged. Didn't get distracted at all. Awesome job!

  • @alanmelling3153 • 4 years ago • +2

    Great tour of recent research, and brilliant work with NeRF.

  • @animeshkarnewar3 • 4 years ago • +2

    The analysis in the second project is really insightful, although I think the NTK theory does not apply when the loss (objective) function is not some variant of the L2 loss. For instance, if you are minimizing a discriminator's adversarial loss or a VGG perceptual loss, then the same neural network without these mappings is able to represent the high-frequency details that we care about. It would be really interesting to consider this effect in the analysis too. I look forward to the follow-up work. Thanks a lot for sharing the presentation.
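    [Editor's note: the "mappings" the comment refers to are the random Fourier feature mappings from Tancik et al.'s second project. A minimal NumPy sketch of that mapping, γ(v) = [cos(2πBv), sin(2πBv)] with a Gaussian frequency matrix B; the function name, frequency count, and scale below are illustrative assumptions, not the paper's exact configuration.]

    ```python
    import numpy as np

    def fourier_features(v, B):
        """Map low-dimensional coordinates v to random Fourier features.

        gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)]
        v: (N, d) input coordinates; B: (m, d) Gaussian frequency matrix.
        Returns an (N, 2m) feature array fed to the MLP instead of raw coords.
        """
        proj = 2.0 * np.pi * v @ B.T  # (N, m) projected phases
        return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

    rng = np.random.default_rng(0)
    B = rng.normal(scale=10.0, size=(256, 2))  # scale sets the feature bandwidth (assumed value)
    coords = rng.uniform(size=(4, 2))          # e.g. 2D pixel coordinates in [0, 1)
    feats = fourier_features(coords, B)
    print(feats.shape)  # (4, 512)
    ```

    The scale of B controls how much high-frequency content the downstream MLP can fit under an L2 loss, which is the lever the NTK analysis in the talk is about.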

  • @benibachmann9274 • 4 years ago • +1

    Thank you! This is a really exciting paper, and I see a number of practical applications for it, for example, product visualization where no 3D model exists, especially if the inference time could be reduced to sub-second time frames. This approach could also lead to improved photogrammetry tools by reducing the number of photos needed and tackling challenges with reflective surfaces.

  • @Klarpimier • 3 months ago

    When do you think Zillow will incorporate this tech for things like house tours?

  • @randing7233 • 3 years ago • +2

    Is there a link to the slides?

  • @김인수-z2p • 3 years ago • +2

    Bookmark for myself 12:56