Introduction to Normalizing Flows (ECCV2020 Tutorial)

  • Published 20 Sep 2024

Comments • 28

  • @anselmud
    @anselmud 5 months ago +7

    The relevance of this 2020 tutorial went through the roof in 2024 after the recent release of Stable Diffusion 3 and its use of Flow Matching as an alternative to Diffusion.
    This is a very good building block for understanding Flow Matching, which is how I ended up here.
    It must have been strange for researchers working on Normalizing Flows at the time to witness the explosion of Gen AI through Diffusion models that were so close to what they were doing; it was like being narrowly missed by a nuclear bomb.
    But good research stands the test of time, and the author's predictions on Continuous-time Normalizing Flows and the line of research started by FFJORD were spot on.
    Kudos and thanks for putting this together, back in the day.
    I hope you resume posting videos like this!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +27

    This video deserves a million views. So clearly explained.

  • @nocomments_s
    @nocomments_s 3 years ago +1

    Amazing video. I will share it with my colleagues and friends; it deserves far more views than it has.

  • @prabhnoorsingh2104
    @prabhnoorsingh2104 3 years ago +5

    Wow! This has been so helpful. You deserve a medal, prof :)

  • @jackshi7613
    @jackshi7613 2 years ago +2

    Super useful. I had been looking for a video like this for a long time and finally found it. Great video, keep going!

    • @MarcusBrubaker
      @MarcusBrubaker  2 years ago

      Glad it helped! You may want to check out the more recent version of this tutorial here: th-cam.com/video/8XufsgG066A/w-d-xo.html It covers much of the same content, although a few things have been updated and refined.

  • @williamashbee
    @williamashbee 2 years ago +2

    Don't stop making videos; you are fantastic.

  • @cobaltl8557
    @cobaltl8557 1 year ago

    Thank you for making this excellent tutorial.

  • @derroitionman
    @derroitionman 3 years ago +4

    Great presentation, thanks for sharing it.

  • @huseyintemiz5249
    @huseyintemiz5249 3 years ago +5

    Nice tutorial.

  • @thijsvanweezel4669
    @thijsvanweezel4669 3 months ago +1

    At 47:00, the goal is to reduce dimensionality, but 4x4x1==2x2x4. How does that help?
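A quick NumPy sketch of the point raised above (an editorial illustration, not from the video): the squeeze at 47:00 is a lossless reshape, so by itself it does not reduce dimensionality. In RealNVP/Glow-style flows its purpose is to trade spatial resolution for channels, so that later coupling layers can split along the channel axis and multi-scale architectures can factor out half the channels.

```python
import numpy as np

def squeeze(x, factor=2):
    """Space-to-channels reshape, as in RealNVP/Glow-style flows."""
    h, w, c = x.shape
    x = x.reshape(h // factor, factor, w // factor, factor, c)
    x = x.transpose(0, 2, 1, 3, 4)  # group each factor x factor patch
    return x.reshape(h // factor, w // factor, factor * factor * c)

x = np.arange(16, dtype=float).reshape(4, 4, 1)
y = squeeze(x)
print(x.size, y.shape)  # 16 (2, 2, 4): same number of values, new shape
```

All 16 values survive the reshape; only the layout changes, which is exactly the commenter's observation.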

  • @mausci71
    @mausci71 3 years ago +2

    Loved your tutorial, Marcus.

  • @Gaetznaa
    @Gaetznaa 2 years ago +1

    Thanks for the video! Very clearly explained :)

  • @payamjomeyazdian1794
    @payamjomeyazdian1794 3 years ago +1

    Very nice slides and presentation.

  • @spyrosp.551
    @spyrosp.551 7 months ago +1

    My takeaway from this lecture is that "if d is small, that is not a big deal".

  • @avideepmukherjee9307
    @avideepmukherjee9307 3 years ago +5

    Can we get the slides, please?

  • @qichaoying4478
    @qichaoying4478 3 years ago +2

    Why is GLOW skipped?

  • @ibraheemmoosa
    @ibraheemmoosa 3 years ago +2

    I have a question. How does taking the cube root change a bimodal distribution into a unimodal distribution at 13:50?

    • @MarcusBrubaker
      @MarcusBrubaker  3 years ago

      It's hard to give a good intuitive explanation for how/why a cubic transform creates multi-modality. However, I can confirm that this is actually what happens in that particular example; those figures are the real result of transforming those distributions.

  • @hosseinrafipoor8784
    @hosseinrafipoor8784 2 years ago

    Thanks for the great explanation!

  • @aojing
    @aojing 6 months ago

    First, explain the name: what is "normalizing"?

  • @laurenpinschannels
    @laurenpinschannels 1 year ago +1

    He keeps saying "probabilistic graphical models" when he means "probabilistic generative models".

  • @CourtOfWinter
    @CourtOfWinter 1 year ago

    Around 19:30 you say that flow layers technically need to be diffeomorphisms, but is that actually the case? I don't see any reason why the inverse needs to be differentiable as well.

    • @MarcusBrubaker
      @MarcusBrubaker  1 year ago +1

      You need the flow to be differentiable in the normalizing direction in order to enable training and the computation of the Jacobian. Further, the Jacobian needs to be non-singular (non-zero determinant), and that implies (by the inverse function theorem) that the inverse is also differentiable.

    • @CourtOfWinter
      @CourtOfWinter 1 year ago

      @@MarcusBrubaker Ah, okay, makes sense. Thanks!
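The inverse-function-theorem argument in the reply above can be checked numerically. A minimal sketch (the scalar flow f(x) = x^3 + x and the bisection inverse are illustrative choices, not from the video): f is differentiable with f'(x) = 3x^2 + 1 > 0 everywhere (a nonsingular 1x1 Jacobian), so its inverse is also differentiable, with (f^{-1})'(y) = 1 / f'(f^{-1}(y)).

```python
# A strictly increasing flow: f'(x) = 3x^2 + 1 > 0, so the Jacobian
# is nonsingular everywhere and the inverse is differentiable.
def f(x):
    return x**3 + x

def f_prime(x):
    return 3 * x**2 + 1

def f_inverse(y, lo=-10.0, hi=10.0, iters=200):
    """Simple bisection; adequate for a monotone scalar flow."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y = 3.0
x = f_inverse(y)
eps = 1e-5
num_grad = (f_inverse(y + eps) - f_inverse(y - eps)) / (2 * eps)
# Numerical derivative of the inverse matches 1 / f'(x), as the
# inverse function theorem predicts.
print(abs(num_grad - 1.0 / f_prime(x)) < 1e-4)  # True
```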

  • @ff-fh8nh
    @ff-fh8nh 2 years ago

    In coupling flows: does the split step split x along the channel (feature) dimension or along other dimensions?

    • @MarcusBrubaker
      @MarcusBrubaker  2 years ago

      It can split the dimensions in any way. Traditionally, in image applications the split is along the channel dimension, but, e.g., this paper splits along spatial dimensions: proceedings.neurips.cc/paper/2020/hash/ecb9fe2fbb99c31f567e9823e884dbec-Abstract.html
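The split being discussed can be sketched with a minimal affine coupling layer in NumPy (an editorial illustration; the s/t "networks" here are toy stand-ins, not from any particular paper). Here the split is along the last (feature/channel) axis, but any fixed partition of the dimensions works:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))  # toy parameters for the s/t networks
b = rng.normal(size=2)

def s_t(x1):
    """Stand-in for the scale/translation networks (normally MLPs/CNNs)."""
    h = np.tanh(x1 @ W + b)
    return h, -h  # (log-scale s, translation t)

def coupling_forward(x):
    x1, x2 = np.split(x, 2, axis=-1)   # the split step
    s, t = s_t(x1)
    y2 = x2 * np.exp(s) + t            # only x2 is transformed
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y):
    y1, y2 = np.split(y, 2, axis=-1)
    s, t = s_t(y1)                     # x1 == y1, so s and t are recomputable
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

x = rng.normal(size=(5, 4))            # batch of 5 four-dimensional vectors
y = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x))  # True: exact inverse
```

Because x1 passes through unchanged, s and t can be recomputed at inversion time without inverting the networks themselves, which is what makes coupling layers cheap to invert regardless of how the dimensions are partitioned.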

  • @chainonsmanquants1630
    @chainonsmanquants1630 3 years ago

    Thanks