Lecture 14: Visualizing and Understanding

  • Published on Dec 18, 2024

Comments • 16

  • @thats_so_laven • 3 years ago • +18

    "So there's this great paper by Johnson et al" was comedy GOLD

  • @vitalispyskinas5595 • 1 year ago • +1

    For the Gram matrix (at 57:00), I don't see how the efficient reshaping method for computing it is the same as the initially presented way. The reshaped version suggests that G_i,j is the dot product between the H x W slice for filter i and the H x W slice for filter j. However, the initially presented way suggests that the diagonal of G contains products between two different elements of the same H x W slice, which is impossible in the efficient version.
    Is there a nuance to the reshaping that isn't covered in the lecture, or did I miss something?
    Also note that when I say H x W slice, I mean the activation map for one filter.

    • @vitalispyskinas5595 • 1 year ago

      Looking at the slides, I think it's the initial presentation that more clearly illustrates what is happening; please let me know if that's not true.

    • @mrfli24 • 1 year ago

      The efficient part is just that the computation is done as one matrix multiplication; otherwise the two are the same.

    • @vitalispyskinas5595 • 1 year ago

      Yeah, I get that, but the matrix multiplication on the slide looks like a simplification. Let F be C x HW. Every entry of F*(F^T) is a sum of H*W terms, whereas if we did it pairwise each entry would be a sum of roughly (HW)^2 terms. Maybe it's an interesting trick, like how variance can be calculated from the sum of all squared pairwise differences; I can't be bothered to derive that though lol. @@mrfli24

    • @mrfli24 • 1 year ago

      @@vitalispyskinas5595 Okay, let me explain why these computations are the same. The first method computes the sum, over the H x W spatial positions, of the outer products v*v^T, where v is the C-dimensional feature vector at one position. That sum equals V*V^T, where V is the matrix whose columns are the vectors v (not a trick, just a linear algebra fact). That matrix V, of size (C, H x W), is the flattened matrix F in the slide (see the numerical check after this thread).

    • @vitalispyskinas5595 • 1 year ago

      @@mrfli24 Okay, but even though you can view a matrix multiplication as a sum of outer products, v_1 is never outer-producted with v_2. There are only ever outer products of the form v_k*v_k^T, and never v_k*v_j^T for k != j.

  • @Evv_McMuffin • 2 years ago • +1

    I said WOW out loud like three times during this lecture... WOL

  • @eurekad2070 • 3 years ago • +3

    1:01:00 "so now some really brilliant person had an idea" XD

  • @kennethnavarro661 • 2 years ago

    I'm having a hard time understanding how 'fast neural style transfer' differs from the paper it improves upon, which optimizes a loss over both content and style (but is much slower).

    • @kennethnavarro661 • 2 years ago

      Like you said, just stick a feed-forward network in there to do it in a single pass, but I'm not sure what you mean by that (a rough sketch of the two approaches follows this thread).
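A rough sketch of the distinction being asked about, assuming placeholder losses computed on raw pixels rather than on pretrained-CNN features (the names gram, content_loss, style_loss, and the net argument are illustrative, not the lecture's code): the slow approach optimizes the output image's pixels by gradient descent on the combined loss, while the fast approach trains a feed-forward network once per style so that stylization later takes a single forward pass.

```python
# Sketch of slow (optimization-based) vs. fast (feed-forward) style transfer.
# Placeholder losses use raw pixels; a real implementation would compute them
# on features from a pretrained CNN (e.g. VGG), as described in the lecture.
import torch

def gram(x):
    # x: (C, H, W) -> normalized Gram matrix of shape (C, C)
    C, H, W = x.shape
    F = x.reshape(C, H * W)
    return (F @ F.t()) / (C * H * W)

def content_loss(out, content):
    return ((out - content) ** 2).mean()

def style_loss(out, style):
    return ((gram(out) - gram(style)) ** 2).mean()

def slow_style_transfer(content, style, steps=200, lr=1e-2):
    # Gatys-style: optimize the pixels of the output image directly (slow).
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        loss = content_loss(img, content) + style_loss(img, style)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return img.detach()

def train_fast_style(net, content_images, style, lr=1e-3):
    # Fast style transfer: train a feed-forward net once per style image;
    # afterwards, stylizing any image is just net(image) -- one pass.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for content in content_images:
        out = net(content)
        loss = content_loss(out, content) + style_loss(out, style)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```

At test time the fast version needs only a single forward pass, net(new_content), which is why it is so much faster than re-running the optimization loop for every new image.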

  • @harshdeepsingh3872 • 4 months ago • +1

    WOW