Deep Learning (CS7015): Lec 10.4 Continuous bag of words model

  • Published 31 Oct 2024

Comments • 29

  • @tanmaysinha987 · 5 years ago · +22

    Dr. Mitesh, you are one of the finest lecturers. I have gone through CS224d, but you are much better.

  • @namanrastogi4501 · 3 years ago · +12

    W_word should have dimensions |V| × k, because the hidden layer has dimension k × 1, and producing an output layer of dimension |V| × 1 then requires W_word to be |V| × k. Therefore it should be the j-th column of W_context and the i-th row of W_word, not the i-th column of W_word. So when taking word embeddings from such a model, the columns of W_context represent the word vectors and the rows of W_word represent the word vectors.
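
    To make the dimensions concrete, here is a minimal NumPy sketch following this commenter's convention (the slides use the transposed shape for W_word); the sizes are made up:

    ```python
    import numpy as np

    V, k = 10, 4                      # assumed vocabulary size |V| and embedding size k

    W_context = np.random.randn(k, V) # j-th COLUMN is the context vector of word j
    W_word = np.random.randn(V, k)    # i-th ROW is the word vector of word i

    j = 3                             # index of a context word
    x = np.zeros(V); x[j] = 1.0       # one-hot input

    h = W_context @ x                 # hidden layer, shape (k,) == j-th column of W_context
    scores = W_word @ h               # shape (|V|,): dot of each ROW of W_word with h
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the vocabulary

    assert h.shape == (k,) and probs.shape == (V,)
    ```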

  • @pratyushsingh2809 · 4 years ago · +5

    Thanks for the lecture. It really helped me understand the concepts behind word2vec.

  • @koushik7604 · 4 years ago · +4

    The best explanation I have ever had.

  • @abhisekmukherjee1811 · 5 years ago · +8

    Has anybody else noticed that the corpus is the script of Interstellar?

  • @batteryonfire · 4 years ago · +2

    Absolutely amazing clarity.

  • @siddharthsvnit · 5 years ago · +15

    15:31: it should be the i-th row of the matrix when multiplying with the vector.

  • @abhisekmukherjee1811 · 5 years ago · +1

    One can intuitively say that the context and the word will have similar vectors, without going into too much mathematics. Since the value of the softmax depends on the dot product of the context and word vectors, the numerator of the softmax is maximized when their cosine similarity is close to 1. As long as it is not close to 1, the network still has room for optimization. The side effect of continuing to optimize until it reaches the highest value is that the two vectors are forced to come closer to each other to maximize the numerator.

    • @mr_law886 · 4 years ago

      What do W_word and W_context contain?
      I've seen the previous lectures but haven't understood the use of W_word and W_context.
      Please explain.
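
      As an illustrative answer: both matrices are just learnable tables of word vectors, one used when a word appears as context (input) and one when it is the target (output). A minimal sketch with invented sizes, following the slide convention of column vectors:

      ```python
      import numpy as np

      V, k = 5, 3                        # hypothetical vocabulary and embedding sizes

      W_context = np.random.randn(k, V)  # column c = context vector u_c of word c
      W_word = np.random.randn(k, V)     # column w = target vector v_w of word w

      c, w = 1, 4
      u_c, v_w = W_context[:, c], W_word[:, w]

      # The model scores a (context, target) pair by their dot product and
      # normalizes with a softmax over all target words, so training pushes
      # u_c and v_w toward each other for observed pairs.
      scores = W_word.T @ u_c
      probs = np.exp(scores) / np.exp(scores).sum()
      ```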

  • @anujprasad001 · 3 years ago · +1

    Amazing explanation. The only thing is that I got confused near 32:27. As the title of the video suggests, this is a continuous bag of words model, but at the marked time it was stated that the order does not matter, which would make it just a simple BOW instead of CBOW. Could someone please provide an explanation? Thanks in advance.
    Overall the video was very clear.
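
    One way to see this: the "continuous" in CBOW refers to the continuous (dense, real-valued) word representations rather than to word order; within the window the context really is an unordered bag, because the hidden layer is a sum (or average) of context columns and addition is commutative. A minimal check with made-up indices:

    ```python
    import numpy as np

    V, k = 8, 4
    W_context = np.random.randn(k, V)

    context_a = [2, 5, 7]          # some window of context word indices
    context_b = [7, 2, 5]          # same words, different order

    h_a = sum(W_context[:, i] for i in context_a)
    h_b = sum(W_context[:, i] for i in context_b)

    assert np.allclose(h_a, h_b)   # identical hidden state: order is lost
    ```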

  • @madhulasathwikreddy5764 · 3 years ago · +2

    Shouldn't we apply an activation function to the middle layer?

  • @karthikmr416 · 1 year ago

    The selected column vector u_c of the matrix W_context is also an optimization parameter, right? So the gradient has to be computed for this term as well, isn't it?
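
    It is; u_c receives its own gradient during backpropagation, alongside W_word. A minimal sketch of both gradients for one (context, target) pair, with assumed shapes in the slide convention:

    ```python
    import numpy as np

    V, k = 6, 3
    W_context = np.random.randn(k, V)    # columns u_c (context vectors)
    W_word = np.random.randn(k, V)       # columns v_w (target vectors)

    c, t = 2, 4                          # context word index, true target index
    u_c = W_context[:, c]

    scores = W_word.T @ u_c
    probs = np.exp(scores - scores.max()); probs /= probs.sum()

    y = np.zeros(V); y[t] = 1.0
    d_scores = probs - y                     # grad of -log probs[t] w.r.t. scores

    grad_W_word = np.outer(u_c, d_scores)    # shape (k, V): every column updates
    grad_u_c = W_word @ d_scores             # u_c gets its own gradient too
    # an SGD step would be: W_context[:, c] -= lr * grad_u_c
    ```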

  • @anuragdathatreya1598 · 4 years ago · +1

    Please verify that at 15:31 it should be the i-th row, not the i-th column; otherwise the matrix multiplication does not make sense. Therefore it is the i-th row of W_word and the j-th column of W_context. If this is not the case, please reply with an explanation of where I'm wrong or what I'm missing.

    • @anuragdathatreya1598 · 4 years ago

      Unless the values in h are normalized; then it makes sense that the i-th column of W_word is the representation.

    • @sriharsha8802 · 4 years ago · +1

      The dimension of W_word is k × |V| here, so every target word is represented as a column vector, whereas the dimension of W_word was |V| × k in the previous slides.

    • @anuragdathatreya1598 · 4 years ago · +1

      @sriharsha8802 Thanks, Harsha. I've come to realize that and made a note in the margin of my notebook.

  • @nagithareddy1798 · 3 years ago

    How can the continuous bag of words model be implemented together with the kNN algorithm (in Python)?

  • @Pruthvikajaykumar · 2 years ago

    15:42: shouldn't it be the i-th row of W_word?

  • @debapratimdasdawn3672 · 5 years ago · +1

    Is it possible to train and test a word2vec model in a language other than English, such as Hindi or Urdu?

    • @abhisekmukherjee1811 · 5 years ago · +2

      Yes, any language can be properly vectorized as long as there is sufficient training data (novels, textbooks, etc.). The language here is irrelevant because we never look at the word itself: we merely assign some random weights to a word and try to optimize them so that its neighbour has a high probability in the output. The word in itself is irrelevant. You could potentially take a long list of related pictures (arranged so that their relation is maximized) and do the same thing to get a vector for a picture.
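
      For instance, an off-the-shelf CBOW implementation will happily train on any tokenized text; a minimal sketch using gensim (the two-sentence Hindi corpus is invented, and gensim 4.x parameter names are assumed):

      ```python
      from gensim.models import Word2Vec

      # Each sentence is a list of tokens; the model only sees token identities
      # and co-occurrence, so the script/language is irrelevant.
      sentences = [
          ["मैं", "घर", "जा", "रहा", "हूँ"],
          ["वह", "घर", "पर", "है"],
      ]

      model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)  # sg=0 selects CBOW
      print(model.wv["घर"].shape)  # (50,) -- learned vector for "घर" ("house")
      ```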

  • @raufbhat8096 · 4 years ago

    Can't we just add the one-hot representations of the input words and then do the forward pass, rather than taking the i-th and j-th columns of the word weight matrix?
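
    We can; since matrix multiplication is linear, multiplying by the sum of one-hot vectors yields exactly the sum of the selected columns, so the column lookup is just the cheaper way to compute the same thing. A quick check with made-up sizes:

    ```python
    import numpy as np

    V, k = 7, 3
    W = np.random.randn(k, V)               # context weight matrix

    i, j = 1, 5
    x = np.zeros(V); x[i] += 1; x[j] += 1   # sum of the two one-hot inputs

    h_matmul = W @ x                        # forward pass on the summed one-hots
    h_lookup = W[:, i] + W[:, j]            # direct column lookup

    assert np.allclose(h_matmul, h_lookup)  # identical; lookup skips the zeros
    ```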

  • @pawanchoudhary619 · 10 months ago

    Why log(y_pred), and not y_true * log(y_pred)?

    • @aniketsukhija9916 · 6 months ago

      y_true is one-hot, equal to 1 at the index of the correct word and 0 everywhere else, so in the current scenario the sum reduces to log(y_pred) at that index.
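
      A tiny numeric check of that collapse, with an invented three-word vocabulary:

      ```python
      import numpy as np

      y_pred = np.array([0.1, 0.7, 0.2])        # softmax output over the vocabulary
      t = 1                                     # index of the true word
      y_true = np.zeros(3); y_true[t] = 1.0     # one-hot target

      full = -np.sum(y_true * np.log(y_pred))   # -sum_i y_true[i] * log(y_pred[i])
      short = -np.log(y_pred[t])                # only the true index survives

      assert np.isclose(full, short)
      ```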