Machine learning - Neural networks

  • Published Nov 17, 2024

Comments • 12

  • @DlVirgin
    @DlVirgin 11 years ago +1

    This is the best lecture on neural networks I have ever seen (I have seen many)... You very thoroughly explained every aspect of how ANNs work in a way that was easy to understand...

  • @xbuchtak
    @xbuchtak 11 years ago +2

    I must agree, this is an excellent lecture and the easiest-to-understand explanation of backprop I've ever seen.

  • @JaysonSunshine
    @JaysonSunshine 6 years ago

    At 1:03:45, it is stated that the hyperbolic tangent function represents a solution to the vanishing gradient problem, but this is false according to Wikipedia (and other sources): en.wikipedia.org/wiki/Vanishing_gradient_problem. The ReLU activation function does help resolve this problem, though.
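    A quick way to see this commenter's point: the tanh derivative saturates toward zero for large inputs, so repeated multiplication across layers shrinks gradients, while ReLU keeps a derivative of 1 on its positive side. A minimal sketch (function names are mine, not from the lecture):

    ```python
    import math

    def tanh_grad(x):
        # Derivative of tanh: 1 - tanh(x)^2, which shrinks toward 0 as |x| grows
        return 1.0 - math.tanh(x) ** 2

    def relu_grad(x):
        # Derivative of ReLU: 1 for positive inputs, 0 otherwise
        return 1.0 if x > 0 else 0.0

    # At a large pre-activation the tanh gradient has nearly vanished,
    # while the ReLU gradient is still exactly 1 on the positive side.
    print(tanh_grad(5.0))  # close to 0
    print(relu_grad(5.0))  # 1.0
    ```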

  • @JaysonSunshine
    @JaysonSunshine 6 years ago

    There are errors at 1:01:58; the learning rate is missing from the batch equation, and in both cases it is more informative to switch the sign so it's clear we're moving opposite the gradient and the step size is positive.
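    For reference, the update this comment has in mind: with a positive learning rate η, batch gradient descent steps opposite the gradient, w ← w − η∇L(w). A minimal sketch on a one-parameter least-squares problem (all names and values are illustrative, not the lecture's example):

    ```python
    # Batch gradient descent on L(w) = mean of (w * x_i - y_i)^2,
    # with the learning rate eta explicit and the step taken opposite the gradient.
    xs = [1.0, 2.0, 3.0]
    ys = [2.0, 4.0, 6.0]  # generated by w = 2, so the minimum is at w = 2

    def batch_grad(w):
        # Gradient of the mean squared error: (2/N) * sum_i x_i * (w * x_i - y_i)
        n = len(xs)
        return sum(2.0 * x * (w * x - y) for x, y in zip(xs, ys)) / n

    w = 0.0
    eta = 0.05  # positive step size
    for _ in range(200):
        w -= eta * batch_grad(w)  # w <- w - eta * grad: move against the gradient

    print(round(w, 3))  # converges near 2.0
    ```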

  • @lradhakrishnarao902
    @lradhakrishnarao902 7 years ago +1

    The videos and lectures are amazing and have resolved a lot of my issues. However, I want to add something:
    Where are the topics for SVM and HMM?
    Also, it would be nice if one or two complex equations were worked through, showing how to solve them.

  • @qdcs524gmail
    @qdcs524gmail 9 years ago

    Sir, may I know the activation function used in the ANN 4-layer example with the canary where 4 output neurons (sing, move, etc.) are activated at the same time? Does each layer use the same activation function? Please advise. Thanks.

  • @6katei
    @6katei 10 years ago +1

    I also agree.

  • @tobiaspahlberg1506
    @tobiaspahlberg1506 8 years ago

    Was there a reason why x_i1 and x_i2 were replaced by just x_i in the regression MLP example?

    • @chandreshmaurya8102
      @chandreshmaurya8102 8 years ago

      x_i is a vector with components x_i1 and x_i2. It's shorthand notation.

    • @shekarforoush
      @shekarforoush 8 years ago

      Nope, if you look at the x_i values in the table, you can see they are scalars. So in this example, instead of having 2 inputs, we have only one input feature x at time i.

  • @im_sanjay
    @im_sanjay 6 years ago

    Can I get the slides?

    • @sehkmg
      @sehkmg 6 years ago +1

      Just go to the course website and you'll find the slides: www.cs.ubc.ca/~nando/540-2013/lectures.html