01L - Gradient descent and the backpropagation algorithm

  • Published on 27 Jul 2024
  • Course website: bit.ly/DLSP21-web
    Playlist: bit.ly/DLSP21-YouTube
    Speaker: Yann LeCun
    Chapters
    00:00:00 - Supervised learning
    00:03:43 - Parametrised models
    00:07:23 - Block diagram
    00:08:55 - Loss function, average loss
    00:12:23 - Gradient descent
    00:30:47 - Traditional neural nets
    00:35:07 - Backprop through a non-linear function
    00:40:41 - Backprop through a weighted sum
    00:50:55 - PyTorch implementation
    00:57:18 - Backprop through a functional module
    01:05:08 - Backprop through a functional module
    01:12:15 - Backprop in practice
    01:33:15 - Learning representations
    01:42:14 - Shallow networks are universal approximators!
    01:47:25 - Multilayer architectures == compositional structure of data

Comments • 155

  • @AICoffeeBreak · 3 years ago · +58

    Thanks for posting these! With this, you reach a very wide audience and help anyone who does not have access to such teachers and universities! 👏

    • @alfcnz · 3 years ago · +14

      Yup, that's the plan! 😎😎😎

    • @Navhkrin · 2 years ago · +2

      Hello Ms Coffee beans

    • @AICoffeeBreak · 2 years ago · +1

      @@Navhkrin Hello! ☕

  • @makotokinoshita6337 · 2 years ago · +2

    You're doing a massive favor to the community that wants access to high-quality content without paying a huge amount of money. Thank you so much!

    • @alfcnz · 2 years ago · +1

      You're welcome 😇😇😇

  • @johnhammer8668 · 3 years ago · +1

    Thanks very much for the content. What a time to be alive. To hear from the master himself.

    • @alfcnz · 3 years ago

      💜💜💜

  • @sutirthabiswas8273 · 2 years ago · +2

    Going from seeing Yann's name in a research paper during a literature survey in my internship program to attending his lectures is really a thrill. Enriching and mathematically profound stuff here. Thanks for sharing it for free!

    • @alfcnz · 2 years ago · +2

      You're welcome 😊😊😊

  • @dr.mikeybee · 2 years ago · +6

    Wow! Yann is such a great teacher. I thought I knew this material fairly well, but Yann is enriching my understanding with every slide. It seems to me that his teaching method is extremely efficient. I suppose that's because he has such a deep understanding of the material.

    • @alfcnz · 2 years ago · +2

      🤓🤓🤓

  • @jobiquirobi123 · 3 years ago

    Great content! It’s just great to have this quality information available

    • @alfcnz · 3 years ago · +1

      You're welcome 🐱🐱🐱

  • @user-co6pu8zv3v · 3 years ago

    I have watched this lecture twice in the last year. Mister LeCun is great! :)

    • @alfcnz · 3 years ago · +3

      Professor / doctor LeCun 😜

  • @isurucumaranathunga · 2 years ago

    Thank you so much for this valuable content. This teaching method is extremely amazing.

    • @alfcnz · 2 years ago · +1

      Yay! I'm glad you fancy it! 😊😊😊

  • @monanasery1992 · months ago

    Thank you so much for sharing this 🥰 This was the best video for learning gradient descent and backpropagation.

    • @alfcnz · months ago · +1

      🥳🥳🥳

  • @neuroinformaticafbf5313 · 2 years ago

    I just can't believe this content is free. Amazing! Long live Open Source! Thanks, Alfredo :)

    • @alfcnz · 2 years ago · +1

      ❤️❤️❤️

  • @thanikhurshid7403 · 2 years ago

    You are a great man. Thanks to you someone even in a third world country can learn DL from one of the inventors himself. THIS IS CRAZY!

    • @alfcnz · 2 years ago

      😇😇😇

  • @mataharyszary · 2 years ago

    This intimate atmosphere allows for a better understanding of the subject matter. Great questions 【ツ】 and of course great answers. Thank you

    • @alfcnz · 2 years ago

      You're welcome 😁😁😁

  • @fuzzylogicq · 2 years ago

    Man!! These are gold… especially for people who don't have access to these kinds of teachers, these methods of teaching, the material, etc. (that's a lot of people, actually).

    • @alfcnz · 2 years ago

      🤗🤗🤗

  • @chrcheel · 2 years ago

    This is wonderful. Thank you ❤️

    • @alfcnz · 2 years ago

      😃😃😃

  • @mpalaourg8597 · 2 years ago

    Thank you so much, Alfredo, for organizing the material in such a nice and compact way for us! The insights of Yann and your examples, explanations and visualization are an awesome tool for anybody willing to learn (or to remember stuff) about deep learning. Greetings from Greece and I owe you a coffee, for your tireless effort.
    PS. Sorry for my bad English. I am not a native speaker.

    • @alfcnz · 2 years ago · +1

      I'm glad the content is of some help.
      Looking forward to getting that coffee in Greece. I've never visited… 🥺🥺🥺 Hopefully I'll fix that soon. 🥳🥳🥳

    • @mpalaourg8597 · 2 years ago

      @@alfcnz Easy fix, I'll send a Pull Request in no time!

    • @alfcnz · 2 years ago

      For coming to Greece? 🤔🤔🤔

  • @alexandrevalente9994 · 2 years ago

    I really love that discussion about solving non-convex problems… finally we get out of the books! At least we unleash our minds.

    • @alfcnz · 2 years ago

      🧠🧠🧠

  • @dr.mikeybee · 2 years ago

    At 1:05:40 Yann is explaining the two Jacobians, but I was having trouble getting the intuition. Then I realized that the first Jacobian gets the gradient used to modify the weights w[k+1] of function z[k+1], and the second Jacobian backpropagates the gradient to function z[k], which can then be used to compute the gradient at k for yet another Jacobian to adjust the weights w[k]. So one Jacobian is for the parameters and the other is for the state, since both the parameter variable and the state variable are column vectors. Yann explains it really well. I'm amazed that I seem to be understanding this complicated mix of symbols and logic. Thank you.

    • @alfcnz · 2 years ago

      👍🏻👍🏻👍🏻
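
    A minimal sketch of the two Jacobians discussed in the comment above (an illustration only, not the lecture's code; the layer and shapes are made up). For a layer z_next = tanh(W z), one Jacobian is taken with respect to the parameters W (used to update W) and one with respect to the state z (used to keep sending the gradient backwards):

        import torch
        from torch.autograd.functional import jacobian

        W = torch.randn(3, 4)   # parameters
        z = torch.randn(4)      # state coming from the previous layer

        layer = lambda W_, z_: torch.tanh(W_ @ z_)

        # Jacobian w.r.t. the parameters: used for the weight update of this layer.
        J_params = jacobian(lambda W_: layer(W_, z), W)   # shape (3, 3, 4)
        # Jacobian w.r.t. the state: used to propagate the gradient to the layer below.
        J_state = jacobian(lambda z_: layer(W, z_), z)    # shape (3, 4)

        print(J_params.shape, J_state.shape)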

  • @mahdiamrollahi8456 · 3 years ago · +2

    It is my honor to learn from you and from the professor…

    • @alfcnz · 3 years ago · +3

      Don't forget to subscribe to the channel and like the video to manifest your appreciation.

    • @mahdiamrollahi8456 · 3 years ago

      @@alfcnz Ya, I did that 🤞

    • @alfcnz · 3 years ago · +2

      🥰🥰🥰

  • @andylee8283 · 2 years ago

    Thank you for sharing; these videos help me more and more.

    • @alfcnz · 2 years ago

      🤓🤓🤓

  • @adarshraj6721 · 2 years ago

    Love from India, sir. I really like the discussion and doubt-clearing parts. Hoping to join NYU for my MS in 2023. :)

    • @alfcnz · 2 years ago

      💜💜💜

  • @gurdeeepsinghs · 2 years ago

    Alfredo Canziani ... drinks are on me if you ever visit India ... this is extremely high quality content!

    • @alfcnz · 2 years ago

      Thanks! I prefer food, though 😅
      And yes, I'm planning to come over soon-ish.

  • @aymensekhri2133 · 3 years ago

    Thank you very much

    • @alfcnz · 3 years ago

      You're very welcome 🐱🐱🐱

  • @dr.mikeybee · 2 years ago · +1

    Just FYI, at 1:01:00 Yann correctly says ∂c/∂z_g, but the diagram has ∂c/z_g. Should those also be ∂c/∂w_g and ∂c/∂w_f?

  • @mahdiamrollahi8456 · 1 year ago

    Following the contours, there are infinitely many values of w that give the same loss. So do we get the same predictions for all of these parameters for which the loss is equal?

  • @WeAsBee · 2 years ago

    The discussions on stochastic gradient descent (12:23) and on Adam (1:16:15) are great. They address a common misconception.

    • @alfcnz · 1 year ago

      🥳🥳🥳

  • @mdragon6580 · 7 months ago

    1:02:09 The "Einstein summation convention" is being used here. The student asking the question is not familiar with this convention, and Yann doesn't seem to realize that.

    • @alfcnz · 7 months ago

      It's not. It's just a vector-matrix multiplication.

    • @mdragon6580 · 7 months ago

      @@alfcnz Ohhh, I see. I was reading ∂c/∂z_f as "the f-th entry of the vector", but it actually denotes the entire vector. Similarly, I was reading ∂z_g/∂z_f as "the (g,f)-th entry of the Jacobian", whereas it actually denotes the entire Jacobian matrix. Sorry, I misread.
      Yann's notation for the (i,j)-th entry of the Jacobian matrix is given in the last line of the same slide.
      Thank you so much, Alfredo, for your quick reply above! And thank you so, so much for putting these videos on YouTube for everyone!

  • @mahdiamrollahi8456 · 1 year ago

    So, how can we tell that there is at least some pattern in our distribution, so that a model could find it? Suppose we are trying to predict the MD5 hash of a string. In that case we ourselves may know there is no pattern to learn, but how can we tell for any other problem? Thanks

  • @geekyrahuliitm · 2 years ago

    @Alfredo, this content is amazing. I have two questions, though; it would be great if you could help me with them:
    Does this mean that in SGD we compute a weight update for every sample (picked randomly)?
    If we update on each sample individually, how does that affect the training time? Does it increase or decrease compared to batch GD?

    • @maxim_ml · 2 years ago

      SGD _is_ mini-batch GD
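
    To make the question and the reply concrete, here is a minimal sketch of the update loop (not the course notebooks; the data, sizes, and names are made up). With batch_size=1 every step uses one randomly picked sample (pure SGD); with a larger batch_size each step averages the gradient over a mini-batch, so there are fewer, but less noisy, updates per epoch:

        import torch
        from torch.utils.data import DataLoader, TensorDataset

        X, y = torch.randn(1000, 20), torch.randn(1000, 1)   # toy data
        model = torch.nn.Linear(20, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
        criterion = torch.nn.MSELoss()

        batch_size = 64                                      # set to 1 for "pure" SGD
        loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)

        for x_b, y_b in loader:            # one parameter update per mini-batch
            optimizer.zero_grad()
            loss = criterion(model(x_b), y_b)
            loss.backward()                # gradient averaged over the mini-batch
            optimizer.step()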

  • @dr.mikeybee · 2 years ago

    I don't know if this helps anyone, but it might. Weighted sums like s[0] are always to the first power. There are no squared or cubed weighted sums. So, by the power rule, the derivative of nx (to the first power) is n. The derivative of ws[0] with respect to s[0] is always the weight w. That's why the application of the chain rule is so simple. Here's some more help: if y = 2x, y' = 2; if q = 3y, q' = 3; so q(y(x))' = 2 * 3. Picture the graph of q(y(x)). What is the slope? It's 6. And however many layers you add to a neural net, the partial slopes will be products of the weights.

    • @alfcnz · 2 years ago · +1

      Things get a little more fussy when moving away from the 1D case, though. 😬😬😬
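
    The 1D example in the comment above can be checked mechanically with autograd (a sanity-check sketch, not course material):

        import torch

        x = torch.tensor(5.0, requires_grad=True)
        y = 2 * x        # dy/dx = 2
        q = 3 * y        # dq/dy = 3
        q.backward()     # chain rule: dq/dx = 3 * 2
        print(x.grad)    # tensor(6.)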

  • @alexandrevalente9994 · 2 years ago

    Does the trick explained about normalizing training samples (01:20:00) also apply to convolutional neural networks?

    • @alfcnz · 2 years ago

      Indeed.
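
    For instance, image inputs to a convolutional net are commonly normalized per channel, using statistics computed over the training set (a minimal sketch; here the statistics are computed on a random batch just for brevity):

        import torch

        x = torch.rand(32, 3, 28, 28)               # a batch of images: (N, C, H, W)
        mean = x.mean(dim=(0, 2, 3), keepdim=True)  # per-channel mean
        std = x.std(dim=(0, 2, 3), keepdim=True)    # per-channel standard deviation
        x_normalized = (x - mean) / std             # zero mean, unit variance per channel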

  • @lam_roger · 2 years ago

    At the 40:41 section: is the purpose of using backpropagation to find the derivative of the cost function w.r.t. z, so as to find the best direction to "move"? I've only gotten through half of the lecture, so forgive me if this is answered later.

    • @alfcnz · 2 years ago · +2

      Say z = f(wᵀx). If you know ∂C/∂z, then you can compute ∂C/∂w = ∂C/∂z ∂f/∂w.
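
    A tiny numerical check of that identity (a sketch only; the particular f and cost are made up): compute ∂C/∂w by hand from ∂C/∂z and the chain rule, then compare with autograd.

        import torch

        w = torch.randn(4, requires_grad=True)
        x = torch.randn(4)

        s = w @ x                  # weighted sum
        z = torch.tanh(s)          # z = f(wᵀx)
        C = (z - 1.0) ** 2         # some cost
        C.backward()

        dC_dz = 2 * (z - 1.0)                  # ∂C/∂z
        dz_dw = (1 - torch.tanh(s) ** 2) * x   # ∂z/∂w
        manual = dC_dz * dz_dw                 # chain rule: ∂C/∂w = ∂C/∂z · ∂z/∂w

        print(torch.allclose(w.grad, manual.detach()))   # True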

  • @sobhanahmadianmoghadam9211 · 1 year ago

    Hello. Isn't
    ds[0] * dc / ds[0] + ds[1] * dc / ds[1] + ds[2] * dc / ds[2] = 3dc
    instead of dc? (At time 41:00)

  • @ayushimittal6496 · 2 years ago

    Hi Alfredo! Thank you so much for posting these lectures here! I wanted to know if there's any textbook for this course that I could refer to, along with following the lectures. Thanks :)

    • @alfcnz · 2 years ago · +3

      Yes, I'm writing it. Hopefully a draft will be available by December. 🤓🤓🤓

    • @juliusolaifa5111 · 2 years ago

      @@alfcnz The eagernessssssssssss

    • @alfcnz · 2 years ago · +1

      Not sure anything will come out _this_ December, though…

    • @juliusolaifa5111 · 2 years ago

      I'm hanging in there for whenever it does come out. Alfredo, can I email you about the possibility of PhD supervision?

    • @alfcnz · 2 years ago

      Uh… are you an NYU student?

  • @alexandrevalente9994 · 2 years ago

    About the notebooks... are there corrections? Or can we send them to you?
    Thanks

    • @alfcnz · 2 years ago

      What notebooks would you want to send to me? 😮😮😮

  • @mahdiamrollahi8456 · 3 years ago

    How do libraries like PyTorch or TensorFlow calculate the derivative of a function? Do they compute the limit (f(x+dx) − f(x))/dx numerically, or do they have pre-defined derivatives?

    • @alfcnz · 3 years ago · +1

      Each function f comes with its analytical derivative f'. Forward calls f, while backward calls f'.

    • @mahdiamrollahi8456 · 3 years ago

      @@alfcnz Actually I asked that before I watched it at 54:30, Regards 🤞

    • @alfcnz · 3 years ago · +1

      If you remove ' and ", that becomes a link. 🔗🔗🔗

    • @mahdiamrollahi8456 · 3 years ago

      @@alfcnz Cool !
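
    To make the reply above concrete, here is a minimal sketch (an illustration, not PyTorch's actual source) of how a function carries its own analytical derivative via torch.autograd.Function: forward evaluates f, and backward evaluates f′ times the incoming gradient.

        import torch

        class Square(torch.autograd.Function):
            """f(x) = x², with its analytical derivative f'(x) = 2x."""

            @staticmethod
            def forward(ctx, x):
                ctx.save_for_backward(x)    # stash what backward will need
                return x ** 2               # evaluate f

            @staticmethod
            def backward(ctx, grad_output):
                (x,) = ctx.saved_tensors
                return grad_output * 2 * x  # evaluate f' and apply the chain rule

        x = torch.tensor(3.0, requires_grad=True)
        y = Square.apply(x)
        y.backward()
        print(x.grad)                       # tensor(6.)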

  • @alexandrevalente9994 · 2 years ago

    Is this the paper to use in order to better understand backprop (the way it is explained in this video)? Or should we read some other work by Yann?

    • @alfcnz · 2 years ago

      What paper? You need to point out minutes:seconds if you want me to address a specific question regarding the video.

    • @alexandrevalente9994 · 2 years ago

      @@alfcnz I forgot to paste the link… I'll do it later. It's from 1988… I will review the link.

  • @mahdiamrollahi8456 · 1 year ago

    Hello Alfredo, at 1:11:50, where would we have loops in the gradient graph? Is there a prime example? Thanks

    • @alfcnz · 1 year ago · +1

      That would be a system that we don't know how to handle. Every other connection is permitted.

    • @mahdiamrollahi8456 · 1 year ago

      @@alfcnz 🙏🌿

  • @alexandrevalente9994 · 2 years ago

    One of my questions was overlooked: "What is the difference between lesson x and lesson xL?" So what is the difference between 01 and 01L, for example?

    • @alfcnz · 2 years ago · +1

      Lecture and practica. This used to be a playlist of only practica. Then it turned into a full course.

  • @dr.mikeybee · 2 years ago

    How do you perturb the output and backprop? Earlier the derivative of the cost function was 1. (around 1:50:00)

    • @alfcnz · 2 years ago

      I've listened to it and there's no mention of backprop at that timestamp.

    • @dr.mikeybee · 2 years ago

      @@alfcnz Thank you Alfredo. I probably messed up. It's where Yann mentions Q-learning and Deep Mind. I imagine he will cover all this in a later lecture. Thank you for doing all this. Sorry for all the comments. I'm just enjoying this challenging material a lot. I just forked your repo, and I'm starting the first notebook. Cheers!

    • @dr.mikeybee · 2 years ago

      I see what I did. I gave you the end of video timestamp. My bad. LOL!

    • @dr.mikeybee · 2 years ago

      It's just after 1:27:00.

  • @inertialdataholic9278 · 2 years ago

    51:05 shouldn't it be self.m0(z0) as it takes in the flattened input?

    • @alfcnz · 2 years ago · +1

      Of course.

  • @dr.mikeybee · 2 years ago

    I thought that Haar-like features were not that recognizable. (1:48:00)

  • @dr.mikeybee · 2 years ago

    Just to clarify, the first code you show defines a model's graph, but it is untrained; so it can't be used yet for inference.

    • @alfcnz · 2 years ago

      You need to tell me minutes:seconds, or I have no clue what you're asking about.

    • @dr.mikeybee · 2 years ago

      50:00

  • @alexandrevalente9994 · 2 years ago

    About the code in PyTorch (51:00 in the video)… the code instantiates the mynet class and stores the reference in the model variable, but nowhere does it call the "forward" method… so how does the out variable receive any output from the model object? Is there some PyTorch magic which is not explained here?

    • @alfcnz · 2 years ago

      Yup. When you call an nn.Module, the forward function is called, with some other stuff running before and after it.

    • @alexandrevalente9994 · 2 years ago

      @@alfcnz Oh yes! My bad… I was distracted… indeed mynet inherits from nn.Module, and I suppose forward is the implementation of an abstract method.

    • @alfcnz · 2 years ago

      Correct. 🙂🙂🙂
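
    A minimal sketch of that "magic" (an illustration; the class below is made up, not the lecture's code): calling a module instance goes through nn.Module.__call__, which runs any registered hooks and then dispatches to forward.

        import torch
        from torch import nn

        class MyNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.linear = nn.Linear(10, 2)

            def forward(self, x):           # never called explicitly below
                return self.linear(x)

        model = MyNet()
        x = torch.randn(4, 10)

        out = model(x)                      # __call__ -> hooks -> model.forward(x)
        print(torch.allclose(out, model.forward(x)))   # True, but prefer calling model(x)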

  • @matthewevanusa8853 · 3 years ago

    One reason I agree it's better not to call a unit a "neuron" is the growing acceptance that single neurons in the brain are capable of complex computation via dendritic compartment computation

    • @alfcnz · 3 years ago

      If this is a question or note about the content, you need to add minutes:seconds, or I have no clue what you're referring to.

    • @matthewevanusa8853 · 3 years ago

      @@alfcnz Ah, sorry. It was just to add on, at ~31:25, when Prof. LeCun explains why people don't like to refer to the units as 'neurons' per se.

    • @alfcnz · 3 years ago · +1

      Cool! 😇😇😇

  • @hyphenpointhyphen · 9 days ago

    Why can't we use counters for the loops in neural nets? Would a loop not make the network more robust, in the sense of stabilizing the output?

    • @alfcnz · 9 days ago · +1

      You need to add a timestamp if you’re expecting an answer to a specific part of the video. Otherwise it’s impossible for me to understand what you’re talking about.

    • @hyphenpointhyphen · 8 days ago

      @@alfcnz Sorry, around 34:39 - thanks for replying

  • @pranabsarma18 · 3 years ago

    Great videos. May I know what the L stands for in the video titles, e.g. 01L?

    • @alfcnz · 3 years ago

      Lecture.

    • @pranabsarma18 · 3 years ago

      @@alfcnz What about the videos which do not have an L? It might sound silly but I am so confused 😂

    • @alfcnz · 3 years ago

      Those are my sessions, the practica. So they should have a P, if I wanted to be super precise.

    • @alfcnz · 3 years ago · +1

      At the beginning there were only my videos. Yann's videos were not initially going to come online. It's too much work…

    • @pranabsarma18 · 3 years ago

      Thank you Alfredo. ☺️🤗

  • @mohammedelfatih8018 · 2 years ago

    How can I press the like button more than once?

  • @xXxBladeStormxXx · 3 years ago

    Can you please link the reinforcement learning course Yann mentioned? Or at least give the name of the instructor? I couldn't fully make it out.

    • @alfcnz · 3 years ago

      Without telling me minute:second I have no clue what you're talking about.

    • @xXxBladeStormxXx · 3 years ago

      @@alfcnz Oh right! sorry. It's at 1:18

    • @xXxBladeStormxXx · 3 years ago

      @@alfcnz Actually, after re-listening to it, it sounded a lot clearer. It's the NYU reinforcement learning course by Lerrel Pinto.

    • @alfcnz · 3 years ago

      Yes, that's correct. 😇😇😇

  • @bhaswarbasu2288 · 10 months ago

    Where can we get the slides from?

    • @alfcnz · 10 months ago

      The course website. 😇

  • @alexandrevalente9994 · 2 years ago

    What is the difference between lesson x and lesson xL ?

    • @alfcnz · 2 years ago

      L stands for lecture. Initially I was going to publish only my sessions. Then I added Yann's.

  • @mahdiamrollahi8456 · 1 year ago

    22:27 How do we ensure convexity…

    • @alfcnz · 1 year ago · +1

      We don't.

    • @mahdiamrollahi8456 · 1 year ago

      @@alfcnz Yes 😅, I just wanted to highlight the question you asked, sir, and his answer at that timestamp. Thanks 🙏

  • @AIwithAniket · 3 years ago

    I didn't get "if batch-size >> num_classes then we are wasting computation". Could someone explain?

    • @alfcnz · 3 years ago

      You need to add minutes:seconds, or I cannot figure out what you're talking about.

    • @AIwithAniket · 3 years ago

      @@alfcnz wow I didn't expect a reply this soon 💜.
      My question was from 30:17

    • @alfcnz · 3 years ago · +2

      Let's say you have exactly 10 images, one per digit. Now clone them 6k times, so you have a data set of size 60k samples (same size as MNIST). Now, if your batch is anything larger than 10, say 20 (you pick two images per digit), for example, you're computing the same gradient twice for no good reason.
      Now take the real MNIST. It is certainly not as bad as the toy data set described above, but most images for a given digit look very similar (hopefully so, otherwise it would be impossible to recognise)! So, you're in a very very similar situation.

    • @AIwithAniket · 3 years ago

      @@alfcnz oh got it. Thanks for explaining with the intuitive example 🙏

    • @alfcnz · 3 years ago · +1

      That's Yann's 😅😅😅

  • @wangvince5857 · months ago

    I noticed Yann's bored face when he tried to explain the chain rule at 38:43 lol

    • @alfcnz · months ago

      🤣🤣🤣

  • @balamanikandanjayaprakash6378 · 2 years ago

    Hi Alfredo, at 20:58 Yann mentioned that the objective function "needs to be continuous mostly and differentiable almost everywhere". What does he mean? Isn't a differentiable function always continuous? Also, is there a function that is differentiable only on part of its domain? Can someone give me an example among deep learning functions? Please help me out.
    And thanks for these amazing videos!!!

    • @aniketthomas6387 · 2 years ago · +2

      I think he meant that the function has to be continuous everywhere, but not necessarily differentiable everywhere: it should be differentiable "almost" everywhere, as is the case with the ReLU function max(0, x), which is non-differentiable at x = 0 but differentiable elsewhere, and continuous everywhere. So if the function is differentiable everywhere, that is awesome, but it is not a necessary condition.
      The thing is, it should be continuous so that we can estimate the gradients and there is no break in the function. If there is a break somewhere in your objective function, you can't estimate the gradient and your network has no way of knowing what to do.
      If I am wrong, please do correct me.

    • @alfcnz · 2 years ago · +2

      Yup, Aniket's correct.

    • @balamanikandanjayaprakash6378 · 2 years ago · +1

      Hey, Thanks for the explanation !!
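
    The ReLU example above can be checked directly: max(0, x) is continuous everywhere but not differentiable at x = 0, and autograd simply uses one of the subgradients there (a sketch, assuming PyTorch's convention of 0 at x = 0):

        import torch

        x = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
        y = torch.relu(x).sum()
        y.backward()
        print(x.grad)    # tensor([0., 0., 1.]); 0 is used as the (sub)gradient at x = 0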

  • @wangyeelinpamela · 1 year ago

    This might be the Lerrel Pinto course he references at 1:26: th-cam.com/video/sKqz9T_F_EU/w-d-xo.html

  • @advaitathreya5558 · 1 year ago

    12:55

    • @alfcnz · 1 year ago

      ?

    • @advaitathreya5558 · 1 year ago

      @@alfcnz A timestamp for myself to visit later :)

    • @alfcnz · 1 year ago

      🤣🤣🤣

  • @DaHrakl · 2 years ago

    20:03 Doesn't he contradict himself? First he mentions that smaller batches are better (I assume that by "better" he meant model quality) in most cases, and a few seconds later he says that it's just a hardware matter.

    • @alfcnz · 2 years ago

      We use mini-batches because we use GPU or other accelerators. Learning wise, we would prefer purely stochastic gradient descent (batch size of 1).

  • @alexandrevalente9994 · 11 months ago

    Two years later… in the video, at 54:30… x is still not fixed… it must be s0… 🤣🤣🤣🤣🤣🤣🤣🤣🤣 Just a joke ;-)

    • @alfcnz · 11 months ago

      😭😭😭

    • @alexandrevalente9994 · 11 months ago

      @@alfcnz Hahaaaaa… computer experts… what can you do? That's just how we are 😂😂😂😂😂😂😂