Forward propagation in training neural networks step by step

  • Published on Feb 4, 2025

Comments • 44

  • @KurshanCraigSandlerCasilen · 8 days ago +1

    Very good, very nice.

  • @Midria2024 · 14 days ago

    Such a clear explanation; this is truly a gift from the internet. Thank you, Bevan.

  • @quadrialli3715 · 1 year ago +6

    Beautiful video. The patience, the calmness in your voice in these videos. Thank you so much

  • @PrashantThakre · 3 years ago +4

    This is the best video on forward propagation on YouTube. Thanks for posting such videos.

  • @fundatamdogan · 3 years ago +7

    I've never seen such a perfect explanation. I ran into a problem while writing my kernel and searched everywhere for a solution to understand the reason for the error I got. Then I suddenly saw this video and started to listen carefully. Thank you so much, sir.

  • @Edin12n · 2 years ago +10

    Part 1 and Part 2 are the best explanations of these subjects around. Thanks so much, Bevan. You have a real talent for explaining a difficult subject in a way that's as easy as possible to grasp. Brilliant videos.

  • @dhishsaxena5746 · 1 year ago +1

    Thanks a lot Bevan. Excellent insights with simplicity. Kudos!

  • @bobdillon1138 · 1 year ago +3

    Simply the best explanations I have found. If I could give parts one and two a hundred likes, I would... Please do more AI content; you have a gift for teaching.

  • @geld5220 · 2 years ago +2

    The best explanation so far...

  • @mmacaulay · 3 years ago +2

    Thank you so much. This is the first explanation of forward propagation in neural networks that I actually understood.

  • @wd8222 · 1 year ago +2

    Excellent presentation! I wish for an intro to the Transformer architecture, which today replaces many NNs (CNN, RNN, …).

  • @myprofile6668 · 3 years ago +3

    Thank you so much for teaching in such a simple way that an average student can follow. It was clear and easy to understand and learn the NN. Love from Pakistan 😀

  • @kaushikplabon8530 · 1 year ago

    So perfect, brother. Nice job.

  • @EricD_192 · 2 years ago

    The best content I have found, thanks for such a detailed explanation!

  • @TheKwame83 · 2 years ago

    I'm currently taking an AI course, but I must say your explanation is more understandable. Thank you.

  • @kdSU30 · 2 years ago +5

    Dear Bevan,
    You have presented both the forward and backward propagation concepts in an exceptional manner, especially the example that you have chosen for doing so. The majority of ANN tutorials stick to 'cat-dog' kind of categorical examples.
    I have one query, though. Do you recommend using a non-linear activation function in the output layer? And if yes, which non-linear activation function would you prefer for continuous problems? The thing to keep in mind while choosing this output-layer activation function is that it should be able to produce an output that can exceed 1 or fall below 0 or -1. A sigmoid or tanh activation function in the output layer will not allow such 'greater than 1 or less than 0 / -1' outputs.
    I hope to hear from you. Thanks!
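
For continuous targets like the ones described here, the usual choice is to keep the non-linearity in the hidden layers and give the output neuron a linear (identity) activation, so predictions are unbounded. A minimal NumPy sketch of the contrast (all numbers are illustrative, not from the video):

    import numpy as np

    h = np.array([0.5, 0.69])              # hidden activations, already in (0, 1)
    w2, b2 = np.array([2.0, -4.0]), 3.0    # illustrative output weights and bias

    z_out = w2 @ h + b2                    # pre-activation of the output neuron

    y_linear  = z_out                          # identity: any real value, suits regression
    y_sigmoid = 1.0 / (1.0 + np.exp(-z_out))   # squashed into (0, 1): can never exceed 1
    y_tanh    = np.tanh(z_out)                 # squashed into (-1, 1)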

  • @mariammkassim7879 · 3 years ago

    Thank you. Very informative.

  • @mustafizurrahman5699 · 1 year ago

    Awesome

  • @gourinathhs3850 · 3 years ago

    Great work. Thank you.

  • @jaeen7665 · 2 years ago

    This is a fantastic explanation, but you'll need some knowledge of machine learning from scratch, particularly linear and logistic regression. This is perfect and exactly what I needed to understand NN. Liked and subbed.

  • @xray788 · 3 months ago

    Is it possible for you to upload the slides for the series please?

  • @osirismaat9695 · 1 year ago

    Stochastic Gradient Descent: it is stochastic because it randomly shuffles the samples in the training set.
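
A minimal sketch of that idea (the per-sample gradient function grad_fn is a hypothetical placeholder, not something defined in the video):

    import numpy as np

    def sgd(X, y, w, grad_fn, lr=0.01, epochs=10):
        # "Stochastic": visit the training samples in a fresh random order
        # each epoch and update the weights from one sample at a time.
        for _ in range(epochs):
            for i in np.random.permutation(len(X)):
                w = w - lr * grad_fn(w, X[i], y[i])
        return w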

  • @onepointhero_official · 1 year ago

    May I ask how you determine the bias that is added in the z1 calculation?

  • @luciaballesterosgarcia1306 · 3 years ago

    Thanks for your explanation! How do you choose the weights, or is it a random decision? Thank you in advance!

    • @bevansmithdatascience9580 · 3 years ago

      Hi Lucia. Yes, it was a random decision. For a more thorough explanation of the initial values of the weights, check out Andrew Ng (th-cam.com/video/6by6Xas_Kho/w-d-xo.html). Good luck!
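
To make "random" concrete: one common scheme (not necessarily the one used in the video) is to draw small values from a zero-mean normal distribution, scaled by the number of inputs, so that no two neurons start out identical:

    import numpy as np

    rng = np.random.default_rng()   # NumPy's random number generator
    n_inputs, n_hidden = 2, 2       # illustrative layer sizes

    # Small random weights break the symmetry between neurons; the
    # 1/sqrt(n_inputs) scaling is a common heuristic that keeps early
    # activations in a reasonable range.
    W1 = rng.normal(0.0, 1.0 / np.sqrt(n_inputs), size=(n_hidden, n_inputs))
    b1 = np.zeros(n_hidden)         # biases are often simply started at zero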

  • @dubeypankaj1983 · 1 year ago

    Why haven't we applied an activation function to Ypred?

  • @SuzyZou1998 · 1 year ago

    I was wondering why you didn't write an activation function in your output neuron. Is there a specific reason?

    • @bevansmithdatascience9580 · 1 year ago

      Most likely because it is just a linear summation of the inputs. There was no need to pass it through a non-linear activation function.
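
Putting the thread's answers together: the bias is just one more learned parameter added into each z, the non-linearity lives in the hidden layer, and the output neuron is a plain weighted sum. A sketch with illustrative numbers (not the video's):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x  = np.array([0.5, 0.8])       # inputs
    W1 = np.array([[0.1, 0.4],      # hidden-layer weights (two neurons)
                   [0.3, 0.2]])
    b1 = np.array([0.25, 0.45])     # hidden-layer biases, learned like the weights
    w2 = np.array([0.6, 0.9])       # output weights
    b2 = 0.15                       # output bias

    z1 = W1 @ x + b1                # e.g. z1[0] = 0.1*0.5 + 0.4*0.8 + 0.25
    a1 = sigmoid(z1)                # non-linear activation, hidden layer only
    y_pred = w2 @ a1 + b2           # linear summation: no output activation needed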

  • @limpblz1988 · 3 years ago

    Why do we only use 2 nodes?

    • @bevansmithdatascience9580 · 3 years ago +3

      It is just an example. There can be as many nodes/neurons as you wish; I just used two for this example.

  • @yinka366 · 2 years ago

    Thanks for this! Clear and straight to the point; the simplest example I've seen so far.
    As regards biases, does this apply to RBF neural networks as well? For instance, if the input layer has a bias unit, do we add the bias during forward propagation? Thanks

  • @ChargedPulsar · 2 years ago

    What is the point of saying "okay" every three seconds?
    Imagine people listening to your videos multiple times; it adds up to a lot of "okay, okay, okay, okay, okay", which is just noise in the information that actually gets in the way of the material!

    • @bevansmithdatascience9580 · 2 years ago

      Then don't watch. Go elsewhere

    • @ChargedPulsar · 2 years ago

      @bevansmithdatascience9580 Ignorance is bliss. Thanks, but no one is asking you what they should do next.

    • @bevansmithdatascience9580 · 2 years ago

      @ChargedPulsar I'm telling you.

    • @joachimguth6226 · 1 year ago

      You might learn to get your message across in a more pleasant manner. We get an excellent lesson for free, and you are focusing on a minor issue. Think about it, if you can.

    • @ChargedPulsar · 1 year ago

      @joachimguth6226 You are right; it could have been sugarcoated more. Unfortunately, this feedback, meant to benefit and improve the video/channel, was answered with insult and disrespect. When a teacher starts insulting the audience, it's a clear sign that he or she isn't the right character to be a teacher, but just uses information as an excuse to feel above others.