Neural Network from Scratch without ML libraries | 100 lines of Python code

  • Published 17 Dec 2024

Comments • 13

  • @HeadshotComing
    @HeadshotComing 1 year ago +2

    Very cool! Thanks for the video!

  • @deeber35
    @deeber35 1 year ago +1

    For multiple layers, the first weights and biases are set randomly; how do you determine what to use for the next layer's weights and biases (when going forward, not backward)?

    • @papersin100linesofcode
      @papersin100linesofcode 1 year ago

      Hi, thank you for your question. Maybe you are confusing weights with hidden units? All the weights are initialized randomly. Then, the hidden units of the first layer are computed from the weights of the first layer together with the input. Then, those hidden units are used to compute the hidden units of the following layers.
      I hope it is clear :) Let me know otherwise.

    • @deeber35
      @deeber35 1 year ago +1

      @@papersin100linesofcode I know how to compute inputs and outputs as you go on to the next layers; but what weights and biases are used to compute the layers after that? For layer 1, you set them yourself, randomly. I don't see an equation to compute weights for all the layers after that. Are they set randomly as well in the initial run, then changed later by backpropagation? Thanks.

    • @papersin100linesofcode
      @papersin100linesofcode 1 year ago +1

      @@deeber35 Yes, exactly! They are initialized randomly when each layer is initialized, and then updated with backpropagation (a minimal sketch follows this thread).
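
A minimal Python sketch of what this thread describes; it is not the video's actual code, and the Layer class, tanh activation, sizes, and learning rate are illustrative assumptions. Every layer's weights and biases are drawn randomly when the layer is constructed, the forward pass only computes hidden units from the current weights, and only backpropagation changes the weights.

import numpy as np

class Layer:
    # Illustrative layer; not the video's implementation.
    def __init__(self, n_in, n_out):
        # Every layer's weights and biases are drawn randomly at
        # construction time; no formula derives them from earlier layers.
        self.W = np.random.randn(n_in, n_out) * 0.1
        self.b = np.zeros(n_out)

    def forward(self, x):
        # The forward pass only computes hidden units from the current
        # weights; it never changes them.
        self.x = x
        self.z = x @ self.W + self.b
        return np.tanh(self.z)

    def backward(self, grad_out, lr=0.1):
        # Backpropagation is the only step that updates W and b.
        grad_z = grad_out * (1.0 - np.tanh(self.z) ** 2)
        grad_x = grad_z @ self.W.T
        self.W -= lr * self.x.T @ grad_z
        self.b -= lr * grad_z.sum(axis=0)
        return grad_x

# Layer 1's hidden units come from the input; each later layer's hidden
# units come from the previous layer's output.
layers = [Layer(2, 8), Layer(8, 1)]
x = np.random.randn(4, 2)            # toy input batch
out = x
for layer in layers:
    out = layer.forward(out)
grad = out - np.ones((4, 1))         # gradient of a toy squared-error loss
for layer in reversed(layers):
    grad = layer.backward(grad)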

  • @legion_prex3650
    @legion_prex3650 1 year ago +1

    That seems pretty cool. I will try it! Thank you!

  • @emjizone
    @emjizone 4 months ago +1

    If I have to put up with the French accent, I would rather hear it in my native language. 😄
    I can barely stand it in English, sorry.
    Congratulations on the effort to synthesize so much into one short implementation and video. It really demystifies machine learning.

  • @JessieJussMessy
    @JessieJussMessy 1 year ago +1

    Nice