Very cool! Thanks for the video!
I am glad you like it, thank you for your comment!
For multiple layers, the first weights and biases are set randomly; how do you determine what to use for the next layer's weights and biases (when going forward, not backward)?
Hi, thank you for your question. Perhaps you are confusing weights and hidden units? All the weights are initialized randomly. Then, the hidden units of the first layer are computed from the weights of the first layer together with the input. Then, those hidden units are used to compute the hidden units of the following layers.
I hope this is clear :) Let me know otherwise
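To make that concrete, here is a minimal NumPy sketch of the forward pass being described (the layer sizes and the tanh activation are my own assumptions for illustration, not the video's exact code):

```python
import numpy as np

# Minimal sketch: every layer gets its own randomly initialized weights and biases,
# and the hidden units of one layer are the input to the next layer.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]                       # assumed: input dim, two hidden layers, output dim
weights = [0.1 * rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)             # hidden units of this layer feed the next one
    return h

print(forward(rng.standard_normal(4)))     # one example input of dimension 4
```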
@@papersin100linesofcode I know how to compute inputs and outputs as you go on to the next layers, but what weights and biases are used to compute the layers after that? For layer 1, you set them yourself, randomly. I don't see an equation to compute the weights for the layers after that. Are they set randomly as well in the initial run and then changed later by backpropagation? Thanks.
@@deeber35 Yes, exactly! They are initialized randomly when each layer is initialized, and then updated with backpropagation.
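A short PyTorch sketch of that idea (layer sizes, activation, and optimizer are assumptions, not necessarily what the video uses): every layer's weights and biases start random at construction, and one backward/step pass updates all of them, not only the first layer's.

```python
import torch

torch.manual_seed(0)
# Each nn.Linear draws its own random weights and biases when it is constructed.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 4)                     # dummy batch of inputs
y = torch.randn(16, 1)                     # dummy targets

w_before = model[2].weight.clone()         # weights of the second linear layer
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()                            # gradients flow back through every layer
optimizer.step()                           # all layers' weights and biases are updated
print(torch.equal(w_before, model[2].weight))  # False: the second layer changed too
```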
That seems pretty cool. I will try it! Thank you!
Thanks!
If I have to put up with the French accent, I would rather hear it in my native language. 😄
I can barely stand it in English, sorry.
Congratulations on the effort to synthesize so much into one short implementation and video. It quite demystifies machine learning.
Thank you! :)
Nice
Thank you so much!