Neural Networks Pt. 4: Multiple Inputs and Outputs

  • Published 27 Jul 2024
  • So far, this series has explained how very simple Neural Networks, with only 1 input and 1 output, function. This video shows how those exact same concepts generalize to multiple inputs and outputs, and provides a context within which we can discuss SoftMax and ArgMax for modifying the output data.
    NOTE: This StatQuest assumes you already know...
    The main ideas behind Neural Networks: • The Essential Main Ide...
    The ReLU Activation Function: • Neural Networks Pt. 3:...
    For a complete index of all the StatQuest videos, check out:
    statquest.org/video-index/
    If you'd like to support StatQuest, please consider...
    Buying my book, The StatQuest Illustrated Guide to Machine Learning:
    PDF - statquest.gumroad.com/l/wvtmc
    Paperback - www.amazon.com/dp/B09ZCKR4H6
    Kindle eBook - www.amazon.com/dp/B09ZG79HXC
    Patreon: / statquest
    ...or...
    TH-cam Membership: / @statquest
    ...a cool StatQuest t-shirt or sweatshirt:
    shop.spreadshirt.com/statques...
    ...buying one or two of my songs (or go large and get a whole album!)
    joshuastarmer.bandcamp.com/
    ...or just donating to StatQuest!
    www.paypal.me/statquest
    Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
    / joshuastarmer
    0:00 Awesome song and introduction
    2:07 Multiple inputs and outputs
    3:57 The blue bent surface for Setosa
    6:28 The orange bent surface for Setosa
    6:52 The green crinkled surface for Setosa
    8:42 Predicting Setosa
    9:42 Versicolor
    11:11 Virginica
    #StatQuest #NeuralNetworks

Comments • 262

  • @statquest
    @statquest  2 years ago +10

    The full Neural Networks playlist, from the basics to deep learning, is here: th-cam.com/video/CqOfi41LfDw/w-d-xo.html
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @Tapsthequant
    @Tapsthequant 2 years ago +51

    Now is the time for some shameless appreciation. Thank you Josh

    • @statquest
      @statquest  2 years ago +10

      Hooray!!! Thank you very much! BAM! :)

  • @alicecandeias2188
    @alicecandeias2188 3 years ago +60

    A Brazilian bank supporting this StatQuest video: BAM
    Me, a Brazilian watching the video: YO DOUBLE BAM

    • @statquest
      @statquest  3 years ago +29

      Muito bem!!! (Muito BAM!!! :)

    • @jpmagalhaes6645
      @jpmagalhaes6645 3 years ago +13

      @@statquest Another Brazilian watching this amazing channel: TRIPLE BAM!
      This Brazilian is a teacher and recommends this channel in all classes: SUPER BAM!

    • @statquest
      @statquest  3 years ago +8

      @@jpmagalhaes6645 Muito obrigado!!! :)

    • @TheKenigham
      @TheKenigham 2 years ago +1

      I even googled to confirm that Itaú was actually a Brazilian company LOL

    • @djiodsjio
      @djiodsjio a month ago +1

      @@statquest That was amazing; as a Brazilian I appreciate this joke very much

  • @mingli8919
    @mingli8919 3 years ago +21

    Your videos made me sincerely interested in these subjects and want to learn more, not just treat them as a useful skill I had to force myself to learn. Thank you, Sir! (salute)

    • @statquest
      @statquest  3 years ago +2

      Thank you very much and thank you for your support!!! :)

  • @cesarbarros2931
    @cesarbarros2931 2 years ago +11

    Hey, Sir Josh "Bam", you deserve an Oscar award, such meticulousness in conveying dense information in a paradoxically light and witty way. In my opinion, it seems to be an innovation in the process of transmitting non-trivial mathematical and related knowledge. Small video masterpieces with ultra-concatenated information at an extremely adjusted pace. I wish you even more the much-deserved recognition and success. Directly from Brazil, I send you my very special congratulations.

    • @statquest
      @statquest  2 years ago +4

      Muito obrigado!!! :)

    • @cesarbarros2931
      @cesarbarros2931 2 years ago +4

      @@statquest In Portuguese, "Muito obrigado!" - = Thank you very much! - can be replicated in the end by "Eu que agradeço", which brings a sense that the person who received great services or experienced great experiences is thankful in the end. As this seems to be the case...Eu que agradeço, Mr. Josh. Abraços fortes.

    • @statquest
      @statquest  2 years ago +3

      @@cesarbarros2931 VERY COOL! I've been to Brazil twice and hope I can return one day to learn (and speak) more Portuguese.

  • @TeaTimeWithIroh
    @TeaTimeWithIroh 3 years ago +47

    Thanks for all you do Josh! These videos help lay out the foundation for me - and help make the actual math easier to understand :-) BAM!!

    • @statquest
      @statquest  3 years ago +6

      Thank you very much! :)

  • @paulakaorisiya1517
    @paulakaorisiya1517 3 years ago +5

    I was very surprised and happy to see that Itaú supports this content! I've been working at Itaú for 6 years and now I am studying neural networks to improve some processes here. I love your videos Josh :)

  • @gundeepdhillon9099
    @gundeepdhillon9099 3 years ago +35

    I really appreciate your attention to detail whether its your content or personally reading and acknowledging each and every comment on social media (be it YT or linkedin).. you sir are the best teacher and a great human being...#Respect #Mentor #BestTeacher

    • @statquest
      @statquest  3 years ago +6

      Thank you very much!!! :)

  • @84mchel
    @84mchel 3 years ago +9

    The amount of value you provide with these videos is galactic! Keep it up. I was literally looking for a visual representation of a multi-input NN and what the ReLU (shape) looks like. Hard to imagine when you have 3 inputs (e.g. pixels); is it like a 4D ReLU shape?!

    • @statquest
      @statquest  3 years ago +3

      Yes, if we have 3 inputs, then we end up with a 4D shape and it's much harder to draw! :)

  • @mikaelbergman2093
    @mikaelbergman2093 3 years ago +8

    Hooray!!! Thank you Josh & StatQuest Land for this video! What an amazing approximately-Birthday-surprise!
    I really do like silly songs, mathematics, statistics, machine learning and I love StatQuest. Greetings from Mikael to every soulmate out there from Svalbard 78° North. BAM!!!

    • @statquest
      @statquest  3 years ago +6

      Hooray!!! Thank you Mikael!!! And thank you for helping me edit the first draft of this video!!! I'm looking forward to talking with you about Convolutional Neural Networks soon. :)

    • @gundeepdhillon9099
      @gundeepdhillon9099 3 years ago +1

      Believe it or not, these days I'm watching a lot of vids on Svalbard..... what a fantastic place and history... my fav vid is the one on the seed vault....amazing th-cam.com/video/2_OEsf-1qgY/w-d-xo.html

  • @sinaro93
    @sinaro93 1 year ago +1

    This is more than triple! This is quadruple, quintuple or even sextuple BAAAM!! I love the simplicity of your explanation.

  • @tymothylim6550
    @tymothylim6550 2 years ago +8

    What a great video! Thanks for all the hard work plotting those 3D points :)

    • @statquest
      @statquest  2 years ago +4

      Hooray!!! I'm glad you appreciate the work! I spent a lot of time on this video. :)

  • @juaneshberger9567
    @juaneshberger9567 1 year ago +1

    The quality of these videos is incredible. Thanks, Josh!

    • @statquest
      @statquest  1 year ago

      Glad you like them!

  • @minhtuanle9268
    @minhtuanle9268 1 year ago +1

    For all the effort to make this video, you deserve my respect

    • @statquest
      @statquest  1 year ago

      Thank you so much 😀!

  • @jennycotan7080
    @jennycotan7080 7 months ago +2

    Sir... Mr. Starmer, maybe I'll give myself your book about Machine Learning as a gift for the Lunar New Year if I get a great result on the coming Maths final exam. Your videos fit us tech kids so much!

    • @statquest
      @statquest  7 months ago

      Good luck! :)

  • @joshwang3500
    @joshwang3500 1 year ago +1

    Fantastic video, Josh! These animations and the accompanying text really help me explain the logic behind it. Thank you so much for all you do!!

    • @statquest
      @statquest  1 year ago

      Thank you very much! :)

  • @bijoydey479
    @bijoydey479 3 years ago +1

    One of the best teachers in statistics....👍👍👍

    • @statquest
      @statquest  3 years ago

      Thanks a lot 😊!

  • @rohit2761
    @rohit2761 2 years ago +2

    Hello Josh, I cannot express my gratitude for finding your channel. I am literally binge-watching to get conceptual clarity.
    Like a greedy subscriber, I just wanna request that you upload more *Deep Learning* videos
    #DeepLearning CNN RNN ImageCV etc.
    Content is magnificent
    May God bless you.

    • @statquest
      @statquest  2 years ago

      Thanks! There is already a CNN video th-cam.com/video/HGwBXDKFk9I/w-d-xo.html and I hope to have an RNN video out soon.

  • @user-ry5zu1wo4e
    @user-ry5zu1wo4e 2 months ago +1

    What an amazing explanation.
    Thank you so much

    • @statquest
      @statquest  2 months ago

      Thanks!

  • @pedramporbaha
    @pedramporbaha 2 years ago +1

    Triple BAM!!! After several years of ambiguity in machine learning, I found you! I love your content! In addition, I love that you're a multidimensional man (like the data in this video), which made me love you even more, because I am a music composer, hammered dulcimer player, and math-loving pharmacist too! You're inspiring to me! 🌸

    • @statquest
      @statquest  2 years ago +1

      Awesome! Thank you!

  • @rafael_l0321
    @rafael_l0321 2 years ago +1

    Unexpected Brazilian sponsorship! A BAM from Brazil!

    • @statquest
      @statquest  2 years ago

      Muito obrigado!

  • @twandekorte2077
    @twandekorte2077 3 years ago +4

    Great video! Your explanations are very intuitive as always. BAM

    • @statquest
      @statquest  3 years ago +1

      Thank you very much and thank you for your support!! BAM! :)

  • @samuelyang1870
    @samuelyang1870 1 year ago +1

    Amazing video, the quality of the explanation is way above the views you're getting. Keep it up!

    • @statquest
      @statquest  1 year ago

      Thanks, will do!

  • @lucarauchenberger628
    @lucarauchenberger628 2 years ago +1

    so clearly explained! stunning!

  • @ananthakrishnank3208
    @ananthakrishnank3208 1 year ago +1

    Truly a master of machine learning.

    • @statquest
      @statquest  1 year ago +1

      Thank you! :)

  • @enzy7497
    @enzy7497 1 year ago +1

    Amazing work. I loved this example so much! God bless you.

  • @BlasterMate
    @BlasterMate 3 years ago +1

    I would never have imagined that I could imagine a neural network. Thank you.

  • @gilao
    @gilao 17 days ago +1

    Another great one! Thanks!

    • @statquest
      @statquest  17 days ago

      Thank you! :)

  • @harishbattula2672
    @harishbattula2672 2 years ago +1

    Thank you very much for the explanation; I recommended all your videos to my classmates.

    • @statquest
      @statquest  2 years ago +1

      Awesome, thank you!

  • @TheGoogly70
    @TheGoogly70 3 years ago +1

    Great video. You've explained the complex concepts so simply. I hope there will be a video on how to determine the weights and biases in a neural network, such as for the Iris example.

    • @statquest
      @statquest  3 years ago +2

      We use backpropagation to determine the weights and biases, and backpropagation is covered in these videos: th-cam.com/video/IN2XmBhILt4/w-d-xo.html th-cam.com/video/iyn2zdALii8/w-d-xo.html th-cam.com/video/GKZoOHXGcLo/w-d-xo.html th-cam.com/video/xBEh66V9gZo/w-d-xo.html

    • @TheGoogly70
      @TheGoogly70 3 years ago +2

      @@statquest Great! Thanks a ton.

  • @pielang5524
    @pielang5524 3 years ago +1

    excellent explanation! Thank you

  • @zchasez
    @zchasez 3 years ago +1

    Thanks for all the videos that you made! they've been a great help!
    That being said, is it possible to make a video about Accuracy, Recall, and Precision? I really can't wrap my head around these concepts. Thanks again!!

    • @statquest
      @statquest  3 years ago +2

      Yes, I can do that.

  • @Gautam1108
    @Gautam1108 1 year ago +1

    Excellent!! Thank you so much Josh

  • @NoirKuro_
    @NoirKuro_ 5 months ago +1

    Love this video; compared with other explanation videos, this is the simplest and puurrrrrfffeeeecccttooo (I mean, after watching this video I was able to make my multistep multivariate deep neural network model even without a CS background). Thank you *sorry for my "broken English"

    • @statquest
      @statquest  5 months ago

      Glad you liked it!

  • @ngusumakofu1
    @ngusumakofu1 1 year ago +1

    I came here for the knowledge and the BAM!

  • @Ash-bc8vw
    @Ash-bc8vw 2 years ago +1

    Awesome video!

  • @naomyduarteg
    @naomyduarteg 1 year ago +1

    Love the comments such as "I thought they were all petals!" 🤣🤣🤣
    Great series!

    • @statquest
      @statquest  1 year ago +1

      Thank you so much! BAM! :)

  • @MilanMarojevic
    @MilanMarojevic 1 year ago +1

    Really useful ! Thanks :)

    • @statquest
      @statquest  1 year ago

      Glad to hear that!

  • @spearchew
    @spearchew 3 years ago +1

    A great series on NN. I wonder what would happen if you found an iris in the woods that had features outside of the normalised "zero to one" range of our training data. If it was a very small iris, I guess our input would then have to be negative. If it was a freakishly big iris, our input values might be >>1.0.... Perhaps this would break the squiggle machine.

    • @statquest
      @statquest  3 years ago +2

      Probably. And that is, in general, a limitation of all machine learning methods. If new data is outside of the range of the original training data, your predictions are probably going to be pretty random.

  • @lilaberkani4376
    @lilaberkani4376 3 years ago +3

    You should consider singing with Phoebe Buffay sometime, hahah! I love your videos

  • @andrewdouglas9559
    @andrewdouglas9559 1 year ago +2

    I can't imagine how much time it must take to make one of these videos.

    • @statquest
      @statquest  1 year ago +1

      It takes a lot of time, but it's fun! :)

  • @BaerFlorian
    @BaerFlorian 3 years ago

    Thanks for the amazing video! Maybe you'll find some time to also make a video on how to apply backpropagation to a neural network with multiple inputs and outputs.

    • @statquest
      @statquest  3 years ago +2

      That's coming up in a few weeks. We'll do an example using cross entropy and softmax.

  • @youhadayoub9567
    @youhadayoub9567 1 year ago +1

    thanks a lot, you are really a life saver

    • @statquest
      @statquest  1 year ago

      You're welcome!

  • @michaelfreeman4460
    @michaelfreeman4460 3 years ago

    Looking forward to seeing your take on LSTMs and backpropagation through time! Interesting stuff there ^_^

    • @statquest
      @statquest  3 years ago

      I'll keep that in mind!

  • @bessa0
    @bessa0 2 years ago +1

    Dude, I love you.

  • @admggm
    @admggm 3 years ago +5

    Questions: 1. What happens with inputs of different types: discrete vs. continuous? 2. What happens if we would like to have, for example, a "predominant color" as an input along with the widths? Thanks a lot!

    • @Rufus1250
      @Rufus1250 3 years ago

      "predominant color" is a categorical value. Therefore you would need to do a one hot encoding before. e.g. blue = (0,1), red = (1,0) for 2 possible colors.

    • @statquest
      @statquest  3 years ago

      That's correct. The inputs for a categorical input would just be 0 or 1 (rather than values between 0 and 1).
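A minimal sketch of the one-hot idea the replies describe; the color list and the example widths are made up for illustration, not from the video:

```python
COLORS = ["blue", "red", "green"]  # hypothetical categories

def one_hot(color):
    """Encode a category as a 0/1 vector with a single 1 at the category's position."""
    vec = [0.0] * len(COLORS)
    vec[COLORS.index(color)] = 1.0
    return vec

# The categorical inputs then sit side by side with the scaled continuous ones:
petal_width, sepal_width = 0.5, 0.42   # already scaled to the [0, 1] range
inputs = [petal_width, sepal_width] + one_hot("red")
print(inputs)  # [0.5, 0.42, 0.0, 1.0, 0.0]
```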

  • @user-bz8nm6eb6g
    @user-bz8nm6eb6g 3 years ago +1

    love it!

  • @RomaineGangaram
    @RomaineGangaram 2 months ago +1

    Bruh you make this easy. How?! I made a new lmm because of you! Shameless congratulations

    • @statquest
      @statquest  2 months ago

      BAM! :)

  • @s25412
    @s25412 3 years ago +1

    Could you pls confirm @ 10:51 and 12:00, when you say "change the scale for y-axis," does that simply mean zooming in on that y-axis range and looking at values in that range only? Or does it involve mathematical manipulation of y values to fit that range?

    • @statquest
      @statquest  3 years ago +4

      Changing the scale on the y-axis just means zooming in on the part we are interested in. The values remain the same; we just zoom in on them.

  • @user-go7lu8hq5l
    @user-go7lu8hq5l 3 years ago +1

    very useful

  • @thamburus7332
    @thamburus7332 1 year ago +1

    Thank You

  • @robert-dr8569
    @robert-dr8569 1 year ago +1

    I love your simple and clear explanations!

  • @a.lex.
    @a.lex. 2 months ago

    Hi StatQuest, you said that scaling the inputs between 0 and 1 makes the math easier, but what would change if the inputs were not scaled? Also, great series of videos :))

    • @statquest
      @statquest  2 months ago +1

      Not much. The numbers would be larger and wouldn't fit so nicely in the small boxes I created.

  • @ADHAM840
    @ADHAM840 3 months ago

    What an amazing illustration of this specific topic! What I didn't get or follow is why the y-axis scale for each iris type was different (0 to 2 for Setosa, -6 to 6 for Versicolor and Virginica). Where did these numbers come from? Thanks again for your style of explaining hard stuff that most people take for granted :)

    • @statquest
      @statquest  3 months ago

      To learn why the y-axis values are different, see: th-cam.com/video/CqOfi41LfDw/w-d-xo.html

  • @libalele3460
    @libalele3460 1 year ago

    Great video once again! Is optimizing the weights and biases in a NN with several inputs the same process as a NN with just 1 input?

    • @statquest
      @statquest  1 year ago +1

      Yes. However, if you'd like to look at examples of how the derivatives are calculated, see: th-cam.com/video/KpKog-L9veg/w-d-xo.html th-cam.com/video/M59JElEPgIg/w-d-xo.html th-cam.com/video/6ArSys5qHAU/w-d-xo.html th-cam.com/video/xBEh66V9gZo/w-d-xo.html

  • @abdullahalmussawi5291
    @abdullahalmussawi5291 7 months ago +1

    Hey Josh, amazing video as always. Can you answer my dumb question please :)
    In the previous video you applied the ReLU function after adding the final bias; why didn't we do that in this video? Does adding more than one input or output affect this?
    Thanks again for the amazing content.

    • @statquest
      @statquest  7 months ago +1

      I didn't add a final ReLU because I didn't need it. There are no rules for building neural networks and you can just build them the way that works best with whatever data you have.

    • @revatinanda6318
      @revatinanda6318 7 months ago +2

      @@statquest Always a fan of your content; I have suggested that others learn the basics of ML through your videos.
      Really appreciate your quick response to @abdullahalmussawi5291's query....
      God bless you brother... :)

  • @nebuer54
    @nebuer54 2 years ago +2

    Awesome video as always! Beginner here, so a couple of questions -
    1) Do the outputs refer to probability values? E.g., at 13:07 does the output value of 0.86 mean there's an 86% chance of the flower being a Versicolor, given this particular sepal and petal width? If so, is there a special case (distribution?) where the output probabilities sum to 1?
    2) Does the number of inputs play a critical role in the choice of any key component of the architecture - like which loss function or activation function to use, etc.?
    3) At what point in an n-dimensional crinkled hyperspace does the universe go n-BAM?! just kidding. not a real question :D

    • @statquest
      @statquest  2 years ago +1

      1) In this case, the outputs are not probabilities (you can tell because they don't add up to 1). The next video in this series shows that it is very common to add "SoftMax" to this sort of Neural Network to give "probabilities". I put the "probabilities" in quotes because their interpretation comes with some caveats. For more details, see: th-cam.com/video/KpKog-L9veg/w-d-xo.html

    • @statquest
      @statquest  2 years ago +1

      2) The inputs don't really affect the loss function or activation function. However, they might affect the number of hidden layers and the number of nodes in each layer.
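Since the reply above mentions SoftMax, here is a minimal sketch of how it turns raw output scores into positive values that sum to 1. The scores are illustrative, not the exact numbers from the video:

```python
import math

def softmax(scores):
    """Map raw output scores to positive values that sum to 1."""
    m = max(scores)  # subtract the max score for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative raw outputs for setosa, versicolor and virginica:
probs = softmax([1.43, 0.86, -0.1])
print(probs)
print(sum(probs))  # sums to 1, up to floating-point rounding
```

Note that SoftMax preserves the ranking of the raw scores, so ArgMax (picking the largest output) gives the same classification before and after.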

  • @andersk
    @andersk 2 years ago +1

    Hi Josh, thanks again for an awesome video. At 8:13 you mention that these values for width are obviously scaled, so you would do the same with a validation set or a prediction set - is there no potential issue with a scaled new observation being a negative number? Really shooting in the dark here, but I'm thinking that somewhere in the neural network there could be a situation where subtracting a very small width would give a number close to zero; but if you now have scaled negative values, the two minus signs would become a plus, and the network might incorrectly classify this flower with smaller petals than any in the training set as one with bigger ones, because it went past the training limits?

    • @andersk
      @andersk 2 years ago

      I'm going on a real tangent here, so if there's nothing to worry about, a simple 'no' would be a fine answer :D thanks again!

    • @statquest
      @statquest  2 years ago

      I'm not really sure. My guess is that if you think you might run into this sort of problem, then you need to be careful with how you scale your data.

    • @andersk
      @andersk 2 years ago +1

      @@statquest will do, was only asking in case it was a known pitfall but too rare to put into the video. Thanks for your reply & all these videos, I'm sure you get a message every hour on this but you're really the best educator I've ever come across 🙏

  • @dikaixue3050
    @dikaixue3050 2 years ago +1

    thank you

  • @victorreloadedCHANEL
    @victorreloadedCHANEL 3 years ago +1

    Good morning! I've tried to buy some study guides but there is no option "pay with credit card" after selecting "pay with paypal" and going to the login screen.
    How can we solve this?
    Thank you!

    • @statquest
      @statquest  3 years ago

      If you scroll down on the login screen you should see a button that says Pay With Debit or Credit Card. It is possible you did not scroll down far enough - it's hard to see. However, I just tried it and it worked. Let me know if you are still having trouble - you can contact me via: statquest.org/contact/

  • @wesleysbr
    @wesleysbr 6 months ago

    Another fantastic class Josh! Can I ask you something? In the case of classifying flowers into versicolor, setosa and virginica, to estimate the network parameters you needed to train the model with a response variable, right?

    • @wesleysbr
      @wesleysbr 6 months ago +1

      Josh I found the answer:
      "I started out by creating a neural network with 3 outputs, one for setosa, one for versicolor and one for virginica. I then trained the neural network with backpropagation to get the full neural network used in this video. I then ran the same training data through the network to see which output gave high values for setosa and I then labeled that output "setosa"."
      Thanks

    • @statquest
      @statquest  6 months ago

      yep

  • @WALID0306
    @WALID0306 8 months ago +1

    thanks !!

    • @statquest
      @statquest  8 months ago

      Bam! :)

  • @sameerrao20
    @sameerrao20 8 months ago

    Thanks a lot, this is amazing. I had been following your book alongside the lectures (it is equally amazing as well), but these topics are not present there... Desperately waiting for the next book; is there any release date in hand? Kindly suggest any revision alternatives until the 2nd edition of the book comes out!!

    • @statquest
      @statquest  8 months ago

      I'm working on a new book focused just on neural networks that covers the theory (like this video) but also has worked out code examples. However, it's still at least a year away. :(

  • @AlberthAndrade
    @AlberthAndrade 2 months ago

    Hey Josh, good evening!
    First, thanks for sharing your knowledge with us! Could you please help with the Virginica score? You set +1 after the sum and, unfortunately, I was not able to understand why. Was this value set randomly? And why do you not set new values for other plants? Thank you!

    • @statquest
      @statquest  2 months ago

      All of the weights and biases in a neural network are always determined using something called Backpropagation. To learn more, see: th-cam.com/video/IN2XmBhILt4/w-d-xo.html

  • @hoaxuan7074
    @hoaxuan7074 3 years ago

    If you understand ReLU as a switch you can work out by hand the 3 dot products the net collapses to for each particular input. If I had a laptop instead of a phone I'd do it for you.
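The observation in this comment can be sketched: for any fixed input, each ReLU node is either "on" (passes its value through) or "off" (outputs 0), so on that region of input space the network collapses to one plain linear function - a dot product plus a bias. All the weights below are made up for illustration:

```python
def relu(x):
    return max(0.0, x)

# Toy network: 2 inputs, 2 hidden ReLU nodes, 1 output (made-up weights).
w_hidden = [[2.0, -1.0], [0.5, 1.5]]   # input weights, one row per hidden node
b_hidden = [0.1, -0.2]
w_out = [1.0, -0.5]
b_out = 0.3

def forward(x):
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# For x = [0.5, 0.25] both hidden nodes are "on", so on this region the
# whole network equals the single linear function 1.75*x1 - 1.75*x2 + 0.5:
x = [0.5, 0.25]
print(forward(x))                       # network output
print(1.75 * x[0] - 1.75 * x[1] + 0.5)  # the collapsed dot product, same value
```

With different inputs, a different subset of ReLUs switches on, and the network collapses to a different linear function - which is exactly why the fitted surface is made of flat, bent pieces.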

  • @4wanys
    @4wanys 3 years ago +1

    Great video, thank you. Are you going to apply this series in Python?

  • @user-to4zj9tg8s
    @user-to4zj9tg8s 10 months ago

    Thanks for your great videos. I have enjoyed all the previous videos, but I have to admit I got a bit lost with this one. From what I understood, here we first train the neural network to give a perfect fit for Setosa, so we arrive at optimal values for the weights, say w1, b1, w2, b2, etc. After this we train for Versicolor. Won't this change the previous values of the weights, which we already optimized for Setosa?

    • @statquest
      @statquest  10 months ago +1

      We actually train all 3 outputs at the same time - so those weights work with all 3.
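A tiny sketch of what "train all 3 outputs at the same time" means: a single loss adds up the error from every output node, so one gradient step updates the shared weights with respect to all three species at once. The predictions and one-hot target below are illustrative, not from the video:

```python
def total_squared_error(predicted, target):
    """One loss over all output nodes: the sum of squared errors."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target))

# Illustrative raw outputs for setosa, versicolor, virginica,
# and the ideal target for a setosa training example:
predicted = [1.43, 0.26, -0.1]
target = [1.0, 0.0, 0.0]
print(total_squared_error(predicted, target))
```

Because every hidden-layer weight feeds all three outputs, its gradient mixes contributions from all three error terms, so no output is optimized in isolation.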

    • @user-to4zj9tg8s
      @user-to4zj9tg8s 10 months ago +1

      @@statquest Thank you for a quick reply and clearing my confusion !!!

  • @rs9130
    @rs9130 2 years ago

    Hello author,
    I want to train a model to predict a heatmap (mean squared error loss) and a binary segmentation (binary cross-entropy loss).
    I tried to train the model with multiple branches (2 duplicate branches for the 2 outputs), but the final output favours only one type of output.
    For example, when I train using model.fit with equal loss weights, the output is good for the heatmap, but the binary mask output is wrong and gives pixels of 1 for the regions similar to the heatmaps.
    And when I train using a GradientTape loop, the output is good for the segmentation mask, but the heatmaps are wrong and look like masks.
    How can I solve this? Please give me your suggestions.
    Thank you

    • @statquest
      @statquest  2 years ago

      Unfortunately I have no idea.

  • @Marcelscho
    @Marcelscho 3 years ago

    Hey! Please make a video on Expectation Maximization. Thanks!

    • @statquest
      @statquest  3 years ago +1

      I'll keep that in mind.

  • @krrsh
    @krrsh 1 year ago

    How are you selecting the weights for multiplying, and the bias for adding, to get the y value for the different outputs?

    • @statquest
      @statquest  1 year ago

      The weights and biases are all optimized via backpropagation, just like they are for every other neural network. For details about backpropagation, see: th-cam.com/video/IN2XmBhILt4/w-d-xo.html th-cam.com/video/iyn2zdALii8/w-d-xo.html and th-cam.com/video/GKZoOHXGcLo/w-d-xo.html

  • @wendy6792
    @wendy6792 2 years ago

    Thank you for your nice explanation. Could you please let me know how you derived the values for those weights (e.g. x -2.5, x -1.5, etc.)? Many thanks in advance.

    • @statquest
      @statquest  2 years ago

      The weights and biases were derived using backpropagation. For details, see: th-cam.com/video/IN2XmBhILt4/w-d-xo.html th-cam.com/video/iyn2zdALii8/w-d-xo.html and th-cam.com/video/GKZoOHXGcLo/w-d-xo.html

    • @wendy6792
      @wendy6792 2 years ago

      @@statquest Thank you Josh! Will have a good look at them!

  • @starkarabil9260
    @starkarabil9260 2 years ago

    Thanks for this great video. I have a dummy question: how do we know in this example that if we add ZERO, this is the output for Setosa? 7:35

    • @statquest
      @statquest  2 years ago

      That bias term, 0, is the result of backpropagation. For details, see: th-cam.com/video/IN2XmBhILt4/w-d-xo.html

  • @rodrigoamoedo8523
    @rodrigoamoedo8523 3 years ago

    Great video, but you have the sepals and petals pointed out backwards. Keep up the good work; love your content

    • @statquest
      @statquest  3 years ago

      Can you provide me with a link that shows that I got the petals and sepals backwards? Pretty much every single page I found is consistent with what I present here. For example, see: images.app.goo.gl/iAbv954ML8dExUru9
      images.app.goo.gl/ugN6JPWs6of1FBWj6
      images.app.goo.gl/ZbKVgCkdnC5hdgBA9

  • @paulbrown5839
    @paulbrown5839 3 years ago

    At 08:42, why did you pick Setosa first? How did you know the output neuron type should be Setosa? Is it because your sample for this forward pass is labelled as Setosa?

    • @statquest
      @statquest  3 years ago

      I started out by creating a neural network with 3 outputs, one for setosa, one for versicolor and one for virginica. I then trained the neural network with backpropagation to get the full neural network used in this video. I then ran the same training data through the network to see which output gave high values for setosa and I then labeled that output "setosa".

  • @Mohamm-ed
    @Mohamm-ed 3 years ago +2

    I love this channel because of the songs. Thanks very much.. Hooray

    • @statquest
      @statquest  3 years ago +1

      Thank you! :)

  • @vigneshvicky6720
    @vigneshvicky6720 3 years ago +1

    Only in this video did I finally learn how the sum at the end works

  • @NoNonsense_01
    @NoNonsense_01 2 years ago

    At 6:17, when the output value is multiplied by negative 0.1 and the new y value is negative 0.16, shouldn't it be plotted below 0 on the Setosa axis? Or am I missing something?

    • @statquest
      @statquest  2 years ago +1

      It is plotted below zero. But since the number is close to zero, it may not be easy to see.

    • @NoNonsense_01
      @NoNonsense_01 2 years ago +1

      @@statquest Noted! Thanks for the response Mr. Starmer. Know that you are an incredible teacher and greatly appreciated by us!

  • @NimN0ms
    @NimN0ms 3 years ago +1

    Hello Josh, this might be completely out of left field, but if you take requests, could you explain Latent Class Analysis?

    • @statquest
      @statquest  3 years ago +1

      I'll keep that in mind.

    • @NimN0ms
      @NimN0ms 3 years ago +1

      @@statquest Thank you! I have watched your videos through my undergrad and still watch a lot of them as I am getting my PhD (Epidemiology)!!! Thanks for all you do!

  • @beakmann
    @beakmann 3 years ago +4

    It's not a squiggle anymore :/

    • @statquest
      @statquest  3 years ago +1

      Nope, it's a crinkled surface.

  • @thenkindler001
    @thenkindler001 1 year ago

    Still not sure how the weights and biases are being initialised. Are you stipulating them at random, or are they determined by the data and, if so, how?

    • @statquest
      @statquest  1 year ago +1

      In a neural network, weights and biases start out as random numbers, but are then optimized using Gradient Descent and Backpropagation. For details, see: th-cam.com/video/sDv4f4s2SB8/w-d-xo.html and th-cam.com/video/IN2XmBhILt4/w-d-xo.html

    • @thenkindler001
      @thenkindler001 1 year ago +1

      @@statquest BAM

  • @MandeepKaur-ks6lk
    @MandeepKaur-ks6lk a month ago

    We understood the calculation of the weights and biases. But how would I know about the nodes... how do I understand the logic for connecting all the inputs to the activation functions and to the outputs? And how many hidden layers do we need? There is no example with more than one hidden layer.
    Could you please help me here?

    • @statquest
      @statquest  several months ago +1

      Designing neural networks is more of an art than a science - there are general guidelines, but generally speaking you find something that works on a related dataset and then train it with your own data. In other words, you rarely build your own neural network from scratch. However, if you are determined to build your own, the trade-off is this: the more hidden layers and nodes within the hidden layers, the better your model will be able to fit any kind of data, no matter how complicated, but at the same time you will increase the computation, and training will be slow.
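
  To make that "more layers and nodes = more capacity but more computation" trade-off concrete, here is a small helper of my own (not StatQuest code) that counts how many parameters a fully connected network would have to train; the layer sizes below are illustrative.

  ```python
  def mlp_param_count(n_inputs, hidden_sizes, n_outputs):
      """Count the weights and biases in a fully connected network."""
      sizes = [n_inputs] + list(hidden_sizes) + [n_outputs]
      # each layer contributes (fan_in * fan_out) weights plus fan_out biases
      return sum(sizes[i] * sizes[i + 1] + sizes[i + 1]
                 for i in range(len(sizes) - 1))

  # An iris-style network: 2 inputs, one hidden layer of 2 nodes, 3 outputs
  print(mlp_param_count(2, [2], 3))        # → 15
  # A deeper, wider network can fit more complicated shapes, but every one
  # of these parameters must be optimized during training
  print(mlp_param_count(2, [64, 64], 3))   # → 4547
  ```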

  • @arhanahmed8123
    @arhanahmed8123 several months ago

    Well explained, but how do you visualize the function we are optimizing when we have more than 2 inputs? We cannot visualize more than 3 dimensions! Please explain

    • @statquest
      @statquest  several months ago

      I have no idea.

  • @r0cketRacoon
    @r0cketRacoon 4 months ago

    How does backpropagation work with multiple outputs?
    Could you do another video on that?

    • @statquest
      @statquest  4 months ago

      See: th-cam.com/video/xBEh66V9gZo/w-d-xo.html

    • @r0cketRacoon
      @r0cketRacoon 4 months ago +1

      @@statquest oh, really helpful, thanks

  • @shivanshjayara6372
    @shivanshjayara6372 3 years ago

    Here we have taken the last biases as 0, 0 and 1. Is that just an example? Because we could end up with different optimized bias values, and in that case the output values would also change... isn't it?

    • @statquest
      @statquest  3 years ago +1

      I'm not sure I understand your question. In this example, the neural network is trained given the Iris dataset. If we trained it with different data (or even just a different random seed for the same data), we would probably get different values for the biases.

  • @JuanCamiloAcostaArango
    @JuanCamiloAcostaArango 6 months ago

    Why didn't you use the ReLU function on the outputs like in the previous example with the doses? 🤔

    • @statquest
      @statquest  6 months ago

      Because I didn't need to. There are no rules for building neural networks. You simply design something that works.
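
  For context, the video sends the raw output values through SoftMax/ArgMax rather than another ReLU. A minimal sketch of those two functions (the raw output values below are made up for illustration, not taken from the video):

  ```python
  import math

  def softmax(values):
      """Convert raw output values into probabilities that sum to 1."""
      exps = [math.exp(v) for v in values]
      total = sum(exps)
      return [e / total for e in exps]

  def argmax(values):
      """Index of the largest raw output value."""
      return max(range(len(values)), key=lambda i: values[i])

  # Hypothetical raw outputs for (Setosa, Versicolor, Virginica)
  raw = [1.43, -0.40, 0.23]
  print(softmax(raw))   # three probabilities that sum to 1
  print(argmax(raw))    # → 0, i.e. Setosa has the largest output
  ```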

  • @user-ik8my9kb5h
    @user-ik8my9kb5h 3 years ago

    So if I get the theory right (in a vague sense), a neural network is just giving the computer a set of functions (the nodes); the computer transforms them and then fuses them to create new functions, one for each category, so that each category's function has a greater value than the others when the input comes from that category.
    Did I get it right?

    • @statquest
      @statquest  3 years ago +2

      Yes, that pretty much sums it up.

    • @user-ik8my9kb5h
      @user-ik8my9kb5h 3 years ago +2

      @@statquest You are officially a life saver.

  • @minakshimathpal8698
    @minakshimathpal8698 2 years ago

    Hi Josh... Your videos on neural networks are just awesome, but please help me understand how you (or the NN) decide the scale of the y coordinate. For Setosa it was 0 to 1, and for the other two species it is different again.

    • @statquest
      @statquest  2 years ago

      What time point, minutes and seconds, are you asking about?

    • @minakshimathpal8698
      @minakshimathpal8698 2 years ago

      @@statquest (9.44 to 9.57) and (3.44) and (12.1 to 12.9)

  • @ganavimadduri7834
    @ganavimadduri7834 3 years ago

    Hi. Please make a video on LightGBM as well as CatBoost. 🙏🙏

    • @statquest
      @statquest  3 years ago +1

      I'll keep that in mind.

  • @iReaperYo
    @iReaperYo 2 months ago

    Hi StatQuest, something I don't understand is why you put examples through the neural network to 'fit' the curve to the training set. Wouldn't applying a neural network with initialised weights inherently fit the training set? Is this just for illustrative purposes, to show that the curves can be formed by putting examples into our function and getting an output/prediction back?
    Is this essentially the simpler way to explain neural nets without explicitly showing us the equations that each activation would represent? Or are you essentially plugging in the examples one would use to compare to the ground-truth values in the test set?
    You're essentially showing us visually what curve the *current* parameters approximate/estimate to match the underlying function, but doing it step by step? Are the curves you're getting fits of the test set?

    • @statquest
      @statquest  2 months ago +2

      The main idea of neural networks is that they are functions that fit shapes to your data, and by running values through a "trained" neural network, I can illustrate both the shape and how that shape came to be. If you'd like to learn about training a neural network, see: th-cam.com/video/IN2XmBhILt4/w-d-xo.html

    • @iReaperYo
      @iReaperYo 2 months ago

      @@statquest This makes sense, thank you for your great work. I will give the video a watch, going through your series to learn NLP!

    • @statquest
      @statquest  2 months ago +2

      @@iReaperYo Here's the whole playlist: th-cam.com/video/CqOfi41LfDw/w-d-xo.html
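
  One way to see the "functions that fit shapes" point from the reply above is to run a grid of inputs through a small trained network and look at what comes out. This toy one-input network uses made-up weights (not values from any StatQuest video), but the mechanics are the same: each hidden node is a ReLU of a scaled, shifted input, and the output combines them.

  ```python
  def relu(x):
      """ReLU activation: negative values become 0."""
      return max(0.0, x)

  def tiny_net(x):
      # two hidden nodes, each a ReLU of a scaled and shifted input...
      top = relu(3.0 * x - 1.0)
      bottom = relu(-2.0 * x + 1.5)
      # ...scaled, summed, and shifted by a final bias to form the output
      return 0.5 * top - 0.8 * bottom + 0.6

  # Running many inputs through the (already "trained") network traces out
  # the fitted shape - this is all the video is doing when it draws curves
  curve = [(i / 10, tiny_net(i / 10)) for i in range(11)]
  ```

  Training is the separate step that chooses the weights; once they are fixed, drawing the curve is just repeated evaluation like this.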

  • @fuzzywuzzy318
    @fuzzywuzzy318 3 years ago +1

    My gods! Such a complicated multi-layer, multi-node neural network, but you teach it in such an easy-to-follow, understandable way! Many professors in universities get paid highly but don't teach as well as you, and only make students feel stupid!!!!!!! BAM!!

    • @statquest
      @statquest  3 years ago

      Thank you! :)

  • @alexroberts6416
    @alexroberts6416 8 months ago

    I don't understand how you already started with the weights and biases. Did you already use backpropagation etc. with known data?

    • @statquest
      @statquest  8 months ago

      All the weights and biases for all neural networks come from backpropagation applied to a training dataset, so that's what I used here. If you'd like to learn more about backpropagation, see: th-cam.com/video/IN2XmBhILt4/w-d-xo.html

  • @hoaxuan7074
    @hoaxuan7074 3 years ago

    It's well worth studying all the math of the dot product, including the statistical and DSP filtering aspects. If you look into the basement of the NN castle, you are left a little shocked by how weak and crumbly its foundation is, because even top researchers started with NN books that begin with the term "weighted sum" and work forward from there, never going back to look at the details. And as I said before, ReLU is a sad, misunderstood switch.

    • @statquest
      @statquest  3 years ago

      Noted

    • @hoaxuan7074
      @hoaxuan7074 3 years ago

      @@statquest As an example if you want to make the output of a dot product a specific value (say 1) for a specific input vector you can make the angle to the 'weight' vector zero. You may even get error correction in that case (reduction in variance for noise in (across) the input vector.) If you make the angle close to 90 degrees then the magnitude of the weight vector has to be large to get 1 out and the noise will be greatly magnified. The variance equation for linear combinations of random variables applies to the dot product. Understanding such things you may construct say a general associative memory out of the dot product. Eg. Vector to vector random projection, bipolar (+1,-1) binarization, then the dot product. To train find the recall error, divide by the number of dimensions, then add or subtract that to each weight to make the error zero as indicated by the +1 or -1 binarization. If you look into the matter you will see that you have added a little Gaussian noise to all the prior associations (CLT.) The RP+binarization is a locality sensitive hash. Close inputs only give a few bits different in the output. To understand the system you could consider the case of a full hash where the slightest change in the input produced a totally random change.
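
  For readers who find the comment above hard to follow, here is my reading of the associative memory it describes, as a runnable sketch. Every concrete detail - the dimension, the Gaussian projection, the key distribution, the learning rule - is my interpretation of the commenter's construction, not a reference implementation: a fixed random projection plus bipolar binarization acts as the locality-sensitive hash, and each value is stored by spreading the recall error evenly across the weights.

  ```python
  import random

  DIM = 256
  rng = random.Random(0)

  # fixed random projection rows: the locality-sensitive hash
  projection = [[rng.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
  weights = [0.0] * DIM

  def dot(a, b):
      return sum(x * y for x, y in zip(a, b))

  def code(key):
      # random projection followed by bipolar (+1, -1) binarization
      return [1.0 if dot(row, key) >= 0 else -1.0 for row in projection]

  def store(key, value):
      h = code(key)
      error = value - dot(weights, h)         # recall error for this key
      for i in range(DIM):                    # divide the error by DIM and
          weights[i] += (error / DIM) * h[i]  # add/subtract it per weight

  def recall(key):
      return dot(weights, code(key))

  k1 = [rng.gauss(0, 1) for _ in range(DIM)]
  k2 = [rng.gauss(0, 1) for _ in range(DIM)]
  store(k1, 1.0)
  store(k2, -1.0)
  ```

  After the second `store`, `recall(k2)` is exact (the correction zeroes its error), while `recall(k1)` is 1.0 plus a small Gaussian-like disturbance: storing each new association adds a little noise to all prior ones, just as the comment says.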

  • @isratara3933
    @isratara3933 2 years ago +1

    Hi, do you have the code for this please?

    • @statquest
      @statquest  2 years ago

      I'm currently working on a series of videos that show everything you need to know to create neural networks in PyTorch-Lightning.

  • @arifproklamasi8120
    @arifproklamasi8120 3 years ago +1

    You get high quality content for free, Bam

  • @sallu.mandya1995
    @sallu.mandya1995 3 years ago

    It would be great if you taught SQL, AI, DL, high school maths, and history toooooooooo

    • @statquest
      @statquest  3 years ago +1

      Maybe one day I will. :)

  • @julescesar4779
    @julescesar4779 3 years ago +1

  • @mohamadsamaei555
    @mohamadsamaei555 a year ago +1

    Wow, I didn't know there were living creatures in Svalbard other than polar bears (((:

  • @Fan-vk9gx
    @Fan-vk9gx 2 years ago +1

    You are a genius! And a very kind one! Thank you for all these things you made. I was just wondering, can items in your store be shipped to Canada? May you have more fans and make more money in the near future - you deserve them!

    • @statquest
      @statquest  2 years ago

      Thanks! I'm pretty sure that items in my store can be shipped pretty much anywhere in the world, including Canada.

  • @troller7779
    @troller7779 3 years ago

    Hey Josh, this is urgent...... Can you PLEASE PLEASE provide me with the EXCEL data sheet for the "LDA clearly explained" video (the 10,000-gene data sheet which you plotted in 2 dimensions in that video)....... I have a class presentation tomorrow.... !!
    I just need to show them that I do have the data sheet. I am using your video for the presentation.

    • @statquest
      @statquest  3 years ago +1

      Unfortunately I have no idea where that data is. Good news, though, I use the standard Iris dataset in this R code: github.com/StatQuest/linear_discriminant_analysis_demo/blob/master/linear_discriminant_analysis_demo.R

  • @MrCracou
    @MrCracou 3 years ago +2

    Fisher was here. Iris forever!

    • @statquest
      @statquest  3 years ago

      Bam!

    • @MrCracou
      @MrCracou 3 years ago +1

      @@statquest This is really excellent. Would you allow me to share links to these videos with my students?

    • @statquest
      @statquest  3 years ago

      @@MrCracou Of course. Please share the links with your students.