Neural Network from Scratch | Mathematics & Python Code

  • Published on 22 Jan 2025

Comments • 286

  • @G83X
    @G83X 3 years ago +82

    In the backward function of the dense class you're returning a matrix which uses the weight parameter of the class after updating it, surely you'd calculate this dE/dX value before updating the weights, and thus dY/dX?

    • @independentcode
      @independentcode  3 years ago +19

      Wow, you are totally right, my mistake! Thank you for noticing (and well caught!). I just updated the code and I'll add a comment on the video :)
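
      For reference, a minimal sketch of the corrected Dense layer under this fix (assuming the conventions from the video: weights of shape (output_size, input_size), column-vector inputs). The key point is that the input gradient is computed from the old weights, before they are updated:

      import numpy as np

      class Dense:
          def __init__(self, input_size, output_size):
              self.weights = np.random.randn(output_size, input_size)
              self.bias = np.random.randn(output_size, 1)

          def forward(self, input):
              self.input = input
              return np.dot(self.weights, self.input) + self.bias

          def backward(self, output_gradient, learning_rate):
              # dE/dW = dE/dY . X^T  (uses the input stored during forward)
              weights_gradient = np.dot(output_gradient, self.input.T)
              # dE/dX = W^T . dE/dY, computed BEFORE the weights change
              input_gradient = np.dot(self.weights.T, output_gradient)
              # only then apply the gradient descent step
              self.weights -= learning_rate * weights_gradient
              self.bias -= learning_rate * output_gradient
              return input_gradient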

    • @independentcode
      @independentcode  3 years ago +18

      I can't add text or some kind of cards on top of the video, so I pinned this comment in the hope that people will notice it!

    • @StarForgers
      @StarForgers 2 years ago +3

      @@independentcode Why can't you?
      Did the youtube developers remove that awesome function too?
      No wonder I've felt things have been off for so long!

    • @jonathanrigby1186
      @jonathanrigby1186 2 years ago

      Can you please help me with this... I want a chess AI to teach me what it learnt
      th-cam.com/video/O_NglYqPu4c/w-d-xo.html

    • @blasttrash
      @blasttrash 2 years ago

      Just curious, what happens if we propagate the updated weights backward like in the video? Will it not work? Or will it just converge slowly?

  • @ldx8492
    @ldx8492 1 year ago +21

    This video, unlike the plethora of other videos on "hOw tO bUiLd A NeUrAl NeTwOrK fRoM sCraTcH", is simply the best. It deserves 84M views, not 84k. It is straight to the point: no 10-minute explanation of pretty curves with zero math, no 20-minute introduction on how DL can change the world.
    I truly mean it, it is a refreshing video.

    • @independentcode
      @independentcode  1 year ago +2

      I appreciate the comment :)

    • @ldx8492
      @ldx8492 1 year ago +2

      @@independentcode Thank you for the reply! I am a researcher, and I wanted to create my own DL library, using yours as a base but expanding it with different optimization algorithms, initializations, regularizations, losses, etc. (I am currently developing it privately on my own), and one day I'd love to post it on my GitHub. How can I appropriately cite you?

    • @independentcode
      @independentcode  1 year ago +3

      That's a great project! You can mention my name and my GitHub profile: "Omar Aflak, github.com/omaraflak". Thank you!

  • @robinferizi9073
    @robinferizi9073 3 years ago +47

    I like how he said he wouldn’t explain how a neural network works, then proceeds to explain it

  • @generosonunezarias369
    @generosonunezarias369 3 years ago +6

    This might be the most intuitive explanation of the backpropagation algorithm on the Internet. Amazing!

  • @rubenfalvert5540
    @rubenfalvert5540 4 years ago +17

    Probably the best explanation of neural networks on TH-cam! The voice and the music in the background are really soothing!

  • @wagsman9999
    @wagsman9999 1 year ago +7

    Not only was the math presentation very clear, but the Python class abstraction was elegant.

  • @orilio3311
    @orilio3311 8 months ago +3

    I love the 3b1b style of animation and also the consistency of his notation; this allows people to learn the material from multiple explanations without losing track of the core ideas. Awesome work man

  • @ardumaniak
    @ardumaniak 2 years ago +13

    The best tutorial on neural networks I've ever seen! Thanks, you have my subscription!

  • @adealtas
    @adealtas 2 years ago +7

    THANK YOU !
    This is exactly the video I was looking for.
    I always struggled with making a neural network, but following your video I made a model that I can generalize, and it made me understand exactly the mistakes I made in my previous attempts.
    It's easy to find videos on youtube of people explaining singular neurons and backpropagation, but they then quickly gloss over the hard part: how you compute the error in an actual network, the structural implementation, and how it all ties together.
    This approach of separating the Dense layer from the activation layer also makes things 100x clearer, whereas many people end up smacking both into the same class carelessly.
    The visuals also make the intuition for numpy much, much easier. It's always something I struggled with, and this explained why we do every operation perfectly.
    Even though I was only looking for one video, after seeing such quality I HAVE to explore the rest of your channel! Great job.

    • @independentcode
      @independentcode  2 years ago +4

      Thank you so much for taking the time to write this message! I went through the same struggle when I wanted to make my own neural networks, which is exactly why I ended up doing a video about it! I'm really happy to see that it serves as I intended :)

  • @dhudach
    @dhudach 4 months ago +1

    This is an unbelievably clear and concise video. It answers all of the questions that linger after watching dozens of other videos. WELL DONE!!

  • @rogeliogarcia8730
    @rogeliogarcia8730 2 years ago +15

    Thanks for making such great quality videos. I'm working on my Ph.D., and I'm writing a lot of math regarding neural networks. Your nomenclature makes a lot of sense and has served me a lot. I'd love to read some of your publications if you have any.

  • @Adityaaaa0408
    @Adityaaaa0408 1 month ago +1

    I had been struggling with backpropagation in MLPs for 2 weeks, and when I was searching for a video that could help me understand the process mathematically, this one grabbed my attention. It let me understand the whole process both conceptually and mathematically. The code you give is actually the same code our mentor gave us, but he was unable to explain it clearly, and the animations shown in the video were really great. Finally, thank you for posting this video!!!!🛐 I CAN ADVANCE IN MY PROJECT FURTHER!!!

  • @thiagoborn
    @thiagoborn 24 days ago

    By far the best video on this topic that I have seen on the whole platform

  • @aflakmada6311
    @aflakmada6311 4 years ago +15

    Very clean and pedagogical explanation. Thanks a lot!

  • @neuralworknet
    @neuralworknet 1 year ago +5

    Best tutorial video about neural networks I've ever watched. You are doing such a great job 👏

  • @samirdaniels
    @samirdaniels 2 years ago +1

    This was the best mathematical explanation on TH-cam. By far.

  • @darshangowda309
    @darshangowda309 3 years ago +63

    This could be 3Blue1Brown for programmers! You got yourself a subscriber! Great video!

    • @independentcode
      @independentcode  3 years ago +12

      I'm very honored you called me that. I'll do my best, thank you !

    • @jumpsneak
      @jumpsneak 2 years ago +2

      +1

    • @quasistarsupernova
      @quasistarsupernova 2 years ago

      @@independentcode +1 sub

  • @MichaelChin1994
    @MichaelChin1994 2 years ago +5

    Thank you so very, very, very much for this video. I have been wanting to do Machine Learning, but without "Magic". It drives me nuts when all the tutorials say "From Scratch" and then proceed to open TensorFlow. Seriously, THANK you!!!

    • @independentcode
      @independentcode  2 years ago +3

      I feel you :) Thank you for the comment, it makes me genuinely happy.

  • @samuelmcdonagh1590
    @samuelmcdonagh1590 1 year ago

    jesus christ this is a good video and shows clear understanding. no "i've been using neural networks for ten years, so pay attention as i ramble aimlessly for an hour" involved

  • @anhtuanmai537
    @anhtuanmai537 2 years ago +1

    I think the last row's indices of the W^T matrix at 17:55 must be (w1i, w2i,...,wji).
    Still the best explanation I have ever seen btw, thank you so much. I don't know why this channel is still so underrated; looking forward to seeing your new videos in the future

    • @independentcode
      @independentcode  2 years ago +1

      Yeah I know, I messed it up. I've been too lazy to add a caption on that, but I really should. Thank you for the kind words :)

  • @ExXGod
    @ExXGod 16 days ago

    I just watched your CNN video, the next one and I couldn't resist watching this one. Although I knew most things in this video, watching everything work from scratch felt amazing.

  • @SleepeJobs
    @SleepeJobs 1 year ago

    This video really saved me. From matrix representation to chain rule and visualisation, everything is clear now.

  • @faida.6665
    @faida.6665 4 years ago +52

    This is basically ASMR for programmers

    • @nikozdev
      @nikozdev 1 year ago +1

      I almost agree, the only difference is that I can’t sleep thinking about it

    • @tanker7757
      @tanker7757 1 year ago +1

      @@nikozdev bruh I fall asleep and allow myself to hallucinate in math lol

    • @nalcow
      @nalcow 10 months ago

      I definitely felt relaxed :D

  • @_skeptik
    @_skeptik 2 years ago +1

    This is such high-quality content. I have only basic knowledge of linear algebra, and even as a non-native speaker I could fully understand this

  • @rumyhumy
    @rumyhumy 1 year ago

    Man, I love you. So many times I tried to do a multilayer NN on my own, but I always faced thousands of problems. But this video explained everything. Thank you

  • @swapnilmasurekar5431
    @swapnilmasurekar5431 2 years ago

    This video is the best on TH-cam for Neural Networks Implementation!

  • @bernardcrnkovic3769
    @bernardcrnkovic3769 2 years ago +3

    Absolutely astonishing quality sir. Literally on the 3b1b level. I hope this will help me pass the uni course. SUB!

  • @erron7682
    @erron7682 3 years ago +1

    This is the best channel for learning deep learning!

  • @aashishrana9356
    @aashishrana9356 2 years ago +1

    One of the best videos I have ever seen.
    I struggled a lot to understand this and you have explained it so beautifully.
    You made me fall in love with the neural network I was intimidated by.
    Thank you so much.

    • @independentcode
      @independentcode  2 years ago

      Thank you for your message, it genuinely makes me happy to know this :)

  • @mohammadrezabanakermani2924
    @mohammadrezabanakermani2924 3 years ago +1

    It is the best one I've seen among the explanation videos available on TH-cam!
    Well done!

  • @ThierryAZALBERT
    @ThierryAZALBERT 1 year ago

    Thank you very much for your videos explaining how to build an ANN and a CNN from scratch in Python: your explanations of the detailed calculations for forward and backward propagation, and of the calculations in the kernel layers of the CNN, are very clear, and seeing how you have managed to implement them in only a few lines of code is very helpful in 1. understanding the calculations and processes, 2. demystifying what is a black box in tensorflow / keras.

  • @baongocnguyenhong5674
    @baongocnguyenhong5674 29 days ago

    I've taken inspiration from your code and cited your channel in my neural network paper for a college project; I'm just letting you know here and hope you don't mind.
    Btw, thank you so much for the video. 3blue1brown's series on neural networks is great and all, but it is your video that makes all the computations really sink in and make actual sense; representing the gradients as linear algebra operations ties everything together so neatly, compared to the individual derivative formulas for the weights and bias, which is how it's usually written. And the choice of separating the dense layers and the activation layers was, to put it mildly, fucking brilliant.

    • @independentcode
      @independentcode  29 days ago

      Of course! Thank you for the kind words :)

  • @rishikeshkanabar4650
    @rishikeshkanabar4650 3 years ago +1

    This is such an elegant and dynamic solution. Subbed!

  • @shafinmahmud2925
    @shafinmahmud2925 2 years ago +1

    There are many solutions on the internet... but I must say this one is undoubtedly the best 👍 cheers man... please keep posting more.

  • @marisakirisame659
    @marisakirisame659 2 years ago

    This is a very good approach to building neural nets from scratch.

  • @marvinmartin1373
    @marvinmartin1373 4 years ago +5

    Amazing approach ! Very well explained. Thanks!

  • @cankoban
    @cankoban 2 years ago

    I loved the background music; it gives peace of mind. I hope you will continue to make videos. Very clear explanation

  • @imgajeed
    @imgajeed 2 years ago +1

    Thank you, that's the best video I have ever seen about neural networks!!!!! 😀

  • @nudelsuppe3dsemmelknodel990
    @nudelsuppe3dsemmelknodel990 2 years ago

    You are the only youtuber I sincerely want to see return. We miss you!

  • @Dynamyalo
    @Dynamyalo 5 months ago

    this has to be the single best video explaining neural networks I have ever watched

  • @black-sci
    @black-sci 10 months ago

    best video, very clear-cut. Finally I got the backpropagation and derivatives.

  • @lucasmercier5813
    @lucasmercier5813 4 years ago +5

    Impressive, a lot of information but it remains very clear! Good job on this one ;)

  • @lowerbound4803
    @lowerbound4803 2 years ago +3

    Very well-done. I appreciate the effort you put into this video. Thank you.

  • @e.i.l.9584
    @e.i.l.9584 1 year ago

    Thank you so much, my assignment was so unclear, this definitely helps!

  • @omegaui
    @omegaui 6 months ago

    Such a great video. Really helped me to understand the basics.

  • @OmkarKulkarni-wf7ug
    @OmkarKulkarni-wf7ug 9 months ago +1

    How is the output gradient calculated and passed into the backward function?
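
    A minimal sketch of how that wiring usually looks in a training loop in the style of the video (assuming the layer classes above and an mse_prime helper; the exact names are assumptions): the derivative of the loss is the very first output gradient, and each layer's backward returns the gradient handed to the layer before it.

    import numpy as np

    def mse_prime(y_true, y_pred):
        # dE/dY for the mean squared error
        return 2 * (y_pred - y_true) / np.size(y_true)

    # inside the epoch loop, for one sample (x, y):
    output = x
    for layer in network:
        output = layer.forward(output)

    grad = mse_prime(y, output)                 # initial output gradient
    for layer in reversed(network):
        grad = layer.backward(grad, learning_rate)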

  • @naheedray
    @naheedray 8 months ago

    This is the best video i have seen so far ❤

  • @chrisogonas
    @chrisogonas 2 years ago

    That was incredibly explained and illustrated. Thanks

    • @independentcode
      @independentcode  2 years ago +1

      Thank you! I'm glad you liked it :)

    • @chrisogonas
      @chrisogonas 2 years ago

      @@independentcode Most welcome!

  • @_sarps
    @_sarps 3 years ago

    This is really dope. The best by far. Subscribed right away

  • @arvindh4327
    @arvindh4327 2 years ago

    Only 4 videos and you have above 1k subs.
    Please continue your work 🙏🏼

  • @ti4680
    @ti4680 3 years ago

    Finally found the treasure. Please do more videos bro. SUBSCRIBED

  • @Xphy
    @Xphy 3 years ago

    Whyyyy don't you have 3 million subscribers? You deserve it ♥️♥️

  • @spritstorm9037
    @spritstorm9037 2 years ago

    Actually, you saved my life, thanks for doing these

  • @Rustincohle88
    @Rustincohle88 2 months ago

    This is literally a masterpiece

  • @nikozdev
    @nikozdev 1 year ago

    I developed my first neural network in one night yesterday. It could not learn because of backward propagation; it was only going through std::vectors of std::vectors to get the output. I was setting weights to random values and tried to guess how to apply backward propagation from what I had heard about it.
    But it failed to do anything; it kept guessing just as I did, giving wrong answers anyway.
    This video has a clean, comprehensive explanation of the flow and architecture. I am really excited by how simple and clean it is.
    I am gonna try again.
    Thank you.

    • @nikozdev
      @nikozdev 1 year ago +1

      I did it ! Just now my creature learnt xor =D

  • @ANANT9699
    @ANANT9699 1 year ago +1

    Wonderful, informative, and excellent work. Thanks a zillion!!

  • @yiqiangjizhang
    @yiqiangjizhang 3 years ago

    This is so ASMR and well explained!

  • @RAHULKUMAR-sx8ui
    @RAHULKUMAR-sx8ui 2 years ago

    You are the best 🥺❤️.. wow.. finally I am able to understand the basics, thanks

  • @cicADA001
    @cicADA001 3 years ago +2

    your voice is calming and relaxing, sorry if that is weird

    • @independentcode
      @independentcode  3 years ago +2

      Haha thank you for sharing that :) Maybe I should have called the channel JazzMath .. :)

  • @macsiaproduction7823
    @macsiaproduction7823 9 months ago

    Thank you for a really great explanation!
    Wish you would make even more 😉

  • @tangomuzi
    @tangomuzi 3 years ago

    I think most ML PhDs aren't aware of this abstraction. Simply the best.

    • @independentcode
      @independentcode  3 years ago +2

      I don't know about PhDs since I am not a PhD myself, but I never found any simple explanation of how to make such an implementation indeed, so I decided to make that video :)

    • @tangomuzi
      @tangomuzi 3 years ago +1

      @@independentcode I think you should keep the video series going and show how capable this type of abstraction is, easily implementing almost every type of neural net.

    • @independentcode
      @independentcode  3 years ago +1

      Thank you for the kind words. I did actually take that a step further, it's all on my GitHub here: github.com/OmarAflak/python-neural-networks
      I managed to make CNNs and even GANs from scratch! It supports any optimization method, but since it's all on CPU you very quickly get restricted by computation time. I really want to make a series about it, but I'll have to figure out a nice way to explain it without being boring since it involves a lot of code.

    • @edilgin
      @edilgin 2 years ago

      @@independentcode GANs would be great also you could try to do RNNs too and maybe even some reinforcement learning stuff :D

  • @aiforchange1801
    @aiforchange1801 2 years ago

    Big fan of yours from today!

  • @shivangitomar5557
    @shivangitomar5557 1 year ago

    Amazing explanation!!

  • @sprucestreet1676
    @sprucestreet1676 2 years ago

    I have a question: while backpropagating in the Activation layer, why are we ignoring the learning rate in the implementation? 22:07

    • @independentcode
      @independentcode  2 years ago +1

      The learning rate is used when we update trainable parameters (weights & biases). In the activation layer there is no parameter to update, we simply return the input gradient to the previous layer.
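
      To make that concrete, a minimal sketch of an activation layer in the style of the video (assuming it stores the activation function and its derivative): learning_rate is accepted only to keep the same interface as Dense, and is never used because there is nothing to train.

      import numpy as np

      class Activation:
          def __init__(self, activation, activation_prime):
              self.activation = activation
              self.activation_prime = activation_prime

          def forward(self, input):
              self.input = input
              return self.activation(self.input)

          def backward(self, output_gradient, learning_rate):
              # dE/dX = dE/dY multiplied elementwise by f'(X); no parameter update
              return np.multiply(output_gradient, self.activation_prime(self.input))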

  • @vtrandal
    @vtrandal 3 years ago

    Thank you! Well done! Absolutely wonderful video.

  • @filatnicolae2883
    @filatnicolae2883 1 year ago +2

    In your code you compute the gradient step for each sample and update immediately. I think this is called stochastic gradient descent.
    To implement full gradient descent, where I update after all samples, I added a counter in the Dense layer class to count the samples.
    When the counter reached the training size I would average all the stored nudges for the bias and the weights.
    Unfortunately, when I plot the error per epoch as a graph there are still a lot of spikes (fewer than with your method, but some remain).
    My training data has (x, y) and tries to find (x+y).

    • @gregynardudarbe7009
      @gregynardudarbe7009 1 year ago

      Would you be able to share the code? This is the part where I'm confused.
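
      A minimal sketch of the accumulate-then-average idea described above (names like BatchDense are hypothetical; it assumes the Dense layer from the video). Gradients are summed over the whole training set and the parameters are updated once per pass:

      import numpy as np

      class BatchDense(Dense):
          def __init__(self, input_size, output_size, n_samples):
              super().__init__(input_size, output_size)
              self.n_samples = n_samples
              self.seen = 0
              self.w_sum = np.zeros_like(self.weights)
              self.b_sum = np.zeros_like(self.bias)

          def backward(self, output_gradient, learning_rate):
              input_gradient = np.dot(self.weights.T, output_gradient)
              self.w_sum += np.dot(output_gradient, self.input.T)
              self.b_sum += output_gradient
              self.seen += 1
              if self.seen == self.n_samples:
                  # one full-batch update with the averaged gradients
                  self.weights -= learning_rate * self.w_sum / self.n_samples
                  self.bias -= learning_rate * self.b_sum / self.n_samples
                  self.seen = 0
                  self.w_sum[:] = 0
                  self.b_sum[:] = 0
              return input_gradient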

  • @huberhans7198
    @huberhans7198 3 years ago

    Very nice and clean video, keep it up

  • @birajpatel2804
    @birajpatel2804 2 years ago

    At 12:42, I didn't understand why you had to take the sum.
    We want to calculate dE/dw12, and if I understand it correctly, it is the derivative of the error with respect to our layer's 1st neuron's 2nd weight (w12). So it should simply be dE/dy1 * dy1/dw12, since the output of that neuron is just y1. If we can get it directly, then why did we take the sum to arrive here? Am I missing something?

    • @independentcode
      @independentcode  2 years ago +2

      Hi Biraj. I'm showing the sum as it would be the most general/repeatable way of proceeding for any of the derivatives, but you are right: if you can see immediately that w12 only appears in y1, then don't bother doing the sum. When I say repeatable, I mean what if the derivative was with respect to x2 for instance ? Then you would need to take into account all the y variables. But it might become confusing to some of the viewers why we proceed in one way in one case and in another for some other case. That's why I like to show the sum as a first systematic step. I hope it makes sense!
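
      In symbols, with the video's convention y_1 = w_11 x_1 + w_12 x_2 + ... + w_1i x_i + b_1, the systematic sum collapses because w_12 only appears in y_1:

      \frac{\partial E}{\partial w_{12}} = \sum_{k=1}^{j} \frac{\partial E}{\partial y_k}\,\frac{\partial y_k}{\partial w_{12}} = \frac{\partial E}{\partial y_1}\,\frac{\partial y_1}{\partial w_{12}} = \frac{\partial E}{\partial y_1}\, x_2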

  • @mr.anderson5077
    @mr.anderson5077 3 years ago

    Keep it up. Please make a deep learning and ML series in the future.

  • @NoomerLoL
    @NoomerLoL 1 year ago +1

    Hi there, great video, super helpful, but at 19:21 line 17 the gradient is computed with the updated weights instead of the original weights, which (I believe) caused some exploding/vanishing gradient problems for my test data (iris flower dataset). Fixing that solved all my problems. If I am wrong please let me know.
    Note: I used leaky ReLU as the activation function

    • @Vawxel
      @Vawxel 1 year ago

      Hello, how did you fix this issue?

  • @salaheddinelachkar5683
    @salaheddinelachkar5683 3 years ago +2

    That was helpful, thank you so much.

  • @Ryanxyang
    @Ryanxyang 3 months ago +2

    Great video! At 17:45, in the last row of the matrix W' (transpose of W), the subscripts got a bit mixed up: w_1j, w_2j and w_ij should be w_1i, w_2i and w_ji, i.e., j rows and i columns.

  • @ramincybran
    @ramincybran 10 months ago

    Without any doubt the best explanation of NNs I've ever seen - why did you stop producing, my friend?

  • @TheAstralftw
    @TheAstralftw 3 years ago

    Dude this is amazing

  • @princewillinyang5993
    @princewillinyang5993 2 years ago

    Content at its peak

  • @ionutbosie6017
    @ionutbosie6017 2 years ago

    After watching 1000 videos, I think I get it now, thanks

  • @sythatsokmontrey8879
    @sythatsokmontrey8879 3 years ago

    Thank you so much for your contribution to this field.

  • @prem7676
    @prem7676 2 years ago

    Awesome man!!

  • @vanshajchadha7612
    @vanshajchadha7612 11 months ago

    This is one of the best videos for really understanding the vectorized form of neural networks! Really appreciate the effort you've put into this.
    Just as a clarification: the video considers only 1 data point, thereby performing SGD, so during the MSE calculation Y and Y* are in a way depicting multiple responses for 1 data point only, right? So the MSE should not actually be using np.mean to sum them up?

  • @zozodejante8350
    @zozodejante8350 3 years ago

    I love u , best ML video ever

  • @blasttrash
    @blasttrash 2 years ago

    Amazing video. One thing we could do is have layers calculate their input size automatically where possible: if I give Dense(2,8), then for the next layer I don't need to give 8 as the input size since it's obvious that it will be 8, similar to how keras does this.
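
    One way to get that ergonomics without changing the Dense class itself is a small helper that threads the sizes through (a sketch with hypothetical names, assuming the Dense and Tanh classes from the video):

    def dense_stack(sizes, activation_cls=Tanh):
        # sizes = [2, 8, 1] -> Dense(2, 8), Tanh(), Dense(8, 1), Tanh()
        network = []
        for in_size, out_size in zip(sizes[:-1], sizes[1:]):
            network.append(Dense(in_size, out_size))
            network.append(activation_cls())
        return network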

  • @andreytolkushkin3611
    @andreytolkushkin3611 1 year ago

    Why do we use the dot product function for matrix multiplication? I thought those did different things
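
    For what it's worth: in numpy, np.dot on two 2-D arrays performs standard matrix multiplication; it is only on 1-D arrays that it is the vector dot product. A quick check:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])            # 2x2 matrix
    x = np.array([[5], [6]])                  # 2x1 column vector
    print(np.dot(A, x))                       # [[17] [39]]
    print(np.allclose(np.dot(A, x), A @ x))   # True: same as the @ operator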

  • @brenojust6436
    @brenojust6436 4 months ago

    Hi,
    I'm a novice at math and coding; can anyone explain where in the def backward part of the dense layer code the derivatives are computed? The video explains that the derivatives are there, but I was expecting to see a function that computes them. Where exactly does the derivative appear there?

  • @Gabriel-V
    @Gabriel-V 2 years ago

    Clear, to the point. Thank you. Liked (because there are just 722 likes, and there should be a lot more)

  • @blasttrash
    @blasttrash 2 years ago

    How can we update this to include mini-batch gradient descent? In particular, how will the equations change?
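
    For the equations: assuming the per-sample gradients from the video, a mini-batch of size m just averages them before each update, e.g. for the weights (with learning rate alpha):

    \frac{\partial E_{\text{batch}}}{\partial W} = \frac{1}{m}\sum_{s=1}^{m} \frac{\partial E_s}{\partial Y_s}\,X_s^{\top}, \qquad W \leftarrow W - \alpha\,\frac{\partial E_{\text{batch}}}{\partial W}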

  • @baongocnguyenhong5674
    @baongocnguyenhong5674 2 months ago

    This video is godsend, thank you.

  • @erikasgrim2871
    @erikasgrim2871 3 years ago

    Amazing tutorial!

  • @albinjose8272
    @albinjose8272 1 year ago

    When I checked the output of the dense layer I was getting an array of size (output size, output size) instead of (output size, 1); later I was told it's due to broadcasting, which I don't know about. But when I changed the bias shape from (output size, 1) to (output size,) I got the result with shape (output size, 1)
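
    The usual culprit is the shape of the input rather than the bias: with the column-vector convention from the video, both the input and the bias should be (n, 1) arrays, otherwise numpy broadcasting silently produces a square matrix. A small sketch (illustrative shapes only):

    import numpy as np

    W = np.random.randn(3, 2)       # (output_size, input_size)
    b = np.random.randn(3, 1)       # (output_size, 1) column vector

    x_flat = np.random.randn(2)     # shape (2,): 1-D input
    x_col = np.random.randn(2, 1)   # shape (2, 1): column vector

    print((np.dot(W, x_flat) + b).shape)   # (3, 3): (3,) broadcast against (3, 1)
    print((np.dot(W, x_col) + b).shape)    # (3, 1): what the video expects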

  • @oglothenerd
    @oglothenerd 1 year ago +1

    I followed the code exactly, and I still get Numpy shape errors.

  • @IzUrBoiKK
    @IzUrBoiKK 2 years ago

    I would like it a lot if you continued your channel, bro

  • @bassmit2304
    @bassmit2304 1 year ago

    When looking at the error and its derivative wrt some y[i], intuitively I would expect that if I increased y[i] by 1 the error would increase by dE/dy[i], but if I do the calculation the change in the error is 1/n off from the derivative. Does this make sense?
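
    That 1/n gap is just the quadratic term. Using the MSE from the video, E = (1/n) * sum_k (y_k* - y_k)^2, a unit increase in y_i changes E by exactly the derivative plus 1/n:

    \Delta E = \frac{(y_i + 1 - y_i^*)^2 - (y_i - y_i^*)^2}{n} = \underbrace{\frac{2}{n}\,(y_i - y_i^*)}_{\partial E / \partial y_i} + \frac{1}{n}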

  • @Leo-dw6gk
    @Leo-dw6gk 5 months ago

    This video should be the first video you see when you search neural network.

  • @black-sci
    @black-sci 8 months ago

    In TensorFlow they use a weight matrix W of dimensions i x j and then take the transpose in the calculation.

  • @flankechen
    @flankechen 3 years ago

    18:18, about W transpose: it should be w11, w12, ..., w1i column-wise. It's an i by j matrix, am I right?

    • @independentcode
      @independentcode  3 years ago +1

      Wow I messed up the last row! It should have been (W1i, W2i, ..., Wji) !!
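
      For reference, the corrected transpose as described in this thread, with the last row reading (w_1i, w_2i, ..., w_ji):

      W^{\top} =
      \begin{pmatrix}
      w_{11} & w_{21} & \cdots & w_{j1} \\
      w_{12} & w_{22} & \cdots & w_{j2} \\
      \vdots & \vdots & \ddots & \vdots \\
      w_{1i} & w_{2i} & \cdots & w_{ji}
      \end{pmatrix}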

    • @independentcode
      @independentcode  3 years ago

      The matrix W itself is of size (i, j), the transposed matrix is (j, i).

    • @flankechen
      @flankechen 2 years ago

      @@independentcode Makes sense, it's W transpose, j by i, and a little typo. Thanks again for this great tutorial

  • @Djellowman
    @Djellowman 2 years ago +1

    Is there a practical reason why the activation functions are implemented as layers, rather than having other layers, such as Dense, take the activation function as an argument and apply it internally?

    • @independentcode
      @independentcode  2 years ago +1

      Yes, for simplicity. If you apply the activation inside the layer, then that layer also has to account for the activation during backward propagation. And the Dense layer is not the only layer that might use an activation, so would you implement it in every such layer? That's why it's a separate thing.
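
      As an illustration of that modularity, a specific activation is defined once (by providing the function and its derivative) and can then be dropped in after any layer; a sketch following the Activation class from the video:

      import numpy as np

      class Tanh(Activation):
          def __init__(self):
              tanh = lambda x: np.tanh(x)
              tanh_prime = lambda x: 1 - np.tanh(x) ** 2
              super().__init__(tanh, tanh_prime)

      # usage: network = [Dense(2, 3), Tanh(), Dense(3, 1), Tanh()]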

    • @Djellowman
      @Djellowman 2 years ago +1

      @@independentcode That's a good point. Although i suppose you could implement the activation function handling for both forward and backwards propagation in the base Layer class, right? I'm asking this because I started working on a project where I build a Dense neural net to classify some data, but I decided I might as well build a little neural net library. Your video made me think about creating a better design. I first passed the architecture of the network as a list of layer_lengths to a DenseNeuralNet class. I prefer your design of making a base Layer class that will function as an abstract base class, and specifying separate layer objects, as it's more modular than my initial design.

  • @filatnicolae2883
    @filatnicolae2883 1 year ago

    Hi, thank you for such a great explanation. I understood the core idea of what you explain, but I am not familiar with matrix calculus and derivatives.
    12:41 Here I don't really understand what rule you are using to expand the sum. If you could point me to some resource online where I can learn this I would be grateful.

    • @independentcode
      @independentcode  1 year ago

      Hi. We're able to do this because E is a function of all the y variables. Let's take a simple example:
      X=Y1+Y2
      Y1=3Z1
      Y2=2Z1+Z2
      Then,
      ∂X/∂Z1 = ∂X/∂Y1 * ∂Y1/∂Z1 + ∂X/∂Y2 * ∂Y2/∂Z1
      = 1 * 3 + 1 * 2
      = 5
      Note that it is exactly the same as expanding first the expression of X and then deriving with respect to Z1:
      X=3Z1+2Z1+Z2
      =5Z1+Z2
      ∂X/∂Z1=5
      It's called the chain rule.

    • @filatnicolae2883
      @filatnicolae2883 1 year ago +1

      @@independentcode Thank you very much. I understand now. Because E is the mean squared error, it's a sum of terms that involve the y variables.

  • @onurkrmz9206
    @onurkrmz9206 3 years ago

    This is an amazing video which explains so perfectly how neural networks work. I appreciate and thank you for all the effort and energy you put into this video, and it is a shame that your work did not receive the views it deserves. I believe you use manim to make animations like 3b1b, don't you?

    • @independentcode
      @independentcode  3 years ago

      Thanks a lot for the kind comment 😌 I'm glad if the video helped you in any way :) Yes it is indeed Manim!

    • @onurkrmz9206
      @onurkrmz9206 3 years ago

      sir please keep up with your videos I learn a lot

  • @simawpalmer7721
    @simawpalmer7721 1 year ago

    would you mind sharing the manim project for this video?

  • @Djellowman
    @Djellowman 2 years ago +1

    Is there a way to feed it all our data at once, instead of going through the entire forward & backward prop for every datapoint?

    • @Djellowman
      @Djellowman 2 years ago

      I'm guessing what you implemented is stochastic gradient descent, where every epoch you update parameters for every observation, rather than for the set of all observations? Would your implementation work if back & forward prop took X and Y as arguments, instead of x and y?

    • @independentcode
      @independentcode  2 years ago +1

      You could implement something other than stochastic gradient descent by not updating directly after each sample, but by averaging over many samples and updating then. However, it wouldn't change what you mentioned first: we still have to loop through each data point and we don't take advantage of vectorization. If we wanted to do so, I think we'd need to make each layer accept a batch of inputs instead of a single one, and make sure the layer processes it all at once. But it would have made the video more complicated, and the goal here was to have something very simple, yet somewhat general :)

    • @Djellowman
      @Djellowman 2 years ago

      Correct me if I'm wrong, but doesn't your implementation handle batches of data as well?
      def forward(self, input):
          self.input = input
          return np.dot(self.weights, self.input) + self.bias
      If input is a matrix instead of a vector, wouldn't the dot product just apply to every column? Same with backprop

    • @independentcode
      @independentcode  2 years ago +1

      @@Djellowman you're correct, it just so happens that this implementation of the dense layer supports batch input. The activation would also support it since it's just applying a function to the input regardless of its size, and mse_prime in our case would also work out since it's just doing Y*-Y. So I guess here it works! But in the next video where I implement a CNN, I don't think it will, at least I haven't done it intentionally :)

  • @rajansahu6450
    @rajansahu6450 2 years ago

    Hi, I'm trying to print the weights after every epoch but I'm not able to do so. Can you help with what's going wrong with this approach? I simply tried to use the forward method during training:
    def predict(network, input, train=True):
        output = input
        for layer in network:
            if layer.__class__.__name__ == 'Dense':
                output = layer.forward(output)
                list_.append(layer.weights)
            else:
                output = layer.forward(output)
        return output
    However, I get the same corresponding weights in the list every time.

    • @independentcode
      @independentcode  2 years ago

      I think you're getting the same value in the list because layer.weights is a reference. You need to copy it. So just do: list_.append(np.copy(layer.weights))