Capsule Networks (CapsNets) - Tutorial

  • Published Sep 18, 2024

Comments • 271

  • @9thdimension707
    @9thdimension707 6 years ago +234

    Geoff Hinton on this video: "This is an amazingly good video. I wish I could explain capsules that well."

    • @Chhillee
      @Chhillee 6 years ago +25

      If anybody else was curious where he posted it: www.reddit.com/r/MachineLearning/comments/7ew7ba/d_capsule_networks_capsnets_tutorial/dq8yc9p/

    • @xiaofeiyin319
      @xiaofeiyin319 6 years ago

      Do you know how to get the slides in the video? If you know, please give me the link. Thank you!

    • @mouduge
      @mouduge 6 years ago +5

      Hi Xiaofei, I put the link to the slides at the end of the video description. Enjoy!

    • @ibamer92
      @ibamer92 6 years ago

      Yes Yes Yes!

    • @nawinksharma
      @nawinksharma 5 years ago

      haha true

  • @BadriNathJK
    @BadriNathJK 6 years ago +35

    You are probably the one YouTuber among several blog posters/YouTubers/researchers who clearly understands what CapsuleNet is and how dynamic routing works. Great work on the slides.

  • @karolpiaskowski3112
    @karolpiaskowski3112 6 years ago +67

    Brilliant. Just brilliant. One of the best videos not only about capsule nets, but also about deep learning. The way you explain this architecture is amazing. Really, really good job.

    • @AurelienGeron
      @AurelienGeron  6 years ago +2

      Wow, thanks a lot, you are very encouraging. :)

    • @colameomarcello
      @colameomarcello 6 years ago

      so only C_ij are learned using routing algo...other parameters backprop using total loss?

    • @AurelienGeron
      @AurelienGeron  6 years ago +3

      Well actually the routing weights are *not* learned: the raw routing weights actually get reset to 0 for every new image, both during training and for new predictions. The routing by agreement algorithm is a *routing* algorithm, *not* a learning algorithm. It decides where to send the signal to, based on the amount of agreement it measures, and that's it. The learned parameters are the convolutional layers' weights, and the W_ij matrices, and those are learned using regular backpropagation. I hope this helps.
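The routing procedure described in this reply can be sketched in a few lines of NumPy. This is a simplified illustration, not the video's actual TensorFlow implementation; the function names, shapes, and the number of iterations (3, as in the paper) are chosen for the example:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Shrink a vector so its length lies in [0, 1) while keeping its direction."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, n_iters=3):
    """Dynamic routing over prediction vectors u_hat of shape (n_lower, n_upper, dim).

    The raw routing weights b start at zero for EVERY image: they are computed
    on the fly, not learned. Only the prediction vectors u_hat (produced by the
    learned W_ij matrices) depend on trained parameters.
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                          # reset per image
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = (c[:, :, None] * u_hat).sum(axis=0)               # weighted sum per upper capsule
        v = squash(s)                                         # candidate output vectors
        b += (u_hat * v[None, :, :]).sum(axis=-1)             # agreement raises the routing weight
    return v
```

The key point from the reply shows up directly in the code: `b` is zeroed at the start of every call, while the trainable parameters live entirely upstream, in whatever produced `u_hat`.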

    • @colameomarcello
      @colameomarcello 6 years ago +3

      Thanks... last question: why does the decoder help to avoid overfitting and to generalize?

    • @AurelienGeron
      @AurelienGeron  6 years ago +5

      Great question! Imagine if, by chance, all the images of the digit 5 in the training set had a particular set of 3 pixels that were black, and these 3 pixels were not all black for other digits. During training, the CapsNet would certainly detect this pattern and use it for classification. Without a decoder, all the CapsNet would need to classify the digit 5 would be the value of these 3 pixels. It would perform great on the training set, but it would probably not work well on new instances (assuming the particular pattern was a result of chance). In other words, the CapsNet would overfit the training data.
      By adding the decoder on top of the digit layer, we are encouraging the CapsNet to propagate all the information required to reconstruct the digit all the way from the inputs up to the digit layer. This forces the CapsNet to encode the whole image up to the digit layer, not just the few pixels that it thinks it needs, and thus it is more likely to make its final classification decision based on these higher level representations of the input image, rather than on just a few pixel values.
      Another way to explain this is to say that we are imposing a constraint on the CapsNet, thus we are limiting its degrees of freedom during training, and this is the essence of regularization. By reducing the degrees of freedom of a learning algorithm, we reduce the chances that it will overfit the training data (but if we go too far, it may underfit the data).
      Hope this makes sense.
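In code terms, the regularization described in this reply amounts to adding a small reconstruction penalty to the loss. A minimal sketch (the 0.0005 scale factor is the one used in the paper; the function name is made up):

```python
import numpy as np

RECON_WEIGHT = 0.0005  # the paper scales the reconstruction loss way down

def total_loss(margin_loss, image, reconstruction):
    """Margin loss plus a small reconstruction penalty.

    The decoder's sum-of-squared-errors term acts as a regularizer: to keep it
    small, the network must carry enough information up to the digit capsules
    to redraw the whole input, not just a few telltale pixels.
    """
    recon_loss = np.sum((image - reconstruction) ** 2)
    return margin_loss + RECON_WEIGHT * recon_loss
```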

  • @hstrainer
    @hstrainer 6 years ago +5

    Thank you so much for subtitling the video. For the hearing impaired such as me, unsubtitled videos are hard to follow and for some of us, basically useless. Not only is the video subtitled, the explanation is amazingly straightforward and intuitive. I definitely would not have managed to understand this paper without you.

    • @AurelienGeron
      @AurelienGeron  6 years ago +1

      hstrainer I'm glad it was useful to you! I also added subtitles to my other videos, in case you're interested. :)

  • @animeshkarnewar3
    @animeshkarnewar3 6 years ago +12

    In all my questions, I totally forgot to congratulate you on the job done.
    Great explanation man! Seriously, this video is the most insightful one around the web.👍

  • @marvlousdasta2566
    @marvlousdasta2566 6 years ago +31

    I was expecting "just another capsule tutorial" but I was wrong. Good job

  • @Anikung17
    @Anikung17 6 years ago +1

    Spent an entire day trying to understand Hinton's paper. Then I came upon this video, which explained the architecture amazingly clearly. Great job!

  • @josephedappully1482
    @josephedappully1482 6 years ago +43

    What an exceptional explanation. Thank you.

    • @mouduge
      @mouduge 6 years ago +1

      You're very welcome, Joseph, I'm glad you liked the video. :)

  • @Amapramaadhy
    @Amapramaadhy 6 years ago +3

    Please keep making more videos. Having read the paper a few times, I was still confused in parts. Your lucid explanation cleared up most of them. Hats off!

  • @veramentegina
    @veramentegina 6 years ago +2

    I went and bought your book after watching this video. You explain so clearly here that I had no doubts the book would be as clear and awesome. Thank you.

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks, I hope you find it useful! :)

  • @MrPouyan1987
    @MrPouyan1987 6 years ago +3

    This is one of the most pedagogic videos in deep learning. Explaining the CapsNets in a very simple and efficient language is indeed an art. Thank you @Aurélien Géron

  • @royfrenkel9956
    @royfrenkel9956 6 years ago +1

    A clear explanation without being too technical; I find it a good intro to capsule networks before jumping into the article and/or the code... Thank you for taking your time to prepare this video!

  • @SaranshKarira
    @SaranshKarira 6 years ago +4

    You wrote this book? You are technically my mentor. This book gave me the exact kickstart from theoretical aspects to hands-on programming, and it was so easy and fun to comprehend that I wish I'd started my journey with it in the first place! Amazing work, book and video.

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks for your very kind words Saransh, you are very encouraging! :) My best reward for this work is when I meet people who tell me what cool things they built using Machine Learning. It's always different and exciting. Glad I could help!

  • @markushartner4565
    @markushartner4565 6 years ago +1

    If, as you said, this was your first video on YouTube, then there is a great "learning from you" future ahead of us. Brilliant, please keep going!

  • @mikiasabera
    @mikiasabera 6 years ago +4

    You're a fantastic communicator! I think anybody who watches your videos or reads your content appreciates the work you put in. I hope you continue making great content :)

  • @slomoy2k
    @slomoy2k 6 years ago +3

    All, just want to add that the book is brilliant as well. Very easy read and, as the title says, hands-on (if you choose to be). Highly recommend that people buy it!
    Aurelien, keep the videos coming, you are a natural.

  • @savma1
    @savma1 6 years ago +5

    Wow! This is just marvelous! It's like adding particle filters to neural networks. Brilliant idea!

  • @nicksturkenboom2879
    @nicksturkenboom2879 6 years ago +1

    Well done, as someone with only a minimal background in mathematics and A.I., this was very understandable, you truly have a gift!

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks Nick, I appreciate it!

  • @aneeshbhat7328
    @aneeshbhat7328 4 years ago +1

    This has to be one of the best explained tutorials on any ML subject I've ever watched! Had to pause, revisit certain points in the video, but understood CapsNets perfectly. Hope to discover other tutorials by you!

  • @TheSentientCloud
    @TheSentientCloud 6 years ago

    Despite not knowing much about machine learning, I'm actually able to follow much of what this guy is saying. He's a very good teacher!

  • @Navhkrin
    @Navhkrin 6 years ago

    I have to say that I was quite shocked when I heard "it's my first YouTube video"; the quality of this video is as if you had been making videos for the past 15 years. Please do make more videos.

  • @satishkottapalli25
    @satishkottapalli25 6 years ago +2

    By far the best explanation of capsule networks I have seen. kudos for simplifying it so well. Please keep making more videos. Will be buying your book; your ability to simplify concepts and explain meaningfully is more than amply demonstrated by just this one video.

  • @shofada
    @shofada 6 years ago +1

    Aurélien Géron this is amazing. By all standards I think you are supremely gifted. And your approach as well is quite unique i.e. taking papers and making them understandable to everyone. Keep going man!

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks Selase, I appreciate it! There are many other channels that summarize papers these days, such as "Two Minute Papers", Siraj Raval's channel, "CrashCourse" and so on, and there are a lot of great channels to learn Machine Learning, whether basic or advanced, such as Hugo Larochelle's channel, Stanford's channel, MIT OpenCourseWare, not to mention of course Andrew Ng's Coursera classes, Geoffrey Hinton's classes and many more. I'd love to stand on the shoulders of these giants, but I have yet to reach the top of their toes. :)

    • @shofada
      @shofada 6 years ago

      Well, you make a good point and I do understand. Siraj Raval's is a great channel as well. Thank you very much.

  • @maximiliendedinechin5023
    @maximiliendedinechin5023 6 years ago +1

    Hinton about this video: « I wish I could explain capsules that well ». Kudos!

  • @CedricChee
    @CedricChee 6 years ago +1

    Finally, I found an intuitive explanation of the capsule idea. Thank you for doing a great job of distilling the idea.

  • @hoangnhatpham8076
    @hoangnhatpham8076 6 years ago +1

    I absolutely love your book. This video sure helps a lot. Please do make more videos and release more books on ML and DL, and I will be sure to be the first one to support them.

  • @ShuTV42
    @ShuTV42 6 years ago +2

    Yes, please make more videos! I'm currently learning and working through your book. The way you explain things is very helpful for me, videos on new papers and techniques would be very much appreciated. Good work! :)

  • @harborned
    @harborned 6 years ago +3

    This is one of the, if not THE, best video explanations of a concept/technique I've watched... and I am constantly watching such videos.
    I look forward to more of your content :)

    • @AurelienGeron
      @AurelienGeron  6 years ago +2

      Thanks a lot Daniel, such encouraging feedback definitely motivates me to create more videos. I'm currently recording a second video to dive into the implementation of capsule networks. Then I have a couple ideas for more videos, but I hope I'll have time to work on them.

    • @harborned
      @harborned 6 years ago +1

      Awesome, sounds good :) If you do as good a job as you did with this one, I imagine even videos introducing the well established techniques (basics of CNNs, RNNs etc ) would gain a lot of traction too....

  • @smniar
    @smniar 4 years ago

    A very clear presentation of CapsNets, Bravo Aurélien.

  • @SleepyPeat
    @SleepyPeat 6 years ago +1

    I'm still pretty new to this but your explanation made a lot of sense. Please keep making these videos! They're excellent.

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks Thomas, these comments are certainly motivating me to make more! :)

  • @jayce8978
    @jayce8978 6 years ago +2

    Great explanation, clean slides and fantastic English! Thanks!

  • @RomanZillek
    @RomanZillek 6 years ago +1

    Great intro! Sound, slow enough to follow, and good examples to convey the idea! Thx

  • @haroonmughal4449
    @haroonmughal4449 4 years ago

    There couldn't be any better explanation than this; although their paper was not that explicit, you have explained it so clearly.

  • @aayushsaxena7374
    @aayushsaxena7374 6 years ago

    Best description of CapsNets; even Hinton could not do this :)

  • @bent4725
    @bent4725 5 years ago +2

    Wow! I think I've found the best video explaining CapsNets.

  • @anheuser-busch
    @anheuser-busch 6 years ago +1

    Excellent. I was pumped when I saw your name on this video, as I am a huge fan of your Hands-on ML book! Thank you!

  • @AZTECMAN
    @AZTECMAN 6 years ago +1

    Sweet video, I bought your book a few months ago and highly look forward to reading it.

  • @faithfullady1957
    @faithfullady1957 4 years ago

    Great idea to use a simple example to describe the features of capsule networks. Great job!

  • @PerisMartin
    @PerisMartin 6 years ago +1

    Thank you for the video Aurélien! Please, make more of them. Your explanations are very clear and easy to follow. Actually your book, Hands-on ML, was the first book about Deep Learning that didn't make me want to kill myself while reading it xD

  • @ClaudeCOULOMBE
    @ClaudeCOULOMBE 6 years ago +1

    Enlightening explanation of Hinton's capsules! Thank you for this and your nice book!

  • @黃冠豪-t1b
    @黃冠豪-t1b 6 years ago

    Your explanation and illustration are unbelievably great!

  • @aleposada10
    @aleposada10 6 years ago +1

    Great, Aurélien. You should do more videos like this one!

  • @hemingchen7234
    @hemingchen7234 6 years ago +1

    By far the best explanation video I've seen, this is just brilliant!

  • @Sassy7712
    @Sassy7712 6 years ago +2

    Hi Aurélien, fantastic video! Your book has been extremely valuable in moving from theoretical understanding and basic implementations of ML/DL algorithms to real-world application and pipelines. Looking forward to more videos in future (subscribed).

  • @yelenab2366
    @yelenab2366 4 years ago +1

    This was so great. I'd love to see something like this for transformers!

  • @truliapro7112
    @truliapro7112 6 years ago

    Aurélien Géron (Author)
    - You are awesome and the best teacher in the world :)

  • @ikramnaurang
    @ikramnaurang 3 years ago

    Very clear explanation! Would love to watch your other videos 👍

  • @yousofebneddin7430
    @yousofebneddin7430 6 years ago +3

    Awesome video! Waiting for more.

  • @michaelcarlon1831
    @michaelcarlon1831 6 years ago +2

    This is super clear and extremely useful! Good work!

  • @AhmedKachkach
    @AhmedKachkach 6 years ago +1

    Thanks Aurélien!
    Superb video, well presented, and straight to the point :) Thanks for taking the effort to produce this, and making it time-efficient to watch.

  • @govindnarasimman1536
    @govindnarasimman1536 6 years ago +3

    Oh, I was craving this type of explanation.

  • @EsdrasSoutoCosta
    @EsdrasSoutoCosta 5 years ago +3

    Awesome explanation. Would be nice to have a video explanation of the Matrix Capsules with EM Routing paper. I'm currently on my second re-read but still confused about some of the concepts presented in the paper.

  • @woolfel
    @woolfel 6 years ago +1

    Excellent explanation. Much clearer than the paper, and I've read that a few times :)

  • @foobar1672
    @foobar1672 3 years ago +1

    Excellent explanation and excellent source code. Thank you!

  • @MrIvanluke
    @MrIvanluke 4 years ago

    Thanks for the very clear explanation: it made a difficult architecture seem much simpler.

  • @EmergentUniverse
    @EmergentUniverse 6 years ago +1

    Excellent! Very clear delivery.

  • @pallavichauhan2964
    @pallavichauhan2964 4 years ago

    Fantastic video on capsule networks. This is the best video I have seen. Nice explanation

  • @tercioae
    @tercioae 5 years ago +1

    Great explanation! And of course, my preferred book on Machine Learning. Thanks a lot for sharing all this information!

  • @srinivasvalekar9904
    @srinivasvalekar9904 6 years ago

    This is by far the most amazing explanation I have ever seen. Please do videos on other major papers out there. I subscribed to you right away!

  • @ameliajimenezsanchez2945
    @ameliajimenezsanchez2945 6 years ago +1

    Excellent video, thanks a lot for the clear explanation and the examples; it's a great complement to the paper. I have subscribed and I am looking forward to other paper reviews!

  • @potatooflife8603
    @potatooflife8603 6 years ago +1

    Awesome explanation, thank you!
    Looking forward to the implementation video.

  • @Chhillee
    @Chhillee 6 years ago

    Excellent video. Hard to find good ML explanations on YouTube among all the hype.

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks Horace, any particular topic you would like more information on? This is my very first video, but I plan on making more, so I'll be looking for good topics to cover.

    • @Chhillee
      @Chhillee 6 years ago +1

      I think there's a severe lack of material past the beginner's level that helps people learn more than just "how do I apply this in code". I think blogs/publications like www.inference.vc/ or distill.pub/ are on this level.
      I think the best thing to do is choose topics that you feel like you have the most insight/intuition towards. However, here's a short list of topics I think would be cool to have a video on: A3C, policy gradients (well, RL in general), generative adversarial networks, maybe some of the theoretical work on generalization/local optima (eg: why neural nets don't get stuck in bad local minima, why they generalize, etc.), variational autoencoders.
      Great work, and I'm sure you've already seen Hinton's comment on your video! :)

    • @mouduge
      @mouduge 6 years ago

      Nice suggestions, thanks. I'm a big fan of distill.pub, which I discovered through Christopher Olah's fantastic blog. I'm not sure I can reach that level (I'm just a practitioner, not a researcher), but I can always try. :)

  • @sairaamvenkatraman5998
    @sairaamvenkatraman5998 6 years ago +1

    This is so awesome! Please keep making videos.

  • @francosoloqui9557
    @francosoloqui9557 10 months ago

    Thank you, the explanation was clear, and with the code, I can understand the Capsule Networks better.

  • @luisvalesilva8931
    @luisvalesilva8931 6 years ago

    Amazing explanation. Please do make more of these videos!

  • @nateamus3920
    @nateamus3920 6 years ago

    Incredible work, Mr Géron. Beautiful explanation!

  • @TheZobot1
    @TheZobot1 6 years ago

    Wow, this was an amazingly clear explanation of CapsNets. Thanks for all of your hard work!

  • @jiawenwang5165
    @jiawenwang5165 6 years ago

    This is the most wonderful CapsNet video tutorial I've ever seen!!
    thanks a lot

  • @rantaoca491
    @rantaoca491 6 years ago

    This was a great explanation! Please make more videos on different ml topics

  • @bluegreensomething
    @bluegreensomething 6 years ago +1

    Super solid. I got here via metafilter. I am subscribed now. Thanks!

  • @animeshkarnewar3
    @animeshkarnewar3 6 years ago +10

    Why is there a need of a convolutional layer for the first layer of the CapsNet? What is essentially done is that the output feature map of the convolutional layer is converted into small vectors by reshaping and then routed to the capsules in the next layer. Why can't we consider the input image as a raw pixels' feature map and route that information directly to a layer of capsules (the convolutional capsules)?

    • @AurelienGeron
      @AurelienGeron  6 years ago +11

      Interesting question. If I understand correctly, there would be one primary capsule per pixel, and each primary capsule would simply output a vector equal to the squashed pixel value (a 3D vector for an RGB image). That might actually work, I encourage you to try! I'm guessing the authors chose to start with two convolutional layers for performance reasons. Another possible reason why they didn't choose this option might be that the routing by agreement algorithm pushes capsules to route their output to just one (or very few) capsules in the next layer. Suppose there's a black pixel in the image: it might be a part of just about any possible object, but routing by agreement will push this pixel's capsule to route its output to just one (or a few) higher-level capsules. There's a risk that the correct one won't be in the list. It seems to me that it might be a good idea to narrow down the options before routing by agreement kicks in. That's what the two convolutional layers do. That said, I'm just guessing, here: you could try your idea and publish a paper if it works well! :)

    • @animeshkarnewar3
      @animeshkarnewar3 6 years ago

      Well, firstly, thanks for such a quick reply. No, I am not talking about the pixel level. We can create patches of the image that the convolutional capsule takes as input. So it is more like a small window with a fixed receptive field, say (9 x 9). We can take all those pixels and create the high-dimensional vector for that window.
      My intuition behind doing this is quite similar to the content-based attention used in Seq2Seq models. It's like we are assigning only a small part of the image to a capsule (well, technically, to multiple capsules with different coefficients).
      As you said, a part of the image that contains an eye (for a facial dataset) could be contributing to multiple capsules in the next layers. But we don't have to worry about it: only the capsules that have a higher scalar product with this eye will receive the eye patch.
      I loved the concept of the paper; there was just this one thing that bugged me a bit: the architecture is not homogeneous. The first layer is convolutional and the subsequent layers are capsules.
      Anyway, this is still just an idea I have. I will definitely work on it. Thank you for your encouragement. Btw, what's your GitHub? I would love to follow you.
      mine -> github.com/akanimax

    • @AurelienGeron
      @AurelienGeron  6 years ago +5

      You're very welcome. Oh, right, I see what you want to do. To go from the 9x9 patch to the high-dimensional vector for that patch, would you simply flatten the patch to get an 81D vector, then squash that vector? If so, then the patch's brightness would determine the output vector's length, which represents the estimated probability in a capsule network. So the primary capsules could never be confident about the presence of a dark feature. This seems problematic. To avoid this, we would probably have to apply a transformation matrix to the 9x9 patch before we flatten and squash it, so that the brightness information ends up being a part of the orientation of the output vector (it's a pose parameter), rather than defining its length. A convolutional layer would be a convenient way to do this.
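The brightness point in this reply can be checked numerically: if you just flatten and squash a raw patch, a dim patch always yields a short output vector, i.e. a low "probability". A toy NumPy illustration (not from the video; the patch values are made up):

```python
import numpy as np

def squash(s, eps=1e-8):
    """Map a vector to the same direction with length in [0, 1)."""
    sq = np.dot(s, s)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

dark_patch = np.full(81, 0.05)   # a dim 9x9 patch, flattened to 81D
bright_patch = dark_patch * 10   # the same patch, 10x brighter

# Squashing raw pixels ties the output length (the capsule's "probability")
# to brightness, so a dark feature can never be detected with confidence:
assert np.linalg.norm(squash(dark_patch)) < np.linalg.norm(squash(bright_patch))
```

A learned transformation (e.g. a convolutional layer) applied before squashing can move brightness into the vector's orientation instead, which is the workaround the reply suggests.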

    • @animeshkarnewar3
      @animeshkarnewar3 6 years ago

      Robin Richtsfeld Thanks for the explanation. Now it is clear to me why the lower layer needs to be convolutional. Still I'd like to see what happens if we directly start with the primary caps layer instead of the convolutional layer.

    • @animeshkarnewar3
      @animeshkarnewar3 6 years ago +2

      Aurélien Géron "So the primary capsule could never be confident about the presence of a dark feature". I don't know, but I am wondering: doesn't it make the architecture more like humans? Even we can't be sure of dark features.
      Btw, thank you for your explanation. I now understand why we need the convolutional layer at the beginning. If I understand correctly, the only difference between the primary caps layer and the first convolutional layer's architectures is that the output of the primary caps layer is dynamically routed to the digitCaps layer while the output of the convolutional layer is fed as it is (with a stride of 2).
      Just a last question: why isn't the output of the convolutional layer routed to the primary caps layer? Wouldn't that make the convolutional layer another primary caps layer? I have tried hard to find an explanation of this, but couldn't. Where I am, there are no mentors. It is my sincere request that you please address this last question. I promise this is the last one :).
      Thanks!

  • @Mr842157
    @Mr842157 4 years ago

    That's the most useful video I found on this topic.

  • @samhodge1972
    @samhodge1972 6 years ago

    This is a wonderfully clear presentation, well done.

  • @TheCanon03
    @TheCanon03 6 years ago +1

    Amazing explanation. Brilliant work.

  • @sidrahliaqat7637
    @sidrahliaqat7637 6 years ago +1

    This explanation is the best one that I have found! Thank you so much for making this video. Definitely looking forward to the next one about implementation of capsule networks. Do you think maybe you could also do one on the mathematical intuition behind the part where the weights and coupling coefficients interact to compute the prediction vector and then ultimately the activity vector?

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Hi Sidrah, thanks for your kind words. I actually uploaded a video about the TensorFlow implementation of CapsNets just a few hours ago: th-cam.com/video/2Kawrd5szHE/w-d-xo.html
      I like the idea of a video about the math intuitions behind ML! In the meantime, you may want to take a look at my Jupyter notebook about Linear Algebra: nbviewer.jupyter.org/github/ageron/handson-ml/blob/master/math_linear_algebra.ipynb
      It explains how a matrix multiplication can perform all sorts of linear transformations, such as a rotation, resize, skew, and so on. Hope this helps. :)
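As a tiny taste of what that notebook covers (my own example, not taken from it): a single matrix multiplication can rotate a vector.

```python
import numpy as np

theta = np.pi / 4                     # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

point = np.array([1.0, 0.0])
rotated = R @ point                   # matrix multiplication performs the rotation
```

The same idea, with a different matrix, gives a resize or a skew, which is why the W_ij matrices in a CapsNet can encode pose transformations.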

    • @sidrahliaqat7637
      @sidrahliaqat7637 6 years ago

      Thank you... These resources are helping immensely!

  • @HadiseMgds
    @HadiseMgds 1 year ago +1

    In a word, it was great!

  • @juliano3251
    @juliano3251 2 years ago

    Clear and simple explanation. Thanks!

  • @timokarttinen3935
    @timokarttinen3935 6 years ago +2

    Thank you so much for these excellent videos!
    And I think your book is a very good companion to Andrew Ng's Coursera courses.

  • @sangwonlee2751
    @sangwonlee2751 6 years ago

    From the excellent book to the great videos... Thank you!

  • @aa-xn5hc
    @aa-xn5hc 6 years ago

    Wow.... what a brilliant explanation!!!! yes, please, more videos from you

  • @arazsharma4781
    @arazsharma4781 3 years ago

    Still slightly confused about some aspects, but got most of it! Great explanation! :D

  • @camf1991
    @camf1991 11 months ago

    Thank you for this video. It's been helpful for my capstone

  • @TheSentientCloud
    @TheSentientCloud 6 years ago

    I have a question. Pardon me if this is a stupid question, but in case my dad won't be able to answer it I will post what I asked him. This is what I sent my dad; perhaps you'd be better at explaining it:
    I have questions. When he is talking about the primary capsules to create the house and boat capsules, does he say that he uses both the rectangle capsule then the triangle capsule as the primary capsules in order to create the potential position of the other shape? What use is this when both structure the same shape? When it comes to the crowding, do you need to have trained the network to recognize a house capsule and a boat capsule to look for those first so that it becomes the more logical explanation of the image (as opposed to the inverted house with random rectangle and triangle), or will the system be able to learn what a boat capsule and house capsule is with no prior training and be able to separate the two figures from that shape?

  • @NitinKumar-xz8cz
    @NitinKumar-xz8cz 6 years ago

    This was an excellent explanation! Thanks for the video and the implementation links!

  • @luisluiscunha
    @luisluiscunha 5 years ago +1

    Thank you very much: this was very clear, and very well explained.

  • @amirkhan355
    @amirkhan355 5 years ago +2

    What a brilliant man!

  • @beizhou2488
    @beizhou2488 4 years ago +1

    Thank you for such a lucid explanation. Nowadays, it seems that capsule networks get less attention than 2 years ago. Is this true, or just my false perception?

  • @ironheadchuang
    @ironheadchuang 6 years ago +5

    Hello,
    the margin loss equation at 15:55 is T_k max(0, m_p - ||v||^2) + lambda (1 - T_k) max(0, ||v||^2 - m_m),
    but the square is outside the max functions in the paper (v2, arxiv.org/abs/1710.09829).
    Which one is correct?

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Hi, good catch, this is indeed an error in my video, the square should be outside the max. My apologies. I'll add an annotation at that point in the video. Thank you very much for spotting this error.

    • @AurelienGeron
      @AurelienGeron  6 years ago +2

      Oh nooooo! YouTube dropped video annotations, so I can't add an error message. I guess the only option is to add a warning in the video description. I'll soon upload another video going into the details of the implementation, and I'll make it clear at that point. Thanks again for catching the error.

    • @ironheadchuang
      @ironheadchuang 6 years ago +1

      It's still a good tutorial! Thank you!
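For reference, the corrected margin loss, with the square outside the max as in the paper, can be sketched like this (a NumPy sketch using the paper's constants m+ = 0.9, m− = 0.1, λ = 0.5; the function name is made up):

```python
import numpy as np

M_PLUS, M_MINUS, LAMBDA = 0.9, 0.1, 0.5  # constants from the paper

def margin_loss(v_norms, targets):
    """Margin loss with the square OUTSIDE the max, as in the paper.

    v_norms: lengths ||v_k|| of the output capsules, shape (n_classes,)
    targets: one-hot labels T_k, shape (n_classes,)
    """
    present = targets * np.maximum(0.0, M_PLUS - v_norms) ** 2
    absent = LAMBDA * (1.0 - targets) * np.maximum(0.0, v_norms - M_MINUS) ** 2
    return np.sum(present + absent)
```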

  • @lasredchris
    @lasredchris 4 years ago

    Capsule networks
    Dynamic routing between capsules
    Computer graphics - instantiation parameter - rendering
    Capsule predicts

  • @lasredchris
    @lasredchris 4 years ago

    Boat - rectangle triangle
    Capsule network - image classifier

  • @vidurwadhwa6897
    @vidurwadhwa6897 4 years ago

    Awesome video!! Each and every second is so informative!! Thanks a lot.

  • @zehaowang3677
    @zehaowang3677 4 years ago

    This is really a fantastic tutorial video and it helps me A LOT. Thank you for your sharing!

  • @yassinemarrakchi5941
    @yassinemarrakchi5941 6 years ago +1

    Thanks for the brilliant explanation

  • @FrenchIDvideos
    @FrenchIDvideos 6 years ago +1

    Hi Aurélien, this video is amazingly clear. About being less sensitive to pose estimation and scale... Humans/toddlers do learn with an integrated camera viewpoint estimator (an integrated accelerometer called the inner ear... which leads to pitch and roll) and a depth map (stereovision with 2 eyes, which can lead to quickly determining size)... and they're able to walk around and interact with the object... Don't you think these priors could help push CapsNets to the next level? For instance, when I'm looking at a boat from the sea, I know it's far away (disparity...) and I see it moving... and the horizon is not tilted because my head is standing vertically. It is much different from a boat drawing printed in a book that could be tilted because the book is tilted... (I know it's a planar representation, using disparity again, plus it's not moving). So we can let computers try to learn by themselves the extra complexity of not knowing how the pictures were taken, but maybe helping them could lead to better performance.

  • @RomualdMenuet
    @RomualdMenuet 6 years ago

    For a first video, it's an awesome one! Well played :)

  • @susdoge3767
    @susdoge3767 1 year ago +1

    I just came across this and holy shit, this is absolutely captivating!! Brilliant.

  • @EngineeringEducation
    @EngineeringEducation 6 years ago

    Excellent video!

  • @anantbhat3640
    @anantbhat3640 2 years ago

    Really informative video; it helped me a lot to learn about capsule networks. Thanks a lot man 😊

  • @Loppy2345
    @Loppy2345 6 years ago

    Thanks for the video. Good luck with your channel!

  • @zhenyueqin6910
    @zhenyueqin6910 6 years ago +1

    Thank you very much for this excellent video!